📰 Distill.pub
Articles from Distill.pub · 49 articles · Updated every 3 hours
Computing Receptive Fields of Convolutional Neural Networks · 6y ago
Detailed derivations and open-source code to analyze the receptive fields of convnets.
The Paths Perspective on Value Learning · 6y ago
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features' · 6y ago
Six comments from the community and responses from the original authors.
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness' · 6y ago
The main hypothesis in Ilyas et al. (2019) happens to be a special case of a more general principle that is commonly accepted in the robustness-to-distributional-shift literature.
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Robust Feature Leakage · 6y ago
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Two Examples of Useful, Non-Robust Features · 6y ago
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarially Robust Neural Style Transfer · 6y ago
An experiment showing adversarial robustness makes neural style transfer work on a non-VGG architecture.
A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Learning from Incorrectly Labeled Data · 6y ago
Section 3.2 of Ilyas et al. (2019) shows that training a model on only adversarial errors leads to non-trivial generalization on the original test set.
Open Questions about Generative Adversarial Networks · 6y ago
What we'd like to find out about GANs that we don't know yet.
A Visual Exploration of Gaussian Processes · 6y ago
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
Visualizing memorization in RNNs · 7y ago
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.
Activation Atlas · 7y ago
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned.
AI Safety Needs Social Scientists · 7y ago
If we want to train AI to do what humans want, we need to study humans.
Differentiable Image Parameterizations · 7y ago
A powerful, under-explored tool for neural network visualizations and art.
Feature-wise transformations · 7y ago
A simple and surprisingly effective family of conditioning mechanisms.
The Building Blocks of Interpretability · 8y ago
Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them -- and the rich structure of this combinatorial space.
Using Artificial Intelligence to Augment Human Intelligence · 8y ago
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.
Sequence Modeling with CTC · 8y ago
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
Feature Visualization · 8y ago
How neural networks build up their understanding of images.
Why Momentum Really Works · 8y ago
We often think of optimization with momentum as a ball rolling down a hill. This isn't wrong, but there is much more to the story.
Research Debt · 9y ago
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...
Experiments in Handwriting with a Neural Network · 9y ago
Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.
Deconvolution and Checkerboard Artifacts · 9y ago
When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts.
How to Use t-SNE Effectively · 9y ago
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading.