We often think of optimization with momentum as a ball rolling down a hill. This isn't wrong, but there is much more to the story.
Understanding the building blocks and design choices of graph neural networks.
What components are needed for building learning algorithms that leverage the structure and properties of graphs?
After five years, Distill will be taking a break.
Reprogramming Neural CA to exhibit novel behaviour, using adversarial attacks.
When a neural network layer is divided into multiple branches, neurons self-organize into coherent groupings.
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...
We report the existence of multimodal neurons in artificial neural networks, similar to those found in the human brain.
We present techniques for visualizing, contextualizing, and understanding neural network weights.
Reverse engineering the curve detection algorithm from InceptionV1 and reimplementing it from scratch.
A family of early-vision neurons reacting to directional transitions from high to low spatial frequency.
Neural networks naturally learn many transformed copies of the same feature, connected by symmetric weights.
With diverse environments, we can analyze, diagnose and edit deep reinforcement learning models using attribution.
Examining the design of interactive articles by synthesizing theory from disciplines such as education, journalism, and visualization.
A collection of articles and comments with the goal of understanding how to design robust and general-purpose self-organizing systems.
Training an end-to-end differentiable, self-organising cellular automaton for classifying MNIST digits.
Part one of a three-part deep dive into the curve neuron family.
By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.
How to tune hyperparameters for your machine learning model using Bayesian optimization.
An overview of all the neurons in the first five layers of InceptionV1, organized into a taxonomy of 'neuron groups.'
By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.
Differentiable Self-Organisation: Cellular Automata model of Morphogenesis.
Detailed derivations and open-source code to analyze the receptive fields of convnets.
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.
How neural networks build up their understanding of images.
Six comments from the community and responses from the original authors.
Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them, and the rich structure of this combinatorial space.
What we'd like to find out about GANs that we don't know yet.
How to turn a collection of small building blocks into a versatile tool for solving regression problems.
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition and other sequence problems.
A powerful, under-explored tool for neural network visualizations and art.
A simple and surprisingly effective family of conditioning mechanisms.