NNCP: Lossless Data Compression with Neural Networks
(bellard.org)
NNCP is an experiment to build a practical lossless data compressor with neural networks.
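Compressors in this family couple a neural next-symbol predictor with an entropy coder, so the compressed size is essentially the model's cross-entropy on the data. A minimal, model-agnostic sketch of that principle (illustrative only, not NNCP's actual implementation):

```python
import math

# Toy illustration: a lossless neural compressor pairs a predictive model with
# an entropy coder. If the model assigns probability p to the symbol that
# actually occurs, an ideal coder spends -log2(p) bits on it, so the total
# compressed size is the model's cross-entropy on the data.

def ideal_code_length_bits(symbols, predict_probs):
    """predict_probs(prefix) -> dict mapping each possible next symbol to a probability."""
    total_bits = 0.0
    for i, sym in enumerate(symbols):
        p = predict_probs(symbols[:i]).get(sym, 1e-9)  # model's prediction for the true symbol
        total_bits += -math.log2(p)
    return total_bits

# A (hypothetical) order-0 model: uniform over a two-letter alphabet.
alphabet = "ab"
uniform = lambda prefix: {c: 1.0 / len(alphabet) for c in alphabet}
print(ideal_code_length_bits("abba", uniform))  # 4 symbols * 1 bit = 4.0 bits
```

A better model concentrates probability on the symbols that actually appear, which directly lowers this bit count; that is the lever NNCP pulls with its neural model.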
The Structure of Neural Embeddings
(seanpedersen.github.io)
A small collection of insights on the structure of embeddings (latent spaces) produced by deep neural networks.
A Gentle Introduction to Graph Neural Networks (2021)
(distill.pub)
Neural networks have been adapted to leverage the structure and properties of graphs. We explore the components needed for building a graph neural network - and motivate the design choices behind them.
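The core building block the article motivates is a message-passing layer. A minimal sketch (one of many possible variants, not the article's exact formulation): each node averages its neighbors' features and updates itself with a shared learned map.

```python
import numpy as np

# Minimal message-passing layer: aggregate neighbor features (here: mean),
# concatenate the message with the node's own features, and apply a shared
# learned linear map plus a nonlinearity.

def gnn_layer(node_feats, adjacency, weight):
    """node_feats: (N, F); adjacency: (N, N) 0/1 matrix; weight: (2*F, F_out)."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = (adjacency @ node_feats) / deg         # aggregate messages from neighbors
    combined = np.concatenate([node_feats, neighbor_mean], axis=1)
    return np.maximum(combined @ weight, 0.0)              # node update with ReLU

# Toy graph: 3 nodes in a path 0-1-2, 4-dim features, 6-dim output.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(8, 6))
print(gnn_layer(X, A, W).shape)  # (3, 6)
```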
No More Adam: Learning Rate Scaling at Initialization Is All You Need
(arxiv.org)
In this work, we question the necessity of adaptive gradient methods for training deep neural networks.
Neuroevolution of augmenting topologies (NEAT algorithm)
(wikipedia.org)
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks (a neuroevolution technique) developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin.
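The "augmenting topologies" part refers to structural mutations that grow the network during evolution. A bare-bones sketch of the two classic mutations (innovation numbers, speciation, and crossover from the full algorithm are omitted):

```python
import random

# NEAT-style structural mutations on a genome represented as a list of node ids
# plus weighted, directed connections.

def mutate_add_connection(nodes, connections):
    """Connect two nodes that are not yet connected, with a random weight."""
    a, b = random.sample(nodes, 2)
    if not any(c["in"] == a and c["out"] == b for c in connections):
        connections.append({"in": a, "out": b, "weight": random.uniform(-1, 1), "enabled": True})

def mutate_add_node(nodes, connections, new_id):
    """Split an existing connection: disable it and route through a new node."""
    conn = random.choice(connections)
    conn["enabled"] = False
    nodes.append(new_id)
    # NEAT convention: incoming weight 1.0, outgoing weight inherits the old one.
    connections.append({"in": conn["in"], "out": new_id, "weight": 1.0, "enabled": True})
    connections.append({"in": new_id, "out": conn["out"], "weight": conn["weight"], "enabled": True})

nodes = [0, 1, 2]                                            # e.g. two inputs, one output
connections = [{"in": 0, "out": 2, "weight": 0.5, "enabled": True}]
mutate_add_node(nodes, connections, new_id=3)
mutate_add_connection(nodes, connections)
print(len(nodes), len(connections))
```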
Inferring neural activity before plasticity for learning beyond backpropagation
(nature.com)
For both humans and machines, the essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in the output, a challenge known as ‘credit assignment’.
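As a reference point (the paper itself argues for learning beyond this), backpropagation solves credit assignment explicitly with the chain rule: the blame assigned to a weight \(w_{ij}^{(l)}\) is

\[
\frac{\partial \mathcal{L}}{\partial w_{ij}^{(l)}} = \delta_j^{(l)}\, a_i^{(l-1)},
\qquad
\delta_j^{(l)} = \Bigl(\sum_k \delta_k^{(l+1)} w_{jk}^{(l+1)}\Bigr)\,\sigma'\!\bigl(z_j^{(l)}\bigr),
\]

where \(a_i^{(l-1)}\) is the presynaptic activation and \(\delta_j^{(l)}\) the error signal propagated backward. The paper, as its title suggests, instead infers what the neural activity should have been before changing the weights.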
Quark: Real-Time, High-Resolution, and General Neural View Synthesis
(quark-3d.github.io)
We present a novel neural algorithm for performing high-quality, high-resolution, real-time novel view synthesis.
Bayesian Neural Networks
(cs.toronto.edu)
Bayesian inference allows us to learn a probability distribution over possible neural networks. We can approximately solve inference with a simple modification to standard neural network tools. The resulting algorithm mitigates overfitting, enables learning from small datasets, and tells us how uncertain our predictions are.
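One common realization of that "simple modification" (illustrative; not necessarily the construction in the linked notes) is Monte Carlo dropout: keep dropout active at prediction time and read off uncertainty from the spread of repeated stochastic forward passes.

```python
import numpy as np

# Monte Carlo dropout as a stand-in for approximate Bayesian inference.
# Dropout stays active at prediction time, so each forward pass samples a
# slightly different network; the sample mean is the prediction and the
# sample spread is the predictive uncertainty.

rng = np.random.default_rng(0)

def forward(x, W1, W2, drop_p=0.5):
    h = np.maximum(x @ W1, 0.0)                    # hidden layer with ReLU
    mask = rng.random(h.shape) > drop_p            # dropout kept on at test time
    h = h * mask / (1.0 - drop_p)
    return h @ W2

W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))
x = rng.normal(size=(1, 4))
samples = np.stack([forward(x, W1, W2) for _ in range(200)])
print("prediction:", float(samples.mean()), "uncertainty (std):", float(samples.std()))
```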
Physics-informed Shadowgraph Network: End-to-end Density Field Reconstruction
(arxiv.org)
This study presents a novel approach for quantitatively reconstructing density fields from shadowgraph images using physics-informed neural networks.
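The generic physics-informed objective behind such methods (the paper's specific governing equations are not given in this summary) combines a data-fit term with a PDE-residual penalty evaluated by automatic differentiation:

\[
\mathcal{L}(\theta) = \frac{1}{N}\sum_{i=1}^{N}\bigl\|u_\theta(x_i) - y_i\bigr\|^2 \;+\; \lambda\,\frac{1}{M}\sum_{j=1}^{M}\bigl\|\mathcal{F}[u_\theta](x_j)\bigr\|^2,
\]

where \(u_\theta\) is the network, \(y_i\) are the measurements (here, shadowgraph intensities), and \(\mathcal{F}\) is the differential operator encoding the physics.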
SharpNEAT – evolving NN topologies and weights with a genetic algorithm
(sourceforge.io)
Neuroevolution of Augmenting Topologies (NEAT) is an evolutionary algorithm for evolving artificial neural networks.
It all started with a perceptron
(medium.com)
In homage to John Hopfield and Geoffrey Hinton, Nobel Prize winners for their “fundamental discoveries and inventions that made machine learning and artificial neural networks possible,” I propose to explore the foundations of connectionist AI.
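For the baseline those foundations start from, here is a minimal Rosenblatt-style perceptron (illustrative; the article's own walk-through may differ): predict with a thresholded weighted sum and nudge the weights toward misclassified points.

```python
import numpy as np

# Classic perceptron learning rule: whenever a point is misclassified,
# move the weight vector toward (or away from) it until the data separate.

def train_perceptron(X, y, epochs=20, lr=1.0):
    """X: (N, D) inputs, y: (N,) labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:       # misclassified (or on the boundary)
                w += lr * yi * xi            # perceptron update
                b += lr * yi
    return w, b

# Toy linearly separable data: logical AND of two bits, with ±1 labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # expected: [-1, -1, -1, 1]
```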
Implementing neural networks on the "3 cent" 8-bit microcontroller
(wordpress.com)
Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset?
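A rough illustration of the compression knob being pushed in these posts (not the author's exact pipeline): symmetric k-bit weight quantization, which shrinks storage from 32 bits per weight to k bits at the cost of rounding error.

```python
import numpy as np

# Symmetric k-bit quantization: map float weights onto integer levels in
# [-qmax, qmax], storing only the integers plus one float scale per tensor.

def quantize(weights, bits):
    qmax = 2 ** (bits - 1) - 1                        # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale               # what inference multiplies by

w = np.random.default_rng(0).normal(scale=0.3, size=(8, 8)).astype(np.float32)
for bits in (8, 4, 2):
    q, s = quantize(w, bits)
    err = np.abs(dequantize(q, s) - w).mean()
    print(f"{bits}-bit: mean abs error {err:.4f}")
```

Quantization-aware training goes further by simulating this rounding during training so the network learns weights that survive it; the sketch above only shows the storage/accuracy trade-off itself.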
Neural Networks (MNIST inference) on the "3-cent" Microcontroller
(wordpress.com)
Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset?
New Algorithm Enables Neural Networks to Learn Continuously
(caltech.edu)
Neural networks have a remarkable ability to learn specific tasks, such as identifying handwritten digits. However, these models often experience "catastrophic forgetting" when taught additional tasks: They can successfully learn the new assignments, but "forget" how to complete the original. For many artificial neural networks, like those that guide self-driving cars, learning additional tasks thus requires being fully reprogrammed.
Solving mazes with neural cellular automata (2021)
(umu1729.github.io)
Interactive demonstration of the Neural Cellular Maze Solver. This cellular automaton is trained to output the shortest path between the two endpoints. You can interactively edit the maze input by clicking or tapping with selected maze cell types (Wall, Road, Endpoint). Each cell's state is stochastically updated based on its own state and the states of its four neighboring cells.
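A bare-bones sketch of such an update rule (illustrative; the demo's trained network, maze encoding, and stochastic update schedule are more involved): every cell applies the same small network to its own state and its four neighbors, and iterating the step lets information propagate across the grid.

```python
import numpy as np

# Neural cellular automaton step: each cell reads its own state and its four
# neighbors (toroidal boundary via np.roll, for simplicity) and applies one
# shared learned map; all cells are updated in parallel.

def ca_step(grid, W, b):
    """grid: (H, W_, C) cell states; W: (5*C, C), b: (C,) shared parameters."""
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    neighborhood = np.concatenate([grid, up, down, left, right], axis=-1)  # (H, W_, 5*C)
    return np.tanh(neighborhood @ W + b)              # same rule applied at every cell

rng = np.random.default_rng(0)
C = 4
grid = rng.normal(size=(8, 8, C))
W, b = rng.normal(size=(5 * C, C)) * 0.1, np.zeros(C)
for _ in range(10):                                    # iterate the local rule
    grid = ca_step(grid, W, b)
print(grid.shape)
```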
Implementing Neural Networks on the tiniest "3 cent" 8-bit Microcontroller
(wordpress.com)
Buoyed by the surprisingly good performance of neural networks with quantization-aware training on the CH32V003, I wondered how far this can be pushed. How much can we compress a neural network while still achieving good test accuracy on the MNIST dataset?
Kolmogorov-Arnold networks may make neural networks more understandable
(quantamagazine.org)
By tapping into a decades-old mathematical principle, researchers are hoping that Kolmogorov-Arnold networks will facilitate scientific discovery.
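The decades-old principle referred to is the Kolmogorov-Arnold representation theorem: any continuous function of n variables on a bounded domain can be written as a finite composition of continuous one-variable functions and addition,

\[
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\Bigl(\sum_{p=1}^{n} \phi_{q,p}(x_p)\Bigr).
\]

Kolmogorov-Arnold networks lean on this by placing learnable univariate functions on the network's edges rather than fixed activations on its nodes, which is what makes the learned pieces easier to inspect.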