The world wide web is full of excellent resources for learning more about neuroscience, machine learning, and many other interesting topics. Unfortunately, it’s not always easy to know where to look. Here is a short list of talks, tutorials, and papers that I’ve found particularly insightful over the course of my own studies and that I would highly recommend to anyone with the time to take a look.

Talks

About a biological ring attractor network
Vivek Jayaraman (2020)
The human brain has ~100 billion neurons, the mouse brain ~100 million, and the fly brain just ~100,000. However, we’re increasingly beginning to understand how function and cognition arise from these 100,000 fly neurons, and this talk gives an awesome overview of how that understanding is emerging in the context of the Drosophila navigation system.

Working memory 2.0
Earl Miller (2020)
For someone like me who does not work on working memory, this seminar was an awesome introduction to decades of work on how prefrontal cortex helps solve working memory tasks, as well as a cool appetizer for new work looking at how the activity of neurons in PFC interacts with brain-wide signals and oscillations.

Where the wild things are - the biology of non-commensal Drosophila melanogaster in Southern Africa
Marcus Stensmyr (2018)
A thrilling tale of an impressive quest to discover why Drosophila melanogaster loves oranges despite oranges not being native to Southern Africa, where the wild Drosophilae live.

Human planning in large state spaces
Wei Ji Ma (2020)
While deep reinforcement learning has allowed artificial agents to reach unprecedented levels of performance in games such as chess and Go, relatively little is known about how humans approach such problems. Wei Ji Ma addresses this question with custom-made apps and huge data sets, and in addition to reaching cool conclusions, the talk provides a refreshingly new approach to neuroscience in the 21st century.

Tutorials

Neuromatch Academy
Neuromatch organizers and volunteers (2020)
Arguably the best resource available for learning computational neuroscience. Three weeks’ worth of material, starting from simple principles of model selection and model fitting and moving on to tutorials on dynamical systems and control, principles of deep learning & machine learning as applied to neuroscience, and much, much more.

The Good Research Code Handbook
Patrick Mineault (2021)
An excellent introduction to writing good, reproducible code in Python. It walks through how to set up a new project, write modular code, and test and document research code.
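
To make this concrete, here is a minimal sketch of the style the handbook advocates (my own toy example, not code from the handbook): a small, documented analysis function paired with a pytest-style unit test.

```python
import numpy as np


def normalize_traces(traces: np.ndarray) -> np.ndarray:
    """Z-score each row of a 2D array (e.g. one neuron's activity per row)."""
    mean = traces.mean(axis=1, keepdims=True)
    std = traces.std(axis=1, keepdims=True)
    return (traces - mean) / std


def test_normalize_traces():
    """Unit test: normalized rows should have zero mean and unit variance."""
    rng = np.random.default_rng(0)
    z = normalize_traces(rng.normal(2.0, 3.0, size=(5, 100)))
    assert np.allclose(z.mean(axis=1), 0.0)
    assert np.allclose(z.std(axis=1), 1.0)


if __name__ == "__main__":
    test_normalize_traces()
    print("test passed")
```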

The Mathematical Foundations of Policy Gradient Methods
Sham Kakade (2020)
This tutorial provides a very useful overview of the policy gradient methods that underlie much of modern reinforcement learning, and Sham Kakade does an excellent job explaining things in a way that provides intuition to accompany the equations.
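
For a concrete taste of the subject, here is a minimal REINFORCE sketch on a two-armed bandit (my own toy example, not code from the tutorial): the policy is a softmax over two logits, and we repeatedly nudge the logits along reward-weighted gradients of log π(a).

```python
import numpy as np

rng = np.random.default_rng(0)
p_reward = np.array([0.2, 0.8])  # true reward probability of each arm
theta = np.zeros(2)              # policy logits
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())      # subtract max for numerical stability
    return z / z.sum()

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                  # sample an action from the policy
    r = float(rng.random() < p_reward[a])    # Bernoulli reward
    grad_logpi = -pi                         # gradient of log pi(a) w.r.t. theta
    grad_logpi[a] += 1.0
    theta += lr * r * grad_logpi             # REINFORCE update

print("learned policy:", np.round(softmax(theta), 3))  # should favour arm 1
```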

Papers

Volitional activation of remote place representations with a hippocampal brain–machine interface
Chongxi Lai et al. (2023)
It has long been known that the firing patterns of cells in the hippocampus reflect the structure of the environment. It has also been posited that ‘offline’ hippocampal activity in the form of replays could implement a form of planning through a process of imagination. However, this and other theories rely on animals having the ability to change what is represented in hippocampus at will. In this paper, Lai et al. show that animals do have such volitional control over hippocampal activity: they trained rats on a brain–machine interface task in which the animals successfully learned to control their hippocampal activity to represent distant locations.

Motor cortex is required for flexible but not automatic motor sequences
Kevin Mizes et al. (2023)
Motor learning is an important area of interest for systems neuroscientists, but the contributions of different mammalian motor systems to motor learning and memory remain unclear. Mizes et al. show that motor cortex is necessary for learning and executing a flexible motor task that involves following a cued sequence of movements. However, motor cortex is not required for performing the same motor sequence in an ‘automatic’, overtrained setting, suggesting that a key role of motor cortex may be to provide contextual information to downstream motor circuits.

High-performance brain-to-text communication via handwriting
Francis Willett et al. (2021)
Brain-computer interfacing is a rapidly growing field that has seen great advances in both academic research and industry over the past decade. In this work, Willett et al. develop a recurrent neural network model that decodes neural activity in motor cortex directly into intended handwriting movements, which are then translated to text, bringing this technology one step closer to general use.

Prefrontal cortex as a meta-reinforcement learning system
Jane Wang, Zeb Kurth-Nelson et al. (2018)
Deep reinforcement learning is becoming increasingly important in both AI and neuroscience, and like many others I am convinced that it will be one of the keys to understanding the human brain. This paper shows how a plethora of observations in biological agents can be explained by teaching a ‘PFC-like’ system to implement task-specific reinforcement learning algorithms in its internal dynamics via a slower ‘meta-reinforcement learning’ loop across tasks.

What grid cells convey about rat location
Ila Fiete et al. (2008)
Grid cells have long been considered one of the most baffling findings in modern systems neuroscience. In this work, Ila Fiete and colleagues propose that the grid cell population implements a ‘modulo code’ for location, in which each grid module represents position modulo its spatial period, and highlight several important features of such a code, including exponential capacity and the ability to update individual modules using only local information. This provides key insights into why the brain might use periodic codes and gives intuition for why so many artificial agents have been found to learn grid-like codes in more recent computational studies.
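
As a toy illustration of the capacity argument (my own sketch with made-up periods, not numbers from the paper): a handful of modules, each storing position modulo its own period, can jointly distinguish a number of positions equal to the product of the periods.

```python
import math

periods = [31, 37, 41, 43]            # coprime 'grid periods' (made up)
capacity = math.prod(periods)         # unique positions = product of periods

def encode(x):
    """Each module reports position modulo its period (a local update)."""
    return [x % p for p in periods]

def decode(phases):
    """Brute-force recovery of position from the per-module phases."""
    return next(x for x in range(capacity) if encode(x) == phases)

x = 1_234_567
assert decode(encode(x)) == x
print(f"{len(periods)} modules distinguish {capacity:,} positions")
```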

Multimodal Learning with Deep Boltzmann Machines
Nitish Srivastava and Ruslan Salakhutdinov (2014)
Multisensory alignment will probably turn out to be important for organizing neural circuits in early development, and this is a cool paper illustrating how combining two sensory modalities can yield a powerful generative model for inference in a machine learning setting.

Generation of stable heading representations in diverse visual scenes
Sung Soo Kim et al. (2019)
This paper uses two-photon imaging to investigate how visual scenes are mapped onto head direction circuits in Drosophila. It combines awesome experimental work with cool computational modelling that illustrates how the findings relate to early theoretical work on head direction circuits integrating visual information.

A temporal basis for predicting the sensory consequences of motor commands in an electric fish
Ann Kennedy et al. (2014)
Electric fish sense the weak electric signals produced by their prey, but they also generate their own comparatively large electric discharges. In this awesome paper, Ann Kennedy and colleagues show how the fish learns to cancel the sensory consequences of its self-generated discharge using a set of temporal ‘basis functions’ provided by the activity of so-called granule cells.
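
The core idea can be caricatured in a few lines (my own sketch, not code or parameters from the paper): anti-Hebbian updates shape a weighted sum of granule-cell-like temporal basis functions into a ‘negative image’ that cancels the predictable self-generated signal.

```python
import numpy as np

t = np.linspace(0, 1, 200)                        # time after the discharge
signal = np.exp(-((t - 0.3) ** 2) / 0.01)         # predictable self-generated input
centers = np.linspace(0, 1, 20)
basis = np.exp(-((t[:, None] - centers) ** 2) / 0.005)  # temporal basis functions
w = np.zeros(20)                                  # weights onto the output cell

for _ in range(500):
    sensed = signal + basis @ w                   # input plus learned negative image
    w -= 0.02 * basis.T @ sensed                  # anti-Hebbian weight update

print("residual power:", np.mean((signal + basis @ w) ** 2))  # should be tiny
```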

Neural circuits for evidence accumulation and decision making in larval zebrafish
Armin Bahl and Florian Engert (2019)
Evidence accumulation, decision making, and sensory-driven movement are ubiquitous across organisms. In this awesome work, Armin Bahl uses light-sheet imaging of the zebrafish brain to elucidate how these algorithms are implemented at a mechanistic level in the context of the fish optomotor response.

The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep
Rishidev Chaudhuri et al. (2019)
This paper compares the head-direction circuit of mice across waking and sleep, using ‘spline parameterization for unsupervised decoding’ (SPUD) to show that its ring topology is preserved across brain states. This work has inspired a lot of our own ideas on unsupervised learning in non-Euclidean spaces using Bayesian non-parametrics.

Accurate angular integration with only a handful of neurons
Marcella Noorman et al. (2022)
Most work in theoretical neuroscience considers either a pair of neurons or an effectively infinite population. However, many computations are performed by just a handful of neurons, especially in invertebrates, whose brains are much smaller than those of mammals. This paper generalizes canonical work on ring attractors from infinite populations to as few as four neurons, showing analytically that accurate angular integration is still possible with careful tuning of parameters.
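
For intuition, here is a toy ring attractor with just eight tanh units (my own cartoon with smooth saturating neurons, not the threshold-linear networks analyzed in the paper): a transient cue creates an activity bump that persists after the cue is removed.

```python
import numpy as np

N, J, dt = 8, 1.5, 0.1
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)        # preferred headings
W = (2 * J / N) * np.cos(theta[:, None] - theta[None, :])   # cosine connectivity

r = np.zeros(N)
cue = 0.5 * np.cos(theta - theta[2])      # transient input toward heading theta[2]
for step in range(1000):
    inp = cue if step < 100 else 0.0      # remove the cue after 100 steps
    r += dt * (-r + np.tanh(W @ r + inp))

# population-vector decode of the stored heading
decoded = np.arctan2(np.sum(r * np.sin(theta)), np.sum(r * np.cos(theta)))
print(f"cue at {theta[2]:.2f} rad, decoded heading {decoded % (2 * np.pi):.2f} rad")
```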

Rationally engineered Cas9 nucleases with improved specificity
Ian Slaymaker et al. (2016)
A cool example of how an understanding of chemistry and protein structure can be used to alter the functional properties of enzymes - in this case the Cas9 protein, with use cases spanning basic science and medicine.