Miscellaneous notes

This page contains a collection of notes on various topics that are more or less related to computational neuroscience and machine learning. Some of these can be considered ‘original research’ in a preliminary or non-publishable form, while others are simply notes on and summaries of existing research.

Stochastic variational Gaussian process regression

In GP regression, we are often limited by the cubic scaling of exact inference and hyperparameter optimization with the number of data points. Several approaches have therefore been developed to mitigate this cost and scale GP regression to larger datasets. A popular one is the stochastic variational ‘SVGP’ method developed by James Hensman and colleagues, which combines a set of inducing points with minibatched optimization of an evidence lower bound. In this note, we provide a brief overview of the mathematical underpinnings of this method.
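
As a flavour of what this looks like in practice, here is a minimal sketch of SVGP regression written with the GPyTorch library (the note itself derives the maths and does not assume any particular implementation; the toy data, number of inducing points, and hyperparameters below are arbitrary choices for illustration):

```python
import torch
import gpytorch

class SVGP(gpytorch.models.ApproximateGP):
    """Sparse variational GP with a Gaussian q(u) over M inducing points."""
    def __init__(self, inducing_points):
        q_u = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, q_u, learn_inducing_locations=True)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ZeroMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# toy 1D dataset and minibatch training against the (uncollapsed) ELBO
X = torch.linspace(0, 1, 500).unsqueeze(-1)
y = torch.sin(10 * X.squeeze()) + 0.1 * torch.randn(500)
model = SVGP(inducing_points=X[::25].clone())  # 20 inducing points
likelihood = gpytorch.likelihoods.GaussianLikelihood()
elbo = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=y.numel())
opt = torch.optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.01)
model.train()
likelihood.train()
for _ in range(500):
    idx = torch.randint(0, 500, (64,))   # random minibatch of 64 points
    opt.zero_grad()
    loss = -elbo(model(X[idx]), y[idx])  # the ELBO decomposes over data points
    loss.backward()
    opt.step()
```

The two ingredients that make this scale are that inference costs O(M^3) in the number of inducing points M (chosen much smaller than N), and that the ELBO is a sum over data points and can therefore be estimated from minibatches.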

Autoregressive priors on non-Euclidean manifolds

In Jensen et al. (2020), we developed a set of latent variable models with non-Euclidean latent spaces. One shortcoming of this approach compared to standard Euclidean methods such as GPFA or LFADS is that it is non-trivial to build in inductive biases of smoothness or continuity across time. At Cosyne 2021, we presented a method to overcome this challenge based on autoregressive processes on non-Euclidean manifolds, which we also put on bioRxiv as a short note (Jensen, Liu & Kao et al., 2022). Here, we provide a more mathematical description of the approach, as well as a generalization to higher-order processes in contrast to the first-order ‘Brownian’ process presented at Cosyne.
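
To make the first-order case concrete, here is an illustrative sketch of a ‘Brownian’ prior on the circle S^1, where the exponential map reduces to addition modulo 2*pi (the function name and parameterization are hypothetical; the note treats general manifolds and higher-order processes):

```python
import numpy as np

def brownian_circle(T, sigma=0.1, x0=0.0, rng=None):
    """Sample a discretized Brownian prior on the circle S^1: each step draws
    Gaussian noise in the tangent space and maps it back to the manifold,
    which for S^1 is just addition followed by wrapping to [-pi, pi)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(T)
    x[0] = x0
    for t in range(1, T):
        step = sigma * rng.standard_normal()                       # tangent-space increment
        x[t] = np.mod(x[t - 1] + step + np.pi, 2 * np.pi) - np.pi  # 'exponential map' on S^1
    return x

trajectory = brownian_circle(T=1000, sigma=0.05)  # continuous latent trajectory on the circle
```

Loosely speaking, a higher-order process would additionally carry something like a tangent-space velocity across time steps rather than taking independent increments.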

Policy gradient methods

Policy gradient methods are commonly used to train deep neural networks in a reinforcement learning setting, and we also use this approach to train our RL agents in Jensen et al. (2023). In this note, we provide a brief overview of the reinforcement learning problem setting and then derive some simple policy gradient methods, including REINFORCE and actor-critic learning.
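
As a minimal illustration, the following sketch computes the REINFORCE surrogate loss for a single episode, whose gradient is the Monte Carlo policy gradient estimate (the function is hypothetical, and the return standardization is a common variance-reduction heuristic rather than part of the derivation):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Surrogate loss whose gradient is -sum_t G_t * grad log pi(a_t | s_t).
    log_probs: list of scalar tensors log pi(a_t | s_t) from the policy network.
    rewards:   list of scalar rewards r_t collected in one episode."""
    returns, G = [], 0.0
    for r in reversed(rewards):        # discounted returns G_t, accumulated backwards
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(returns[::-1])
    # standardize the returns: a simple baseline-like variance-reduction trick
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```

Calling .backward() on this loss accumulates the policy gradient in the parameters of whatever network produced log_probs; an actor-critic method instead subtracts a learned value baseline from the returns (and often bootstraps them from the value function).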

Supervised manifold GPLVMs

In Jensen et al. (2020), we developed a new method for latent variable modeling on non-Euclidean manifolds. However, another common problem is to perform supervised learning in such non-Euclidean spaces. This is desirable, for example, if we want to fit a model that predicts head direction from neural data and then apply the model during periods without labelled data, such as sleep or mental processing. In this note, we build on our mGPLVM work to develop a GP-based model for supervised learning from neural data in non-Euclidean settings and demonstrate its utility over common methods from the neuroscience literature.
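
One ingredient of such a model is a GP whose kernel is defined directly on the manifold. As a hypothetical sketch (all names and numbers here are illustrative, and the model in the note is more general), one can construct a valid kernel on the circle by applying the squared-exponential kernel to the embedding of angles in R^2, and use it to regress a neuron’s activity on head direction:

```python
import numpy as np

def circle_rbf(a, b, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel on S^1: since ||e(a) - e(b)||^2 = 2(1 - cos(a - b))
    for the embedding e(theta) = (cos theta, sin theta), this is positive definite."""
    d2 = 2.0 * (1.0 - np.cos(a[:, None] - b[None, :]))
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, 100)                           # labelled head directions
y = np.exp(np.cos(theta - 1.0)) + 0.1 * rng.standard_normal(100)  # one neuron's activity
K = circle_rbf(theta, theta) + 1e-2 * np.eye(100)                 # kernel matrix plus noise variance
theta_test = np.linspace(-np.pi, np.pi, 200)
tuning = circle_rbf(theta_test, theta) @ np.linalg.solve(K, y)    # posterior mean tuning curve
```

Decoding head direction from unlabelled activity then amounts to inverting this generative model, for example by maximizing the likelihood of the observed activity over the circle.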

Colab tutorial on Gaussian processes

Colab tutorial on Bayesian GPFA

Colab tutorial on supervised manifold GPLVMs

During the first two years of my PhD, I spent a substantial amount of time developing latent variable models for neuroscience, mostly drawing on the ‘Gaussian process’ toolbox. Since then, I have given various tutorials on Gaussian processes and the supervised & latent variable models that can be built from them. Several of these tutorials included Google Colab notebooks to allow participants to explore the models on their own, and some of those notebooks are included here.