Spikes, Stats, and Source Code

Hacking with the Linderman Lab

We get together on Friday afternoons to write code, build models, explore datasets, and learn new things. Sometimes we go the extra mile and publish our Colab notebooks here. Follow along with us, or better yet, make a copy and start hacking too!

Weighing the Evidence in Sharp Wave Ripples

By Scott Linderman, January 24, 2022

Krause and Drugowitsch (2022) presented a novel approach to decoding and classifying sharp-wave ripples using state space models and Bayesian model comparison. We implemented a simple version of their method using our SSM package and applied it to some synthetic data. Spoiler alert: their method works really well!
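
The heart of the method is to score each candidate event under a few competing state space models and compare their marginal likelihoods. As a standalone illustration of that idea (a toy sketch in plain numpy/scipy, not the notebook's code, which uses the SSM package), here's a Kalman-filter computation of the exact log marginal likelihood under two hypothetical random-walk models, a "stationary" one and a "moving" one; all names are ours:

```python
import numpy as np
from scipy.stats import norm

def log_marginal_likelihood(y, F, Q, R, m0, P0):
    """Exact log p(y) for a 1D linear-Gaussian state space model,
    x_t = F x_{t-1} + N(0, Q),  y_t = x_t + N(0, R), via the Kalman filter."""
    m, P, ll = m0, P0, 0.0
    for t, yt in enumerate(y):
        if t > 0:
            m, P = F * m, F * F * P + Q      # predict one step ahead
        S = P + R                            # predictive variance of y_t
        ll += norm.logpdf(yt, loc=m, scale=np.sqrt(S))
        K = P / S                            # Kalman gain
        m, P = m + K * (yt - m), (1.0 - K) * P
    return ll

rng = np.random.default_rng(0)
T, R = 100, 0.1
x = np.cumsum(rng.normal(0.0, 1.0, size=T))   # a latent path that really moves
y = x + rng.normal(0.0, np.sqrt(R), size=T)

ll_still = log_marginal_likelihood(y, F=1.0, Q=1e-4, R=R, m0=0.0, P0=1.0)
ll_moving = log_marginal_likelihood(y, F=1.0, Q=1.0, R=R, m0=0.0, P0=1.0)
print("log Bayes factor (moving vs. still):", ll_moving - ll_still)
```

A large positive log Bayes factor says the data are better explained by the moving model; that's the same kind of evidence-weighing the notebook carries out with its state space models.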


Bayesian inference with normal inverse Wishart priors and natural parameters

By Scott Linderman, November 18, 2021

Building on the post below, we show how to do Bayesian inference in a multivariate normal model using the normal inverse Wishart (NIW) prior and its natural exponential family form.
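
The payoff of the natural-parameter view is that conditioning on data is just addition: the prior acts like a set of pseudo-observations, and the posterior's natural parameters are the prior's plus the data's sufficient statistics. Here is a minimal numpy sketch of that update under the standard NIW parameterization (function and variable names are ours, not the notebook's):

```python
import numpy as np

def niw_to_natural(mu0, kappa0, nu0, Psi0):
    """Standard NIW parameters -> pseudo-observation ("natural") form."""
    return dict(h=kappa0 * mu0,                        # weighted mean
                kappa=kappa0,                          # pseudo-count for mu
                S=Psi0 + kappa0 * np.outer(mu0, mu0),  # scatter-like statistic
                nu=nu0)                                # pseudo-count for Sigma

def natural_to_niw(eta):
    """Invert niw_to_natural."""
    mu = eta["h"] / eta["kappa"]
    Psi = eta["S"] - eta["kappa"] * np.outer(mu, mu)
    return mu, eta["kappa"], eta["nu"], Psi

def condition_on(eta, X):
    """Conditioning on data just adds the sufficient statistics of X."""
    n = len(X)
    return dict(h=eta["h"] + X.sum(axis=0),
                kappa=eta["kappa"] + n,
                S=eta["S"] + X.T @ X,
                nu=eta["nu"] + n)

rng = np.random.default_rng(0)
X = rng.normal(loc=[1.0, -1.0], scale=0.5, size=(100, 2))
eta = niw_to_natural(np.zeros(2), kappa0=1.0, nu0=4.0, Psi0=np.eye(2))
mu_n, kappa_n, nu_n, Psi_n = natural_to_niw(condition_on(eta, X))
print(mu_n)   # posterior mean of mu, close to [1, -1]
```

Round-tripping through the natural form recovers exactly the textbook NIW posterior updates for (mu_n, kappa_n, nu_n, Psi_n), but the bookkeeping collapses to a single addition.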


Implementing a Normal Inverse Wishart Distribution with TensorFlow Probability

By Scott Linderman, October 27, 2021

The normal inverse Wishart (NIW) distribution is a basic building block for Bayesian models. Unfortunately, it's not one of TensorFlow Probability's built-in distributions, and assembling one from TFP's building blocks was a bit tricky. This notebook shows my solution.
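
For a taste of what makes this tricky: TFP ships a Wishart but (at least as of the post) no inverse Wishart, so one workaround (our sketch here, not necessarily the notebook's solution) is to place the Wishart on the precision Lambda = inv(Sigma) and build the equivalent normal-Wishart joint with tfd.JointDistributionNamed:

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

def normal_wishart(mu0, kappa0, nu0, Psi0):
    """Lambda ~ Wishart(nu0, inv(Psi0)),  mu | Lambda ~ N(mu0, inv(kappa0 * Lambda)).

    Equivalent to an NIW on (Sigma, mu) via Sigma = inv(Lambda).
    This helper is our illustration, not the notebook's code.
    """
    scale_tril = np.linalg.cholesky(np.linalg.inv(Psi0)).astype(np.float32)
    return tfd.JointDistributionNamed(dict(
        Lambda=tfd.WishartTriL(df=nu0, scale_tril=scale_tril),
        # The lambda's argument name must match the dict key it depends on.
        mu=lambda Lambda: tfd.MultivariateNormalTriL(
            loc=mu0,
            scale_tril=tf.linalg.cholesky(tf.linalg.inv(kappa0 * Lambda))),
    ))

dist = normal_wishart(mu0=np.zeros(2, np.float32), kappa0=1.0,
                      nu0=4.0, Psi0=np.eye(2))
sample = dist.sample()          # dict with keys "Lambda" and "mu"
print(dist.log_prob(sample))
```

The notebook tackles the NIW parameterization itself, which is where the trickiness comes in; the sketch above just shows the general pattern of wiring conditional dependencies through a JointDistribution.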


Structured Variational Autoencoders with TensorFlow Probability and JAX!

By Yixiu Zhao, November 13, 2020

Structured VAEs (Johnson et al., 2016) combine probabilistic graphical models (PGMs) and neural networks in one deep generative model, but they're pretty tricky to implement. We show how to build them using TensorFlow Probability and JAX!
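
The key trick in Johnson et al. is that the recognition network doesn't output a posterior directly; it outputs conjugate Gaussian potentials in natural-parameter form, which combine with the graphical model prior in closed form. Here's a toy JAX sketch of that step, with a diagonal Gaussian potential and a standard normal prior standing in for the real structured prior (all names and shapes here are our invention):

```python
import jax
import jax.numpy as jnp

def recognition_potential(params, x):
    """Map a data point to diagonal Gaussian natural parameters (J, h)."""
    hidden = jnp.tanh(params["W1"] @ x + params["b1"])
    J = jax.nn.softplus(params["WJ"] @ hidden)   # precision must be positive
    h = params["Wh"] @ hidden
    return J, h

def posterior_sample(key, J_lik, h_lik, J_prior, h_prior):
    """Natural parameters of Gaussian potentials add; then reparameterize."""
    J, h = J_lik + J_prior, h_lik + h_prior
    mean, var = h / J, 1.0 / J
    z = mean + jnp.sqrt(var) * jax.random.normal(key, mean.shape)
    return z, mean, var

key = jax.random.PRNGKey(0)
D_x, D_h, D_z = 5, 8, 2
keys = jax.random.split(key, 5)
params = dict(W1=jax.random.normal(keys[0], (D_h, D_x)) * 0.1,
              b1=jnp.zeros(D_h),
              WJ=jax.random.normal(keys[1], (D_z, D_h)) * 0.1,
              Wh=jax.random.normal(keys[2], (D_z, D_h)) * 0.1)

x = jax.random.normal(keys[3], (D_x,))
J_lik, h_lik = recognition_potential(params, x)
z, mean, var = posterior_sample(keys[4], J_lik, h_lik,
                                J_prior=jnp.ones(D_z),   # standard normal prior
                                h_prior=jnp.zeros(D_z))
print(z)
```

In the full SVAE the prior potentials come from the PGM (e.g., a linear dynamical system) and the combination runs inside a message-passing routine; keeping the prior a standard normal here makes the conjugate update a one-liner.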