Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

Dec 6, 2021



With meaningful and simplified representations of neural activity, we are often afforded insight into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). By creating these views through dropping out neurons and jittering samples in time, we essentially ask the network to find a representation that maintains both temporal consistency and invariance to the particular neurons used to represent a specific brain state. We then couple this with a novel block-swapping latent augmentation and a generative model to simulate new high-dimensional neural activities. Through evaluations on both synthetic and real neural datasets from hundreds of neurons in different primate brains, we show that by combining our self-supervised alignment loss with a generative model, we can build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
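To make the two augmentations concrete, here is a minimal numpy sketch of the ideas described above: creating a view of a brain-state window by dropping out neurons and jittering it in time, and a block-swap operation that exchanges one block of two latent vectors while leaving the rest untouched. The function names, array shapes, and the circular shift used for jitter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_view(x, drop_prob=0.2, max_jitter=2, rng=rng):
    """Build one augmented view of a brain-state window.

    x: array of shape (time, neurons), e.g. binned spike counts.
    A random subset of neurons is dropped (zeroed) and the window is
    jittered in time (a circular shift stands in for resampling here).
    """
    shift = int(rng.integers(-max_jitter, max_jitter + 1))
    view = np.roll(x, shift, axis=0)          # temporal jitter
    keep = rng.random(x.shape[1]) >= drop_prob  # per-neuron dropout mask
    return view * keep

def block_swap(z1, z2, content_dim):
    """Block-swap latent augmentation (hypothetical helper):
    exchange the first `content_dim` dimensions of two latent
    vectors while keeping each vector's remaining block fixed.
    """
    s1 = np.concatenate([z2[:content_dim], z1[content_dim:]])
    s2 = np.concatenate([z1[:content_dim], z2[content_dim:]])
    return s1, s2

# Example: two views of the same window should share content after a swap.
x = rng.poisson(2.0, size=(10, 5)).astype(float)
view_a, view_b = augment_view(x), augment_view(x)
z_a, z_b = np.arange(8.0), np.arange(8.0) + 10  # stand-in latents
swapped_a, swapped_b = block_swap(z_a, z_b, content_dim=4)
```

In the paper's framing, the alignment loss would pull the latents of `view_a` and `view_b` together, while the decoder is trained to reconstruct activity from block-swapped latents, encouraging the first block to carry view-invariant (behaviorally relevant) content.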



About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops, which provide a less formal setting.
