Traveling NeuReps in Brains and Machines

Dec 15, 2023

Speakers

About

Good neural architectures are rooted in good inductive biases (a.k.a. priors). Equivariance under symmetries is a prime example of a successful physics-inspired prior, one that sometimes dramatically reduces the number of examples needed to learn predictive models. Diffusion-based models, among the most successful generative models, are rooted in nonequilibrium statistical mechanics. Conversely, ML methods have recently been used to solve PDEs, for example in weather prediction, and to accelerate molecular dynamics (MD) simulations by learning the (quantum mechanical) interactions between atoms and electrons. In this work we will try to extend this thinking to more flexible priors on the hidden variables of a neural network. In particular, we will impose wavelike dynamics on the hidden variables under transformations of the inputs, which relaxes the stricter notion of equivariance. We find that under certain conditions, wavelike dynamics naturally arises in these hidden representations. We formalize this idea in a VAE-over-time architecture in which the hidden dynamics is described by a Fokker-Planck (a.k.a. drift-diffusion) equation. This in turn leads to a new definition of a disentangled hidden representation of input states that can easily be manipulated to undergo transformations.
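The Fokker-Planck (drift-diffusion) dynamics mentioned above can be made concrete. As a reminder of the standard form (not a formula taken from the talk), the density p(z, t) of a latent variable z with drift field \mu and constant diffusion coefficient D evolves as

\partial_t p(z, t) = -\nabla_z \cdot \big( \mu(z, t)\, p(z, t) \big) + D\, \nabla_z^2 p(z, t).

A sample-level view of the same dynamics is the corresponding stochastic differential equation dz = \mu(z, t)\, dt + \sqrt{2D}\, dW_t. The sketch below is a hypothetical illustration, not the speakers' implementation: it rolls a latent vector forward with an Euler-Maruyama discretization of that SDE, the kind of step a VAE-over-time latent model could take between encoding and decoding; the drift function here is a placeholder for a learned network.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, D, steps, dim = 0.01, 0.1, 100, 8   # step size, diffusion coefficient, rollout length, latent size

    def drift(z):
        # placeholder drift field; in a learned model this would be a neural network
        return -z

    z = rng.standard_normal(dim)            # initial latent, e.g. an encoder's output for some input
    trajectory = [z.copy()]
    for _ in range(steps):
        noise = rng.standard_normal(dim)
        # Euler-Maruyama step of dz = mu(z) dt + sqrt(2D) dW
        z = z + drift(z) * dt + np.sqrt(2.0 * D * dt) * noise
        trajectory.append(z.copy())
    # each latent along the trajectory could then be decoded back to input space

Each step adds the deterministic drift plus Gaussian noise with variance 2D dt, which is exactly the per-step increment whose ensemble density obeys the Fokker-Planck equation written above.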

Organizer
