Interventional Causal Representation Learning

Dec 2, 2022

About

The theory of identifiable representation learning aims to build general-purpose methods that extract high-level latent (causal) factors from low-level sensory data. Most existing work focuses on identifiable representation learning from observational data, relying on distributional assumptions on the latent (causal) factors. In practice, however, we often also have access to interventional data, e.g. from robot manipulation experiments in robotics, genetic perturbation experiments in genomics, or electrical stimulation experiments in neuroscience. How can we leverage interventional data to help identify high-level latents? To this end, this work explores the role of interventional data in identifiable representation learning. We study the identifiability of latent causal factors with and without interventional data, under minimal distributional assumptions on the latents. We prove that if the true latents map to the observed high-dimensional data via a polynomial function, then representation learning by minimizing the standard reconstruction loss (as used in autoencoders) identifies the true latents up to an affine transformation. If we additionally have access to interventional data generated by hard do-interventions on some latents, then these intervened latents can be identified up to permutation, shift, and scaling.
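To make the guarantee concrete, here is a minimal numeric sketch (not the paper's method) of what "identified up to an affine transformation" means: if learned latents equal an unknown invertible affine transform of the true latents, the true latents are exactly recoverable from the learned ones by affine regression. All variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 3
Z = rng.normal(size=(n, d))       # true latents (distribution is illustrative)
A = rng.normal(size=(d, d))       # random mixing matrix (invertible w.h.p.)
b = rng.normal(size=d)
Z_hat = Z @ A.T + b               # "learned" latents: an affine transform of the truth

# Affine identifiability check: regress Z on Z_hat with an intercept column.
X = np.hstack([Z_hat, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(X, Z, rcond=None)
residual = np.linalg.norm(X @ W - Z)
print(residual)  # near zero: Z is an exact affine function of Z_hat
```

Under the paper's stronger guarantee with hard do-interventions, the same check would succeed with W restricted to a permutation combined with per-coordinate shift and scaling, rather than an arbitrary affine map.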

NeurIPS 2022