InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models

Jul 24, 2023



Diffusion models feature high sample quality, but are not effective at learning semantically meaningful latent representations. Here, we propose InfoDiffusion, an algorithm that enables diffusion models to perform representation learning using low-dimensional latent variables. We introduce auxiliary-variable diffusion models—a model family that contains an additional set of semantically meaningful latents—and we derive new variational inference algorithms that optimize a learning objective regularized with a mutual information term. Maximizing mutual information helps InfoDiffusion uncover semantically meaningful representations across multiple datasets, including representations that achieve the strong property of disentanglement. We envision our methods being useful in applications that require exploring a learned latent space to generate high-quality outputs, e.g., in generative design.
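The abstract describes an objective that combines a diffusion (denoising) loss over observations with a variational term tying a low-dimensional latent to a prior, regularized by a mutual-information term that is maximized. The toy sketch below illustrates that three-part structure only; all function names, the linear "encoder," the single-step "denoiser," and the MI surrogate are hypothetical stand-ins, not the paper's actual variational bound, which spans all diffusion timesteps.

```python
import numpy as np

rng = np.random.default_rng(0)

def infodiffusion_toy_loss(x, lam=1.0):
    """Toy sketch of an MI-regularized auxiliary-variable diffusion objective.

    All components here are illustrative placeholders: the real InfoDiffusion
    objective is a variational bound over the full diffusion process.
    """
    # "Encoder": map x to a low-dimensional auxiliary latent z
    # (a fixed random linear map stands in for a learned network).
    W = rng.normal(size=(2, x.shape[0]))
    z = W @ x

    # One denoising step: corrupt x with noise, then "denoise" it
    # (here the noise is simply subtracted back out as a stand-in
    # for a learned denoiser conditioned on z).
    eps = rng.normal(size=x.shape)
    x_noisy = 0.9 * x + 0.1 * eps
    x_hat = x_noisy - 0.1 * eps
    denoise_loss = np.mean((x_hat - x) ** 2)

    # Prior-matching term: KL between N(z, I) and the prior N(0, I)
    # reduces to 0.5 * ||z||^2 when both have unit variance.
    kl = 0.5 * np.sum(z ** 2)

    # Mutual-information surrogate: reward recovering z from the
    # generated sample (an auxiliary-decoder-style approximation).
    z_hat = W @ x_hat
    mi_surrogate = -np.mean((z_hat - z) ** 2)

    # Maximizing MI corresponds to subtracting lam * MI from the loss.
    return denoise_loss + kl - lam * mi_surrogate
```

In the actual algorithm, the encoder, denoiser, and MI estimator are neural networks trained jointly; the sketch only shows how the denoising, prior-matching, and mutual-information terms combine into a single regularized objective.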



ICML 2023