Jul 24, 2023
Diffusion models feature high sample quality, but are not effective at learning semantically meaningful latent representations. Here, we propose InfoDiffusion, an algorithm that enables diffusion models to perform representation learning using low-dimensional latent variables. We introduce auxiliary-variable diffusion models—a model family that contains an additional set of semantically meaningful latents—and we derive new variational inference algorithms that optimize a learning objective regularized with a mutual information term. Maximizing mutual information helps InfoDiffusion uncover semantically meaningful representations across multiple datasets, including representations that achieve the strong property of disentanglement. We envision our methods being useful in applications that require exploring a learned latent space to generate high-quality outputs, e.g., in generative design.
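To make the objective concrete, here is a minimal toy sketch of a loss with the shape the abstract describes: a denoising loss conditioned on a low-dimensional latent z, plus a regularizer that keeps z matched to its prior while remaining informative. Everything here is an illustrative assumption, not the paper's implementation: the linear encoder and denoiser, the single noise level (a real diffusion model uses a full noise schedule), and the MMD term, which stands in for the mutual-information regularizer (MI estimators vary; the paper's exact estimator may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder: linear map from data x to a low-dimensional latent z."""
    return x @ W

def denoise(x_noisy, z, V):
    """Toy conditional denoiser: predicts the clean sample from (x_noisy, z)."""
    return np.concatenate([x_noisy, z], axis=1) @ V

def mmd(z, z_prior):
    """Biased RBF-kernel MMD estimate between the batch of latents and
    samples from the prior -- used here as a stand-in regularizer
    (an assumption; not necessarily the estimator used in the paper)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d)
    return k(z, z).mean() + k(z_prior, z_prior).mean() - 2 * k(z, z_prior).mean()

def info_diffusion_style_loss(x, W, V, lam=1.0):
    """Denoising MSE conditioned on z, plus lam * latent regularizer."""
    z = encode(x, W)
    x_noisy = x + rng.normal(size=x.shape)   # single noise level for the toy
    denoising = ((denoise(x_noisy, z, V) - x) ** 2).mean()
    z_prior = rng.normal(size=z.shape)       # standard normal latent prior
    return denoising + lam * mmd(z, z_prior)

x = rng.normal(size=(64, 8))                 # toy data batch (8-dim)
W = rng.normal(size=(8, 2)) * 0.1            # hypothetical encoder weights -> 2-dim z
V = rng.normal(size=(10, 8)) * 0.1           # hypothetical denoiser weights
loss = info_diffusion_style_loss(x, W, V)
```

In a real training loop both `W` and `V` (or their neural-network counterparts) would be optimized jointly, so the latent z is pushed to carry information the denoiser can exploit, which is the mechanism the abstract credits for the semantically meaningful representations.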