Generalizing learning with Optimal Transport: Invariances and generative models across incomparable spaces

Dec 13, 2019

About

Optimal Transport is playing an increasingly important role in machine learning. In this talk, I will highlight directions for generalizing learning ideas with Optimal Transport, in particular toward respecting invariances or prior knowledge. First, GANs have shown remarkable success in learning a distribution that faithfully recovers a reference distribution in its entirety, but sometimes we may want to learn only some aspects (e.g., cluster or manifold structure) while modifying others (e.g., style, orientation, or dimension). We propose a new model for learning across incomparable spaces and show how to steer it toward target properties; a key component of our model is the Gromov-Wasserstein distance, sketched below. Second, learned representations of distributions may only be alignable after applying a transformation from a known class. I will summarize ideas for incorporating such invariances into Optimal Transport distances, and their implications for applications. This talk is based on joint work with David Alvarez-Melis, Charlotte Bunne, Tommi Jaakkola, and Andreas Krause.
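The Gromov-Wasserstein distance compares two distributions through their within-space distance matrices, so the two spaces never need a shared metric or even the same dimension. As a rough illustration only (a minimal sketch using the open-source POT library, not the implementation from the talk; point-cloud sizes, dimensions, and the random seed are arbitrary assumptions), the snippet below matches a 2-dimensional point cloud to a 5-dimensional one:

```python
# Minimal Gromov-Wasserstein sketch with POT (pip install pot).
# Not the speakers' code; sizes/dimensions below are illustrative.
import numpy as np
import ot

rng = np.random.default_rng(0)

# Two point clouds in incomparable spaces: GW only needs the
# *intra*-space distance matrices, never a cross-space metric.
X = rng.normal(size=(30, 2))   # 30 points in R^2
Y = rng.normal(size=(40, 5))   # 40 points in R^5

C1 = ot.dist(X, X)             # pairwise distances within X
C2 = ot.dist(Y, Y)             # pairwise distances within Y
C1 /= C1.max()                 # normalize scales so the two
C2 /= C2.max()                 # cost matrices are comparable

p = ot.unif(len(X))            # uniform weights on each cloud
q = ot.unif(len(Y))

# Coupling that matches points whose within-space distance
# patterns agree, plus the resulting GW discrepancy.
T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q,
                                      'square_loss', log=True)
print('GW distance:', log['gw_dist'])
```

Because the objective depends only on C1 and C2, the same computation goes through unchanged if the clouds differ in orientation, style, or ambient dimension, which is exactly the setting the talk targets.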

About NeurIPS 2019

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, workshops provide a less formal setting.
