Jul 24, 2023
The goal of optimal transport (OT) theory is to characterize maps that can efficiently push forward a probability measure onto another. While difficult, that task finds many uses in science and machine learning. Recent works have drawn inspiration from Brenier's theorem, which states that when the ground cost is the squared-Euclidean distance, the "best" map to morph a continuous measure μ ∈ 𝒫(ℝ^d) into another ν must be the gradient of a convex function. Such works propose, following [Makkuva+20, Korotin+20], to focus exclusively on maps T = ∇f_θ, where f_θ is an input convex neural network (ICNN), as defined by [Amos+17], and to fit θ with SGD using samples from μ, ν. Despite their mathematical elegance, fitting ICNNs in OT tasks raises many challenges, due notably to the many constraints imposed on θ, the need to approximate the conjugate of f_θ, and the limitation that they only work for the squared-Euclidean cost. More generally, we question the relevance of using Brenier's result, which only applies to densities, to constrain the architecture of candidate maps fitted on samples. Motivated by these limitations, we propose a radically different approach to estimating OT maps: given any cost c, we introduce a regularizer, the Monge gap ℳ^c_ρ(T) of a map T. That gap quantifies how far a map T deviates from the ideal properties we expect from a c-OT map supported anywhere on a probability measure ρ. In practice, we drop all architecture requirements for T and simply minimize a distance (e.g., the Sinkhorn divergence) between T♯μ and ν, regularized by ℳ^c_ρ(T). We study ℳ^c_ρ and show how this simple pipeline significantly outperforms other baselines in practice.
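To make the regularizer concrete: a natural empirical estimator of the Monge gap on samples x_1, …, x_n from ρ compares the cost T actually pays, (1/n) Σᵢ c(xᵢ, T(xᵢ)), with the optimal transport cost between the empirical measure (1/n) Σᵢ δ_{xᵢ} and its push-forward under T; the difference is non-negative and vanishes exactly when T moves those samples c-optimally. The sketch below is a minimal illustration of that estimator for the squared-Euclidean cost, using NumPy and the POT library; the function name `monge_gap` and the two toy maps are illustrative choices of ours, not the authors' reference implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def monge_gap(x, t_x):
    """Empirical Monge gap of a map T for the squared-Euclidean cost.

    x   : (n, d) samples x_i drawn from the reference measure rho
    t_x : (n, d) images T(x_i) of those samples under the map T

    Returns mean_i ||x_i - T(x_i)||^2 - W_c(rho_n, T#rho_n), which is
    non-negative and zero iff T is c-optimal between the empirical measures.
    """
    n = x.shape[0]
    # Cost actually paid by T: each x_i is sent to T(x_i).
    displacement_cost = np.sum((x - t_x) ** 2, axis=1).mean()
    # Cheapest way to move {x_i} onto {T(x_j)}: exact discrete OT with
    # uniform weights and the pairwise squared-Euclidean cost matrix.
    a = b = np.full(n, 1.0 / n)
    cost_matrix = ot.dist(x, t_x)          # squared-Euclidean by default
    ot_cost = ot.emd2(a, b, cost_matrix)   # exact OT cost (network simplex)
    return displacement_cost - ot_cost


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 2))
    # Gradient of a convex function (symmetric positive-definite linear map):
    # optimal for the squared-Euclidean cost, so its gap is ~0.
    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(monge_gap(x, x @ A.T))
    # A 90-degree rotation is not the gradient of a convex function: it maps
    # N(0, I) onto itself yet moves every point, so its gap is strictly > 0.
    R = np.array([[0.0, -1.0], [1.0, 0.0]])
    print(monge_gap(x, x @ R.T))
```

In a training pipeline like the one described above, one would evaluate this quantity (or a differentiable entropic surrogate of the OT term) on each batch and add λ·ℳ^c_ρ(T_θ) to the fitting loss, e.g., a Sinkhorn divergence between T_θ♯μ and ν.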