Generalization Bounds for (Wasserstein) Robust Optimization

Dec 6, 2021

About

(Distributionally) robust optimization has recently gained momentum in the machine learning community, owing to its promise for developing generalizable learning paradigms in stochastic or adversarial environments. To understand its generalization capabilities, in this paper we study generalization bounds for robust optimization and Wasserstein distributionally robust optimization. We consider a broad class of piecewise Hölder smooth loss functions, in both the stochastic setting (i.i.d. or weakly dependent data) and the adversarial setting. We derive finite-sample generalization bounds for robust optimization and Wasserstein distributionally robust optimization, assuming that the underlying data-generating distribution satisfies certain transportation-information inequalities. The proofs build on a general connection between robustness and variation regularization (including Lipschitz and gradient regularization, among others), as well as on new local Rademacher complexity results for variation regularization. Our theory is illustrated on various machine learning tasks, including supervised learning, principal component analysis, learning with Markovian data, and risk-averse optimization.
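
For context, a minimal sketch of the standard Wasserstein distributionally robust formulation behind the abstract is given below in LaTeX. The notation (empirical distribution \hat{P}_n, ambiguity radius \rho, loss \ell(\theta; Z), 1-Wasserstein distance W_1) is assumed here and is not taken from the talk itself.

% Wasserstein DRO: minimize the worst-case expected loss over an
% ambiguity ball of radius \rho around the empirical distribution \hat{P}_n.
\min_{\theta} \; \sup_{Q \,:\, W_1(Q, \hat{P}_n) \le \rho} \; \mathbb{E}_{Z \sim Q}\bigl[\ell(\theta; Z)\bigr]

% Robustness vs. variation regularization (simplest case, assumed here):
% if z \mapsto \ell(\theta; z) is Lipschitz, Kantorovich--Rubinstein duality gives
\sup_{Q \,:\, W_1(Q, \hat{P}_n) \le \rho} \mathbb{E}_{Q}\bigl[\ell(\theta; Z)\bigr]
  \;\le\; \mathbb{E}_{\hat{P}_n}\bigl[\ell(\theta; Z)\bigr]
  \;+\; \rho \,\mathrm{Lip}_Z\bigl(\ell(\theta; \cdot)\bigr)

In words, the worst-case expected loss over the Wasserstein ball is controlled by the empirical risk plus a Lipschitz (variation) regularization term; the results described in the abstract concern a more general form of this connection, covering piecewise Hölder smooth losses and regularizers such as gradient norms.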

About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
