Learning GMMs with Nearly Optimal Robustness Guarantees

Jul 2, 2022

About

In this work, we solve the problem of robustly learning a high-dimensional Gaussian mixture model with k components from ϵ-corrupted samples up to accuracy O(ϵ) in total variation distance, for any constant k and under mild assumptions on the mixture. This robustness guarantee is optimal up to polylogarithmic factors. The main challenge is that most earlier works rely on learning the individual components of the mixture, which is impossible in our setting, at least for the strong robustness guarantees we aim for. Instead, we introduce a new framework, which we call strong observability, that gives us a route around this obstacle.
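For readers unfamiliar with the setting, the following is a minimal illustrative sketch (not code from the talk) of the standard ϵ-corruption model the abstract refers to: samples are drawn from a k-component Gaussian mixture, and an adversary then replaces an ϵ-fraction of them with arbitrary points. The particular corruption used here (planting far-away outliers) is just one hypothetical choice; in the actual model the replacements may be arbitrary and may depend on the clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(n, weights, means, covs):
    """Draw n samples from a Gaussian mixture with the given parameters."""
    k = len(weights)
    components = rng.choice(k, size=n, p=weights)
    return np.stack([
        rng.multivariate_normal(means[c], covs[c]) for c in components
    ])

def corrupt(samples, eps):
    """Adversarially replace an eps-fraction of the samples.

    This toy "adversary" just plants far-away outliers; the model allows
    arbitrary replacements that may depend on the clean samples.
    """
    n, d = samples.shape
    m = int(eps * n)
    idx = rng.choice(n, size=m, replace=False)
    out = samples.copy()
    out[idx] = 100.0 * np.ones(d)  # one crude choice of corruption
    return out

# Example: a 2-component mixture in d = 3 dimensions, eps = 0.05.
d, eps = 3, 0.05
weights = [0.5, 0.5]
means = [np.zeros(d), 4.0 * np.ones(d)]
covs = [np.eye(d), np.eye(d)]
clean = sample_gmm(10_000, weights, means, covs)
corrupted = corrupt(clean, eps)
```

Given only the corrupted sample, the learner's goal is to output a mixture whose total variation distance from the true one is O(ϵ), regardless of which points were replaced.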

About COLT

The conference has been held annually since 1988 and has become the leading conference on learning theory by maintaining a highly selective review process for submissions. It is committed to publishing high-quality articles on all theoretical aspects of machine learning and related topics.
