Probable Domain Generalization via Quantile Risk Minimization

Nov 28, 2022

Abstract

Domain generalization (DG) leverages labeled training data from multiple domains with the goal of generalizing to related test domains. To achieve this, DG is commonly formulated as a worst-case optimization problem over the set of all possible domains. However, this worst-case problem is generally intractable and, with adversarial shifts extremely unlikely in practice, leads to overly conservative solutions. In fact, a recent study found that no DG algorithm outperformed empirical risk minimization in terms of average performance over test domains. To address these shortcomings, we propose a probabilistic framework for DG, which we call Probable Domain Generalization, and advocate for predictors that perform well with high probability rather than in the worst-case or on-average. Our key idea is that distribution shifts seen during training should inform us of probable shifts at test time. To achieve this, we explicitly relate training and test domains as draws from the same underlying meta-distribution, and propose a new optimization problem—Quantile Risk Minimization (QRM)—which requires that predictors generalize with high probability. We then prove that, given sufficiently many domains and samples, the empirical version (EQRM) produces predictors that generalize to new domains with the desired probability. We also show that EQRM recovers the causal predictor as the desired probability of generalization approaches one. In our experiments, we introduce a new evaluation protocol for DG, which underscores the importance of multiple test domains for evaluating the quantile performance of DG algorithms, and we show that our algorithms outperform strong DG baselines on real and synthetic data.
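The quantile objective at the heart of QRM can be illustrated with a small numerical sketch. The risk values, the choice of alpha = 0.95, and the function name below are hypothetical illustrations, not the paper's implementation (EQRM optimizes a smoothed estimate of this quantile); the sketch only shows how a quantile objective can prefer a different predictor than the average (ERM) objective:

```python
import numpy as np

def empirical_quantile_risk(per_domain_risks, alpha):
    """Empirical alpha-quantile of the per-domain risk distribution.

    QRM replaces the on-average (ERM) or worst-case objective with this
    quantile: a predictor minimizing it is asked to perform well on a
    fraction alpha of domains drawn from the same meta-distribution.
    """
    return float(np.quantile(per_domain_risks, alpha))

# Hypothetical risk profiles over 10 training domains for two predictors.
# Predictor A has the lower average risk but a heavy tail (one bad domain);
# predictor B is slightly worse on average but uniform across domains.
risks_a = np.array([0.1] * 9 + [2.0])
risks_b = np.full(10, 0.3)

print(risks_a.mean(), risks_b.mean())            # average risk: A looks better (ERM)
print(empirical_quantile_risk(risks_a, 0.95),
      empirical_quantile_risk(risks_b, 0.95))    # 0.95-quantile risk: B looks better (QRM)
```

With alpha close to one, the quantile objective increasingly penalizes the tail of the risk distribution, which is the sense in which QRM interpolates between average-case and worst-case performance.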

Interested in talks like this? Follow NeurIPS 2022.