Dec 6, 2021
Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses, since optimizing the adversarial loss with most hypothesis sets is NP-hard. But which surrogate losses should be used, and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the ℋ-calibration and ℋ-consistency of adversarial surrogate losses. We show that convex loss functions, or the supremum-based convex losses often used in applications, are not ℋ-calibrated for common hypothesis sets used in machine learning. We then give a characterization of ℋ-calibration and prove that some surrogate losses are indeed ℋ-calibrated for the adversarial zero-one loss with common hypothesis sets. In particular, we fix calibration results presented for the family of linear models in a previous publication and significantly generalize them to nonlinear hypothesis sets. Next, we show that ℋ-calibration is not sufficient to guarantee consistency and prove that, in the absence of any distributional assumption, no continuous surrogate loss is consistent in the adversarial setting. This, in particular, proves that a claim made in a previous publication is inaccurate. We then identify natural conditions under which some surrogate losses that we describe in detail are ℋ-consistent. We also report a series of empirical results on simulated data, which show that many ℋ-calibrated surrogate losses are indeed not ℋ-consistent, and which validate our theoretical assumptions.
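To make the objects in the abstract concrete, here is a minimal numerical sketch of the adversarial zero-one loss and a supremum-based convex surrogate for the simplest case the paper discusses: a linear hypothesis h(x) = w·x under an ℓ∞ perturbation of radius γ. The closed-form worst-case margin y·w·x − γ‖w‖₁ follows from Hölder's inequality; the function names and the choice of hinge as the base convex loss are illustrative, not the paper's notation.

```python
import numpy as np

def adversarial_zero_one_loss(w, x, y, gamma):
    # Worst-case margin of the linear model h(x) = w.x over all
    # perturbations ||delta||_inf <= gamma (via Hölder's inequality).
    worst_margin = y * np.dot(w, x) - gamma * np.linalg.norm(w, 1)
    # Adversarial zero-one loss: 1 if some perturbation flips (or zeroes)
    # the margin, 0 otherwise.
    return float(worst_margin <= 0)

def sup_hinge_surrogate(w, x, y, gamma):
    # Supremum-based surrogate: sup over perturbations of the hinge loss.
    # Since the hinge loss is nonincreasing in the margin, the supremum
    # is attained at the worst-case margin.
    worst_margin = y * np.dot(w, x) - gamma * np.linalg.norm(w, 1)
    return max(0.0, 1.0 - worst_margin)

# Example: a point correctly classified with margin 0.5 stays robust
# for gamma = 0.2 (worst-case margin 0.3) but not for gamma = 0.6.
w, x = np.array([1.0, 0.0]), np.array([0.5, 0.0])
print(adversarial_zero_one_loss(w, x, 1, 0.2))  # 0.0
print(adversarial_zero_one_loss(w, x, 1, 0.6))  # 1.0
print(sup_hinge_surrogate(w, x, 1, 0.2))        # 0.7
```

The surrogate upper-bounds the adversarial zero-one loss and is convex in w for fixed data, which is why such losses are used in practice; the paper's point is that this alone does not guarantee ℋ-calibration or ℋ-consistency.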
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
Presentations with a similar topic, category, or speaker
Manli Shu, …
Jianhao Wang, …