Dec 6, 2021
Several recent works in machine learning have focused on evaluating the test-time robustness of a classifier: how well the classifier performs not just on the target domain it was trained upon, but also on perturbed examples. In these settings, the focus has largely been on two extremes of robustness: robustness to perturbations drawn _at random_ from within some distribution (i.e., robustness to random perturbations), and robustness to the _worst-case_ perturbation in some set (i.e., adversarial robustness). In this paper, we argue that a sliding scale between these two extremes provides a valuable additional metric by which to gauge robustness. Specifically, we illustrate that each of these two extremes is naturally characterized by a (functional) p-norm over perturbation space, with p=1 corresponding to robustness to random perturbations and p=∞ corresponding to adversarial perturbations. We then present the main technical contribution of our paper: a method for efficiently estimating the value of these norms by interpreting them as the partition function of a particular distribution, and then using MCMC methods to estimate this partition function (either traditional Metropolis-Hastings for non-differentiable perturbations, or Hamiltonian Monte Carlo for differentiable perturbations). We show that our approach provides substantially better estimates than simple random sampling of the actual "intermediate-p" robustness of standard, data-augmented, and adversarially trained classifiers, illustrating a clear tradeoff between classifiers that optimize different metrics.
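To make the quantity being estimated concrete: the functional p-norm of a non-negative loss ℓ over a perturbation distribution μ is ‖ℓ‖_p = (E_{δ∼μ}[ℓ(δ)^p])^{1/p}, which recovers the average loss at p=1 and approaches the worst-case (adversarial) loss as p→∞. The sketch below is not the authors' implementation; it is a minimal illustration, under assumed toy choices, of the contrast the abstract draws: a simple-random-sampling baseline versus an MCMC-based partition-function estimate (here a basic annealed importance sampling scheme with random-walk Metropolis-Hastings inner steps). The loss function, the Gaussian perturbation distribution, and all hyperparameters are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(delta):
    # Hypothetical stand-in for the classifier loss at a perturbed input
    # x + delta; any non-negative function of the perturbation works here.
    return np.exp(-np.sum(delta**2) / 2.0) + 0.05

def naive_pnorm(p, dim=10, n=10_000, scale=0.5):
    # Baseline "simple random sampling": draw delta ~ mu (an isotropic
    # Gaussian here) and form a plain Monte Carlo estimate of
    # ||loss||_p = (E_mu[loss(delta)^p])^(1/p). For large p this estimator
    # is dominated by rare high-loss perturbations and degrades badly.
    samples = rng.normal(0.0, scale, size=(n, dim))
    vals = np.array([loss(d) for d in samples])
    return np.mean(vals**p) ** (1.0 / p)

def ais_pnorm(p, dim=10, n_chains=200, n_temps=50, n_mh=5, scale=0.5):
    # Annealed importance sampling estimate of Z = E_mu[loss^p], read as
    # the partition function of the density proportional to
    # mu(delta) * loss(delta)^p. The exponent anneals from 0 up to p.
    betas = np.linspace(0.0, p, n_temps)
    log_w = np.zeros(n_chains)
    deltas = rng.normal(0.0, scale, size=(n_chains, dim))  # exact draws from mu
    log_l = np.log(np.array([loss(d) for d in deltas]))
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Importance-weight increment for moving between adjacent exponents.
        log_w += (b - b_prev) * log_l
        for _ in range(n_mh):
            # Random-walk Metropolis-Hastings step targeting mu * loss^b.
            prop = deltas + rng.normal(0.0, 0.1, size=deltas.shape)
            log_l_prop = np.log(np.array([loss(d) for d in prop]))
            log_mu = -np.sum(deltas**2, axis=1) / (2 * scale**2)
            log_mu_prop = -np.sum(prop**2, axis=1) / (2 * scale**2)
            log_acc = (log_mu_prop - log_mu) + b * (log_l_prop - log_l)
            accept = np.log(rng.uniform(size=n_chains)) < log_acc
            deltas[accept] = prop[accept]
            log_l[accept] = log_l_prop[accept]
    # log-mean-exp of the weights estimates log Z; Z^(1/p) is the p-norm.
    log_z = np.logaddexp.reduce(log_w) - np.log(n_chains)
    return np.exp(log_z / p)

for p in (1, 2, 10, 50):
    print(p, naive_pnorm(p), ais_pnorm(p))
```

At small p the two estimators agree closely; as p grows, the annealed chain concentrates samples on high-loss regions and remains stable where the naive estimate becomes dominated by a handful of samples, which is the sampling-efficiency gap the paper's MCMC approach (Metropolis-Hastings or HMC in place of the random-walk steps above) is designed to close.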
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.