(Provable) Adversarial Robustness for Group Equivariant Tasks

Dec 10, 2023

About

A machine learning model is traditionally considered robust if its prediction remains (almost) constant under input perturbations with small norm. However, real-world tasks like molecular property prediction or point cloud segmentation have inherent equivariances, such as equivariance to rotation or permutation. In such tasks, even perturbations with large norm do not necessarily change an input's semantic content. Furthermore, there are perturbations for which a model's prediction explicitly needs to change. For the first time, we propose a sound notion of adversarial robustness that accounts for task equivariance. We then demonstrate that model equivariance improves robustness and that randomized smoothing can even be used to make models provably robust while preserving their equivariances. We additionally derive the first graph edit distance certificates, i.e., sound robustness guarantees for isomorphism-equivariant tasks like node classification. Overall, developing a sound notion of robustness for equivariant tasks is an important prerequisite for future work at the intersection of robust and geometric machine learning.
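To make the randomized-smoothing idea mentioned above concrete, here is a minimal sketch (not the speakers' implementation). A smoothed classifier predicts the majority class of a base classifier under Gaussian input noise and, following the standard construction, certifies an L2 radius of sigma * Phi^{-1}(p) when the majority class wins with probability p > 1/2. Because isotropic Gaussian noise commutes with rotations and permutations, such smoothing preserves those equivariances of the base model. The toy `base_classifier` below is purely hypothetical.

```python
import numpy as np
from scipy.stats import norm

def base_classifier(x):
    # Toy stand-in for a trained model (hypothetical): classify by
    # the sign of the first coordinate.
    return int(x[0] > 0)

def smoothed_predict(x, sigma=0.5, n_samples=1000, seed=0):
    """Randomized smoothing sketch: majority vote under Gaussian
    noise, plus a certified L2 radius sigma * Phi^{-1}(p_top)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    votes = np.bincount(
        [base_classifier(x + eps) for eps in noise], minlength=2
    )
    top = int(votes.argmax())
    p_top = votes[top] / n_samples
    # The certificate is only valid when the top class clearly wins.
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top, radius

pred, radius = smoothed_predict(np.array([2.0, -1.0]))
```

In practice the vote probability is replaced by a lower confidence bound, and many more noise samples are used; this sketch only illustrates the mechanics.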



Interested in talks like this? Follow NeurIPS 2023