Jul 24, 2023
Spurious correlations that degrade model performance or lead the model to be right for the wrong reasons are one of the main robustness concerns for real-world deployments. However, mitigating newly found spurious correlations in large-scale models during pre-training can be prohibitively expensive and unrealistic, especially for parties without access to large compute infrastructure. This paper presents a method for addressing these issues during fine-tuning for a given domain of interest. Focusing on multi-modal models (e.g., CLIP), the proposed method takes advantage of the different modalities in these models to detect and explicitly set apart spurious attributes from the affected class. This is achieved through a multi-modal contrastive loss function that conveniently expresses the spurious relationships via language. Our experimental results and in-depth visualizations on CLIP show that such an intervention can effectively i) improve the model's accuracy when the spurious attribute is not present, and ii) focus the model's explanation map on the actual class rather than the spurious attribute when it is present.
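The abstract's key idea, using language to name the spurious attribute and contrast it against the class prompt during fine-tuning, can be illustrated with a minimal sketch. This is not the paper's exact loss; it is a hedged toy version assuming CLIP-style unit-normalized image and text embeddings, where the class prompt (e.g., "a photo of a waterbird") acts as the positive and the spurious-attribute prompt (e.g., "a photo of a water background") acts as an explicit negative:

```python
import numpy as np

def _normalize(v):
    """L2-normalize embeddings along the last axis, as CLIP does."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def spurious_contrastive_loss(image_emb, class_text_emb, spurious_text_emb,
                              temperature=0.07):
    """Toy InfoNCE-style loss over {class prompt, spurious prompt}.

    For each image embedding, the class-prompt embedding is the positive
    and the spurious-attribute-prompt embedding is the negative, so
    minimizing the loss pulls images toward the class description and
    pushes them away from the spurious attribute expressed in language.
    All names and the two-candidate setup are illustrative assumptions,
    not the paper's actual formulation.
    """
    img = _normalize(np.asarray(image_emb, dtype=float))
    pos = _normalize(np.asarray(class_text_emb, dtype=float))
    neg = _normalize(np.asarray(spurious_text_emb, dtype=float))
    s_pos = np.sum(img * pos, axis=-1) / temperature   # cosine sim to class
    s_neg = np.sum(img * neg, axis=-1) / temperature   # cosine sim to spurious
    # -log softmax of the positive over {positive, negative}
    return float(np.mean(np.log1p(np.exp(s_neg - s_pos))))
```

As a sanity check, an image embedding aligned with the class prompt yields a near-zero loss, while one aligned with the spurious prompt yields a large loss, which is exactly the gradient signal needed to decorrelate the class from the attribute during fine-tuning.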