
            Towards Last-Layer Retraining for Group Robustness with Fewer Annotations

Dec 10, 2023

Speakers

Tyler LaBonte

Vidya Muthukumar

Abhishek Kumar

About

            Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious correlations and poor generalization on minority groups. The recent deep feature reweighting (DFR) technique achieves state-of-the-art group robustness via simple last-layer retraining, but it requires held-out group and class annotations to construct a group-balanced reweighting dataset. In this work, we examine this impractical requirement and find that last-layer retraining can be surprisingly effective…
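The last-layer retraining idea the abstract describes can be illustrated with a small sketch: freeze the backbone, build a group-balanced subset from held-out data, and refit only a linear head on the frozen features. The code below is a minimal, self-contained illustration on synthetic features, not the authors' implementation; the helper names (`group_balanced_indices`, `retrain_last_layer`) and the two-feature setup (one core feature, one spuriously correlated attribute) are invented for this example.

```python
import numpy as np

def group_balanced_indices(groups, rng):
    """Subsample so every (class, attribute) group contributes equally many
    examples -- the group-balanced reweighting dataset DFR-style methods use."""
    ids = [np.flatnonzero(groups == g) for g in np.unique(groups)]
    n = min(len(i) for i in ids)
    return np.concatenate([rng.choice(i, n, replace=False) for i in ids])

def retrain_last_layer(feats, labels, steps=500, lr=0.1):
    """Fit a logistic-regression head on frozen features by gradient descent."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - labels                       # gradient of the logistic loss
        w -= lr * (feats.T @ g) / len(labels)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
# Spurious attribute agrees with the label 90% of the time, so ERM on the
# raw data would lean on it; groups are (class, attribute) pairs.
spur = np.where(rng.random(n) < 0.9, y, 1 - y)
groups = 2 * y + spur
# Stand-in for frozen backbone features: a core feature and a spurious one.
X = np.stack([y + 0.5 * rng.standard_normal(n),
              spur + 0.1 * rng.standard_normal(n)], axis=1)

idx = group_balanced_indices(groups, rng)
w, b = retrain_last_layer(X[idx], y[idx].astype(float))
```

Because the balanced subset breaks the label-attribute correlation, the retrained head puts most of its weight on the core feature rather than the spurious one, which is the mechanism behind the group-robustness gains the abstract refers to.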

Organizer

NeurIPS 2023


Recommended Videos

Presentations with a similar topic, category, or speaker

When Can We Track Significant Preference Shifts in Dueling Bandits? · 04:43 · Joe Suk, … · NeurIPS 2023

WebArena: A Realistic Web Environment for Building Autonomous Agents · 11:27 · Shuyan Zhou, … · NeurIPS 2023

Learning Large-scale Neural Fields via Context Pruned Meta-Learning · 04:58 · Jihoon Tack, … · NeurIPS 2023

Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective · 04:25 · Yuzheng Hu, … · NeurIPS 2023

Key Challenges in Foundation Models (... and some solutions!) · 22:19 · Volkan Cevher · NeurIPS 2023

Post Hoc Explanations of Language Models Can Improve Language Models · 05:02 · Satyapriya Krishna, … · NeurIPS 2023
