
            Calibration and Consistency of Adversarial Surrogate Losses

Dec 6, 2021

Speakers

Yutao Zhong

Anqi Mao

Pranjal Awasthi

About

            Adversarial robustness is an increasingly critical property of classifiers in applications. The design of robust algorithms relies on surrogate losses since the optimization of the adversarial loss with most hypothesis sets is NP-hard. But, which surrogate losses should be used and when do they benefit from theoretical guarantees? We present an extensive study of this question, including a detailed analysis of the ℋ-calibration and ℋ-consistency of adversarial surrogate losses. We show that conv…
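
As a rough sketch of the objects involved (the notation below is assumed for this summary and is not taken from the paper): for a hypothesis h, an input x with label y ∈ {−1, +1}, a perturbation radius γ, and a margin-based surrogate function Φ, the adversarial 0/1 loss and a corresponding adversarial surrogate loss can be written as

    \ell_\gamma(h, x, y) = \sup_{\|x' - x\| \le \gamma} \mathbb{1}_{y\,h(x') \le 0},
    \qquad
    \widetilde{\ell}_\gamma(h, x, y) = \sup_{\|x' - x\| \le \gamma} \Phi\big(y\,h(x')\big).

Informally, ℋ-consistency asks that any sequence of hypotheses driving the expected surrogate loss to its infimum over the hypothesis set ℋ also drives the expected adversarial loss to its infimum over ℋ; ℋ-calibration is the corresponding pointwise (conditional-risk) requirement.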

Organizer


            NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


Recommended Videos

Presentations with a similar topic, category, or speaker

Generalizability of density functionals learned from differentiable programming on weakly correlated spin-polarized systems
15:34 · Bhupalee Kalita, … · NeurIPS 2021 · 3 years ago

Sample-Efficient Policy Search with a Trajectory Autoencoder
03:01 · Alexander Fabisch, … · NeurIPS 2021 · 3 years ago

Time-independent Generalization Bounds for SGLD in Non-convex Settings
09:07 · Tyler Farghly, … · NeurIPS 2021 · 3 years ago

Linear-Time Probabilistic Solutions of Boundary Value Problems
02:01 · Nicholas Krämer, … · NeurIPS 2021 · 3 years ago

Encoding Robustness to Image Style via Adversarial Feature Perturbations
07:36 · Manli Shu, … · NeurIPS 2021 · 3 years ago

Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization
11:35 · Jianhao Wang, … · NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021