
            Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples

Dec 6, 2021

Speakers

            Sungyoon Lee

Speaker · 0 followers

            Woojin Lee

Speaker · 0 followers

            Jinseong Park

Speaker · 0 followers

About

            We study the problem of training certifiably robust models against adversarial examples. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, and thus the tightness of the upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training uses much looser bounds but outperforms other models that use tighter bounds. We identify another key factor that influences th…
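The abstract refers to Interval Bound Propagation (IBP), which computes the loose but cheap bounds it contrasts with tighter methods. As a minimal illustrative sketch (not the paper's code), IBP pushes an ℓ∞ box of radius eps through each layer; for an affine layer the output box is exact and is computed from the box center and radius:

```python
import numpy as np

def ibp_linear(l, u, W, b):
    """Propagate an axis-aligned box [l, u] through y = W x + b.

    Center/radius form: the output center is W @ c + b, and the output
    radius is |W| @ r. For an affine layer these bounds are exact.
    """
    c = (l + u) / 2.0           # box center
    r = (u - l) / 2.0           # box half-width (radius)
    c_out = W @ c + b
    r_out = np.abs(W) @ r
    return c_out - r_out, c_out + r_out

# Toy example (hypothetical weights): bound the outputs for all inputs
# within an l-infinity ball of radius eps around x.
W = np.array([[1.0, -2.0], [3.0, 1.0]])
b = np.array([0.0, 0.0])
x = np.array([0.5, 0.5])
eps = 0.1
lo, hi = ibp_linear(x - eps, x + eps, W, b)
# lo = [-0.8, 1.6], hi = [-0.2, 2.4]
```

Stacking such steps (with elementwise monotone bounds for activations like ReLU) yields the interval bounds on the logits; the looseness the abstract mentions comes from the box over-approximating the true reachable set after several layers.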

Organizer


            NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like the format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.

Recommended Videos

Presentations with a similar topic, category, or speaker

On the Variance of the Fisher Information for Deep Learning
10:21

Alexander Soen, …

NeurIPS 2021 · 3 years ago

FakeAVCeleb: A Novel Audio-Video Multimodal Deepfake Dataset
04:47

Hasam Khalid, …

NeurIPS 2021 · 3 years ago

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs
07:38

Mucong Ding, …

NeurIPS 2021 · 3 years ago

A Spoken Language Dataset of Descriptions for Speech-Based Grounded Language Learning
04:26

Gaoussou Youssouf Kebe, …

NeurIPS 2021 · 3 years ago

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip
22:12

Mathieu Even, …

NeurIPS 2021 · 3 years ago

Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization
12:15

Deeksha Adil, …

NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021