            Robust Overfitting may be mitigated by properly learned smoothening

            May 3, 2021

Speakers

Tianlong Chen

Zhenyu Zhang

Sijia Liu

            About

            A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements. This intriguing problem of robust overfitting motivates us to seek more remedies. As a pilot study, this paper investigates two empirical means to inject more learned smoothening during AT: one leveraging knowledge distillati…
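
            To make the idea concrete, below is a minimal sketch (not the authors' released code) of one adversarial-training step in which the usual hard labels are smoothed by a teacher network's soft predictions via knowledge distillation, one of the "learned smoothening" means the abstract mentions. The helper names pgd_attack and at_kd_step, the PGD budget (eps, step size, steps), the temperature T, and the mixing weight lam are all illustrative assumptions, not values from the paper.

            ```python
            # Hypothetical sketch of adversarial training (AT) with knowledge-
            # distillation smoothing of the logits; hyperparameters are assumptions.
            import torch
            import torch.nn.functional as F

            def pgd_attack(model, x, y, eps=8/255, step=2/255, steps=10):
                """Standard L-infinity PGD: random start in the eps-ball, then
                signed-gradient ascent on the cross-entropy loss. Assumes inputs
                lie in [0, 1]."""
                delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
                for _ in range(steps):
                    loss = F.cross_entropy(model(x + delta), y)
                    grad, = torch.autograd.grad(loss, delta)
                    delta = (delta + step * grad.sign()).clamp(-eps, eps)
                    delta = delta.detach().requires_grad_(True)
                return (x + delta).clamp(0, 1).detach()

            def at_kd_step(model, teacher, optimizer, x, y, T=2.0, lam=0.5):
                """One AT step whose training target mixes the hard labels with a
                teacher's temperature-softened predictions on the adversarial
                examples, smoothing the logits the student is fit to."""
                x_adv = pgd_attack(model, x, y)
                logits = model(x_adv)
                with torch.no_grad():
                    soft = F.softmax(teacher(x_adv) / T, dim=1)
                kd = F.kl_div(F.log_softmax(logits / T, dim=1), soft,
                              reduction="batchmean") * T * T
                loss = lam * F.cross_entropy(logits, y) + (1 - lam) * kd
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                return loss.item()
            ```

            The T * T factor is the standard Hinton-style rescaling that keeps the distillation gradients comparable in magnitude across temperatures; a full implementation would also handle train/eval modes around the attack, which this sketch omits.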

            Organizer

            ICLR 2021

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.


            Recommended Videos

            Presentations with a similar topic, category, or speaker

            Panel Discussion · 50:10
            Xuanyi Dong, … · ICLR 2021

            Grounded Language Learning Fast and Slow · 11:44
            Felix Hill, … · ICLR 2021

            Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy · 09:55
            Akinori Ebihara, … · ICLR 2021

            The Importance of Pessimism in Fixed-Dataset Policy Optimization · 06:54
            Jacob Buckman, … · ICLR 2021

            Scaling Symbolic Methods using Gradients for Neural Model Explanation · 04:54
            Rishabh Singh, … · ICLR 2021

            Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions · 14:19
            Zhengxian Lin, … · ICLR 2021
