            Friendly Adversarial Training: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

            Jul 12, 2020

Speakers

Jingfeng Zhang

Speaker · 0 followers

Xilie Xu

Speaker · 0 followers

Bo Han

Speaker · 0 followers

About

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic, so it sometimes hurts natural generalization. In this paper, we raise a fundamental question: do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training is to employ confident adversarial data for updating the current model. We propose a novel approach of friendly adver…
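For readers who want the minimax formulation in concrete form, here is a minimal PyTorch sketch of PGD-based adversarial training. It is an illustration, not the paper's exact algorithm: the helper names pgd_attack and adversarial_training_step, the eps/alpha/steps values, and the batch-level early_stop criterion are all assumptions made here. The early_stop flag is one plausible reading of "friendly" attacks that do not kill training: stop the inner maximization once the model is already fooled, rather than pushing to the most adversarial data.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, early_stop=False):
    """L-inf PGD: iterated signed-gradient ascent, projected into the eps-ball."""
    # Random start inside the ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if early_stop and (logits.argmax(dim=1) != y).all():
            # "Friendly" stopping (assumed, batch-level for brevity): the attack
            # already succeeds everywhere, so don't make the data any harder.
            break
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()      # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                         # keep valid pixels
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: one optimizer step on the (friendly) adversarial batch."""
    model.eval()                                          # craft attacks in eval mode
    x_adv = pgd_attack(model, x, y, early_stop=True)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

A per-example stopping rule (masking out already-misclassified examples instead of halting the whole batch) would be closer in spirit to using only confidently adversarial data, at the cost of a few more lines of bookkeeping.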

Organizer

            ICML 2020

Account · 2.7k followers

Categories

AI & Data Science

Category · 10.8k presentations

About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended Videos

Presentations with a similar topic, category, or speaker

On the Consistency of Top-k Surrogate Losses
15:54

Forest Yang, …

ICML 2020 · 5 years ago

Do RNN and LSTM have Long Memory?
14:43

Jingyu Zhao, …

ICML 2020 · 5 years ago

Poster #30

Mariya Vasileva

ICML 2020 · 5 years ago

Task-Oriented Active Perception and Planning in Environments with Partially Known Semantics
16:06

Mahsa Ghasemi, …

ICML 2020 · 5 years ago

Curvature-guided Pruning of High-performance Neural Networks Using Ricci Flow
01:16

Samuel Glass, …

ICML 2020 · 5 years ago
