Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup

Jul 12, 2020

Speakers

Jang-Hyun Kim
Wonho Choo
Hyun Oh Song

About

While deep neural networks achieve great performance on fitting the training distribution, the learned networks are prone to overfitting and are susceptible to adversarial attacks. In this regard, a number of mixup based augmentation methods have been recently proposed. However, these approaches mainly focus on creating previously unseen virtual examples and can sometimes provide misleading supervisory signal to the network. To this end, we propose Puzzle Mix, a mixup method for explicitly utili…
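For context, the mixup family of methods the abstract refers to blends pairs of training examples and their labels. The sketch below shows only the vanilla mixup baseline (a global convex combination with a Beta-sampled ratio), not the saliency-guided, region-wise mixing that Puzzle Mix itself proposes; all names here are illustrative, not from the paper's code.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla mixup: convex combination of two examples and their
    soft labels, with mixing ratio lam ~ Beta(alpha, alpha).
    Puzzle Mix replaces this global blend with a saliency-aware,
    transport-based regional mix; this is only the baseline."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing ratio in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # blend inputs pixel-wise
    y = lam * y1 + (1.0 - lam) * y2       # blend one-hot labels
    return x, y, lam

# toy example: mix two 2x2 "images" with one-hot labels
x1, x2 = np.ones((2, 2)), np.zeros((2, 2))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(x1, y1, x2, y2, alpha=1.0)
```

The blended label `y` stays a valid probability vector (it sums to 1), which is why mixup training can simply use the usual cross-entropy loss on the soft targets.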

Organizer

ICML 2020

Categories

AI & Data Science

About ICML 2020

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended Videos

Presentations on similar topic, category or speaker:

Non-Stationary Bandits with Intermediate Observations
14:40 · Claire Vernade, … · ICML 2020

Implicit differentiation for hyperparameter optimization of Lasso-type models
16:17 · Quentin Bertrand, … · ICML 2020

Rank Aggregation from Pairwise Comparisons in the Presence of Adversarial Corruptions
15:56 · Prathamesh Patil, … · ICML 2020

Poster #23
Xueru Zhang · ICML 2020

Accelerating Large-Scale Inference with Anisotropic Vector Quantization
10:49 · Ruiqi Guo, … · ICML 2020

Continuous Graph Neural Networks
12:39 · Louis-Pascal Xhonneux, … · ICML 2020