
            Learning explanations that are hard to vary

            May 3, 2021

            Speakers

Giambattista Parascandolo
Alexander Neitz
Antonio Orvieto

            About

In this paper, we investigate the principle that good explanations are hard to vary in the context of deep learning. We show that averaging gradients across examples (akin to a logical OR of patterns) can favor memorization and "patchwork" solutions that sew together different strategies, instead of identifying invariances. To inspect this, we first formalize a notion of consistency for minima of the loss surface, which measures to what extent a minimum appears only when examples are pooled.…
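The abstract is cut off before the proposed remedy, but the OR-vs-AND analogy above suggests the core mechanism: instead of averaging per-environment gradients (which lets any single environment push a parameter), keep only the gradient components whose signs agree across environments and zero out the rest. The following is a minimal NumPy sketch of that sign-agreement masking idea; the function name, the environment-wise gradient layout, and the agreement threshold are illustrative assumptions of this sketch, not the authors' reference implementation.

```python
import numpy as np

def masked_average_gradient(env_grads, agreement=1.0):
    """Average per-environment gradients, but zero out components whose
    signs disagree across environments: a logical AND of patterns, in
    contrast to the OR-like behavior of plain averaging.

    env_grads: (n_envs, n_params) array, one flattened gradient per
               environment (or group of examples).
    agreement: fraction of environments that must share a sign for a
               component to survive (1.0 = unanimous); an illustrative
               assumption of this sketch.
    """
    env_grads = np.asarray(env_grads, dtype=float)
    mean_sign = np.sign(env_grads).mean(axis=0)  # in [-1, 1]
    # |mean_sign| >= 2*agreement - 1 holds exactly when at least an
    # `agreement` fraction of environments put the same sign on the
    # component (assuming no exactly-zero components).
    mask = np.abs(mean_sign) >= (2.0 * agreement - 1.0)
    return mask * env_grads.mean(axis=0)

# Toy check: the first component, on which all environments agree,
# survives; the second, which flips sign across environments (a
# "patchwork" direction), is zeroed out.
grads = np.array([[0.5,  0.9],
                  [0.4, -0.8],
                  [0.6,  0.7]])
print(masked_average_gradient(grads))  # -> [0.5, 0.0]
```

With agreement=1.0 the mask acts as a strict AND across environments; lowering the threshold interpolates back toward plain gradient averaging.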

            Organizer

ICLR 2021

            Categories

Mathematics
AI & Data Science

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.


            Recommended Videos

Presentations on a similar topic, category, or speaker

Paper Session 1 Q&A · 07:43
Ruohan Gao · ICLR 2021

Learning Hyperbolic Representations of Topological Features · 05:15
Panagiotis Kyriakis, … · ICLR 2021

Disentangled Recurrent Wasserstein Autoencoder · 09:17
Jun Han, … · ICLR 2021

Towards Faster and Stabilized GAN Training for High-Fidelity Few-Shot Image Synthesis · 05:08
Bingchen Liu, … · ICLR 2021

Byzantine-Robust and Privacy-Preserving Framework for FedML · 05:07
Hanieh Hashemi, … · ICLR 2021

Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering · 03:50
Calypso Herrera, … · ICLR 2021
