
            Frequency Bias in Neural Networks for Input of Non-Uniform Density

            Jul 12, 2020

Speakers

Ronen Basri
Meirav Galun
Amnon Geifman

            About

            Recent works have partly attributed the generalization ability of over-parameterized neural networks to frequency bias – networks trained with gradient descent on data drawn from a uniform distribution find a low frequency fit before high frequency ones. As realistic training sets are not drawn from a uniform distribution, we here use the Neural Tangent Kernel (NTK) model to explore the effect of variable density on training dynamics. Our results, which combine analytic and empirical observation…
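The frequency-bias claim in the abstract can be illustrated with a toy kernel gradient-flow computation. This is a stand-in for the paper's NTK analysis, not the authors' setup: the Gaussian kernel, the uniform periodic grid, and the training-time scale below are all illustrative assumptions. On a uniform grid a smooth stationary kernel has Fourier eigenvectors with eigenvalues that decay with frequency, so the closed-form gradient-flow solution fits low-frequency components of the target before high-frequency ones:

```python
import numpy as np

# Kernel gradient flow has the closed form f_t = V (I - exp(-Λ t)) Vᵀ y,
# so modes with larger kernel eigenvalues (low frequencies, for a smooth
# stationary kernel) are fit first.
n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)

# Gaussian kernel on a uniform periodic grid; its eigenvectors are Fourier modes.
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)               # wrap-around (periodic) distance
K = np.exp(-((d / 0.1) ** 2))

lam, V = np.linalg.eigh(K)

# Target mixing a low and a high frequency.
low = np.sin(2 * np.pi * x)
high = np.sin(2 * np.pi * 10 * x)
y = low + high

# Gradient-flow prediction at an early "training time" t.
t = 0.05
f_t = V @ ((1.0 - np.exp(-lam * t)) * (V.T @ y))

# Fraction of each frequency component still unfit.
residual = y - f_t
r_low = abs(residual @ low) / (low @ low)
r_high = abs(residual @ high) / (high @ high)
print(r_low < r_high)  # → True: the low-frequency component is fit first
```

Because the grid here is uniform, this only reproduces the classical frequency-bias picture; the paper's contribution concerns what changes when the input density is non-uniform.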

            Organizer


            ICML 2020

            Account · 2.7k followers

            Categories

            AI & Data Science

            Category · 10.8k presentations

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


            Recommended Videos

            Presentations on similar topic, category or speaker

A Meta-Learning Approach for Image Classification Architecture Recommendation · 04:35
Loren Tsahalon, … · ICML 2020

Accelerating Large-Scale Inference with Anisotropic Vector Quantization · 10:49
Ruiqi Guo, … · ICML 2020

Source Separation with Deep Generative Priors · 14:17
Vivek Jayaram, … · ICML 2020

Counterfactual Data Augmentation using Locally Factored Dynamics · 21:10
Silviu Pitis, … · ICML 2020

Proper Network Interpretability Helps Adversarial Robustness in Classification · 15:00
Akhilan Boopathy, … · ICML 2020

Ordinal Non-negative Matrix Factorization for Recommendation · 15:17
Olivier Gouvert, … · ICML 2020
