            Explicit loss asymptotics in the GD training of neural networks

            Dec 6, 2021

            Speakers

Maksim Velikanov

Dmitry Yarotsky

            About

            Current theoretical results on optimization trajectories of neural networks trained by gradient descent typically have the form of rigorous but potentially loose bounds on the loss values. In the present work we take a different approach and show that the learning trajectory of a wide network in a lazy training regime can be characterized by an explicit asymptotic at large training times. Specifically, the leading term in the asymptotic expansion of the loss behaves as a power law L(t) ∼ t^-ξ wi…
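
To make the power-law behavior concrete, here is a minimal numerical sketch (an assumed toy setting, not the authors' code or their derivation) of lazy-training dynamics written in the kernel eigenbasis: the eigenvalues and squared target coefficients are assumed to follow power laws with exponents nu and kappa, and the script measures the resulting empirical loss exponent ξ in L(t) ∼ t^-ξ.

```python
import numpy as np

# Illustrative sketch only (assumed setup, not the authors' code): gradient
# descent on a linearized ("lazy") model, written in the kernel eigenbasis.
# We assume eigenvalues lambda_k ~ k^(-nu) and squared target coefficients
# c_k^2 ~ k^(-kappa); under these assumptions the loss decays as a power law
# L(t) ~ t^(-xi), and the script fits the empirical exponent xi.

n = 200_000                                   # number of eigenmodes
nu, kappa = 1.0, 2.0                          # assumed spectral exponents
k = np.arange(1, n + 1, dtype=float)
lam = k ** (-nu)                              # kernel eigenvalues
c2 = k ** (-kappa)                            # squared target coefficients

eta = 0.5 / lam[0]                            # step size below 2 / lambda_max
ts = np.unique(np.logspace(1, 4, 40).astype(int))

# After t GD steps, mode k contributes c_k^2 * (1 - eta * lambda_k)^(2t) to the loss.
loss = np.array([np.sum(c2 * (1.0 - eta * lam) ** (2 * t)) for t in ts])

# Fit xi from the log-log tail of the trajectory; with the exponents assumed
# above the expected value is roughly (kappa - 1) / nu = 1.
tail = ts > 100
xi = -np.polyfit(np.log(ts[tail]), np.log(loss[tail]), 1)[0]
print(f"empirical loss exponent xi ~ {xi:.2f}")
```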

            Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

            Recommended Videos

Presentations on a similar topic, category, or speaker

Programming Supervision for Data-Centric AI
19:52
Paroma Varma
NeurIPS 2021

Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization
12:58
Shicong Cen, …
NeurIPS 2021

Learning symbolic equations with deep learning
43:07
Shirley Ho, …
NeurIPS 2021

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs
07:38
Mucong Ding, …
NeurIPS 2021

Overparameterization Improves Robustness to Covariate Shift in High-Dimensions
15:11
Nilesh Tripuraneni, …
NeurIPS 2021

Model inversion using Fourier Neural Operators
05:54
Daniel MacKinlay, …
NeurIPS 2021
