            Explicit loss asymptotics in the GD training of neural networks

            Dec 6, 2021

Speakers

Maksim Velikanov

Dmitry Yarotsky

            About

            Current theoretical results on optimization trajectories of neural networks trained by gradient descent typically have the form of rigorous but potentially loose bounds on the loss values. In the present work we take a different approach and show that the learning trajectory of a wide network in a lazy training regime can be characterized by an explicit asymptotic at large training times. Specifically, the leading term in the asymptotic expansion of the loss behaves as a power law L(t) ∼ t^-ξ wi…
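The power-law decay L(t) ∼ t^(-ξ) described in the abstract can be illustrated with a small numerical sketch (not the authors' code): gradient descent on a quadratic loss whose Hessian eigenvalues follow a power-law spectrum, mimicking the NTK of a wide network in the lazy regime. The spectral exponent, target coefficients, and step size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy linearized ("lazy") model: quadratic loss with eigenvalues
# lambda_k ~ k^(-nu), an assumed stand-in for an NTK-like spectrum.
n = 20000                           # number of eigenmodes (assumed)
nu = 1.5                            # spectral decay exponent (assumed)
lam = np.arange(1, n + 1) ** (-nu)  # eigenvalues of the linearized kernel
c = np.ones(n)                      # target coefficients in the eigenbasis (assumed uniform)
eta = 0.9 / lam.max()               # step size below the stability threshold 2/lambda_max

def loss(t):
    # After t GD steps, mode k is damped by (1 - eta*lambda_k)^t, so
    # L(t) = (1/2) * sum_k lambda_k * c_k^2 * (1 - eta*lambda_k)^(2t)
    return 0.5 * np.sum(lam * c**2 * (1.0 - eta * lam) ** (2 * t))

ts = np.array([10, 100, 1000, 10000])
Ls = np.array([loss(t) for t in ts])

# Estimate the exponent xi from a log-log fit: L(t) ~ t^(-xi).
# For this toy spectrum the continuum estimate gives xi = 1 - 1/nu.
xi = -np.polyfit(np.log(ts), np.log(Ls), 1)[0]
print(f"fitted exponent xi ≈ {xi:.2f}")
```

The fitted exponent depends only on the spectral decay, not on the step size, which is the kind of explicit spectrum-to-exponent link the abstract refers to.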

Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


Recommended Videos

Presentations on similar topic, category or speaker:

• QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries — Jie Lei, … (NeurIPS 2021, 13:12)
• Progressive Feature Interaction Search for Deep Sparse Network — Chen Gao, … (NeurIPS 2021, 14:01)
• A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs — Runzhong Wang, … (NeurIPS 2021, 11:20)
• Gradual Domain Adaptation without Indexed Intermediate Domains — Hong-You Chen, … (NeurIPS 2021, 15:05)
• Flexible Learning of Sparse Neural Networks via Constrained L0 Regularization — Juan Ramirez, … (NeurIPS 2021, 06:28)
• Noether Networks: Meta-Learning Useful Conserved Quantities — Ferran Alet, … (NeurIPS 2021, 05:00)