            From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent

            Nov 28, 2022

            Speakers

Ayush Sekhari
Speaker · 0 followers

Satyen Kale
Speaker · 0 followers

Jason D. Lee
Speaker · 0 followers

            About

            Stochastic Gradient Descent (SGD) has been the method of choice for learning large-scale non-convex models. While a general analysis of when SGD works has been elusive, there has been a lot of recent progress in understanding the convergence of Gradient Flow (GF) on the population loss, partly due to the simplicity that a continuous-time analysis buys us. An overarching theme of our paper is providing general conditions under which SGD converges, assuming that GF on the population loss converges…
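
To make the GF-versus-SGD contrast concrete, here is a minimal sketch on a toy quadratic population loss (an illustration under assumed names, not an example from the talk): gradient flow on the population loss is simulated by small Euler steps on the exact expected gradient, while SGD takes the same steps using noisy single-sample gradients.

```python
import numpy as np

# Toy setup (illustrative, not from the paper): samples x ~ N(theta*, I),
# population loss F(theta) = E[0.5 * ||theta - x||^2], minimized at theta*.
rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])

def population_grad(theta):
    # grad F(theta) = theta - theta*  (exact expectation, no sampling noise)
    return theta - theta_star

def stochastic_grad(theta):
    # gradient of the per-sample loss 0.5 * ||theta - x||^2 for one fresh sample
    x = theta_star + rng.standard_normal(2)
    return theta - x

# Gradient flow d(theta)/dt = -grad F(theta), approximated by Euler steps
# with a small step size (the continuous-time idealization).
theta_gf = np.zeros(2)
dt = 1e-3
for _ in range(10_000):
    theta_gf -= dt * population_grad(theta_gf)

# SGD: the same dynamics, but each step uses a noisy single-sample gradient.
theta_sgd = np.zeros(2)
eta = 1e-3
for _ in range(10_000):
    theta_sgd -= eta * stochastic_grad(theta_sgd)

print("GF  iterate:", theta_gf)   # tracks the gradient-flow limit closely
print("SGD iterate:", theta_sgd)  # fluctuates around it due to gradient noise
```

On this convex toy problem both iterates approach theta*; the talk asks under what general conditions SGD still converges in the non-convex setting, given that GF on the population loss does.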

            Organizer

NeurIPS 2022
Account · 961 followers

            Recommended Videos

            Presentations on a similar topic, category, or speaker

            Memory safe computations with XLA compiler
            03:21

            Artem Artemev, …

            NeurIPS 2022 · 2 years ago

            Contingency Planning with Learned Models of Behavioral and Perceptual Uncertainty
            31:17

            Nicholas Rhinehart

            NeurIPS 2022 · 2 years ago

            Fine-tuning language models to find agreement among humans with diverse preferences
            04:51

            Michiel Bakker, …

            NeurIPS 2022 · 2 years ago

            Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees
            09:14

            Kai-Chieh Hsu, …

            NeurIPS 2022 · 2 years ago

            Latest Advances in End-to-End Speech Recognition
            29:14

            Tara N. Sainath

            NeurIPS 2022 · 2 years ago

            SCONE: Surface Coverage Optimization in uNknown Environments by Volumetric Integration
            05:01

            Antoine Guédon, …

            NeurIPS 2022 · 2 years ago

            Interested in talks like this? Follow NeurIPS 2022