
            Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent

            Dec 6, 2021

Speakers

Spencer Frei

Quanquan Gu

            About

            Although the optimization objectives for learning neural networks are highly nonconvex, gradient-based methods have been wildly successful at learning neural networks in practice. This juxtaposition has led to a number of recent studies on provable guarantees for neural networks trained by gradient descent. Unfortunately, the techniques in these works are often highly specific to the problem studied in each setting, relying on different assumptions on the distribution, optimization parameters, a…
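The tension the abstract describes (a highly nonconvex training objective that gradient descent nonetheless optimizes well in practice) can be seen even on a toy problem. The sketch below is purely illustrative and is not code from the talk or the paper; the XOR dataset, hidden width, learning rate, and step count are all arbitrary choices for the demo.

```python
import numpy as np

# Train a two-layer ReLU network on XOR with plain gradient descent.
# The squared-error objective is nonconvex in the weights, yet the
# training loss decreases steadily from a random initialization.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])   # XOR labels: not linearly separable

h = 16                           # hidden width (arbitrary for the demo)
W1 = rng.normal(0, 1, (2, h))
b1 = np.zeros(h)
w2 = rng.normal(0, 1, h) / np.sqrt(h)

losses = []
lr = 0.1
for _ in range(2000):
    z = X @ W1 + b1              # pre-activations
    a = np.maximum(z, 0)         # ReLU
    pred = a @ w2                # network output
    err = pred - y
    losses.append(0.5 * np.mean(err ** 2))
    # Backpropagate through the two layers.
    grad_w2 = a.T @ err / len(X)
    grad_a = np.outer(err, w2) * (z > 0)
    grad_W1 = X.T @ grad_a / len(X)
    grad_b1 = grad_a.mean(axis=0)
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Running this shows the loss falling despite nonconvexity; explaining why such runs succeed, in a setting-independent way, is the kind of question the proxy-convexity framework is aimed at.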

Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

            Presentations on similar topic, category or speaker

Jonathan Stock - Director, United States Geological Survey Innovation Center (26:09)
Jonathan Stock · NeurIPS 2021

The decomposition of the higher-order homology embedding constructed from the k-Laplacian (19:33)
Yu-Chia Chen, … · NeurIPS 2021

Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers (07:41)
Blake Mason, … · NeurIPS 2021

Contrastive Reinforcement Learning of Symbolic Reasoning Domains (10:41)
Gabriel Poesia, … · NeurIPS 2021

I Can't Believe Latent Variable Models Are Not Better (30:31)
Chris Maddison · NeurIPS 2021

Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret (13:47)
Jean Tarbouriech, … · NeurIPS 2021
