Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction

Dec 6, 2021

Speakers

Dominik Stöger
Mahdi Soltanolkotabi

About

Recently there has been significant theoretical progress on understanding the convergence and generalization of gradient-based methods on non-convex losses with overparameterized models. Nevertheless, many aspects of optimization and generalization, and in particular the critical role of small random initialization, are not fully understood. In this paper, we take a step towards demystifying this role by proving that small random initialization followed by a few iterations of gradient descent beha…
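
The abstract is truncated, but its central claim is that gradient descent from a small random start initially behaves like a spectral method (power iteration). As a rough illustration only, and not the authors' algorithm or code, here is a minimal NumPy sketch of that early-iteration alignment; the fully observed symmetric loss f(U) = ¼‖UUᵀ − M‖²_F, the dimensions, the step size, and all variable names below are illustrative assumptions.

```python
# Sketch: gradient descent on an overparameterized factorization X = U U^T,
# started from a *small* random U, first aligns U with the top eigenvectors
# of the target M, much like a few steps of the power method.
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 50, 2, 50          # ambient dimension, true rank, overparameterized width (k >> r)

# Ground-truth rank-r PSD matrix M = U* U*^T (illustrative choice)
U_star = rng.standard_normal((n, r))
M = U_star @ U_star.T

alpha = 1e-6                 # small initialization scale
U = alpha * rng.standard_normal((n, k))
eta = 0.25 / np.linalg.norm(M, 2)   # step size set relative to the spectral norm of M

# Top-r eigenvectors of M, used only to measure subspace alignment
V = np.linalg.eigh(M)[1][:, -r:]

for t in range(201):
    if t % 40 == 0:
        # Fraction of U's energy in the top-r eigenspace of M: it rises toward 1
        # early on, because while ||U U^T|| << ||M|| the update is approximately
        # U <- (I + eta * M) U, i.e. a power-method (spectral) iteration.
        align = np.linalg.norm(V.T @ U, "fro") ** 2 / np.linalg.norm(U, "fro") ** 2
        err = np.linalg.norm(U @ U.T - M, "fro") / np.linalg.norm(M, "fro")
        print(f"iter {t:3d}  top-r alignment {align:.3f}  relative error {err:.3f}")
    # Gradient step on f(U) = 0.25 * ||U U^T - M||_F^2, whose gradient is (U U^T - M) U
    U = U - eta * (U @ U.T - M) @ U
```

Running this sketch, the alignment starts near r/n (a random subspace) and climbs toward 1 well before the reconstruction error becomes small, which is the "small random initialization is akin to spectral learning" behavior the title refers to.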

Organizer

NeurIPS 2021

About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Recommended Videos

Presentations on a similar topic, category, or speaker

Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval
14:33
Omar Khattab, …
NeurIPS 2021 · 3 years ago

Two steps to risk sensitivity
08:22
Christopher Gagne, …
NeurIPS 2021 · 3 years ago

Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
14:06
Qiujiang Jin, …
NeurIPS 2021 · 3 years ago

Towards Sample-Efficient Overparameterized Meta-learning
13:54
Yue Sun, …
NeurIPS 2021 · 3 years ago

Fair Clustering Under a Bounded Cost
12:52
Seyed A. Esmaeili, …
NeurIPS 2021 · 3 years ago

Rethinking the Variational Interpretation of Accelerated Optimization Methods
12:58
Peiyuan Zhang, …
NeurIPS 2021 · 3 years ago
