            Stochastic Second-Order Methods Improve Best-Known Sample Complexity of SGD For Gradient-Dominated Functions

            Nov 28, 2022

            Speakers

Saeed Masiha

Saber Salehkaleybar

Niao He

            About

We study the performance of Stochastic Cubic Regularized Newton (SCRN) on a class of functions satisfying the gradient dominance property, which holds in a wide range of applications in machine learning and signal processing. This condition ensures that any first-order stationary point is a global optimum. We prove that SCRN improves the best-known sample complexity of stochastic gradient descent in achieving an ϵ-global optimum by a factor of 𝒪(ϵ^{-1/2}). Even under a weak version of gradient dominance…
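
For readers skimming the terminology, here is a minimal sketch, in standard notation, of the two objects the abstract refers to; the constants c, α, M and the stochastic estimates g_t, H_t below are generic placeholders rather than the paper's own notation, and the paper's exact assumptions may differ. A function f is gradient-dominated of order α ∈ [1, 2] (α = 2 is the Polyak–Łojasiewicz condition) if

\[
  f(x) - \min_{y} f(y) \;\le\; c\,\|\nabla f(x)\|^{\alpha} \quad \text{for all } x,
\]

which makes every first-order stationary point a global minimizer, and a stochastic cubic-regularized Newton step minimizes a cubic model built from gradient and Hessian estimates g_t ≈ ∇f(x_t), H_t ≈ ∇²f(x_t):

\[
  x_{t+1} \in \arg\min_{y}\; \langle g_t,\, y - x_t \rangle
  + \tfrac{1}{2}\,\langle H_t (y - x_t),\, y - x_t \rangle
  + \tfrac{M}{6}\,\|y - x_t\|^{3}.
\]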

            Organizer

            NeurIPS 2022

            Recommended Videos

Presentations on a similar topic, category, or speaker

Deep learning-based bias adjustment of decadal climate predictions · 09:47
Johannes Exenberger, … · NeurIPS 2022

Gradient Knowledge Distillation for Pre-trained Language Models · 06:33
Lean Wang, … · NeurIPS 2022

A Kernelised Stein Statistic for Assessing Implicit Generative Models · 05:00
Wenkai Xu, … · NeurIPS 2022

Graph Coloring via Neural Networks for Haplotype Assembly and Viral Quasispecies Reconstruction · 04:39
Hansheng Xue, … · NeurIPS 2022

Decentralized Training of Foundation Models in Heterogeneous Environments · 04:48
Binhang Yuan, … · NeurIPS 2022

Adversarial Attack on Attackers: Post-Process to Mitigate Black-Box Score-Based Query Attacks · 04:59
Sizhe Chen, … · NeurIPS 2022
