
            Greedy and Random Quasi-Newton Methods with Faster Explicit Superlinear Convergence

Dec 6, 2021

Speakers

Dachao Lin

Speaker · 0 followers

Haishan Ye

Speaker · 0 followers

Zhihua Zhang

Speaker · 0 followers

About

In this paper we follow Rodomanov and Nesterov’s work to study quasi-Newton methods. We focus on the common SR1 and BFGS quasi-Newton methods to establish better explicit (local) superlinear convergence. First, based on the greedy quasi-Newton update, which greedily selects the direction so as to maximize a certain measure of progress, we improve the convergence rate to a condition-number-free superlinear convergence rate. Second, based on the random quasi-Newton update that selects the direction randoml…
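In the quadratic setting the greedy update the abstract refers to has a compact form: the true Hessian A is a fixed positive-definite matrix, each step picks the coordinate direction e_i maximizing e_i^T G e_i / e_i^T A e_i, and the classical BFGS update then moves the approximation G toward A along that direction. The Python below is our own minimal sketch of that Rodomanov-Nesterov-style scheme, assuming an initial overestimate G_0 ⪰ A (e.g. G_0 = tr(A)·I); it is not the authors' code, and the helper names bfgs_update and greedy_direction are ours.

import numpy as np

def bfgs_update(G, A, u):
    # Classical BFGS update of the approximation G with respect to the
    # true Hessian A along direction u:
    # G+ = G - (G u)(G u)^T / (u^T G u) + (A u)(A u)^T / (u^T A u)
    Gu, Au = G @ u, A @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

def greedy_direction(G, A):
    # Greedy coordinate direction: argmax over i of (e_i^T G e_i) / (e_i^T A e_i).
    i = int(np.argmax(np.diag(G) / np.diag(A)))
    e = np.zeros(G.shape[0])
    e[i] = 1.0
    return e

# Toy usage (a sketch, not the paper's experiments): approximate a fixed
# SPD matrix A starting from the scaled identity tr(A) * I, which dominates A.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
G = np.trace(A) * np.eye(n)
for _ in range(20):
    G = bfgs_update(G, A, greedy_direction(G, A))
print(np.linalg.norm(G - A))  # approximation error; expected to shrink rapidly

Under the paper's assumptions, the approximation error tracked here is the kind of quantity for which the greedy variant admits an explicit, condition-number-free superlinear rate.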

Organizer


            NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like the format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.


Recommended videos

Presentations with a similar topic, category, or speaker

Efficient Active Learning for Gaussian Process Classification by Error Reduction (06:56)
Guang Zhao, … · NeurIPS 2021 · 3 years ago

Asynchronous Decentralized SGD with Local, and Quantized Updates (12:37)
Giorgi Nadiradze, … · NeurIPS 2021 · 3 years ago

Oral Session 4: Vision Applications and Optimization (1:33:58)
NeurIPS 2021 · 3 years ago

Second Workshop on Quantum Tensor Networks in Machine Learning (8:26:13)
NeurIPS 2021 · 3 years ago

DualNet: Continual Learning, Fast and Slow (11:23)
Quang Pham, … · NeurIPS 2021 · 3 years ago

Town Hall (1:31:08)
NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021