
            Escaping Saddle Points with Compressed SGD

            Dec 6, 2021

Speakers

Dmitrii Avdiukhin
Grigory Yaroslavtsev

            About

            Stochastic gradient descent (SGD) is a prevalent optimization technique for large-scale distributed machine learning. While SGD computation can be efficiently divided between multiple machines, communication typically becomes a bottleneck in the distributed setting. Gradient compression methods can be used to alleviate this problem, and a recent line of work shows that SGD augmented with gradient compression converges to an ε-first-order stationary point. In this paper we extend these results to…
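To make the idea concrete, here is a minimal sketch of SGD with top-k gradient compression, one common compressor from this line of work. The toy objective, step size, noise model, and function names are illustrative assumptions for this page, not the paper's exact algorithm or analysis.

```python
import numpy as np

def top_k(g, k):
    """Top-k compressor: keep the k largest-magnitude coordinates, zero the rest.
    Only k values (plus indices) would need to be communicated."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def compressed_sgd(grad_fn, x0, lr=0.1, k=2, steps=500, seed=0):
    """SGD where each stochastic gradient is compressed before the update.
    Toy setup: Gaussian gradient noise stands in for minibatch sampling."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(steps):
        g = grad_fn(x) + 0.01 * rng.standard_normal(x.shape)  # stochastic gradient
        x -= lr * top_k(g, k)  # update uses the compressed gradient only
    return x

# Toy objective f(x) = ||x||^2 / 2, so grad f(x) = x; the iterate should
# approach the (first-order) stationary point x = 0.
x = compressed_sgd(lambda x: x, np.array([3.0, -2.0, 1.0, 0.5]))
print(np.linalg.norm(x))
```

Even though each step transmits only k of the d coordinates, the largest gradient entries are always applied, so the iterate still drives the gradient norm toward zero on this toy problem.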

            Organizer

NeurIPS 2021

            About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops, which provide a less formal setting.


Recommended Videos

Presentations on a similar topic, category, or speaker:

• Local Signal Adaptivity: Provable Feature Learning in Neural Networks Beyond Kernels (13:22), Stefani Karp, … (NeurIPS 2021)
• Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning (14:31), Sen Cui, … (NeurIPS 2021)
• Towards a Framework for Data Excellence in Data-Centric AI: Lessons from the Semantic Web (02:16), Oshani Seneviratne, … (NeurIPS 2021)
• Panel Discussion 2 (59:12), Susan L. Epstein, … (NeurIPS 2021)
• Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning (08:08), Jinxin Liu, … (NeurIPS 2021)
• BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models (05:30), Nandan Thakur, … (NeurIPS 2021)