
            Acceleration for Compressed Gradient Descent in Distributed Optimization

            Jul 12, 2020

Speakers

Zhize Li

Speaker · 0 followers

Dmitry Kovalev

Speaker · 0 followers

Xun Qian

Speaker · 0 followers

About

            Due to the high communication cost in distributed and federated learning problems, methods relying on sparsification or quantization of communicated messages are becoming increasingly popular. While in other contexts the best performing gradient-type methods invariably rely on some form of acceleration to reduce the number of iterations, there are no methods which combine the benefits of both gradient compression and acceleration. In this paper, we remedy this situation and propose the first acc…
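To make the compressed-communication setting concrete, here is a minimal NumPy sketch of plain (non-accelerated) distributed gradient descent in which each worker sends an unbiased rand-k sparsification of its local gradient. Everything below is an illustrative assumption rather than the paper's method: the `rand_k` helper, the synthetic quadratic objective, and the step-size rule are all chosen for the sketch, and the acceleration the abstract announces would replace the plain gradient step with a Nesterov-type momentum update.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_k(v, k):
    """Unbiased rand-k sparsification: keep k random coordinates and
    rescale by d/k so that E[rand_k(v)] = v."""
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

# Toy distributed problem: each of n workers owns f_i(x) = 0.5*||A_i x - b_i||^2,
# and the goal is to minimize the average f(x) = (1/n) * sum_i f_i(x).
n, d, k = 4, 50, 5
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]

def grad_i(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Crude smoothness bound for f: max over the workers' Hessians A_i^T A_i.
L = max(np.linalg.norm(Ai.T @ Ai, 2) for Ai in A)
omega = d / k - 1              # rand-k variance: E||C(v)||^2 = (1 + omega)||v||^2
eta = 1.0 / ((1 + omega) * L)  # conservative step size accounting for compression noise

x = np.zeros(d)
for _ in range(5000):
    # Each worker compresses its local gradient before "sending" it; the
    # server averages the k-sparse messages and takes a plain gradient step.
    g = np.mean([rand_k(grad_i(i, x), k) for i in range(n)], axis=0)
    x -= eta * g

loss = np.mean([0.5 * np.linalg.norm(A[i] @ x - b[i]) ** 2 for i in range(n)])
print(f"average loss after compressed GD: {loss:.4f}")
```

With heterogeneous local functions, this plain scheme only reaches a neighborhood of the optimum, since the compression variance scales with the local gradients, which need not vanish at the solution; removing that error floor and combining compression with acceleration is the gap the abstract describes.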

Organizer


            ICML 2020

Account · 2.7k followers


About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

Like the format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, anywhere in the world.


Recommended videos

Presentations on similar topics, categories or speakers

Gradient Based Memory Editing for Task-Free Continual Learning
14:46

Xisen Jin, …

ICML 2020 · 5 years ago

Learning Optimal Tree Models under Beam Search
15:14

Jingwei Zhuo, …

ICML 2020 · 5 years ago

Historical perspective on extreme classification in language modeling
35:42

Tomas Mikolov

ICML 2020 · 5 years ago

Mining Documentation to Extract Hyperparameter Schemas
01:16

Guillaume Baudart, …

ICML 2020 · 5 years ago

Augmenting data to improve robustness - a blessing or a curse?
51:34

Fanny Yang, …

ICML 2020 · 5 years ago

ControlVAE: Controllable Variational Autoencoder
14:21

Huajie Shao, …

ICML 2020 · 5 years ago

Interested in talks like this? Follow ICML 2020.