
Rethinking gradient sparsification as total error minimization

Dec 6, 2021

Speakers

Atal Narayan Sahu
Speaker · 0 followers

Aritra Dutta
Speaker · 0 followers

Ahmed M. Abdelmoniem
Speaker · 0 followers

About

Gradient compression is a widely established remedy for the communication bottleneck in distributed training of large deep neural networks (DNNs). Under the error-feedback framework, Top-k sparsification, sometimes with k as little as 0.1% of the gradient size, enables training to the same model quality as the uncompressed case for a similar iteration count. We find that, from the optimization perspective, Top-k is the communication-optimal sparsifier given a per-iteration k-element budget…
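To make the setup the abstract describes concrete, here is a minimal single-worker sketch of Top-k sparsification under error feedback, in plain NumPy. The function names (top_k_sparsify, ef_step) and the step-size and budget defaults are illustrative assumptions, not the paper's implementation; in actual distributed training only the sparse tensor would be communicated between workers.

import numpy as np

def top_k_sparsify(grad, k):
    # Keep the k largest-magnitude entries of grad, zero out the rest.
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # k largest by |value|
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

def ef_step(params, grad, memory, lr=0.1, k_ratio=0.001):
    # One error-feedback step: compress the error-corrected gradient,
    # apply the sparse update, and keep the residual for the next iteration.
    k = max(1, int(k_ratio * grad.size))        # e.g. 0.1% of the gradient size
    corrected = grad + memory                   # re-inject previously dropped mass
    compressed = top_k_sparsify(corrected, k)   # only this would be sent over the wire
    memory = corrected - compressed             # residual stays on the worker
    params = params - lr * compressed
    return params, memory

The residual ("memory") accumulates whatever Top-k discards, so dropped gradient information is not lost, only delayed; this is the usual intuition for why such aggressive sparsification can still match uncompressed training quality.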

Organizer

NeurIPS 2021
Account · 1.9k followers

About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like the format? Trust SlidesLive to capture your next event!

Professional recording and live streaming, worldwide.

Recommended Videos

Presentations with a similar topic, category, or speaker

A Faster Maximum Cardinality Matching Algorithm with Applications in Machine Learning
14:49 · Nathaniel Lahn, … · NeurIPS 2021 · 3 years ago

Adversarially Robust 3D Point Cloud Recognition Using Self-Supervisions
13:15 · Jiachen Sun, … · NeurIPS 2021 · 3 years ago

On the Expected Complexity of Maxout Networks
10:33 · Hanna Tseran, … · NeurIPS 2021 · 3 years ago

Distributed Principal Component Analysis with Limited Communication
14:25 · Foivos Alimisis, … · NeurIPS 2021 · 3 years ago

Opening remarks
07:36 · Shiori Sagawa · NeurIPS 2021 · 3 years ago

Testing Probabilistic Circuits
09:20 · Yash Pote, … · NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021