            Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees

            Nov 28, 2022

            Speakers

            Aleksandr Beznosikov
            Speaker · 0 followers

            Peter Richtárik
            Speaker · 0 followers

            Michael Diskin
            Speaker · 0 followers

            About

            Variational inequalities in general and saddle point problems in particular are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport and robust optimization. With increasing data and problem sizes necessary to train high performing models across various applications, we need to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck during training, and this problem…
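
            As background for the problem class named above, the standard variational inequality formulation and its saddle point special case are sketched below; the operator F, the set \mathcal{X}, and the function f are generic symbols, not notation taken from this talk.

            \[
              \text{find } x^\star \in \mathcal{X} \quad \text{such that} \quad
              \langle F(x^\star),\, x - x^\star \rangle \ge 0 \quad \text{for all } x \in \mathcal{X}.
            \]

            A smooth saddle point problem \(\min_{u} \max_{v} f(u, v)\) is the special case obtained by taking \(x = (u, v)\) and
            \[
              F(x) = \big( \nabla_u f(u, v),\; -\nabla_v f(u, v) \big).
            \]

            In this line of work, "compressed communication" generally means that each compute node transmits a quantized or sparsified encoding of its local estimate of F rather than the full vector, which is what reduces the communication bottleneck mentioned above.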

            Organizer

            NeurIPS 2022
            Account · 961 followers

            Recommended Videos

            Presentations on similar topic, category or speaker

            BagFlip: A Certified Defense Against Data Poisoning
            05:00
            Yuhao Zhang, …
            NeurIPS 2022 · 2 years ago

            The Implicit Delta Method
            03:44
            Nathan Kallus, …
            NeurIPS 2022 · 2 years ago

            Distributed Differential Privacy in Multi-Armed Bandits
            09:21
            Xingyu Zhou, …
            NeurIPS 2022 · 2 years ago

            CEDe: A collection of expert-curated datasets with atom-level entity annotations for Optical Chemical Structure Recognition
            04:49

            Fine-Grained Analysis of Stability and Generalization for Modern Meta Learning Algorithms
            01:03
            Jiechao Guan, …
            NeurIPS 2022 · 2 years ago

            Truly Deterministic Policy Optimization
            04:19
            Ehsan Saleh, …
            NeurIPS 2022 · 2 years ago
