            GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Models

            Nov 28, 2022

Speakers

Wenbo Su

Yuanxing Zhang

Yufeng Cai

            About

High-concurrency asynchronous training upon parameter server (PS) architecture and high-performance synchronous training upon all-reduce (AR) architecture are the most commonly deployed distributed training modes for recommender systems. Although synchronous AR training is designed for higher training efficiency, asynchronous PS training can be the better choice for training speed when there are stragglers (slow workers) in the shared cluster, especially under limited computing resour…
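The straggler tradeoff described in the abstract can be sketched with a toy throughput model. This is my own illustration, not code from the talk: worker speeds, the time budget, and the 4x straggler slowdown are made-up numbers, chosen only to show why synchronous all-reduce throughput is gated by the slowest worker while asynchronous parameter-server workers proceed at their own pace.

```python
def sync_steps(worker_step_times, budget):
    """Synchronous all-reduce: every global step waits for the slowest worker."""
    slowest = max(worker_step_times)
    return int(budget // slowest)

def async_steps(worker_step_times, budget):
    """Asynchronous parameter server: each worker pushes updates at its own pace."""
    return sum(int(budget // t) for t in worker_step_times)

# Four workers, one of which is a 4x straggler; 100 time units of training.
times = [1.0, 1.0, 1.0, 4.0]
budget = 100.0

print(sync_steps(times, budget))   # 25 global steps, gated by the straggler
print(async_steps(times, budget))  # 325 worker updates in total
```

The model ignores staleness: asynchronous updates are computed against stale parameters, which is the accuracy cost that motivates switching back to synchronous training when the cluster is healthy.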

Organizer

NeurIPS 2022


            Recommended Videos

Presentations on a similar topic, category, or speaker

Exact learning dynamics of deep linear networks with prior knowledge
04:59
Clémentine C J Dominé, …
NeurIPS 2022 · 2 years ago

Distributed Learning of Conditional Quantiles in RKHS
04:53
Heng Lian
NeurIPS 2022 · 2 years ago

Manifold learning via landmark diffusion and clinical applications
05:31
Yu-Ting Lin, …
NeurIPS 2022 · 2 years ago

Locally Constrained Representations in Reinforcement Learning
05:00
Somjit Nath, …
NeurIPS 2022 · 2 years ago

Sparse Hypergraph Community Detection Thresholds in Stochastic Block Model
05:16
Erchuan Zhang, …
NeurIPS 2022 · 2 years ago

Deconfounded Imitation Learning
05:31
Risto Vuorio, …
NeurIPS 2022 · 2 years ago

            Interested in talks like this? Follow NeurIPS 2022