            Get More at Once: Alternating Sparse Training with Gradient Correction

            Nov 28, 2022

Speakers

Li Yang
Speaker · 0 followers

Jian Meng
Speaker · 0 followers

Jae-sun Seo
Speaker · 0 followers

About

Recently, a new trend of exploring sparsity during training has emerged, which removes parameters as training proceeds and thereby improves both training and inference efficiency. This line of work primarily aims to obtain a single sparse model at a pre-defined, large sparsity ratio. The result is a static, fixed sparse inference model that cannot adjust or re-configure its computation complexity (i.e., inference structure, latency) after training for real-world varying and dynamic hardwar…
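
The sparse-training setup the abstract describes can be illustrated with a minimal PyTorch sketch, assuming a generic magnitude-based scheme: a binary mask zeroes out a pre-defined fraction of the weights, and both the weights and their gradients are masked at every step so the model trains sparsely from the start. This is a hypothetical illustration of the static, single-ratio baseline the abstract critiques, not the paper's alternating-training or gradient-correction method; `magnitude_mask`, the toy layer, and the random data are invented for the example.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Binary mask keeping the largest-magnitude (1 - sparsity) fraction of entries.
    k = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = weight.abs().flatten().topk(k).values.min()
    return (weight.abs() >= threshold).to(weight.dtype)

# Toy setup (hypothetical; stands in for a real model and data loader).
torch.manual_seed(0)
layer = nn.Linear(32, 10)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))

sparsity = 0.9  # pre-defined ratio: zero out 90% of the weights during training
for step in range(100):
    with torch.no_grad():
        mask = magnitude_mask(layer.weight, sparsity)
        layer.weight.mul_(mask)        # enforce the sparse pattern on the weights
    loss = criterion(layer(x), y)
    optimizer.zero_grad()
    loss.backward()
    layer.weight.grad.mul_(mask)       # mask gradients so pruned weights stay zero
    optimizer.step()
```

After training, roughly 90% of `layer.weight` is exactly zero. The limitation the abstract points out is that this ratio is fixed once training ends: the resulting model cannot be re-configured to other sparsity levels (and hence other latency budgets) at inference time.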

Organizer

NeurIPS 2022
Account · 961 followers


Recommended videos

Presentations with a similar topic, category, or speaker

Incentive-Aware Machine Learning: A Tale of Robustness, Fairness, Improvement, and Performativity
1:38:41
Chara Podimata
NeurIPS 2022 · 2 years ago

Unsupervised Learning of Group Invariant and Equivariant Representations
04:56
Robin Winter, …
NeurIPS 2022 · 2 years ago

Linear Convergence Analysis of Neural Collapse with Unconstrained Features
05:22
Peng Wang, …
NeurIPS 2022 · 2 years ago

A Deep Learning Journey
1:24:09
Yoshua Bengio
NeurIPS 2022 · 2 years ago

Provably Efficient Model-Free Constrained Reinforcement Learning Algorithm with Linear Function Approximation
05:02
Xingyu Zhou, …
NeurIPS 2022 · 2 years ago

TorchOpt: An Efficient Library for Differentiable Optimization
05:48
Jie Ren, …
NeurIPS 2022 · 2 years ago

Interested in talks like this? Follow NeurIPS 2022