            Doping: A technique for Extreme Compression of LSTM Models using Sparse Structured Additive Matrices

April 4, 2021

Speakers

Urmish Thakker
Speaker · 1 follower

Paul Whatmough
Speaker · 0 followers

Zhi-gang Liu
Speaker · 0 followers

About the presentation

            Structured matrices, such as those derived from Kronecker products (KP), are effective at compressing neural networks, but can lead to unacceptable accuracy loss when applied to large models. In this paper, we propose the notion of doping - addition of an extremely sparse matrix to a structured matrix. Doping facilitates additional degrees of freedom for a small number of parameters, allowing them to independently diverge from the fixed structure. To train LSTMs with doped structured matrices, w…
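The abstract's core idea can be sketched numerically: express a weight matrix as a Kronecker-product structured matrix plus an extremely sparse additive "doping" matrix. This is a minimal illustration, not the paper's implementation; the shapes, the ~1% density, and all variable names are assumptions chosen for the example.

```python
import numpy as np

# Sketch of "doping": W = kron(A, B) + S, where S is extremely sparse.
# Shapes, density, and names are illustrative assumptions, not the
# paper's actual configuration.
rng = np.random.default_rng(0)

A = rng.standard_normal((4, 4))   # small Kronecker factor (16 params)
B = rng.standard_normal((8, 8))   # small Kronecker factor (64 params)
W_structured = np.kron(A, B)      # 32x32 matrix from only 80 parameters

# Doping: a sparse matrix whose few nonzeros give a small number of
# entries the freedom to diverge from the fixed Kronecker structure.
density = 0.01                                  # ~1% free parameters
mask = rng.random(W_structured.shape) < density
S = rng.standard_normal(W_structured.shape) * mask

W = W_structured + S              # doped structured matrix

dense_params = W.size                             # 1024 for a dense 32x32
doped_params = A.size + B.size + int(mask.sum())  # ~90: factors + nonzeros
```

The compression comes from storing only the two small factors plus the sparse nonzeros, while the sparse term recovers accuracy lost to the rigid structure.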

Organizer

MLSys 2021
Account · 159 followers

Category

Computer Science and IT
Category · 14.8k presentations

About the organizer (MLSys 2021)

            The Conference on Machine Learning and Systems targets research at the intersection of machine learning and systems. The conference aims to elicit new connections amongst these fields, including identifying best practices and design principles for learning systems, as well as developing novel learning methods and theory tailored to practical machine learning workflows.


Recommended videos

Presentations on a similar topic, category, or speaker

Elliot: A Comprehensive and Rigorous Framework For Reproducible Recommender Systems Evaluation
14:56
MLSys 2021 · 4 years ago

Opening Remarks
04:48
Mu Li
MLSys 2021 · 4 years ago

QAPPA: Quantization-Aware Power, Performance, and Area Modeling of DNN Accelerators
13:51
Ahmet Inci, …
MLSys 2021 · 4 years ago

Bit Error Robustness for Energy-Efficient DNN Accelerators
01:53
David Stutz, …
MLSys 2021 · 4 years ago

Towards Scalable Distributed Training of Deep Learning on Public Cloud Clusters
05:02
Shaohuai Shi, …
MLSys 2021 · 4 years ago

Science to Fuel Neural Nets and TPU Design
40:20
Cliff Young
MLSys 2021 · 4 years ago

Interested in similar videos? Follow MLSys 2021