PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination
July 12, 2020

Speakers

Saurabh Goyal
Speaker · 0 followers

Anamitra Roy Choudhury
Speaker · 0 followers

Saurabh M. Raje
Speaker · 0 followers

About the presentation

We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model while maintaining its accuracy. It works by (a) exploiting redundancy in the word-vectors (intermediate encoder outputs) and eliminating the redundant vectors; (b) determining which word-vectors to eliminate via a strategy for measuring their significance, based on the self-attention mechanism; and (c) learning how many word-vectors to eliminate by augmenting the BERT model and t…
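The significance idea in step (b) can be sketched with plain attention sums: score each position by the total attention it receives across heads and queries, then drop the lowest-scoring word-vectors while preserving the order of the survivors. The following NumPy snippet is a minimal illustration of that idea, not the paper's exact implementation; the function names and toy shapes are assumptions.

```python
import numpy as np

def significance_scores(attn):
    """Score each position by the total attention it receives.

    attn: array of shape (num_heads, seq_len, seq_len), where
    attn[h, i, j] is the attention paid by query i to key j in head h.
    Returns one score per position, shape (seq_len,).
    """
    return attn.sum(axis=(0, 1))

def eliminate(word_vectors, attn, keep):
    """Keep the `keep` most significant word-vectors, preserving order.

    word_vectors: array of shape (seq_len, hidden_dim).
    """
    scores = significance_scores(attn)
    kept = np.sort(np.argsort(scores)[-keep:])  # top-k indices, in original order
    return word_vectors[kept]

# Toy example: 4 positions, 2 heads, hidden size 8.
rng = np.random.default_rng(0)
vecs = rng.standard_normal((4, 8))
attn = rng.random((2, 4, 4))
attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax
reduced = eliminate(vecs, attn, keep=2)
print(reduced.shape)  # (2, 8)
```

Because the sequence shrinks after each elimination, later encoder layers process fewer word-vectors, which is where the inference speedup comes from.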

Organizer


            ICML 2020

Account · 2.7k followers

Category

AI and Data Science

Category · 10.8k presentations

About the organizer (ICML 2020)

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended videos

Presentations on a similar topic, in the same category, or by the same speaker

Debiasing Evaluations That are Biased by Evaluations (12:12)
Jingyan Wang, … · ICML 2020, 5 years ago

T-Basis: a Compact Representation for Neural Networks (09:38)
Anton Obukhov, … · ICML 2020, 5 years ago

Ridge Riding: Finding diverse solutions by following eigenvectors of the Hessian (10:53)
Jack Parker-Holder, … · ICML 2020, 5 years ago

Learning Structured Latent Factors from Dependent Data: A Generative Model Framework from Information-Theoretic Perspective (14:46)
Ruixiang Zhang, … · ICML 2020, 5 years ago

Parameter-Free, Dynamic, and Strongly-Adaptive Online Learning (14:58)
Ashok Cutkosky · ICML 2020, 5 years ago

Optimistic Policy Optimization with Bandit Feedback (14:07)
Lior Shani, … · ICML 2020, 5 years ago

Interested in similar videos? Follow ICML 2020