            PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination

            Jul 12, 2020

            Speakers

            Saurabh Goyal

            Speaker · 0 followers

            Anamitra Roy Choudhury

            Speaker · 0 followers

Saurabh M. Raje

            Speaker · 0 followers

            About

We develop a novel method, called PoWER-BERT, for improving the inference time of the popular BERT model while maintaining its accuracy. It works by: a) exploiting redundancy pertaining to word-vectors (intermediate encoder outputs) and eliminating the redundant vectors; b) determining which word-vectors to eliminate by developing a strategy for measuring their significance, based on the self-attention mechanism; c) learning how many word-vectors to eliminate by augmenting the BERT model and t…
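
            To make the elimination step above concrete, here is a minimal PyTorch sketch of one way to score word-vectors by the attention they receive and drop the least significant ones after an encoder layer. The function name, tensor shapes, and the fixed keep count are illustrative assumptions, not the authors' implementation; in PoWER-BERT the number of vectors retained per layer is itself learned.

            # Minimal sketch (not the authors' released code) of attention-based
            # significance scoring and word-vector elimination.
            import torch

            def prune_word_vectors(hidden, attn_probs, keep):
                """Drop the least significant word-vectors after an encoder layer.

                hidden:     (batch, seq_len, dim)            encoder outputs
                attn_probs: (batch, heads, seq_len, seq_len) softmaxed attention weights
                keep:       number of word-vectors to retain (assumed given here)
                """
                # Significance of position j = total attention it receives,
                # summed over all heads and all query positions.
                significance = attn_probs.sum(dim=(1, 2))        # (batch, seq_len)
                kept = significance.topk(keep, dim=-1).indices   # most significant positions
                kept, _ = kept.sort(dim=-1)                      # preserve original word order
                batch_idx = torch.arange(hidden.size(0)).unsqueeze(-1)
                return hidden[batch_idx, kept]                   # (batch, keep, dim)

            # Toy usage: shrink an 8-token layer output down to 5 word-vectors.
            h = torch.randn(2, 8, 16)
            a = torch.softmax(torch.randn(2, 4, 8, 8), dim=-1)
            print(prune_word_vectors(h, a, keep=5).shape)        # torch.Size([2, 5, 16])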

            Organizer

            ICML 2020

            Account · 2.7k followers

            Categories

            AI & Data Science

            Category · 10.8k presentations

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

            Recommended Videos

            Presentations on similar topic, category or speaker

Ledidi: Designing genomic edits that induce functional activity · 05:15
            Jacob Schreiber, …
            ICML 2020 · 5 years ago

Learning Long-term Dependencies Using Cognitive Inductive Biases in Self-attention RNNs · 05:37
            Bhargav Kanuparthi, …
            ICML 2020 · 5 years ago

Prediction of Adverse Event on Drug-Drug Combination using Graph Embedding · 05:02
            Ankita Saha, …
            ICML 2020 · 5 years ago

Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning · 15:43
            Dipendra Misra, …
            ICML 2020 · 5 years ago

Universal Average-Case Optimality of Polyak Momentum · 13:20
            Damien Scieur, …
            ICML 2020 · 5 years ago

Negative Dependence and Sampling · 38:15
            Stefanie Jegelka
            ICML 2020 · 5 years ago
