
            Long-Short Transformer: Efficient Transformers for Language and Vision

            Dec 6, 2021

Speakers

Chen Zhu

Wei Ping

Chaowei Xiao

            About

Transformers have achieved success in both the language and vision domains. However, it is prohibitively expensive to scale them to long sequences such as long documents or high-resolution images, because the self-attention mechanism has quadratic compute-time and memory complexity with respect to the input sequence length. In this paper, we propose Long-Short Transformer (Transformer-LS), an efficient transformer architecture for modeling long sequences with linear complexity for both language and vi…
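The linear-complexity idea described in the abstract can be illustrated with a minimal sketch: each query attends only to keys in a short local window plus a small number of compressed "global" keys obtained by projecting the full sequence down to r vectors, so the per-query cost is O(window + r) rather than O(n). This is an illustrative approximation based on the abstract, not the authors' implementation; the projection here is random rather than learned, and all names (`long_short_attention`, `window`, `r`) are placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def long_short_attention(Q, K, V, window=4, r=8):
    """Sketch of combined short-term (local window) and long-range
    (low-rank projected) attention. Cost per query is O(window + r)
    instead of O(n), giving linear overall complexity."""
    n, d = Q.shape
    # Long-range branch: compress n keys/values down to r rows via a
    # projection (random here purely for illustration; learned in practice).
    P = softmax(np.random.randn(r, n), axis=-1)   # (r, n) mixing weights
    K_bar, V_bar = P @ K, P @ V                   # (r, d) compressed keys/values
    out = np.zeros_like(V)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        K_cat = np.vstack([K[lo:hi], K_bar])      # local + global keys
        V_cat = np.vstack([V[lo:hi], V_bar])      # local + global values
        w = softmax(Q[i] @ K_cat.T / np.sqrt(d))  # attention over <= 2w+1+r slots
        out[i] = w @ V_cat
    return out
```

The loop makes the per-query attention span explicit; a real implementation would batch the windowed and projected branches into a handful of matrix multiplications.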

            Organizer


            NeurIPS 2021


            Categories

            AI & Data Science


            About NeurIPS 2021

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops that provide a less formal setting.


Recommended Videos

Presentations on a similar topic, category, or by the same speakers:

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies · Tim Seyde, … · NeurIPS 2021 · 06:48

From Optimality to Robustness: Adaptive Re-Sampling Strategies in Stochastic Bandits · Dorian Baudry, … · NeurIPS 2021 · 11:57

CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation · Yusuke Tashiro, … · NeurIPS 2021 · 11:30

Unifying Width-Reduced Methods for Quasi-Self-Concordant Optimization · Deeksha Adil, … · NeurIPS 2021 · 12:15

Pointwise Bounds for Distribution Estimation under Communication Constraints · Wei-Ning Chen, … · NeurIPS 2021 · 13:44

Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion · Tong Wu, … · NeurIPS 2021 · 14:16
