
            Multi-timescale Representation Learning in LSTM Language Models

May 3, 2021

Speakers

Shivangi Mahto
Speaker · 0 followers

Vy Ai Vo
Speaker · 0 followers

Javier S. Turek
Speaker · 0 followers

About

            Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law. However, it is unclear how this knowledge can be used for analyzing or designing neural network language models. In this work, we derived a theory for how the memory gating mechanism in long short-term memory (LSTM) language models can capture…
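The abstract alludes to how an LSTM's forget gate controls the timescale over which a unit retains information. As a rough illustration only, not taken from the talk itself, the sketch below uses the standard approximation that a unit whose forget gate stays near a constant value f decays stored information like f^t, so its effective timescale is roughly T ≈ -1/ln(f); the function names are hypothetical.

```python
import numpy as np

# Minimal sketch (assumption, not the authors' derivation): if an LSTM unit's
# forget gate stays near a constant value f in (0, 1), its cell state decays
# like f**t, giving an effective memory timescale of roughly T = -1 / ln(f).

def timescale_from_forget_gate(f: float) -> float:
    """Approximate timescale (in tokens) of a unit whose forget gate ~= f."""
    return -1.0 / np.log(f)

def forget_bias_for_timescale(T: float) -> float:
    """Forget-gate bias b with sigmoid(b) = exp(-1/T), so the unit 'remembers'
    over roughly T tokens (assuming small gate pre-activations otherwise)."""
    f = np.exp(-1.0 / T)           # target gate value for timescale T
    return np.log(f / (1.0 - f))   # inverse sigmoid (logit)

if __name__ == "__main__":
    for f in (0.5, 0.9, 0.99):
        print(f"forget gate {f:.2f} -> timescale ~{timescale_from_forget_gate(f):.1f} tokens")
    # A mix of short- and long-timescale units could be obtained by drawing
    # target timescales T from a heavy-tailed distribution and setting biases:
    print(f"bias for T = 100 tokens: {forget_bias_for_timescale(100):.2f}")
```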

Organizer


            ICLR 2021

Account · 911 followers

About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Like this format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.


Recommended Videos

Presentations with a similar topic, category, or speaker

Representing Partial Programs with Blended Abstract Semantics
05:02
Maxwell Nye, …
ICLR 2021 · 4 years ago

Voice2Series: Reprogramming Acoustic Models for Time Series Classification
06:01
Huck Yang, …
ICLR 2021 · 4 years ago

One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks
05:18
Atish Agarwala, …
ICLR 2021 · 4 years ago

Oral Session 12 - QA 1
12:43
Colin Wei, …
ICLR 2021 · 4 years ago

Towards Creating Models that People Can Use
28:00
Finale Doshi-Velez
ICLR 2021 · 4 years ago

Rethinking Embedding Coupling in Pre-trained Language Models
05:12
Hyung Won Chung, …
ICLR 2021 · 4 years ago

Interested in talks like this? Follow ICLR 2021