            Understanding and Improving Lexical Choice in Non-Autoregressive Translation

May 3, 2021

Speakers

Liang Ding

Speaker · 1 follower

Longyue Wang

Speaker · 1 follower

Xuebo Liu

Speaker · 0 followers

About

            Knowledge distillation (KD) is essential for training non-autoregressive translation (NAT) models by reducing the complexity of the raw data with an autoregressive teacher model. In this study, we empirically show that as a side effect of this training, the lexical choice errors on low-frequency words are propagated to the NAT model from the teacher model. To alleviate this problem, we propose to expose the raw data to NAT models to restore the useful information of low-frequency words, which ar…
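As a rough illustration of the idea in the abstract, the sketch below builds a sequence-level distilled corpus: each reference translation is replaced by the teacher's output, except that sentence pairs containing low-frequency target words are kept from the raw data. This is a toy stand-in, not the paper's actual method; the function name, the frequency threshold, and the `teacher_translate` callable are all assumptions for illustration.

```python
from collections import Counter

def build_distilled_corpus(raw_pairs, teacher_translate, freq_threshold=2):
    """Toy sequence-level KD with raw-data exposure.

    raw_pairs: list of (source, target) sentence strings.
    teacher_translate: callable mapping a source sentence to the
        autoregressive teacher's translation (hypothetical interface).
    Pairs whose target contains a word seen at most `freq_threshold`
    times in the raw corpus keep their raw reference, so the NAT
    student still sees the lexical choices on rare words.
    """
    # Count target-side word frequencies over the raw corpus.
    freq = Counter(w for _, tgt in raw_pairs for w in tgt.split())
    distilled = []
    for src, tgt in raw_pairs:
        has_rare = any(freq[w] <= freq_threshold for w in tgt.split())
        # Keep the raw reference for rare-word sentences;
        # otherwise use the (simpler) teacher output.
        distilled.append((src, tgt if has_rare else teacher_translate(src)))
    return distilled
```

With a trivial teacher that always outputs "a dog", a sentence containing the rare word "okapi" would retain its raw reference while frequent sentences are replaced by the teacher's output.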

Organizer

ICLR 2021

Account · 896 followers

About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Like the format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming – worldwide.


Recommended Videos

Presentations with a similar topic, category, or speaker

SIMPLE SPECTRAL GRAPH CONVOLUTION
05:06

Hao Allen Zhu, …

ICLR 2021 · 4 years ago

Implicit Normalizing Flows
08:03

Cheng Lu, …

ICLR 2021 · 4 years ago

Learning Invariant Representations for Reinforcement Learning without Reconstruction
14:36

Amy Zhang, …

ICLR 2021 · 4 years ago

Adversarial attacks on models for computer programs
06:27

Shashank Srikant, …

ICLR 2021 · 4 years ago

A Machine Learning Model for Predicting Deterioration of COVID-19 Inpatients
14:22

Omer Noy, …

ICLR 2021 · 4 years ago

ResNet After All: Neural ODEs and Their Numerical Solution
05:10

Katharina Ott, …

ICLR 2021 · 4 years ago

Interested in talks like this? Follow ICLR 2021