PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

Jul 12, 2020

Speakers

Jingqing Zhang

Yao Zhao

Mohammad Saleh

About

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised…
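
To make the gap-sentence idea in the title concrete, here is a minimal Python sketch, not the authors' implementation: whole sentences judged important are removed from a document, and the model is trained to generate them from what remains. The sentence splitter, the unigram-overlap importance score (a crude stand-in for the paper's ROUGE-based sentence selection), the `<mask_1>` token name, and the `gap_ratio` value are all illustrative assumptions.

```python
import re

MASK_TOKEN = "<mask_1>"  # illustrative sentence-level mask token

def split_sentences(text):
    """Very rough sentence splitter; a real pipeline would use a proper tokenizer."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def importance(sentence, others):
    """Crude unigram-overlap score of a sentence against the rest of the document,
    standing in for the ROUGE-based scoring used to pick 'gap' sentences."""
    s = set(sentence.lower().split())
    o = set(" ".join(others).lower().split())
    return len(s & o) / max(len(s), 1)

def make_gap_sentence_pair(document, gap_ratio=0.3):
    """Build one (input, target) pre-training pair: the highest-scoring sentences
    are removed from the document and concatenated as the generation target;
    the input keeps a mask token in each gap."""
    sents = split_sentences(document)
    n_gap = max(1, int(len(sents) * gap_ratio))
    scores = [importance(s, sents[:i] + sents[i + 1:]) for i, s in enumerate(sents)]
    gap_ids = set(sorted(range(len(sents)), key=lambda i: -scores[i])[:n_gap])
    model_input = " ".join(MASK_TOKEN if i in gap_ids else s
                           for i, s in enumerate(sents))
    target = " ".join(sents[i] for i in sorted(gap_ids))
    return model_input, target
```

On a short multi-sentence paragraph, make_gap_sentence_pair returns the paragraph with one sentence replaced by <mask_1> as the encoder input and that removed sentence as the decoder target, mimicking an abstractive-summarization-style generation task during pre-training.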

Organizer

ICML 2020

Categories

AI & Data Science

About ICML 2020

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended Videos

Presentations on a similar topic, category, or speaker

Self-Attentive Hawkes Process · 14:20 · Qiang Zhang, … · ICML 2020

Implicit competitive regularization in GANs · 15:06 · Florian Schaefer, … · ICML 2020

Scalable Differentiable Physics for Learning and Control · 11:13 · Yi-Ling Qiao, … · ICML 2020

Panel Discussion Q&A · Sungjin Ahn, … · ICML 2020

An Off-policy Policy Gradient Theorem A Tale About Weightings · 08:49 · Martha White · ICML 2020

Searching to Exploit Memorization Effect in Learning from Noisy Labels · 12:24 · Quanming Yao, … · ICML 2020
