
            Theoretical Analysis of Self-Training on Unlabeled Data

            May 3, 2021

            Speakers

            Colin Wei · Speaker
            Kendrick Shen · Speaker
            Yining Chen · Speaker

            About

            Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a si…
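
            The loop below is a minimal sketch of the pseudolabeling procedure the abstract describes: a previously-learned model predicts pseudolabels for unlabeled data, and a new model is trained to fit them. The paper's analysis concerns deep networks; the scikit-learn classifier, the confidence threshold, and the round count used here are illustrative assumptions only, not details from the paper.

            ```python
            # Minimal self-training sketch: pseudolabel confident unlabeled points,
            # enlarge the training set, refit, and repeat. LogisticRegression stands
            # in for the deep network studied in the paper; threshold and round count
            # are illustrative assumptions.
            import numpy as np
            from sklearn.linear_model import LogisticRegression

            def self_train(X_labeled, y_labeled, X_unlabeled, rounds=5, confidence=0.9):
                """Fit a model, pseudolabel confident unlabeled points, refit, repeat."""
                X_train, y_train = np.asarray(X_labeled), np.asarray(y_labeled)
                X_pool = np.asarray(X_unlabeled)
                model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

                for _ in range(rounds):
                    if len(X_pool) == 0:
                        break
                    probs = model.predict_proba(X_pool)          # teacher predictions
                    keep = probs.max(axis=1) >= confidence       # keep confident pseudolabels only
                    if not keep.any():
                        break
                    pseudo_y = model.classes_[probs[keep].argmax(axis=1)]
                    X_train = np.vstack([X_train, X_pool[keep]])  # add pseudolabeled points
                    y_train = np.concatenate([y_train, pseudo_y])
                    X_pool = X_pool[~keep]
                    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # student refit
                return model
            ```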

            Organizer

            ICLR 2021 · 898 followers

            Categories

            AI & Data Science · 10.8k presentations

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.


            Recommended Videos

            Presentations on similar topic, category or speaker

            Boosting Classification Accuracy of Fertile Sperm Cell Images leveraging cDCGAN · 04:09 · Dipam Paul, … · ICLR 2021
            A Multi-Objective Perspective on Tuning Automatically Hardware and Hyperparameters · 03:10 · David Salinas, … · ICLR 2021
            Ensembles of GANs for synthetic training data generation · 09:57 · Gabriel Eilertsen, … · ICLR 2021
            Understanding and Improving Lexical Choice in Non-Autoregressive Translation · 11:37 · Liang Ding, … · ICLR 2021
            Transformer Protein Language Models are Unsupervised Structure Learners · 05:41 · Roshan Rao, … · ICLR 2021
            Beyond Static Papers: Rethinking How We Share Scientific Understanding in ML · 6:21:46 · ICLR 2021
