
            Self-training For Few-shot Transfer Across Extreme Task Differences

            May 3, 2021

            Speakers

            Cheng Phoo

            Bharath Hariharan

            About

            Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In thi…
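The (truncated) abstract points to self-training on unlabeled target-domain data as the way to bridge extreme source/target gaps. As a hypothetical, minimal sketch of the generic self-training loop (a nearest-centroid toy, not the authors' actual procedure; all names and numbers below are illustrative):

```python
import numpy as np

def pseudo_label(centroids, unlabeled_x):
    """Label each unlabeled point with the index of its nearest centroid."""
    dists = np.linalg.norm(unlabeled_x[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def self_train(teacher_centroids, unlabeled_x, n_rounds=3):
    """Adapt source-trained centroids to the target domain via pseudo-labels."""
    centroids = teacher_centroids.astype(float).copy()
    for _ in range(n_rounds):
        labels = pseudo_label(centroids, unlabeled_x)  # "teacher" guesses labels
        for c in range(len(centroids)):                # "student" refit step
            members = unlabeled_x[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

# Toy target domain: two tight clusters, far from the teacher's source-trained guesses.
rng = np.random.default_rng(0)
target = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                    rng.normal(5.0, 0.1, (50, 2))])
teacher = np.array([[1.0, 1.0], [4.0, 4.0]])  # centroids learned on the source domain
adapted = self_train(teacher, target)
```

Each round, the current model pseudo-labels the unlabeled target points and is then refit on those labels, pulling the source-trained centroids toward the target distribution without any target-domain annotations.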

            Organizer

            ICLR 2021

            Categories

            AI & Data Science

            About ICLR 2021

            The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning. ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

            Recommended Videos

Presentations on a similar topic, category, or speaker

Regularized Inverse Reinforcement Learning (09:50)
Wonseok Jeon, … · ICLR 2021

Towards Causal Federated Learning For Enhanced Robustness and Privacy (05:11)
Sreya Francis, … · ICLR 2021

Opening remarks (03:04)
Sarah Bechtle · ICLR 2021

Scalable Learning and MAP Inference for Nonsymmetric DPPs (15:15)
Mike Gartrell, … · ICLR 2021

The Role of Momentum Parameters in the Optimal Convergence of Adaptive Polyak's Heavy-Ball Methods (05:16)
Wei Tao, … · ICLR 2021

HyperDynamics: Meta-Learning Object and Agent Dynamics with HyperNetworks (05:46)
Zhou Xian, … · ICLR 2021
