            Cycle Self-Training for Domain Adaptation

            Dec 6, 2021

            Speakers

            Hong Liu

            Jianmin Wang

            Mingsheng Long

            About

            Mainstream approaches for unsupervised domain adaptation (UDA) learn domain-invariant representations to narrow the domain shift. Recently, self-training, which exploits unlabeled target data by training with target pseudo-labels, has been gaining momentum in UDA. However, as corroborated in this work, under the distributional shift of UDA the pseudo-labels can be unreliable, with a large discrepancy from the target ground truth. To this end, we propose Cycle Self-Training (CST), a principled self…
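            The abstract refers to the standard pseudo-label self-training baseline. As a rough illustration of that baseline (not the paper's CST algorithm), the sketch below shows one training step that combines a supervised source loss with a loss on confident target pseudo-labels; the model, optimizer, batches, and confidence threshold tau are assumptions for the example, written in PyTorch-style Python.

```python
# Minimal sketch of the vanilla self-training baseline described in the
# abstract (training on confident target pseudo-labels). This is NOT the
# paper's Cycle Self-Training algorithm; names and the threshold are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, source_batch, target_batch, tau=0.95):
    """One update on labeled source data plus confident target pseudo-labels."""
    xs, ys = source_batch          # labeled source inputs and labels
    xt = target_batch              # unlabeled target inputs

    # Supervised loss on the labeled source domain.
    loss = F.cross_entropy(model(xs), ys)

    # Generate pseudo-labels on the target domain without tracking gradients.
    with torch.no_grad():
        probs = F.softmax(model(xt), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= tau         # keep only confident predictions

    # Add a loss on the confident pseudo-labels; under domain shift these
    # labels can be unreliable, which is the failure mode the paper studies.
    if mask.any():
        loss = loss + F.cross_entropy(model(xt[mask]), pseudo[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```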

            Organizer

            NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

            Recommended Videos

            Presentations on a similar topic, category, or speaker

            Collective Intelligence of Army Ants and the Robots They Inspire
            1:21:28
            Radhika Nagpal
            NeurIPS 2021

            RedCaps: Web-curated image-text data created by the people, for the people
            05:06
            Karan Desai, …
            NeurIPS 2021

            Lexical Pragmatics in the Wild: The Case of Complement Coercion
            13:22
            Frederick Gietz, …
            NeurIPS 2021

            Class-Disentanglement and Applications in Adversarial Detection and Defense
            12:37
            Kaiwen Yang, …
            NeurIPS 2021

            SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement
            11:23
            Heyang Qin, …
            NeurIPS 2021

            Artsheets for Art Datasets
            03:27
            Ramya Srinivasan, …
            NeurIPS 2021
