            Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

Dec 6, 2021

Speakers

Akshay Mehra

Speaker · 0 followers

Jihun Hamm

Speaker · 0 followers

Bhavya Kailkhura

Speaker · 0 followers

About

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from the target. However, UDA is not always successful and several accounts of "negative transfer" have been reported in the literature. In this work, we prove a simple lower bound on the target domain error that complements the existing upper bound. The bound shows the insufficiency of minimizing source domain error and…
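For context, the "existing upper bound" the abstract refers to is presumably the classical domain adaptation bound of Ben-David et al. (2010); the following sketch is an assumption about which bound is meant, and its notation (ε_S, ε_T, d_{HΔH}, λ) may differ from the talk's:

\[
\varepsilon_T(h) \;\le\; \varepsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}\!\left(\mathcal{D}_S, \mathcal{D}_T\right) \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \left[ \varepsilon_S(h') + \varepsilon_T(h') \right],
\]

where ε_S(h) and ε_T(h) are the source and target risks of a hypothesis h, D_S and D_T are the source and target distributions, and d_{HΔH} is the HΔH-divergence between them. Because this is only an upper bound, a small source error and a small measured divergence do not by themselves guarantee a small target error, which is consistent with the insufficiency result the abstract alludes to.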

Organizer


            NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like this format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.

Recommended Videos

Presentations similar in topic, category or speaker

Multi-Objective Meta Learning
12:27
Feiyang Ye, …
NeurIPS 2021 · 3 years ago

Automatic Data Quality Evaluation
01:57
Jiazheng Li
NeurIPS 2021 · 3 years ago

An Image is Worth More Than a Thousand Words: Towards Disentanglement in The Wild
10:20
Aviv Gabbay, …
NeurIPS 2021 · 3 years ago

Aboriginal Turing Test
59:22
Tyson Yunkaporta
NeurIPS 2021 · 3 years ago

Invited Speakers Panel
48:38
Sham M. Kakade, …
NeurIPS 2021 · 3 years ago

Using Random Effects to Account for High-Cardinality Categorical Features and Repeated Measures in Deep Neural Networks
13:33
Giora Simchoni, …
NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021