
            Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning

            Dec 6, 2021

Speakers

Timo Milbich

Karsten Roth

Samarth Sinha

            About

            Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML bench…
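The split-construction idea from the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' ooDML procedure: it ranks held-out classes by centroid distance to an anchor "training" class in a toy embedding space and carves off test splits of increasing distributional shift.

```python
import math
import random

random.seed(0)

# Toy setup: 10 classes, each represented by a centroid in R^8.
class_means = [[random.gauss(0.0, 1.0) for _ in range(8)] for _ in range(10)]

def dist(a, b):
    """Euclidean distance between two centroids."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Treat class 0 as the training anchor; rank the remaining classes by
# centroid distance -- a crude proxy for distribution-shift difficulty.
anchor = class_means[0]
dists = {c: dist(class_means[c], anchor) for c in range(1, 10)}
order = sorted(dists, key=dists.get)

# Carve three test splits whose classes lie progressively farther
# from the training distribution.
splits = {"easy": order[:3], "medium": order[3:6], "hard": order[6:]}
for name, test_classes in splits.items():
    mean_shift = sum(dists[c] for c in test_classes) / len(test_classes)
    print(f"{name}: test classes {test_classes}, mean shift {mean_shift:.2f}")
```

In the actual benchmark the notion of distance would come from learned embeddings of real datasets rather than synthetic centroids, but the principle is the same: evaluation splits are ordered along a controllable axis of difficulty instead of a single random assignment.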

Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

            Presentations on similar topic, category or speaker

Data science for healthcare in academia and government (46:52)
Katherine Baicker, … · NeurIPS 2021

Momentum Centering and Asynchronous Update for Adaptive Gradient Methods (13:26)
Juntang Zhuang, … · NeurIPS 2021

Dataset and Benchmark Track 3 (1:38:22)
NeurIPS 2021

Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis (12:59)
Atsushi Nitanda, … · NeurIPS 2021

Interpolation can hurt robust generalization even when there is no noise (09:40)
Alexandru Tifrea, … · NeurIPS 2021

Discussion Panel (1:14:39)
Xiao-Yang Liu, … · NeurIPS 2021
