DTG-SSOD: Dense Teacher Guidance for Semi-Supervised Object Detection

Nov 28, 2022

Speakers

Gang Li

Xiang Li

Yujie Wang

About

The Mean-Teacher (MT) scheme is widely adopted in semi-supervised object detection (SSOD). In MT, sparse pseudo labels, offered by the final predictions of the teacher (e.g., after Non-Maximum Suppression (NMS) post-processing), are used as dense supervision for the student via hand-crafted label assignment. However, this "sparse-to-dense" paradigm complicates the SSOD pipeline and simultaneously neglects the powerful direct, dense teacher supervision. In this paper, we attempt to dir…
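As a rough illustration of the baseline pipeline the abstract describes, the sketch below shows the "sparse" half of the conventional sparse-to-dense Mean-Teacher loop: the teacher's dense raw predictions are confidence-filtered and run through NMS to obtain sparse pseudo labels, and the teacher itself tracks the student by exponential moving average. The function names, thresholds, and toy tensors are illustrative assumptions for this page, not the DTG-SSOD method proposed in the talk, which instead supervises the student directly with dense teacher outputs.

import torch
from torchvision.ops import nms

# Illustrative sketch of the conventional Mean-Teacher pipeline described above.
# score_thr, iou_thr, and the toy tensors are assumptions, not the talk's values.

@torch.no_grad()
def sparse_pseudo_labels(boxes, scores, labels, score_thr=0.9, iou_thr=0.5):
    """Turn a teacher's dense raw predictions into sparse pseudo labels:
    confidence filtering followed by NMS, i.e. the teacher's final predictions."""
    keep = scores > score_thr
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    keep = nms(boxes, scores, iou_thr)
    return boxes[keep], labels[keep]

def ema_update(teacher, student, momentum=0.999):
    """Mean-Teacher update: teacher weights track the student via an
    exponential moving average of its parameters."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

if __name__ == "__main__":
    # Toy teacher output: two overlapping confident boxes and one low-score box.
    boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
    scores = torch.tensor([0.95, 0.92, 0.30])
    labels = torch.tensor([1, 1, 2])
    pseudo_boxes, pseudo_labels = sparse_pseudo_labels(boxes, scores, labels)
    # NMS suppresses the duplicate and the low-score box is filtered out,
    # leaving a single sparse pseudo box/label.
    print(pseudo_boxes, pseudo_labels)

In the baseline scheme, these sparse labels are then expanded back into dense per-anchor targets through hand-crafted label assignment before supervising the student; that indirection is exactly what the talk argues can be avoided.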

            Organizer

NeurIPS 2022

            Recommended Videos

Presentations on a similar topic, category, or speaker

On Learning Fairness and Accuracy on Multiple Subgroups · 01:10
Changjian Shui, …
NeurIPS 2022

ShuffleMixer: An Efficient ConvNet for Image Super-Resolution · 04:42
Long Sun, …
NeurIPS 2022

Kernel similarity matching with Hebbian networks · 05:01
Kyle Luther, …
NeurIPS 2022

Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study · 04:19
Yongtao Wu, …
NeurIPS 2022

Mirror Descent with Relative Smoothness in Measure Spaces, with application to Sinkhorn and Expectation-Maximization (EM) · 04:59
Pierre-Cyril Aubin-Frankowski, …
NeurIPS 2022

Statelessness in Asylum Data · 05:54
Kristin Kaltenhauser, …
NeurIPS 2022
