
            Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models

            Nov 28, 2022

Speakers

Yang Shu

Zhangjie Cao

Ziyang Zhang

About

            Transfer learning aims to leverage knowledge from pre-trained models to benefit the target task. Prior transfer learning work mainly transfers from a single model. However, with the emergence of deep models pre-trained from different resources, model hubs consisting of diverse models with various architectures, pre-trained datasets and learning paradigms are available. Directly applying single-model transfer learning methods to each model wastes the abundant knowledge of the model hub and suffer…
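The model-hub idea in the abstract — drawing on several pre-trained models at once rather than transferring from a single one — can be sketched as a gated ensemble of feature extractors. The sketch below is a hypothetical illustration, not the Hub-Pathway method from the talk: the `hub` of toy "models", the `gate_scores`, and `hub_transfer` are all invented for this example.

```python
import math

# A toy "model hub": each entry maps an input vector to a feature vector.
# These stand in for diverse pre-trained networks (hypothetical examples).
hub = {
    "model_a": lambda x: [v * 2.0 for v in x],
    "model_b": lambda x: [v + 1.0 for v in x],
    "model_c": lambda x: [v ** 2 for v in x],
}

def softmax(scores):
    """Normalize raw gate scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def hub_transfer(x, gate_scores):
    """Weight each hub model's features by a (here, hand-supplied) gate
    and aggregate them, instead of transferring from one model only."""
    weights = softmax([gate_scores[name] for name in hub])
    features = [model(x) for model in hub.values()]
    dim = len(x)
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

# Equal gate scores give a uniform mixture of the three models' features.
out = hub_transfer([1.0, 2.0], {"model_a": 0.0, "model_b": 0.0, "model_c": 0.0})
```

In an actual transfer-learning setting the gate would be learned from the target data, so that each input routes knowledge from the hub models most useful for it; here the gate is a fixed dictionary purely to keep the sketch runnable.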

Organizer

NeurIPS 2022

Account · 962 followers


Recommended Videos

Presentations with a similar topic, category, or speaker

            Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses
            03:54

Yuzhou Cao, …

NeurIPS 2022 2 years ago

            Fair Rank Aggregation
            04:59

Diptarka Chakraborty, …

NeurIPS 2022 2 years ago

            Towards Understanding the Condensation of Neural Networks at Initial Training
            04:54

Zhiqin John Xu, …

NeurIPS 2022 2 years ago

            EpiGRAF: Rethinking training of 3D GANs
            05:41

Ivan Skorokhodov, …

NeurIPS 2022 2 years ago

            Hypothesis Testing for Differentially Private Linear Regression
            05:07

Daniel Alabi, …

NeurIPS 2022 2 years ago

            Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
            04:57

Tianyuan Jin, …

NeurIPS 2022 2 years ago

Interested in talks like this? Follow NeurIPS 2022.