            Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

            Jul 12, 2020

Speakers

Denny Zhou

Mao Ye

Chen Chen

About

We propose an efficient algorithm to train a very deep and thin network with a theoretical guarantee. Our method is motivated by model compression and consists of three stages. In the first stage, we widen the deep thin network and train it until convergence. In the second stage, we use this well-trained deep wide network to warm up, or initialize, the original deep thin network. In the last stage, we train this well-initialized deep thin network until convergence. The key ingredient of our method is…
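
The abstract gives only the high-level recipe, so the snippet below is a minimal PyTorch sketch of the three stages on a toy regression task, not the authors' implementation: the widening factor, the warm-up loss (plain output-level distillation with MSE), and the data are illustrative assumptions, and the paper's key ingredient is truncated above.

# Minimal sketch of the three-stage "go wide, then narrow" recipe from the
# abstract. Widths, losses, and data are illustrative assumptions.
import torch
import torch.nn as nn

def make_mlp(in_dim, hidden_dim, out_dim, depth):
    """Deep MLP with `depth` hidden layers of width `hidden_dim`."""
    layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
    layers.append(nn.Linear(hidden_dim, out_dim))
    return nn.Sequential(*layers)

def train(model, loss_fn, data, target, steps=200, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        opt.step()
    return model

# Toy regression data as a stand-in for the real task.
x = torch.randn(512, 16)
y = torch.randn(512, 1)

thin = make_mlp(16, 8, 1, depth=6)    # the deep thin target network
wide = make_mlp(16, 64, 1, depth=6)   # same depth, widened hidden layers

# Stage 1: train the deep wide network until convergence.
train(wide, nn.MSELoss(), x, y)

# Stage 2: warm up the thin network so it mimics the wide one
# (output-level distillation here, used only as a placeholder for the
# paper's warm-up step).
with torch.no_grad():
    soft_target = wide(x)
train(thin, nn.MSELoss(), x, soft_target)

# Stage 3: train the well-initialized thin network on the original task.
train(thin, nn.MSELoss(), x, y)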

Organizer

ICML 2020

Categories

            AI & Data Science


About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended Videos

Presentations on similar topics, in the same category, or by the same speakers

Beyond Signal Propagation: Is Feature Diversity Necessary in Neural Network Initialization? (14:14)
Yaniv Blumenfeld, … · ICML 2020

Few-shot Out-of-Distribution Detection (04:54)
Paul Vicol, … · ICML 2020

Parameter-free Online Optimization - Part 4 (46:35)
Francesco Orabona, … · ICML 2020

Estimating Model Uncertainty of Neural Network in Sparse Information Form (14:33)
Jongseok Lee, … · ICML 2020

What we learned from Argoverse Competitions (39:08)
Jagjeet Singh, … · ICML 2020

Unsupervised Machine Translation: A Tale of Two Hammers (35:22)
Pascale Fung · ICML 2020

Interested in talks like this? Follow ICML 2020