
            DessiLBI: Exploring Structural Sparsity on Deep Network via Differential Inclusion Paths

            Jul 12, 2020

            Speakers

            Yanwei Fu

            Chen Liu

            Donghao Li

            About

            Over-parameterization is ubiquitous nowadays in training neural networks to benefit both optimization in seeking global optima and generalization in reducing prediction error. However, compressive networks are desired in many real-world applications, and direct training of small networks may be trapped in local optima. In this paper, instead of pruning or distilling over-parameterized models to compressive ones, we propose a new approach based on differential inclusions of inverse scale spaces. S…
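
            The abstract points to differential inclusions of inverse scale spaces; discretized, these take the form of a linearized Bregman iteration that couples the dense network weights with a sparse companion variable through soft-thresholding. The NumPy sketch below illustrates that generic coupled update on a toy least-squares problem. It is only an illustration of the family of methods the abstract names: the variable names, the hyperparameters kappa, nu and alpha, and the exact proximal step are assumptions, not the algorithm verified from the paper.

            import numpy as np

            def soft_threshold(z, lam=1.0):
                # Proximal map of the l1 norm: the source of structural sparsity.
                return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

            # Toy least-squares loss 0.5 * ||X w - y||^2 as a stand-in for a network loss.
            rng = np.random.default_rng(0)
            X = rng.normal(size=(50, 20))
            w_true = np.zeros(20)
            w_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
            y = X @ w_true

            # Hypothetical hyperparameters, chosen for illustration only.
            kappa, nu, alpha, steps = 5.0, 1.0, 1e-3, 5000

            W = np.zeros(20)      # dense weights, updated by (damped) gradient descent
            Gamma = np.zeros(20)  # sparse companion variable tracing the inverse scale space path
            Z = np.zeros(20)      # accumulated (sub)gradient that drives Gamma

            for _ in range(steps):
                grad_W = X.T @ (X @ W - y) + (W - Gamma) / nu  # gradient of the augmented loss in W
                W -= kappa * alpha * grad_W
                Z += alpha * (W - Gamma) / nu                  # negative gradient of the augmented loss in Gamma
                Gamma = kappa * soft_threshold(Z)              # soft-thresholding keeps Gamma structurally sparse

            # Important weights enter the support of Gamma first along the path.
            print("support of Gamma:", np.nonzero(Gamma)[0])

            On this toy problem the support of Gamma recovers the three nonzero coordinates of w_true while W stays dense, mirroring the coupling of a dense weight variable with a sparse path variable that the abstract describes.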

            Organizer


            ICML 2020

            Account · 2.7k followers

            Categories

            AI & Data Science

            Category · 10.8k presentations

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest-growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


            Recommended videos

            Presentations with a similar topic, category or speaker

            Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
            14:55
            Blake Bordelon, …
            ICML 2020 · 5 years ago

            Optimal Query Complexity of Secure Stochastic Convex Optimization
            16:39
            Wei Tang, …
            ICML 2020 · 5 years ago

            Uncertainty and Robustness for Self-Driving
            30:34
            Raquel Urtasun
            ICML 2020 · 5 years ago

            Combining PDE Solvers and Graph Neural Networks for Fluid Flow Prediction
            14:05
            Filipe De Avila Belbute-Peres, …
            ICML 2020 · 5 years ago

            Optimization from Structured Samples for Coverage Functions
            14:21
            Wei Chen, …
            ICML 2020 · 5 years ago

            Duality in vv-RKHSs with Infinite Dimensional Outputs: Application to Robust Losses
            14:35
            Pierre Laforgue, …
            ICML 2020 · 5 years ago

            Interested in talks like this? Follow ICML 2020