            Sparse Deep Learning: A New Framework Immune to Local Traps and Miscalibration

            Dec 6, 2021

Speakers

Yan Sun

Speaker · 0 followers

Wenjun Xiong

Speaker · 0 followers

Faming Liang

Speaker · 0 followers

            About

Deep learning has powered recent successes of artificial intelligence (AI). However, the deep neural network, as the basic model of deep learning, has suffered from issues such as local traps and miscalibration. In this paper, we provide a new framework for sparse deep learning that addresses the above issues in a coherent way. In particular, we lay down a theoretical foundation for sparse deep learning and propose prior annealing algorithms for learning sparse neural networks. The former…
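
The abstract mentions prior annealing algorithms for learning sparse neural networks. As a rough illustration of the general idea only — not the authors' method — the sketch below anneals the strength of a sparsity-inducing Laplace (L1) prior during training and then prunes near-zero weights afterward; the toy data, architecture, and hyperparameters (lam_max, the pruning threshold) are all assumptions made for this example.

# Hedged sketch: prior annealing toward a sparse network (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data (assumed; stands in for any dataset).
X = torch.randn(256, 20)
true_w = torch.zeros(20)
true_w[:3] = torch.tensor([2.0, -1.0, 0.5])   # only 3 relevant inputs
y = X @ true_w + 0.1 * torch.randn(256)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

n_steps, lam_max = 2000, 1e-3   # lam_max: final prior strength (assumed value)
for step in range(n_steps):
    # Anneal the prior strength from 0 up to lam_max over training.
    lam = lam_max * min(1.0, step / (0.8 * n_steps))
    pred = model(X).squeeze(-1)
    nll = ((pred - y) ** 2).mean()                        # Gaussian negative log-likelihood
    l1 = sum(p.abs().sum() for p in model.parameters())   # Laplace-prior penalty
    loss = nll + lam * l1
    opt.zero_grad()
    loss.backward()
    opt.step()

# Post-training sparsification: prune weights below a small threshold (assumed).
with torch.no_grad():
    for p in model.parameters():
        p[p.abs() < 1e-3] = 0.0

nz = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"nonzero weights: {nz}/{total}")

Annealing the prior rather than applying it at full strength from the start lets the network first find a good dense fit before sparsity pressure pushes irrelevant weights toward zero, which is one intuition behind why such schemes can avoid poor local solutions.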

Organizer

NeurIPS 2021

Account · 1.9k followers

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.


            Recommended Videos

Presentations on similar topics, categories, or speakers

Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration Learning
11:43 · Xiao Wang, … · NeurIPS 2021 · 3 years ago

Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning
07:58 · Christoph Dann, … · NeurIPS 2021 · 3 years ago

Exploiting Opponents Under Utility Constraints in Sequential Games
12:59 · Martino Bernasconi De Luca, … · NeurIPS 2021 · 3 years ago

Versatile Inverse Reinforcement Learning via Cumulative Rewards
03:01 · Niklas Freymuth, … · NeurIPS 2021 · 3 years ago

Antipodes of Label Differential Privacy: PATE and ALIBI
14:17 · Mani Malek, … · NeurIPS 2021 · 3 years ago

Spotlights 2 QA
05:46 · Sebastian Palacio, … · NeurIPS 2021 · 3 years ago
