
            Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials

            Nov 28, 2022

Speakers

Eshaan Nichani

Yu Bai

Jason D. Lee

            About

A recent goal in the theory of deep learning is to identify how neural networks can escape the "lazy training," or Neural Tangent Kernel (NTK), regime, in which the network is coupled with its first-order Taylor expansion at initialization. While the NTK is minimax optimal for learning dense polynomials (Ghorbani et al., 2021), it cannot learn features, and hence has poor sample complexity for learning many classes of functions, including sparse polynomials. Recent works have thus aimed to identify se…
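The "lazy training" regime the abstract refers to can be illustrated numerically. The sketch below (not from the talk; the two-layer tanh network, widths, and perturbation scale are all hypothetical choices) checks that for a small parameter perturbation, as in early training of a wide network, the network output stays close to its first-order Taylor expansion at initialization, f(x; θ) ≈ f(x; θ₀) + ∇_θ f(x; θ₀) · (θ − θ₀):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, W, a):
    # Toy two-layer network: f(x) = a^T tanh(W x).
    return a @ np.tanh(W @ x)

def grad_f(x, W, a):
    # Gradients of f with respect to W and a.
    h = np.tanh(W @ x)
    gW = np.outer(a * (1 - h**2), x)  # df/dW, shape (m, d)
    ga = h                            # df/da, shape (m,)
    return gW, ga

d, m = 5, 1000  # input dimension, hidden width
x = rng.normal(size=d)
W0 = rng.normal(size=(m, d)) / np.sqrt(d)
a0 = rng.normal(size=m) / np.sqrt(m)

# Small parameter perturbation, mimicking early training steps.
dW = 1e-3 * rng.normal(size=(m, d))
da = 1e-3 * rng.normal(size=m)

gW, ga = grad_f(x, W0, a0)
linear = f(x, W0, a0) + np.sum(gW * dW) + ga @ da  # Taylor expansion
exact = f(x, W0 + dW, a0 + da)                      # true network output
print(abs(exact - linear))  # small: the network is well-approximated by its linearization
```

The gap between `exact` and `linear` is second order in the perturbation, which is why, in this regime, the network behaves like a kernel method with a fixed (NTK) feature map and cannot learn new features.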

            Organizer


            NeurIPS 2022


