
            Benign Overfitting in Deep Neural Networks under Lazy Training

            Jul 24, 2023

Speakers

Zhenyu Zhu

Speaker · 0 followers

Fanghui Liu

Speaker · 0 followers

Grigorios G. Chrysos

Speaker · 0 followers

About

This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU activation functions and proves that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification while attaining (nearly) zero training error. For this purpose, we unify three interrelated concepts: over-parameterization, benign overfitting, and the Lipschitz constant of DNNs. Our results indicate that interpolating with smoother functions leads to better generalization.…
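The abstract hinges on the Lipschitz constant of the network as the smoothness measure linking over-parameterization to benign overfitting. As a minimal, self-contained sketch (not the paper's construction), the Python snippet below bounds the Lipschitz constant of a small ReLU MLP from above by the product of its layers' spectral norms; the widths and random Gaussian weights are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer widths for a small ReLU MLP (illustration only).
widths = [10, 64, 64, 1]
weights = [rng.normal(scale=1.0 / np.sqrt(m), size=(n, m))
           for m, n in zip(widths[:-1], widths[1:])]

def relu_mlp(x):
    """Forward pass f(x) = W_3 relu(W_2 relu(W_1 x))."""
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)
    return weights[-1] @ x

# ReLU is 1-Lipschitz, so the product of the layers' spectral norms
# (largest singular values) upper-bounds the network's Lipschitz constant.
lipschitz_upper = np.prod([np.linalg.norm(W, ord=2) for W in weights])
print(f"Upper bound on Lipschitz constant: {lipschitz_upper:.3f}")

# Empirical sanity check: |f(x) - f(y)| <= L * ||x - y|| on a random pair.
x, y = rng.normal(size=10), rng.normal(size=10)
ratio = np.abs(relu_mlp(x) - relu_mlp(y)).item() / np.linalg.norm(x - y)
assert ratio <= lipschitz_upper
```

In the paper's framing, a smaller value of this kind of smoothness measure at an interpolating solution is what makes the overfitting benign.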

Organizer

ICML 2023

Account · 657 followers

Like this format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.

Recommended videos

Presentations similar in topic, category, or speaker

            MoleculeSDE: A Group Symmetric Stochastic Differential Equation Model for Molecule Multi-modal Pretraining
            04:51

Shengchao Liu, …

ICML 2023 · 2 years ago

            Blackbox Differentiation: The story so far
            28:59

Marin Vlastelica

ICML 2023 · 2 years ago

            Active Ranking of Experts Based on their Performances in Many Tasks
            09:01

El Mehdi Saad, …

ICML 2023 · 2 years ago

            Neuro-Symbolic Dialogue Management using Prompt-Based Transfer Learning for Dialogue Act Controlled Open-Domain NLG
            44:43

Marilyn Walker

ICML 2023 · 2 years ago

            Invariant Slot Attention
            04:38

Ondrej Biza, …

ICML 2023 · 2 years ago

            Local Learning for Higher Parallelism
            24:19

Edouard Oyallon

ICML 2023 · 2 years ago

Interested in talks like this? Follow ICML 2023