
            Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations

Dec 6, 2021

Speakers

Pranjal Awasthi
Speaker · 0 followers

Alex Tang
Speaker · 0 followers

Aravindan Vijayaraghavan
Speaker · 0 followers

About

            We present polynomial time and sample efficient algorithms for learning an unknown depth-2 feedforward neural network with general ReLU activations, under mild non-degeneracy assumptions. In particular, we consider learning an unknown network of the form f(x) = a^𝖳σ(W^𝖳x+b), where x is drawn from the Gaussian distribution, and σ(t) = max(t,0) is the ReLU activation. Prior works for learning networks with ReLU activations assume that the bias (b) is zero. In order to deal with the presence of t…
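To make the setting concrete, here is a minimal sketch (not the paper's learning algorithm) of the network class being learned: evaluating f(x) = a^𝖳σ(W^𝖳x + b) with a general, non-zero bias b on Gaussian inputs. The dimensions and random parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10, 4                      # input dimension, hidden width (arbitrary)
W = rng.standard_normal((d, k))   # hidden-layer weights
b = rng.standard_normal(k)        # general (non-zero) biases
a = rng.standard_normal(k)        # output-layer weights

def f(x):
    """Depth-2 feedforward ReLU network: f(x) = a^T max(W^T x + b, 0)."""
    return a @ np.maximum(W.T @ x + b, 0.0)

# x is drawn from the standard Gaussian distribution, as in the paper's setting.
x = rng.standard_normal(d)
y = f(x)
```

Note that at x = 0 the output is a^𝖳max(b, 0), so the biases alone already shift the function; this is the term that prior zero-bias analyses do not have to handle.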

Organizer

NeurIPS 2021

Account · 1.9k followers

About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

Like this format? Trust SlidesLive to capture your next event!

Professional recording and livestreaming, worldwide.

Recommended Videos

Presentations with a similar topic, category, or speaker

Credal Self-Supervised Learning · 15:01
Julian Lienen, …
NeurIPS 2021 · 3 years ago

A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs · 07:38
Mucong Ding, …
NeurIPS 2021 · 3 years ago

DRIVE: One-bit Distributed Mean Estimation · 14:07
Shay Vargaftik, …
NeurIPS 2021 · 3 years ago

Neuro-Logic and Differentiable Controls · 26:03
Yejin Choi
NeurIPS 2021 · 3 years ago

Supervising the Transfer of Reasoning Patterns in VQA · 12:54
Corentin Kervadec, …
NeurIPS 2021 · 3 years ago

WiML Workshop 3 · 5:11:30
NeurIPS 2021 · 3 years ago

Interested in talks like this? Follow NeurIPS 2021