Thinking Outside the Ball: Optimal Learning with Gradient Descent for Generalized Linear Stochastic Convex Optimization

Nov 28, 2022

Speakers

Idan Amir

Roi Livni

Nathan Srebro

About

We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e. where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most ε (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample compl…
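
The setting above can be illustrated with a short, purely hypothetical sketch (not the paper's algorithm or analysis): plain gradient descent from the origin on the empirical risk of a linear predictor with a convex 1-Lipschitz loss (here the absolute loss), with no projection onto the unit ball and no explicit regularization, stopped after a fixed number of steps. The data generator, step size eta, and horizon T below are illustrative assumptions.

import numpy as np

# A minimal sketch of the setting described above, NOT the paper's algorithm or
# analysis: unprojected, unregularized gradient descent on the empirical risk of
# a linear predictor with a convex 1-Lipschitz loss (absolute loss), stopped early.
rng = np.random.default_rng(0)
n, d = 200, 50                        # samples and dimension (illustrative)
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)      # unit-norm comparator, as in the guarantee
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

def empirical_risk(w):
    # Empirical risk under the absolute loss, a scalar convex 1-Lipschitz
    # function of the linear prediction <w, x>.
    return np.mean(np.abs(X @ w - y))

def gd_early_stopped(eta=0.1, T=100):
    # Plain GD from the origin: no projection, no explicit regularization;
    # "early stopping" here just means a fixed number of steps T.
    w = np.zeros(d)
    for _ in range(T):
        subgrad = X.T @ np.sign(X @ w - y) / n   # subgradient of the absolute loss
        w = w - eta * subgrad
    return w

w_T = gd_early_stopped()
print(f"risk of GD iterate:   {empirical_risk(w_T):.3f}")
print(f"risk of unit-norm w*: {empirical_risk(w_star):.3f}")
print(f"||w_T|| = {np.linalg.norm(w_T):.2f} (the iterate is free to leave the unit ball)")

Note that the iterate is never projected back onto the unit Euclidean ball, which is the sense in which the method "thinks outside the ball"; the abstract's guarantee nonetheless compares its excess error to the best unit-norm predictor.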

Organizer

NeurIPS 2022

Recommended Videos

Presentations on a similar topic, category, or speaker

Revisiting Active Sets for Gaussian Process Decoders
Pablo Moreno-Muñoz, … · NeurIPS 2022 · 04:23

Composite Feature Selection Using Deep Ensembles
Fergus Imrie, … · NeurIPS 2022 · 04:56

Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics
Lukas Prantl, … · NeurIPS 2022 · 04:26

A Framework for Predictable Actor-Critic Control
Josiah Coad, … · NeurIPS 2022 · 05:04

Towards User-Interactive Offline Reinforcement Learning
Phillip Swazinna, … · NeurIPS 2022 · 17:13

Nocturne: a scalable driving benchmark for bringing multi-agent learning one step closer to the real world
Eugene Vinitsky, … · NeurIPS 2022 · 04:48
