
            Derivatives and residual distribution of regularized M-estimators with application to adaptive tuning

            Jul 2, 2022

Speakers

Pierre C. Bellec
Speaker · 0 followers

Yiwei Shen
Speaker · 0 followers

            About

This paper studies M-estimators with a gradient-Lipschitz loss function regularized with a convex penalty in linear models with a Gaussian design matrix and arbitrary noise distribution. A practical example is the robust M-estimator constructed with the Huber loss and the Elastic-Net penalty when the noise distribution has heavy tails. Our main contributions are three-fold. (i) We provide general formulae for the derivatives of regularized M-estimators β̂(y,X), where differentiation is taken with respec…
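The abstract's running example, a Huber-loss M-estimator with an Elastic-Net penalty, can be illustrated with a short numerical sketch. The snippet below is not from the paper; it is a minimal proximal-gradient (ISTA) solver, assuming the standard objective (1/n) Σᵢ huber(yᵢ − xᵢᵀβ) + λ₁‖β‖₁ + (λ₂/2)‖β‖₂², with the function name `fit_huber_enet` and parameters `lam1`, `lam2`, `delta` chosen here for illustration only. The Huber loss is gradient-Lipschitz, which is exactly the smoothness condition the paper assumes.

```python
import numpy as np

def huber_grad(r, delta=1.345):
    """Derivative of the Huber loss: identity near zero, clipped at ±delta."""
    return np.clip(r, -delta, delta)

def fit_huber_enet(X, y, lam1=0.1, lam2=0.1, step=None, n_iter=500):
    """Proximal gradient (ISTA) for the illustrative objective
        (1/n) sum_i huber(y_i - x_i' b) + lam1*||b||_1 + (lam2/2)*||b||_2^2.
    The l1 part is handled by its prox (soft-thresholding); the smooth part
    (Huber + ridge) is handled by a gradient step.
    """
    n, p = X.shape
    if step is None:
        # huber_grad is 1-Lipschitz, so the smooth part has Lipschitz
        # gradient with constant at most ||X||_2^2 / n + lam2.
        L = np.linalg.norm(X, 2) ** 2 / n + lam2
        step = 1.0 / L
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ b) / n + lam2 * b
        z = b - step * grad
        # soft-thresholding: prox of the scaled l1 penalty
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
    return b
```

On data with heavy-tailed (e.g. Student-t) noise, this robust fit recovers a sparse signal far better than least squares would; the paper's results concern the derivatives of such a map (y, X) ↦ β̂ and their use for adaptive tuning of (λ₁, λ₂).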

Organizer

COLT
Account · 20 followers

            About COLT

The conference has been held annually since 1988 and has become the leading conference on learning theory by maintaining a highly selective process for submissions. It is committed to high-quality articles in all theoretical aspects of machine learning and related topics.


Recommended Videos

Presentations on similar topic, category or speaker

Pessimism About Unknown Unknowns Inspires Conservatism
15:02 · Marcus Hutter, … · COLT, 5 years ago

Exploration by Optimisation in Partial Monitoring
15:19 · Csaba Szepesvari, … · COLT, 5 years ago

Approximation Schemes for ReLU Regression
15:20 · Ilias Diakonikolas, … · COLT, 5 years ago

The estimation error of general first order methods
00:59 · Andrea Montanari, … · COLT, 5 years ago

Optimal and instance-dependent guarantees for Markovian linear stochastic approximation
22:11 · Wenlong Mou, … · COLT, 3 years ago

Complexity Guarantees for Polyak Steps with Momentum
00:47 · Adrien Taylor, … · COLT, 5 years ago

Interested in talks like this? Follow COLT