            Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice

            Jul 25, 2023

            Speakers

            Yishay Mansour

            Richard Nock

            Robert C. Williamson


            About

            A landmark negative result of Long and Servedio has had a considerable impact on research and development in boosting algorithms, around the now famous tagline that "noise defeats all convex boosters". In this paper, we appeal to the half-century+ founding theory of losses for class probability estimation, an extension of Long and Servedio's results and a new general convex booster to demonstrate that the source of their negative result is in fact the *model class*, linear separators. Losses or…
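To make the setting concrete, here is a minimal, self-contained sketch (not from the paper) of the two ingredients the abstract refers to: random classification noise, which flips each training label independently with probability η < 1/2, and a convex potential booster, here plain AdaBoost with decision stumps, which greedily reduces the convex exponential potential Σᵢ exp(−yᵢ H(xᵢ)). The 1-D data, the stump learner, and all parameters are illustrative choices, not the construction of Long and Servedio or the booster proposed in this paper.

```python
import math
import random

def add_noise(labels, eta, rng):
    # Random classification noise (RCN): each label is flipped
    # independently with probability eta < 1/2.
    return [-y if rng.random() < eta else y for y in labels]

def stump_predict(x, thresh, sign):
    # Decision stump: predicts `sign` on [thresh, inf), -sign below.
    return sign if x >= thresh else -sign

def adaboost(xs, ys, rounds):
    # AdaBoost as a convex potential booster: each round greedily
    # reduces the exponential potential sum_i exp(-y_i * H(x_i)).
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    thresholds = sorted(set(xs))
    for _ in range(rounds):
        # Pick the stump with the smallest weighted error; searching
        # both signs guarantees that error is at most 1/2.
        err, t, s = min(
            (sum(wi for wi, x, y in zip(w, xs, ys)
                 if stump_predict(x, t, s) != y), t, s)
            for t in thresholds for s in (+1, -1))
        err = min(max(err, 1e-10), 0.5)
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, t, s))
        # Multiplicative weight update, then renormalize.
        w = [wi * math.exp(-alpha * y * stump_predict(x, t, s))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, t, s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

rng = random.Random(0)
xs = [i / 100.0 for i in range(100)]            # 1-D inputs in [0, 1)
clean = [1 if x >= 0.5 else -1 for x in xs]     # noiseless threshold concept
noisy = add_noise(clean, eta=0.2, rng=rng)      # 20% random label noise
h = adaboost(xs, noisy, rounds=10)
acc = sum(predict(h, x) == y for x, y in zip(xs, clean)) / len(xs)
print(f"clean-label accuracy after boosting on noisy data: {acc:.2f}")
```

In this easy 1-D instance the booster still recovers the threshold despite 20% label noise; the Long-Servedio negative result relies on a carefully constructed distribution where, with linear separators as the model class, any convex potential booster is driven to near-random accuracy under RCN.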

            Organizer

            ICML 2023

            Recommended Videos

            Presentations on similar topic, category or speaker

            BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
            11:55 · Jiatao Gu, … · ICML 2023

            Invited Talk: Gautam Kamath
            16:04 · Gautam Kamath · ICML 2023

            Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration
            04:46 · Blaise Delattre, … · ICML 2023

            ChemGymRL: An Interactive Framework for Reinforcement Learning for Digital Chemistry
            11:43 · Mark Crowley, … · ICML 2023

            The self-supervised learning Interplay: Data Augmentations, Inductive Bias and Generalization
            05:12 · Vivien Cabannes, … · ICML 2023

            Federated Learning with Personalized and User-Level Differential Privacy
            38:13 · Li Xiong · ICML 2023
