
            Revisiting Gradient Clipping: Stochastic bias and tight convergence guarantees

            Jul 24, 2023

Speakers

Anastasiia Koloskova
Hadrien Hendrikx
Sebastian Stich

            About

Gradient clipping is a popular modification to standard (stochastic) gradient descent that, at every iteration, limits the gradient norm to a certain value c > 0. It is widely used, for example, for stabilizing the training of deep learning models (Goodfellow et al., 2016) or for enforcing differential privacy (Abadi et al., 2016). Despite the popularity and simplicity of the clipping mechanism, its convergence guarantees often require specific values of c and strong noise assumptions. In this paper, …
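The clipping operation described in the abstract can be sketched as follows: the stochastic gradient is rescaled whenever its norm exceeds the threshold c, and otherwise left untouched. This is a minimal NumPy illustration, not the authors' implementation; the example values of the gradient, learning rate, and c are placeholders.

```python
import numpy as np

def clip_gradient(g, c):
    """Return g scaled so that its Euclidean norm is at most c.

    If ||g|| <= c, g is returned unchanged (the clipped update is
    unbiased only in this regime; the paper analyzes the bias that
    arises when clipping is active).
    """
    norm = np.linalg.norm(g)
    if norm > c:
        return g * (c / norm)
    return g

# One clipped-SGD step: x_{t+1} = x_t - lr * clip(g_t, c)
x = np.array([1.0, 2.0])
g = np.array([3.0, 4.0])   # stochastic gradient with norm 5
lr, c = 0.1, 1.0
x_new = x - lr * clip_gradient(g, c)  # uses the rescaled gradient [0.6, 0.8]
```

Note that clipping preserves the gradient's direction and only shrinks its magnitude, which is why it stabilizes training without changing which way the iterates move.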

            Organizer

ICML 2023



            Recommended Videos

            Presentations on similar topic, category or speaker

Quantifying the Variability Collapse of Neural Networks
04:24
Jing Xu, …
ICML 2023 · 2 years ago

The Melting Pot of Neural Algorithmic Reasoning
31:56
Petar Veličković
ICML 2023 · 2 years ago

Offline Meta Reinforcement Learning with In-Distribution Online Adaptation
04:28
Jianhao Wang, …
ICML 2023 · 2 years ago

ContraBAR: Contrastive Bayes-Adaptive Deep RL
05:18
Era Choshen, …
ICML 2023 · 2 years ago

An Information-Theoretic Analysis of Nonstationary Bandit Learning
05:16
Seungki Min, …
ICML 2023 · 2 years ago

How to Address Monotonicity for Model Risk Management?
05:03
Dangxing Chen, …
ICML 2023 · 2 years ago
