
Convergence of Gradient Descent with Linearly Correlated Noise and Applications to Differentially Private Learning

            Dec 10, 2023

Speakers

Anastasiia Koloskova
Speaker · 0 followers

Ryan McKenna
Speaker · 0 followers

Zachary Charles
Speaker · 0 followers

            About

            We study gradient descent under linearly correlated noise. Our work is motivated by recent practical methods for optimization with differential privacy (DP), such as DP-FTRL, which achieve strong performance in settings where privacy amplification techniques are infeasible (such as in federated learning). These methods inject privacy noise through a matrix factorization mechanism, making the noise *linearly correlated* over iterations. We propose a simplified setting that distills key facets of…
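To make the mechanism concrete, here is a minimal sketch (not the authors' implementation): gradient descent on a toy quadratic where the injected noise at step t is a fixed linear combination of i.i.d. Gaussian seeds, so the noise is linearly correlated over iterations. The mixing matrix B, the Hessian H, and all constants are illustrative assumptions, not the paper's factorization; B = I would recover ordinary uncorrelated (DP-SGD-style) noise.

```python
# Sketch: gradient descent with linearly correlated noise, assuming a toy
# quadratic objective and an illustrative lower-triangular mixing matrix B.
import numpy as np

rng = np.random.default_rng(0)
T, d, lr, sigma = 50, 5, 0.1, 0.5

A = rng.normal(size=(d, d))
H = A.T @ A / d  # PSD Hessian of the toy objective f(x) = x^T H x / 2

# Row t of B averages the first t+1 i.i.d. seeds, so the noise added at
# step t is a linear combination of z_0..z_t -- correlated across steps.
B = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]
Z = sigma * rng.normal(size=(T, d))  # i.i.d. Gaussian seeds
noise = B @ Z                        # linearly correlated noise sequence

x = rng.normal(size=d)
for t in range(T):
    grad = H @ x
    x = x - lr * (grad + noise[t])   # noisy gradient step

print("final ||x||:", np.linalg.norm(x))
```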

Organizer

NeurIPS 2023
Account · 645 followers


            Recommended Videos

Presentations on a similar topic, category, or speaker

Disentangled Wasserstein Autoencoder for Protein Engineering
04:50

Tianxiao Li, …

NeurIPS 2023 · 16 months ago

ℳ^4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models
04:38

Xuhong Li, …

NeurIPS 2023 · 16 months ago

Generative Modelling of Stochastic Actions with Arbitrary Constraints in Reinforcement Learning
04:57

Chen Changyu, …

NeurIPS 2023 · 16 months ago

Boosting Learning for LDPC Codes to Improve the Error-Floor Performance
04:54

Hee-Youl Kwak, …

NeurIPS 2023 · 16 months ago

[Re] VAE Approximation Error: ELBO and Exponential Families
05:02

Volodymyr Kyrylov, …

NeurIPS 2023 · 16 months ago

Federated Learning via Meta-Variational Dropout
04:58

Insu Jeon, …

NeurIPS 2023 · 16 months ago
