
            Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis

            Dec 6, 2021

            Speakers

Atsushi Nitanda

Denny Wu

Taiji Suzuki

            About

We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method in convex optimization to optimization over probability distributions with a quantitative runtime guarantee. The algorithm consists of an inner loop and an outer loop: the inner loop utilizes the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can thus be interpreted as an extension of the Langevin algorithm to naturally ha…
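As a rough illustration of the inner/outer-loop structure described above, the sketch below pairs a dual-averaging-style running average of the training signal (outer loop) with a Langevin-type noisy-gradient update of the particles (inner loop) for a mean-field two-layer network on synthetic regression data. This is a minimal sketch under assumed choices, not the authors' implementation: the tanh network, the 2/(t+1) averaging weights, and all hyperparameters (eta, lam, lam2) are illustrative.

```python
import numpy as np

# Illustrative PDA-style loop (not the authors' code): a dual-averaged
# training signal drives a Langevin-type inner sampler over particles
# parameterizing a mean-field two-layer network.

rng = np.random.default_rng(0)
d, n, M = 5, 200, 200                 # input dim, samples, particles (assumed)
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                   # synthetic target

# Each particle is theta_j = (w_j in R^d, a_j in R); the network output is
# f(x) = (1/M) * sum_j a_j * tanh(w_j . x)  (mean-field scaling).
particles = 0.5 * rng.normal(size=(M, d + 1))

def predict(P, X):
    w, a = P[:, :d], P[:, d]
    return np.tanh(X @ w.T) @ a / M

def potential_grad(P, avg_res, lam2):
    """Gradient in theta of the dual-averaged potential
    q(theta) = (1/n) * sum_i avg_res[i] * a * tanh(w . x_i) + (lam2/2)*||theta||^2,
    evaluated at every particle."""
    w, a = P[:, :d], P[:, d]
    h = np.tanh(X @ w.T)                                      # (n, M)
    ga = h.T @ avg_res / n                                    # (M,)
    gw = ((avg_res[:, None] * a * (1.0 - h**2)).T @ X) / n    # (M, d)
    return np.concatenate([gw, ga[:, None]], axis=1) + lam2 * P

T_outer, T_inner = 30, 50
eta, lam, lam2 = 0.1, 0.05, 1e-2      # inner step, entropic strength, L2 (assumed)
avg_res = np.zeros(n)                 # dual-averaged residual signal

for t in range(1, T_outer + 1):
    # Outer step: fold the current residual into the running average,
    # weighting recent iterations more heavily (2/(t+1) weights are a placeholder).
    res = predict(particles, X) - y
    avg_res = (1.0 - 2.0 / (t + 1)) * avg_res + (2.0 / (t + 1)) * res

    # Inner loop: Langevin-type updates targeting ~ exp(-q(theta)/lam).
    for _ in range(T_inner):
        drift = potential_grad(particles, avg_res, lam2)
        noise = rng.normal(size=particles.shape)
        particles = particles - eta * drift + np.sqrt(2.0 * eta * lam) * noise

    mse = np.mean((predict(particles, X) - y) ** 2)
    print(f"outer {t:2d}  train MSE {mse:.4f}")
```

Decreasing lam trades the entropic regularization (and the inner-loop noise) against data fit; the paper's actual weighting scheme and its global convergence rate analysis are not reproduced in this sketch.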

            Organizer

NeurIPS 2021

            About NeurIPS 2021

            Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.

            Recommended Videos

Presentations on a similar topic, category, or speaker

Fairness and privacy aspects of ImageNet
27:34

Olga Russakovsky, …

NeurIPS 2021 · 3 years ago

Overparameterization Improves Robustness to Covariate Shift in High-Dimensions
15:11

Nilesh Tripuraneni, …

NeurIPS 2021 · 3 years ago

Play to Grade: Testing Coding Games as Classifying Markov Decision Process
14:27

Allen Nie, …

NeurIPS 2021 · 3 years ago

Continual Auxiliary Task Learning
05:36

Matthew McLeod, …

NeurIPS 2021 · 3 years ago

Learning Parameterized Task Structure for Generalization to Unseen Entities
06:49

Anthony Liu, …

NeurIPS 2021 · 3 years ago

Fairness via Representation Neutralization
04:39

Mengnan Du, …

NeurIPS 2021 · 3 years ago
