            Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games

            Jul 12, 2020

            Speakers

            Zhengyuan Zhou

            Speaker · 0 followers

            Panayotis Mertikopoulos

            Speaker · 0 followers

            Michael I. Jordan

            Speaker · 13 followers

            About

            In this paper, we consider multi-agent learning via online gradient descent in a class of games called λ-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes strongly monotone games. We characterize the finite-time last-iterate convergence rate for joint OGD learning on λ-cocoercive games; further, building on this result, we develop a fully adaptive OGD learning algorithm that does not require any knowledge of the problem parameter (e.g. co…
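
As a rough illustration of the joint OGD dynamic described above, the NumPy sketch below runs simultaneous gradient descent on a toy strongly monotone (and therefore cocoercive) two-player quadratic game and reports the distance of the last iterate to the Nash equilibrium. The game, the fixed step size eta, and the horizon T are hypothetical choices made for illustration; this is not the paper's parameter-free adaptive scheme.

    import numpy as np

    def game_gradient(x):
        """Per-player gradients of a toy two-player quadratic game.

        Player 1 minimizes x1**2 + x1*x2; player 2 minimizes x2**2 - x1*x2.
        The joint gradient field is strongly monotone (hence cocoercive),
        and the unique Nash equilibrium is (0, 0).
        """
        x1, x2 = x
        return np.array([2 * x1 + x2, 2 * x2 - x1])

    def joint_ogd(x0, eta=0.1, T=1000):
        """Simultaneous gradient play: each player descends its own cost gradient."""
        x = np.array(x0, dtype=float)
        for _ in range(T):
            x = x - eta * game_gradient(x)
        return x  # the last iterate, whose convergence rate the paper studies

    x_T = joint_ogd(x0=[1.0, -2.0])
    print("last iterate:", x_T)
    print("distance to Nash equilibrium:", np.linalg.norm(x_T))

With these illustrative settings the last iterate contracts toward (0, 0) at a geometric rate; the point of the sketch is only to make the "joint OGD learning" dynamic concrete, not to reproduce the paper's rates or its adaptive step-size rule.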

            Organizer

            ICML 2020

            Account · 2.7k followers

            Categories

            AI & Data Science

            Category · 10.8k presentations

            About ICML 2020

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

            Recommended Videos

Presentations on a similar topic, in the same category, or by the same speaker

DropNet: Reducing Neural Network Complexity via Iterative Pruning · 15:13
Mehul Motani, … · ICML 2020

Optimal Differential Privacy Composition for Exponential Mechanisms · 15:53
Jinshuo Dong, … · ICML 2020

Two Routes to Scalable Credit Assignment without Weight Symmetry · 14:11
Daniel Kunin, … · ICML 2020

Topological Autoencoders · 13:09
Michael Moor, … · ICML 2020

Variational Bayesian Quantization · 15:07
Yibo Yang, … · ICML 2020

Submodular Optimization: From Discrete to Continuous and Back - Part I · 56:26
Amin Karbasi, … · ICML 2020
