
            ESCHER: Eschewing Importance Sampling in Games by Computing a History Value Function to Estimate Regret

Dec 2, 2022

Speakers

Stephen McAleer

Gabriele Farina

Marc Lanctot

About

Recent techniques for approximating Nash equilibria in very large games leverage neural networks to learn approximately optimal policies (strategies). One promising line of research uses neural networks to approximate counterfactual regret minimization (CFR) or its modern variants. DREAM, the only current CFR-based neural method that is model-free and therefore scalable to very large games, trains a neural network on an estimated regret target that can have extremely high variance due to an im…
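The abstract refers to counterfactual regret minimization (CFR). As background, the core policy update in tabular CFR, regret matching, can be sketched as follows. This is a generic illustration of the regret-matching rule, not ESCHER's or DREAM's actual implementation; the function and variable names are hypothetical.

```python
def regret_matching(cum_regret):
    """Map cumulative (counterfactual) regrets per action to a policy.

    Actions are weighted in proportion to their positive cumulative
    regret; if no action has positive regret, play uniformly at random.
    """
    positive = [max(r, 0.0) for r in cum_regret]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    n = len(cum_regret)
    return [1.0 / n] * n

# Example: actions with regrets [2, -1, 2] are played with
# probabilities [0.5, 0.0, 0.5].
policy = regret_matching([2.0, -1.0, 2.0])
```

In neural CFR variants, the cumulative regrets fed into this rule are not stored in a table but predicted by a trained network, which is where the variance of the regret-estimation target matters.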

Organizer

NeurIPS 2022

Account · 961 followers


Recommended Videos

Presentations with a similar topic, category, or speaker

On the impact of the quality of pseudo-labels on the self-supervised speaker verification task
07:22

Abderrahim Fathan, …

NeurIPS 2022 · 2 years ago

Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs
04:31

Cristian Bodnar, …

NeurIPS 2022 · 2 years ago

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
01:02

Tianyang Hu, …

NeurIPS 2022 · 2 years ago

Towards Video Text Visual Question Answering: Benchmark and Baseline
05:00

Minyi Zhao, …

NeurIPS 2022 · 2 years ago

Adversarial Policies Beat Professional-Level Go AIs
04:57

Tony Wang, …

NeurIPS 2022 · 2 years ago

Deterministic Langevin Monte Carlo with Normalizing Flows for Bayesian Inference
01:00

Richard Grumitt, …

NeurIPS 2022 · 2 years ago

Interested in talks like this? Follow NeurIPS 2022