
            Learning Theory: Games

June 11, 2019

Speakers

Adam Lerer

Speaker · 0 followers

Anoop Cherian

Speaker · 0 followers

Anson Kahng

Speaker · 1 follower

About the presentation

Regret Circuits: Composability of Regret Minimizers

Regret minimization is a powerful tool for solving large-scale problems; it was recently used in breakthrough results for large-scale extensive-form game solving. This was achieved by composing simplex regret minimizers into an overall regret-minimization framework for extensive-form game strategy spaces. In this paper we study the general composability of regret minimizers. We derive a calculus for constructing regret minimizers for composite…
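
For orientation, here is a minimal sketch (not taken from the paper or the talk) of regret matching, the standard regret minimizer over a probability simplex, together with the simplest composition one might form: independent minimizers running on a Cartesian product of decision sets. All names are illustrative, and the calculus derived in the paper covers richer constructions than this.

```python
# Illustrative sketch: regret matching on a simplex, plus a Cartesian-product
# composition of two independent regret minimizers. Names are hypothetical.
import numpy as np

class RegretMatching:
    """Regret matching over a probability simplex with n actions."""
    def __init__(self, n):
        self.cum_regret = np.zeros(n)

    def recommend(self):
        # Play proportionally to positive cumulative regret;
        # fall back to uniform if no action has positive regret.
        pos = np.maximum(self.cum_regret, 0.0)
        total = pos.sum()
        return pos / total if total > 0 else np.full(len(pos), 1.0 / len(pos))

    def observe(self, utility):
        # utility[i] = payoff of pure action i in this round.
        strategy = self.recommend()
        self.cum_regret += utility - strategy @ utility

class CartesianProduct:
    """Compose minimizers for X and Y into one for X x Y. For utilities that
    decompose additively across the two factors, regret on the product is
    bounded by the sum of the components' regrets."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def recommend(self):
        return self.left.recommend(), self.right.recommend()

    def observe(self, u_left, u_right):
        self.left.observe(u_left)
        self.right.observe(u_right)

# Usage: two 3-action simplexes driven by random linear utilities.
rng = np.random.default_rng(0)
rm = CartesianProduct(RegretMatching(3), RegretMatching(3))
for _ in range(1000):
    x, y = rm.recommend()
    rm.observe(rng.random(3), rng.random(3))
```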

Organizer


            ICML 2019

Account · 3.2k followers

Categories

AI & Data Science

Category · 10.8k presentations

Mathematics

Category · 2.4k presentations

About the organizer (ICML 2019)

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.


Recommended videos

Presentations on a similar topic, in the same category, or by the same speaker

Trend-Following Trading Strategies and Financial Market Stability
21:12 · Michael Wellman · ICML 2019 · 6 years ago

Robust Statistics and Interpretability
1:14:51 · Alain Tapp, … · ICML 2019 · 6 years ago

Interpretability Contributed Talks
45:01 · Ali Pinar, … · ICML 2019 · 6 years ago

Which Tasks Should Be Learned Together in Multi-task Learning?
06:41 · Trevor Standley · ICML 2019 · 6 years ago

Tensor Variable Elimination in Pyro
38:53 · Eli Bingham · ICML 2019 · 6 years ago

Information Theory and Estimation
1:11:55 · Aditya Grover, … · ICML 2019 · 6 years ago
