            Gradient Temporal-Difference Learning with Regularized Corrections

July 12, 2020

Speakers

Sina Ghiassian
Andrew Patterson
Shivam Garg

About the presentation

            Value function learning remains a critical component of many reinforcement learning systems. Many algorithms are based on temporal difference (TD) updates, which have well-documented divergence issues, even though potentially sound alternatives exist like Gradient TD. Unsound approaches like Q-learning and TD remain popular because divergence seems rare in practice and these algorithms typically perform well. However, recent work with large neural network learning systems reveals that instabilit…
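The abstract contrasts semi-gradient TD, which can diverge under off-policy training, with Gradient-TD methods that add a learned correction term. As a rough illustration only, here is a minimal sketch of both update rules under linear function approximation, with an l2 regularizer on the correction weights in the spirit of the talk's title; the function names, step sizes alpha and alpha_h, and the default beta = 1.0 are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def td0_update(w, x, r, x_next, gamma, alpha):
    """Semi-gradient TD(0): cheap and popular, but can diverge off-policy."""
    delta = r + gamma * (w @ x_next) - w @ x   # TD error
    return w + alpha * delta * x

def tdc_regularized_update(w, h, x, r, x_next, gamma, alpha, alpha_h, beta=1.0):
    """Gradient-TD (TDC-style) step with an l2-regularized correction vector h.

    beta > 0 shrinks h toward zero; beta = 0 recovers plain TDC.
    """
    delta = r + gamma * (w @ x_next) - w @ x
    w_new = w + alpha * (delta * x - gamma * (h @ x) * x_next)  # corrected TD step
    h_new = h + alpha_h * ((delta - h @ x) * x - beta * h)      # estimate of E[delta | x]
    return w_new, h_new

# Toy usage on random features.
rng = np.random.default_rng(0)
w, h = np.zeros(8), np.zeros(8)
x, x_next = rng.random(8), rng.random(8)
w, h = tdc_regularized_update(w, h, x, r=1.0, x_next=x_next,
                              gamma=0.99, alpha=0.1, alpha_h=0.1)
```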

Organizer

            ICML 2020

            Account · 2.7k followers

Category

            AI & Data Science

            Category · 10.8k presentations

About the organizer (ICML 2020)

            The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

Recommended videos

Presentations on a similar topic, in the same category, or by the same speaker

Robust One-Bit Recovery via ReLU Generative Networks · 15:05 · Shuang Qiu, … · ICML 2020

Supervised learning: no loss no cry · 15:18 · Richard Nock, … · ICML 2020

Uncertainty in Deep Learning: How to be Bayesian? · 28:02 · Finale Doshi-Velez · ICML 2020

Clinical-grade Artificial Intelligence: Hype and Hope for Cancer Care · 29:29 · Thomas J. Fuchs · ICML 2020

RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr · 12:22 · Xingjian Li, … · ICML 2020

Is Local SGD Better than Minibatch SGD? · 14:35 · Blake Woodworth, … · ICML 2020
