December 2, 2022
In deep reinforcement learning, multi-step learning is almost unavoidable to achieve state-of-the-art performance. However, the increased variance that multi-step learning brings makes it difficult to increase the update horizon beyond relatively small numbers. In this paper, we report the counterintuitive finding that decreasing the batch size parameter improves the performance of many standard deep RL agents that use multi-step learning. It is well known that gradient variance decreases with increasing batch sizes, so obtaining improved performance by increasing variance on two fronts is a rather surprising finding. We conduct a broad set of experiments to better understand what we call the variance double-down phenomenon.
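The abstract contrasts two sources of gradient variance: the multi-step update horizon and the batch size. As a rough sketch of the first ingredient (not code from the paper; the function name, `gamma`, and the sample numbers are illustrative assumptions), an n-step target sums n discounted rewards and then bootstraps from a value estimate n steps ahead; larger n generally yields higher-variance targets:

```python
def n_step_target(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step return: r_t + gamma*r_{t+1} + ... + gamma^n * V(s_{t+n})."""
    target = bootstrap_value
    for r in reversed(rewards):  # fold rewards back-to-front, discounting at each step
        target = r + gamma * target
    return target

# A 3-step update horizon (n = 3) with a bootstrapped value estimate of 5.0:
print(n_step_target([1.0, 0.0, 2.0], bootstrap_value=5.0))
# = 1.0 + 0.99*0.0 + 0.99**2 * 2.0 + 0.99**3 * 5.0 ≈ 7.81
```

In a standard deep RL agent, a mini-batch of such targets is averaged into one gradient step, so shrinking the batch size raises gradient variance on top of the variance already introduced by a longer horizon n.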