Dec 14, 2019

Reinforcement learning is now the dominant paradigm for how an agent learns to interact with the world in order to achieve long-term objectives. Here, policy gradient methods are among the most effective approaches to challenging reinforcement learning problems, because they: are applicable to any differentiable policy parameterization; admit easy extensions to function approximation; easily incorporate structured state and action spaces; and are easy to implement in a simulation-based, model-free manner. However, little is known about even their most basic theoretical convergence properties, including: (i) do they converge to a globally optimal solution, say with a sufficiently rich policy class? (ii) how well do they cope with approximation error, say due to using a class of neural policies? (iii) what is their finite-sample complexity? This talk will survey a number of results on these basic questions, highlighting the interplay of theory, algorithm design, and practice.
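To make "simulation-based, model-free" concrete, here is a minimal sketch of the classical REINFORCE score-function estimator, grad J(theta) = E[G * grad log pi_theta(a)], which underlies the policy gradient methods the abstract refers to. The toy two-armed bandit, the reward values, and all variable names are illustrative assumptions, not material from the talk.

```python
# Minimal REINFORCE sketch on a hypothetical two-armed bandit (illustrative
# only). A softmax policy over 2 actions is trained purely from sampled
# rewards, with no model of the environment.
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([1.0, 0.0])  # assumed toy setup: action 0 is optimal
theta = np.zeros(2)                  # differentiable policy parameters (logits)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

lr = 0.1
for step in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                            # sample a ~ pi_theta
    g = true_rewards[a] + 0.1 * rng.standard_normal()  # noisy sampled return G
    grad_log_pi = -pi                                  # d/d theta_j log pi(a)
    grad_log_pi[a] += 1.0                              #   = 1{j=a} - pi_j
    theta += lr * g * grad_log_pi                      # stochastic gradient ascent

print("learned policy:", softmax(theta))  # probability mass concentrates on action 0
```

Note the estimator only needs the gradient of log pi_theta and sampled returns, which is why the approach extends to any differentiable policy class, including neural policies; the talk's questions (i)-(iii) ask what guarantees such updates actually carry.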

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Following the conference, there are workshops, which provide a less formal setting.

