Jul 12, 2020
Reinforcement learning (RL) applies to control problems with large state and action spaces, hence it is natural to consider RL with a parametric model. In this paper we focus on finite-horizon episodic RL where the transition model admits a nonlinear parametrization P_θ, a special case of which is the linear parametrization P_θ = ∑_{i=1}^d θ_i P_i. We propose an upper-confidence model-based RL algorithm with value-targeted model parameter estimation. The algorithm updates the estimate of θ by solving a nonlinear regression problem that uses the latest value estimate as the regression target. We demonstrate the efficiency of our algorithm by proving a bound on its expected regret which, in the special case of linear parametrization, takes the form Õ(d√(H³T)), where H, T, and d are the horizon, the total number of steps, and the dimension of θ, respectively. This regret bound is independent of the total number of states or actions, and is close to the lower bound Ω(√(HdT)). In the general nonlinear case, we handle the regret analysis using the concept of Eluder dimension proposed by Russo and Van Roy (2013).
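In the linear special case, the value-targeted regression step reduces to (regularized) least squares: for each observed transition (s_k, a_k, s'_k) with value estimate V_k, the k-th feature vector has components φ_k[i] = ⟨P_i(·|s_k, a_k), V_k⟩ and the target is y_k = V_k(s'_k). The sketch below illustrates only this estimation step; the function name and the ridge regularizer are illustrative choices, not part of the paper's specification.

```python
import numpy as np

def value_targeted_ls(features, targets, reg=1.0):
    """Ridge least-squares estimate of theta for the linear model
    P_theta = sum_i theta_i * P_i.

    features[k] = [ <P_i(.|s_k, a_k), V_k> for i in range(d) ]
                  (expected value of V_k under each basis model P_i)
    targets[k]  = V_k(s'_k)  (realized value at the observed next state)

    Solves argmin_theta  sum_k (features[k] @ theta - targets[k])^2
                         + reg * ||theta||^2.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(targets, dtype=float)
    d = X.shape[1]
    # Normal equations with ridge regularization for numerical stability.
    A = X.T @ X + reg * np.eye(d)
    return np.linalg.solve(A, X.T @ y)
```

On noiseless synthetic data generated from a known θ, this recovers θ up to the regularization bias; in the algorithm the estimate would then feed the construction of the confidence set used for optimistic planning.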
The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.