Lagrangian Model Based Reinforcement Learning

NeurIPS 2022, Dec 2, 2022

About

One of the drawbacks of traditional RL algorithms is their poor sample efficiency. In robotics, collecting large amounts of training data on physical robots is impractical. One approach to improving sample efficiency is model-based RL: we learn a model of the environment, essentially its transition dynamics and reward function, and use it to generate imaginary trajectories with which we update the policy. Intuitively, better environment models should yield better model-based RL. There has recently been growing interest in developing better deep-neural-network dynamics models for physical systems through better inductive biases. We investigate whether such physics-informed dynamics models can also improve model-based RL. We focus on robotic systems undergoing rigid-body motion, exploit the structure of rigid-body dynamics to learn Lagrangian neural networks, and use them within a model-based RL algorithm. We find that our Lagrangian model-based RL approach achieves higher average return and better sample efficiency in complex environments than standard model-based RL and state-of-the-art model-free algorithms such as Soft Actor-Critic (SAC).
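As a rough illustration of the idea, the PyTorch sketch below shows how a Lagrangian neural network can serve as the dynamics model inside a model-based RL loop: a network learns a scalar Lagrangian L(q, q_dot), and accelerations are recovered from the Euler-Lagrange equations rather than predicted directly, so the learned dynamics respect rigid-body structure by construction. This is a minimal sketch, not the authors' implementation; the class name, network sizes, Softplus activations, the 1e-6 ridge term, and the explicit-Euler integrator are illustrative assumptions.

import torch
from torch import nn
from torch.func import grad, jacrev, vmap


class LagrangianDynamics(nn.Module):
    """Learns a scalar Lagrangian L(q, q_dot); accelerations follow
    from the Euler-Lagrange equations, so rollouts obey rigid-body
    structure by construction."""

    def __init__(self, n_dof: int, hidden: int = 128):
        super().__init__()
        self.n_dof = n_dof
        self.net = nn.Sequential(
            nn.Linear(2 * n_dof, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def _lagrangian(self, q, q_dot):
        # Scalar Lagrangian for a single (unbatched) state.
        return self.net(torch.cat([q, q_dot])).squeeze()

    def accel(self, q, q_dot, tau):
        """Batched accelerations from the Euler-Lagrange equations:
            M(q, q_dot) q_ddot = tau + dL/dq - (d2L / dq dq_dot) q_dot
        where M = d2L / dq_dot^2 plays the role of the mass matrix."""
        def single(q, q_dot, tau):
            dL_dq = grad(self._lagrangian, argnums=0)(q, q_dot)
            # Mass matrix and Coriolis-like term via second derivatives.
            M = jacrev(grad(self._lagrangian, argnums=1), argnums=1)(q, q_dot)
            C = jacrev(grad(self._lagrangian, argnums=1), argnums=0)(q, q_dot)
            # Small ridge term keeps the solve stable early in training.
            M = M + 1e-6 * torch.eye(self.n_dof)
            return torch.linalg.solve(M, tau + dL_dq - C @ q_dot)
        return vmap(single)(q, q_dot, tau)

    def step(self, q, q_dot, tau, dt: float = 0.01):
        # One explicit-Euler step; imagined rollouts chain this call.
        q_ddot = self.accel(q, q_dot, tau)
        return q + dt * q_dot, q_dot + dt * q_ddot

In a model-based RL loop of the kind the abstract describes, such a model would be fit by regression on transitions collected from the real environment (together with a separate learned reward function), and the imagined rollouts produced by repeated step calls would then be used to update the policy, e.g. with an actor-critic learner such as SAC.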
