Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning

Dec 2, 2022

About

We present a novel multi-agent RL approach, Selective Multi-Agent PER, in which agents share a limited number of the transitions they observe during training with other agents. Agents select which transitions to share based on their TD-error, following a heuristic similar to the one used in (single-agent) Prioritized Experience Replay. The intuition is that even a small number of relevant experiences from other agents can help each agent learn. Unlike many other multi-agent RL algorithms, this approach allows for largely decentralized training, requiring only a limited communication channel between agents. We show that our approach outperforms a no-sharing decentralized training baseline. Further, sharing only a small number of experiences outperforms sharing all experiences between agents, and the performance uplift from selective experience sharing is robust across a range of hyperparameters.
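To make the selection rule concrete, here is a minimal sketch of the core idea in Python. Each agent keeps a local replay buffer and broadcasts a transition to its peers only when the magnitude of its TD-error is high. The tabular Q-learning setup, the class name `SelectiveSharingAgent`, and the fixed `share_threshold` are all illustrative assumptions; the abstract only specifies that transitions are chosen by TD-error, not the exact selection criterion or learner.

```python
import random
from collections import deque

import numpy as np


class SelectiveSharingAgent:
    """Agent with a local replay buffer and TD-error-based sharing.

    Hypothetical sketch: the tabular Q-learner and the threshold rule
    are assumptions, not the authors' implementation.
    """

    def __init__(self, n_states, n_actions, buffer_size=10_000,
                 share_threshold=1.0, gamma=0.99, lr=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.buffer = deque(maxlen=buffer_size)
        # Only transitions with |TD-error| above this are shared (assumed rule).
        self.share_threshold = share_threshold
        self.gamma = gamma
        self.lr = lr

    def td_error(self, s, a, r, s_next, done):
        """One-step TD-error of a transition under the current Q-table."""
        target = r + (0.0 if done else self.gamma * self.q[s_next].max())
        return target - self.q[s, a]

    def observe(self, transition, peers):
        """Store a transition locally; share it only if its TD-error is large."""
        s, a, r, s_next, done = transition
        self.buffer.append(transition)
        if abs(self.td_error(s, a, r, s_next, done)) > self.share_threshold:
            for peer in peers:
                peer.receive(transition)

    def receive(self, transition):
        """Add a transition shared by another agent to the local buffer."""
        self.buffer.append(transition)

    def learn(self, batch_size=32):
        """One Q-learning update on a uniform sample from the local buffer."""
        if len(self.buffer) < batch_size:
            return
        for s, a, r, s_next, done in random.sample(list(self.buffer), batch_size):
            self.q[s, a] += self.lr * self.td_error(s, a, r, s_next, done)
```

Note that the only inter-agent coupling in this sketch is the `peers` list passed to `observe`, which mirrors the limited communication channel the abstract describes: training otherwise remains fully decentralized.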
