Multi-Agent Transfer Learning via Temporal Contrastive Learning
Abstract
This paper introduces a novel transfer learning framework for deep multi-agent reinforcement learning. The approach combines goal-conditioned policies with temporal contrastive learning to automatically discover meaningful sub-goals: a goal-conditioned agent is pre-trained, fine-tuned on the target domain, and guided by a planning graph whose sub-goal nodes are constructed via contrastive learning. Experiments on multi-agent coordination tasks in Overcooked demonstrate improved sample efficiency, the ability to solve sparse-reward and long-horizon problems, and enhanced interpretability compared to baselines. These results highlight the effectiveness of integrating goal-conditioned policies with unsupervised temporal abstraction learning for complex multi-agent transfer learning. Compared to state-of-the-art baselines, the proposed method matches or exceeds their performance while requiring only 21.7% of the training samples.
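As a rough illustration of the temporal contrastive component described above, the sketch below (in PyTorch) pulls together embeddings of states that occur close in time within the same trajectory using an InfoNCE loss; clusters in the resulting embedding space can then serve as candidate sub-goals. The encoder architecture, window-based pair sampling, temperature, and all function names here are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StateEncoder(nn.Module):
    """Maps raw states into an embedding space for contrastive learning."""
    def __init__(self, state_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # Unit-norm embeddings so dot products behave as cosine similarities.
        return F.normalize(self.net(states), dim=-1)

def temporal_infonce_loss(encoder: StateEncoder,
                          anchors: torch.Tensor,
                          positives: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over temporal pairs: positives[i] is a state observed within a
    short time window of anchors[i] in the same trajectory; every other
    positive in the batch acts as a negative for anchor i."""
    z_a = encoder(anchors)                       # (B, D)
    z_p = encoder(positives)                     # (B, D)
    logits = z_a @ z_p.T / temperature           # (B, B) pairwise similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)  # diagonal = true pairs
    return F.cross_entropy(logits, labels)

Embeddings trained this way could then be clustered (e.g., with k-means) so that cluster representatives act as sub-goal nodes in the planning graph the abstract describes.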
BibTeX
@inproceedings{Zeng-2024-140973,
  author    = {Weihao Zeng and Joseph Campbell and Simon Stepputtis and Katia Sycara},
  title     = {Multi-Agent Transfer Learning via Temporal Contrastive Learning},
  booktitle = {Proceedings of MAD-GAMES: Multi-Agent Dynamic Games},
  year      = {2024},
  month     = {April},
  keywords  = {Reinforcement Learning, Transfer Learning, Contrastive Learning},
}