
Expressive Attentional Communication Learning using Graph Neural Networks

Master's Thesis, Tech. Report CMU-RI-TR-24-51, July 2024

Abstract

Multi-agent reinforcement learning presents hurdles beyond those of single-agent reinforcement learning, such as non-stationarity, which make learning effective decentralized cooperative policies from an agent's local state extremely challenging. Effective communication to share information and coordinate is vital for agents to work together and solve cooperative tasks, as the ubiquity of communication in nature highlights. Hence, a framework for communication between agents can potentially alleviate the problems of non-stationarity and partial observability while remaining highly scalable. This work examines graph neural networks (GNNs), whose message-passing mechanisms synergize well with differentiable communication learning (CL) methods. We investigate the inherent limitations of attention-based GNNs in terms of their expressive power and propose a new GNN, the Graph Attention Isomorphism Network (GAIN). We evaluate GAIN on the Open Graph Benchmark and show that it outperforms state-of-the-art GNNs on various graph, link, and node property prediction tasks across a GNN design space using GraphGym. We then incorporate GAIN into a simple architecture called the Graph Communication Network (GCNet) and evaluate it on tasks in the StarCraft Multi-Agent Challenge, showing that it outperforms GCNet variants built on state-of-the-art GNNs as well as other baseline CL methods.
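
To make the central idea concrete, the sketch below shows one plausible way to combine GAT-style attention over neighbors with a GIN-style injective update (a learnable (1 + eps) self-term followed by an MLP). This is a minimal illustration of attention-plus-injective-aggregation in PyTorch, not the thesis's actual GAIN layer; the class name AttentiveInjectiveLayer and all hyperparameters are assumptions for the example.

# Minimal sketch (PyTorch): attention-weighted neighbor aggregation combined with a
# GIN-style (1 + eps) self-term and MLP update. Hypothetical illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveInjectiveLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)          # scores pairs of node features
        self.eps = nn.Parameter(torch.zeros(1))    # learnable self-weighting, as in GIN
        self.mlp = nn.Sequential(                  # update function applied after aggregation
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node/agent features; adj: (N, N) 0/1 adjacency without self-loops
        n = x.size(0)
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        scores = self.attn(pairs).squeeze(-1)                   # (N, N) raw attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))    # attend only to neighbors
        alpha = torch.nan_to_num(F.softmax(scores, dim=-1))     # isolated nodes get zero weight
        neigh = alpha @ x                                       # attention-weighted neighbor sum
        return self.mlp((1 + self.eps) * x + neigh)             # GIN-style injective combine


if __name__ == "__main__":
    x = torch.randn(4, 16)                       # e.g. 4 agents with 16-dim local features
    adj = (torch.rand(4, 4) > 0.5).float()
    adj.fill_diagonal_(0)
    out = AttentiveInjectiveLayer(16)(x, adj)
    print(out.shape)                             # torch.Size([4, 16])

In a communication-learning setting, such a layer could be applied to the graph of agents, with the aggregated neighbor messages acting as the learned communication channel; stacking layers would propagate information over multiple hops.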

BibTeX

@mastersthesis{Chong-2024-142061,
author = {Yu Quan Chong},
title = {Expressive Attentional Communication Learning using Graph Neural Networks},
year = {2024},
month = {July},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-24-51},
keywords = {Graph Neural Networks, Communications Learning, Multi-Agent Reinforcement Learning},
}