Multi-Agent: Communication Learning - Robotics Institute Carnegie Mellon University
Project Head: Benjamin (Ben) Freed

Efficient inter-agent communication is an important requirement for cooperative multi-agent robotics tasks. While multi-agent robotics is our target application, we believe this work will also benefit distributed computing. Regardless of domain, the rate at which information can be transferred between robots or computing nodes is often a speed bottleneck.

Techniques exist for learning communication protocols in such bandwidth-limited applications; however, these approaches tend to converge slowly, requiring large amounts of computational power and/or data. To address this problem, we have developed a differentiable communication procedure in which messages are discretized to variable levels of precision, so as to balance performance against communication cost. Gradients can thus be backpropagated through the communication channel, enabling rapid, efficient, and robust communication learning. Because it uses discrete messages, this approach naturally incorporates bandwidth limitations and is ideally suited to real-world (digital) communication networks.

To enable communicating agents to further reduce the amount of information they transmit, we have introduced a variable-length message code that gives agents a means to modulate the number of bits they send to their neighbors. This message-length modulation, combined with a novel message-length penalty objective, encourages agents to send short messages when possible, minimizing their communication requirements while still effectively solving their task.

We have evaluated these contributions on both a partially observable reinforcement learning task involving robot navigation and a supervised learning task with graph neural networks that models distributed computing. In both tasks, our discrete differentiable communication approach enables communication learning with convergence rates comparable to approaches that transmit real-valued messages, which have been shown to converge much faster than typical discrete messaging approaches.
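To make the two ideas above concrete, the following is a minimal NumPy sketch, not the project's actual implementation: a `quantize` function that discretizes a bounded message to a chosen bit precision (in training, a straight-through estimator would pass gradients through the rounding unchanged, which is what makes the channel differentiable), and a hypothetical `message_length_penalty` term that adds the average number of transmitted bits to the task loss. The function names and the trade-off weight are illustrative assumptions.

```python
import numpy as np

def quantize(msg, num_bits):
    """Discretize a message in [-1, 1] to 2**num_bits evenly spaced levels.

    Illustrative sketch: during training, a straight-through estimator
    would treat this rounding as the identity in the backward pass, so
    gradients flow through the (digital) channel unchanged.
    """
    levels = 2 ** num_bits
    # Map [-1, 1] -> integer codes {0, ..., levels - 1}, then back.
    codes = np.clip(np.round((msg + 1.0) / 2.0 * (levels - 1)), 0, levels - 1)
    return codes / (levels - 1) * 2.0 - 1.0

def message_length_penalty(bits_per_message, weight=0.01):
    """Hypothetical penalty added to the task loss: the average number of
    bits each agent sends, scaled by a trade-off weight. Minimizing the
    combined objective encourages agents to send short messages when the
    task allows it."""
    return weight * np.mean(bits_per_message)

msg = np.array([-0.73, 0.05, 0.92])
coarse = quantize(msg, num_bits=2)  # 4 levels: fewer bits, more error
fine = quantize(msg, num_bits=8)    # 256 levels: more bits, less error
```

Higher precision reduces quantization error at the cost of more bits on the wire; letting agents modulate `num_bits` per message, while penalizing the average, is the trade-off the variable-length code is designed to learn.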
Additionally, we have found that in the supervised learning task, our approach for encouraging limited communication achieves comparable levels of validation accuracy with up to a factor of 34 fewer bits exchanged, compared to an approach in which nodes communicate 32-bit-precision messages. This indicates that our approach provides an effective means to rapidly learn efficient communication in a distributed computing setting.
