Poster

Emergent Communication at Scale

Rahma Chaabouni · Florian Strub · Florent Altché · Eugene Tarassov · Corentin Tallec · Elnaz Davoodi · Kory Mathewson · Olivier Tieleman · Angeliki Lazaridou · Bilal Piot

Keywords: [ multi-agent reinforcement learning ] [ emergent communication ] [ representation learning ]


Abstract:

Emergent communication aims to further our understanding of human language evolution and to build more efficient representations. We posit that reaching these goals will require scaling up, in contrast to a significant body of literature that focuses on small-scale setups designed to tease out desired properties of the emergent languages. We focus on three independent aspects of scaling: the dataset, task complexity, and population size. We provide a first set of results for large populations solving complex tasks on realistic large-scale datasets, as well as an easy-to-use codebase to enable further experimentation. On more complex tasks and datasets, we find that RL training can become unstable but responds well to established stabilization techniques. We also identify the need for a metric other than topographic similarity, which does not correlate with generalization performance when working with natural images; in this context, we probe ease-of-learnability and transfer methods to assess emergent languages. Finally, we observe that larger populations do not induce robust emergent protocols with high generalization performance, leading us to explore other ways to leverage populations, such as voting and imitation learning.
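Topographic similarity, the metric the abstract argues is insufficient for natural images, is conventionally computed as the Spearman correlation between pairwise distances in input space and pairwise edit distances between the corresponding messages. Below is a minimal Python sketch of that standard computation; it is not the paper's implementation, and the function names, the cosine input metric, and the `inputs`/`messages` interface are illustrative assumptions.

```python
# Minimal sketch of topographic similarity (not the paper's code).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    m, n = len(a), len(b)
    dp = np.arange(n + 1)
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[n]

def topographic_similarity(inputs, messages):
    """Spearman rho between input distances and message edit distances.

    inputs:   (N, D) array of input representations (e.g. image features).
    messages: length-N list of symbol sequences emitted by the speaker.
    """
    input_dists = pdist(inputs, metric="cosine")
    message_dists = np.array(
        [edit_distance(messages[i], messages[j])
         for i in range(len(messages))
         for j in range(i + 1, len(messages))])
    rho, _ = spearmanr(input_dists, message_dists)
    return rho
```

For symbolic inputs with discrete attributes, Hamming distance is a common drop-in replacement for the cosine metric on `inputs`.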
