## Emergent Communication: New Frontiers

### Mikhail Noukhovitch · Roberto Dessi · Agnieszka Słowik · Kevin Denamganai · Niko Grupen · Mathieu Rita · Florian Strub

Fri 29 Apr, 5 a.m. PDT

Abstract:

Emergent Communication (EC) studies how agents learn to communicate by interacting with one another to solve collaborative tasks. EC has a long history in linguistics and the study of language evolution, but following breakthroughs in deep learning there has been an explosion of deep EC research. Early work focused on learning more complex and effective protocols for multi-agent reinforcement learning (MARL), but recent research has expanded in scope: inductive biases, population structures, measurements, and evolutionary biology. In parallel, new research has applied EC and its paradigm to practical problems in NLP, video games, and even networking. EC has significant potential to impact a wide range of disciplines both within AI (e.g. MARL, visual question answering, explainability, robotics) and beyond (e.g. sociolinguistics, cognitive science, philosophy of language), so the goal of this workshop is to push the boundaries of EC as a field and as a methodology. To achieve this, we propose a novel, discussion-focused workshop format and assemble speakers ranging from ML to cognitive science to philosophy and the arts. Our goal is to create a space for an interdisciplinary community, open new frontiers, and foster future research collaboration.


### Schedule

**Fri 5:00 a.m. - 5:05 a.m.** Opening Remarks (Live Talk)

**Fri 5:05 a.m. - 5:50 a.m.** Marco Baroni (ICREA / UPF) (Live Talk by Presenter over Zoom)

**Fri 5:52 a.m. - 5:54 a.m.** Emergent Communication for Understanding Human Language Evolution: What's Missing? (Paper Teaser)

Emergent communication protocols among humans and artificial neural network agents do not yet share the same properties and show some critical mismatches in results. We describe three important phenomena with respect to the emergence and benefits of compositionality: ease of learning, generalization, and group-size effects (i.e., larger groups create more systematic languages). The latter two are not fully replicated with neural agents, which hinders the use of neural emergent communication for language evolution research. We argue that one possible reason for these mismatches is that key cognitive and communicative constraints of humans are not yet integrated. Specifically, in humans, memory constraints and the alternation between the roles of speaker and listener underlie the emergence of linguistic structure, yet these constraints are typically absent in neural simulations. We suggest that introducing such communicative and cognitive constraints would promote more linguistically plausible behaviors in neural agents.

Lukas Galke · Yoav Ram · Limor Raviv

**Fri 5:52 a.m. - 5:54 a.m.** Emergent communication in human-machine games (Paper Teaser)

In this paper, we show how recurrent features in the Emergent Communication (EC) literature can be used to characterize a subset of human communication ability. We aim to bring emergent communication researchers closer together by demonstrating how diverse approaches can be considered subsets of the same problem, and by emphasizing the importance of including the human language evolution literature in this field.

Nicolo Brandizzi · Luca Iocchi

**Fri 5:54 a.m. - 5:56 a.m.** Categorial Grammar Induction as a Compositionality Measure for Emergent Languages in Signaling Games (Paper Teaser)

This paper proposes a method for investigating the syntactic structure of emergent languages using categorial grammar induction. Although the structural properties of emergent languages are an important topic, little has been done on syntax and its relation to semantics. Inspired by previous work on CCG induction for natural languages, we propose to induce categorial grammars from the sentence-meaning pairs of emergent languages. Since an emergent language born in a signaling game is represented as pairs of a message and a meaning, it is straightforward to extract sentence-meaning pairs to feed to categorial grammar induction. We also propose two compositionality measures based on the induced grammars. Our experimental results reveal that our measures can recognize compositionality. While correlating with the existing measure TopSim, our measures provide further insight into the compositional structure of emergent languages through the induced grammars.

Ryo Ueda · Taiga Ishii · Koki Washio · Yusuke Miyao

**Fri 5:56 a.m. - 5:58 a.m.** Turing Test via Emergent Communication in the Game of Werewolf (Paper Teaser)

Emergent communication can lead to more efficient problem-solving heuristics and more domain specificity than a handcrafted communication protocol, potentially directing autonomous agents towards unforeseen yet effective solutions. Previous research has investigated a social deduction game, called Werewolf, where two groups of autonomous agents, villagers and werewolves, interact in an environment named RLupus. We extend it to allow the agents to communicate over multiple rounds, and we evaluate their language and performance against the baseline environment. We show that agents develop a highly successful heuristic using a single-word vocabulary. They create an approach analogous to the Turing test that allows them to determine which agents are werewolves, which is the winning condition. Our experimental analysis shows that our approach speeds up the convergence of the agents towards effective communication strategies.

Olaf Lipinski · Adam Sobey · Timothy Norman · Federico Cerutti

**Fri 5:58 a.m. - 6:00 a.m.** Which Language Evolves Between Heterogeneous Agents? - Communicating Movement Instructions With Widely Different Time Scopes (Paper Teaser)

This paper studies the evolving communication between two agents, a speaker and a listener, in a plan-execution task in which the speaker must communicate the plan to the acting agent while the two operate on different time scales. We analyze the topographic similarity of the resulting language learned through the proposed imagination-based learning process. Because the speaker perceives the movement space strictly in absolute coordinates while the actor can only choose relative actions in the movement space, we can show that the structure of their emergent communication is not predestined: both relative and absolute encodings of desired movements develop by chance in this setting, but we can alter the odds by using a population of learners. We conclude that our imagination-based learning strategy successfully breaks the strict hierarchy between planner and executioner.

Marie Ossenkopf · Kevin Sebastian Luck · Kory Mathewson

**Fri 6:00 a.m. - 6:25 a.m.** Discussion Group 1

**Fri 6:25 a.m. - 6:55 a.m.** Coffee Break

**Fri 6:55 a.m. - 7:40 a.m.** Simon Kirby (Uni. of Edinburgh) (Live Talk by Presenter over Zoom)

**Fri 7:40 a.m. - 7:42 a.m.** What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability (Paper Teaser)

Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn for both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories of language evolution, second language learning, and the origin of linguistic diversity. In this preregistered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages, which were created by big or small groups in a previous communication experiment and varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn, and (b) whether languages created by bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. Furthermore, our results suggest that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.

Limor Raviv · Marianne de Heer Kloots · Antje Meyer

**Fri 7:42 a.m. - 7:44 a.m.** The Language Tool (Paper Teaser)

The idea of emergent communication is something I have been developing a theoretical framework for, from the vantage point of a philosopher of cognitive science, over the past few years. I have included here a blurb for the book I am currently writing and an excerpt from a recent paper, "The Language Tool", as my submission. I have another paper that develops the language-acquisition process under this framework further, but I am assuming that people will be giving presentations on their ideas and then leading discussion, so perhaps this is enough as a statement of the theme of the session.

Nancy Salay

**Fri 7:44 a.m. - 7:46 a.m.** Conversational grounding in emergent communication -- data and divergence (Paper Teaser)

We argue for a new research direction in emergent communication: combining work on Conversational Grounding with Symbol Grounding (SG+CG). We first present the fine-grained and targeted feedback signals provided by Conversational Grounding and discuss the potential advantages of such a combination. We argue that a key factor holding back research in this area is the lack of appropriate data in which divergent agents can resolve disagreements and errors, and we propose requirements and methods for new data collections enabling such work.

Oliver Lemon

**Fri 7:46 a.m. - 7:48 a.m.** Joining the Conversation: Towards Language Acquisition for Ad Hoc Team Play (Paper Teaser)

In this paper, we propose and consider the problem of cooperative language acquisition as a particular form of the ad hoc team play problem. We then present a probabilistic model for inferring a speaker's intentions and a listener's semantics from observing communications between a team of language users. This model builds on the assumptions that speakers engage in positive signalling and listeners exhibit positive listening, which is to say that messages convey hidden information to the listener, which then causes them to change their behaviour. Further, it accounts for potential sub-optimality in the speaker's ability to convey the right information (according to the given task). Finally, we discuss further work for testing and developing this framework.

Dylan Cope · Peter McBurney

**Fri 7:48 a.m. - 7:50 a.m.** Emergent Communication Fine-tuning (EC-FT) for Pretrained Language Models (Paper Teaser)

It has recently been argued that the currently dominant paradigm in NLP of pretraining on text-only corpora will not yield robust natural language understanding systems. One strain of this argumentation highlights the need for grounded, goal-oriented, and interactive language learning. In this position paper, we articulate how Emergent Communication (EC) can be used in conjunction with large pretrained language models as a Fine-Tuning (FT) step (hence, EC-FT) in order to provide them with supervision from such learning scenarios. We discuss methodological issues and difficulties with making this work, then illustrate the overall idea with a case study in unsupervised machine translation, before concluding with a discussion on the relation to multimodal pretraining.

Shane Steinert-Threlkeld · Xuhui Zhou · Zeyu Liu · C. Downey

**Fri 7:50 a.m. - 7:52 a.m.** Varying meaning complexity to explain and measure compositionality (Paper Teaser)

Compositionality is assumed to be a key property of language, but it is hard to observe in language-emergence simulations. We posit that a common characteristic can underlie the emergence of intersective adjectives and argument structure: the complexity of meaning of the datapoints that agents discuss must vary. We show that this characteristic of the task facilitates the study of the emergent languages. Our first experimental results are promising.

Tom Bosc

**Fri 7:52 a.m. - 8:15 a.m.** Discussion Group 2

**Fri 8:15 a.m. - 8:45 a.m.** Coffee Break

**Fri 8:45 a.m. - 9:30 a.m.** Morning Panel: Marco Baroni, Simon Kirby, Natasha Jaques, Limor Raviv (Discussion Panel)

**Fri 9:30 a.m. - 10:15 a.m.** Lunch Break

**Fri 10:15 a.m. - 11:00 a.m.** Natasha Jaques (UC Berkeley / Google) (Live Talk by Presenter over Zoom)

**Fri 11:00 a.m. - 11:02 a.m.** Modeling Emergent Lexicon Formation with a Self-Reinforcing Stochastic Process (Paper Teaser)

We introduce FiLex, a self-reinforcing stochastic process which models finite lexicons in emergent-language experiments. The central property of FiLex is that it is a self-reinforcing process, paralleling the intuition that the more a word is used in a language, the more its use will continue. As a theoretical model, FiLex serves to both explain and predict the behavior of the emergent-language system. We empirically test FiLex's ability to capture the relationship between the emergent language's hyperparameters and the lexicon's Shannon entropy.

Brendon Boldt · David Mortensen

**Fri 11:02 a.m. - 11:04 a.m.** Sifting the Signal from the Noise (Paper Teaser)

Signaling games are useful for understanding how language emerges. In the standard models, the dynamics in some sense already know what the signals are, even if the signals do not yet have meaning. In this paper we develop a simple model, which we call an attention game, in which agents have to learn which feature in their environment is the signal. We demonstrate that simple reinforcement-learning agents can still learn to coordinate in contexts in which (i) the agents do not already know what the signal is and (ii) the other features in the agents' environment are uncorrelated with the signal. Furthermore, we show that, when other features are correlated with the signal, there is a surprising trade-off between learning to pay attention to the signal and success in action. We show that the mutual information between a signal and a feature plays a key role in governing the accuracy and attention of the agent.

Daniel Herrmann · Jacob VanDrunen

**Fri 11:04 a.m. - 11:06 a.m.** Learning To Ground Decentralized Multi-Agent Communication with Contrastive Learning (Paper Teaser)

For communication to happen successfully, a common language is required so that agents understand the information communicated by one another. Inducing the emergence of a common language has been a difficult challenge for multi-agent learning systems. In this work, we introduce an alternative perspective on the communicative messages sent between agents, considering them as different incomplete views of the environment state. Based on this perspective, we propose a simple approach to induce the emergence of a common language by maximizing the mutual information between messages of a given trajectory in a self-supervised manner. By evaluating our method in communication-essential environments, we empirically show that it leads to better learning performance and speed, and learns a more consistent common language than existing methods, without introducing additional learning parameters.

Yat Long (Richie) Lo · Biswa Sengupta

**Fri 11:06 a.m. - 11:08 a.m.** Emergent Covert Signaling in Adversarial Reference Games (Paper Teaser)

Emergent communication is often studied in dyadic, fully cooperative reference games, yet many real-world scenarios involve multiparty communication in adversarial settings. We introduce an *adversarial reference game*, where a speaker and listener must learn to generate referring expressions without leaking information to an adversary, and study the ability of emergent communication systems to learn *covert signaling* protocols on this task. We show that agents can develop covert signaling when given access to additional training time or shared knowledge over the adversary. Finally, we show that adversarial training results in emergent languages with fewer and more polysemous messages.

Dhara Yu · Jesse Mu · Noah Goodman

**Fri 11:08 a.m. - 11:10 a.m.** Situated Communication: A Solution to Over-communication between Artificial Agents (Paper Teaser)

Most research on communication emergence between reinforcement-learning (RL) agents explores unsituated communication in one-step referential tasks. These tasks are not temporally interactive and lack the time pressures typically present in natural communication and language learning. In such settings, agents can successfully learn to communicate, but they do not learn to exchange information concisely; they tend towards over-communication and an anti-efficient encoding. In our work, we introduce situated communication by imposing an opportunity cost on communication: the acting agent has to forgo an action to solicit information from its advisor. Situated communication mimics the external pressure of passing time in real-world communication. We compare language emergence under this pressure against language learning with an internal cost on articulation, implemented as a per-message penalty. We find that while both pressures can disincentivise over-communication, situated communication does so more effectively and, unlike the internal pressure, does not negatively impact communication emergence. Implementing an opportunity cost on communication might be key to shaping language properties and incentivising concise information sharing between artificial agents.

Aleksandra Kalinowska · Elnaz Davoodi · Florian Strub · Kory Mathewson · Todd Murphey · Patrick Pilarski

**Fri 11:10 a.m. - 11:35 a.m.** Discussion Group 3

**Fri 11:35 a.m. - 12:05 p.m.** Coffee Break

**Fri 12:05 p.m. - 12:50 p.m.** David J. Peterson & Jessie Sams (Stephen F. Austin State University) (Live Talk by Presenter over Zoom)

**Fri 12:50 p.m. - 1:05 p.m.** Short Coffee Break

**Fri 1:05 p.m. - 1:50 p.m.** Afternoon Panel: David J. Peterson, Jessie Sams, Kory Mathewson (Discussion Panel)

**Fri 1:50 p.m. - 2:00 p.m.** Final Remarks (Live Talk without Slides)
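Several of the teasers in the schedule above evaluate emergent languages with topographic similarity (TopSim): the Spearman rank correlation between pairwise distances in meaning space and the corresponding pairwise distances in message space, so that a compositional language (similar meanings get similar messages) scores near 1. The following is a minimal stdlib-only sketch of that standard definition; the helper names and the two-attribute toy language are illustrative, not taken from any of the papers above.

```python
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    # Levenshtein distance via dynamic programming (row-by-row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def hamming(u, v) -> int:
    # Attribute-wise mismatch count between two meaning vectors.
    return sum(x != y for x, y in zip(u, v))

def _ranks(xs):
    # 1-based ranks with ties assigned their average rank.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def topsim(meanings, messages) -> float:
    # Spearman correlation between pairwise meaning distances
    # and pairwise message distances, over all unordered pairs.
    pairs = list(combinations(range(len(meanings)), 2))
    d_meaning = [hamming(meanings[i], meanings[j]) for i, j in pairs]
    d_message = [edit_distance(messages[i], messages[j]) for i, j in pairs]
    rx, ry = _ranks(d_meaning), _ranks(d_message)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Toy perfectly compositional language: one symbol per attribute value.
meanings = [(c, s) for c in range(3) for s in range(3)]
messages = ["abc"[c] + "xyz"[s] for c, s in meanings]
print(round(topsim(meanings, messages), 3))  # prints 1.0 for this compositional code
```

Degenerate or holistic codes (e.g. one message reused for all meanings, or a random meaning-to-message mapping) drive this score toward zero, which is what makes TopSim usable as a compositionality diagnostic.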