Poster
in
Workshop: First Workshop on Representational Alignment (Re-Align)

The Curious Case of Representational Alignment: Unravelling Visio-Linguistic Tasks in Emergent Communication

Tom Kouwenhoven · Max Peeperkorn · Bram Van Dijk · Stephan Raaijmakers · Tessa Verhoef

Keywords: [ representational alignment ] [ compositionality ] [ emergent communication ] [ reinforcement learning ]


Abstract:

Natural language has the universal properties of being compositional and grounded in the real world. A popular method to investigate the emergence of such linguistic properties is to simulate emergent communication setups with deep neural agents in referential games. Despite growing interest, these experiments have yielded mixed results compared to similar experiments addressing linguistic properties of human language. Various reasons for these discrepancies have been proposed; in this paper, we address another potential contributing factor. Specifically, we focus on the representational alignment between the agents' image representations and the alignment between those representations and the actual images. We first revisit and confirm that, in the commonly used setup, the emergent language does not appear to encode visual features, since the agents align their image representations with each other while losing connection to the input. We further identify an interaction between alignment and a common metric for compositionality, topographic similarity. We mitigate the alignment problem by introducing an alignment penalty and show that the agents still communicate effectively, yet do not develop a language that is grounded in images. Overall, our findings underscore critical differences between human and artificially emergent solutions and highlight the importance of representational alignment in simulations of language emergence.
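The abstract refers to topographic similarity, a standard compositionality metric in the emergent-communication literature: the rank correlation between pairwise distances in meaning space and pairwise distances in message space. The sketch below is an illustrative, stdlib-only implementation (the paper's actual distance functions and implementation are not given here; Hamming distance over attribute tuples and symbol strings is an assumption for the toy example).

```python
from itertools import combinations


def _ranks(values):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def _pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)


def topographic_similarity(meanings, messages, meaning_dist, message_dist):
    """Spearman correlation between pairwise meaning and message distances."""
    pairs = list(combinations(range(len(meanings)), 2))
    md = [meaning_dist(meanings[i], meanings[j]) for i, j in pairs]
    sd = [message_dist(messages[i], messages[j]) for i, j in pairs]
    return _pearson(_ranks(md), _ranks(sd))  # Spearman = Pearson on ranks


def hamming(a, b):
    """Count of mismatched positions in two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))


# Toy example: a perfectly compositional language, where each attribute
# maps to one symbol, attains the maximum topographic similarity of 1.0.
meanings = [(0, 0), (0, 1), (1, 0), (1, 1)]
messages = ["aa", "ab", "ba", "bb"]
rho = topographic_similarity(meanings, messages, hamming, hamming)
```

Scrambling the meaning-to-message mapping (e.g. swapping the messages for (0, 1) and (1, 1)) breaks the distance correspondence and drives the score down, which is why the metric is used as a proxy for compositional structure.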
