Most research on communication emergence between reinforcement learning (RL) agents explores unsituated communication in one-step referential tasks. These tasks are not temporally interactive and lack the time pressures typically present in natural communication and language learning. In such settings, agents can successfully learn to communicate, but they do not learn to exchange information concisely: they tend towards over-communication and an anti-efficient encoding. In our work, we introduce situated communication by imposing an opportunity cost on communication: the acting agent has to forgo an action to solicit information from its advisor. Situated communication mimics the external pressure of passing time in real-world communication. We compare language emergence under this pressure against language learning with an internal cost on articulation, implemented as a per-message penalty. We find that while both pressures can disincentivise over-communication, situated communication does so more effectively and, unlike the internal pressure, does not negatively impact communication emergence. Implementing an opportunity cost on communication may be key to shaping language properties and incentivising concise information sharing between artificial agents.