

Poster

Semantic Re-tuning with Contrastive Tension

Fredrik Carlsson · Amaru C Gyllensten · Evangelia Gogoulou · Erik Y Hellqvist · Magnus Sahlgren

Keywords: [ Transformers ] [ Semantic Textual Similarity ] [ Language Modelling ] [ Sentence Embeddings ] [ Sentence Representations ] [ Pre-training ] [ Fine-tuning ]


Abstract:

Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate, through a layer-wise survey of the Semantic Textual Similarity (STS) correlations for multiple common Transformer language models, that pre-training objectives impose a significant task bias onto the final layers of these models. We then propose a new self-supervised method called Contrastive Tension (CT) to counter such biases. CT frames the training objective as a noise-contrastive task between the final-layer representations of two independent models, which in turn makes the final-layer representations suitable for feature extraction. Results on multiple common unsupervised and supervised STS tasks indicate that CT outperforms the previous state of the art (SOTA), and when combining CT with supervised data we improve upon previous SOTA results by large margins.
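The sketch below illustrates how a noise-contrastive objective between two independent copies of a pre-trained Transformer can be set up, as the abstract describes. The specifics here are assumptions for illustration only: the base model name, mean pooling of the final layer into a sentence embedding, dot-product scoring with a logistic (binary cross-entropy) loss, and the use of a handful of negative sentences per positive pair are not taken from the abstract.

```python
# Minimal sketch of a Contrastive Tension-style objective (assumed details, see lead-in).
import copy
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed base model, purely for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model_a = AutoModel.from_pretrained(MODEL_NAME)  # two independent models with the same init
model_b = copy.deepcopy(model_a)

bce = nn.BCEWithLogitsLoss()


def embed(model, sentences):
    """Mean-pool the final-layer token representations into sentence embeddings (assumed pooling)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state             # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)           # (batch, dim)


def ct_loss(anchor_sentence, negative_sentences):
    """Positive pair: the same sentence through both models (label 1).
    Negative pairs: the anchor vs. other sentences (label 0)."""
    a = embed(model_a, [anchor_sentence] * (1 + len(negative_sentences)))
    b = embed(model_b, [anchor_sentence] + list(negative_sentences))
    logits = (a * b).sum(dim=-1)   # dot-product scores between paired embeddings
    labels = torch.zeros_like(logits)
    labels[0] = 1.0                # only the identical-sentence pair is positive
    return bce(logits, labels)


# Example update step; one of the two models would be kept for sentence embedding afterwards.
optimizer = torch.optim.Adam(
    list(model_a.parameters()) + list(model_b.parameters()), lr=1e-5
)
loss = ct_loss(
    "A cat sits on the mat.",
    ["The stock market fell today.",
     "Rain is expected tomorrow.",
     "He plays the violin."],  # randomly drawn negative sentences (assumed count)
)
loss.backward()
optimizer.step()
```

Because the positive pair is simply the same sentence encoded by two different models, no labeled data is required, which is what makes the objective self-supervised.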
