Target-Side Input Augmentation for Sequence to Sequence Generation

Shufang Xie · Ang Lv · Yingce Xia · Lijun Wu · Tao Qin · Tie-Yan Liu · Rui Yan

Keywords: data augmentation

Poster: Spot E1 in Virtual World, Mon 25 Apr, 6:30–8:30 p.m. PDT

Abstract


Autoregressive sequence generation, a prevalent task in machine learning and natural language processing, generates each target token conditioned on both the source input and the previously generated target tokens. Previous data augmentation methods, which have been shown to be effective for this task, mainly enhance the source inputs (e.g., by injecting noise into the source sequence via random swapping or masking, or by back translation) while overlooking target-side augmentation. In this work, we propose a target-side augmentation method for sequence generation. During training, we use the decoder output probability distributions as soft indicators, which are multiplied with the target token embeddings to build pseudo tokens. These soft pseudo tokens are then used as target tokens to enhance training. We conduct comprehensive experiments on various sequence generation tasks, including dialog generation, machine translation, and abstractive summarization. Without using any extra labeled data or introducing additional model parameters, our method significantly outperforms strong baselines. The code is available at
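The core operation described in the abstract, multiplying the decoder's output probability distribution with the target token embedding matrix to form a soft pseudo token, can be sketched as follows. This is a minimal NumPy illustration based only on the abstract's description, not the authors' released implementation; the function name and shapes are assumptions.

```python
import numpy as np

def soft_pseudo_tokens(logits, embedding_matrix):
    """Build soft pseudo tokens as probability-weighted sums of embeddings.

    logits:           (batch, tgt_len, vocab) decoder output scores
    embedding_matrix: (vocab, emb_dim) target token embedding table
    returns:          (batch, tgt_len, emb_dim) soft pseudo token embeddings
    """
    # Numerically stable softmax over the vocabulary dimension,
    # turning decoder scores into a distribution over target tokens.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Each soft pseudo token is the expectation of token embeddings
    # under the decoder's predicted distribution:
    # (batch, tgt_len, vocab) @ (vocab, emb_dim) -> (batch, tgt_len, emb_dim)
    return probs @ embedding_matrix
```

In a training loop, these soft embeddings would replace (or be mixed with) the ground-truth target token embeddings fed to the decoder, so the model is trained on slightly perturbed, distribution-aware targets without any extra parameters.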
