Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

Ningyu Zhang · Luoqiu Li · Xiang Chen · Shumin Deng · Zhen Bi · Chuanqi Tan · Fei Huang · Huajun Chen

Keywords: few-shot learning

Abstract


Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners. However, their effectiveness depends mainly on scaling the model parameters and on careful prompt design, which hinders their adoption in most real-world applications. This study proposes a novel pluggable, extensible, and efficient approach named DifferentiAble pRompT (DART), which can convert small language models into better few-shot learners. The main principle behind this approach is to reformulate a potential natural language processing task as the task of a pre-trained language model and to differentiably optimize the prompt template as well as the target label with backpropagation. Furthermore, the proposed approach can be: (i) plugged into any pre-trained language model; (ii) extended to widespread classification tasks. A comprehensive evaluation on standard NLP tasks demonstrates that the proposed approach achieves better few-shot performance.
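To make the main principle concrete, the following is a minimal, illustrative sketch of differentiable prompt tuning with a BERT-style masked language model: a continuous template and continuous label embeddings are appended to the input and optimized by backpropagation against the hidden state at a [MASK] position. This is not the authors' released implementation; the model name, number of template tokens, learning rate, and the `step` helper are assumptions made for illustration, and for brevity only the template and label embeddings are updated here.

```python
# Illustrative sketch of differentiable prompt tuning in the spirit of DART.
# Assumptions (not from the paper's released code): a BERT-style masked LM,
# a binary classification task, and hypothetical hyperparameters below.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"   # pluggable: any masked-LM checkpoint works
N_PROMPT = 4                       # number of trainable pseudo-template tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
embed = model.get_input_embeddings()
hidden = embed.embedding_dim

# Continuous template and label embeddings, optimized with backpropagation
# instead of hand-crafted prompt words and verbalizer tokens.
template = nn.Parameter(torch.randn(N_PROMPT, hidden) * 0.02)
label_emb = nn.Parameter(torch.randn(2, hidden) * 0.02)   # one vector per class
optimizer = torch.optim.AdamW([template, label_emb], lr=1e-3)

def step(texts, gold_labels):
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    tok_emb = embed(enc.input_ids)                               # (B, L, H)
    bsz = tok_emb.size(0)
    prompt = template.unsqueeze(0).expand(bsz, -1, -1)           # (B, P, H)
    mask = embed(torch.full((bsz, 1), tokenizer.mask_token_id))  # (B, 1, H)
    # Input layout: [original tokens][trainable template][MASK]
    inputs = torch.cat([tok_emb, prompt, mask], dim=1)
    attn = torch.cat(
        [enc.attention_mask, torch.ones(bsz, N_PROMPT + 1, dtype=torch.long)],
        dim=1,
    )
    out = model(inputs_embeds=inputs, attention_mask=attn)
    mask_state = out.hidden_states[-1][:, -1]       # hidden state at the [MASK]
    logits = mask_state @ label_emb.t()             # score each trainable label
    loss = nn.functional.cross_entropy(logits, torch.tensor(gold_labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage (labels: 0 = negative, 1 = positive).
print(step(["a waste of time .", "an absolute delight ."], [0, 1]))
```

Because the template and label vectors live in the model's embedding space rather than in the vocabulary, they can be tuned end-to-end on a handful of examples, which is what makes the approach pluggable across masked language models and extensible to other classification tasks.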
