
Affinity Workshop: Tiny Papers Poster Session 2

Self-Teaching Prompting for Multi-Intent Learning with Limited Supervision

Cheng Chen · Ivor Tsang

Halle B #347
Tue 7 May 7:30 a.m. PDT — 9:30 a.m. PDT


Multi-intent learning with limited supervision involves predicting the multiple intents of an utterance from only a few annotated samples. The task is motivated by the high cost and effort of annotating large datasets. To mitigate this, we propose utilising Large Language Models (LLMs) for annotation assistance. Although LLMs show promise, they suffer from response randomness, and prior prompting approaches are static and do not learn from the model's outputs. To address this, we propose `self-teaching prompting' (STP), a method that enables LLMs to iteratively learn from their consistent samples and refine their predictions over time. Our experiments on multi-intent datasets demonstrate that STP significantly improves response accuracy.
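The iterative loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `query_llm` is a hypothetical stand-in for a real LLM API call (stubbed here with deterministic noise to mimic response randomness), and the labels, sampling count, and consistency threshold are assumptions for the sake of the example. The core idea shown is sampling each utterance several times, keeping only predictions that pass a consistency check, and feeding those back as in-context demonstrations in later rounds.

```python
from collections import Counter

def query_llm(prompt, utterance, seed):
    # Hypothetical stand-in for an LLM API call; returns a comma-joined
    # intent set. Every third sample drops an intent to mimic randomness.
    truth = {
        "book a flight and a hotel": "BookFlight,BookHotel",
        "play some jazz and set an alarm": "PlayMusic,SetAlarm",
    }
    if seed % 3 == 0:
        return truth[utterance].split(",")[0]  # noisy, partial answer
    return truth[utterance]

def self_teaching_prompting(utterances, rounds=3, samples=5, threshold=0.6):
    """Iteratively promote consistent (utterance, intents) pairs to
    in-context demonstrations that shape prompts in later rounds."""
    demos = []        # consistent samples the model "teaches itself" with
    predictions = {}
    for _ in range(rounds):
        # Rebuild the prompt each round from the demonstrations so far.
        prompt = "Predict the intents.\n" + "".join(
            f"Utterance: {u}\nIntents: {y}\n" for u, y in demos)
        for u in utterances:
            if any(d[0] == u for d in demos):
                continue  # already resolved consistently
            answers = [query_llm(prompt, u, s) for s in range(samples)]
            label, count = Counter(answers).most_common(1)[0]
            predictions[u] = label
            if count / samples >= threshold:   # consistency check
                demos.append((u, label))       # promote to demonstration
    return predictions
```

With the stub above, three of five samples agree on the full intent set, so the majority answer passes the 0.6 consistency threshold and becomes a demonstration for subsequent rounds; utterances whose samples never agree would simply keep their current majority prediction.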
