

Poster

Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment

Dongyoung Kim · Kimin Lee · Jinwoo Shin · Jaehyung Kim

Hall 3 + Hall 2B #541
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 5A
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Aligning large language models (LLMs) with human preferences has become a key component of obtaining state-of-the-art performance, but constructing a large human-annotated preference dataset incurs a huge cost. To tackle this problem, we propose a new framework, Spread Preference Annotation with direct preference judgment (SPA), which boosts the alignment of LLMs using only a very small amount of human-annotated preference data. Our key idea is to leverage the human prior knowledge within the small (seed) data and progressively improve the alignment of the LLM by iteratively generating responses and learning from them with self-annotated preference data. Specifically, we propose to derive the preference label from the logits of the LLM to explicitly extract the model's inherent preference. Compared to previous approaches using external reward models or implicit in-context learning, we observe that the proposed approach is significantly more effective. In addition, we introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within the generated preference data. Our experimental results demonstrate that the proposed framework significantly boosts the alignment of LLMs. For example, we achieve superior alignment performance on AlpacaEval 2.0 using only 3.3% of the ground-truth preference labels in the UltraFeedback data, compared to cases using the entire data or state-of-the-art baselines.
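The abstract describes deriving preference labels directly from the model's logits rather than from an external reward model. A minimal sketch of one way such direct preference judgment could work, assuming a DPO-style implicit reward (a standard formulation: beta times the log-probability ratio between the policy and a reference model); the function names, the `beta` value, and the example log-probabilities are illustrative assumptions, not the paper's exact method:

```python
import math

def implicit_reward(policy_logp: float, ref_logp: float, beta: float = 0.1) -> float:
    """DPO-style implicit reward for a response y given prompt x:
    beta * (log pi(y|x) - log pi_ref(y|x)).
    Inputs are total sequence log-probabilities under each model."""
    return beta * (policy_logp - ref_logp)

def judge_preference(logp_a_policy: float, logp_a_ref: float,
                     logp_b_policy: float, logp_b_ref: float,
                     beta: float = 0.1):
    """Self-annotate a preference between responses A and B using only
    the models' own log-probabilities (no external reward model)."""
    r_a = implicit_reward(logp_a_policy, logp_a_ref, beta)
    r_b = implicit_reward(logp_b_policy, logp_b_ref, beta)
    # Bradley-Terry style soft label: probability that A is preferred,
    # a sigmoid of the implicit-reward margin.
    p_a_preferred = 1.0 / (1.0 + math.exp(-(r_a - r_b)))
    chosen = "A" if p_a_preferred >= 0.5 else "B"
    return chosen, p_a_preferred

# Illustrative (made-up) sequence log-probs: A has gained probability
# relative to the reference model, B has lost probability.
chosen, p = judge_preference(-10.0, -12.0, -15.0, -14.0)
```

The soft label `p_a_preferred` could also serve as a confidence signal when filtering or down-weighting noisy self-annotated pairs, in the spirit of the noise-aware learning the abstract mentions.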
