Poster

Bayesian WeakS-to-Strong from Text Classification to Generation

Ziyun Cui · Ziyang Zhang · Guangzhi Sun · Wen Wu · Chao Zhang

Hall 3 + Hall 2B #531
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Advances in large language models raise the question of how alignment techniques will adapt as models become increasingly complex and humans can only supervise them weakly. Weak-to-Strong mimics such a scenario, where supervision from a weak model attempts to harness the full capabilities of a much stronger model. This work extends Weak-to-Strong to WeakS-to-Strong by exploring an ensemble of weak models that simulates the variability in human opinions. Confidence scores are estimated using a Bayesian approach to guide WeakS-to-Strong generalization. Furthermore, we extend the application of WeakS-to-Strong from text classification tasks to text generation tasks, where more advanced supervision strategies are investigated. Moreover, direct preference optimization is applied to advance the student model's preference learning, beyond the basic learning framework of teacher forcing. Results demonstrate the effectiveness of the proposed approach for the reliability of a strong student model, showing potential for superalignment.
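The core idea of supervising with a weak-model ensemble can be sketched as follows. This is an illustrative simplification, not the paper's actual method: here each weak model's per-class probabilities are averaged with weights given by per-model confidence scores (which the paper estimates via a Bayesian approach; the scores below are simply assumed inputs), yielding soft labels for training the strong student.

```python
import numpy as np

def combine_weak_labels(weak_probs, confidences):
    """Fuse class probabilities from several weak supervisors into one
    soft label per example, weighting each weak model by a confidence
    score (assumed to be given; the paper derives such scores with a
    Bayesian approach)."""
    weak_probs = np.asarray(weak_probs, dtype=float)  # (n_models, n_examples, n_classes)
    w = np.asarray(confidences, dtype=float)          # (n_models,)
    w = w / w.sum()                                   # normalize model weights
    # Confidence-weighted average over the model axis.
    soft = np.einsum("m,mec->ec", w, weak_probs)
    # Renormalize so each example's soft label sums to 1.
    return soft / soft.sum(axis=1, keepdims=True)

# Two weak models disagree on one binary example; equal confidence
# yields the plain average of their predictions.
soft = combine_weak_labels(
    [[[0.8, 0.2]], [[0.4, 0.6]]],  # two models, one example, two classes
    [0.5, 0.5],
)
```

The strong student would then be trained on these soft labels (e.g. with a cross-entropy loss), rather than on any single weak model's hard predictions.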