

Poster

Strong Preferences Affect the Robustness of Preference Models and Value Alignment

Ziwei Xu · Mohan Kankanhalli

Hall 3 + Hall 2B #526
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Value alignment, which aims to ensure that large language models (LLMs) and other AI agents behave in accordance with human values, is critical for ensuring the safety and trustworthiness of these systems. A key component of value alignment is the modeling of human preferences as a representation of human values. In this paper, we investigate the robustness of value alignment by examining the sensitivity of preference models. Specifically, we ask: how do changes in the probabilities of some preferences affect the predictions of these models for other preferences? To answer this question, we theoretically analyze the robustness of widely used preference models by examining their sensitivity to minor changes in the preferences they model. Our findings reveal that, in the Bradley-Terry and the Plackett-Luce models, the probability of a preference can change significantly as other preferences change, especially when those preferences are dominant (i.e., with probabilities near zero or one). We identify specific conditions under which this sensitivity becomes significant for these models and discuss the practical implications for the robustness and safety of value alignment in AI systems.
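To make the sensitivity claim concrete, here is a minimal numerical sketch; it is not taken from the paper, and the items A, B, C and the probability values are assumptions chosen for illustration. Under the Bradley-Terry model, logit p(A≻C) = logit p(A≻B) + logit p(B≻C), so when p(A≻B) is dominant (near zero or one), a small absolute change in it corresponds to a large logit shift and can move p(A≻C) substantially.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def p_a_over_c(p_ab, p_bc):
    # Under Bradley-Terry, logit p(A>C) = logit p(A>B) + logit p(B>C),
    # since both sides reduce to the reward difference r_A - r_C.
    return sigmoid(logit(p_ab) + logit(p_bc))

# Case 1: moderate preference p(A>B); a +0.009 change barely moves p(A>C).
print(p_a_over_c(0.600, 0.400), "->", p_a_over_c(0.609, 0.400))

# Case 2: dominant preference p(A>B) near one; the same +0.009 change is a
# large logit shift, so p(A>C) jumps from 0.50 to roughly 0.91.
print(p_a_over_c(0.990, 0.010), "->", p_a_over_c(0.999, 0.010))
```

In the second case a 0.009 change in p(A≻B) moves p(A≻C) from 0.50 to about 0.91, whereas the same change to a moderate preference moves it by less than 0.01, illustrating (under these assumed values) why near-deterministic preferences make the model's other predictions sensitive.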
