

Poster

More RLHF, More Trust? On The Impact of Preference Alignment On Trustworthiness

Aaron J. Li · Satyapriya Krishna · Hima Lakkaraju

Hall 3 + Hall 2B #507
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 4A
Fri 25 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

The trustworthiness of Large Language Models (LLMs) refers to the extent to which their outputs are reliable, safe, and ethically aligned, and it has become a crucial consideration alongside their cognitive performance. In practice, Reinforcement Learning From Human Feedback (RLHF) has been widely used to align LLMs with labeled human preferences, but its assumed effect on model trustworthiness has not been rigorously evaluated. To bridge this knowledge gap, this study investigates how models aligned with general-purpose preference data perform across five trustworthiness verticals: toxicity, stereotypical bias, machine ethics, truthfulness, and privacy. Our results demonstrate that RLHF on human preferences does not automatically guarantee trustworthiness, and reverse effects are often observed. Furthermore, we propose to adapt efficient influence-function-based data attribution methods to the RLHF setting to better understand the influence of fine-tuning data on individual trustworthiness benchmarks, and we demonstrate the feasibility of this adaptation by providing our estimated attribution scores. Together, our results underscore the need for more nuanced approaches to model alignment from both the data and framework perspectives, and we hope this research will guide the community towards developing language models that are increasingly capable without sacrificing trustworthiness.
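The abstract does not specify the exact influence estimator used, so the following is only a minimal, illustrative sketch of the general idea behind influence-function-based data attribution: scoring each fine-tuning example by how much a gradient step on it is estimated to change the loss on a target (e.g., trustworthiness) benchmark. It uses a simple first-order gradient dot-product approximation (dropping the Hessian term) on a toy model; all names (the toy linear model, train_xs, test_x, etc.) are hypothetical stand-ins, not the authors' setup.

```python
# Illustrative sketch of first-order influence-based data attribution.
# Assumption: a gradient dot-product approximation on a toy model, standing in
# for the (unspecified) efficient influence estimator adapted to RLHF in the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for a fine-tuned model.
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()

# Hypothetical "fine-tuning" examples and one "benchmark" example.
train_xs = torch.randn(8, 4)
train_ys = torch.randn(8, 1)
test_x = torch.randn(1, 4)
test_y = torch.randn(1, 1)


def flat_grad(loss):
    """Gradient of `loss` w.r.t. all model parameters, flattened into one vector."""
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])


# Gradient of the benchmark loss at the current parameters.
test_grad = flat_grad(loss_fn(model(test_x), test_y))

# First-order influence estimate: a larger dot product means a gradient step on
# that training example is estimated to reduce the benchmark loss more, i.e. the
# example is attributed as more helpful for that benchmark.
scores = []
for x, y in zip(train_xs, train_ys):
    train_grad = flat_grad(loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)))
    scores.append(torch.dot(test_grad, train_grad).item())

for i, s in enumerate(scores):
    print(f"train example {i}: estimated influence = {s:+.4f}")
```

In practice, influence-function methods replace the plain dot product with a Hessian-corrected inner product and use approximations to keep this tractable at LLM scale; the sketch above only conveys the attribution interface: one scalar score per fine-tuning example per benchmark.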
