Poster
3D-Properties: Identifying Challenges in DPO and Charting a Path Forward
Yuzi Yan · Yibo Miao · Jialian Li · Yipin Zhang · Jian Xie · Zhijie Deng · Dong Yan
Hall 3 + Hall 2B #570
Aligning large language models (LLMs) with human preferences has gained significant attention, with Proximal Policy Optimization (PPO) as a standard yet computationally expensive method and Direct Preference Optimization (DPO) as a more efficient alternative. While DPO offers simplicity, it remains underutilized in state-of-the-art LLMs, suggesting potential limitations. In this work, we revisit DPO, analyzing its theoretical foundations and empirical performance to bridge this gap. We identify three key properties, termed the 3D-properties, that emerge from DPO’s learning process: Drastic drop in rejected response likelihood, Degradation into response suppression, and Dispersion effect on unseen responses. We show that these issues arise from DPO’s optimization dynamics, where the interaction between chosen and rejected response gradients leads to instability. Our findings are supported by experiments on both a controlled toy model and real-world LLM tasks, including mathematical problem-solving and instruction following. To address these challenges, we propose simple regularization techniques that improve training stability and performance. Additionally, we examine how preference data distribution impacts DPO’s effectiveness, offering insights into how alignment models handle out-of-domain (OOD) data. Our work connects these observations to broader research and provides a theoretical explanation for DPO’s limitations. We hope these insights will guide future advancements in reward-model-free preference learning, bringing it closer to reward-model-based approaches.
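To make the optimization dynamics discussed above concrete, the sketch below shows the standard DPO objective (Rafailov et al., 2023), whose gradient couples the chosen and rejected log-likelihoods through a single sigmoid-weighted margin. The abstract does not specify the paper's regularization techniques, so the `sft_weight` anchor on chosen responses here is an illustrative assumption, not the authors' method.

```python
# Minimal sketch of the DPO loss with a hedged regularization term.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1,
             sft_weight: float = 0.0) -> torch.Tensor:
    """Standard DPO loss plus an optional SFT-style anchor on chosen responses.

    All inputs are per-example sequence log-probabilities, shape (batch,).
    """
    # Implicit reward margins relative to the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Sigmoid-of-margin loss: its gradient raises chosen log-probs and lowers
    # rejected log-probs with the same sigmoid weight, the coupling the
    # abstract links to the drastic drop in rejected-response likelihood.
    losses = -F.logsigmoid(chosen_rewards - rejected_rewards)

    # Hypothetical regularizer (assumption): add a plain NLL term on the
    # chosen responses to keep their likelihood from collapsing as well.
    if sft_weight > 0:
        losses = losses - sft_weight * policy_chosen_logps

    return losses.mean()
```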