More Than What Was Chosen: LLM-based Explainable Recommendation Beyond Noisy User Preferences
Chung Park · Hyeongjun Yun · Taesan Kim · Junui Hong · Dongjoon Hong · Mira Myong · Jihoon Oh · MinCheol Cho · Kijung Park · Min Choi · Jihwan Seok · Jaegul Choo
Abstract
Recommender systems traditionally rely on the principle of Revealed Preference (RP), which assumes that observed user behaviors faithfully reflect underlying interests. While effective at scale, this assumption is fragile in practice, as real-world choices are often noisy and inconsistent. As a result, even LLM-based recommendation models (LLM-Rec) equipped with advanced reasoning capabilities may fail to capture genuine user preferences and often produce rationales of limited persuasiveness. To address this issue, we introduce the concept of Coherent Preference (CP), which complements RP by favoring items that are logically and causally coherent with the user's interaction history. Building on this perspective, we propose Conflict-Aware Direct Preference Optimization (C-APO), an LLM-Rec framework that jointly optimizes RP and CP while adaptively reconciling their agreement and conflict, delivering robust recommendation performance together with logically consistent rationales. We construct a unified preference ordering that combines the RP signal, derived from chosen versus unobserved items, with the CP signal, which ranks items by their logical consistency with past interactions. Within this unified ordering, we dynamically adjust the influence of each signal depending on whether RP and CP agree or conflict, allowing the model to better capture user intent and generate more plausible recommendations. On the Amazon Review dataset, our approach consistently outperforms roughly 20 state-of-the-art baselines in both recommendation performance and rationale quality, and achieves a 1.65$\times$ relative improvement in click-through rate in deployment, demonstrating its practical utility. The code and dataset are available at https://anonymous.4open.science/r/C-APO.
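To make the idea of a unified preference ordering concrete, the following is a minimal sketch (not the authors' implementation) of how an RP signal (chosen versus unobserved items) and a CP signal (coherence with interaction history) might be combined into one score, with the weight on each signal adjusted depending on whether they agree or conflict. All names, thresholds, and the specific weighting scheme are illustrative assumptions rather than details from the paper.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    chosen: bool       # RP signal: True if the item was actually chosen, False if unobserved
    coherence: float   # CP signal: score in [0, 1] for logical consistency with history


def unified_preference_score(c: Candidate,
                             w_agree: float = 0.5,
                             w_conflict: float = 0.8) -> float:
    """Combine RP and CP into a single score (illustrative only).

    When the two signals agree (a chosen item is also coherent, or an
    unobserved item is incoherent), both contribute with a balanced weight.
    When they conflict (e.g. a chosen but incoherent item), the CP signal is
    up-weighted so that noisy choices do not dominate the ordering.
    The weights and the 0.5 threshold are placeholders, not values from the paper.
    """
    rp = 1.0 if c.chosen else 0.0
    cp = c.coherence
    agree = (rp >= 0.5) == (cp >= 0.5)
    w_cp = w_agree if agree else w_conflict
    return (1.0 - w_cp) * rp + w_cp * cp


# Example: rank a small candidate list by the unified score.
candidates = [
    Candidate("item_A", chosen=True, coherence=0.9),   # RP and CP agree
    Candidate("item_B", chosen=True, coherence=0.2),   # conflict: chosen but incoherent
    Candidate("item_C", chosen=False, coherence=0.7),  # conflict: unobserved but coherent
]
ranking = sorted(candidates, key=unified_preference_score, reverse=True)
print([c.item_id for c in ranking])
```

In this toy ordering, the coherent-but-unobserved item can outrank the chosen-but-incoherent one, which mirrors the abstract's intuition that CP should temper noisy revealed choices when the two signals disagree.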