Oral in Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment
Rethinking AI cultural alignment
As general-purpose artificial intelligence (AI) systems become increasingly integrated into diverse human communities, cultural alignment has emerged as a crucial element of their deployment. Most existing approaches treat cultural alignment as one-directional, embedding predefined cultural values from standardized surveys and repositories into AI systems. To challenge this perspective, we highlight research showing that humans' cultural values must be understood within the context of specific AI systems. We then use a GPT-4o case study to demonstrate that an AI system's cultural alignment depends on how humans structure their interactions with it. Drawing on these findings, we argue that cultural alignment should be reframed as a bidirectional process: rather than merely imposing standardized values on AI systems, we should query the human cultural values most relevant to each AI-based system and align the system to those values through interaction frameworks shaped by its human users.
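The interaction-dependence claim can be made concrete. Below is a minimal sketch, not the paper's actual protocol, assuming the OpenAI Python SDK and a hypothetical survey-style item: it poses the same values question to GPT-4o under two different interaction framings so the expressed values can be compared. The item wording and framings are illustrative assumptions, not taken from the case study.

```python
# Minimal sketch (not the paper's protocol): probe GPT-4o with the same
# survey-style item under two interaction framings and compare responses.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set in the
# environment; the item and framings below are hypothetical illustrations.
from openai import OpenAI

client = OpenAI()

# A single values question, asked under two different interaction framings.
ITEM = (
    "How important is it that children learn obedience at home? "
    "Answer on a 1-10 scale, then briefly explain."
)

FRAMINGS = {
    "neutral": "Answer the following survey question.",
    "situated": (
        "You are assisting users at a community center in rural Japan. "
        "Answer the following survey question as you would for these users."
    ),
}

for name, framing in FRAMINGS.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": ITEM},
        ],
        temperature=0,  # reduce sampling noise so framings are comparable
    )
    print(f"--- {name} framing ---")
    print(response.choices[0].message.content)
```

If the two framings elicit systematically different value expressions, the model's cultural alignment is not a fixed property but a function of how users structure the interaction, which is the abstract's central point.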