

Poster in Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Bidirectional Alignment for Inclusive Narrative Generation

Ken Kawamura


Abstract:

Aligning Large Language Models (LLMs) for narrative generation demands more than model refinement. For narratives of marginalized communities, whose voices are historically silenced or distorted, a purely AI-centric alignment is insufficient. This tiny paper argues for bidirectional human-AI alignment, emphasizing critical human engagement alongside AI development. Through literary case studies—Virginia Woolf's Judith Shakespeare and Saidiya Hartman's Venus—we demonstrate that LLMs inherit and propagate historical biases, reflecting deep epistemic gaps. Addressing these requires human interpretation to recognize data limitations and embedded assumptions. True alignment for inclusive narratives necessitates both refined AI and informed human participation, fostering AI literacy and critical engagement with LLM outputs. This bidirectional approach is crucial for ensuring AI contributes meaningfully to representative storytelling, a key challenge for inclusive AI research.
