

Poster
in
Workshop: ICLR 2025 Workshop on Bidirectional Human-AI Alignment

Representational Alignment Supports Effective Teaching

Ilia Sucholutsky · Katherine Collins · Maya Malaviya · Nori Jacoby · Weiyang Liu · Theodore Sumers · Michalis Korakakis · Umang Bhatt · Mark Ho · Joshua B Tenenbaum · Bradley Love · Zachary Pardos · Adrian Weller · Thomas L. Griffiths


Abstract:

A good teacher should not only be knowledgeable, but should also be able to communicate in a way that the student understands -- to share the student's representation of the world. In this work, we introduce a new controlled experimental setting, GRADE, for studying pedagogy and representational alignment. Using GRADE in a series of machine-machine and machine-human teaching experiments, we characterize a utility curve relating representational alignment, teacher expertise, and student learning outcomes. We find that improved representational alignment with a student improves student learning outcomes (i.e., task accuracy), but that this effect is moderated by the size and representational diversity of the class being taught. We use these insights to design a preliminary classroom matching procedure, GRADE-Match, that optimizes the assignment of students to teachers. When designing machine teachers, our results suggest that it is important to focus not only on accuracy, but also on representational alignment with human learners.
