Workshop
Measuring Human-CLIP Alignment at Different Abstraction Levels
Pablo Hernández-Cámara · Jorge Vila Tomás · Jesus Malo · Valero Laparra

Workshop
Towards Unified Alignment Between Agents, Humans, and Environment
Zonghan Yang · An Liu · Zijun Liu · Kaiming Liu · Fangzhou Xiong · Yile Wang · Zeyuan Yang · Qingyuan Hu · XinRui Chen · Zhenhe Zhang · Fuwen Luo · Zhicheng Guo · Peng Li · Yang Liu

Workshop
Self-supervised learning facilitates neural representation structures that can be unsupervisedly aligned to human behaviors
Soh Takahashi · Masaru Sasaki · Ken Takeda · Masafumi Oizumi

Workshop · Sat 6:30
Improving neural network representations by aligning with human knowledge
Andrew Lampinen

Workshop
Human and Deep Neural Network Alignment in Navigational Affordance Perception
Clemens Bartnik · Iris Groen

Workshop
Humans diverge from language models when predicting spoken language
Thomas Botch · Emily Finn

Workshop
Can Foundation Models Smell Like Humans?
Farzaneh Taleb · Miguel Vasco · Nona Rajabi · Mårten Björkman · Danica Kragic

Workshop
Identifying and Interpreting Non-Aligned Human Conceptual Representations using Language Modeling
Wanqian Bao · Uri Hasson

Workshop · Sat 0:10
Beyond sight: Probing alignment between image models and blind V1
Galen Pogoncheff

Workshop
Explaining Human Comparisons using Alignment-Importance Heatmaps
Nhut Truong · Dario Pesenti · Uri Hasson

Workshop
Measuring Mechanistic Interpretability at Scale Without Humans
Roland Zimmermann · David Klindt · Wieland Brendel

Workshop
Immediate generalisation in humans but a generalisation lag in deep neural networks—evidence for representational divergence?
Lukas Huber · Fred Mast · Felix Wichmann