Beyond interpretability: developing a language to shape our relationships with AI
Been Kim
2022 Invited Talk
Abstract
AI has arrived in our lives, making important decisions that affect us. How should we work with this new class of co-workers? The goal of interpretability is to engineer our relationships with AI, in part by building tools that produce explanations from AI models. But I argue that we also need to study AI machines as scientific objects, both in isolation and together with humans. Doing so not only provides principles for the tools we build, but is also necessary to take our working relationship with AI to the next level. Our ultimate goal is a language that will enable us to learn from and be inspired by AI. This language will not be perfect (no language is), but it will be useful. Just as human language is known to shape our thinking, this language will also shape us and future AI.
Speaker
Been Kim
Been Kim is a staff research scientist at Google Brain. Her research focuses on helping humans communicate with complex machine learning models: not only by building tools (and tools to criticize them), but also by studying their nature in comparison with humans. She gave a talk at the G20 meeting in Argentina in 2019. Her work TCAV received the UNESCO Netexplo award, was featured at Google I/O '19, and appeared in a chapter of Brian Christian's book "The Alignment Problem". Been gave a keynote at ECML 2020 and tutorials on interpretability at ICML, CVPR, the University of Toronto, and Lawrence Berkeley National Laboratory. She was a Workshop Co-Chair of ICLR 2019 and has served as a (senior) area chair at NeurIPS, ICML, ICLR, AISTATS, and other venues. She is a steering committee member of the FAccT conference and a former executive board member and VP of Women in Machine Learning. She received her PhD from MIT.