AI has arrived in our lives and is making important decisions that affect us. How should we work with this new class of co-workers? The goal of interpretability is to engineer our relationships with AI, in part by building tools that produce explanations from AI models. But I argue that we also need to study AI machines as scientific objects, both in isolation and together with humans. Doing so not only provides principles for the tools we build, but is also necessary to take our working relationship with AI to the next level. Our ultimate goal is a language that will enable us to learn from and be inspired by AI. This language will not be perfect (no language is), but it will be useful. Just as human language is known to shape our thinking, this language will also shape us and future AI.
Mon Apr 25 09:00 AM -- 10:15 AM (PDT)
Beyond interpretability: developing a language to shape our relationships with AI