The Challenges of Human-Centered AI and Robotics: What We Want, Need, and Are Getting From Human-Machine Interaction
Abstract
Language-based AI is now ubiquitous, and user expectations for intelligent machines are scaling along with it: we expect machines to understand us, predict our needs and wants, do what we enjoy and prefer, and adapt as we change our moods and minds, learn, grow, and age. Physical AI, in the form of robotics, is the next major AI challenge, and it is not yet ready to leap into our daily lives. While massive investment is focused on the functional behavior of humanoid robots (perceiving the world, moving around, and manipulating objects), human-robot interaction (HRI) is treated as an afterthought. The assumption is that once a robot can move around and do things, it will be useful and wanted, yet over 25 years of HRI research tells us otherwise. While the need for human-centered services continues to grow, research and development in this area remains minimal. This talk will discuss how bringing together robotics, AI, and machine learning for long-term user modeling, real-time multimodal behavioral signal processing, and affective computing is enabling machines to understand, interact with, and adapt to users’ specific and ever-changing needs. We will overview the methods and challenges of working with sparse, noisy, heterogeneous, multimodal, personal interaction data, and of creating expressive agent and robot behavior aimed at understanding, coaching, motivating, and supporting a wide variety of user populations through socially assistive robotics, across the age span (infants, children, adults, the elderly), the ability span (typically developing, autism, anxiety, stroke, dementia), contexts (schools, therapy centers, homes), and deployment durations (from weeks to six months). We will discuss the challenges of understanding what we humans want from interactions with machines vs. what we need vs. what we are getting, and how those distinctions are shaping the future of not just AI and ML but society at large.