Responsible AI (RAI)

Ahmad Beirami · Emily Black · Krishna Gummadi · Hoda Heidari · Baharan Mirzasoleiman · Meisam Razaviyayn · Joshua Williams


Artificial Intelligence and Machine Learning are increasingly employed by industry and government alike to make or inform high-stakes decisions about people in areas such as employment, credit lending, policing, criminal justice, healthcare, and beyond. Over the past several years, we have witnessed growing concern regarding the risks and unintended consequences of inscrutable ML techniques (in particular, deep learning) in such socially consequential domains. This realization has motivated the community to look more closely at the societal impacts of automated decision making and to develop tools that ensure the responsible use of AI in society. Chief among the ideals that the ML community has set out to formalize and ensure are safety, interpretability, robustness, and fairness. In this workshop, we examine the community’s progress toward these values and aim to identify areas that call for additional research efforts. In particular, by bringing together researchers with diverse backgrounds, we will focus on the limitations of existing formulations of fairness, explainability, robustness, and safety, and discuss the tradeoffs among them.

Our workshop will feature a diverse set of speakers (ranging from researchers with social work backgrounds to researchers in the ML community) who will discuss transparency, bias, and inequity in various real-world problems, including but not limited to criminal justice, health care and medicine, poverty and homelessness, and education. In addition, our invited talks will cover the interpretability and safety of modern machine learning models, their conflicting constraints, ethical and legal issues, and unintended consequences in areas such as self-driving cars and robotics. The workshop aims to further develop these research directions for the machine learning community.
