

Workshop

Socially Responsible Machine Learning

Chaowei Xiao · Huan Zhang · Xueru Zhang · Hongyang Zhang · Cihang Xie · Beidi Chen · Xinchen Yan · Yuke Zhu · Bo Li · Zico Kolter · Dawn Song · Anima Anandkumar

Fri 29 Apr, 5:45 a.m. PDT

Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community, referring to the rise of models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well across a wide range of downstream tasks. While foundation models present many opportunities, spanning capabilities (e.g., language, vision, robotics, reasoning, human interaction), applications (e.g., law, healthcare, education, transportation), and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations), there are also concerns that these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:

- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups;
- Be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data;
- Make hard-to-justify predictions that lack transparency and interpretability.

This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). In particular, we are interested in the following topics:

- The intersection of various aspects of trustworthy ML: fairness, transparency, interpretability, privacy, and robustness;
- The possibility of using the most recent theory to inform practical guidelines for deploying trustworthy ML systems;
- Automatically detecting, verifying, explaining, and mitigating potential bias or privacy problems in existing models;
- Explaining the social impacts of machine learning bias.


Schedule