Workshop
Socially Responsible Machine Learning
Chaowei Xiao · Huan Zhang · Xueru Zhang · Hongyang Zhang · Cihang Xie · Beidi Chen · Xinchen Yan · Yuke Zhu · Bo Li · Zico Kolter · Dawn Song · Anima Anandkumar
Fri 29 Apr, 5:45 a.m. PDT
Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community; the term refers to models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well on a wide range of downstream tasks. While foundation models present many opportunities, spanning capabilities (e.g., language, vision, robotics, reasoning, human interaction), applications (e.g., law, healthcare, education, transportation), and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations), there are also concerns that these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:

- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups;
- Be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data;
- Make hard-to-justify predictions with a lack of transparency and interpretability.

This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, and privacy). In particular, we are interested in the following topics:

- The intersection of various aspects of trustworthy ML: fairness, transparency, interpretability, privacy, and robustness;
- The possibility of using the most recent theory to inform practical guidelines for deploying trustworthy ML systems;
- Automatically detecting, verifying, explaining, and mitigating potential biases or privacy problems in existing models (a minimal example of such a check is sketched after this list);
- Explaining the social impacts of machine learning bias.
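As a concrete illustration of the bias-detection topic above, the following sketch computes a demographic parity gap: the difference in positive-prediction rates between two groups, one of the simplest checks for flagging potential bias in an existing model's outputs. This is a minimal, illustrative example rather than a method from any of the workshop papers; the function name, the toy data, and the 0/1 encoding of the sensitive attribute are all assumptions made for the sketch.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: array of 0/1 model predictions
    group:  array of 0/1 sensitive-attribute labels (illustrative encoding)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: group 0 receives positive predictions at rate 0.75 and
# group 1 at rate 0.25, so the gap of 0.5 would flag the model for review.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near zero indicates parity in prediction rates only; in practice one would also examine error-rate-based criteria (e.g., equalized odds), since equal positive rates can coexist with very different accuracies across groups.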
Schedule
Fri 6:00 a.m. - 2:40 p.m. | Invited talk (Invited Talk)
Fri 6:20 a.m. - 6:40 a.m. | Opening remarks (Remarks) | Chaowei Xiao
Fri 6:40 a.m. - 7:20 a.m. | Invited talk from Prof. Ziwei Liu (Invited Talk)
Fri 7:20 a.m. - 8:00 a.m. | Invited talk from Prof. Aleksander Mądry (Invited Talk)
Fri 8:10 a.m. - 8:50 a.m. | Invited talk from Prof. Anqi Liu (Invited Talk)
Fri 8:50 a.m. - 9:30 a.m. | Invited talk from Prof. Judy Hoffman (Invited Talk)
Fri 10:50 a.m. - 11:30 a.m. | Invited talk from Prof. Neil Gong (Invited Talk)
Fri 11:30 a.m. - 12:10 p.m. | Invited talk from Prof. Virginia Smith (Invited Talk)
Fri 12:20 p.m. - 1:00 p.m. | Invited talk from Prof. Marco Pavone (Invited Talk)
Fri 1:00 p.m. - 1:40 p.m. | Invited talk from Prof. Diyi Yang (Invited Talk)
Fri 2:44 p.m. - 3:00 p.m. | Closing remarks (Remarks)
- | Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation (Poster) | Neel Bhandari · Pin-Yu Chen
- | Debiasing Neural Networks using Differentiable Classification Parity Proxies (Poster) | Ričards Marcinkevičs · Ece Ozkan · Julia Vogt
- | FedER: Communication-Efficient Byzantine-Robust Federated Learning (Poster) | Yukun Jiang · Xiaoyu Cao · Hao Chen · Neil Gong
- | Evaluating the Adversarial Robustness for Fourier Neural Operators (Poster) | Abolaji Adesoji · Pin-Yu Chen
- | Robust and Accurate - Compositional Architectures for Randomized Smoothing (Poster) | Miklós Horváth · Mark N Müller · Marc Fischer · Martin Vechev
- | Towards Differentially Private Query Release for Hierarchical Data (Poster) | Terrance Liu · Steven Wu
- | Incentive Mechanisms in Strategic Learning (Poster) | Kun Jin · Xueru Zhang · Mohammad Mahdi Khalili · Parinaz Naghizadeh · Mingyan Liu
- | The Impacts of Labeling Biases on Fairness Criteria (Poster) | Yiqiao Liao · Parinaz Naghizadeh
- | Can non-Lipschitz networks be robust? The power of abstention and data-driven decision making for robust non-Lipschitz networks (Poster) | Nina Balcan · Avrim Blum · Dravyansh Sharma · Hongyang Zhang
- | Fair Machine Learning under Limited Demographically Labeled Data (Poster) | Mustafa Ozdayi · Murat Kantarcioglu · Rishabh Iyer
- | Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning (Poster) | Tianhao Wang · Yu Yang · Ruoxi Jia
- | Provably Fair Federated Learning via Bounded Group Loss (Poster) | Shengyuan Hu · Steven Wu · Virginia Smith
- | Secure Aggregation for Privacy-Aware Federated Learning with Limited Resources (Poster) | Irem Ergun · Hasin Us Sami · Basak Guler
- | UNIREX: A Unified Learning Framework for Language Model Rationale Extraction (Poster) | Aaron Chan · Maziar Sanjabi · Lambert Mathias · Liang Tan · Shaoliang Nie · Xiaochang Peng · Xiang Ren · Hamed Firooz
- | Dynamic Positive Reinforcement for Long-Term Fairness (Poster) | Bhagyashree Puranik · Upamanyu Madhow · Ramtin Pedarsani
- | ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption (Poster) | Jiachen Sun · Qingzhao Zhang · Bhavya Kailkhura · Zhiding Yu · Zhuoqing Mao
- | Differential Privacy Amplification in Quantum and Quantum-inspired Algorithms (Poster) | Armando Angrisani · Mina Doosti · Elham Kashefi
- | Learning Stabilizing Policies in Stochastic Control Systems (Poster) | Đorđe Žikelić · Mathias Lechner · Thomas Henzinger · Krishnendu Chatterjee
- | Disentangling Algorithmic Recourse (Poster) | Martin Pawelczyk · Lea Tiyavorabun · Gjergji Kasneci
- | Transfer Fairness under Distribution Shifts (Poster) | Bang An · Zora Che · Mucong Ding · Furong Huang
- | Towards learning to explain with concept bottleneck models: mitigating information leakage (Poster) | Joshua Lockhart · Nicolas Marchesotti · Daniele Magazzeni · Manuela Veloso
- | Few-Shot Unlearning (Poster) | Youngsik Yoon · Jinhwan Nam · Dongwoo Kim · Jungseul Ok
- | Towards Data-Free Model Stealing in a Hard Label Setting (Poster) | Sunandini Sanyal · Sravanti Addepalli · Venkatesh Babu Radhakrishnan
- | Algorithmic Recourse in the Face of Noisy Human Responses (Poster) | Martin Pawelczyk · Teresa Datta · Johannes van-den-Heuvel · Gjergji Kasneci
- | Perfectly Fair and Differentially Private Selection Using the Laplace Mechanism (Poster) | Mina Samizadeh · Mohammad Mahdi Khalili
- | Rationale-Inspired Natural Language Explanations with Commonsense (Poster) | Bodhisattwa Prasad Majumder · Oana-Maria Camburu · Thomas Lukasiewicz · Julian McAuley
- | Maximizing Predictive Entropy as Regularization for Supervised Classification (Poster) | Amrith Setlur · Benjamin Eysenbach · Sergey Levine
- | Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction (Poster) | Jiacheng Zhu · Jielin Qiu · Zhuolin Yang · Michael Rosenberg · Emerson Liu · Bo Li · Ding Zhao