Workshop
Socially Responsible Machine Learning
Chaowei Xiao · Huan Zhang · Xueru Zhang · Hongyang Zhang · Cihang Xie · Beidi Chen · Xinchen Yan · Yuke Zhu · Bo Li · Zico Kolter · Dawn Song · Anima Anandkumar
Fri 29 Apr, 5:45 a.m. PDT
Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community: models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well across a wide range of downstream tasks. Foundation models present many opportunities, spanning capabilities (e.g., language, vision, robotics, reasoning, human interaction), applications (e.g., law, healthcare, education, transportation), and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). At the same time, they raise concerns and risks: these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:
- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups;
- Be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from the training data;
- Make hard-to-justify predictions that lack transparency and interpretability.
This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness and ethics, security, and privacy). In particular, we are interested in the following topics:
- The intersection of various aspects of trustworthy ML: fairness, transparency, interpretability, privacy, and robustness;
- Using the most recent theory to inform practical guidelines for deploying trustworthy ML systems;
- Automatically detecting, verifying, explaining, and mitigating potential biases or privacy problems in existing models;
- Explaining the social impacts of machine learning bias.
Schedule
Fri 6:00 a.m. - 2:40 p.m. | Invited talks (Invited Talk)
Fri 6:20 a.m. - 6:40 a.m. | Opening remarks (Remarks) | Chaowei Xiao
Fri 6:40 a.m. - 7:20 a.m. | Invited talk from Prof. Ziwei Liu (Invited Talk)
Fri 7:20 a.m. - 8:00 a.m. | Invited talk from Prof. Aleksander Mądry (Invited Talk)
Fri 8:10 a.m. - 8:50 a.m. | Invited talk from Prof. Anqi Liu (Invited Talk)
Fri 8:50 a.m. - 9:30 a.m. | Invited talk from Prof. Judy Hoffman (Invited Talk)
Fri 10:50 a.m. - 11:30 a.m. | Invited talk from Neil Gong (Invited Talk)
Fri 11:30 a.m. - 12:10 p.m. | Invited talk from Virginia Smith (Invited Talk)
Fri 12:20 p.m. - 1:00 p.m. | Invited talk from Prof. Marco Pavone (Invited Talk)
Fri 1:00 p.m. - 1:40 p.m. | Invited talk from Prof. Diyi Yang (Invited Talk)
Fri 2:44 p.m. - 3:00 p.m. | Closing Remarks (Remarks)
- Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation (Poster) | Neel Bhandari · Pin-Yu Chen
- Debiasing Neural Networks using Differentiable Classification Parity Proxies (Poster) | Ričards Marcinkevičs · Ece Ozkan · Julia Vogt
- FedER: Communication-Efficient Byzantine-Robust Federated Learning (Poster) | Yukun Jiang · Xiaoyu Cao · Hao Chen · Neil Gong
- Evaluating the Adversarial Robustness for Fourier Neural Operators (Poster) | Abolaji Adesoji · Pin-Yu Chen
- Robust and Accurate - Compositional Architectures for Randomized Smoothing (Poster) | Miklós Horváth · Mark N Müller · Marc Fischer · Martin Vechev
- Towards Differentially Private Query Release for Hierarchical Data (Poster) | Terrance Liu · Steven Wu
- Incentive Mechanisms in Strategic Learning (Poster) | Kun Jin · Xueru Zhang · Mohammad Mahdi Khalili · Parinaz Naghizadeh · Mingyan Liu
- The Impacts of Labeling Biases on Fairness Criteria (Poster) | Yiqiao Liao · Parinaz Naghizadeh
- Can non-Lipschitz networks be robust? The power of abstention and data-driven decision making for robust non-Lipschitz networks (Poster) | Nina Balcan · Avrim Blum · Dravyansh Sharma · Hongyang Zhang
- Fair Machine Learning under Limited Demographically Labeled Data (Poster) | Mustafa Ozdayi · Murat Kantarcioglu · Rishabh Iyer
- Improving Cooperative Game Theory-based Data Valuation via Data Utility Learning (Poster) | Tianhao Wang · Yu Yang · Ruoxi Jia
- Provably Fair Federated Learning via Bounded Group Loss (Poster) | Shengyuan Hu · Steven Wu · Virginia Smith
- Secure Aggregation for Privacy-Aware Federated Learning with Limited Resources (Poster) | Irem Ergun · Hasin Us Sami · Basak Guler
- UNIREX: A Unified Learning Framework for Language Model Rationale Extraction (Poster) | Aaron Chan · Maziar Sanjabi · Lambert Mathias · Liang Tan · Shaoliang Nie · Xiaochang Peng · Xiang Ren · Hamed Firooz
- Dynamic Positive Reinforcement for Long-Term Fairness (Poster) | Bhagyashree Puranik · Upamanyu Madhow · Ramtin Pedarsani
- ModelNet40-C: A Robustness Benchmark for 3D Point Cloud Recognition under Corruption (Poster) | Jiachen Sun · Qingzhao Zhang · Bhavya Kailkhura · Zhiding Yu · Zhuoqing Mao
- Differential Privacy Amplification in Quantum and Quantum-inspired Algorithms (Poster) | Armando Angrisani · Mina Doosti · Elham Kashefi
- Learning Stabilizing Policies in Stochastic Control Systems (Poster) | Đorđe Žikelić · Mathias Lechner · Thomas Henzinger · Krishnendu Chatterjee
- Disentangling Algorithmic Recourse (Poster) | Martin Pawelczyk · Lea Tiyavorabun · Gjergji Kasneci
- Transfer Fairness under Distribution Shifts (Poster) | Bang An · Zora Che · Mucong Ding · Furong Huang
- Towards learning to explain with concept bottleneck models: mitigating information leakage (Poster) | Joshua Lockhart · Nicolas Marchesotti · Daniele Magazzeni · Manuela Veloso
- Few-Shot Unlearning (Poster) | Youngsik Yoon · Jinhwan Nam · Dongwoo Kim · Jungseul Ok
- Towards Data-Free Model Stealing in a Hard Label Setting (Poster) | Sunandini Sanyal · Sravanti Addepalli · Venkatesh Babu Radhakrishnan
- Algorithmic Recourse in the Face of Noisy Human Responses (Poster) | Martin Pawelczyk · Teresa Datta · Johannes van-den-Heuvel · Gjergji Kasneci · Himabindu Lakkaraju
- Perfectly Fair and Differentially Private Selection Using the Laplace Mechanism (Poster) | Mina Samizadeh · Mohammad Mahdi Khalili
- Rationale-Inspired Natural Language Explanations with Commonsense (Poster) | Bodhisattwa Prasad Majumder · Oana-Maria Camburu · Thomas Lukasiewicz · Julian McAuley
- Maximizing Predictive Entropy as Regularization for Supervised Classification (Poster) | Amrith Setlur · Benjamin Eysenbach · Sergey Levine
- Data Augmentation via Wasserstein Geodesic Perturbation for Robust Electrocardiogram Prediction (Poster) | Jiacheng Zhu · Jielin Qiu · Zhuolin Yang · Michael Rosenberg · Emerson Liu · Bo Li · Ding Zhao