Workshop
Security and Safety in Machine Learning Systems
Xinyun Chen · Cihang Xie · Ali Shafahi · Bo Li · Ding Zhao · Tom Goldstein · Dawn Song
Fri 7 May, 8:45 a.m. PDT
While machine learning (ML) models have achieved great success in many applications, concerns have been raised about their potential vulnerabilities and risks when they are applied to safety-critical applications. On the one hand, from the security perspective, studies have explored worst-case attacks against ML models, which in turn have inspired both empirical and certifiable defenses. On the other hand, from the safety perspective, researchers have investigated safety constraints that safe AI systems should satisfy (e.g., autonomous driving vehicles should not hit pedestrians). This workshop takes a first step toward bridging the gap between these two communities and aims to discuss principles for developing secure and safe ML systems. The workshop also focuses on how future practitioners should prepare themselves to reduce the risks of unintended behaviors in sophisticated ML models.
The workshop will bring together experts from the machine learning, computer security, and AI safety communities. We aim to highlight recent related work from these communities, clarify the foundations of secure and safe ML, and chart out important directions for future work and cross-community collaboration.
Schedule
Fri 8:45 a.m. - 9:00 a.m. | Opening Remarks (Talk) | Xinyun Chen
Fri 9:00 a.m. - 9:01 a.m. | Speaker Introduction: Alina Oprea (Intro)
Fri 9:01 a.m. - 9:30 a.m. | Invited Talk #1: Alina Oprea (Talk) | Alina Oprea
Fri 9:30 a.m. - 9:35 a.m. | Live QA: Alina Oprea (QA)
Fri 9:35 a.m. - 9:36 a.m. | Contributed Talk #1 Introduction (Intro)
Fri 9:36 a.m. - 9:45 a.m. | Contributed Talk #1: Ditto: Fair and Robust Federated Learning Through Personalization (Talk) | Tian Li · Ahmad Beirami · Virginia Smith
Fri 9:45 a.m. - 10:20 a.m. | Invited Talk #2: David Wagner (Talk) | David Wagner
Fri 10:20 a.m. - 10:55 a.m. | Invited Talk #3: Zico Kolter (Talk) | Zico Kolter
Fri 10:55 a.m. - 10:56 a.m. | Speaker Introduction: Alan Yuille (Intro)
Fri 10:56 a.m. - 11:30 a.m. | Invited Talk #4: Alan Yuille (Talk) | Alan Yuille
Fri 11:30 a.m. - 12:00 p.m. | Panel Discussion #1 (Panel) | Alina Oprea · David Wagner · Adam Kortylewski · Christopher Re · Tom Goldstein
Fri 12:00 p.m. - 1:00 p.m. | Poster Session #1 (Poster Session)
Fri 1:00 p.m. - 1:20 p.m. | Lunch Break
Fri 1:20 p.m. - 1:21 p.m. | Speaker Introduction: Raquel Urtasun (Intro)
Fri 1:21 p.m. - 2:00 p.m. | Invited Talk #5: Raquel Urtasun (Talk) | Raquel Urtasun
Fri 2:00 p.m. - 2:35 p.m. | Invited Talk #6: Ben Zhao (Talk) | Ben Zhao
Fri 2:35 p.m. - 2:36 p.m. | Speaker Introduction: Aleksander Madry (Intro)
Fri 2:36 p.m. - 3:10 p.m. | Invited Talk #7: Aleksander Madry (Talk) | Aleksander Madry
Fri 3:10 p.m. - 3:11 p.m. | Contributed Talk #2 Introduction (Intro)
Fri 3:11 p.m. - 3:20 p.m. | Contributed Talk #2: RobustBench: a standardized adversarial robustness benchmark (Talk) | Francesco Croce · Vikash Sehwag · Prateek Mittal · Matthias Hein
Fri 3:20 p.m. - 3:21 p.m. | Speaker Introduction: Christopher Re (Intro)
Fri 3:21 p.m. - 3:55 p.m. | Invited Talk #8: Christopher Re (Talk) | Christopher Re
Fri 3:55 p.m. - 3:56 p.m. | Speaker Introduction: Aditi Raghunathan (Intro)
Fri 3:56 p.m. - 4:25 p.m. | Invited Talk #9: Aditi Raghunathan (Talk) | Aditi Raghunathan
Fri 4:25 p.m. - 4:30 p.m. | Live QA: Aditi Raghunathan (QA)
Fri 4:30 p.m. - 5:00 p.m. | Panel Discussion #2 (Panel) | Ben Zhao · Aleksander Madry · Aditi Raghunathan · Catherine Olsson
Fri 5:00 p.m. - 6:00 p.m. | Poster Session #2 (Poster Session)
-
|
Hidden Backdoor Attack against Semantic Segmentation Models
(
Paper
)
>
SlidesLive Video |
Yiming Li 馃敆 |
-
|
PatchGuard++: Efficient Provable Attack Detection against Adversarial Patches
(
Paper
)
>
SlidesLive Video |
Chong Xiang 馃敆 |
-
|
FIRM: Detecting Adversarial Audios by Recursive Filters with Randomization
(
Paper
)
>
SlidesLive Video |
Guanhong Tao 馃敆 |
-
|
Simple Transparent Adversarial Examples
(
Paper
)
>
SlidesLive Video |
Jaydeep Borkar 馃敆 |
-
|
Reliably fast adversarial training via latent adversarial perturbation
(
Paper
)
>
SlidesLive Video |
Sang Wan Lee 馃敆 |
-
|
Safe Exploration Method for Reinforcement Learning under Existence of Disturbance
(
Paper
)
>
SlidesLive Video |
Yoshihiro Okawa 馃敆 |
-
|
Accelerated Policy Evaluation with Adaptive Importance Sampling
(
Paper
)
>
SlidesLive Video |
Mengdi Xu 馃敆 |
-
|
Mind the box: l1-APGD for sparse adversarial attacks on image classifiers
(
Paper
)
>
SlidesLive Video |
francesco croce 馃敆 |
-
|
RobustBench: a standardized adversarial robustness benchmark
(
Paper
)
>
SlidesLive Video |
francesco croce 馃敆 |
-
|
Ditto: Fair and Robust Federated Learning Through Personalization
(
Paper
)
>
SlidesLive Video |
Tian Li 馃敆 |
-
|
Measuring Adversarial Robustness using a Voronoi-Epsilon Adversary
(
Paper
)
>
SlidesLive Video |
Hyeongji Kim 馃敆 |
-
|
Low Curvature Activations Reduce Overfitting in Adversarial Training
(
Paper
)
>
SlidesLive Video |
Vasu Singla 馃敆 |
-
|
Extracting Hyperparameter Constraints From Code
(
Paper
)
>
SlidesLive Video |
Ingkarat Rak-amnouykit 馃敆 |
-
|
Sparse Coding Frontend for Robust Neural Networks
(
Paper
)
>
SlidesLive Video |
Can Bakiskan 馃敆 |
-
|
What is Wrong with One-Class Anomaly Detection?
(
Paper
)
>
SlidesLive Video |
JuneKyu Park 馃敆 |
-
|
Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
(
Paper
)
>
SlidesLive Video |
Xiangyu QI 馃敆 |
-
|
Incorporating Label Uncertainty in Intrinsic Robustness Measures
(
Paper
)
>
SlidesLive Video |
Xiao Zhang 馃敆 |
-
|
Bridging the Gap Between Adversarial Robustness and Optimization Bias
(
Paper
)
>
SlidesLive Video |
Fartash Faghri 馃敆 |
-
|
High-Robustness, Low-Transferability Fingerprinting of Neural Networks
(
Paper
)
>
SlidesLive Video |
Siyue Wang 馃敆 |
-
|
Covariate Shift Adaptation for Adversarially Robust Classifier
(
Paper
)
>
SlidesLive Video |
Sudipan Saha 馃敆 |
-
|
Coordinated Attacks Against Federated Learning: A Multi-Agent Reinforcement Learning Approach
(
Paper
)
>
SlidesLive Video |
Wen Shen 馃敆 |
-
|
DEEP GRADIENT ATTACK WITH STRONG DP-SGD LOWER BOUND FOR LABEL PRIVACY
(
Paper
)
>
SlidesLive Video |
Sen Yuan 馃敆 |
-
|
Byzantine-Robust and Privacy-Preserving Framework for FedML
(
Paper
)
>
SlidesLive Video |
Seyedeh Hanieh Hashemi 馃敆 |
-
|
SHIFT INVARIANCE CAN REDUCE ADVERSARIAL ROBUSTNESS
(
Paper
)
>
SlidesLive Video |
Songwei Ge 馃敆 |
-
|
Doing More with Less: Improving Robustness using Generated Data
(
Paper
)
>
SlidesLive Video |
Sven Gowal 馃敆 |
-
|
Data Augmentation Can Improve Robustness
(
Paper
)
>
SlidesLive Video |
Sylvestre-Alvise Rebuffi 馃敆 |
-
|
Speeding Up Neural Network Verification via Automated Algorithm Configuration
(
Paper
)
>
link
SlidesLive Video |
Matthias K枚nig 馃敆 |
-
|
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers
(
Paper
)
>
SlidesLive Video |
Clayton C Ashcraft 馃敆 |
-
|
Mitigating Adversarial Training Instability with Batch Normalization ( Paper ) > link | Arvind Sridhar 馃敆 |
-
|
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
(
Paper
)
>
SlidesLive Video |
Eitan Borgnia 馃敆 |
-
|
Provable defense by denoised smoothing with learned score function
(
Paper
)
>
SlidesLive Video |
Kyungmin Lee 馃敆 |
-
|
Detecting Adversarial Attacks through Neural Activations
(
Paper
)
>
SlidesLive Video |
Graham Annett 馃敆 |
-
|
Efficient Disruptions of Black-box Image Translation Deepfake Generation Systems
(
Paper
)
>
SlidesLive Video |
Nataniel Ruiz 路 Sarah A Bargal 路 Stanley Sclaroff 馃敆 |
-
|
Poisoned classifiers are not only backdoored, they are fundamentally broken
(
Paper
)
>
|
Mingjie Sun 路 Mingjie Sun 路 Siddhant Agarwal 路 Zico Kolter 馃敆 |
-
|
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness
(
Paper
)
>
SlidesLive Video |
Linxi Jiang 路 James Bailey 馃敆 |
-
|
Safe Model-based Reinforcement Learning with Robust Cross-Entropy Method
(
Paper
)
>
SlidesLive Video |
Zuxin Liu 馃敆 |
-
|
GateNet: Bridging the gap between Binarized Neural Network and FHE evaluation
(
Paper
)
>
|
Cheng Fu 馃敆 |
-
|
Non-Singular Adversarial Robustness of Neural Networks
(
Paper
)
>
SlidesLive Video |
Chia-Yi Hsu 路 Pin-Yu Chen 馃敆 |
-
|
Adversarial Examples Make Stronger Poisons
(
Paper
)
>
SlidesLive Video |
Liam H Fowl 路 Micah Goldblum 路 Ping-yeh Chiang 路 Jonas Geiping 路 Tom Goldstein 馃敆 |
-
|
What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors
(
Paper
)
>
SlidesLive Video |
Jonas Geiping 路 Liam H Fowl 路 Micah Goldblum 路 Michael Moeller 路 Tom Goldstein 馃敆 |
-
|
Baseline Pruning-Based Approach to Trojan Detection in Neural Networks
(
Paper
)
>
SlidesLive Video |
Peter Bajcsy 馃敆 |
-
|
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
(
Paper
)
>
SlidesLive Video |
Javier Carnerero-Cano 馃敆 |
-
|
Examining Trends in Out-of-Domain Confidence ( Paper ) > link | Richard Liaw 馃敆 |
-
|
未-CLUE: Diverse Sets of Explanations for Uncertainty Estimates
(
Paper
)
>
SlidesLive Video |
Dan Ley 路 Umang Bhatt 路 Adrian Weller 馃敆 |
-
|
Boosting black-box adversarial attack via exploiting loss smoothness
(
Paper
)
>
link
SlidesLive Video |
Hoang Tran 馃敆 |
-
|
On Improving Adversarial Robustness Using Proxy Distributions
(
Paper
)
>
SlidesLive Video |
Vikash Sehwag 路 Chong Xiang 路 Mung Chiang 路 Prateek Mittal 馃敆 |
-
|
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
(
Paper
)
>
SlidesLive Video |
Liam H Fowl 路 Ping-yeh Chiang 路 Micah Goldblum 路 Jonas Geiping 路 Tom Goldstein 馃敆 |
-
|
Robustness from Perception
(
Paper
)
>
SlidesLive Video |
Saeed Mahloujifar 路 Chong Xiang 路 Vikash Sehwag 路 Prateek Mittal 馃敆 |
-
|
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
(
Paper
)
>
SlidesLive Video |
Dequan Wang 路 David Wagner 路 Trevor Darrell 馃敆 |
-
|
Moral Scenarios for Reinforcement Learning Agents
(
Paper
)
>
SlidesLive Video |
Dan Hendrycks 路 Mantas Mazeika 路 Andy Zou 路 Bo Li 路 Dawn Song 馃敆 |