

Workshop

Backdoor Attacks and Defenses in Machine Learning

Guanhong Tao · Kaiyuan Zhang · Shawn Shan · Emily Wenger · Rui Zhu · Eugene Bagdasaryan · Naren Sarayu Manoj · Taylor Kulp-McDowall · Yousra Aafer · Shiqing Ma · Xiangyu Zhang

Virtual

Backdoor attacks aim to cause consistent misclassification of arbitrary inputs by adding a specific pattern, called a trigger, to them. Recent studies have shown the feasibility of launching backdoor attacks in various domains, such as computer vision (CV), natural language processing (NLP), and federated learning (FL). Because backdoor attacks are mostly carried out through data poisoning (i.e., adding malicious inputs to training data), they raise major concerns about the many publicly available pre-trained models. Defending against backdoor attacks has sparked multiple lines of research. Many defense techniques are effective against particular types of backdoor attacks, but as increasingly diverse backdoors emerge, the performance of existing defenses tends to be limited. This workshop, Backdoor Attacks aNd DefenSes in Machine Learning (BANDS), aims to bring together researchers from government, academia, and industry who share a common interest in exploring and building machine learning models that are more secure against backdoor attacks.
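To make the data-poisoning mechanism concrete, below is a minimal sketch of a BadNets-style attack (Gu et al., 2017), in which a small pixel patch serves as the trigger. The patch placement, poison rate, and array layout are illustrative assumptions, not a reference to any specific attack presented at the workshop.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   patch_size=3, seed=0):
    """BadNets-style data poisoning (illustrative sketch).

    Stamps a small white patch (the trigger) onto a random subset of
    training images and relabels them as target_class.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# A model trained on the poisoned set behaves normally on clean inputs,
# but at test time any input carrying the same corner patch is steered
# toward target_class -- the "consistent misclassification" above.
```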


Schedule