Backdoor attacks aim to cause consistent misclassification of any input by stamping a specific pattern, called a trigger, onto it. Recent studies have shown the feasibility of launching backdoor attacks in various domains, including computer vision (CV), natural language processing (NLP), and federated learning (FL). Because backdoor attacks are mostly carried out through data poisoning (i.e., adding malicious inputs to the training data), they raise major concerns about many publicly available pre-trained models. Defending against backdoor attacks has sparked multiple lines of research, and many defense techniques are effective against particular types of backdoors. However, as increasingly diverse backdoors emerge, the performance of existing defenses tends to be limited. This workshop, Backdoor Attacks aNd DefenSes in Machine Learning (BANDS), aims to bring together researchers from government, academia, and industry who share a common interest in exploring and building machine learning models that are more secure against backdoor attacks.
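To illustrate the data-poisoning mechanism described above, the following is a minimal sketch of how a patch trigger might be stamped onto a fraction of training images, with the poisoned samples relabeled to an attacker-chosen target class. This is not the method of any specific attack from the literature; all function names, parameters, and data shapes here are illustrative assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   patch_size=3, seed=0):
    """Illustrative sketch of trigger-based data poisoning (hypothetical helper).

    Stamps a small bright patch (the "trigger") onto a random fraction of
    training images and relabels them to the attacker's target class. A model
    trained on the result may learn to associate the patch with the target
    class while behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Place the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:] = images.max()
    labels[idx] = target_class
    return images, labels, idx

# Example with random stand-in data (28x28 grayscale images, 10 classes).
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y)
```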