
Robust NAS under adversarial training: benchmark, theory, and beyond

Yongtao Wu · Fanghui Liu · Carl-Johann Simon-Gabriel · Grigorios Chrysos · Volkan Cevher

Halle B #117
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT


Recent developments in neural architecture search (NAS) emphasize the importance of considering robust architectures against malicious data. However, there is a notable absence of benchmark evaluations and theoretical guarantees for searching such robust architectures, especially when adversarial training is considered. In this work, we address these two challenges and make two contributions. First, we release a comprehensive dataset that records both the clean accuracy and the robust accuracy of a large number of adversarially trained networks from the NAS-Bench-201 search space on image datasets. Second, leveraging the neural tangent kernel (NTK) from deep learning theory, we establish a generalization theory for searching architectures in terms of clean accuracy and robust accuracy under multi-objective adversarial training. We believe that our benchmark and theoretical insights will benefit the NAS community through reliable reproducibility, efficient assessment, and a theoretical foundation, particularly in the pursuit of robust architectures.
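To make the adversarial-training setting concrete, the following is a minimal sketch (not the paper's code) of the inner maximization step used in such training: a PGD-style attack that ascends the loss within an L-infinity ball around a clean input. For self-containment it uses a hand-derived gradient for logistic regression rather than a deep network; `eps`, `step`, and `iters` are illustrative values, not ones from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example, label y in {0, 1}.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_x(w, x, y):
    # Gradient of the loss w.r.t. the input: (sigmoid(w.x) - y) * w.
    return (sigmoid(w @ x) - y) * w

def pgd_attack(w, x, y, eps=0.3, step=0.1, iters=10):
    """Projected gradient ascent on the loss within an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(iters):
        # Signed gradient step (maximizes the loss), then project back to the ball.
        x_adv = x_adv + step * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

w = np.array([1.0, -2.0])   # toy model weights
x = np.array([0.5, 0.5])    # clean input
y = 1
x_adv = pgd_attack(w, x, y)
# The adversarial point incurs a strictly higher loss than the clean one.
assert loss(w, x_adv, y) > loss(w, x, y)
```

Adversarial training alternates this inner maximization with the usual outer minimization of the (possibly multi-objective) loss over the network parameters; the benchmark described above records the resulting clean and robust accuracies across architectures.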
