32 Results

Poster
Mon 1:00 Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits
Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia
Poster
Mon 1:00 Stabilized Medical Image Attacks
Gege Qi, Lijun Gong, Yibing Song, Kai Ma, Yefeng Zheng
Spotlight
Mon 4:40 How Benign is Benign Overfitting?
Amartya Sanyal, Puneet Dokania, Varun Kanade, Philip Torr
Poster
Mon 9:00 Shape-Texture Debiased Neural Network Training
Yingwei Li, Qihang Yu, Mingxing Tan, Jieru Mei, Peng Tang, Wei Shen, Alan Yuille, Cihang Xie
Poster
Mon 9:00 InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective
Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
Poster
Mon 17:00 On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang
Poster
Mon 17:00 Meta-Learning with Neural Tangent Kernels
Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, Jinhui Xu
Poster
Mon 17:00 Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
Huan Zhang, Hongge Chen, Duane S Boning, Cho-Jui Hsieh
Spotlight
Mon 20:38 Information Laundering for Model Privacy
Xinran Wang, Yu Xiang, Jun Gao, Jie Ding
Poster
Tue 1:00 Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples
Ziang Yan, Yiwen Guo, Jian Liang, Changshui Zhang
Poster
Tue 1:00 Contemplating Real-World Object Classification
Ali Borji
Poster
Tue 9:00 Statistical inference for individual fairness
Subha Maity, Songkai Xue, Mikhail Yurochkin, Yuekai Sun
Poster
Tue 9:00 How Benign is Benign Overfitting?
Amartya Sanyal, Puneet Dokania, Varun Kanade, Philip Torr
Spotlight
Tue 11:30 How Does Mixup Help With Robustness and Generalization?
Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
Poster
Tue 17:00 Generating Adversarial Computer Programs using Optimized Obfuscations
Shashank Srikant, Sijia Liu, Tamara Mitrovska, Shiyu Chang, Quanfu Fan, Gaoyuan Zhang, Una-May O'Reilly
Poster
Tue 17:00 Information Laundering for Model Privacy
Xinran Wang, Yu Xiang, Jun Gao, Jie Ding
Spotlight
Wed 4:30 Stabilized Medical Image Attacks
Gege Qi, Lijun Gong, Yibing Song, Kai Ma, Yefeng Zheng
Poster
Wed 9:00 Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
Cassidy Laidlaw, Sahil Singla, Soheil Feizi
Poster
Wed 9:00 Provably robust classification of adversarial examples with detection
Fatemeh Sheikholeslami, Ali Lotfi, Zico Kolter
Poster
Wed 9:00 How Does Mixup Help With Robustness and Generalization?
Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou
Poster
Wed 17:00 Beyond Categorical Label Representations for Image Classification
Boyuan Chen, Yu Li, Sunand Raghupathi, Hod Lipson
Poster
Wed 17:00 Effective and Efficient Vote Attack on Capsule Networks
Jindong Gu, Baoyuan Wu, Volker Tresp
Poster
Wed 17:00 Evaluations and Methods for Explanation through Robustness Analysis
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep K Ravikumar, Seungyeon Kim, Sanjiv Kumar, Cho-Jui Hsieh
Poster
Thu 9:00 Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds
Bogdan Georgiev, Lukas Franken, Mayukh Mukherjee
Poster
Thu 9:00 Improving VAEs' Robustness to Adversarial Attack
Matthew Willetts, Alexander Camuto, Tom Rainforth, S Roberts, Christopher Holmes
Poster
Thu 17:00 LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John P Dickerson, Gavin Taylor, Tom Goldstein
Poster
Thu 17:00 Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models
Mitchell Hill, Jonathan Mitchell, Song-Chun Zhu
Poster
Thu 17:00 ARMOURED: Adversarially Robust MOdels using Unlabeled data by REgularizing Diversity
Kangkang Lu, Cuong Nguyen, Xun Xu, Kiran Chari, Yu Jing Goh, Chuan-Sheng Foo
Workshop
Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers
Francesco Croce
Workshop
Detecting Adversarial Attacks through Neural Activations
Graham Annett
Workshop
Boosting black-box adversarial attack via exploiting loss smoothness
Hoang Tran
Workshop
Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
Dequan Wang, David Wagner, Trevor Darrell