Poster
Certified Robustness Under Bounded Levenshtein Distance
Elias Abad Rocamora · Grigorios Chrysos · Volkan Cevher
Hall 3 + Hall 2B #339
Abstract:
Text classifiers suffer from small perturbations that, if chosen adversarially, can dramatically change the output of the model. Verification methods can provide robustness certificates against such adversarial perturbations by computing a sound lower bound on the robust accuracy. Nevertheless, existing verification methods incur prohibitive costs and cannot practically handle Levenshtein distance constraints. We propose the first method for computing the Lipschitz constant of convolutional classifiers with respect to the Levenshtein distance. We use these Lipschitz constant estimates for training 1-Lipschitz classifiers. This enables computing the certified radius of a classifier in a single forward pass. Our method, LipsLev, obtains 38.80% and 13.93% verified accuracy at distances 1 and 2, respectively, on the AG-News dataset, while being four orders of magnitude faster than existing approaches. We believe our work can open the door to more efficient verification in the text domain.
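As a rough illustration of the single-forward-pass certification the abstract describes, the sketch below derives a certified Levenshtein radius from the logit margin of a Lipschitz classifier. The function name, the scalar `lip` constant, and the PyTorch setup are assumptions for illustration only, not the paper's LipsLev implementation (which computes Lipschitz constants of convolutional classifiers with respect to the Levenshtein distance).

```python
import torch

def certified_radius(logits: torch.Tensor, lip: float = 1.0) -> torch.Tensor:
    """Certified Levenshtein radius from a single forward pass (illustrative).

    If every logit is `lip`-Lipschitz w.r.t. the Levenshtein distance, a
    perturbation at distance d shifts each logit by at most lip * d, so the
    prediction cannot flip while the top-1 margin exceeds 2 * lip * d.
    """
    top2 = logits.topk(2, dim=-1).values       # (batch, 2): top logit and runner-up
    margin = top2[..., 0] - top2[..., 1]       # top-1 margin per example
    return margin / (2.0 * lip)                # largest certifiable distance d

# Hypothetical logits from a 1-Lipschitz model (as trained in the paper).
logits = torch.tensor([[3.5, 1.0, 0.2], [2.1, 2.0, 0.5]])
print(certified_radius(logits))  # tensor([1.2500, 0.0500])
```

An input is then verified at distance k whenever its certified radius is at least k, which is how a single forward pass can yield verified accuracy at distances 1 and 2.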