Poster in Workshop: Socially Responsible Machine Learning

Lost In Translation: Generating Adversarial Examples Robust to Round-Trip Translation

Neel Bhandari · Pin-Yu Chen


Abstract:

Language models today achieve high accuracy across a large number of downstream tasks. However, they remain susceptible to adversarial attacks, particularly those in which the adversarial examples maintain considerable similarity to the original text. Given the multilingual nature of text, the effectiveness of adversarial examples across translations, and the extent to which machine translation can improve the robustness of adversarial examples, remain largely unexplored. In this paper, we present a comprehensive study of the robustness of current text adversarial attacks to round-trip translation. We demonstrate that six state-of-the-art text-based adversarial attacks do not maintain their efficacy after round-trip translation. Furthermore, we introduce an intervention-based solution to this problem by integrating machine translation into the process of adversarial example generation, and we demonstrate increased robustness to round-trip translation. Our results indicate that finding adversarial examples robust to round-trip translation can help identify weaknesses of language models that are shared across languages, and they motivate further research into multilingual adversarial attacks.
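The robustness check at the heart of the study can be sketched as follows: translate an adversarial example into a pivot language and back, then test whether it still flips the victim model's prediction. The snippet below is a minimal illustration under assumptions, not the authors' implementation; it assumes the Hugging Face transformers library with MarianMT English-French checkpoints (Helsinki-NLP/opus-mt-en-fr and Helsinki-NLP/opus-mt-fr-en) for the round trip, and a hypothetical classify function standing in for the victim model.

```python
# Minimal sketch of a round-trip-translation robustness check for a text
# adversarial example. Assumes Hugging Face `transformers` with MarianMT
# English<->French checkpoints; `classify` is a hypothetical stand-in for the
# victim model and is not part of the paper's code.
from transformers import MarianMTModel, MarianTokenizer


def load_mt(model_name):
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    return tokenizer, model


def translate(text, tokenizer, model):
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)


def round_trip(text,
               fwd_name="Helsinki-NLP/opus-mt-en-fr",
               bwd_name="Helsinki-NLP/opus-mt-fr-en"):
    # English -> pivot language -> English
    fwd_tok, fwd_model = load_mt(fwd_name)
    bwd_tok, bwd_model = load_mt(bwd_name)
    pivot = translate(text, fwd_tok, fwd_model)
    return translate(pivot, bwd_tok, bwd_model)


def attack_survives_round_trip(adv_text, true_label, classify):
    """Return True if the adversarial example still fools the victim model
    after being translated to the pivot language and back."""
    recovered = round_trip(adv_text)
    return classify(recovered) != true_label
```

The intervention described in the abstract would go one step further: folding the translation step into the adversarial example generation loop itself, so that only perturbations that survive the round trip are retained.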
