Real-Time Neural Voice Camouflage

Mia Chiquier · Chengzhi Mao · Carl Vondrick

Keywords: [ predictive models ] [ privacy ]

Abstract

Poster: Spot F2, Mon 25 Apr 10:30 a.m. PDT — 12:30 p.m. PDT
Oral presentation: Oral 1: AI Applications, Mon 25 Apr 5 p.m. PDT — 6:30 p.m. PDT


Automatic speech recognition systems have created exciting possibilities for applications; however, they also create opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without disrupting the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieve real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured by word error rate, and 6.6x more as measured by character error rate. We furthermore demonstrate that our approach is practically effective in realistic environments with complex scene geometries.
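The real-time constraint the abstract describes can be illustrated with a toy sketch: a perturbation optimized against the current audio chunk is stale by the time it can be played, whereas a predictive attack optimizes against a forecast of the upcoming chunk. Everything below (the energy-based "recognizer" stand-in, the sign-based perturbation, and the cheating forecaster that knows the signal's drift) is a hypothetical illustration of the streaming interface, not the paper's actual model or attack.

```python
import numpy as np

np.random.seed(0)

# Toy stand-in for a recognizer objective: the "attack" tries to reduce
# the energy of the chunk (a proxy for degrading recognition).
def recognizer_score(chunk):
    return float(np.mean(chunk ** 2))

def attack_for(chunk, eps=0.1):
    # Perturbation crafted for a specific chunk; a stand-in for a
    # gradient-based attack such as PGD.
    return -eps * np.sign(chunk)

# Streaming audio: each chunk drifts in phase, so an attack computed on
# chunk t is stale when it can finally be applied, at chunk t+1.
chunks = [np.sin(np.linspace(0, 3, 128) + 0.5 * t) for t in range(8)]

def forecast_next(t):
    # Hypothetical learned forecaster; here it cheats by using the known
    # drift, purely to illustrate the predictive-attack interface.
    return np.sin(np.linspace(0, 3, 128) + 0.5 * (t + 1))

online_scores, predictive_scores = [], []
for t in range(len(chunks) - 1):
    future = chunks[t + 1]
    # Online attack: fit to the *current* chunk, applied one step late.
    online_scores.append(recognizer_score(future + attack_for(chunks[t])))
    # Predictive attack: fit to a forecast of the *future* chunk.
    predictive_scores.append(
        recognizer_score(future + attack_for(forecast_next(t)))
    )

print("online:", np.mean(online_scores))
print("predictive:", np.mean(predictive_scores))
```

In this toy setup the predictive perturbation matches the future chunk's signs everywhere, so it lowers the recognizer score more than the stale online perturbation, mirroring the paper's motivation for forecasting the attack vector.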
