

Oral in Affinity Workshop: Tiny Papers Oral Session 2

Revamp: Automated Simulations of Adversarial Attacks on Arbitrary Objects in Realistic Scenes

Matthew Hull · Zijie Wang · Polo Chau


Abstract:

Deep learning models, such as those used in autonomous vehicles, are vulnerable to adversarial attacks in which attackers place adversarial objects in the environment to induce incorrect detections. While generating such adversarial objects in the digital realm is well studied, successfully transferring these attacks to the physical realm remains challenging, especially when accounting for real-world environmental factors. We address these challenges with REVAMP, a first-of-its-kind Python library for creating attack scenarios with arbitrary objects in scenes with realistic environmental factors such as lighting, reflection, and refraction. REVAMP empowers researchers and practitioners to swiftly explore diverse scenarios, offering a wide range of configurable options for experiment design and using differentiable rendering to replicate physically plausible adversarial objects. REVAMP is open-source and available at https://anonymous.4open.science/r/revamp, and a demo video is available at https://youtu.be/ogCRO15R7-E.
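The core loop the abstract describes, optimizing an object's appearance through a differentiable renderer so that a downstream detector's confidence drops, can be caricatured with a toy, dependency-free sketch. Everything here is illustrative and is not REVAMP's actual API: the `render`, `detect`, and `loss` functions are scalar stand-ins for a real renderer and object detector, and the finite-difference gradient stands in for the automatic differentiation a real differentiable-rendering pipeline would provide.

```python
# Toy sketch of a differentiable-rendering adversarial attack loop.
# A real pipeline would render a textured 3D object under scene lighting
# and backpropagate a detection loss through the renderer to the texture.

def render(texture):
    # Stand-in "renderer": maps a texture parameter to a pixel intensity.
    return 0.5 * texture + 0.2

def detect(pixel):
    # Stand-in "detector": confidence that the true object is detected.
    import math
    return 1.0 / (1.0 + math.exp(-(4.0 * pixel - 1.0)))

def loss(texture):
    # The attacker wants to MINIMIZE the detector's confidence.
    return detect(render(texture))

def grad(f, x, eps=1e-6):
    # Finite-difference gradient; real frameworks use autodiff instead.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

texture = 1.0            # initial (benign) texture parameter
lr = 0.5                 # gradient-descent step size
for _ in range(200):
    texture -= lr * grad(loss, texture)

# After optimization, the detector's confidence on the rendered
# adversarial texture is far lower than on the benign one.
```

The design point this toy mirrors is that, because the rendering step is differentiable, gradients can flow from the detector's output all the way back to the object's physical parameters, which is what makes the optimized object physically plausible rather than a pixel-space artifact.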
