Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors

Zuxuan Wu, Ser-Nam Lim, Larry S. Davis, Tom Goldstein

Abstract


We present a systematic study of adversarial attacks on state-of-the-art object detection frameworks. Using standard detection datasets, we train patterns that suppress the objectness scores produced by a range of commonly used detectors, and ensembles of detectors. Through extensive experiments, we benchmark the effectiveness of adversarially trained patches under both white-box and black-box settings, and quantify transferability of attacks between datasets, object classes, and detector models. Finally, we present a detailed study of physical world attacks using printed posters and wearable clothes, and rigorously quantify the performance of such attacks with different metrics.
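To make the objectness-suppression idea concrete, below is a minimal, hypothetical sketch of how such a patch could be optimized; it is not the authors' released code. `ToyDetector`, `apply_patch`, and `train_patch` are illustrative names, and a real attack would target an actual detector (e.g. YOLO or Faster R-CNN), apply random patch transformations, and possibly average the loss over an ensemble of detectors.

```python
# Hypothetical sketch: optimize an adversarial patch that suppresses a detector's
# objectness scores. The "detector" here is a stand-in stub so the script runs.
import torch
import torch.nn.functional as F


class ToyDetector(torch.nn.Module):
    """Stand-in detector that maps an image to per-location objectness logits."""

    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, 16, kernel_size=3, stride=8, padding=1)
        self.obj_head = torch.nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        return self.obj_head(F.relu(self.backbone(x)))  # (B, 1, H/8, W/8)


def apply_patch(images, patch, top, left):
    """Paste the patch onto each image at a fixed location (no warping here)."""
    out = images.clone()
    ph, pw = patch.shape[-2:]
    out[:, :, top:top + ph, left:left + pw] = patch
    return out


def train_patch(detector, loader, patch_size=64, steps=100, lr=0.01):
    """Optimize patch pixels so that objectness scores on patched images drop."""
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, images in zip(range(steps), loader):
        patched = apply_patch(images, patch.clamp(0, 1), top=32, left=32)
        obj_logits = detector(patched)
        # Suppression loss: push sigmoid objectness toward zero everywhere.
        loss = torch.sigmoid(obj_logits).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)


if __name__ == "__main__":
    detector = ToyDetector().eval()
    for p in detector.parameters():
        p.requires_grad_(False)  # only the patch is optimized
    fake_loader = [torch.rand(2, 3, 256, 256) for _ in range(20)]
    adv_patch = train_patch(detector, fake_loader, steps=20)
    print("trained patch:", adv_patch.shape)
```

In this sketch the detector is frozen and only the patch pixels receive gradients; attacking an ensemble would simply sum the suppression loss over several detectors, and a physical-world version would add random scaling, rotation, and color jitter to the pasted patch before each forward pass.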
