Enhanced Accuracy and Robustness via Multi-Teacher Adversarial Distillation

Shiji Zhao, Jie Yu, Zhenlong Sun, Bo Zhang, Xingxing Wei

Abstract


"Adversarial training is an effective approach for improving the robustness of deep neural networks against adversarial attacks. Although bringing reliable robustness, adversarial training (AT) will reduce the performance of identifying clean examples. Meanwhile, Adversarial training can bring more robustness for large models than small models. To improve the robust and clean accuracy of small models, we introduce the Multi-Teacher Adversarial Robustness Distillation (MTARD) to guide the adversarial training process of small models. Specifically, MTARD uses multiple large teacher models, including an adversarial teacher and a clean teacher to guide a small student model in the adversarial training by knowledge distillation. In addition, we design a dynamic training algorithm to balance the influence between the adversarial teacher and clean teacher models. A series of experiments demonstrate that our MTARD can outperform the state-of-the-art adversarial training and distillation methods against various adversarial attacks."
