Defend Against Adversarial Samples By Using Perceptual Hash

  • Changrui Liu, Wuhan University
  • Dengpan Ye
Keywords: Deep neural networks, perceptual hash, adversarial samples, attack defense

Abstract

Image classifiers based on Deep Neural Networks (DNNs) have been proven to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the classifiers. In this paper, we propose a novel method based on perceptual hashing that appends an extra judgment layer and a time layer to any DNN. Our main purpose is to disrupt the perturbation-generation process by comparing the similarity of incoming images. To validate our idea, we defend against two representative attack methods (a white-box attack and a black-box attack) on different DNNs and show that, after applying our defense, the attack success rate for all DNNs decreases significantly. More specifically, for the white-box attack, the attack success rate is reduced by an average of 36.3%. For the black-box attack, the average attack success rates of targeted and non-targeted attacks are reduced by 72.8% and 76.7%, respectively. The proposed method is simple and effective, and provides a new way to defend against adversarial samples.
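The abstract does not specify which perceptual hash the judgment layer uses; the following is only a minimal sketch, not the authors' implementation. It assumes an average-hash (aHash) fingerprint and a hypothetical `is_suspicious` check that flags a query image whose hash is nearly identical to a recently seen one, since iterative perturbation generation tends to submit many near-duplicate images.

```python
# Minimal sketch (assumed details, not the paper's implementation):
# compare average-hash fingerprints of incoming images and flag
# near-duplicates, which are characteristic of iterative attacks.
from PIL import Image
import numpy as np


def average_hash(img: Image.Image, hash_size: int = 8) -> np.ndarray:
    """Return a hash_size x hash_size boolean fingerprint of the image."""
    small = img.convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = np.asarray(small, dtype=np.float32)
    return pixels > pixels.mean()  # bit is set where pixel exceeds the mean


def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Number of differing bits between two fingerprints."""
    return int(np.count_nonzero(h1 != h2))


def is_suspicious(query: Image.Image, recent: Image.Image, threshold: int = 5) -> bool:
    """Flag a query that is perceptually near-identical to a recent image."""
    return hamming_distance(average_hash(query), average_hash(recent)) <= threshold
```

In such a setup, the threshold and the window of recent images to compare against would be tuning choices; the paper's judgment and time layers may differ.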

Published
2020-05-21
Section
Articles on Computational Materials / Mechanics