
Multiple-Adversarial_Examples_attack

The rise and fall of six dynasties pass like a dream; the fleeting moons startle me with the flight of time. Even if the year turns cold and the road is long, this resolve shall never be taken away.

These are adversarial-example attacks against typical deep learning models: image classifiers, Faster R-CNN, and YOLO.

For a selection of representative image classification and object detection networks, we design targeted methods for generating adversarial examples. The perturbed test samples remain visually indistinguishable from the originals, yet they deceive the recognition models into producing completely wrong outputs.
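The specific attacks here follow the methods in the references below (one-pixel attack, Daedalus, and transferable attacks). As a minimal, generic sketch of the underlying idea only, not this repository's exact methods, the following PyTorch snippet shows an FGSM-style perturbation under an L-infinity budget; the names model, x, y, and epsilon are assumed placeholders.

```python
# Minimal FGSM sketch (a generic illustration, NOT this repository's attacks).
# Assumes a pretrained classifier `model`, an input batch `x` with pixel
# values in [0, 1], ground-truth labels `y`, and a perturbation budget epsilon.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarial copy of x within an L-infinity budget of epsilon."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```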

Dependencies

  • TensorFlow, Keras
  • PyTorch
  • NumPy
  • Matplotlib
  • Jupyter Notebook, Google Colab
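A quick sanity check that the main dependencies import correctly (a sketch; the import names are the standard ones assumed from the list above):

```python
# Print the installed versions of the core dependencies.
import tensorflow as tf
import torch
import numpy
import matplotlib

print(tf.__version__, torch.__version__, numpy.__version__, matplotlib.__version__)
```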

Reference

[1] Su, Jiawei, Danilo Vasconcellos Vargas, and Kouichi Sakurai. "One pixel attack for fooling deep neural networks." IEEE Transactions on Evolutionary Computation 23.5 (2019): 828-841.
[2] Wang, Derui, et al. "Daedalus: Breaking non-maximum suppression in object detection via adversarial examples." arXiv preprint arXiv:1902.02067 (2019).
[3] Wei, Xingxing, et al. "Transferable adversarial attacks for image and video object detection." arXiv preprint arXiv:1811.12641 (2018).