Transferable Sparse Adversarial Attack

Abstract

Deep neural networks have shown their vulnerability to adversarial attacks. In this paper, we focus on the sparse adversarial attack based on the $\ell_0$ norm constraint, which can succeed by modifying only a few pixels of an image. Despite a high attack success rate, prior sparse attack methods achieve low transferability under the black-box protocol because they overfit the target model. Therefore, we introduce a generator architecture to alleviate the overfitting issue and thus efficiently craft transferable sparse adversarial examples. Specifically, the generator decouples the sparse perturbation into amplitude and position components. We carefully design a random quantization operator to optimize these two components jointly in an end-to-end way. Experiments show that our method improves transferability by a large margin under a similar sparsity setting compared with state-of-the-art methods. Moreover, our method achieves superior inference speed, 700$\times$ faster than other optimization-based methods. The code is available at https://github.com/shaguopohuaizhe/TSAA.
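For reference, the $\ell_0$-constrained setting the abstract describes is commonly stated as follows (a generic formulation with placeholder symbols $k$ and $\epsilon$, not necessarily the paper's exact notation): given an image $x$ with label $y$ and a classifier $f$, find a perturbation $\delta$ with

$$f(x+\delta) \neq y \quad \text{s.t.} \quad \|\delta\|_0 \le k, \quad \|\delta\|_\infty \le \epsilon,$$

where $\|\delta\|_0$ counts the number of modified pixels and $\epsilon$ bounds the per-pixel amplitude; on 8-bit images, Eps = 255 (as in the tables below) leaves the amplitude effectively unconstrained.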

Publication
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

We propose a novel method to generate transferable sparse adversarial perturbations.

Figure 1: Pipeline of our method

  • Figure 1 shows the overall pipeline of our method. Our framework decouples the adversarial perturbation into two components that control the distortion magnitude and the perturbed pixel locations, respectively (a minimal sketch of this decoupling follows below).
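To make the decoupling concrete, here is a minimal PyTorch sketch. This is an illustration under assumptions, not the authors' code: the two convolutional "heads" are hypothetical stand-ins for the generator's decoder branches, and Bernoulli sampling with a straight-through gradient is only one plausible reading of the paper's random quantization operator.

```python
import torch
import torch.nn as nn


class RandomBinarizeSTE(torch.autograd.Function):
    """Sample a hard 0/1 mask from per-pixel probabilities; pass gradients
    straight through so the position branch stays trainable end-to-end."""

    @staticmethod
    def forward(ctx, prob):
        return torch.bernoulli(prob)  # stochastic 0/1 quantization

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class SparsePerturbationGenerator(nn.Module):
    def __init__(self, eps=1.0):  # eps=1.0 ~ Eps=255 on [0,1]-scaled images
        super().__init__()
        self.eps = eps
        # Hypothetical single-layer branches; any encoder-decoder works here.
        self.magnitude_head = nn.Conv2d(3, 3, 3, padding=1)  # amplitude branch
        self.position_head = nn.Conv2d(3, 1, 3, padding=1)   # position branch

    def forward(self, x):
        amp = torch.tanh(self.magnitude_head(x)) * self.eps  # in [-eps, eps]
        prob = torch.sigmoid(self.position_head(x))          # per-pixel keep-probability
        mask = RandomBinarizeSTE.apply(prob)                 # sparse 0/1 position mask
        delta = amp * mask                                   # decoupled sparse perturbation
        return torch.clamp(x + delta, 0.0, 1.0), prob
```

In training, the adversarial loss on the clamped output would be combined with a sparsity penalty on `prob` (e.g., its mean) so that few pixels are selected; the straight-through gradient is what lets the hard position mask and the amplitude branch be optimized jointly, matching the end-to-end optimization the abstract describes.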

Figure 2: Generated images of GreedyFool and our method

  • Figure 2 shows adversarial images generated by GreedyFool and by our method. Under the Eps = 255 constraint, the perturbation is only marginally visible for both GreedyFool and our method.

Table 1: Eps = 255 constrained non-targeted attack transferability comparison on the ImageNet dataset.

  • Table 1 shows quantitative results for Eps = 255 on ImageNet. As the sparsity level increases, the transferability of the baselines improves, while our method consistently outperforms them by a large margin.

Table 2: Comparison with generator-based dense attacks.

  • Table 2 compares our method with generator-based dense attacks. The transfer rate of our method is competitive with the two dense attacks, while our perturbation is sparser.

Citation

@InProceedings{He_2022_CVPR,
    author    = {He, Ziwen and Wang, Wei and Dong, Jing and Tan, Tieniu},
    title     = {Transferable Sparse Adversarial Attack},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {14963--14972}
}
Ziwen He (何子文)
Ph.D., jointly supervised, Class of 2023

Research interests include AI security and adversarial examples.

Wei Wang (王伟)
Associate Professor, Master's supervisor

Research interests include multimedia content security, AI security, and multimodal content analysis and understanding.

Jing Dong (董晶)
Professor, Master's supervisor

Research interests include multimedia content security, AI security, and multimodal content analysis and understanding. For more details, visit: http://cripac.ia.ac.cn/people/jdong

Tieniu Tan (谭铁牛)
Professor, Ph.D. supervisor

Research interests include image processing, computer vision, and pattern recognition, with current focus on biometrics, image and video understanding, and information content security.