视觉对抗样本生成技术概述 (Overview of Visual Adversarial Example Generation Techniques)

Abstract

With the advent of deep learning, artificial intelligence (AI) has entered a new period of rapid development. However, the privacy, security, and ethical issues it raises have also drawn increasing public concern. Adversarial samples, which expose the vulnerability of AI and especially of deep learning models, have come to the fore in recent years, making it necessary to address such problems in the practical application of AI technology. This paper gives a brief review of adversarial sample generation under white-box and black-box attack protocols, summarizing the related techniques at three levels: the signal level, the content level, and the semantic level. We hope this paper helps readers better understand the nature of adversarial samples, which may in turn improve the robustness, security, and interpretability of learned models.
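To make the white-box, signal-level setting concrete, below is a minimal illustrative sketch (not taken from the paper) of the classic one-step fast gradient sign method on a toy logistic-regression "model". All names, weights, and the epsilon value are hypothetical, chosen only to show the core idea: perturb the input along the sign of the loss gradient, within an L-infinity budget.

```python
import numpy as np

def logistic_loss(w, x, y):
    """Cross-entropy loss of the linear classifier sigmoid(w.x) on label y."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    """One white-box gradient step: move x by eps along sign(dL/dx)."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    grad_x = (p - y) * w           # analytic gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)             # fixed "model" weights (hypothetical)
x = rng.normal(size=8)             # a clean input sample
y = 1.0                            # its true label
x_adv = fgsm(w, x, y, eps=0.1)

loss_clean = logistic_loss(w, x, y)
loss_adv = logistic_loss(w, x_adv, y)
# loss_adv exceeds loss_clean, while the perturbation stays within
# an L-infinity ball of radius eps around the clean input.
```

In a deep-learning setting the analytic gradient above would be replaced by backpropagation through the network, but the signal-level principle is the same: a small, norm-bounded perturbation of the raw input that increases the model's loss.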

Publication

《信息安全学报》 (Journal of Cyber Security)
王伟 (Wei Wang)
Associate Professor, Master's Supervisor

Research interests: multimedia content security, AI security, and multimodal content analysis and understanding.

董晶 (Jing Dong)
Professor, Master's Supervisor

Research interests: multimedia content security, AI security, and multimodal content analysis and understanding. For details, visit: http://cripac.ia.ac.cn/people/jdong

何子文 (Ziwen He)
Ph.D., jointly supervised, Class of 2023

Research interests: AI security and adversarial samples.

孙哲南 (Zhenan Sun)
Professor, Doctoral Supervisor

Research interests: biometric recognition and computer vision.