Adversarial Analysis for Source Camera Identification

Abstract

Recent studies have highlighted the vulnerability of convolutional neural networks (CNNs) to adversarial attacks, which also calls into question the reliability of forensic methods. Existing adversarial attacks generate one-to-one noise, i.e., a perturbation tied to a single image, which means these methods do not learn the underlying fingerprint information. We therefore introduce two powerful attacks: the fingerprint copy-move attack and the joint feature-based auto-learning attack. To validate the performance of these attacks, we go a step further and introduce a stronger defense mechanism, relation mismatch, which expands the characterization differences among classifiers within the same classification network. Extensive experiments show that relation mismatch is superior at recognizing adversarial examples and confirm that the proposed fingerprint-based attacks are more powerful. Both proposed attacks also show excellent transferability to unknown samples. The PyTorch implementations of these methods can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/Source-attack.
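For context, below is a minimal sketch of the kind of "one-to-one noise" attack the abstract contrasts against: a single-step FGSM perturbation computed per image, so nothing generalizable (such as a camera fingerprint) is learned. This is an illustration, not the paper's proposed method; the model, label, and epsilon are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft a per-image adversarial example with one FGSM step.

    The perturbation is derived from the gradient of this single
    image, hence "one-to-one noise": it does not transfer any
    learned fingerprint information to other inputs.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0, 1).detach()
```

By contrast, the fingerprint-based attacks proposed in the paper aim to learn and exploit camera fingerprint information rather than recomputing a fresh perturbation for each image.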

Publication
IEEE Transactions on Circuits and Systems for Video Technology
王伟 (Wang Wei)
Associate Professor, Master's Supervisor

His research focuses on multimedia content security, artificial intelligence security, and multimodal content analysis and understanding.