Guided Erasable Adversarial Attack (GEAA) towards Shared Data Protection

Abstract

In recent years, there has been increasing interest in studying adversarial attacks, which pose potential risks to deep learning applications and have stimulated numerous studies, e.g., on improving the robustness of deep neural networks. In this work, we propose a novel double-stream architecture, Guided Erasable Adversarial Attack (GEAA), for protecting high-quality labeled data with high commercial value under data-sharing scenarios. GEAA contains three phases: the double-stream adversarial attack, denoising reconstruction, and watermark extraction. Specifically, the double-stream adversarial attack injects erasable perturbations into the training data to prevent database abuse. The denoising reconstruction rebuilds traceable denoised data from the adversarial examples. The watermark extraction recovers identity information from the denoised data for copyright protection. Additionally, we introduce an annealing optimization strategy to balance these phases and a boundary constraint to degrade the availability of the adversarial examples. Through extensive experiments, we demonstrate the effectiveness of the proposed framework in data protection. The PyTorch implementation of GEAA can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/GEAA-for-data-protection.
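The core "erasable" idea can be illustrated with a minimal sketch: a data holder injects a perturbation derived from a secret key, so only someone with that key can regenerate and subtract it to recover the clean data. This is not GEAA's learned double-stream attack or its denoising network; the function names, the key-seeded noise scheme, and the budget `EPS` below are illustrative assumptions.

```python
import numpy as np

EPS = 0.03  # perturbation budget (hypothetical value, not from the paper)

def keyed_perturbation(shape, key, eps=EPS):
    """Regenerate the same pseudo-random perturbation from a secret key."""
    rng = np.random.default_rng(key)
    return eps * np.sign(rng.standard_normal(shape))

def protect(x, key, eps=EPS):
    """Inject the key-seeded perturbation into the data (the 'attack')."""
    return x + keyed_perturbation(x.shape, key, eps)

def erase(x_adv, key, eps=EPS):
    """Subtract the regenerated perturbation to recover clean data."""
    return x_adv - keyed_perturbation(x_adv.shape, key, eps)

# Round trip on a toy image batch kept inside [EPS, 1 - EPS], so no
# clipping is needed and recovery is exact up to floating-point error.
rng = np.random.default_rng(0)
x = EPS + (1.0 - 2.0 * EPS) * rng.random((2, 3, 8, 8))
x_adv = protect(x, key=42)
x_rec = erase(x_adv, key=42)
print(np.abs(x_rec - x).max())  # ~0: the key holder can erase the noise
```

Unauthorized users who train on `x_adv` see perturbed data, while the key holder recovers a near-exact copy; GEAA additionally learns the perturbation jointly with a denoiser and embeds a watermark for traceability.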

Publication
IEEE Transactions on Information Forensics and Security
王伟
Associate Professor; Master's Supervisor

His research focuses on multimedia content security, AI security, and multimodal content analysis and understanding.