Guided Erasable Adversarial Attack (GEAA) towards Shared Data Protection


In recent years, there has been increasing interest in adversarial attacks, which pose potential risks to deep learning applications and have stimulated numerous studies, e.g., on improving the robustness of deep neural networks. In this work, we propose a novel double-stream architecture, the Guided Erasable Adversarial Attack (GEAA), for protecting high-quality labeled data with high commercial value in data-sharing scenarios. GEAA comprises three phases: double-stream adversarial attack, denoising reconstruction, and watermark extraction. Specifically, the double-stream adversarial attack injects erasable perturbations into the training data to prevent database abuse. The denoising reconstruction phase rebuilds traceable denoised data from the adversarial examples. The watermark extraction phase recovers identity information from the denoised data for copyright protection. Additionally, we introduce an annealing optimization strategy to balance these phases and a boundary constraint to degrade the availability of the adversarial examples. Through extensive experiments, we demonstrate the effectiveness of the proposed framework for data protection. The PyTorch implementations of GEAA can be downloaded from an open-source GitHub project.

IEEE Transactions on Information Forensics and Security