Contrastive Knowledge Transfer for Deepfake Detection with Limited Data

Abstract

Forensic methods have recently shown remarkable progress in detecting maliciously crafted fake images. Without exception, however, training a deepfake detection model requires a large number of facial images, and the resulting models are often unsuitable for real-world applications because of their large size and slow inference. Data-efficient deepfake detection is therefore of great importance. In this paper, we propose a contrastive distillation method that maximizes a lower bound on the mutual information between the teacher and the student to further improve the student's accuracy in a data-limited setting. We observe that, unlike models for other image classification tasks, deepfake detection models remain highly robust when the amount of training data drops. The proposed knowledge transfer approach outperforms both a vanilla few-sample training baseline and other state-of-the-art knowledge transfer methods. To our knowledge, we are the first to perform few-sample knowledge distillation for deepfake detection.
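The core technical idea, maximizing a lower bound on teacher–student mutual information with a contrastive objective, can be illustrated with a minimal PyTorch sketch. This assumes an InfoNCE-style bound with in-batch negatives; the function name, temperature value, and choice of estimator are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumption): an InfoNCE-style contrastive distillation
# loss. InfoNCE is one standard lower bound on mutual information; the
# paper's exact estimator, projection heads, and hyperparameters are not
# given here, so every name below is illustrative.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_feats: torch.Tensor,
                                  teacher_feats: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE between student and teacher embeddings of one batch.

    The i-th (student, teacher) pair is the positive; every other
    teacher embedding in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities.
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1).detach()  # teacher is frozen

    # logits[i, j] = sim(student_i, teacher_j) / temperature
    logits = s @ t.t() / temperature

    # Matching pairs lie on the diagonal; cross-entropy over the batch
    # gives the bound I(s; t) >= log(batch_size) - loss.
    targets = torch.arange(s.size(0), device=s.device)
    return F.cross_entropy(logits, targets)
```

In the limited-data setting the abstract describes, a term like this would presumably be combined with the usual real/fake classification loss on the student; that weighting is not specified here.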

Publication
Proceedings - International Conference on Pattern Recognition
Dongze Li
PhD candidate, jointly supervised

Master's student at the Institute of Automation, class of 2020.

Wenqi Zhuo
Master's degree, class of 2023

Master's student at the Institute of Automation, class of 2020.

Wei Wang
Associate Professor, master's supervisor

Research focuses on multimedia content security, artificial intelligence security, and multimodal content analysis and understanding.

Jing Dong
Professor, master's supervisor

Research focuses on multimedia content security, artificial intelligence security, and multimodal content analysis and understanding. For more details, visit: http://cripac.ia.ac.cn/people/jdong