Recent studies highlight the vulnerability of convolutional neural networks (CNNs) to adversarial attacks, which also calls into question the reliability of forensic methods. Existing adversarial attacks generate one-to-one noise, meaning they do not learn fingerprint information. We therefore introduce two powerful attacks: the fingerprint copy-move attack and the joint feature-based auto-learning attack. To validate these attacks, we go a step further and introduce a stronger defense mechanism, relation mismatch, which expands the characterization differences between classifiers in the same classification network. Extensive experiments show that relation mismatch is superior at recognizing adversarial examples and that the proposed fingerprint-based attacks are more powerful. Both attacks also transfer well to unknown samples. The PyTorch implementations of these methods can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/Source-attack.
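To make the "one-to-one noise" baseline concrete: a minimal, hypothetical sketch of an FGSM-style perturbation on a toy logistic model, where the noise is computed from each input's own loss gradient and therefore carries no learned fingerprint that transfers across samples. All names and the model here are illustrative assumptions, not the paper's attacks.

```python
# Illustrative sketch only: this is the per-sample ("one-to-one") noise
# baseline the abstract contrasts against, NOT the proposed fingerprint
# copy-move or joint feature-based auto-learning attacks.
import numpy as np

def logistic_loss(x, w, b, y):
    """Binary cross-entropy of a toy logistic model on a single input x."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm_perturb(x, w, b, y, eps=0.05):
    """One-to-one noise: a sign-gradient step derived from this single
    input's loss, so nothing is learned that generalizes to other samples."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w                      # d(loss)/dx for the logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

Because each adversarial example is tied to its own gradient, such noise typically fails on unseen inputs, which is the transferability gap the fingerprint-based attacks above are designed to close.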