Highly realistic AI-generated face-swap imagery, known as deepfakes, can easily deceive human eyes and has drawn much attention. Deepfake detection models have achieved high accuracies and are still improving, but the explanation of their decisions receives little attention in current research. Explanations are important for the credibility of detection results, which is essential in serious applications such as a court of law. Explanation is a hard problem: beyond the fact that deep detection models are black boxes, the particular difficulty is that high-quality fakes often contain no artifacts perceptible to human eyes. We refer to artifacts that models can detect but that are not perceptible to human eyes as subtle artifacts. In this work, we attempt to explain model-detected face-swap images to humans by proposing two simple automatic explanation methods. They enhance the original suspect image to generate more-real and more-fake counterfactual versions of it. By visually contrasting the original suspect image with its counterfactual images, it may become easier for humans to notice subtle artifacts. The two methods operate on the pixel space and the color space respectively; neither requires an extra training process, and both can be applied directly to any trained deepfake detection model. We also carefully design new subjective evaluation experiments to verify the effectiveness of the proposed enhancement methods. Experimental results show that the color-space enhancement method is preferred by the tested subjects for explaining high-quality fake images, compared to the pixel-space method and a baseline attribution-based explanation method. The enhancement methods can serve as a toolset that helps human investigators notice artifacts in detected face-swap images and strengthen evidence.
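The counterfactual-enhancement idea above can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's actual method: it assumes a differentiable detector score `s(x)` in [0, 1] (1 = fake) and nudges the image along the sign of the score's gradient to produce "more fake" and "more real" counterparts. The toy detector here (a sigmoid of the mean pixel value, with an analytic gradient) merely stands in for a trained deepfake detection model.

```python
import numpy as np

def detector_score(x):
    """Toy differentiable detector: sigmoid of the mean pixel intensity.
    A stand-in assumption; a real system would use a trained model."""
    return 1.0 / (1.0 + np.exp(-(x.mean() - 0.5) * 10.0))

def detector_grad(x):
    """Analytic gradient of the toy score w.r.t. each pixel."""
    s = detector_score(x)
    return np.full_like(x, s * (1.0 - s) * 10.0 / x.size)

def enhance(x, direction, step=0.05, n_steps=20):
    """Signed gradient steps toward a more-fake (direction=+1)
    or more-real (direction=-1) counterfactual version."""
    x = x.copy()
    for _ in range(n_steps):
        x = np.clip(x + direction * step * np.sign(detector_grad(x)), 0.0, 1.0)
    return x

rng = np.random.default_rng(0)
img = rng.uniform(0.3, 0.7, size=(8, 8))  # toy grayscale "suspect image"
more_fake = enhance(img, +1)
more_real = enhance(img, -1)
```

Contrasting `img` with `more_fake` and `more_real` is the visual comparison the paper proposes; the color-space variant would apply analogous adjustments in a color representation rather than to raw pixels.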