Deep neural networks (DNNs) have been studied extensively for image recognition and classification, image segmentation, and related tasks. However, recent studies show that DNNs are vulnerable to adversarial examples: a classification network can be fooled by adding small perturbations to clean samples. Designing a general approach that defends against a wide variety of adversarial examples remains challenging. To address this problem, we introduce a defensive method that hinders the generation of effective adversarial examples. Instead of designing a stronger classifier, we build a more robust classification system that acts as a structural black box. By adding a buffer to the classification system, we can efficiently deceive attackers: the true evaluation of the generated adversarial examples often contradicts what the attacker expects. Moreover, we make no assumptions about the underlying attack method; this attack agnosticism demonstrates that the buffer generalizes to unseen adversarial attacks. Extensive experiments indicate that the proposed defense substantially improves the security of DNNs.
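To make the idea concrete, here is a minimal sketch of the attacker/defender mismatch the abstract describes. The abstract does not specify the buffer's construction, so this example assumes a hypothetical input transformation (random-shift quantization) placed in front of a toy classifier; the attack is standard FGSM crafted against the bare classifier. The names `buffer`, `robust_system`, and `fgsm` are illustrative, not the paper's API.

```python
# Sketch only: the "buffer" here is an assumed input transformation, not the
# paper's actual construction. It illustrates how the attacker's view of the
# model can diverge from the deployed system's real prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier standing in for the DNN the attacker targets.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def buffer(x: torch.Tensor, levels: int = 8) -> torch.Tensor:
    """Hypothetical buffer: randomly shifted quantization of pixel values."""
    shift = torch.rand_like(x) / levels
    return torch.clamp(torch.round((x + shift) * levels) / levels, 0.0, 1.0)

def robust_system(x: torch.Tensor) -> torch.Tensor:
    """The black-box system the defender exposes: buffer + classifier."""
    return classifier(buffer(x))

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """FGSM perturbation crafted against the bare classifier,
    i.e., against what the attacker believes the system to be."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y)
    loss.backward()
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()

# Stand-in "clean samples" and labels.
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(x, y)

# The attacker judges success on the bare classifier, but the deployed
# system routes inputs through the buffer, so the two verdicts can differ.
print("attacker's view:", classifier(x_adv).argmax(1).tolist())
print("system's view:  ", robust_system(x_adv).argmax(1).tolist())
```

The design point of the sketch matches the abstract: the defense does not try to harden the classifier itself, but inserts a stage the attacker cannot account for, so perturbations judged successful by the attacker may fail against the full system.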