Transferability of Adversarial Examples to Attack Real World Porn Images Detection Service.
09:30 - 10:00
Adversarial learning aims to understand the weaknesses of machine learning in adversarial environments and to develop protections against potential threats. In object detection and image classification, industry relies on a large number of open-source machine learning models. Prior work has shown that researchers can use white-box access to models such as Faster R-CNN, SSD, VGG, and ResNet to generate adversarial images, and then transfer these examples to attack real-world object detection and image classification systems. In the field of porn image detection, however, the only well-known open-source model is Yahoo's NSFW model. We demonstrate experimentally that adversarial examples transferred from Yahoo's NSFW model can attack real-world porn image detection services, albeit with a lower success rate. Further research shows that by optimizing the loss function and adjusting the attack algorithm, a higher success rate can be achieved with smaller perturbations that remain imperceptible to humans. We call this new attack FDA (Feature Map Destroy Attack). We also propose a method to detect and defend against real-world adversarial images used in illicit online porn.