CAAD VILLAGE - GeekPwn - The Uprising Geekpwn AI/Robotics Cybersecurity Contest U.S. 2018 - Transferable Adversarial Perturbations


Formal Metadata

Title
CAAD VILLAGE - GeekPwn - The Uprising Geekpwn AI/Robotics Cybersecurity Contest U.S. 2018 - Transferable Adversarial Perturbations
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
2018
Language
English

Content Metadata

Abstract
State-of-the-art deep neural network classifiers are highly vulnerable to adversarial examples, which are designed to mislead classifiers with a very small perturbation. However, the performance of black-box attacks (without knowledge of the model parameters) against deployed models always degrades significantly. In this paper, we propose a novel way of generating perturbations for adversarial examples that enables black-box transfer. We first show that maximizing the distance between natural images and their adversarial examples in the intermediate feature maps can improve both white-box attacks (with knowledge of the model parameters) and black-box attacks. We also show that smooth regularization on adversarial perturbations enables transfer across models. Extensive experimental results show that our approach outperforms state-of-the-art methods in both white-box and black-box attacks.

Bruce Hou is a senior security researcher with more than four years of experience in the Tencent Security Platform Department; he mainly focuses on the classification of images and videos, human-machine confrontation, and the attack and defense sides of cyber security. Wen Zhou is a senior security researcher with multiple years of experience in the Tencent Security Platform Department; he mainly focuses on research in computer vision, adversarial examples, and related topics. The Tencent Blade Team was founded by the Tencent Security Platform Department and focuses on security research into AI, the mobile Internet, IoT, wireless devices, and other cutting-edge technologies. So far, the Tencent Blade Team has reported many security vulnerabilities to a large number of international manufacturers, including Google and Apple. In the future, the Tencent Blade Team will continue to make the Internet a safer place for everyone.
Hello everyone, welcome to CAAD Village. Today's first presentation is from the Tencent Blade Team; yesterday they took part in the CAAD CTF, so let's welcome them.

Today I want to talk to you about transferable adversarial perturbations. Before the talk I want to introduce our team members: Bruce Hou, Dr. Wen Zhou, the rest of our team, and me. We also took part in the non-targeted adversarial attack and defense tracks of last year's NIPS competition, and today is only a short introduction to our methods, so if you are interested you can find more details in our ECCV paper.
We all know that a given neural network is easy to fool, but a black-box attack is still a hard job. Our method has two basic ideas. The first is maximizing the distance in the intermediate feature maps, which improves the transferability of the attack. The second is introducing a smooth regularization on the adversarial perturbations, which makes the attack more effective when the network is well defended, for example by denoising or by adversarial training; in those situations the robustness of the perturbation is very important. (A rough sketch of the feature-map idea follows below.) First I will show our results.
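(The talk shows slides rather than code; the following is a minimal PyTorch sketch of the feature-map idea, not the authors' implementation. The model, the choice of layer3, and the random stand-in images are all assumptions made for illustration.)

    import torch
    import torchvision.models as models

    # Any pretrained classifier works for the illustration.
    model = models.resnet50(pretrained=True).eval()

    # Capture an intermediate feature map with a forward hook.
    features = {}
    def hook(module, inp, out):
        features["mid"] = out
    model.layer3.register_forward_hook(hook)

    x = torch.rand(1, 3, 224, 224)                        # stand-in for a natural image
    x_adv = (x + 0.03 * torch.randn_like(x)).clamp(0, 1)  # stand-in perturbed image

    with torch.no_grad():
        model(x);     f_clean = features["mid"]
        model(x_adv); f_adv = features["mid"]

    # The attack's goal is to make this distance large, so the adversarial
    # example lands far from the natural image in intermediate feature space.
    print(torch.norm(f_adv - f_clean))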
This figure shows the last feature maps of the neural network. You can see that our adversarial examples end up far away from the original images in feature space. We also compared two baseline methods, FGSM and its iterative variant, and you can see that in feature space they stay very close to the originals. The large distance is what makes our adversarial examples transfer to different models: different models have different architectures and parameters, so the distance in the feature maps is what matters. So how do we achieve this?
The first term maximizes the loss of our adversarial examples with respect to the original label. The second term is what we really want: maximizing the distance of the feature maps. The key factor is our form of normalization, which decreases the contribution of large values and makes the feature distance much more stable. The last term is a smooth regularization, which punishes discontinuous noise and improves robustness. (A sketch of such a combined loss is given below.)
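(A hedged sketch of a three-term objective in the spirit just described; the weights, the exact normalization, and the smoothness penalty are assumptions, not the paper's precise formulation.)

    import torch
    import torch.nn.functional as F

    def attack_loss(logits, y, f_clean, f_adv, delta, w_feat=1.0, w_smooth=1.0):
        # Term 1: push the prediction away from the original label y.
        ce = F.cross_entropy(logits, y)

        # Term 2: feature-map distance. A signed log transform is one way to
        # damp the contribution of very large activations and stabilize the term.
        g = lambda f: torch.sign(f) * torch.log1p(f.abs())
        feat_dist = torch.norm(g(f_adv) - g(f_clean))

        # Term 3: smoothness penalty on the perturbation delta, punishing the
        # kind of discontinuous noise that denoising defenses remove easily.
        tv = (delta[..., 1:, :] - delta[..., :-1, :]).abs().mean() \
           + (delta[..., :, 1:] - delta[..., :, :-1]).abs().mean()

        # Maximize ce and feat_dist while minimizing tv, i.e. descend on:
        return -(ce + w_feat * feat_dist) + w_smooth * tv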
Here we compare how the different terms influence the recognition accuracy. (Sorry, I am a little bit nervous.) You can see that the smooth regularization helps a lot when the neural network is adversarially trained: the recognition accuracy drops considerably. Maximizing the feature-map distance decreases the accuracy of all the networks, no matter whether the attack is white-box or black-box. In the end we chose the weighting in the last configuration, which performs well against every network.
Next comes the optimization step: we run the update several times, and in our experiments K is equal to 5. (A sketch of the iteration is shown below.)
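(A hedged sketch of the K-step iteration, reusing model, features, x, and attack_loss from the sketches above; the step size, the epsilon ball, and the sign-gradient update are assumptions in the style of iterative attacks, not the paper's exact procedure.)

    K, eps, alpha = 5, 16 / 255, 4 / 255
    y = model(x).argmax(dim=1)            # original (clean) label
    delta = torch.zeros_like(x, requires_grad=True)

    for _ in range(K):
        model(x)                          # refresh clean features via the hook
        f_clean = features["mid"].detach()
        logits = model((x + delta).clamp(0, 1))
        f_adv = features["mid"]

        loss = attack_loss(logits, y, f_clean, f_adv, delta)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on the combined loss
            delta.clamp_(-eps, eps)              # stay inside the L-inf ball
            delta.grad.zero_()

    x_adv = (x + delta).detach().clamp(0, 1)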
We compared our method with several baseline methods as well as state-of-the-art methods. If the network has many layers, only maximizing the loss function is not enough, so maximizing the feature-map distance gives better results; ResNet, for example, is a very deep network, and with the feature-map distance we achieve good performance on it. The robustness is influenced by the smooth regularization, and you can see that for all the networks, no matter white-box or black-box, we get the best results compared with all the other methods.
Motivated by our attack method, we designed our defense method the same way. In the attack we maximize the feature-map distance, but for the defense we want to minimize it. This is the original input, the clean image, and this is an image generated by adversarial methods. We feed both into the neural network and minimize the distance of the feature maps across all layers, and combine that with the classification loss; together, that is our final loss function. The attacker's model architecture and parameters are different from ours, but if we minimize the feature-map distance during training, the adversarial example's features move closer to the original ones, so we are more likely to get the right label in the end. (A sketch of such a defense objective follows below.)
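(A hedged sketch of such a defense objective; which layers are hooked, the MSE distance, and the weighting are assumptions. Here features is a dict filled by forward hooks on several layers, as in the first sketch.)

    import torch
    import torch.nn.functional as F

    def defense_loss(model, features, x_clean, x_adv, y, w_feat=1.0):
        model(x_clean)
        f_clean = {k: v.detach() for k, v in features.items()}  # clean feature maps
        logits_adv = model(x_adv)
        f_adv = dict(features)                                  # adversarial feature maps

        # Pull the adversarial features toward the clean ones, layer by layer.
        feat_term = sum(F.mse_loss(f_adv[k], f_clean[k]) for k in f_adv)

        # Keep the classifier correct on the adversarial input as well.
        ce = F.cross_entropy(logits_adv, y)
        return ce + w_feat * feat_term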
To summarize, our main idea is easy to understand: for the attack we want to maximize the distance of the feature maps, and for the defense we minimize that distance. The smooth regularization is very important now that more and more people pay attention to the security of neural networks: there may be denoising filters or similar preprocessing in front of the network, and smooth regularization makes the attack more effective against them. That is my talk; here are some references for the methods I compared with ours. Thank you. [Applause]