Targeted attacks on image classifiers are difficult to transfer from one model to another; only strong adversarial attacks crafted with knowledge of the classifier can bypass existing defenses. To defend against such attacks, we implement an “adversarial^2 training” method that strengthens existing defenses.

Yao Zhao is an applied scientist at Microsoft AI & Research working on natural language understanding/generation and search ranking. During his Ph.D. at Yale University, he worked in the field of computer vision and optics.

Yuzhe Zhao is a software engineer at Google Research working on natural language understanding. He recently earned his Ph.D. from Yale University. Previously, he received his undergraduate degree in mathematics and physics from Shanghai Jiao Tong University.
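The abstract does not spell out the adversarial^2 training procedure, but it builds on standard adversarial training, in which the model is trained on a mix of clean inputs and adversarial examples crafted against the current model. Below is a minimal sketch of that standard loop, not the authors' exact method: a toy logistic-regression classifier on 2-D points, with FGSM-style perturbations generated at each step (all data, model, and hyperparameters here are illustrative assumptions).

```python
import numpy as np

# Illustrative toy data: two Gaussian clusters, labels 0 and 1.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                    # binary labels
X = rng.normal(0.0, 0.5, (n, 2)) + y[:, None] * 2.0

# Logistic-regression "classifier" with weights w and bias b.
w = np.zeros(2)
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, eps = 0.1, 0.2                           # learning rate, attack budget
for _ in range(200):
    # FGSM-style attack on the current model: the gradient of the
    # cross-entropy loss w.r.t. the input x is (p - y) * w, so the
    # adversarial example moves eps along the sign of that gradient.
    p_clean = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p_clean - y)[:, None] * w)

    # Adversarial training: fit on clean and adversarial examples together.
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p = sigmoid(X_mix @ w + b)
    grad_w = (p - y_mix) @ X_mix / len(y_mix)
    grad_b = (p - y_mix).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Accuracy on clean data and on attacks against the final model.
clean_acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
X_att = X + eps * np.sign((sigmoid(X @ w + b) - y)[:, None] * w)
adv_acc = ((sigmoid(X_att @ w + b) > 0.5) == y).mean()
```

The key design point is that the adversarial examples are regenerated against the model's current parameters on every iteration, so the defense tracks the attack rather than hardening against a single fixed perturbation.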