
Don't Red Team AI Like a Chump

Formal Metadata

Title
Don't Red Team AI Like a Chump
Title of Series
Number of Parts
335
Author
Ariel Herbert-Voss
License
CC Attribution 3.0 Unported:
You are free to use, adapt, copy, distribute, and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
AI needs no introduction as one of the most overhyped technical fields of the last decade. The hysteria around building AI-based systems has also made them a tasty target for folks looking to cause major mischief. However, most of the popular proposed attacks that specifically target AI systems focus on the algorithm rather than the system in which the algorithm is deployed. We'll begin by discussing why this threat model doesn't hold up in realistic scenarios, using facial detection and self-driving cars as primary examples. We will also learn how to red-team AI systems more effectively by treating the data processing pipeline as the primary target.

Ariel Herbert-Voss

Ariel Herbert-Voss is a PhD student at Harvard University, where she specializes in adversarial machine learning, cybersecurity, mathematical optimization, and dumb internet memes. She is an affiliate researcher at the MIT Media Lab and at the Vector Institute for Artificial Intelligence. She is a co-founder and co-organizer of the DEF CON AI Village, and loves all things to do with malicious uses and abuses of AI.