
Don't Red Team AI Like a Chump

Formal Metadata

Title
Don't Red Team AI Like a Chump
Series Title
Number of Parts
335
Author
License
CC Attribution 3.0 Unported:
You may use, change, and reproduce the work or its content in unchanged or changed form for any legal purpose, and distribute and make it publicly available, provided you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Publication Year
Language

Content Metadata

Subject Area
Genre
Abstract
AI needs no introduction as one of the most overhyped technical fields of the last decade. The hysteria around building AI-based systems has also made them a tasty target for folks looking to cause major mischief. However, most popular proposed attacks specifically targeting AI systems focus on the algorithm rather than the system in which the algorithm is deployed. We'll begin by discussing why this threat model doesn't hold up in realistic scenarios, using facial detection and self-driving cars as primary examples. We'll then learn how to red-team AI systems more effectively by treating the data processing pipeline as the primary target.

Ariel Herbert-Voss
Ariel Herbert-Voss is a PhD student at Harvard University, where she specializes in adversarial machine learning, cybersecurity, mathematical optimization, and dumb internet memes. She is an affiliate researcher at the MIT Media Lab and at the Vector Institute for Artificial Intelligence. She is a co-founder and co-organizer of the DEF CON AI Village, and loves all things to do with malicious uses and abuses of AI.