
Hacking Humans with AI as a Service

Formal Metadata

Title
Hacking Humans with AI as a Service
Number of Parts
84
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
As the proliferation of Artificial Intelligence as a Service (AIaaS) products such as OpenAI's GPT-3 API places advanced synthetic media generation capabilities in the hands of a global audience at a fraction of the cost, what does the future hold for AI-assisted social engineering attacks? In our talk, we will present the nuts and bolts of an AIaaS phishing pipeline that was successfully deployed in multiple authorized phishing campaigns. Using both paid and free services, we emulated the techniques that even low-skilled, limited-resource actors could adopt to execute effective AI-assisted phishing campaigns at scale. By repurposing easily accessible personality analysis AIaaS products, we generated persuasive phishing emails that were automatically personalized based on a target's public social media information and created by state-of-the-art natural language generators. We will also discuss how an AI-assisted phishing workflow would impact traditional social engineering teams and operations. Finally, we look at how AIaaS suppliers can mitigate the misuse of their products.

REFERENCES

1. T. Karras, S. Laine, and T. Aila, “A Style-Based Generator Architecture for Generative Adversarial Networks,” arXiv:1812.04948 [cs.NE], 2019.
2. S. Gehrmann, H. Strobelt, and A. M. Rush, “GLTR: Statistical Detection and Visualization of Generated Text,” arXiv:1906.04043 [cs.CL], 2019.
3. G. Jawahar, M. Abdul-Mageed, and L. V. S. Lakshmanan, “Automatic Detection of Machine Generated Text: A Critical Survey,” arXiv:2011.01314 [cs.CL], 2020.
4. J. Seymour and P. Tully, “Weaponizing Data Science for Social Engineering: Automated E2E Spear Phishing on Twitter,” 2016.
5. P. Tully and F. Lee, “Repurposing Neural Networks to Generate Synthetic Media for Information Operations,” 2020.
6. OpenAI, “OpenAI Charter,” OpenAI, 09-Apr-2018. [Online]. Available: https://openai.com/charter/.
7. G. Brockman, M. Murati, and P. Welinder, “OpenAI API,” OpenAI, 11-Jun-2020. [Online]. Available: https://openai.com/blog/openai-api/.
8. A. Pilipiszyn, “GPT-3 Powers the Next Generation of Apps,” OpenAI, 25-Mar-2021. [Online]. Available: https://openai.com/blog/gpt-3-apps/.

We would like to thank contributing author Timothy Lee. Timothy is a security researcher who likes to break things and, in the process, tries to understand how the system works. For the past year, he has been researching iOS security and is starting his journey into iOS vulnerability research. He has also contributed to red team social engineering operations and security tooling, with practical experience in vishing and in-person social engineering. https://www.linkedin.com/in/timothylee0/