AI VILLAGE - DeepPhish: Simulating the Malicious Use of AI

Video in TIB AV-Portal: AI VILLAGE - DeepPhish: Simulating the Malicious Use of AI

Formal Metadata

Title
AI VILLAGE - DeepPhish: Simulating the Malicious Use of AI
Title of Series
Author
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Release Date
2018
Language
English

Content Metadata

Subject Area
Abstract
Machine Learning and Artificial Intelligence have become essential to any effective cyber security and defense strategy against unknown attacks. In the battle against cybercriminals, AI-enhanced detection systems are markedly more accurate than traditional manual classification. Through intelligent algorithms, detection systems have been able to identify patterns and detect phishing URLs with 98.7% accuracy, giving the advantage to defensive teams. However, if AI is being used to prevent attacks, what is stopping cyber criminals from using the same technology to defeat both traditional and AI-based cyber-defense systems? This hypothesis is of urgent importance - there is a startling lack of research on the potential consequences of the weaponization of Machine Learning as a threat actor tool. In this talk, we are going to review how threat actors could exponentially improve their phishing attacks using AI to bypass machine-learning-based phishing detection systems. To test this hypothesis, we designed an experiment in which, by identifying how threat actors deploy their attacks, we took on the role of an attacker in order to test how they may use AI in their own way. In the end, we developed an AI algorithm, called DeepPhish, that learns effective patterns used by threat actors and uses them to generate new, unseen, and effective attacks based on attacker data. Our results show that, by using DeepPhish, two uncovered attackers were able to increase their phishing attacks effectiveness from 0.69% to 20.9%, and 4.91% to 36.28%, respectively.
Next we have Ivan with DeepPhish: Simulating Malicious AI. We'd like to thank our sponsors, Endgame, Silence, and Tinder. Without further ado, here's Ivan.

Okay, let's talk before lunch; let's see how it goes. My name is Ivan Torroledo, and I'm going to give a talk that we call DeepPhish: Simulating Malicious AI. First of all, who am I? I am the lead scientist on the research team at Cyxtera. I studied physics and economics, and in addition to the data science research I do, I also like to use Fortran and do a lot of parallel computing. Finally, I am from Colombia; however, I hate coffee. Yes, a sad story for me in Colombia. Here is my Twitter account if you want to follow me.

So let's start. I like to open with this figure because it provides the motivation we had at the beginning of this project: the Google Trends results for the keywords "cyber security" and "AI". As you can see, in the last three or four years there has been a dramatic increase in interest in the relationship between these two topics. If I had to summarize our current scenario, it would be this: everyone is talking about AI in cyber security. Knowing this, at the beginning of the year we started to look at the newest cyber security marketing trends, and we found several news stories wondering how AI could be applied to cyber attacks. If this were a Terminator movie, people would be wondering whether Skynet is already among us, trying to destroy the world.

Despite all this news and marketing noise, we don't have real evidence that this is happening. So the real question we first needed to ask ourselves was: is AI a real threat? As a research team, we decided to start testing this question and this hypothesis. By analyzing the question deeply, we realized that the first thing we needed to answer is how attackers might enhance their campaigns and attacks using AI.

To start testing this question, we first needed to define a use case, and we chose phishing attacks. The main reason behind this decision is that almost 91% of cyber attacks and cyber crime start with a phishing email. Phishing is still one of the most important tools an attacker has, and if attackers start using AI, they would probably start by improving their phishing attacks.

Once we had defined the question and the use case, we could create an experiment. We designed an experiment with three steps, and we call it simulating malicious AI. In the first step, we want to identify individual threat actors. Why do we need to do that?
Basically, the main goal of this project is to understand the effective patterns of each attacker and try to improve them using AI. Since we cannot know the attackers directly, we must learn about them through their attacks. To achieve this, we collected a database of almost 1.1 million phishing attacks from PhishTank, and we decided to analyze that data.

So how do you analyze a million phishing attacks? We started by looking for the most common domains in our database. That led us to a first domain, naylorantiques.com, which belongs to an online antiques store that had obviously been compromised previously. In the whole database we found 406 URLs using that same domain. To check whether this set of URLs belonged to the same attacker, we wanted to verify that they were targeting the same institution. We did a visual check against the screenshots available in PhishTank, and we realized they were all targeting the same institution: a major Brazilian bank called Bradesco. So we can say this set of URLs belongs to the same attacker, because they were all targeting the same institution.

Now, what happens if the attacker is not using only one domain? We started to look for this strategy in the whole database. By analyzing the data, we realized there were some keywords commonly used in the URLs, so we collected those keywords to define the attacker's strategy. Checking that strategy against the whole database, voilà, we found 105 additional domains using the same strategy. The last thing we needed to do was to verify that they were targeting the same institutions. Again we did the visual check, and voilà, we verified that they were also targeting the same institution. So in the end we can say this whole set of URLs belongs to the same attacker, because they were targeting the same institution and using the same strategy.

So did we uncover only one threat actor in our 1.1 million phishing attacks? We kept this process going for a while, and we found additional domains using other strategies and targeting other institutions; for example, in one case we found URLs targeting a Canadian bank called TD Canada Trust. In the end, we uncovered several threat actors using this approach. That was the first step.
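The clustering procedure described above — grouping URLs by domain, then widening the cluster with the attacker's keyword strategy — can be sketched as follows. This is a minimal illustration, not the talk's actual tooling; the sample URLs and the keyword set are hypothetical.

```python
from urllib.parse import urlparse
from collections import defaultdict

def group_by_domain(urls):
    """Group phishing URLs by their host, the first clustering signal."""
    groups = defaultdict(list)
    for url in urls:
        groups[urlparse(url).netloc.lower()].append(url)
    return groups

def matches_strategy(url, keywords):
    """Heuristic: a URL follows the attacker's strategy if its path
    contains any keyword observed in the seed domain's URLs."""
    path = urlparse(url).path.lower()
    return any(k in path for k in keywords)

# Hypothetical sample data for illustration only.
urls = [
    "http://naylorantiques.com/bradesco/login.php",
    "http://naylorantiques.com/bradesco/atualiza.php",
    "http://other-store.example/bradesco/login.php",
    "http://unrelated.example/news/article.html",
]
by_domain = group_by_domain(urls)
keywords = {"bradesco", "atualiza"}
same_actor = [u for u in urls if matches_strategy(u, keywords)]
```

Here the keyword match pulls in a third URL on a different compromised domain, while the unrelated URL stays out of the cluster; a visual check of the target institution would then confirm the attribution, as described above.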
Once we had identified some threat actors, we wanted to evaluate how effectively they could bypass a detection system — indeed, our own AI detection system. How did we do that? First, of course, we needed to define our detection system. For this we used an AI classification model we had created previously: a model based on LSTM neural networks that analyzes each URL, learns the patterns and strategies the URL contains, and in the end produces the probability that the URL is being used for a phishing attack. That's the whole intuition behind the model.

Once we had defined our detection system, we started to measure how effectively each attacker could bypass it. To measure this, we defined something we call the effectiveness rate: the percentage of URLs that are able to bypass our detection system. For the whole database, we found the effectiveness rate was 0.24%. For threat actor number one, the one we had just uncovered, the effectiveness rate was a little higher: 0.69%. For threat actor number two, it was 4.91%. So although these two threat actors were somewhat more effective at bypassing our detection system than the average attacker in the database, we were doing a really good job overall. We could say: yes, we are winning the battle. That was the second step.
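The effectiveness-rate measurement can be sketched as below. Note that `phishing_score` here is a trivial stub standing in for the LSTM URL classifier described in the talk, so the example runs on its own; it is not the real model, and the sample URLs are hypothetical.

```python
# Stub detector: pretend URLs containing "login" look phishy.
# A real system would use the trained LSTM classifier's output here.
def phishing_score(url):
    return 0.95 if "login" in url else 0.10

def effectiveness_rate(urls, threshold=0.5):
    """Percentage of an attacker's URLs that bypass the detector,
    i.e. whose phishing score falls below the alert threshold."""
    bypassed = sum(1 for u in urls if phishing_score(u) < threshold)
    return 100.0 * bypassed / len(urls)

attacker_urls = [
    "http://a.example/login.php",
    "http://b.example/secure/update",
    "http://c.example/login/verify",
    "http://d.example/account",
]
rate = effectiveness_rate(attacker_urls)  # 2 of 4 bypass -> 50.0
```

With the real classifier in place of the stub, running this over each attacker's URL set yields the per-attacker rates quoted above (0.69% and 4.91%).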
Now let's see the final and most interesting step, the third one. In this step we wanted to improve the phishing attacks of each attacker using AI. Let's see how we achieved this.

We created an algorithm called DeepPhish. Here is how it works. First, the model divides the data of each attacker into a set of non-effective URLs, which our detection system was able to catch, and a set of effective URLs, which bypassed our detection system. Taking the effective set, we encode the data into a mathematical representation so that we can feed it into an AI model. What kind of model did we use? Again, an LSTM — a long short-term memory network. The whole intuition behind this model is: we give one URL to the model, the model picks up all the patterns in the URL and then starts producing new characters, and we collect those characters to build new URLs following the same patterns the previous URLs had.

Once we have this trained model, we can start producing new URLs in the following way. First, we give the model a seed, for example a segment of a URL the attacker has, and the model produces new characters to create new paths. Then we filter those to keep the valid paths, and by joining them with the set of compromised domains the attacker already has, we are able to create what we call a synthetic URL. To summarize, we created an algorithm that analyzes the URLs we give it and produces new URLs following the same patterns. That's the whole idea behind the model.

So that was the experiment; let's see the results. As you remember, in the traditional approach, attacker number one had an effectiveness rate of 0.69% and attacker number two had 4.91%. What happens when the attackers start using AI? Boom: for threat actor number one we increased the effectiveness rate from 0.69% to 20.9%, and for threat actor number two we increased it to 36.28%. With these results we can say that if attackers start using AI the way DeepPhish does, they will be able to bypass our detection systems far more effectively than before.

That's the experiment and that's the conclusion. If DeepPhish could say something, it would say something like this. But don't panic; we keep improving. The next steps in this experiment are to include other AI tools, for example adversarial learning or generative models. By including these models, we will be able to anticipate how attackers may use AI to enhance their attacks, and by anticipating this we can keep winning the battle against AI — and against Skynet — for another year. Thanks so much. [Applause]
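The DeepPhish generation loop described in the talk can be sketched as follows. This is a minimal illustration under stated assumptions: `next_char` stands in for the trained character-level LSTM's sampling step (here a fixed toy transition table rather than sampling from a network's softmax output), and the compromised-domain list is hypothetical.

```python
# Toy stand-in for the trained character-level LSTM: given the context
# generated so far, return a plausible next character. A real DeepPhish-style
# implementation would sample from the LSTM's output distribution instead.
TRANSITIONS = {
    "l": "o", "o": "g", "g": "i", "i": "n", "n": ".",
    ".": "p", "p": "h", "h": "p",
}

def next_char(context):
    return TRANSITIONS.get(context[-1], "/")

def generate_path(seed, length=8):
    """Extend a seed fragment character by character, the way DeepPhish
    extends patterns learned from an attacker's effective URLs."""
    out = seed
    for _ in range(length):
        out += next_char(out)
        if out.endswith(".php"):   # stop at a complete-looking path
            break
    return out

def synthetic_urls(seed, compromised_domains):
    """Join a generated path with the attacker's compromised domains
    to produce synthetic URLs."""
    path = generate_path(seed)
    return [f"http://{d}/{path}" for d in compromised_domains]

# Hypothetical compromised domains for illustration only.
urls = synthetic_urls("l", ["naylorantiques.com", "other-store.example"])
```

Starting from the seed `"l"`, the toy table deterministically grows the path `login.php` and attaches it to each compromised domain; in the real system, sampling from the LSTM yields many varied paths, which are then filtered for validity before being scored against the detector.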