AI VILLAGE - Responsible Offensive Machine Learning

Formal Metadata

Title: AI VILLAGE - Responsible Offensive Machine Learning
License: CC Attribution 3.0 Unported. You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date: 2018
Language: English

Transcript

...a great panel called Responsible Offensive Machine Learning. Here we have Bodacea, Filar, and Straithe, with DeltaZero moderating. Take it away.
All right, let's quickly give the panelists time to introduce themselves. To start: what are you working on today, and what path did you take to get there?

Hi everyone, I'm Straithe. Primarily, I work on using robots to socially engineer people: physical robots, getting people to do things they wouldn't otherwise do, or say things they wouldn't otherwise say. My path here has been really weird. I started in business, went through project management, then IT help desk, then university, where I did computer science and robotics, and now I'm in a cryptography, security, and privacy lab. None of those words are "AI" or "machine learning," but I've done a little bit along the way, and it all feeds into the robot social engineering.

Hi, I'm Bobby Filar, data scientist at Endgame. My primary responsibilities there are building malware classification models and applying NLP to help security analysts do their jobs faster and easier. My path here is equally bizarre: I have an international relations background and wanted to write about North Korean nuclear weapons; now I'm on stage talking about offensive AI. I jumped from the policy side of the house to geospatial analytics, then into information security, and then into information security data science.

I'm Sarah. I'm a data scientist at a very large ad tech company, so in my day job I'm going through very, very large amounts of data trying to get more people to watch videos through to the end, which appalls me sometimes. As a hobby, I track misinformation: very large-scale bots and troll activity, and I work out counters to them. How did I get here? As a kid I couldn't decide between psychology and computing, and then I discovered AI. I refused to go to university at first, so I went straight into a graduate job designing sonar systems and intelligent torpedoes, and I got one of the very first AI degrees in the world; we're talking thirty-something years ago. Since then it's been unmanned vehicles, intelligent systems, just fun stuff. If it looks like fun, I do it.

Awesome. Diving right in: during the past week at Black Hat, DEF CON, and BSides, we've seen quite a number of examples of offensive ML, adversarial examples, things like that.
What concerns you the most at the intersection of ML, AI, and infosec?

The biggest thing that concerns me is probably the marketing of AI and ML, so I'll apologize to our marketing department in advance. In reality, it's often positioned as a silver bullet that can solve or catch anything, that can reduce all of your data, with no false positives. That kind of positioning is dangerous, because people start to believe it and spend a lot of resources, people and money, on things that don't solve everything.

Can we talk about the training data sets used by all these companies that are adding in AI and machine learning? My favorite thing has been walking around the vendor areas and asking, "Hey, how do you train your systems?" How do you feel about that sort of thing?

Oh boy, I've got a long story about that one. Human input carries very bad bias. My biggest worry is that we're getting a lot better at replicating the appearance of being human. I know it all sounds like Terminator-style AI stuff, but I build this crap, and there is a danger of people not being ready and not knowing how to work with bots that aren't necessarily labeled as bots. This is called autonomy theory; we've had a lot of it in the work on unmanned vehicles for years, but it's really hitting at scale now.

That brings up something else. The thing that worries me the most is that people constantly think of AI and jump right to the Terminator. Once you put an AI in a body, it's very, very different. To me, seeing what everybody is doing in AI and machine learning, the biggest threat is the social and cultural impact, not what happens after you put it into a body, which is why I'm working on robot social engineering. AI and machine learning are already affecting us now, and I don't think people understand how much, because they haven't physically seen it. It worries me that people are waiting for the physical impacts before they see the cultural and social ones.

Actually, on that same note: what are the impacts of AI and ML, and how do we keep them ethical and fair?

I think keeping things ethical and fair is certainly difficult. I believe it was you who gave a presentation last year on targeted phishing using AI and ML, where you mine publicly available information to create a curated target set and exploit individuals to gain access and take advantage.
On "ethical and fair": you're only as good as your inputs, and I am seeing so many groups using human-generated, internet-generated inputs, which by themselves are usually, come on, let's say it, sexist, racist, and all the other bad things. I know Tay was engineered to be a racist asshole, but there's a lot of that going on. Can you repeat the question?

What are the impacts of AI and ML, and how do we keep them ethical and fair?

Oh yeah, that question. What worries me is what we count as ethical and fair. What is fair for one person is not fair for another person, and not necessarily ethical for another person. It really worries me what we are doing in North America, or even in certain subgroups, and how specific the AI and machine learning is getting for one task; then you try to put that machine learning anywhere else and it could actually harm people. So I'm definitely worried about the contextualization of these tools.

That gets at the definition of ethics. I've worked a lot on data ethics, on what it means to be ethical when you're using data about people, even before you start doing machine learning on top. The framing I use that seems to stick the most is ethics as a risk problem. There is a risk to people: you have a population, you have a risk of something, you have a probability of that risk, and then you can start talking about relative risks and relative problems for relative populations. Without that, you're just waving your hands in the air and saying "it's not fair."

On the fairness argument that I believe Straithe was making: explainability is certainly something that at least those of us in the infosec group strive for, because black boxes can be extremely dangerous when you are producing some sort of output based on math and stats that a human being then makes a decision on, a decision that could have actual dollar ramifications. When you look at things through that lens, you have to be much more careful.

Money is one factor, but there are so many others, especially when we look at some of the tools we are making and how they impact, for example, LGBTQ+ groups, and how the model is different for them even though they might be in the same house as somebody else the tools work fine for. That worries me too: even when you have a tool contextualized to one specific spot, personal experiences can change things so much. Then it's not even deliberate offensive machine learning; it's accidental, or non-conscious, which to me is almost worse.
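Sarah's risk framing above is easy to make concrete. Here is a minimal sketch, with entirely hypothetical numbers, of how "relative risk for relative populations" turns hand-waving about fairness into something measurable:

```python
# Ethics-as-risk sketch. All numbers are hypothetical, for illustration only.

def harm_rate(false_positives, population_size):
    """Probability that a member of this population is wrongly flagged."""
    return false_positives / population_size

group_a = harm_rate(false_positives=30, population_size=10_000)  # 0.003
group_b = harm_rate(false_positives=45, population_size=3_000)   # 0.015

relative_risk = group_b / group_a
print(f"Group B is {relative_risk:.1f}x as likely to be wrongly flagged.")
```

The point is not the arithmetic but that "who is harmed, how often, relative to whom" becomes something you can argue about with numbers rather than hand-waving.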
Very cool. Next topic: we're obviously moving toward a future where ML, AI, and robots are becoming more widespread. How will different teams, say red teams and blue teams, or even other fields, need to adjust?

I think the red team/blue team conversation is probably something folks in here are interested in. Putting together solid red and blue teams is pretty much like putting together a solid basketball team: everybody has a role and a responsibility. The advent of ML-backed platforms, particularly in the security space, means you will likely need a resident expert on AI and how to exploit it on the red side. On the blue side, when you're paying vendors hundreds of thousands of dollars for ML-backed security, you're going to need to understand the underlying blind spots and problem areas, and how to debug these machine learning models.

Hell, when we say ML, most of the stuff out there is "hey, we've got a table, we've got some labels, let's put the thing together." It's not exactly complicated yet. What was the question again? How will red teams and blue teams adjust? I've been through quite a few industries as we've added data science and ML to them; my job used to be going in as a consultant and changing organizations. What you see is change not just in the technologies, but in the speed at which you can do things, the scale at which you can do things, and the types of data you can start using. Log files are boring as hell, but when you get enough of them together it gets interesting. And there's always this sense of transformation across people, process, technology, and culture; all of those change. The skills people have change, because they have to be able to understand the vulnerabilities within their own algorithms, and they have to understand how to attack similar things. Okay, I can see Straithe wants the microphone, so I'll stop for the moment.

The other interesting thing is that when we're looking at red teams and blue teams, we're usually talking about corporations and large groups and communities. But a lot of what's been talked about in the past few days, if anybody had a chance to see it, is social bots, especially on platforms like Twitter and Facebook, and how they're going after people individually. So how do you red team and blue team for yourself? A lot of it comes down to developing communities: blue teaming as a community to protect your friends and family and country. What can we all be doing individually to say, "hey, stop that"? With how social bots are being integrated into our lives, we each need to start taking a role in this, not just leaving it to the companies and the platforms, but judging for ourselves who is a bot and who is not, and helping parse that out for other people.

Which brings it back to Sarah. One of the things I want to talk about is subtlety. Once you start using machine learning and AI instead of humans pushing buttons, you can run much more interesting, complex, under-the-radar attacks. At the moment, the botnets I've been tracking have been pretty dumb (sorry, I swear every talk I do): "let's just retweet this thing." They're pretty easy to find. But once you can start adapting, once you can take content in and generate content out, once you can start randomly creating patterns of behavior that are more human-like, more random, it makes my job as a hunter a lot harder. That's the arms race I'm expecting over the next year between the bot makers and the bot finders, using machine learning on either side and racing up that chain. It's going to get interesting, and yes, it hits individuals as well as groups as well as countries. The attack surface is enormous.
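For a sense of what the "pretty dumb" bots give away, here is a minimal, illustrative sketch of one behavioral signal a hunter might use. The timestamps are fabricated, and this is exactly the kind of signal an adaptive bot maker could learn to evade:

```python
import statistics

def timing_regularity(timestamps):
    """Coefficient of variation of the gaps between posts.
    Near zero means metronome-like posting; humans are burstier."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

bot_like   = [0, 60, 121, 180, 242, 300]    # posts every ~60 seconds
human_like = [0, 45, 400, 410, 2000, 2100]  # irregular bursts

print(timing_regularity(bot_like))    # ~0.03: suspiciously regular
print(timing_regularity(human_like))  # >1: looks human
```

Any single signal like this is trivially evaded once the bot authors adapt, which is the arms race described above.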
You should say something.

My question to you, then: Ariel mentioned this as well with deep fakes, and there was the Jordan Peele / Barack Obama fusion, or my favorite, Nic Cage as Donald Trump, which was fantastic (we're not quite that good yet). What steps would you recommend the layperson take to educate and arm themselves? A lot of people here are very well guarded in the information security space and know how to handle themselves at places like DEF CON and in day-to-day interactions, but when you move into a more political or geopolitical spectrum and open yourself up to these alternative media streams, it's a lot to expect of people.

This is something I touch on a lot with robot social engineering, because most of it is one-on-one: a physical robot and a physical human in a space, interacting with each other socially. Body posturing is huge with robots; being able to use body language is the primary advantage physical embodiment gives AI over being on the web. And against that there's not a lot you can do. I've been trying to write the defenses chapter of my thesis, and it's basically "magical awareness will solve everything," because there's just not a lot else you can do. I've been doing experiments on whether people can tell the difference between a robot acting on its own and a robot being controlled by a human, and the answer so far is no; there's been no distinction between the two. People can't tell me, "hey, that robot is acting weird." They think it's a bug before they think it's another human, or else they don't think it's a bug at all; they think it's a feature. Depending on how the AI is presented, people usually can't tell the difference once it's in a body, because they ascribe so much more lifelikeness to robots. That's the thing from my area specifically that concerns me about AI and machine learning: people don't know the difference between that and real.

Really, we look along two different axes; this comes from my old-school intelligence training. You always look at the content and the source. The content of something isn't enough anymore; you need to know the context around it. Where did it come from? What trust framework does it sit within? There are media literacy trainings on how to start spotting things. As for deep fakes, I've had a little play; I'm good at image processing and I've reused some old mine-finding software, and it's not easy, it is not easy to spot. You end up falling back on common sense, and you look at things like Snopes.
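Classical image forensics is the sort of toolkit Sarah alludes to reusing. As a flavor of it, here is a minimal error-level-analysis sketch using only Pillow; the file name is a placeholder, and this heuristic highlights recompression artifacts rather than reliably catching modern deep fakes:

```python
# Error level analysis (ELA): resave a JPEG and look at where the image
# differs most from its resaved self. Spliced or regenerated regions often
# recompress differently. A weak heuristic, not a deep-fake detector.
import io
from PIL import Image, ImageChops

def ela(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually tiny) differences so they are visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, int(px * 255.0 / max_diff)))

ela("suspect.jpg").show()  # unusually bright regions recompressed oddly
```

As Sarah says, this is not easy: a well-made fake survives checks like this, which is why context and source matter more than pixel forensics.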
If you're trying to protect that group of humans, though, and those humans are where the attack surface is, generally you've got to go all the way back up the supply chain and stop this stuff before it gets to the humans, because they really are the very last resort.

That's really intriguing, because I know from my own research it's really hard to spot, say, spear phishing or generated audio. I'd love to hear more on what happens if it's impossible to actually detect this stuff. I know your research is about stopping it at the source; can you expand a bit?

This is probably not the place to put it on record, but there is a certain building in St. Petersburg that, if it accidentally disappeared, would make life an awful lot easier for me. That's part of the chain: you've got the people who are generating, the people who are pushing out, the bot amplification, and the people at the end that it all lands on. It literally is a chain of information, and at every point you need to find places and ways to stop it, with the people at the end. You can watch for things like domain squatting and typosquatting and diversions; there's a whole pile of techniques (one is sketched below), and that's an entirely different talk.

Onwards: this is something you and I have discussed outside, the question of whether we want to be able to use metadata to find bots. We also want anonymity for people; there are cases where having that metadata can harm people. If you start stripping all of it out in an effort to protect individuals and have a freer internet, however you want to think about what information we release while we're doing things, you lose a lot of what you use to find bots. If that's gone and we have a more anonymous internet, how are we going to tell people from bots, and how does that affect our culture even more than it already does?
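One concrete, mechanizable link in the chain Sarah describes is the typosquatted domains used to lend fake content legitimacy. A toy sketch of flagging lookalike domains by edit distance; the domain names are invented, and real tooling such as dnstwist also covers homoglyphs, bit flips, and keyboard adjacency:

```python
# Flag observed domains within a small edit distance of ones we protect.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

protected = ["example-news.com"]
observed = ["examp1e-news.com", "totally-unrelated.org"]
for domain in observed:
    for real in protected:
        if 0 < edit_distance(domain, real) <= 2:
            print(f"possible typosquat: {domain} ~ {real}")
```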
Bringing this back to the infosec problem, and for me, malware classification: that's where the onus falls on folks in the crowd, fellow data scientists. You have to participate in this sort of research. Adversarial machine learning, as you could tell from the track list over the past two days, is pretty heavy here, and the reason it's so powerful is not because it's cool to make stickers that trick a machine into thinking you're a toaster. It's about identifying your model's blind spots. Once you can identify those blind spots, you can attempt to patch them, and by patching them you help make that platform more secure, and hopefully, in theory, organizations and people more secure.

It's also the human blind spots. One of the beautiful things about doing machine learning is that you start learning where the people have missed things. It's kind of cool.

Awesome. On that same note, let's move to the responsibility side. For people creating AI and ML systems, what responsibilities do they have when creating those systems, whether they're blue team, red team, or anyone?

I'll take the first crack. For me personally, it's a lot about eliminating things: eliminating false positives, to engender trust and make people believe that the answers coming out of these systems are fair and trustworthy, and reducing the black-box feel of it. Explainable AI, as I said earlier, is a huge thing. The majority of the industry right now uses tree-based classifiers, which should ensure some sort of explainability based on the features they're using (there's a small sketch of this below). The last thing, which I believe Straithe harped on, is eliminating bias in your training data. For me, bias isn't necessarily gender; it could be nationality-based language packs in software. A lot of malware comes from Eastern Europe, and we don't want to create models that, the first time they see anything from Eastern Europe that is a browser or a plugin, immediately label it bad, just because so much of our training data says "if Russia, then definitely bad." That's not ideal, and it can lead to a lot of problems, both within international organizations and for developers just trying to do right by themselves and create software.

And we're back to the risk equation. On false positives: I've worked with a bunch of Nigerian tech people, and it's the same problem; nobody lends to them (hello, Lagos). You've got to think about the inherent harm in your false positives and your false negatives, and which way you should shift to keep the harm down. With bots, we quite often catch real human beings; the list put out by Congress had a bunch of human beings on it who were just near the borderline, and their lives can get completely screwed by that. The other way, if you don't catch a major bot that's causing a lot of harm, or a major incursion, you're screwed that way. So: who are the harms to, what are the risks, what is the cost, not just in money but in general, of doing this or not doing this, with the parameters you're using?
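Filar's point about tree-based classifiers buying you some explainability is easy to demonstrate. A hedged sketch with scikit-learn; the data and feature names are fabricated, and a production malware model would have far richer features:

```python
# Tree ensembles expose per-feature importances "for free", which is a
# first, coarse step toward the explainability discussed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["entropy", "imports", "section_count", "signed"]
X = rng.random((500, 4))
y = (X[:, 0] > 0.7).astype(int)  # toy rule: high entropy => "malicious"

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, weight in sorted(zip(feature_names, clf.feature_importances_),
                           key=lambda t: -t[1]):
    print(f"{name:15s}{weight:.3f}")  # entropy should dominate
```

Feature importances are coarse, global explanations; per-prediction tooling such as SHAP goes further, but even this much beats an opaque score.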
Let's move back to robots, for example. If I'm designing a robot, are there any responsibilities I should have, or should follow, in order to, I don't know, try not to deceive people?

There are some interesting things with robots. Most of it is not AI or machine learning related, but about how the robot is perceived by people. Some of the things that really bother me are robots that are heavily gendered and put into gender-specific roles. We have a lot of female robots that are servants and waitresses and that sort of thing, and that bothers me, because then you have the male robots, which are being trained as managers, being trained to be in positions of authority, and they have huge biceps, they're built like football players. And why does the waitress robot need huge breasts? They're hard plastic, and you order from the tablet in front of them anyway. Why is this happening? When you start doing that and you throw AI into it, you also get a bunch of really interesting quirks in how people perceive what the robot is doing. If the robot does not act like the gender it's perceived as, people treat it differently, which is kind of neat; they get really confused about what role the robot has. That needs more research, but so far the papers are just really neat.

The other thing is how we do voices with AI, and what we name our AI assistants. Siri: a female name, again. Why isn't it called Jeff? I'd love an AI called Jeff: "Jeff, can you check my calendar for me today?" (Sorry if anybody here is named Jeff.) And I love being able to change the voice on my assistant, but how many people change it and stick with a male voice or a neutral voice? They're usually female as well. If you go to the Computer History Museum in Mountain View, Watson is in the room with a male voice, but the AI that controls the lights is a female voice. Why wasn't it the same? What other options could there be, and how does that affect people? And then I use that for social engineering and messing with people, because when it's "oh, it's such a cute robot, she's so tiny," well, yes, and she's about to ruin your day. There's a "could you not" element to it.

I was just remembering something I did at my last-but-one company. I was using all the security logs, looking for anomalous behaviors of humans, or maybe-humans. I had all the logs across all our systems, and I started finding stuff out about my team that maybe I didn't want to know. There were specific members of my team doing some really interesting stuff that you could find just by going through their activity: I know who's awake when, I know there are some interesting curves going on there, I know some interesting correlations, and look, I can go use some open data to find out more about them. It was getting a little bit creepy. Some of it is: how far do you go to protect your system? In this case, I had to track my own team to be able to spot anomalies in other parts of the team, and to spot whether any of my team's access patterns changed to a point where we had to worry about them. Super creepy, isn't it? And you don't want them going broke, either.

Yeah, if anybody here works on insider threat problems, you'll know (somebody's shaking their head) that it's incredibly creepy how insider threat programs work, because they need to know every single thing: when you log on, when you log off, when you use the restroom, what you print, when you're on Gmail. At that level, it creates an interesting moral dilemma about security versus privacy, which is something my fellow panelists have hit on in multiple talks.
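Here is a minimal sketch of the kind of per-user activity profiling Sarah describes, and of why it gets creepy fast: even just binning logon hours reveals who is awake when. The log format, names, and threshold are hypothetical:

```python
from collections import Counter, defaultdict

# (user, hour-of-day) pairs parsed from auth logs; fabricated sample data.
logons = [("alice", 9), ("alice", 10), ("alice", 9), ("alice", 14),
          ("bob", 3), ("bob", 2), ("bob", 4), ("bob", 3)]

profiles = defaultdict(Counter)
for user, hour in logons:
    profiles[user][hour] += 1

def is_anomalous(user, hour, min_seen=1):
    """Flag a logon at an hour this user has rarely or never been active."""
    return profiles[user][hour] < min_seen

print(is_anomalous("alice", 3))  # True: 3am is unheard of for alice
print(is_anomalous("bob", 3))    # False: bob is routinely up at 3am
```

The same profile that flags an attacker also documents a colleague's sleep schedule, which is exactly the security-versus-privacy dilemma raised above.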
Cool. Let's also transition a bit to responsible disclosure. There are a lot of different cultures surrounding responsible disclosure: the academic IRB model, the hacker "just tweet it out" model. What does responsible disclosure look like, and what responsibilities do you have when you're doing a pen test, in each of your fields?

I've been going through IRBs this past month. They don't like me, because I push back on a lot of these things and on what is actually ethical; an ethics review board is not the end-all-be-all of ethics. For example, the one I was dealing with says you have to keep your raw data for seven years on the university's servers. I asked why, and they said, "oh, just in case," you know, for the science. The best way to have reproducible science is to release your data set, but you want to take care of your participants, so you anonymize it the best you can, strip out as much data as you can, and leave the bits you need to get the stats you got, so that other people, given that data and your paper, can get the same results easily. I said, look, as soon as the raw data has been shifted over and anonymized, I'm going to delete it; it'll be gone. And they said no, and fought it. This is an interesting thing, because think about how much that data reveals and how much you can get out of it if it hasn't been anonymized and it's sitting there on a server. Honestly, most people aren't at a university for seven years; that would be a master's and a PhD in Canada, and maybe some undergrad. That's a very long time. Who's maintaining that data? Who's actually going to take it down? Who's making sure it's safe? Even then, the ethics boards say, "well, I don't know, you have to take care of that." It's seven years; who's going to remember, except maybe a Google Calendar update saying "hey, delete data"? This is a problem.

Also, ethics boards are not usually geared toward specific fields. An ethics board covers everything in the university: there might be biologists, sociologists, English professors, and then maybe you'll have a computer scientist. They're not necessarily ready for AI or machine learning ethics applications, or even for reviewing the methodology being used. That's another problem: no one verified the methodology in any of the experiments I've put in for ethics review. And I don't know how many people here have seen the paper that came out a few months ago on telling whether people are gay based on their face. How is that ethical? Where was the ethics review board? It got ethics clearance, apparently. It was "all about hats and facial hair," right. And there is a company (people can ask me about it outside afterwards) that collects people's faces in public places and says, "oh yeah, it's for glasses and facial hair and hats, to see what the latest trends are," so they can sell that to companies to see which products are selling best. But then they were talking about how at lunchtime they go through the facial-hair pictures to make fun of people. That's not great ethics.

This can be an issue if you're doing something so new that nobody's really set up for it. You talk to infosec people, and they know that misinformation is an infosec problem; they know it's a massive hack on people's brains and communities, but there's no real place to put it. We find stuff, we find stuff all the time, and all we can do is have a very quiet word with somebody in the right sort of place. How do you tell people that a piece of their system that they don't even think of as a piece of their system is broken and vulnerable, and that they're completely screwing up protecting it? And on the other side: thank you, Anonymous, for releasing a huge pile of names of people who connected to QAnon, but they're not the people we've seen. So you've seen everything from us: sometimes we keep stuff because we don't know where to put it and we don't want to do harm by putting it out, and sometimes people just chuck stuff out because, hey, it's hot. It's not easy for us.

No, I think with responsible disclosure, the thing that most interests me is that we as a community have finally gotten, I think, reasonably good at responsible disclosure in the vulnerability research community. Has anybody here participated in bug bounties or done vulnerability research? Yeah, a few people. We have guidelines and boards and clearinghouses and, for the most part, proper steps to take to publish a vulnerability for a networked system or a piece of software. For AI-backed security platforms, that doesn't exist at all yet. We kind of open these platforms up to public view in places like VirusTotal, where you can test your malware against a variety of antiviruses, and then if you happen to get past one, vendor shaming, or gaining publicity by tweeting it, is an option on the table. There are no real ethics or guidelines attached to that. That is probably an opportunity for the adversarial research community to learn from the vulnerability research community and attempt to establish those sorts of norms and guidelines.
And this is sometimes where academia comes in, because we aren't beholden to a company. In Canada specifically, we don't have to worry as much about funding; you basically have a general grant that covers privacy and security, and you can do whatever you want under it, essentially. So it's sometimes a good opportunity to work with academics, who have more protections, who can publish this stuff under the protection of a university, and who have the funding to do it, whereas a company might say, "no, you're under us, we own your IP, you can't do that." Or maybe you're worried, as a community group, whether you'll personally be targeted. Of course, academics also worry that we'll be personally targeted, but sometimes the university can help with that, or we can publish under the university generally and not put out specific names. So definitely start talking to academics as well; we could bridge things a bit more and maybe help solve some of this problem, especially for your lone hacker or lone few hackers. It can be pretty scary out there; it's a seriously adversarial context, and the risk is high.

Can you think of any other ethical responsibilities, decisions, or boundaries that exist in this space? I know we've kind of already covered it.

Yeah. I think what we're talking about, and what folks like Sven and Ariel and a few others who have gone over the applied adversarial approach are doing, testing vendor-based AI platforms, is a murky area, because you are inherently making the software do something different. That's where I drew the comparison to the vulnerability research community, which is very much the same idea. And that's where I was going, at least, with: do we partner with academia, do we partner with impartial third parties or NGOs, in an attempt to create some sort of clearinghouse, in order to generate or establish these norms?

Yeah. It's like: your data was broken, your access is broken, and your algorithm is broken; hey, who the hell do you call?

Awesome. The final question, before we turn it over to the audience for questions: how would you recommend someone get into this field or these research areas?

A lot of people are doing it right now, going to things like AI Village. Sarah leads an excellent conference that will be in Washington, DC.

I don't lead it, I'm just the program chair, but you should go. It's called CAMLIS, the Conference on Applied Machine Learning in Information Security. We look at the offensive side, attacking with machine learning; the defensive side, defending with machine learning; and attacking the algorithms themselves. So, the things covered by AI Village. There's also MLsec; we have a little problem with the website address, but if you look up MLsec on Twitter you should be able to find the community. Join the Slack channels, join the long conversations we have.

The other thing would be just to talk to people in the room, and follow folks on Twitter. If you don't have an account, you don't need to tweet; just follow people. It's such an excellent way to learn about what everybody in academia and industry is working on.

I completely forgot: I've got a GitHub repo with links to other people's repos of interesting things to read, plus people to follow.