AI VILLAGE - Towards a framework to quantitatively assess AI safety – challenges, open questions and opportunities.


Formal Metadata

Title
AI VILLAGE - Towards a framework to quantitatively assess AI safety – challenges, open questions and opportunities.
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
2018
Language
English

Content Metadata

Abstract
While the papers are piling up on arXiv on adversarial machine learning, and companies are committed to AI safety, what would a system that assesses the safety of an ML system look like in practice? Compare an ML system to a bridge under construction. Engineers along with regulatory authorities routinely and comprehensively assess the safety of the structure to attest the bridge’s reliability and ability to function under duress before opening it to the public. Can we as security data scientists provide similar guarantees for ML systems? This talk lays out the challenges and open questions in creating a framework to quantitatively assess the safety of ML systems. The opportunities, when such a framework is put into effect, are plentiful – for a start, we can gain trust with the population at large that ML systems aren’t brittle; that they just come in varying, quantifiable degrees of safety. Ram Shankar is a Data Cowboy on the Azure Security Data Science team at Microsoft, where his primary focus is modeling massive amounts of security logs to surface malicious activity. His work has appeared in industry conferences like BlueHat, DerbyCon, MIRCon, Strata+Hadoop World, and Practice of Machine Learning, as well as academic conferences like NIPS, IEEE, Usenix, and ACM CCS. Ram graduated from Carnegie Mellon University with a Masters in Electrical and Computer Engineering. If you work at the intersection of Machine Learning and Security, he wants to learn about your work!
All right, so our next talk is Ram here, and he's going to speak about a framework to quantitatively assess AI safety, which includes challenges, open questions and opportunities. So give it up for him, and I'm going to let him take it away.

So, quick show of hands: how many of you are currently using machine learning as part of your product, or your company wants you to? Okay, perfect. How many of you want to? Okay, sweet. Okay, so that's great. Let's go back a little bit to the early 90s, when the Internet was just starting out. Things are fine, and then all of a sudden
we see a spate of cyber attacks. I don't have to explain this to you guys.
And then what happened was that the regulatory landscape quickly changed. Federal laws started coming into place, like the Computer Fraud and Abuse Act; apparently Reagan saw the movie WarGames, got really inspired, and within a couple of years this Act was formed. Hopefully lawmakers are watching Mr. Robot and we'll get stronger acts like that. State laws started cropping up to fill in the gaps. Anybody want to take a wild guess which was the first state to actually make ransomware illegal? You're on the right track. Closer. It starts with a W. Oh my gosh, you're right: Wyoming. Interestingly, Wyoming was the first state, I think in 2015, to legislate that ransomware is illegal. And then came a spate of standardized cybersecurity frameworks. If you are a company and you want to convince your customers that your system is secure, you adopt one of these frameworks, and that's how you convince your customers: hey, I pass all these controls, so I'm pretty much a secure system.
Let's shift to machine learning today. This is the machine learning landscape by Shivon Zilis, a venture capitalist at Bloomberg Beta. Even if you can't see any of it, which I'm sure you can't, the point is that there are a lot of companies out there who are investing in machine learning as their core value proposition.
And the interesting thing is that attacks on machine learning have now grown at an exponential rate. arXiv, the public repository of papers, has close to a hundred papers on adversarial machine learning just in the past two years.
This is a timeline, you can't really see it but I'll tweet the slides out, by Battista Biggio, mapping out extensively the different attacks that are possible, starting all the way back in 2004.
But today our regulatory landscape with respect to machine learning looks pretty empty. There's no guidance from the government, there's no guidance from standards organizations, to tell you which ML algorithm is safe. And that's a problem, because we've all bought into the Kool-Aid of machine learning and we have no way to tell our customers: hey, what I'm doing is safe from a machine learning perspective.
So that's what we're going to talk about today. My name is Ram; I work on the Azure Security Data Science team, and our promise to customers is that we'll keep Azure and you secure. This work is being done as I'm also a research fellow at the Berkman Klein Center, and this is what I'm going to be working on there. A lot of this is ongoing work, so it's rough around the edges, and if you're working in this area I'd love to talk to you more about it.
This talk basically has two parts. The first half is going to be about how to assess the safety rating of a machine learning system. Say your development team built a machine learning system: can we pass it through some sort of framework and get a safety rating, just like the health rating on food or the safety rating of your car seat? What kind of guarantees can you get about the safety of your machine learning system? That's going to be the first half of the talk.
The second part is going to be about the legal underpinnings of machine learning systems. We've all heard the doomsday scenario, right? There's a drone using an open-source system, and instead of landing in your backyard it chops your hand off. What do you do? Who do you sue? Do you sue the drone maker? Do you sue the open-source software? What kind of legal underpinning is there for machine learning safety? So those are the two things.
First of all, the call for machine learning safety is not new, or at least only new-ish. Urs Gasser wrote a fantastic paper on AI governance and said, hey, the first thing you want to be tackling is some sort of standard, some sort of governance, with respect to machine learning safety. Ryan Calo, a robotics law professor at the University of Washington, has laid out a lot of the open questions in this field and proposed a lot of fantastic ideas. And things are getting real: Europe's AI strategy specifically says that by 2019 they will come up with a standards framework and a liability framework for ML safety. So things are picking up at a great pace. None of these are new ideas; I'm building on top of them.
So first off, why should you care? Why is this an important topic for us to discuss at DEF CON? I think the most important thing is this:
we all want to hold our service providers to some accountability. Just like how you trust your data to, say, the cloud and you assume that the cloud is secure, if you're going to use a machine learning system, you want to know that the machine learning system is secure as well. There was a fantastic talk just an hour ago about the software vulnerabilities you inherit when you build machine learning systems, and that's kind of unacceptable, because if you are running a production system you want your customers to feel safe from that perspective.
And from a regulatory standpoint, if you take the cloud as an example, you have a bunch of laws that Congress, regulatory agencies, and even independent councils have created. If you want to host, say, credit card data, you have to be PCI compliant, and if you fail you pretty much get fined,
you lose the ability to operate, and you can be criminally prosecuted. Hopefully we don't want any of you guys to go to jail. So the first half is going to be about how to assess safety ratings. But before we get to the most important part, how to actually assess, the bulk of this talk is going to be about why this is a very difficult problem to solve. From that angle, I'm not going to have one framework, one definitive answer; it's just going to be a proposal. So what are the challenges?
First off, you cannot escape adversarial machine learning. No vendor can claim that their machine learning system is free of adversarial ML, because it's ubiquitous across ML paradigms: whether you take supervised learning or reinforcement learning, adversarial examples are just a force of nature. Papernot and colleagues also figured out that adversarial examples are transferable: if you have, say, a logistic regression system and you craft attacks on it, those attacks pretty much transfer to, say, decision trees as well. So not only are you basically guaranteed to face them, they also transfer. A follow-up result in 2016 showed that it's not only transferable across models: even if the attacker trained on a completely different data set, you will still get adversarial examples. So you're pretty much caught in a triple whammy at this point.
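To make the transferability point concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (the models, epsilon and data are illustrative assumptions, not from the talk or the papers cited): an FGSM-style perturbation crafted against a logistic regression surrogate, then replayed against a decision tree the attacker never saw. The perturbed points often fool the tree as well, which is the transfer phenomenon being described.

```python
# Minimal sketch: adversarial inputs crafted on a surrogate model, replayed on a victim.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)        # attacker's model
victim = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

# FGSM-style step: for a linear surrogate the gradient direction is just the
# sign of its weight vector; push each point away from its true label.
eps = 0.5
w_sign = np.sign(surrogate.coef_[0])
X_adv = X_te + eps * np.where(y_te[:, None] == 1, -w_sign, w_sign)

print("victim accuracy on clean inputs      :", victim.score(X_te, y_te))
print("victim accuracy on transferred inputs:", victim.score(X_adv, y_te))
```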
The second challenge is that verifying whether an algorithm is safe is practically difficult; in fact it's computationally intractable. That was one of the stellar results we got this year: it might be theoretically possible, but it's out of reach in practice, partly because of limited training data, and the biggest kicker is that the verification problem is NP-complete, especially in the context of deep neural nets, which are really popular right now. So if somebody gives you a deep neural net and asks you to verify a property about it, hey, is it safe, that's an NP-complete problem: there is no known way to verify it exactly in polynomial time.
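To see why exact verification is so much harder than ordinary testing, here is a toy sketch of my own (an illustration, not the NP-completeness proof): randomly probing the epsilon-ball around an input can only ever find a counterexample; failing to find one is not a safety certificate, and ruling counterexamples out exactly is the part that blows up.

```python
# Toy sketch: empirically probing robustness around a point vs. actually proving it.
# A random search can only find counterexamples; silence is not a proof.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 10)), rng.normal(size=32)   # tiny random 2-layer ReLU net
W2, b2 = rng.normal(size=(2, 32)), rng.normal(size=2)

def predict(x):
    return int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))

def probe_robustness(x, eps, trials=10_000):
    """Empirical check only: returns an adversarial point if one is stumbled upon."""
    label = predict(x)
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != label:
            return x + delta      # counterexample found
    return None                    # NOT a certificate that none exists

x0 = rng.normal(size=10)
print("counterexample found within eps=0.3:", probe_robustness(x0, eps=0.3) is not None)
```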
There has been some good news, and I don't want to dampen the whole room: you can get some upper bounds, some confidence bounds, but this work is super nascent at this point. The third challenge is that a completely safe ML system can only occur if you have zero test error. To unpack that statement: when you train your machine learning system, there are two things you really care about, how it performs on the training set and how it performs on the test set, and even if you get a machine learning system that performs really well on your training data, there's no guarantee it's going to perform well on your test data. One of the insights from two recent papers is that the only way you can assume a system is safe is if it has zero test error.
And zero test error is really not possible. We'll see why. There's a relation called the VC bound: it bounds your test error in terms of your training error plus a term that depends on something called the VC dimension. We won't go into the details, but keep in mind that the test error is bounded by the training error plus an additive term driven by the VC dimension. The problem is that these bounds are really loose. If you have a linear classifier with a million features, its VC dimension is about a million plus one, so if you plug a million into the bound, the bound is really loose, and for some learners the VC dimension is infinite. So if you've always wanted, as a kid, to put an infinity into an equation, now is your time. These bounds are very loose, and getting zero test error is impossible, which has big safety implications.
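For reference, here is a standard textbook form of the VC generalization bound being gestured at (the exact constants vary by statement; this version is illustrative and not from the slides):

```latex
% With probability at least 1-\delta over a training sample of size N, every
% hypothesis h from a class with VC dimension d_{VC} satisfies:
\[
  E_{\text{test}}(h) \;\le\; E_{\text{train}}(h)
  \;+\; \sqrt{\frac{d_{VC}\bigl(\ln\tfrac{2N}{d_{VC}} + 1\bigr) + \ln\tfrac{4}{\delta}}{N}}
\]
% For a linear classifier on a million features, d_{VC} is about 10^6 + 1, so the
% square-root term is huge unless N is astronomically large; for some learners
% d_{VC} is infinite and the bound says nothing at all.
```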
And this being DEF CON, I found this picture; now you can see it, right? This is Vapnik, who came up with the VC bound, and if you could read it, it says "all your base are belong to us", but spelled "Bayes", like Bayesian learning. Nerds, right?
So of course a lot of defenses have been published on arXiv, scores and scores of them, but unfortunately there is no single defense, from a machine learning perspective, that you can use to protect yourself against attackers. There's a great paper, I think one of the best papers at ICLR, a top-notch academic machine learning conference, and what it showed was this: they took the nine defenses that were published as part of the conference and they broke seven of them. So it's very easy to get around these defenses. Goodfellow, who is kind of the father of this field at this point, has ranked some of these defenses along certain axes, and you'll see that in a little bit.
The next big challenge in constructing a framework and guaranteeing safety is that there's a big tension between adversarial examples and interpretability. How many of you know what GDPR is? Perfect, you've all gotten those annoying emails, right? And if a company didn't send them, they're probably going to get slapped with big fines. One of the things the GDPR articles say is that they want to support explanation, which means some sort of explainability: if your mortgage gets denied and there's a machine learning system behind it, you should know exactly why it was denied. So a lot of people are now using explainable models to get that explainability, and the crux of most explainable models is that they're linear learners, and linear learners are extremely vulnerable to tampering. So you have this tension: you want explainability, so you use linear models, and if you use linear models you open yourself up to adversarial tampering. It's a big tension, just like the tension we face in the security field between usability and security.
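A minimal sketch of that tension, under illustrative assumptions (scikit-learn, synthetic data, not from the talk): the same weight vector that serves as the model's explanation tells an attacker the cheapest direction in which to push an input across the decision boundary.

```python
# Sketch: an "explainable" linear model hands the attacker its own attack recipe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=15, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w, b = clf.coef_[0], clf.intercept_[0]
score = w @ x + b                      # the score the "explanation" is built from

# Smallest step along the weight vector that pushes the score just past zero.
delta = -(score + np.sign(score) * 1e-3) * w / (w @ w)

print("original prediction:", clf.predict(x.reshape(1, -1))[0])
print("tampered prediction:", clf.predict((x + delta).reshape(1, -1))[0])
print("perturbation norm  :", round(float(np.linalg.norm(delta)), 4))
```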
And finally, ML safety is about a lot more than adversarial tampering. Justin Gilmer, who wrote a paper this year about what adversarial examples mean in the real world, gave this anecdote. The big poster child for adversarial examples is autonomous cars: hey, what if somebody puts a sticker on a stop sign and now the car stops recognizing it? Well, they said, let's be more realistic here. What if there's no stop sign there in the first place? What if the stop sign gets knocked over by a gust of wind? What if a guy wearing a stop-sign t-shirt is walking down the road; does that mean the car is going to come to a halt? In a practical sense, a lot of these things dictate that ML safety is bigger than the scope of training or testing on the machine sitting under your desk.
So these are the challenges at this point: you cannot escape attacks, there is no single defense, it's extremely difficult to verify the security properties, and safety is about more than any one predominant technique. Existential dread: let's all crawl under the bed and wait for the doom to be over. But you guys are the perfect audience, because in the security field we face this all the time. Everything that makes ML systems insecure is what we face in a security setting on a day-to-day basis, and we tackle it as risk.
We tackle it by minimizing risk. Here's just one framework, the DREAD framework: you try to see how much damage an attacker can cause, how hard it is for other attackers to reproduce the attack, how exploitable it is (are we worried about an APT, or can a script kiddie bring your system down), and how easy it is for somebody to discover this particular vulnerability.
With that in mind, here's one possible framework that came to my mind. The goal is to score different parts of your ML pipeline. The first thing is to score the attacker's capability: an attacker who has access to my architecture and training data is very different from one who can only query my API, so just like how you calculate a DREAD score, you want a score for that. Then score the possible attacks: is it a white-box attack, or a black-box attack with some limited query budget? And have a score for the defenses, ranked along some axis: if your ML engineer is relying on a single weak off-the-shelf defense technique, you're really not in a good position, whereas if they're doing strong adversarial training with strong attacks, or something like logit pairing, you're probably in a better place. So I'm making the argument that the same framing you use to assess security risk is what you want for ML risk. Once you get that, and let's assume you get a score at the end of it, from a regulatory perspective it becomes a very easy conversation: you can make a claim that any medical device that uses machine learning must have a safety rating of nine or above, or that military-grade drones and civilian-grade drones should have different required levels of safety rating. Of course there are a lot of open questions. Who's going to certify that an ML system is safe? Is there going to be something like the CDC, or an FAA that takes care of the safety of ML systems the way the FAA takes care of the safety of airplanes, especially given how pervasive they are? Ryan Calo argues for a Federal Robotics Commission, and that idea is not too far-fetched given how pervasive ML is. We still don't know what the process of certifying it looks like; if anybody has been through the GDPR process, you know how immense a burden is put on folks to get GDPR-compliant systems. How do you ensure verifiability? If you buy a service from a service provider and they say our machine learning is safe, we have a safety rating of X, how do you verify that the safety rating is still the same? And there are very practical questions too: when do you calculate the safety rating? Every day? Every time you push a feature? Well, tough luck, because we're all in the DevOps world now, shipping features every single day, so does that mean every time somebody touches code you recalculate this ML safety rating? A lot of the mechanics of ML safety have not been nailed down yet. So, lots of open questions, and if you have ideas I'd love to hear more about them.
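To make the scoring idea slightly more tangible, here is a purely illustrative sketch; every factor, weight and threshold below is an assumption invented for the example, not a proposed standard from the talk.

```python
# Purely illustrative DREAD-style scorer for an ML pipeline. All factors, weights
# and thresholds are assumptions made up for this sketch.
from dataclasses import dataclass

@dataclass
class MLRiskProfile:
    attacker_capability: int   # 1 = query-only API access ... 5 = white-box (architecture + training data)
    attack_surface: int        # 1 = offline, vetted inputs ... 5 = open, internet-facing queries
    defense_strength: int      # 1 = no defense ... 5 = strong adversarial training / certified bounds

    def safety_rating(self) -> float:
        """Collapse the factors into a rough 0-10 safety rating (higher = safer)."""
        exposure = (self.attacker_capability + self.attack_surface) / 2    # 1 .. 5
        residual_risk = exposure * (6 - self.defense_strength) / 5         # 0.2 .. 5
        return round(10 * (5 - residual_risk) / 4.8, 1)

# Hypothetical usage: a regulator could demand, say, a rating of 9+ for medical devices.
device = MLRiskProfile(attacker_capability=2, attack_surface=4, defense_strength=4)
print(device.safety_rating())
```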
That brings the first half of the talk to a close. Now let's quickly look at the legal underpinnings of machine learning safety, and we're going to look at it through the lens of Internet safety, because the Internet and machine learning seem to have a lot in common. Both are transformative technologies: nobody could have imagined that people would be sending cat pictures when DARPA built ARPANET, and machine learning today is being stretched from just as many unexpected angles. Jonathan Zittrain called the Internet a generative technology, meaning it has the capacity to produce new content for broad and varied audiences, and the same is true of machine learning; yesterday there was this amazing demo of a DJ creating music with code, which I thought was very interesting and really shows the power of machine learning. Both also have great degrees of instability: there's the famous talk about how you could take down the Internet in 30 minutes or less, and it's the same with machine learning; in the talk earlier we heard how easily you could pop a shell on a top image recognition system in a very short period of time.
From a legal aspect, it makes sense to look at ML through the lens of the Internet because of precedent, and precedent is really core to a common-law legal system. My favorite quote, from one of the opinions in Vasquez v. Hillery, is that the court relies on precedent because it establishes a bedrock of principles, as opposed to individuals making up the law as they go. Cases decided back in 1997 are still having an effect on the Internet today: Reno v. ACLU, for instance, is essentially why you can have profane content on the Internet, on the legal basis of the First Amendment. And in 2017 there was a case, Packingham v. North Carolina, where North Carolina said registered sex offenders could no longer be on social media, and the Court said no, in a unanimous decision, citing Reno as one of its precedents. Precedents do not get overturned by the courts easily; they wait for societal changes. So looking at this through the lens of Internet safety makes a lot of sense.
So what does this look like from a legal perspective? Take poisoning attacks on machine learning. Poisoning attacks are essentially when an attacker can send chaff data to your machine learning system with the goal of subverting it: making it misclassify, or even completely changing its objective. A relevant cyber-law case could be CompuServe v. Cyber Promotions, from 1997. The basic gist: CompuServe was an ISP and Cyber Promotions was a direct email marketing outfit that was sending spam via CompuServe, and CompuServe's customers were, obviously, super pissed off: hey man, you guys need to stop doing this or my customers are going to bail. Cyber Promotions said no, you're a common carrier, this is our First Amendment right, and the court basically said no, that's not the case, establishing a precedent that you can have trespass even on electronic property, not just physical property. That might be relevant for poisoning attacks, where an attacker is sending spam images to derail your machine learning system.
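Here is a toy sketch of what a poisoning attack means mechanically, under illustrative assumptions (synthetic data, scikit-learn, naive label flipping rather than an optimized attack); it is only meant to show the shape of the problem, not a realistic adversary.

```python
# Toy label-flipping poisoning sketch: injected, mislabeled training data
# typically degrades the model that ships to production.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker injects copies of 30% of the training points with flipped labels.
rng = np.random.default_rng(2)
idx = rng.choice(len(X_tr), size=int(0.3 * len(X_tr)), replace=False)
X_poisoned = np.vstack([X_tr, X_tr[idx]])
y_poisoned = np.concatenate([y_tr, 1 - y_tr[idx]])

poisoned_model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

print("test accuracy, clean training set   :", clean_model.score(X_te, y_te))
print("test accuracy, poisoned training set:", poisoned_model.score(X_te, y_te))
```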
Or let's think about liability. Imagine a scenario where you buy some sort of robotic system to detect art forgery: you're expecting this top-notch ML system to check for forgery, but you basically got a lemon, and now you've lost a bunch of money because what you thought was a Picasso is some art student's painting. Can you take the person who sold you this machine learning system, who said it could identify fake art, to court, because you suffered millions of dollars in damages? It's a very interesting question,
because liability in the context of machine learning safety is still nascent. A relevant cyber-law case is Transport Corporation of America v. IBM. Essentially, Transport Corporation of America bought a lot of equipment from IBM back in 1994, the disks failed, and they suffered economic loss, so TCA sued IBM. This was back when the Internet was very nascent, and the court basically tried to limit the liability: it said you can sue them for reneging on the contract, but you cannot sue them on the basis of economic loss, you cannot sue them for all the money you lost. And it's kind of crazy, because today, when we buy computers and, say, your OS fails and you lose all your documents, we do not think of suing the people who wrote the OS for economic loss, because we kind of implicitly assume computer systems are faulty. Today we're learning that machine learning systems can be attacked and can be implicitly faulty too, so because of the changing landscape it's very much possible that the courts will also try to limit liability in order to foster innovation. There is a big exception, though.
If there's a drone that comes in and chops your hand off, that limitation probably doesn't apply. And here's a very interesting thing that Ryan Calo pointed out: if you have a scenario where bodily harm is caused by a machine learning system that used open-source software, what do you do then? Who do you assign proximate cause to, the people who built the drone, or the thousands of developers who contributed code to the open-source system? These are, again, not easy questions to answer. Oh, by the way, if you were wondering about the open-source symbol on this slide: Caffe is a very popular open-source image recognition system.
So, to wrap up the open questions here: when an ML system breaks down, how do you get relief? Where the damage is foreseeable, do you go after the company that made the product, the open-source toolkit the company used, the service provider it was hosted on, or the researchers whose algorithm the company uses? All of these are important questions for which we don't really have answers, because no case has tested them so far.
To wrap up: if you look across countries, there's been a spate of them releasing AI strategies, and these are some of the handful of countries who have put AI safety and privacy front and center in their vision. You might notice that our red, white and blue is not here, and that's because we don't have one yet.
The US's approach to machine learning safety has been very different: our focus is largely on autonomous weapons. The Obama administration, back in 2016, created a US artificial intelligence R&D task force, which produced two reports and recommendations. With the new administration, the task force was disbanded and those reports were pushed aside. If you're wondering where we are now, they created a select committee which is co-chaired by, wait for it, our president, and a bunch of other people. There was also a bill proposed, the FUTURE of Artificial Intelligence Act, but nowhere has Congress even seriously addressed AI safety. So we're not really in a great place in terms of even thinking about this problem yet, and it's a shame, because a lot of companies have had to take it on themselves to define standards and try to get ahead of the curve.
So to conclude: safety frameworks are super critical for machine learning systems. They will help us create robust standards and certifications that we can use to our advantage, whether to convince customers or to add value, and they will in turn help us get meaningful relief when there is a case that involves ML safety. With that, I want to end with a few thoughts.
The courts are still grappling with the effects of the Internet. This was Justice Anthony Kennedy in Packingham v. North Carolina: the forces and directions of the Internet are so new, so protean, and so far-reaching that courts must be conscious that what they say today might be obsolete tomorrow. And they're talking about the Internet, which got started back in the 1980s. Now think about a field like machine learning, what its effects will be, and how the legal system is going to handle it. That is a big open question.
And if I were to put my engineering hat on: Ken Thompson, a Turing Award winner, asked to what extent you should trust a statement that a program is free of Trojan horses, and the answer is that you can't; you can't trust any code that you did not write yourself. That's shocking, but think of how ML is being viewed. Karpathy, one of the prominent figures in machine learning, described machine learning as "Software 2.0": machines will now be able to write better programs than humans. So I'm going to end with this unsettling note: how much are we going to trust ML models that we did not build ourselves? That, I feel, is pretty much going to define the field in the next ten years. So that's about it.