Know your terrorist credit score!

Speech transcript
Moderator: By the end of the session you will know your terrorist credit score; isn't that something? This brings us to our next keynote speaker, Kate Crawford. Kate is not so much an expert on terrorism, I think, but rather a brilliant scholar and writer in the field of data analysis, social media, and digital communication. She is originally from Australia and studied at the University of Sydney, where she co-founded the communications and media department. She was the deputy director of the Journalism and Media Research Centre at the University of New South Wales in Sydney. Today Kate is a visiting professor at the MIT Center for Civic Media, a senior fellow at NYU's Information Law Institute, and a principal researcher at Microsoft Research New York. Microsoft Research New York was founded in 2012 and is a touchpoint between academia and industry, designed to help developers understand how humans behave online and to predict their actions. Big data is what comes to mind when you think about that area, and it is exactly where Kate focuses. She is the co-director of the Council for Big Data, Ethics, and Society, where she and her colleagues look for answers to the challenges that big data poses for a democratic and free society. The intersection of ethics and big data is also what Kate will focus on in her keynote today. She is convinced that the old modes of privacy are not keeping up with technology; at the same time, she thinks we need a strong framework for data ethics. Welcome to the stage: Kate Crawford.
Kate Crawford: Hello everybody, it is such a pleasure to be here at re:publica, and particularly a pleasure to be here for the 10th anniversary. It's also an honor to be speaking here after Richard Sennett, and I think you might find some interesting parallels in some of the things that we have to say. You might have noticed something strange happening with the current debate about the ethics of artificial intelligence: it's all about the singularity, that mythical moment when we might create an artificial intelligence so much smarter than us, so infinitely more capable, that it will actually overthrow the human race. Now, possibly there are some good reasons why this is the year when people are worried about it. Last month we saw AlphaGo, a reinforcement learning system, defeat one of the reigning human masters of Go in Korea. We also saw an open letter from a set of concerned scientists worried about the potential pitfalls of AI work. And of course, we just found out there's going to be a whole new trilogy of the Terminator series. So yes, any one of these might be enough to get people anxious, let alone all three in combination. But what is interesting here is that in many ways the concern about the singularity is not new.
It's been around since the 1950s, with venerable mathematicians like John von Neumann, who expressed this concern at the very end of his career, and it is now most personified by Nick Bostrom, a philosopher who recently wrote the book Superintelligence. The superintelligence thesis is that, as our capacity to increase the intelligence of agents grows, we might get to the point where we create a despotic AI that is able to self-replicate and then basically turn people into its slave labor. What's interesting about this kind of perspective, of course, is that the singularity and superintelligence are still the stuff of theory. Even the people who believe that this is where we're headed say that it's decades, possibly centuries, away, and some computer scientists say that it's basically not possible at all. But this hasn't stopped a series of very well-known people from expressing their fears, and of course there's Elon Musk, the billionaire technology magnate and CEO of both SpaceX and Tesla, who went on a tweet storm recently expressing his concern about superintelligence.
But let's take a look at the people who are the most vocal in their concerns about superintelligence. Here are some of the top four. Can you tell me any similarities that you might see here? What can you say? Yes, absolutely, ten points: they are all men, and they are all white. In addition to that, I would add that there are at least two billionaires up there and at least one multi-millionaire. So I might just change my slide slightly to read this: the singularity might seem like a big problem if you're rich and white. But to be fair, I think we have to give them their due here: there is a fear of creating an artificially intelligent apex predator, and that is particularly frightening if you are currently the apex predator. But for the rest of us, I would suggest that some of the threats that they're worried about are already starting to emerge.
What I want to suggest today is that the panics about the singularity are hiding the real problems. Amongst all of this hypothesizing about a terrifying future of non-biological intelligence running the world, we are failing to look at the technologies that we're already relying on in everyday life. The most pressing concern for me is not the superintelligent agents of the future; it's the inequality that we're embedding in these systems right now. In fact, the thing that the singularity theorists fear most, effectively the disenfranchisement, the exploitation, and the disempowerment of humans, is already starting to happen; it's just, to coin a phrase, unevenly distributed. So what I think we should be doing is working to address the current forms of discrimination and inequality that are built into big data systems right now.
So what do I mean by artificial intelligence here? Let me take us back to the original definition by John McCarthy: when he defined artificial intelligence, he described it as the science and engineering of making machines intelligent. But what I think is more interesting, and in many ways the harder challenge, is the social and ethical frame of how we make machines intelligent. Today I'm going to be talking about soft AI, or narrow AI. What that means is AIs that are designed to complete a single task very well, but that are by no means self-aware, self-conscious, or creative. Unlike hard AI, soft AI is probably something you're already using. Some of these soft AIs have personalities and names, like Siri, Cortana, Amazon's Alexa, or Facebook's M, but many of them are faceless back-end systems; think of the image recognition systems that Google and Facebook use, for example. And their intelligence, such as it is, is premised on gathering as much data about you as possible. The Harvard academic Shoshana Zuboff uses the term surveillance capitalism to encapsulate both the ideological and the economic imperative to gather maximal data, both because it might support your profit model and because it helps train your data models so they can understand the world in a more and more fine-grained manner, regardless of whether or not that creates risks to individuals or to groups.
To give you a quick roadmap of the tour I'm going to take you on today: first of all, I want to talk about the shift that I see happening from human subjects to data subjects and what that might mean; then I'm going to talk a little bit about the limitations of the 20th-century models of privacy and transparency; and finally I want to suggest why we urgently need a stronger framework for data ethics and why we need to do better on inclusion. There are many forms of autonomous software out there today, but as they become increasingly autonomous, we have to ask better questions about what kinds of values they possess and what kind of worldview is being mapped onto their systems.
To begin, I'm going to take you here. This is a refugee camp, one of many that are currently housing the millions of people fleeing Syria, in what has become one of the largest humanitarian crises of our time. But this is also the site of an enormous machine learning experiment. We recently found out that IBM was approached by a client who was looking at the refugees getting off the boats and said: some of these people are looking too healthy; is there a way you could design a machine learning system that could separate the real refugees from potential jihadists? IBM said yes, they could. Using their i2 enterprise software, they ingested an enormous amount of unstructured data from places like the dark web, where they looked at where you might be able to buy a passport, for example. They looked at Twitter likes to see if people had been liking the same DJs or the same venues, and then cross-correlated that with parking tickets to see if they had possibly been staking out those venues for extended periods, possibly for a bomb attack. And then they looked at various datasets the border guard might have, and through this concoction they created what they call a terrorist credit score. Now, IBM was quick to point out that the credit score is not a definitive marker of guilt or innocence, but nevertheless this is an extraordinarily intimate profile of what someone is like, where they have been, and what their social graph looks like. These are all forms of trace data that are extremely personal to us, but that are effectively still defined as public data.
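To make concrete how flimsy such a score can be, here is a deliberately crude sketch in Python. It is not IBM's actual method; every field name, weight, and threshold below is a hypothetical illustration of how weak, circumstantial trace-data signals might get summed into a single number.

```python
# A deliberately crude sketch (not IBM's actual method) of how heterogeneous
# "trace data" signals might be weighted into one risk score. All field names,
# weights, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class TraceProfile:
    liked_flagged_venue: bool        # liked the same venue as a watchlisted account
    parking_tickets_near_venue: int  # tickets issued near that venue
    passport_listed_on_dark_web: bool
    crossed_border_recently: bool

def risk_score(p: TraceProfile) -> float:
    """Sum arbitrary weights over weak, circumstantial signals."""
    score = 0.0
    score += 2.0 if p.liked_flagged_venue else 0.0
    score += 0.5 * min(p.parking_tickets_near_venue, 6)   # capped contribution
    score += 3.0 if p.passport_listed_on_dark_web else 0.0
    score += 1.0 if p.crossed_border_recently else 0.0
    return score

# A music fan who parks badly near a popular venue and has just fled a war zone
# scores "high" without any ground truth ever being checked.
innocent = TraceProfile(liked_flagged_venue=True,
                        parking_tickets_near_venue=4,
                        passport_listed_on_dark_web=False,
                        crossed_border_recently=True)
print(risk_score(innocent))   # 5.0: a false positive waiting to happen
```

Nothing in a pipeline like this verifies the correlations it relies on, which is exactly the problem described next.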
What concerns me about these kinds of systems is that we're ultimately doing two things. On one side, we're creating all of this room for spurious correlations, for errors, for false positives; the difficulty of how you would even verify these predictive suspicions kind of boggles the mind. Certainly in my research I look at the way that bias creeps into models, and the way that both at the algorithmic level and at the training data level we can start to see these errors emerge. But what concerns me most is that we're creating these unverified systems for predicting forms of crime that are ultimately proprietary and secret: we have no idea how the system is really working, and there is no ability for an accused person or a refugee to see that data or to question it. So I think this example also highlights something else I want to talk about today: privacy is being stretched to its limit trying to contend with these new forms of dealing with data. Let me quote from IBM.
This is a quote that came out of Defense One, a military publication, where this experiment that IBM was running first became public. IBM representatives pointed out that the i2 doesn't collect intelligence, it just helps ingest and make sense of unstructured data, so they are not spies or agents or operatives, they are just engineers. Feel better? Maybe not. But this is a very carefully constructed argument; let me tell you why. When they say they're not collecting intelligence, what they mean is that they are not directly coming up to you and getting your PII, your personally identifiable information. Instead they are just gathering all of this public, unstructured data, which is much harder to make any kind of privacy complaint about. But what is also interesting is that systems like the i2 are already inferring an enormous amount of sensitive data about people. This is already happening, and to some degree it sits outside the bounds of what privacy law was designed to protect, even though it is in many ways so much more invasive, so much more able to tell us very, very personal things.
Now, the theorist Jasbir Puar has this fantastic concept of the trace body, which she uses in her book Terrorist Assemblages. A trace body follows our physical body, and every time we cross a border, for example, both our trace body and our physical body are under extraordinarily high levels of intense surveillance. What's interesting about public data is that even though it can seem relatively trouble-free (I mean, what's wrong with public data?), when it becomes connected to your trace body it can prevent you from crossing borders, from catching a plane, from getting health care, from getting a job. So what happens to these trace data bodies is actually extremely important, as Jasbir Puar has pointed out. And for me, the reason why I think the refugee example is so powerful is because it highlights how public data is feeding these machine learning experiments that are intensifying surveillance, ultimately, on the least privileged, on the people who are least able to fight back and to question the use of these systems. How would you ever contest your terrorist credit score? This is the irony of the title today, because how would you possibly even know that somebody had generated that score about you, let alone question the kind of data it was using? Now, Donna Haraway has this fantastic phrase where she talks about the common move of the technological sciences, where they try to reduce the world to a problem of coding; she describes this as an informatics of domination.
If we think about the history of this kind of informatics of domination, we can go back quite a way, because obviously I can't be standing in Berlin without thinking about the resonance of IBM's earlier systems of informatics. This was actually produced by an IBM subsidiary in 1933: the Hollerith punch card system. In many ways it is the predecessor of the computerized database, and it was used extensively by the Third Reich to track and identify Jews, Roma, and other ethnic minorities. These punch cards would record gender, nationality, and profession, as well as ancestral lines, family trees, and known associates, for example. This was a core part of how populations were being tracked and managed. And in what is possibly the most terrifying promotional poster of all time, this is the advertisement for the Hollerith punch card, which says, and I translate, 'see everything with punch cards.' It has a gigantic eye, which is like 'seeing like a state' in its most literal formulation, beaming light onto the ground below through the perforations of the punch card. So these systems have histories, and in many ways they illuminate the paths we take towards massive population tracking and classification.
So, returning to the present day, my hope is that we can start to broaden very narrow debates about the ethics of AI and start talking about real-world ethics now, and how we got here. We're living in this era of massive data collection, of the deployment of sensors throughout cities (which indeed Richard Sennett talked about earlier today), as well as predictive analytics. What I want to suggest is that this is making us into data subjects: little points within enormous aggregates, which are effectively being assessed and compared against probabilistic models of human behavior. I think understanding this shift from human subjects to data subjects is really important if we're to understand how to build systems in order to make them fairer. What is interesting, of course, is that data subjects have very different experiences depending on who you are. So what I want to do is talk a little bit about the ways in which some data subjects experience discrimination and misrecognition at the hands of some of these systems. Machine learning systems, the precursors, if you will, of the AI to come, have seen extraordinarily rapid advancement in the last five years, and that's part of the reason there has been this huge excitement about AI again. But interestingly, we can learn a lot by looking at where machine learning is failing, where it's basically not working. Now, you've probably heard about Google's Photos app, which is built on an image recognition algorithm: you import all your photos and it will basically tell you, oh, that's a bunch of people at re:publica, or, this is a vase of flowers.
Or, if you input images of your Black friends: oh, here is a group of gorillas. Users found this particular error, and you can imagine they were not very happy to see that their white friends were not being miscategorized while their Black friends were. Google apologized profusely for this, and it was clearly unintentional, but what it teaches us says so much about the training data that the system was built on. We can presume that this system had simply been trained on so many white faces that when it saw non-white faces it was basically calling them gorillas. This is so interesting to me because it's exactly the moment at which training data reveals how you're coding the human norm, how you're understanding what the average human looks like, and it tells you that Black faces were not part of it, and that's a problem. But let me show you another one. This is the camera software, reportedly Nikon's, that was detecting Asian people as permanently blinking; a bit of a problem. And another one from Hewlett-Packard, where its image recognition was basically unable to detect anyone who had dark skin tones, and HP's response was: maybe you should just use better background lighting, it'll be fine, don't worry about it. Again, this is showing us the working logic behind how these systems are being trained.
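The mechanism behind failures like these can be shown with a tiny synthetic example. The sketch below is not Google's or HP's actual pipeline; it just trains an ordinary classifier on data where one group is heavily under-represented (950 examples versus 50) and shows how badly that group is then recognized. All numbers are invented.

```python
# Synthetic illustration of training-data imbalance: two overlapping clusters
# stand in for "face embeddings" of two groups; the under-represented group
# ends up misrecognized far more often. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
def sample(center, n):
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Training set: 950 examples of group A, only 50 of group B.
X_train = np.vstack([sample([0.0, 0.0], 950), sample([1.5, 1.5], 50)])
y_train = np.array([0] * 950 + [1] * 50)
clf = LogisticRegression().fit(X_train, y_train)

# Balanced test sets drawn from the same two distributions.
test_A, test_B = sample([0.0, 0.0], 1000), sample([1.5, 1.5], 1000)
print("group A recognised correctly:", (clf.predict(test_A) == 0).mean())
print("group B recognised correctly:", (clf.predict(test_B) == 1).mean())
# Typically ~0.99 for group A and well under 0.5 for group B: the model has
# effectively learned that "normal" means group A.
```

Measuring accuracy per group, rather than overall, is what surfaces this kind of failure.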
Beyond the world of visual recognition, there are concerns about discrimination in law enforcement and employment, and I'll talk briefly about these now. Some of you in the room will have heard of New York's stop-and-frisk program; it's not something we're particularly proud of. During its operation, 4.4 million people were stopped and frisked on the streets of New York. Guess how many of them were Black and Hispanic? Eighty-three percent. Pretty shocking. And when it was tested in court and the program was found to be unlawful, the judge said that this was basically an inadvertent form of indirect racial profiling. Now, just imagine for a minute if you gave that data to an AI system. It would say: well, if 83 percent of the people being stopped are Black and Hispanic, clearly they're the people who are the most suspicious and probably the people who are committing most of the crimes. That is certainly one interpretation of that data. But another interpretation is that those extreme numbers represent a deep and divisive history between the police and communities of color in the US, and indicate that we need much stronger civil rights protections. Two different stories, same dataset.
And this is not just a matter of hypothetical speculation. Right now in the US, lots and lots of police departments are rolling out predictive policing; I'm talking about New York, Chicago, Miami, LA, just to name a few. Now, to be clear, predictive policing is not an AI system by any stretch, but it's a lot closer to us than the superintelligence, and that's why I think we should be paying much closer attention to how it works. These predictive policing systems import an enormous amount of historical data about where crimes have been taking place and then generate this type of heat map, which tells police where to spend their resources, where they should go to look for suspicious activity. But you can see how you create a particular type of reinforcement loop here: if historically the poorer Black neighborhoods have been seeing more police surveillance and more arrests, then they come up as areas of concern, so the police will continue to go there, surveil more, and arrest more people, while whiter and wealthier neighborhoods get far less scrutiny. What is interesting about this kind of approach is that unless we think about the long history of race and class and policing when we design these systems, we run the risk of creating supposedly objective, data-driven models that nonetheless carry deep biases right in the infrastructure of how they work.
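That feedback loop is easy to simulate. The toy model below (invented numbers, not any vendor's actual system) gives two neighborhoods identical true crime rates but an unequal historical arrest record, then allocates patrols in proportion to recorded crime; the recorded gap never closes, it grows.

```python
# Toy simulation of the predictive-policing feedback loop. Both neighborhoods
# have the same underlying crime rate; only the historical record differs.
import random
random.seed(1)

true_crime_rate = {"neighborhood_A": 0.10, "neighborhood_B": 0.10}  # identical
recorded_crimes = {"neighborhood_A": 50, "neighborhood_B": 20}      # A was over-policed historically
patrols_per_week = 100

for week in range(20):
    total = sum(recorded_crimes.values())
    for hood in recorded_crimes:
        # Patrols are allocated in proportion to the historical record...
        patrols = round(patrols_per_week * recorded_crimes[hood] / total)
        # ...and crime is only recorded where someone is looking for it.
        observed = sum(random.random() < true_crime_rate[hood] for _ in range(patrols))
        recorded_crimes[hood] += observed

print(recorded_crimes)
# The recorded gap between the neighborhoods widens week after week, even
# though the underlying rates are identical: the map confirms itself.
```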
Now here's a new one; you might have heard about it, it broke just last week. Bloomberg did a big investigation of various zip codes in the US where Amazon was offering same-day delivery to its Prime customers, and surprise, surprise: what they found was that predominantly Black neighborhoods were not getting access to same-day service. And that matters, because for some people this is where you're ordering fresh fruit and vegetables; if you live somewhere in the US that is a food desert, with no supermarket nearby, same-day delivery can really change the way you eat and the kinds of things you have access to. So take a look at this map of Boston. Roxbury is by far one of the most predominantly Black neighborhoods in the center of Boston, and look: right in the middle, it is the only area that doesn't have service. What is really interesting about looking at this kind of data is that these maps look almost exactly the same as the early redlining maps of the US. Redlining came about with the National Housing Act of 1934, which unfortunately ended up resulting in fewer Black families being able to get mortgages; the Federal Housing Administration used to have these maps of cities where they would draw circles in red around the areas where you couldn't get mortgages, looking very, very similar to the kind of maps that you see before you here. So again we see the way that discrimination starts to live on in these digital systems, deep in the infrastructure of how they work.
So we've talked a bit about race; let's talk about gender. This picture here is what you will get if you type 'CEO' into your preferred search engine. Yes, you'll notice a few similarities here as well. And what this is saying is that if an AI was scanning the top results from its top searches and concluded, oh, so this is what a CEO looks like, it's a guy in a tie, then that is what is going to be embedded. And this is not just a joke, because if you look at a study that just came out from CMU, we are starting to see very problematic ways in which certain kinds of opportunities are being made invisible to women. What the researchers did is they created a large automated system called AdFisher, with a whole lot of fake accounts: half of them were labeled as men, half of them were labeled as women, and then they looked at what kind of job ads they were being served by Google. What was interesting is that a far, far greater number of the male accounts were getting ads for highly paid jobs, jobs in the 200-thousand-dollars-a-year category, and for career coaching to get them into those jobs, while the female accounts just weren't seeing them at the same rate. This is a problem, because this type of reinforcement means that women are not going to see these opportunities, are not going to apply for them, and are certainly not going to get those jobs, which then reinforces the pattern that men are the target market: you should pitch your ads to men because they're the ones who will apply, they're the ones who get them. So you can see how we create this kind of vicious circle. Now, the researchers were very careful to point out that we don't know whether it was because advertisers were targeting men or whether it was an unintended side effect of the machine learning algorithms, but that difference matters; it's an incredibly significant thing to know, and we have to know it if we're ever going to ensure that we have equality of opportunity in these kinds of systems. Otherwise we risk importing that bias into the AI systems to come.
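The measurement side of a study like AdFisher can be sketched very simply. The counts below are invented, not the CMU team's data; the point is only to show how you compare how often an ad was served to profiles labeled male versus female, and check whether the gap is larger than chance. As the researchers stressed, a test like this tells you that a disparity exists, not whether advertisers or the learning algorithm caused it.

```python
# Hedged sketch of an AdFisher-style disparity check (invented counts):
# how often was a "high-paying job" ad shown to male- vs female-labeled profiles?
from math import sqrt

shown      = {"male": 402, "female": 311}   # profiles that saw the ad at least once
n_profiles = {"male": 500, "female": 500}   # simulated profiles per group

rate = {g: shown[g] / n_profiles[g] for g in shown}
print(f"male rate = {rate['male']:.2f}, female rate = {rate['female']:.2f}")

# Two-proportion z-test: is the difference bigger than sampling noise?
p_pool = sum(shown.values()) / sum(n_profiles.values())
se = sqrt(p_pool * (1 - p_pool) * (1 / n_profiles["male"] + 1 / n_profiles["female"]))
z = (rate["male"] - rate["female"]) / se
print(f"z = {z:.1f}")   # here |z| is far above 2, so the gap is very unlikely to be chance
```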
This is why I want to connect the debates about data discrimination to artificial intelligence: if we look at how systems can unintentionally discriminate now, we are much better placed to design fair AI. One of the best recent examples was Microsoft's Tay. You might have heard about this: in the 24 hours that she was active, she went from being a charming and friendly chatbot to being a belligerent, anti-Semitic misogynist. Quite an achievement. What was really interesting, and I think the great lesson of Tay, is that when systems are designed to reflect the values of their users, they can be manipulated, and essentially the future of AI depends on addressing those kinds of social and ethical questions as much as the technical ones. So I've given you multiple examples of where bias can creep in: through overly narrow training data, through the interpretation and perpetuation of historical data, through these kinds of effectively restrictive forms of targeting. And Tay reminds us that the bias doesn't even have to come from the data: even if you had a perfect system, when people start using it they can bring their own prejudices and biases to the table. So we have multiple systems working at once, which is why these spaces are so complex and so fascinating.
So what are we going to do about it? Well, certainly in terms of the massive data collection that has been going on, a lot of people are turning to privacy as a framework that can help us answer that. But I want to suggest that privacy has a threat-modeling problem, which is that it is currently modeled in ways that miss half of the things that are happening in the machine learning universe. Privacy has always mapped to the dominant media forms of the time. In the 1880s it was all about newspapers that could possibly write salacious stories about your love life; then, with the emergence of photography, the concern was how do we stop people from taking photographs through your windows. But then we see the emergence of these guys: early computing. At this point, computers were basically thought of as information vaults: you would log in, give information, and log back out again. So we start to see the emergence of the Fair Information Practice Principles, which are these undergirding positions about what data privacy means. It's things like notice and consent: I give you notice about how a system is going to work, and you either give me consent or you don't use it. Another classic principle from this era is the idea of access to your data: your data is sitting there, discrete, and you can have a look at it and access it whenever you want. And finally the idea of enforcement: if I do something with your data that you don't like, you can take action against me. Fair enough.
But then we move into the 1990s, and something interesting starts to happen. I tried to find the most nineties image for you, and what I found was this: two of the cast of Friends, Jennifer Aniston and Matthew Perry, during an infomercial about how to use PCs. Absolutely priceless; quintessential nineties right here. This is the time we see the mass popularization of the Internet, but also the rise of online advertising targeting, in ever more invasive ways, everything that you do online. So here the threat model changes from one-to-one to many-to-many, because we now know that data brokers are buying and trading datasets and combining things, and here we start to see this metaphor of privacy that is built around a three-part process: collection, use, and disclosure. What I'd like to suggest is that this three-part process has broken down. It's really difficult to think about what collection even is in a predictive system, when you're doing a whole set of different kinds of inferences. In this metadata and machine learning era, we're starting to see an enormous trade in behavioral data, data that you're not even aware you're shedding every day: every time you read an article online, every time you buy something, every time you walk past a facial recognition camera that is collecting emotional data about your face, every time you wear a wearable fitness band with a galvanic skin sensor scanning enormously intimate data about how you feel from minute to minute.
And now we can add long-range iris scanning and facial recognition at a distance. This particular slide came from a presentation by a senior researcher, and as you can see, the girl is saying, 'Looks like Susie has a new boyfriend, I wonder who he is,' and the system says, 'That is Bill Baker.' Nothing creepy about that, nothing at all. It's extraordinary, actually, that they were using this to promote the system without thinking that maybe people might find it a bit disturbing. All you have to do is imagine that system being used at a public protest, recording the names and faces of everybody walking past.
And of course, cities are already starting to use these kinds of systems. Back when I was living in Boston in 2013, something very unusual happened. There was a big outdoor music festival; that in itself is not unusual, lots of the usual bands were playing, and thousands of people went to this big main part of the city outdoors. But what the people who went to that concert, and the promoters, didn't know is that every single person who walked in was having their face recorded and then run against an algorithm to determine their race, their age, and their gender. When it finally came to light, the city of Boston said: oh look, we're just doing it as an experiment, it's about ensuring security at public events, it's going to be fine, nothing to worry about. Which, if you think about it, is another dog whistle for terrorism, and this makes sense because this particular experiment in Boston happened two months after the Boston bombings. So we are used to seeing this kind of operationalization of the rhetoric of terrorism as a way to expand the already flourishing infrastructures of surveillance.
And why is this really going to start to matter? Because this is already happening; that was 2013. So know that most people in this room probably already have a terrorist credit score, and there is almost no way that you can find out what it is. Ultimately, we are in a position now where systems don't even need to gather data about you directly at all; they can infer it from your social graph, from a whole lot of different trace data points. I have written about this in a couple of research papers and described it as predictive privacy harms: harms that basically live outside of the previous metaphors that we had for privacy law, and we just don't have new metaphors to deal with these kinds of systems yet. For example, how do you protect something from somebody who is inferring a value? If a system can basically predict something about you that you don't even know yet, how do you have any kind of rights in that information? It's a really hard question. So what we're starting to see is a lot of activists and academics moving to the question of transparency: that somehow transparency will give us accountability where privacy can't. The theory here is that if you just open up the black boxes, you can somehow understand how they work and produce accountability and governance. I'm afraid I think it's going to be a lot harder than that, and here I'm drawing on research that I'm doing right now with Mike Ananny at USC. Computer scientists, of course, have had a long interest in how to visualize, represent, and make systems transparent. There is a beautiful visualization you can find online of sorting algorithms: you'll see all of these different sorting algorithms represented and animated, and it's a fantastic way of looking into how something works. But how do we do this with the types of machine learning algorithms that I've been talking about today? It's a lot harder. What are the metaphors for how I actually show you how they work? This is why academics like the excellent Frank Pasquale demand greater transparency of algorithmic systems. The idea is that if you could have this visibility, if you could look into it, you would understand it, and then you would be able to make it accountable. But I think we can question almost every jump in that logical chain.
There are very real technical limitations to how transparency could work, and ultimately it's really difficult to open a black box on these kinds of systems, because even the creators of some of these systems can't tell you how they work. If you get the chance to talk to some people here who are working on deep neural nets, they can show you that their systems are really good at detecting the faces of dogs or the faces of cats, but they may not be able to tell you why they are so effective, or what the logic is that allows the system to work, even though they have all the data and all of the underlying code. Even then, transparency is not enough. There is a fantastic paper that came out in 2015 that showed how easy it is to fool a deep neural net. These are just a set of abstract pictures and patterns that the researchers fed into the system, which said with certainty that this squiggle is a bagel, this little pattern is clearly a comic book, and down here, yes, that's a monarch butterfly. What I'm suggesting is that why these systems fail, and why they succeed, is still somewhat mystifying. This is my favorite one, where they basically input a picture of the Queen of England and the system says with 99.7 percent certainty that she is a shower cap. Fair enough. I'm not sure how the Queen would feel about that; probably not very good, I'm guessing. Here we are in a different world of vision, and what is particularly fascinating is that even though we might be offended by some of these misrecognitions, they are not coming from a universe where these misrecognitions have any value; it's a machinic way of seeing, something that artists like Adam Harvey and Trevor Paglen are researching at the moment. But basically what this means is that transparency is part of making a system accountable, but it is by no means sufficient.
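The core of that fooling result, high confidence on an input that resembles nothing the model was trained on, can be reproduced with even the simplest classifier. The sketch below is not the 2015 paper's setup (they evolved images to fool deep networks trained on ImageNet); it only shows that a model's confidence measures distance from its decision boundary, not whether the input is a plausible example of anything.

```python
# A tiny stand-in for the "fooling" phenomenon: a simple classifier assigns
# near-100% confidence to an input unlike anything it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two toy "image" classes: 8-pixel vectors clustered around two prototypes.
bagels      = rng.normal(loc=0.8, scale=0.1, size=(200, 8))
butterflies = rng.normal(loc=0.2, scale=0.1, size=(200, 8))
X = np.vstack([bagels, butterflies])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression().fit(X, y)

# A "squiggle": an extreme input that looks like neither class.
squiggle = np.full((1, 8), 50.0)
p_bagel = clf.predict_proba(squiggle)[0, 1]
print(f"the model calls this a bagel with probability {p_bagel:.4f}")   # ~1.0000
```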
The other thing that I love is thinking about temporality and transparency. This is a representation of high-frequency trading, just a few seconds of it, and the thing about it is: if you wanted to have a look inside the black box and you could only see a few seconds of this, it's changing so rapidly that you have to ask which part represents the true system. Let me give you another example. This is a visual representation of high-frequency trading done by the design firm Stamen working with Nasdaq; it shows exactly one minute of trading on March 8, 2011. Let's say I actually understood how the system was working right there in the middle of that minute; that might be very different from what's happening over here, or what's happening over there. This gets particularly tricky with adaptive systems that are learning all the time as they get more and more data. Even if you think you've got all the training data, you've got the algorithm, and you've got the testing data, you're still only getting a snapshot of one moment, when in actual fact this is a system in flux: it's changing, it's learning, and you're not getting a sense of how it's affecting people over time or how it's affecting different users. So I think for any idea of transparency or auditing, we have to contend with this temporal dimension, or we're really missing the idea of accountability altogether.
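Here is a small sketch of that temporal problem. It is not any real trading or scoring system; it just updates an online classifier on a fresh, slowly drifting batch of data each "minute" and re-scores the same fixed person, so the decision logic an auditor inspected at minute 0 is no longer the one in use at minute 4.

```python
# Hedged sketch of the snapshot problem: an online model keeps changing as new
# data streams in, so one audit captures only one moment of a system in flux.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
same_person = np.array([[0.5, 0.5]])        # one fixed person we keep re-scoring

for minute in range(5):
    drift = 0.4 * minute                    # each batch has slightly different statistics
    X = np.vstack([rng.normal([0.0, 0.0 + drift], 1.0, (100, 2)),
                   rng.normal([2.0, 2.0 - drift], 1.0, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    model.partial_fit(X, y, classes=[0, 1])  # online update, no retraining from scratch
    score = model.decision_function(same_person)[0]
    print(f"minute {minute}: decision score for the same person = {score:+.2f}")
# The 'black box' you opened at minute 0 is not the one making decisions at minute 4.
```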
Looking inside the black box, I would suggest, is a bit of a red herring, because ultimately it's a redirection from the material and ideological consequences of the systems that we're building. Besides, there is not just one black box; there are many black boxes, and they're all interacting with each other and producing unexpected effects that differ from user to user. Marvin Minsky, who was one of the founders of AI, said that if you only understand something in one way, then you barely understand it at all. For me, that applies to these systems: we have to look at these broader forms of understanding within social contexts when we try to think about accountability. Which brings me to a return to data ethics.
I sometimes have to remind myself that the trajectories of technological development are not inevitable. You can actually intervene in technologies as they're growing; you can actually limit the dangers, and we've seen this with technologies like nuclear power. This is interesting, too, because I've been going back and looking at the history of how ethics codes have emerged, together with the fantastic postdoc Jake Metcalf, and what's fascinating is that concern about ethics spikes when you have periods of enormous technological and scientific change. But you know what it takes to make an ethics code? A crisis. You need something really bad to happen, and then suddenly everybody is on it. So historically we can look back at the medical atrocities of World War Two, which brought about the Nuremberg Code in 1947; we can look at the ethics crises of the fifties and sixties, which resulted in the Belmont Report of 1979, which lives on today in what we call the Common Rule, which basically mandates how academics can do ethical research. And again we can look at what happened with the rise of bioethics, which came about because of the rise of genetics,
organ transplantation, and all the technologies that we use to extend human life. What's interesting to me is that bioethics shows us that we need really clearly delimited fields to deal with specific technologies, and that they come at these critical moments. What I'd like to suggest is that we are having a critical moment right now, and that we urgently need a rigorous sense of data ethics underpinning the systems that we're building today. All of these ML and AI systems that I'm talking about raise whole new questions that the old frameworks of privacy and transparency are really struggling to keep up with. The other reason why I think we have to start talking about ethics now is because it's already being pulled in multiple directions. On one side you have the singularity guys, who are basically talking about ethics in terms of the rise of the robots; on the other side you have ethics in terms of corporate compliance, as another set of checklists that we just tick off. But I'd suggest that there's a third way of thinking about ethics, where we deal with the real-world implications of the technologies that we're building now, and we look at the kinds of power asymmetries that are being perpetuated, and in some cases exacerbated, by them. That, to me, is how we really start to make ethics matter in these kinds of systems. And it doesn't mean we have to give up on privacy at all; it just means we need to put it within this broader framework of ethics, because in so many cases these systems are doing things that live outside of the privacy laws that we have.
So if we can understand this new realpolitik of data subjects, it seems to me that we have three big areas that need more work, and I'm hoping that some of the people in the room today are working on these kinds of questions already. One is fairness: how we decide what fairness looks like in a system, how we start to compare the way one user is being treated compared to another. Another, of course, is power: what do we do with these new power asymmetries, and how are they affecting our definitions of citizenship, participation, and democracy? And finally, due process.
How are you going to intervene in a system? If you're designing something, think about the way in which people can actually find out what data you are using about them, and how they can correct the record if you're using incorrect data. This matters because you're getting scored for everything right now: you're getting an employability score, you're getting a health score, and you're certainly getting a terrorist credit score. So due process gives us a way to intervene in that kind of metric-making of human life. I want to give you a ray of hope before I finish: change is possible. You might remember the Facebook emotional contagion experiment of a couple of years ago. Basically, Facebook was experimenting on 700,000 users without their knowledge, showing them either a happier news feed or a sadder news feed, to check whether they could manipulate your emotions so that you would start to post happy things or sad things. Now, amid the international furor when it went public, something really interesting happened: Facebook introduced an ethics system, an actual protocol that researchers have to go through if they're going to conduct these kinds of experiments. Obviously that is a private system and we don't know how it works, but it's a lot better than what they had before, which was absolutely nothing. So what's interesting here is that public input, actually saying what you think about these systems, matters, and it can produce change, and that is something that gives me a lot of hope.
So I want to end today by going back to the beginning of my talk and asking you this: who gets to design the AI future? Whose model of the world are we taking as ground truth? This is one possibility: this is Mark Zuckerberg at the launch of the Oculus, and it's a pretty homogeneous-looking audience there. What is interesting is that artificial intelligence will, for the foreseeable future anyway, reflect the values of its creators. So it really matters who is running these companies, who is designing the systems, and who is sitting on their ethics boards, because these are the people who will get to decide what kinds of systems we're going to be living within. When it comes down to it, artificial intelligence systems can map a very narrow and privileged model of society if we choose to let them. But I actually think it's critically important for the ethics of AI that we start demanding greater diversity and inclusion now. We want to make sure that these systems are considering the communities that have the least power and the least visibility.
Because in this sense, as people start to build these worlds, we want to make sure that these are places we want to inhabit. Thank you very much.

Moderator: Thank you, thank you. You had a captive audience; that was really fascinating. My question is: what was your motivation to get into this intersection of big data and ethics?

Kate Crawford: OK, to be honest with you, I first got interested in this space by starting to do large-scale data studies. I started out, let's see, only seven years ago now, looking at Twitter data, and in particular Twitter data after crisis events. The theory was that we could look at Twitter data and it could tell us things like which community was most affected and where you should send resources. But what I noticed is that Twitter data is extremely skewed; it basically reflects young, privileged, urban populations. So if you start to use that as how you understand what's happening in a disaster, you're basically sending resources to the privileged; you're giving the people who are likely in the best possible position the most help. And that's when I started looking at these issues around inequality and how we design fair systems.

Moderator: Are there any questions from the audience? The man with the beard over there; the microphone is coming up to you. Yes, thank you. It would be nice if you could introduce yourself.
Audience member: My name is Alex, and I have the following question. You're talking about data ethics, but my experience is that, in general, ethics does not reflect positively on profits; quite the opposite: the less ethics, the more profit. And as you were describing AI, we were not talking about governments in the first place; we were talking about Facebook and Google and everything else, we were talking about private companies. So my question is: what would be the incentive for private companies to introduce ethics and reduce their profits? How is that going to work? It's a nice idea, but how is that going to work?
Kate Crawford: I love it, you've gone straight to the heart of the hardest questions; that's fantastic, thank you for that. There are two ways of looking at it. One way is that ultimately they don't have a choice: if we're all going to be using these systems and relying on them, you do have the force of public pressure to start to say, we demand due process, we demand forms of accountability. That's possibly a little bit Pollyanna-ish, and maybe to some degree too hopeful. The other side is to say that it doesn't necessarily have to eat into your profit models at all; a lot of the things that I've been talking about are things you can actually build into your systems without necessarily increasing the cost. I mean, there are studies right now that question this idea that gathering as much data as possible actually makes your company more valuable, and I think what we will see in the next five to ten years is a much more nuanced account of what makes a company actually worthwhile: not just how much data it has, but what it's doing with that data and how it's actually treating the people who are represented there. So I guess in my more optimistic moments I like to say that data ethics is going to happen, and ideally it's the sort of thing that will not damage your prospects but will in theory make you a company that people are more likely to trust and more likely to use. But I absolutely agree with you that this is going to have to be part of a much larger landscape of how we think about the use of data, and it's going to be a hard sell.

Moderator: We have time for one more question, please. Is there anyone? Right up front. Can you please give the microphone to the lady in the first row?

Audience member: Thank you, I think you gave a fantastic talk. My name is Iván, and I'm asking myself whether it isn't also difficult to convince the users themselves, because I think one big obstacle in this data ethics question is the users: they are more focused on comfort, on getting things done, on using gadgets and tools, and not so much concerned with this question. What do you think can be done, beyond changing the minds of the companies themselves, to convince the users?
Kate Crawford: I think this gets to this core perception of what users value, and there has been this very dominant belief that people don't care about privacy, don't care about what we do with their data. But it's not that; who really knows how their data is being dealt with? To some degree this is an enormous literacy problem: we assume that people know and just don't care. The other problem is that we assume that people have a choice; in many cases we're dealing with systems where there is no alternative for you to go to, so you're basically putting up with the fact that you don't have a lot of autonomy and control over your data. But I'm really inspired by the people who are doing studies looking at how communities are engaging with these systems to try and change the way they work, and I'm thinking here of a book by Helen Nissenbaum and Finn Brunton called Obfuscation, about the ways people try to hide within systems: where they give fake data, where they use tools like Signal and Tor. We're starting to see this kind of resistance to systems that rely on the collection of data. Now, do I think that's the long-term strategy? No, I don't; I actually think that's an arms race that we can't win. I think it's much, much more powerful to start thinking about how we train the data scientists and the computer scientists of tomorrow to think about these kinds of ethical frameworks.
I honestly think this starts at university, and possibly even earlier. When we think about the fact that data is not just some random thing that is just out there, that it represents actual humans, that it is people's lives, that it has this radioactivity, that it can impact somebody's employability or their ability to get health insurance or their ability to get on a flight, when you start to look at data that way, I think you use it with a lot more care. So my great hope is that we start to introduce these kinds of ethical frameworks into how we teach computer science and how we teach data science, and that, to me, is what is going to produce an enormous amount of change.

Moderator: OK, we have to come to a close; obviously everybody is really fascinated, and it is a fascinating topic. Please bring your questions, if you still have them, to the get-together on Wednesday at the Digital Eatery here in Berlin, which is also run by Microsoft, and Kate will be there too, I believe. Yes. So thank you for coming today, and thank you, Kate, very much; it was really a pleasure. Thank you.

Metadata

Formal metadata

Title: Know your terrorist credit score!
Series title: re:publica 2016
Part: 10
Number of parts: 188
Author: Crawford, Kate
License: CC Attribution - ShareAlike 3.0 Germany: You may use, adapt, copy, distribute, and make the work or its content publicly available in unchanged or adapted form for any legal purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in adapted form, only under the terms of this license.
DOI: 10.5446/20549
Publisher: re:publica
Publication year: 2016
Language: English

Content metadata

Subject area: Computer Science
