Artificial Intelligence is Hard to See: Social & ethical impacts of AI


Formal Metadata

Title
Artificial Intelligence is Hard to See: Social & ethical impacts of AI
Author
Kate Crawford
Trevor Paglen
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date
2016
Language
English
Producer
Chaos Computer Club e.V.

Content Metadata

Abstract
This is an on-stage conversation with academic and author Kate Crawford, interviewed by artist and researcher Trevor Paglen, about the recent turn to AI in our wider social systems, from healthcare to criminal justice, and what the implications might be in terms of power, ethics and accountability.
I think it's safe to say that we've saved one of today's highlights for last. We're coming to the last session of today; after our next two speakers take the stage, Markus is going to give you a little view of what's happening here next year, when we hopefully meet again. Before that, though, I'm very, very excited that we have both Kate Crawford and Trevor Paglen here today to talk about artificial intelligence, and to see how we can have a future-facing view and a positive shaping of the social and ethical impacts of AI. I'm sure most of you know that a number of the world's leading thinkers, people like Stephen Hawking and Elon Musk, have just decided to donate millions and millions of dollars to make sure that AI is developing in the right direction, because they feel we're all going to destroy each other in a much shorter time than we thought possible. So this is a very important topic to look at. I don't really have to announce both speakers; we're very honored to have them both here today, but I do want to introduce them to you. Kate is a leading expert on artificial intelligence, on machine learning, and on the impacts of both; she's a researcher at Microsoft Research in New York City and a visiting professor at MIT. She was one of my favorite speakers at this year's re:publica, so if you didn't see her speak there, please do go check out her video online. We're equally excited that she'll be joined on stage by Trevor Paglen. He's an artist whose work has been exhibited at the Metropolitan Museum of Art in New York, at the Tate Modern in London, and in a number of other leading institutions. They're both going to be speaking together on stage, as I said, about the consequences of AI, which are sometimes hard to see. Welcome, Kate, and welcome, Trevor. A big round of applause, please.

I'm just getting miked up. I'll take this one. How does that sound? Hi, it's great to see you here.
So I guess I can start with where I came to this topic from, which is that I started thinking a lot about the social and political implications of, well, I don't even think we can call it the Internet anymore; we need something else. I know here in Berlin, at the Haus der Kulturen der Welt, there's this concept of the technosphere that they've been investigating, so we could just use that for now, despite any nitpicky problems one might have with it. For me, being somewhat attached to the Snowden project was really when I started thinking about what these planetary infrastructures of communication and surveillance actually are and what their implications might be. There has been a lot of concern about the implications of mass surveillance at a global scale, in terms of democracies, in terms of state power, in terms of culture and the like. But when we look at the NSA's mass surveillance infrastructures, the questions they pose are about our relation to politics, about privacy, and so on. At the same time that those infrastructures were being built, the NSA might have been tapping the cables between the Google data centers, but it's Google that has the data centers and has the data. I think we're arriving at a moment where it's starting to become clear what that is going to mean, and a big part of it has to do with AI. So I think you should just talk about what we're talking about when we talk about AI. Are we talking about the singularity that's going to take over the world?

Well, it's funny. Despite that excellent reduction, the big concerns I have about artificial intelligence are really not about the singularity, which, frankly, computer scientists say is hundreds of years away, if it's possible at all. I'm much more interested in the effects of AI that we are seeing now. To be clear, I'm talking about a constellation of technologies, from machine learning to natural language processing to image recognition to deep neural nets. AI is a very loose term, but let's work with it today, and know that this is the underlying technological layer we're interested in here. What's fascinating about this moment is that people don't realize how much AI is already part of everyday life. It's already part of your device. Sometimes it has a personality and a name, like Siri or Cortana or Alexa, but most of the time it doesn't; most of the time it is a nameless, faceless back-end system that is working across multiple datasets and making a set of inferences, and often a set of predictions, that will have real outcomes. In the US, which is where I've been doing a lot of my research, I've been looking at how these kinds of large predictive AI systems are being deployed in core social institutions: things like the health system, education, criminal justice and policing. What is, to me as a researcher, so interesting and so concerning about this moment is that we're deploying these things with no agreed-on method for studying them: what effects they might be having, and how they might have disparate impact on people from different communities, from different races, on low-income populations. The downsides of these systems seem to be very much clustered around populations who differ from the norm. So that's the project that I think is really open and really needs a lot of us thinking about it and working on it.

It seems like the policy framework, and even the intellectual framework, that we have for thinking about this is very much built on the words "privacy" and "surveillance", and for a long time I've felt that these are inadequate concepts to hang our hats on when we're talking about how data is being used in society and how it will be used. That doesn't mean they're not important; they are absolutely crucial ideas. But the limitation I've found with privacy is that it comes from a very legalistic, individualist perspective: you have individual privacy rights. It's a concept that emerges around the late 1880s, with the new technologies of the time, in that case popular newspapers. People said, this newspaper is writing a scandalous story about me, there should be some sort of right to privacy, and so we start to see the emergence of a juridical framework of privacy. But it was always designed to protect elites, to protect individuals. What I think is going to be needed to address this new set of challenges with machine learning and artificial intelligence is a much more collective set of practices, both in terms of how we take political action together as groups, and in terms of concepts around ethics and power, which are very important here. Because when we talk about AI, we're really talking about perhaps seven companies in the world who are deploying this at scale, who have the data infrastructure and who have the capacity to really do it.
And that is extraordinarily concentrated. That is the thing we have to really think about: these are going to be the companies that decide what education looks like, what health looks like. That's why I think we need to be thinking about power and ethics, and perhaps move on from the individualistic framing of privacy.

Yeah, so I think we should go into that and ask some questions about what these implications are. For me, and we've talked about this a lot: earlier this year the company DeepMind famously won that Go match, and it was considered a huge advance, a really spectacular thing. Nobody thought you would be able to beat a grandmaster at Go; it's a much, much more complicated game than chess. DeepMind did it, and there was lots of media attention. But then they did something that didn't get nearly as much media attention: they applied that same AI framework to power consumption at Google's data centers, and what they were able to do was reduce the energy used for cooling by 40%. That sort of efficiency is really remarkable; you don't see that kind of thing happen every day. To me that's a microcosm of a phenomenon that might become much more widespread, and that kind of optimization is going to have massive effects for things like labor, logistics, healthcare, insurance, credit: the kinds of things that are very much part of our everyday lives. So maybe we could talk through some of that.

Yeah, it's interesting; I was particularly fascinated by this story. Part of the reason there was such media attention around AlphaGo was that it had been predicted we wouldn't be able to defeat a human Go master for another ten years, so there was a sense that we had taken a leap forward in time. What was interesting about the energy consumption project is that they used exactly the same technique. It was a game engine: they thought about the data center as a game that you play, where you could open windows, and increase or decrease the temperature according to the energy load. They got very precise about timings and about how you would play with all of these levers, and the result, I still think, is extraordinary. If we're going to talk about where there's real positive upside in how we can start using AI: imagine if we could do that across a whole range of different energy consumption technologies. I mean, that's astounding.
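To make the "data center as a game" idea concrete, here is a minimal sketch of that kind of control loop. The thermal model, the lever names and the numbers are all invented for illustration, and a simple epsilon-greedy learner stands in for DeepMind's actual (unpublished) deep-learning pipeline:

```python
# Toy sketch of "data-center cooling as a game": a controller tries actions
# (levers) against a simulated facility and learns which ones minimize energy
# while keeping servers inside a safe temperature band. The thermal model and
# all names here are invented; the real system used deep neural nets trained
# on historical sensor data.
import random

ACTIONS = [("fans_low", 0.3), ("fans_med", 0.6), ("fans_high", 1.0)]  # lever -> cooling power

def step(temp, heat_load, cooling_power):
    """One tick of a crude thermal model: heat comes in, cooling takes it out."""
    new_temp = temp + heat_load - 8.0 * cooling_power
    energy_cost = 10.0 * cooling_power            # cooling is what costs energy
    penalty = 100.0 if new_temp > 30.0 else 0.0   # breaching the safe band is heavily penalized
    return new_temp, -(energy_cost + penalty)     # reward = negative total cost

# Epsilon-greedy "player": estimate the value of each lever from experience.
values = {name: 0.0 for name, _ in ACTIONS}
counts = {name: 0 for name, _ in ACTIONS}
temp = 24.0
for t in range(5000):
    heat_load = random.uniform(2.0, 6.0)          # fluctuating server load
    if random.random() < 0.1:
        name, power = random.choice(ACTIONS)      # explore a random lever
    else:
        name = max(values, key=values.get)        # exploit the best lever so far
        power = dict(ACTIONS)[name]
    temp, reward = step(temp, heat_load, power)
    counts[name] += 1
    values[name] += (reward - values[name]) / counts[name]  # running average

print({k: round(v, 1) for k, v in values.items()})
```

The point is only the framing: levers in, a score out, and a controller that keeps replaying the "game" until it finds settings that cut the energy bill without breaching the safety band.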
However, apply that same logic to a worker in a basic supermarket job: we've got you on the clock coming in at the peak optimal time, when there will be maximum crowds, and you're really only going to get a two-hour shift, and then we get rid of you. That's great for me as the person who runs the supermarket, because I'm maximizing the value of the labor I'm buying. But for you it's terrible, because you're on call, waiting to see if you've been summoned. It's this idea of the sharing economy, terribly misnamed, spread to everything.

So that would be on-demand labor. Absolutely, or so-called flexible work; but flexible for whom? It's certainly not flexible for the person doing the labor. These are the sorts of shifts we have to attend to. There has been this way of talking about AI as a singular thing: apply it to everything and everything will be brilliant, everything will be more efficient. But I think we need a much more granular analysis that asks where we are going to get maximum benefits with minimum human costs, and that is not an easy question to ask right now, because we have so little data.

No, absolutely. So we're talking about systems that can radically transform everyday life, and that has political implications, cultural implications, sociological implications. But there are a couple of questions here in relation to power, and one of the things you mentioned before is that this is really five companies, right? Or six, or seven, depending on whether you count IBM. First of all, how did that concentration happen? Why can't I just go and create my own AI in my studio?

Well, this is something I find fascinating, because compare it to the last extraordinary technological shift that most of us in this room witnessed, which was the internet and the web. There was this sense that you could more or less teach yourself some forms of code and pretty much create a website; you could do a whole lot of things with not too much self-teaching. It was fairly straightforward. The difference with AI is the cost. First of all, large-scale training data: just getting that training data is a huge undertaking. It's extremely valuable, and companies don't share it because it's proprietary. There are open-data equivalents, but then the issue becomes processing, so you're running big GPUs, and it's very expensive.
Actually, another artist, Darius Kazemi, just did a really interesting short paper looking at what would happen if he were starting again now, as a kid saying: I want to do DIY AI, how would I do it? His conclusion was, I could not afford this. So that's part of the issue. Also, these are all companies that have been collecting data for some time, and they have different types of data, so they'll be producing different types of AI interventions. But what's interesting, as they start deploying those models, is a pattern we're beginning to see: we're really good at machine learning for some things, but keep in mind that machine learning systems are really looking for patterns, and they are very, very bad at unpacking why those patterns are there, or at thinking about the context.
Let me give you an example to make this concrete. There was a really interesting study done at the University of Pittsburgh medical center, where they were studying pneumonia patients. They trained both a deep neural net, where we don't really know what it's doing but can see the outputs, and a more open, rule-based system, where we can see what patterns it finds. What they found with the deep neural net version was that it was extremely good at figuring out who was actually likely to have complications from pneumonia, except in one case: it wanted to send home all of the people who had chronic asthma. Of course, those are the people who are the most vulnerable and the most likely to die, so it was a very bad decision. But the reason it came to that conclusion is actually quite logical: the data indicated that the doctors had been extremely efficient. If you came to me and said you had pneumonia, and I heard you had chronic asthma, it was straight to intensive care, off you go. So you were actually unlikely to develop complications, because you had been moved straight into intensive care. But if I'm a data model, all I see is that people with chronic asthma don't have complications: send them home. It's a really interesting study because it shows the difference between interpretability and data patterns. There is a pattern there, but how are you interpreting it?
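A toy version of that confound is easy to write down. These are invented numbers, not the Pittsburgh data; they just reproduce the logic: the treatment that saved the asthma patients is invisible to the model, so the observed outcomes make asthma look protective:

```python
# Toy illustration of the pneumonia/asthma confound described above.
# Numbers are invented. Asthmatic patients were routed straight to intensive
# care, so their *observed* complication rate is low, and a model that only
# sees (asthma, outcome) pairs will happily learn "asthma => low risk, send home".
patients = (
    [{"asthma": True,  "complication": False}] * 95 +   # treated aggressively, did well
    [{"asthma": True,  "complication": True}]  * 5 +
    [{"asthma": False, "complication": False}] * 70 +
    [{"asthma": False, "complication": True}]  * 30
)

def observed_risk(has_asthma):
    rows = [p for p in patients if p["asthma"] == has_asthma]
    return sum(p["complication"] for p in rows) / len(rows)

print("observed risk, asthma:    ", observed_risk(True))    # 0.05
print("observed risk, no asthma: ", observed_risk(False))   # 0.30
# A pure pattern-matcher concludes asthmatics are LOW risk; the hidden cause
# (they were already in intensive care) never appears in the data at all.
```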
So I think we have a set of issues there that also relate to how we think about the deployment of AI into social systems. We can think about its deployment in healthcare and labor; what about the other classic sectors of the post-Fordist economy, like insurance, real estate and credit, which are very much part of our everyday lives? Credit is almost a kind of right, in a way. What I mean by that is that these are de facto things you can do as a human in the world, and if your credit score is being modulated, you effectively have different rights than somebody with a different credit score. So when we think through what the integration of AI into those sorts of industries will look like, what will the effects be?

Well, it's going to be really interesting. One of the things it suggests is that these systems are going to get really good at hyper-personalizing to you, to the point where, if you're an eighteen-year-old having a few beers at a party, and there are Facebook photos, your insurer goes, ah, interesting, and you're driving a car, so we might increase your insurance premium at a very granular level: for this week, for this month. But actually, and again I'm going to speak to the context I know best, which is the US legal system, we do have some protections we can use around credit, because, let's face it, credit and insurance agencies have been using data to pinpoint people for some time, so there's some pushback there. I'm more worried about when this gets deployed into areas like the criminal justice system. I'm sure some of you read the ProPublica story "Machine Bias". That was based on Julia Angwin's work over fourteen months with five journalists, basically FOIA-ing the hell out of this company called Northpointe. Northpointe's software platform is used in courtrooms throughout the US. What it does is give a criminal defendant a number between one and ten to indicate the risk of them being a violent offender in the future; it's basically a recidivism risk score. What she found in this big investigation was that black defendants were getting a false positive rate twice that of white defendants. The race disparity was extraordinary, and it was very clear in the failure rates. But what was fascinating is that although this huge story blew up and everyone was concerned, we still don't know why: Northpointe hasn't released the data and won't reveal how these calculations are made, because it's proprietary. So this system, which was being given to judges and used to make really key decisions, is still a complete black box to us. That's where we're really bad right now at thinking about due process structures and about how to make these kinds of predictive systems accountable.
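To pin down what "a false positive rate twice as high" means, here is the arithmetic with invented counts; these are deliberately not the real COMPAS figures, which is exactly the problem with a proprietary black box:

```python
# What "twice the false positive rate" means, with invented counts (NOT the
# actual COMPAS data). A false positive here is a defendant labeled high-risk
# who did not in fact go on to reoffend; "negatives" counts everyone in the
# group who did not reoffend.
groups = {
    "black": {"false_pos": 450, "negatives": 1000},
    "white": {"false_pos": 225, "negatives": 1000},
}
for name, g in groups.items():
    fpr = g["false_pos"] / g["negatives"]
    print(f"{name:5s} false positive rate: {fpr:.1%}")
# -> black 45.0%, white 22.5%: same tool, same threshold, a very different
#    error burden. Without the underlying data and weights, even this basic
#    audit is impossible from the outside.
```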
Now, that of course is not an AI system, to be super clear; it's a predictive data system. I wouldn't call it autonomous in the way I would call AI autonomous, but it's a precursor system.

Or take Vigilant Solutions, whose name alone sounds ominous. It's a company that mostly caters to law enforcement. What they do is deploy ALPR cameras, automatic license plate readers, so they have cameras all over cities, and on their own fleet of cars that just drive around taking pictures of everybody's license plates, and they sell this to law enforcement, insurance companies, collection agencies and the like. They had a program in Texas where they were partnering with local law enforcement agencies and installing ALPR cameras on cop cars, so anywhere a cop car drove, it would record license plates; Vigilant would ingest that data, merge it with their own, and make it available back to the cops. The other move Texas made was to give police the ability to swipe people's credit cards as a way to pay fines and traffic tickets and take care of arrest warrants. So what the police then had was: here's a record of where everybody is, here's everybody we have something on; we can just drive to their house and say, take out your credit card. This is like Ferguson: really predatory municipalities.

That's extraordinary. So that we don't depress you too much, and to leave a little time for questions: the thing I'm also really interested in, and we should talk about this too, Trevor, is what we do about it. What are the things we can do? We've talked a little about existing legislative frameworks; most of the time, I think, they're not actually up to this challenge. We have a lot of work to do to think about where we get accountability and due process in these quite opaque systems. The thing I've been working on recently: we did a White House event on the social and economic implications of AI with Meredith Whittaker, who's here tonight, looking specifically at what we could do. And I know this is something that interests you, Trevor, because one of the questions is how we give people access to these tools; but then, secondly, if you're being judged by a system, how do we start thinking about due process mechanisms? That's one of the areas where we have the most work to do, but I also think that collectively we can really start applying pressure on these issues in the due process case.

And of course predictive policing is another big thing here. Is there much predictive policing in Germany? Is this happening here? Anybody? Yes, a little bit? Okay, well, I would be keeping a close eye on that. This is, tragically, one of the areas where the US has really been leading the way: there are predictive policing systems in New York, in Miami, in Chicago, in LA, and there has been a really interesting set of studies looking at how these systems are working. They're often built by Palantir; I'm sure many of you are familiar with Palantir, a company that provides a lot of technologies to various military organizations around the world. But an interesting thing has just happened: we got the first study of the effectiveness of predictive policing in Chicago. It was by RAND, so not a radical organization, and they found that it was completely ineffective at predicting who would be involved in a crime. It was effective at one thing, though: increasing police harassment of the people on the list. If you're on the heat list, you're going to get a lot of attention, but it's not necessarily going to help predict who will be involved in a violent crime. So we're already starting to see, from empirical testing, that these systems are not even meeting the baseline criteria of what they say they're going to do. This is where we have a lot of potential to move, and to work collectively around political issues: to say, show us the evidence that this predictive policing system will actually work, and without producing disparate impact.

I think there are two layers of concern here. I tend to take the bigger, more meta concern, which is that the problem with these AIs being used in policing is not only that they're racist; it's that I find the idea of quantifying human activity in the first place very violent. Take labor, for example: if you're going to have a capitalist society, then capitalism is all about optimization, about creating efficiencies; that's one of the ways you make money. So how do we start to reconceive of this? My concern is that we don't even have a political or economic framework within which to address something like a 40% increase in efficiency across the logistics sector.

No, I think that's right, and this is part of the issue with what's happening now. I want to avoid the technological inevitability arguments, which come up a lot: people say, well, this is the new thing, so it's going to happen and it's going to touch every part of life. Not necessarily. What I've been doing is going back to a lot of the early writing about AI in its first decades of development, back to the 1970s. There's an extraordinary AI professor called Joseph Weizenbaum, who wrote the program ELIZA. You might have seen it: a very early natural language processing program designed to simulate conversation. Very basic, but he was amazed by how people were taken in by it. It was a very simple kind of Turing test: you'd have a conversation, and it sounded like a real person. He very quickly started to ask critical social questions about AI.
And he had this total conversion moment, where he said: if we start deploying AI into all of our social systems, it will be a slow-acting poison. It's a pretty harsh critique, but what it did was start to make people think about where this can work and where it might not. I don't think, Trevor, that you and I are going to win the argument that not all of life should be metricized; that has been happening for well over a century. But I think we have the chance to push back on the question of where this should be deployed: are there areas where we simply don't have sophisticated enough systems to produce fair outcomes?

One of the things, and I know you've done a huge amount of work on this, is the ethics of the research that goes into this. What are the human-subjects implications for people in universities doing the groundwork, doing the kinds of studies and writing the kinds of algorithms that will eventually become a DeepFace or a DeepMind, at Google or wherever?

This is a really interesting space, and I'm going to basically give away a forthcoming research paper that's about to be published. We've been looking into what I think is a really interesting shift. We already had a culture in which a lot of scientists and academic researchers, particularly computer scientists, felt: this is data we've just collected from mobile phones, it's not human-subjects data, we can do what we want with it; we don't have to ask about consent, we don't have to think about the lifetime of the data, we don't have to think about risk. Computer science has never really thought of itself as a human-subjects discipline.
So it has been outside of all of the human-subjects reform that happened to the critical social sciences and humanities in the late twentieth century. But here's where it gets really weird. There's something that has just started to happen, and by just I mean probably in the last 24 months: we're moving to forms of autonomous experimentation. What that means is that these are systems where there isn't a person designing the experiment and looking at the result. This is a machine learning algorithm looking at what you're doing and poking you: will you click on our ads if we show you these images in quick succession? If it gets a good response, it will continue to optimize and re-experiment, and re-experiment again, and this could happen to you thousands of times a day without you being aware of it. There certainly isn't any kind of ethics framework around autonomous experimentation, but there's a new set of platforms, things called multi-world testing, where this is being deployed into basically everything: how you read news, with experiments to see what kinds of news will make you buy more; ads; traffic directions. If you're in an autonomous routing experiment, someone will be allocated to the optimal route, so they get to work faster, but somebody has to be allocated to the suboptimal route, otherwise everyone would be put on the same road and it wouldn't work. Now, that might be okay if all it means is that you're going to be five minutes late to work; not a big deal. But what if you're rushing to a hospital? What if you've got a sick kid? What if you have no way to say: do not assign me to the experimental condition, the suboptimal one, please? There's no consent mechanism and there's no feedback mechanism. We're kind of used to the traffic optimization case because we can see it, but what happens when something deployed at that scale sits in a whole range of back-end systems where you're being optimized and experimented on multiple times a day? So I've been collaborating with people in machine learning and information retrieval, and we've been testing their systems, looking at them and asking: what are the possible downsides here? How might you create mechanisms of feedback, so people could say, it's worth it to me not to be experimented on when I'm sick and racing to the hospital? But these are mechanisms that haven't been designed yet.
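As a sketch of what such a mechanism could even look like, here is a toy autonomous experimenter, an epsilon-greedy bandit choosing between two routes, plus the one piece the conversation says is missing: an "urgent" flag that exempts a trip from the experimental arm. Everything here, names, numbers and the flag itself, is invented for illustration, not any platform's actual API:

```python
# Sketch of autonomous experimentation (an epsilon-greedy bandit over routes)
# with an invented consent mechanism: a flag that lets a user refuse the
# experimental, possibly-suboptimal assignment.
import random

routes = {"A": [], "B": []}  # observed travel times per route

def assign_route(urgent=False, epsilon=0.2):
    # Best route by mean observed time (empty routes default to 0 so both get tried early).
    best = min(routes, key=lambda r: sum(routes[r]) / len(routes[r]) if routes[r] else 0)
    if urgent:
        return best, False                           # consent honored: never experiment on urgent trips
    if random.random() < epsilon:
        return random.choice(list(routes)), True     # experimental (maybe suboptimal) arm
    return best, False                               # exploit the current best estimate

def record(route, minutes):
    routes[route].append(minutes)

# Simulate: route A truly averages 20 minutes, route B 25; 5% of trips are urgent.
for _ in range(1000):
    route, experimental = assign_route(urgent=random.random() < 0.05)
    record(route, random.gauss(20 if route == "A" else 25, 2))

for r in routes:
    print(r, round(sum(routes[r]) / len(routes[r]), 1), "min over", len(routes[r]), "trips")
```

The technical part is trivial; the point is that the `urgent` parameter, the part that encodes consent, is exactly the piece no deployed system currently offers.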
So what I'm most interested in doing right now, and where I think we have a big job to do, is to create a field around the social implications of AI: get people working on these systems and trying to test them. Sometimes that will be reverse engineering from afar; there are legal restrictions there, like the CFAA in the US, that really worry me. But that process of really trying to hack and test these systems is going to be critical.

One of the recommendations you made in the AI report, which I think is actually quite important, is the call for more diversity in the research. We've been playing with some AI systems in the studio, and sure, there are autonomous experiments where nobody is in control, but sometimes you see the very specific subjectivities of the people who created the system. For example, we ran object recognition on a painting and it said: oh, this looks like a burrito. A burrito is only something you would think of as a class of things worth identifying if you were a young white person living in San Francisco. So there are these moments where you really do see the specificities of the experience of the people developing the software, and I think that translates into many other spheres. For example, if you are a big data corporation and you decide, oh, we're not going to encrypt this data, because it doesn't really hurt anybody, I don't have anything to hide: you are coming from a class position and a race position where maybe you don't have anything to hide, where you are not being preyed upon by police or by other kinds of agencies. So that was really interesting to me, that one of your recommendations is that you actually need more diverse people.

Yeah, this is where we are not doing well at all. Right now, if you look at the stats on the engineering departments at the big seven technology companies, the ratio is around 80 to 90 percent men, depending on the company. Just getting women into those rooms has been extremely difficult, for a whole lot of reasons, and if you look at people of color and underrepresented minorities, the numbers are even more dismal. This is an extraordinarily homogeneous workforce: the people in these rooms designing these systems look like each other, think like each other, and come from, generally speaking, very upwardly mobile, very wealthy sectors of society. So they're mapping the world to match their interests and their way of seeing. That might not sound like a big deal, but it is a huge deal, because it means certain ways of life simply don't exist in these systems. Strangely, race and gender, which have always been an issue in computer science, are even more important in AI, because this is not just an economic argument about getting people jobs and skills; these are the people mapping the world, and they are seeing only a narrow, dominant slice of it. If we don't get more diversity into those spaces, or at least different ways of thinking about the world, we are going to create some serious problems.

Absolutely. For me, that's one of the takeaways when we think about AI: this is not neutral. There are specific kinds of power these systems are optimizing for. Some of it is maybe unconscious, class and racial positions and that sort of thing, but some of it is quite conscious: the kinds of systems that become more profitable and reproduce themselves are the ones that make money, the ones that enhance military effectiveness, the ones that law enforcement wants. Capitalism: these are the vectors of power flowing through these systems. So for me it's always important to make the point that this is not happening in a vacuum, and it's not a level playing field. Part of the civic project is to think about what kinds of power we want flowing through these organizations.

And I think showing people how power works is really key here.
This is where I think of your work on machine vision: you're really showing people the different ways that bodies are tracked and understood. It's very different from human seeing, but it has a whole range of capacities that people are not used to looking at. I know you're making a series of works that will really start to show people what this quite alien way of seeing looks like, and I think that is quite a radical and important act right now, simply because a lot of people are not aware of how much these systems are around us all the time. So part of what we can do now, and where artists and activists and academics can all really start to work together, is, first, to show people the materiality of these systems, and then to start thinking politically: not just about hiding from them, not assuming that encryption is going to be the answer, because I fear we're in an arms race now. It's going to take a lot more political pressure, a lot more research, and a lot more public interest in this question. One of the things I know we agree on is that this feels like a very big storm cloud on the horizon: a lot of changes are about to happen, and a lot of people are just not aware of it yet. So making this a bigger issue of public awareness is really important at this point.

And with that, maybe we have time for a question or two; I'm not quite sure. Are we allowed to do questions? No, we're not; sorry about that. You can come and talk to us later, or tonight at the party.
But thank you so much. Thank you, Kate and Trevor. Thank you, guys.