Human Rights in the Age of AI - Dystopia or shining future?
Formal Metadata
Title: Human Rights in the Age of AI - Dystopia or shining future?
Title of Series: rC3 - remote Chaos Experience
Number of Parts: 275
Author: Johannes Walter
License: CC Attribution 4.0 International: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/52278 (DOI)
Language: English
Transcript: English (auto-generated)
00:15
Johannes will talk about human rights in the age of artificial intelligence.
00:21
Johannes is a volunteer at Amnesty International. He is a member of the Amnesty International expert group on human rights in the digital age. Professionally, he works on the topic of effects of algorithms on digital platforms. If you would like to post questions for the Q&A session afterwards,
00:46
you can post them on Twitter under the hashtag rc3ou, written as one word in lowercase, or in the IRC channel under rc3-ou. Now a warm welcome to Johannes and enjoy the talk.
01:05
Welcome to our talk, Human Rights in the Age of AI, Dystopia or Shining Future. Tonight, I want to take you on a top-level overview tour through this vast and possibly endless-seeming field.
01:24
My goal is to give an introduction to the topic that is accessible to beginners, but also to enrich this talk with information that makes it valuable and worthwhile for advanced audiences.
01:42
My name is Johannes Walter and I am speaking to you tonight as a representative of Amnesty International. Let me try to explain in four bullet points or less who Amnesty International is and what we do, just so you know who is talking to you and why.
02:06
Maybe in one sentence, Amnesty's mission is to campaign for a world where human rights are enjoyed by everyone. We are a non-governmental organization that is independent of any political ideology,
02:25
or economic interest, or any type of religious belief. So what is it that we are actually doing in order to achieve our goal of living in a world where everybody enjoys human rights?
02:40
Very broadly speaking, it is two things we do. For one, we are a lobby group. We lobby governments and corporations, companies, such that they stick to their promises and that they respect international law.
03:02
Amnesty globally has several million members and we are leveraging that human power in order to document and uncover human rights violations all over the world and then to use our ability to create publicity to build up pressure on governments and corporations
03:28
to make sure that they respect human rights. And then the second thing we generally do is we try to keep the public informed about human rights-related topics
03:47
because we believe that the best outcomes for a society are achieved when that society is engaging in a debate, in a discussion on how to solve any kind of problem really.
04:08
And we believe that the better informed the public is, the better the results that come out of these debates. And that is also the reason why I am speaking to you tonight.
04:22
Now, I feel like it is warranted to start any type of presentation that throws the term AI around by clearly stating and clearly defining what is meant by artificial intelligence.
04:42
Such a definition is crucial for a couple of reasons really. For one, the ethical assessment of the moral challenges that come about with AI hinge critically on the definition.
05:00
If you get the definition wrong, the discussion turns into science fiction in the best case; in the worst case it's a distraction from the actual problem. But then, and we talk about this topic a lot with people, there is also this phenomenon
05:20
that mentioning the term AI by this point triggers a mental chain reaction in people's heads. For some, mentioning AI causes them to be immediately annoyed and turned off because they are worn down and dulled by the constant overuse of the term in meaningless, marketing-like settings.
05:46
And then on the other end of the spectrum you have people who are super excited and thrilled as soon as they hear the term AI and they are ready to embark on a discussion about the singularity and superhuman AI.
06:01
And I think a scientifically sound approach and one that is also closest to the results that the leading IT companies are achieving these days could be the following. Say AI is software that uses statistical algorithms to search and find patterns in large amounts of data
06:31
and then uses these learned correlations in the data to make predictions about data points that it hasn't seen yet.
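To make this working definition concrete, here is a minimal, hypothetical sketch (my illustration, not anything shown in the talk) of that loop in Python: fit a statistical model to a large set of labelled examples, then use the learned correlations to predict labels for data points the model has not seen.

```python
# Minimal sketch of the definition above: statistical pattern-finding on known
# data, then prediction on data points the model has not seen yet.
# scikit-learn is used purely for illustration; any learning library would do.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic "large amount of data": 10,000 labelled examples with 20 features.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold part of the data back so the prediction step really concerns unseen points.
X_seen, X_unseen, y_seen, y_unseen = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_seen, y_seen)                                             # search for patterns / correlations
print("accuracy on unseen data:", model.score(X_unseen, y_unseen))   # predict on new data
```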
06:41
And such a software of course can also run on hardware, so this would include robotics obviously just as well. Now with this method, companies over the course of the last five or maybe even already eight years
07:03
have achieved tremendous results. And those results are really the reason why we are on an upward wave, in an AI boom, these days. So computers can nowadays reliably see and speak and listen, react in intelligent ways
07:26
and in the last two years or so we started seeing AIs pop up that can really start to generate and create their own creative content.
07:44
And that's why it makes sense to talk about this. Now I know that some of you might be thinking that the definition I just gave you is closer to what is typically meant by machine learning. And I'm aware that usually AI is an umbrella term that is including but not limited to machine learning.
08:06
But as you will see throughout the presentation, this definition will serve us in the scope of this presentation just fine and therefore I will run with it.
08:21
Now in what ways does artificial intelligence hurt us already today? And in what ways can it possibly develop into an even bigger threat in the future? One research article that really kick-started the whole wider debate about the ethical repercussions of AI
08:44
is the one from two years ago by Buolamwini and Gebru, in which they looked at facial recognition algorithms. They took three of the most widely used commercial facial recognition algorithms at the time.
09:06
One was Microsoft's, and I forgot the other two, but they were from the big IT companies. And what they did was try to assess the accuracy of the algorithms, breaking that accuracy down for different demographics.
09:24
And what they found was quite striking. So for the group of light-skinned males the algorithm worked almost perfectly. The error rate was 0.8% but for dark-skinned women the error rate was more than 40 times worse.
09:48
So if the algorithm saw a new picture of a woman of color, then in almost 35% of the cases it would misclassify that person as male, for example.
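The kind of audit described here can be illustrated with a small, hypothetical sketch: instead of reporting a single overall accuracy, compute the error rate separately for each demographic group. This is not the Gender Shades code, just the idea in a few lines.

```python
# Hypothetical audit data: one row per test image, with the true label, the
# model's prediction, and the demographic group of the person shown.
import pandas as pd

audit = pd.DataFrame({
    "group":      ["lighter_male", "lighter_male", "darker_female", "darker_female"],
    "true_label": ["male",         "male",         "female",        "female"],
    "predicted":  ["male",         "male",         "male",          "female"],
})

# Error rate per group instead of one overall number; large gaps are the red flag.
audit["error"] = audit["true_label"] != audit["predicted"]
print(audit.groupby("group")["error"].mean())
```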
10:10
Now these algorithms were already used at the time so we're not talking about something that lies in the future. In fact, harm by such algorithms is happening right now.
10:23
2020 saw the first case where an American citizen, Robert Williams, was wrongfully arrested due to a mismatch by a facial recognition algorithm that the police were running.
10:41
The story would be kind of entertaining if it wasn't that unfortunate and sad because as he states he was working a normal shift when he got a call from the local police department asking him to turn himself in for jail time.
11:01
So what happened was the police were investigating a case of a minor robbery at a local store, and the CCTV, the video footage of that store, recorded the face of a black man.
11:22
The police ran that facial recognition algorithm and the match it spat out was this man, Robert Williams. He even ended up spending time in jail, even though it was later discovered, of course, that he was not responsible, and he received an apology from the police. It is an interesting case as it is the first account we know of.
11:52
So, like we've seen in these examples, algorithms can show discriminatory behavior and that comes maybe as a surprise to some
12:04
because naively you could think that computers are these hyper rational machines that are strangers to any kind of emotional bias and therefore discrimination shouldn't be a problem. But as we just saw it is and so the question is how can that happen?
12:28
And of course, as many of you probably already know, one way biases can be introduced into AI is by using bad training data. One particularly striking example is the story of Inioluwa Deborah Raji.
12:47
This young woman, Nigerian-born but now living in the US, did an internship at the AI company Clarifai. And what she was working on there was a facial recognition algorithm that
13:09
was supposed to help clients flag inappropriate images as not safe for work. What she soon realized was that images that contained people of color were
13:26
deemed inappropriate at a much higher rate than imagery that contained only white people. So she started to investigate, and what she found out, curiously, was that the problem was in the way the AI was trained.
13:45
So the AI learned inappropriate content from pornography footage and appropriate content from looking at stock photos. As it turns out, porn is much more diverse in terms of skin colors than is stock footage which contains mostly white people.
14:07
So the algorithm learned to associate black skin with inappropriate content. Interestingly, when she brought this to the attention of her managers, they in fact did nothing about it.
14:29
The sentiment was, it is difficult enough to find good training data or training data at a large scale at all. So we're not going to worry too much about representativeness for now.
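As a toy illustration of this failure mode (my own synthetic example, not Clarifai's system), the sketch below shows how a learner picks up a protected attribute as a shortcut when the positive and negative examples come from demographically skewed sources.

```python
# Toy demonstration: "inappropriate" examples come from a skin-tone-diverse
# source, "appropriate" examples from a mostly light-skinned source, so the
# model learns to use the skin-tone feature itself as a predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
dark_skin = np.concatenate([rng.binomial(1, 0.50, n),    # diverse source, labelled 1
                            rng.binomial(1, 0.05, n)])   # mostly white source, labelled 0
content_signal = np.concatenate([np.ones(n), np.zeros(n)]) + rng.normal(0, 0.5, 2 * n)
labels = np.concatenate([np.ones(n), np.zeros(n)])

X = np.column_stack([dark_skin, content_signal])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.coef_)   # the skin-tone feature typically ends up with a clearly positive weight
```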
14:49
So much for bad training data, but there are other ways in which AIs can be biased as well. Another important reason is if you tell the AI the wrong thing to do,
15:09
if you're not careful about how to specify the target objective function of the algorithm. So I want to share a very interesting story, at least in my opinion.
15:24
And that is the story of how two researchers found a gender bias in ad algorithms on Facebook. So what the authors did was they ran an ad campaign for STEM degrees on Facebook.
15:44
STEM, of course, being science, technology, engineering and mathematics. And what would happen is they just went into the regular advertising way on Facebook and people would be shown this ad
16:03
and when they click on it, it would take them on a website that would inform them about the advantages of studying STEM and about finding out job opportunities and the like. So they ran this for a couple of weeks and when the campaign was done, they analyzed the data.
16:24
And what they saw was that the algorithm chose to show this ad much more often to male audiences than to female ones. Now, if we are making an ad now, for example, for any type of consumer product, that might not be a problem.
16:47
But if we're talking about campaigning, advertising for something that is having further implications for the society, like who studies what and why, then maybe we want to drill into the reason for why the algorithm chose to discriminate between genders here.
17:06
And the authors first thought that, well, okay, maybe men are just more interested in this ad and are more likely to click on it. More likely than women, anyways, and therefore it would be somewhat justified to show it more often to men.
17:22
But when they did the analysis, they, to their surprise, found out that the chances for men and women to click on this ad were basically exactly the same. So now it's getting really interesting, right? Why, if that is the case, then it really seems like the algorithm is discriminating women here.
17:44
When they drilled further, they found out that the reason lies in the way the target for the AI is defined. So the algorithm was told, or the way the algorithm is coded, is to maximize the ratio between impact and cost.
18:06
So that it would show the ad, yeah, that it would maximize this ratio. And now it turns out that female eyeballs having a contact with an ad impression
18:23
is actually more valuable for advertisers than showing ads to men, on average, other things equal. Because, as it turns out, at least in the US, and I don't doubt that it is very similar in Europe, women make most of the decisions about what to buy.
18:47
Big ticket items and all the way down to everyday grocery shopping. And because of that, it is more enticing and interesting for advertisers to reach women. And because it's more valuable, it's also more expensive.
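The delivery mechanism can be illustrated with back-of-the-envelope arithmetic (hypothetical numbers, not Facebook's actual prices): with equal click probability but pricier impressions for women, an optimizer that maximizes impact per cost ends up buying mostly male impressions.

```python
# Hypothetical numbers only; the point is the ratio, not the absolute values.
click_rate = {"women": 0.02, "men": 0.02}               # measured to be basically equal
cost_per_impression = {"women": 0.012, "men": 0.008}    # female impressions cost more

for group in ("women", "men"):
    clicks_per_euro = click_rate[group] / cost_per_impression[group]
    print(group, round(clicks_per_euro, 2), "expected clicks per euro")

# women ~1.67 vs men 2.5 expected clicks per euro: a budget-constrained optimizer
# shifts delivery towards men even though nobody coded "prefer men" anywhere.
```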
19:03
Now, because the probability for men and women to click on the ad was more or less the same, but women were more expensive, it was optimal for the algorithm to show it more often to men. Now, when the authors found out about that result, their immediate first reaction was indeed to go to Facebook and say,
19:25
Hey, Facebook, please, we are aware of this problem here. The algorithm seems to discriminate unjustifiably. Please make sure that you show this ad in equal proportions to men and women. But, quite ironically, exactly that is not possible under the current rules on Facebook, exactly in order to prevent discrimination based on gender.
19:50
So, that story really is a nice example also of how we might have to rethink certain rules now with the emergence of AI as a widespread technology.
20:05
I want to share another example story of how a bad objective function can cause problems. And that is from a study that was very nicely published in Science.
20:21
What they did, they looked at an algorithm that was used in the American healthcare system. And the job of the algorithm was to support doctors of medicine. It would make a suggestion of who should receive further intensive care,
20:41
which patients should receive more care and which are okay with receiving a little bit less intensive care. When they looked at this algorithm, again they found that for patients in the same conditions, black patients were recommended at much lower rates for intensive care than white patients.
21:04
And what they found out was the algorithm was told to proxy the need for medical intensive care by how much money the healthcare system spends on a certain type of patient.
21:24
Now, because the American healthcare system is structurally disadvantaging black people, there is less money spent in the healthcare system already over the last decades on black people than on white people.
21:43
So, again, with the same conditions, black people would, as decided by humans, receive less care and less money. The algorithm, seeing this data, would infer that black people are healthier and don't need as much care, which of course takes the whole thing ad absurdum.
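A toy sketch of that proxy problem (my illustration, not the model from the Science study): if the risk score is effectively a prediction of future healthcare spending, then at identical medical need a historically underfunded group gets a lower score and falls below the threshold for extra care.

```python
# Stand-in for a cost-prediction model: the "risk" score is expected spending,
# i.e. actual medical need scaled by how much the system has historically spent
# on patients from that group.
def predicted_risk(need: float, historical_spending_factor: float) -> float:
    return need * historical_spending_factor

CARE_THRESHOLD = 0.7
for group, spending_factor in [("white patient", 1.0), ("black patient", 0.7)]:
    score = predicted_risk(need=0.8, historical_spending_factor=spending_factor)
    print(group, score, "-> extra care" if score >= CARE_THRESHOLD else "-> no extra care")
# Same need (0.8), but scores of 0.80 vs 0.56: only the white patient is flagged.
```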
22:06
So, we've seen now how AIs can discriminate and for what reasons, that is. Now, I want to talk a little bit about another important way in which AI could be detrimental to our societies,
22:24
and that is talking about deepfakes. Now, without delving into the technicalities too much, deepfakes are manipulated video, audio or images, and they have been manipulated by so-called deep neural networks.
22:45
And in effect, what that means is that we can now create videos that can be altered with unprecedented ease and at close to zero cost.
23:00
And it is easy to imagine how that can be dangerous. For example, I've seen a paper recently that introduced an AI that is capable of removing people or objects out of a video entirely, without leaving almost any artifacts in the image.
23:28
We've seen the last two US elections, we've seen Brexit, we've seen over the last nine months the debate going on about COVID-19.
23:41
And it is really easy to see how in the 2020s decade that is lying ahead of us, our democratic discourse can be negatively influenced by bringing about fake news, deepfakes into the discussion,
24:00
especially considering that internationally there are actors who have a vested interest in disrupting a smooth democratic process in Western countries. But as it turns out, it's not only Western countries that are concerned about deepfakes. China's internet regulator, for example, announced a ban on fake news created with deepfakes,
24:26
and they even discussed banning deepfake technology altogether. And then on the other side of the earth, in the US, California has already taken action against deepfakes,
24:43
such that since last year it is now illegal to use deep neural networks to alter images or video in ways that would bias how politicians' actions or words are received by a wider audience.
25:06
So I've talked now about discrimination and about deepfakes a little bit in greater detail, because discrimination by AI is really a topic that has seen a lot of attention by policymakers and researchers,
25:22
and because deepfakes are becoming more and more prevalent. But of course, there are many other ways in which AI can be problematic for us. And I just want to list a couple of ways, and I want to embed that by adding a couple of words to the question of do we need new human rights?
25:48
Do we need digital human rights possibly in order to deal with these problems? There is an ongoing debate, and it is far from being settled, but at least speaking for our group at Amnesty,
26:04
I think it is safe to say that there is a tendency forming to say that, no, in fact, we do not need new human rights in order to cover all these problems that I've talked about, but the ones that we already have just need to be applied in the appropriate manner.
26:23
But of course, this discussion is far from over. Just to sort the cases, the examples I've talked about so far, and to give a little taste of what other problems are out there and how they relate to human rights: if we look at the human rights as defined by the Universal Declaration of Human Rights, we can go through a couple.
26:48
Of course, I'm aware that the Universal Declaration is not legally binding, as it isn't a contract of international law. But of course, most, if not all, rights have been implemented into legally binding, very much legally binding national law.
27:08
For example, the case about Robert Williams that I've mentioned a couple of minutes ago would fall into the domain of Article 2, which is the right to non-discrimination.
27:25
Another field about which we could do an entire presentation is predictive policing, which falls in the domain of this Article 2. And of course, there is Article 3, the right to life and liberty. And here, of course, we have to mention autonomous weapons systems,
27:44
which are basically killer robots that have been deployed with, for example, facial recognition AI, or an AI that allows them to decide whether to go ahead with a lethal strike without a human in the loop.
28:08
Then, of course, there is Article 12, the right to privacy. And I've talked in great detail about facial recognition by now. But of course, here, we could also talk about this system of data surveillance
28:23
that the big IT companies are basically putting us all into. Article 20, the freedom of assembly, could be endangered, for example, by facial recognition AI because some people might choose not to go to a demonstration
28:43
if they are afraid that the police might identify them individually. And that this is far from a dystopian, in the future, lying problem we have seen at the protests in Hong Kong over the last years.
29:03
Of course, Article 18, freedom of thought, could be endangered by, for example, the problem of deepfakes poisoning our democratic discussion. And even all the way down to people being discriminated against based on protected attributes.
29:24
We have seen, for example, gender and race. But of course, there are many other that could be in question here. So this just to, like I said, give you a glimpse of how far-reaching this is.
29:42
But the talk is titled Dystopia or Shining Future. So I also want to talk a little bit about how AI can be used as a force for good. And there is good reason to be hopeful and to believe that AI can be helpful as well.
30:06
So, for example, AI image recognition algorithms have been used to document human rights violations in Yemen, in Syria. And Amnesty International, for example, has used it to document human rights violations in Darfur, which is a western region of Sudan.
30:32
And what was happening there, the region of Darfur wants more participation in the national political affairs of the state.
30:47
And so the conflict escalated and the government was fighting against rebels. And Amnesty is accusing the national government of using chemical weapons against the population.
31:03
And now in order to gather evidence of these crimes, what Amnesty did was looking at satellite images before and after such a chemical attack. Because these chemical attacks would expel the population of certain villages.
31:23
And of course what we could have done is draw on a large number of volunteers who would then classify these images by hand. But of course it is much more efficient, faster and more impactful to use AI in this context.
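As a rough illustration of how such image classification could work (a hypothetical sketch, not Amnesty's actual pipeline), one could train a small convolutional network that labels satellite tiles, so that before/after comparisons scale far beyond what volunteers could label by hand.

```python
# Assumes tiles are 64x64 RGB arrays with labels such as "inhabited" vs
# "destroyed/abandoned"; the architecture is deliberately minimal.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(tile shows destruction)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_tiles, train_labels, validation_data=(val_tiles, val_labels))
```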
31:41
And in a very similar vein, Amnesty is running the Toxic Twitter project. That is now switching subjects: no longer talking about human rights violations in countries, but about the problem of violent, sexualized hate speech against women on Twitter.
32:11
And what Amnesty is doing here is again trying to document this problem and to build up pressure and force Twitter to take action such that everyone feels safe and secure in this social space that Twitter is nowadays.
32:33
And again, what we are doing here is using NLP, text-analyzing algorithms
32:43
that help us classify millions of tweets into dangerous hate speech or into appropriate content. And for example doxing is a large problem.
33:02
That is for example the act of publishing private information about someone online such that then others can go and use that information to make death threats in real life or so on.
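A minimal sketch of the kind of text classifier described here (an illustration, not Amnesty's actual pipeline): bag-of-words features plus a linear model that scores tweets, so that millions of them can be triaged before human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = abusive, 0 = acceptable).
tweets = ["you deserve to be hurt", "great talk, thanks for sharing",
          "here is her home address, go get her", "interesting thread"]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(tweets, labels)

# Score unseen tweets; anything predicted abusive would be routed to human review.
print(classifier.predict(["here is her address, go find her"]))
```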
33:22
These are two very precise examples of what we did. But of course Amnesty is not the only one. Great work has been done to use AI to recognize displaced people or to use AI to analyze the background of child pornography videos.
33:49
Such that similar backgrounds could be an indication that a video was filmed by the same group or individual, which is a hint that helps the police find the criminals who made the video and thereby break up child pornography and sex trafficking rings.
34:14
But then there is also this very fundamental hope that AI as a general
34:24
purpose technology can have tremendous positive effects on humanity on a global scale even. So what I mean by saying general purpose technology is that AI is really considered to be not just any
34:45
other new innovation, but an innovation that is impactful in basically all domains of human life. As a result, like in a domino effect, it causes new innovations and discoveries that improve living conditions.
35:07
Just like electricity did 140 years ago. AI for example could be used in the context of fighting climate change.
35:23
We could, for example, use it to monitor biodiversity and climate conditions such as heat in remote areas of the world. It could be used to improve the predictive power of climate models such that we can adjust our behavior accordingly.
35:46
But not only in the fight against climate change: it could also be used in the domain of health. One noteworthy example here is DeepMind's AlphaFold, a discovery or achievement that some of you might have heard of.
36:06
A very recent one just last month and one that I think did not actually receive the media attention that it deserved. What this group around this AI achieved is to solve the protein folding problem
36:21
which was one of the fundamental problems of the last 50 years in molecular biology. It means that the AI can now predict the way in which a protein folds up, and that allows us to devise new materials much faster and much more cheaply.
36:41
Materials which then again could be used in the fight against climate change because they are more energy efficient. Or new proteins that could allow for better and more efficient medication. And then of course in an economic sense AI could be hopefully used to improve the productivity and to boost global living standards.
37:09
And that is important of course because human rights are not limited to these political rights that you might be typically thinking of as we've seen.
37:21
Freedom of assembly, freedom of speech and so on. But of course human rights encompass nowadays also socio-economic rights. And if we make the best out of that technology we can be hopeful that all these achievements come into fruition in the future.
37:46
But in order to achieve that, we have to make sure that artificial intelligence actually behaves in a safe manner. So how would we go about doing that?
38:00
Policy makers and researchers have really started to think in detail about this problem. So you see expert boards popping up all over the place in the last couple of years that are dealing with this problem of safe AI. There is the European Commission's High-Level Expert Group on AI.
38:24
There is the German Data Ethics Commission. And basically any company that thinks of itself as an IT company has set up an AI ethics board or has at least published an ethics paper about AI.
38:41
For example to the left you see a graph from the report of the German Data Ethics Commission. And what they say is, well we can divide AI according to their potential harm. They call it according to their potential criticality. And the base of this triangle in green is the vast amount of AI algorithms that is unproblematic.
39:07
And they say these algorithms would not cross the threshold in order to have the need to be regulated. And then on the other end of the triangle you have this red tip which would be very few AIs but these really should not be allowed to be used at all.
39:29
So for example in the green field you could think of an algorithm that identifies whether the coin that is thrown into a vending machine is actually the appropriate amount of money.
39:43
And an example for an AI that should be forbidden entirely could be one out of a field of autonomous weapon systems. But of course the interesting debate is going on in this yellow to orange field in the middle.
40:07
Then there is also a report that Amnesty has published with Access Now, called the Toronto Declaration. In it, Amnesty demands that public and private actors who employ AI systems be held accountable,
40:27
that they ensure a safe development of AI, and it makes a couple of concrete suggestions. For example, to make sure that the developer team of an AI is diverse in many senses.
40:44
Thinking back to the story of Inioluwa Raji, you remember that her managers did not actually care even after she brought the problem to their attention. And having a diverse team, possibly one that is itself affected by the detrimental effects of AI, could help here.
41:08
What all of these suggestions to ensure safe AI have in common is that they are calling for an element of human oversight. And for a way in which we can make sure that humans can understand how the AI is coming to its decisions.
41:28
And while that is desirable it is also extremely difficult for two reasons. So in contrast to traditional code you can't just look at the source code and do a code audit in order to find out the flaws in the program.
41:46
AIs are also called black boxes. You see the input that goes in and you observe the output, but in these neural networks with billions of parameters it is impossible, even for the developers, to determine how the AI arrives at a certain result.
42:03
And the second problem is that, unlike for example auditing to make sure that a car runs safely, AIs are changing in such a frequent or possibly even continuous manner that the auditing process should also somehow be continuous.
42:27
Now like I said there is a lot of research going on about this and ideas exist about how to tackle these problems. So what all of these possible solutions have in common is kind of what I would call a crowd or expert based AI challenging system.
42:51
So what that means is you circumvent this black box problem by feeding the AI with
43:01
input, and specifically with input that brings the AI to make a mistake. From that, you can infer where the problem areas of an AI really lie. And it is also of course important to ensure that the consideration for safe AI is in the minds of the developers from point one of the development.
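One possible shape of such a challenging loop, sketched here as an assumption rather than an established standard: treat the model as a black box, generate many candidate inputs, and record the ones where the model disagrees with a known ground truth, which maps out its problem areas.

```python
import numpy as np

def challenge(predict, make_input, oracle, n_trials=10_000, seed=0):
    """predict: the black-box model under test; make_input: generates a candidate
    input (e.g. a rendered face with controlled skin tone and lighting);
    oracle: the answer the model should give for that input."""
    rng = np.random.default_rng(seed)
    failures = []
    for _ in range(n_trials):
        x = make_input(rng)
        if predict(x) != oracle(x):
            failures.append(x)        # a concrete input that fools the model
    return failures
```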
43:36
So that we can do these challenging processes not just after the AI has been deployed and affected possibly millions or billions of people,
43:46
but already that there is an internal auditing process that defines clearly steps and documents these steps of what decision is made in order to develop this AI and how.
44:01
Such that in the end there is an accountability report that can possibly already take the biggest kinds of problems out of the AI before it is even reaching a larger audience. So with the examples I gave you from earlier it is easy to see how
44:22
it would have been possible to spot a problem in the facial recognition algorithms for example. By just making sure that the training data is actually representative of the general US population. That brings me to my conclusion. So are we headed for a dystopia or are we headed for a shining future?
44:45
Now I could make my life easy and say we're going for a middle ground, but I want to commit to a position here, and I think there is good reason to be optimistic. Of course I've talked a lot about problems that we already have today and about potential problems in the future.
45:05
And of course with every type of new technology regulation and supervision is always trailing a little bit behind. But as you have also seen researchers and policy makers have become aware of the potential problems.
45:21
And with the potential of this technology if we make sure that we continue on a good trajectory into the future I think we are actually headed more for the shining future than for the dystopia. Let me end the talk by just pointing out a few things about the literature.
45:45
So these are my sources and all of these except for the last one here should be accessible for free. Also Timnit Gebru who is the author of the third paper has some interesting developments
46:07
going on around her; if you want to follow her on Twitter, that is interesting. Also, the last bullet point here, Inioluwa Raji, is a rising star of the ethical AI scene. It's also worth following her on Twitter.
46:20
And the rest of the sources, except for the first one, are also all available for free online. I want to thank the awesome and talented photographers who were kind enough to allow me to use their stock images for free. And I want to end by saying that if you are interested in any of the things that I have mentioned today,
46:47
especially if you are interested in some of the topics I have merely touched upon like predictive policing or data surveillance, then please don't hesitate to get in touch with our expert group, visit our homepage.
47:00
Or if you have questions directly about this talk, then get in touch with me directly. But of course I am also looking forward to seeing you now in the Q&A session and taking your questions there. Thank you very much. Do you think political decision makers on a broader level have an awareness of the problem, or
47:25
do you think this is really just tied to some experts for the moment? I think we begin to see that the awareness for the problem trickles down to the general political sphere.
47:44
So I would imagine that during the next 10 years we as a society in general will start discussing this problem on a much wider scale. So I am optimistic about that. Okay. So going on the more positive side, if there was to be a shining future, what possible obstacles are there to overcome still?
48:20
I think one problem, and there are a lot of talks during this rC3 that are concerned with it, is getting the big IT companies in check. We will have to find a way to deal with the big tech monopolies, because they are the ones who are employing the most cutting-edge AI technology.
48:48
And if we succeed in that, then I think we can be also optimistic about leveraging the technology to the full potential and so that it actually does good.
49:03
And I mean that has a lot to do with your first question. So I mean of course none of this is out of our control. If there is enough political will, then it's feasible. Okay. So what would your opinion actually be on seals of approval? In German the term is Gütesiegel. They are under
49:23
discussion at the moment for AI, to ensure safe technology. Can you say anything about such seals? So, like I tried to point out in my talk, there is a lot of talk going on about the fact that
49:40
we have to audit AI in one way or another, but nobody is really going into the specifics of how to do that. Attaching a seal to an AI the way you would attach a seal to a car after you send it to the TÜV in Germany and it got checked is probably a poor analogy.
50:04
Like I said in the talk, you have the problem that AIs are changing constantly, and you can't just open the hood of the car and look at the motor, as they are these black boxes. So we will have to find new ways to do these audits, and I think a seal that
50:23
only ever confirms at a certain point in time that the AI isn't misbehaving is a fundamentally flawed concept. But there is a lot of research going on in this field right now and I guess we will see new approaches in the next years. We have to. I mean there is no other way.
50:44
Is there actually anything that we can do as individuals to take action as non-researchers and non-experts? I think, well that's a difficult question. I mean probably on a general note it is important
51:07
that the public is aware of the problem and that people are informed enough about the details so that they can come to a useful judgement about their everyday life use of technology that employs AI.
51:24
So I mean for example when we use YouTube or other social media that are using recommender systems and we grow aware that there are problems like echo chambers that are arising then we need to channel our frustration with that into a constructive form.
51:48
For example talk to your representative in your national parliament and call them, write them about this problem so that we can then use the political power to ensure safe regulation.
52:05
Otherwise I don't know it's difficult on an individual level of course but together leveraging that force could do something. Raising awareness is always a very good first step. Can I actually ask how did
52:20
you personally get interested in this topic or how did you first become aware of it? So I have been working on the problem of how algorithms affect society for one and a half years now in my
52:41
job and I've been a member of the Amnesty expert group on human rights in the digital age for about two years now. In fact it's also a new field for us at Amnesty. Basically this presentation is also a report about the work in
53:02
progress that we're doing: wrestling with and coming up with concepts for how to work on the problem of AI and human rights. So I grew into that over the last two years or so. Okay. So, 2020 has for many reasons been a challenging year, but regarding the topic that you're working on, what are your wishes for 2021?
53:32
Well, from a research perspective it would be cool if some large IT companies opened up their source
53:43
code for, for example, AI models they no longer use, to allow the research community to take a deep-dive look at them. And in the research community, in a similar vein, it would be good if researchers started sharing the code
54:01
they produce with their papers for everyone which is shockingly not the case for many papers. So there needs to be a shift in mindset and we see that beginning already and that would be a cool trend to continue for 2021.
54:22
Great. Well I hope the right people were listening just now. Thank you Johannes very much for this interesting talk. If you and everyone at home at your screens would like to continue the discussion then please join Johannes in the Jitsi room.
54:40
You can find that under discussion point RC3 point OU point social. I repeat discussion point RC3 point OU point social. Thank you very much and see you there. Thank you.