Cameras everywhere
Formal metadata
Part | 47
Number of parts | 68
License | CC Attribution - ShareAlike 3.0 Germany: You may use, modify, and reproduce the work or its content in unmodified or modified form for any legal purpose, distribute it and make it publicly available, provided that you credit the author/rights holder in the manner they specify and that you pass on the work or content, including in modified form, only under the terms of this license.
Identifiers | 10.5446/21591 (DOI)
Production location | Berlin
re:publica 2011, Part 47 / 68
Transcript: English (automatically generated)
00:18
Guten Morgen. I'm going to speak in English, unfortunately.
00:23
Let me start with two images. And they make up something of the beginning and the middle, and hopefully not the end of the story I'm going to tell. And the first one is from the US, from 20 years ago.
00:41
And the second one is from Libya, exactly 20 years later. And I will use those as kind of the beginning and the tentative end of what I want to tell you about. Just over 20 years ago, a man called George Holliday filmed these images from his balcony window
01:00
in a suburb of Los Angeles called Lakeview Terrace. It was quarter to one at night, and he had heard a commotion outside. And when he stepped out, holding the camcorder he'd probably bought to film a wedding, or maybe his kids, his family, he captured on tape the beating of an African-American man,
01:21
Rodney King, by the Los Angeles Police Department. He filmed about nine minutes, and there was one minute in that that became a really crucial piece of video evidence, which showed the Los Angeles Police Department attempting to hit Rodney King 56 times. It's a piece of footage that generated a lot of media attention,
01:43
a massive debate in the US about race relations, public protests, a criminal trial, and it initiated the organization where I work, Witness. And our mission is to ensure that cameras and the capacity to use video are in the hands of people who, in some sense, choose to be in the wrong place at the right time.
02:05
If you think about George Holliday, he was not a human rights activist, he didn't want to film human rights abuses. Witness aims to help the people who want to film human rights abuses to do that, to do that effectively, and to do that purposefully.
02:21
And we help them to do that in ways that help change behaviors, policies, and practices. And over the past 20 years, we've worked with groups in around 80 countries and thousands of individual activists. Let me start by talking about what we've learned as an organization,
02:41
and then I'm going to open up the lens much broader to talk about the world of ubiquitous video and human rights now, and the thousands of people who are trying to use video and related social media for human rights change. Let me show a short video which will just frame Witness's work.
03:40
What you saw in that video were a range of human rights activists around the world who are using video in a range of different ways for human rights work. And a good starting point to understand is that Rodney King, as an example, is a little bit of an illusory example. So when you look at the Rodney King footage which I started with, it created a massive media storm,
04:03
it was used as evidence in a trial. Most human rights footage doesn't have that impact. Most human rights footage is ignored. Most human rights footage doesn't get media attention. And this is also important perhaps to remember at the moment now as we look at what's happening in Egypt, or Libya, or Syria,
04:24
when there's a lot of attention on human rights footage. And most human rights footage isn't impactful in the mass media or in the court of law. So Rodney King was a little bit of an illusory example. Most of the human rights footage that the groups which Witness is working with
04:40
are using video in kind of the big space between getting it on CNN, or getting it on Al Jazeera, and showing it to six or seven people in a court. So it might be, and actually I put this slide in here just to indicate a little bit of the complexity of showing video in a courtroom. In the 1990s, the criminal tribunal for Yugoslavia that was looking at the human rights crimes in Yugoslavia
05:07
couldn't even work out how to think about video when they were documenting what happened in the trials. So if you look through the records of the criminal tribunal for Yugoslavia, it doesn't tell you what video was showed. If they showed a piece of evidence, they just put this little note,
05:22
video played, which is sort of an irony of kind of how a lot of the human rights world tries to deal with visual media rather than text or statistics. It just goes, no, can't deal with that, have to step away. Some examples of the kinds of groups we work with. So groups in Burma who go across the border into Thailand.
05:43
This young man fled when his village was burnt to the ground by the Burmese military, and he goes back over the border into a country where filming is illegal, can be punished by long terms of imprisonment, and he goes back to film in areas where the Burmese government is burning down ethnic minority villages.
06:05
This is another example of our work. This is a sex worker in Macedonia who was trying to find a way to communicate to the police what it meant to face police violence. She could never walk into a police station, but she could express through video
06:21
what it felt like when the police would pick her up because she was under suspicion of prostitution and then take and rape her in a back alley. And so we were working with this group to help them think how do you use video to put it in front of the police in a way that will confront them with the reality of what's occurring.
06:44
This is a still from a video we worked on with groups who are looking at extraordinary rendition in the US and the US record of taking people to jails in Morocco and other countries to be tortured. And increasingly we're working in the world of what I would call citizen-generated video.
07:06
This is a site called The Hub, which Witness launched, the first online media-sharing site for human rights. And the clip you see on it is an example of footage shot by Cambodian human rights activists using cell phones and Flip video cameras to record forced evictions.
07:26
A couple of key things that we've learnt through that work. And I want to use those as kind of a starting point to think about the growing range of video that is out there now. One is the idea that participation is key.
07:42
In human rights work, human rights work used to be a very professionalised kind of industry in a sense. And I call it an industry because in some ways it is. It has patterns of behaviour, ways in which you do things, patterns in which you document material. But the whole power of video, the whole power of social media is that it opens up the ability for the people
08:02
who are often closest to something to express it in very direct terms. It's hard if you live in a community in eastern Burma to think how to write a human rights report. It is easy to think what would happen if I wanted to ask my mother what it's like to suffer a human rights violation like having her village burnt down.
08:23
The second area which I think is really important to understand, and again thinking about the moment now where we're so focused on the idea that everyone is looking at human rights footage, that everyone cares about what happens, it's not true. In the vast majority of human rights situations, and I could tell you now that if we looked on YouTube
08:43
there would be footage of police brutality from East Timor, of prison violence in Brazil, of police brutality somewhere in the US. That's never going to get media attention, and nor are issues that are much more structural, like not having access to clean water. So a lot of the work we've done is really to think about audience,
09:04
and how to think about audience in a very specific way. So a lot of the work at WITNESS is about narrow casting. It's about how do I reach five or six people who will do something. To give an example of that, we were involved in a case where a group of indigenous people in Kenya
09:21
were trying to persuade the African Commission on Human and Peoples' Rights to recognise that their land had been illegally seized from them to make way for a game park in Kenya. And what they thought was, we can keep sending reports, and that's an important part of how we make our legal case, but we also need to show very visually what it looks like to not have access to water,
09:42
to see your cows dying literally on their feet, and to think what happens when you take that and place that in a place of power. So often the places of power in the human rights world are very distant from the realities on the ground. If any of you have spent time in Geneva in the human rights work there,
10:02
it's so distant from what it's like to be a villager in Burma, to be someone in eastern Kenya facing eviction. So how do you give that sense of the reality of what it's like to the people in that room? And that's where video comes in.
10:20
At the same time, if you give that sense of reality but no way for people to act, nothing happens. And I'll come back later to this idea of image overload. There's so many images out there that if you don't offer people ways to act, they start to glaze over. And I think it's starting to happen now, even as we look at images of really grotesque brutality coming from the Middle East,
10:41
that people are starting to say, wait a second, I just can't see another image of what could be Syria or Libya or Yemen or any of those countries that are in turmoil at the moment. The final element, and I'm going to loop back to this as well, is safety and security. Video is a very visible medium in the sense that it's hard to hide someone's identity in video.
11:03
If you think about text, you can anonymize someone. It can be John Smith said this. But you'll recognize this guy if you know him. You'll recognize him if you see him on YouTube, if you live nearby him, or you've seen him walking down the street. It's a lot harder to hide identity.
11:22
What I want to talk about in the main part of my presentation today are three elements. One is the expanding range of human rights video that's out there and give you a sense of kind of the universe. The second is to talk about human rights values and how they practically relate to a lot of this video that is being seen.
11:44
And the third is to suggest some things we need to do differently. So I don't want to leave you just with a tale of woe. I want to suggest what we could do differently if we want to see safer, more effective, more ethical human rights video, video that makes a difference.
12:03
This image from Egypt, from Tahrir Square, or the area around there from January 25th, I think is a good reminder of kind of the range of video that is out there. And what I think is happening at the moment is less that, is really about the expansion of range of ways people are using video in human rights work.
12:21
So what I started with is explaining some of the more traditional human rights video. It's being used as evidence. It's being used to lobby in parliaments and governments. It's being used on CNN, and as we might see here, I think that's probably an Al Jazeera cameraman, who's filmed with a big camera on the left. But it's also coming from citizen footage. So you can see there the guy with his digital still camera,
12:41
and of course the man with the cell phone. It's undoubtedly a moment of promise for the use of video in human rights work. And at the same time I want to think about some of the qualities of video that make it challenging for human rights work.
13:01
So if you think about online videos, some of the key qualities about it are that it's accessible, it's easy to create, it's spreadable, you can move it anywhere, it's malleable, you can shift it and change it, and it really depends on being circulated. It's all about someone sharing it, someone passing it on to someone else.
13:22
So of course this creates opportunities for things like transparency, participation and action, the powerful upsides of which we're seeing in somewhere like Egypt, or in Libya, or in user engagement in a space like Twitter from people like ourselves who might be in Europe or in the US who are far away from the situations on the ground.
13:41
At the same time these very same qualities raise significant issues about things like authenticity, factuality, the point of view, do we know who this comes from, and control, and also how these images transform into action when there's so many of them.
14:02
So as we think about this human rights visual media, as it's created, remixed, recirculated by many more people, both amateurs, so the guy on the right, maybe prosumer, the guy in the, well actually the guy on the right's probably the prosumer, the guy on the middle is the amateur, maybe in those terms, as well as professionals.
14:21
What are the ethical and practical issues that are there for activists, for technology developers, for technology professionals, for the people who circulate it, as well as the more traditional human rights activists? How do we increase the upside of this massive spread of video and reduce the downside?
14:43
Let me give a little bit of a kind of overview of some of the types of video that I see in my work across the parameters of human rights video. And I'm going to show a series of clips. They're taken from both commercial platforms like YouTube and Daily Motion, as well as some of the more niche social justice platforms.
15:02
And I hope they'll illustrate some of the parameters of what human rights video looks like online. So to start with, it's shot by both bystanders, witnesses, and perpetrators.
15:23
So that was forced evictions in France, filmed by a bystander. This is in one of many cases where the perpetrators themselves film it, Slovakian police force members forcing Roma boys to hit each other. It's also as much about individuals speaking out as it is about the graphic footage of violations.
15:47
That was from Brazil. And it's also often very simple acts. I love this video from Saudi Arabia of a woman driving. And I think this also indicates that it's not just, as I said, about the graphic violations. It's often about issues of social and economic rights or discrimination.
16:05
This video documents racial discrimination in Lebanon in a very simple way, as they try and enter a swimming pool. And some of it is also just very simple stories. This is a story shared on a niche site called Engage Media,
16:23
of a young woman preparing to go to Saudi Arabia from Indonesia. And the other key thing is that it's not always about global struggles. This is a video circulated in a particular city in the Philippines about labor rights in a hospital. So it's bystanders, it's perpetrators, it's witnesses, it's global, it's local.
16:42
And of course human rights video looks like a lot of the genres that are out there. So from Iran it's remix video, but remix with human rights footage. And as we've seen in places like Egypt, it's again a remix video here from Egypt that brings together some of the iconic footage.
17:32
And it also uses very simple formats. We'll all be familiar with the idea of the straight-to-camera video blog or video message.
17:42
And of course one of the most pivotal ones of those recently was this by Asmaa Mahfouz in Egypt, addressing the Egyptian public to ask why they were not coming out on the streets.
18:17
And of course there's also the graphic footage. This would be in the domestic context, just filming an event around your house
18:24
or with your family in the context of somewhere like Bahrain. It is of course the shooting of peaceful protesters, just captured with a simple twist of the cellphone. Or in Libya, as in this clip, leaning out the window to film militiamen walking down the street banging on doors,
18:42
trying to work out what is happening in your neighbourhood. And I'll come back later to some of the questions that circulate around all three of these types of forms of genres that we see in other places on the line, but when they're reapplied in human rights work.
19:00
So human rights values, human rights practicalities. I want to focus on three central concerns of human rights, both in value terms but also practicalities. One is a concern about safety, security, dignity and consent, and I'll talk more about that. The second is a concern about authentication and evidence.
19:22
You want things to be provable if you're trying to make human rights claims. You have to have material that claims to show and shows the truth. And finally, you want to think about how that footage turns into action. Sixty-two years ago, the Universal Declaration of Human Rights was drawn up.
19:46
Eleanor Roosevelt, I always love this statue of her, she looks very stern, sort of like the godmother of human rights, worked to create the Universal Declaration of Human Rights. And really at the core of that is one central belief, which is in the dignity and the integrity of every person.
20:06
And I think it's a really good place to start if we want to think about human rights values and that kind of spreadable, malleable media we're talking about. Those values of spreadability, accessibility, malleability, fluidity that we prize so much in video work.
20:21
How do they align with a concern really for the dignity and integrity of individuals? And particularly in human rights work, individuals who are in situations of trauma and of pain and of threat. How do we think about online video when we're thinking about victims and survivors, when we're trying to avoid them being re-victimized, re-traumatized,
20:44
either psychologically but possibly also physically? And how do we do that in a universe where it's the police who are filming as well as the bystanders? It's the people who film the Roma boys who are part of that ecosystem. It's not just the people who see themselves as purposeful witnesses, as activists.
21:05
I think often when we think about online culture we forget about some of these dignity and privacy issues. We have this very strong sort of democratic ecstasy about it. And I think clearly some of the biggest proponents of that are the people who have the most to benefit from it.
21:24
If we look of course to the founders and the owners of someone like Facebook, someone like Mark Zuckerberg will say there is no privacy, people don't have an expectation of privacy, of the preservation of their sense of dignity or their individuality. But if you look at kind of the range of situations of human rights situations,
21:46
the concept of privacy and the needs to really think about what happens with your image certainly hasn't gone away. If you look at what happened in Burma in 2007 where individuals were identified on the basis of video that circulated on YouTube,
22:04
the risks of video become all too apparent. Hopefully we won't see this in the countries in the Middle East but it's very early to tell. It certainly happened in Iran. And we're in a world where you have to make an assumption that video will circulate. So when this woman was filmed in Eastern Burma in 2007, shortly after the Saffron Revolution,
22:28
one of the things that would have been important to tell her is this image could be seen by anyone. And that's a challenging concept to put out in a human rights universe and to people who are facing human rights violations
22:41
because what you have to tell this woman, Noor Pawpaw, is that if her video is successful and is seen a million times on YouTube, it is also going to be seen by the perpetrators of that violation. And they're probably going to come and try to hunt her down for what she said. This makes it very complicated to think about what consent is when we start to look at these images.
23:04
A lot of the way in which human rights workers think about consent, and this is quite similar in some ways to people like documentary filmmakers, the people who have traditionally had to engage with these issues, is they think of it in terms of one-to-one agreements.
23:20
So I sit down with you and I say, I want to film you and I'm going to use it for my documentary project, or if I work at a human rights organisation I say, we're going to use this to document and to prove a pattern of abuses. But that doesn't work in a climate of ubiquitous video, in an environment where anyone can film. So one of the questions I want to put out to you, and I'm going to propose some ideas and solutions to which we could also use at the end,
23:48
is how do we shift our ideas of what consent is in a world where everyone has access to video and can film this kind of material, which carries these kinds of risks with it. A couple of ideas that we might want to think about is we're going to have to shift away from ideas of one-to-one consent.
24:06
We have to shift from an idea that it's about me agreeing with you that I'm filming you, and that you've agreed to a certain use to an ethics of consent that's about how we embed it in the image, and how we know that an image is going to be circulated.
24:20
So when this image is filmed, as I said, Noor Pawpaw should know that this image will be seen or could be seen by anyone. And so she should make decisions about whether she wants to be filmed in that moment. The second idea, which I think is really important, is we need to embed ideas of consent that have been developed in the human rights world,
24:42
that have come often from medical practice. We have to reinvent them for the world of YouTube. How do we think about that, and how do we embed it in the way that people create media for that? And one of the things I'll share later on is how Witness has been thinking about how you reinvent the idea of the camera to better emphasize the idea of consent when you're filming.
25:02
What is the thing you do in order to make the capturing of someone's understanding of why they're being filmed, and the sense that it will be shared, part of the way you film with a cell phone. Let me move on to another set of issues that come up that are really around consent and dignity and privacy.
25:24
There's two key parts of the Universal Declaration that are really important to anyone who cares about the idea of communication in media. One is the right that everyone has the right to freedom of expression. It's a core value of democracy, of human rights. The second is the right to privacy.
25:43
And these two rights come together when we think about anonymity. It's really important if you want to have the right to free expression to have the ability to be anonymous. And this gets complicated when you're on video. Most of the conversations about anonymity think about how we stay anonymous when we're dealing with data.
26:03
So there have been human rights controversies about corporations handing over data about email accounts. Or about access to the internet, and how you use circumvention systems, for example, to hide your IP address. But if more and more video, more and more communication, is moving to visual material,
26:26
we're going to start just pulling out our cameras and filming people rather than writing down what they say, how do we think about this idea of visual anonymity and how we better enhance that? And from a human rights perspective, the reason you do that is because it's super important that people have the right to be anonymous
26:41
in order to express, to use their rights to freedom of expression. Not everyone wants to say their name and show their face when they speak out. In some ways, the situation in Egypt and Libya and the Middle East at the moment is a little atypical, again, because there we're seeing people in protest settings, we're seeing a mass protest where everyone wants to speak out and have their face seen.
27:06
But if you're a gay activist in Uganda or that sex worker in Macedonia or the Burmese villager in Eastern Burma, you might want to speak out, you might want to make sure that your voice is expressed
27:20
in the dominant medium of the day, which is the visual medium, but you also might want to be anonymous. So I want to put out as a challenge here, again, as we try and think about human rights values meeting this world of ubiquitous video, how do we start to create better ways for people to be visual yet anonymous? I know this sounds a little strange, it seems contradictory, but it's actually where
27:44
we need to go if we want to guarantee that right to freedom of expression. So two ideas there. Consent, how do we reinvent consent? How do we think about this idea that it's no longer just about that relationship one to one between two people because that image can go anywhere? And the second is how do we rethink anonymity when it's about video?
28:05
The second area, the big area that comes up in human rights work is around authenticity. And traditionally this has been done in the human rights world through extensive interviews, you talk to lots of people, you gather data,
28:22
but increasingly we're having to grapple with how you authenticate in a world where a lot of the evidence is coming from video. Now a lot of that is still, in some ways, like the Rodney King event. So there are pieces of video that appear, shot by perpetrators or by witnesses, where there's only a single piece of evidence.
28:43
So for example this came out of Sri Lanka in late 2009, apparently showed Sri Lankan military executing Tamils during the final days perhaps of the offensive in northern Sri Lanka in 2009. And so the way people authenticated that video was very similar to how you'd authenticate a piece of video in Rodney King.
29:03
They looked at the gunshots, they worked out if the recoil of the people shooting was appropriate to an actual firing, they looked at how the people who'd been shot reacted to the shot, the blood trail, they looked for the forensic layers in the video to see whether there'd been alterations and that kind of thing.
29:26
A similar example from West Papua earlier this year where the Indonesian military were torturing a Papuan independence activist or a supposed independence activist.
29:41
And what you see here is just again, and a lot of the time this video is coming from perpetrators, was the video that the perpetrators themselves shot torturing the man. And in fact this video is a good example also of how when it was circulated, the group that circulated it chose not to show the most graphic footage.
30:02
They said actually for the general public you don't need to see the most graphic material here. But in that video you looked at it, you could see what the military insignia were of the Indonesian military, you could prove what it was. And even in the most traditional spaces this is happening. This clip was shown at the opening of
30:21
the trial of the first person at the International Criminal Court, a warlord in the Congo called Thomas Lubanga. You see him here in the picture. And in fact in the opening statement of this trial, video evidence was used to talk about what was happening. And all around him are the soldiers. This is Bosco Ntaganda in purple. You see behind him there is no house; the camp is isolated from the village.
30:50
You start to see soldiers. These two are girls; these two are under 16. The prosecutor will prove that. And see, this now is the bar. Start to see the others. Start to see who are under 16. This one, this one, way under 16, way below.
31:20
This one, this one, this one. We act like this because it's behind.
31:31
So even in these most traditional spaces video is starting to appear. But again it's a little bit like the Rodney King video. It's just a single piece of video. And of course there are also videos that confuse when they're used in this way.
31:45
So this video I saw, apparently about abuses by the forces of Ouattara, the president-elect in Ivory Coast. This circulated on YouTube about a week or 10 days ago. It claimed to show the burning of civilians by the forces of Ouattara.
32:08
And I was reminded that one of my colleagues, about two years before, when we were working on this multimedia site called The Hub, had had a very problematic piece of footage that also showed burnings. And I looked back at that footage and I noticed, okay, two years ago they were saying this was Kenya.
32:24
So even when we're looking at these pieces of footage as evidence, there's a kind of recycling you've got to watch out for. Here's a horrible graphic image; let's pretend this is Ivory Coast, let's pretend this is Kenya. And it's not even clear that it was Kenya to begin with.
32:41
What I think is more interesting actually now, and this is the really evolving part of what's happening, is what we're starting to see with multiple video sources. So how do we move towards authentication that comes from not only expert analysis but the fact that there are multiple viewpoints on something. And this is really the power of citizen video.
33:01
So up on the screen is actually another shot that shows the same incident from Bahrain that I showed earlier, where we saw a couple of moments of the shooting from a cell phone that showed the aftermath of a shooting of five or six peaceful protesters who were walking along this road, which I believe is in the center of the city.
33:21
So you'll see on this video, and I think this is interesting as well, is the uploader has annotated it to say there's another video you can look at. So this process of kind of like authentication, of proving this as evidence, is taking place even within a YouTube native environment. Here's an annotation to the next video that shows you the other angle. So if we look at this video, we'll see from a different angle. You saw before from this angle.
33:49
And now we're looking from behind as they walk towards it.
34:02
This kind of approach works well for the more public settings. I think the question that is in our minds actually at the moment is how you think about this in settings that are less publicly orientated, when it's not a public event, when there aren't multiple cameras. But certainly in a setting like this, this is proving to be a powerful way to authenticate what's happening.
34:25
And increasingly it'll probably get automated. Patrick Meier, who I believe will come and talk afterwards about Ushahidi, and I have both talked about how it's quite likely that this process will eventually be automatable, so that you'll see multiple video viewpoints and be able to amalgamate them into a single 360-degree view around something.
34:45
One of the things that's been really exciting about the last few months from the perspective of understanding how video can be used for human rights has been the use of video in aggregation and crowdsourcing. This addresses two questions you often get about human rights video and video in general.
35:03
One is how do we make sense of a mass of video? It's an interesting thing; there's a phrase in English, people often talk about being bombarded by images, the idea that we can't deal with more than one image at a time. It's funny, no one ever says you're bombarded by words, but we're often bombarded by images.
35:23
And I think what we've been seeing in the last few months is a real attempt to try and work out how you make sense of multiple images emerging from human rights situations. Now sometimes that's in the form of individual curators. The clip from Iran I showed earlier called The Scream Remix, which had the music over the human rights footage,
35:48
was actually created or uploaded by a man called Onli Mehdi, who's an Iranian activist who lives in New Jersey. He's in his 20s, he doesn't live in Iran. But he has become a voice that helps authenticate and give credibility to pieces of human rights video.
36:05
And I put up this shot because it's actually from later in that same video that we saw a clip of earlier, the strong, beat-driven remix video. And it's interesting because what Onli Mehdi has done in this video is he's seen this particular graphic shot, this is a pile of bodies on the floor.
36:20
And what he's done, and you may not be able to see it exactly, is he's put "Not Iran" on the video. So in the same way the other people annotated the video to say, you can see another viewpoint here, he annotated the video to say: actually this footage is not true, this is footage from another massacre. There are bad enough things happening in Iran; we don't need to have this.
36:41
So he's the type of individual curator who's starting to play a really powerful role in authenticating human rights footage. This is another reconstruction from the UK of the death of a man during the G20 protests, where an individual blogger put together all the different video perspectives and photo perspectives on how this man was potentially assaulted by the London police.
37:04
What has also been interesting in the last few months has been the growth of aggregation and curation for human rights. And I wanted to share some examples of the more group crowd-sourced approaches. So this is a crowd-sourced site that comes out of the Middle East, a way to aggregate the different voices and social media perspectives on what is happening.
37:23
Here's an example of the crowd-sourced page on Libya from earlier, and one from Azerbaijan, from 11th March. This is a tool called Storify. It's happening on YouTube as well.
37:41
Though interestingly, I think on YouTube, the approach doesn't quite work. If you aggregate video as a playlist, I don't think it works. It's an interesting challenge when you think in video format, the playlist is not the right way to really watch a set of human rights footage. If you've ever tried to watch a playlist of human rights video, it's a strange experience.
38:02
It's a linear experience when in fact you want to be jumping around to find the right pieces of evidence in a collection of video. And then of course, from somewhere like Syria, you have this site which I've been using a lot called Now Syria, which collects together not only video but social media sources, showing different perspectives on what is happening on the human rights front.
38:28
So I think this is a really promising area in terms of this second concern, around authentication and verification. One note, of course, is that this curatorial voice doesn't always serve a positive purpose.
38:43
So from Iran in 2009: this is the Iranian government curating video, in the sense of pulling the faces of activists out of the videos that were uploaded to YouTube, putting them up on a public website, and crowdsourcing their identification.
39:01
But in general, I think there's something tremendously powerful happening now in this area of authentication and crowdsourcing and curation of video. And I know Patrick Meier, who's going to talk about Ushahidi next, will also talk more about that idea of crowdsourcing and data and how it relates not only to human rights footage but to other types of material.
39:23
Clearly the other element that's really been thrown into the mix in the last few months has been the role of corporations. This is a sort of iconic image that's been floating around over the last few months, originally tweeted by an NBC correspondent, the Thank You Facebook image.
39:45
However, the relationship of video and activism to spaces like Facebook and YouTube is complicated, to say the least. The reason, of course, is the underlying challenge for anyone doing human rights work on Facebook or YouTube, or any commercial video sharing site: you are a tiny minority of the people there.
40:06
You are working with a particular set of circumstances, a particular set of constraints, in a space that, although it looks like a public space, isn't one. Earlier this week, I think Jillian York spoke about this idea of the quasi-public space, and Ethan Zuckerman, the internet researcher and commentator,
40:24
has said, you know, when you're an activist in somewhere like Facebook or YouTube, it's a little bit like holding a rally in a shopping mall. It looks like a public space, but it's not. And this gets problematic when you start to think about some of these values I'm mentioning around dignity and consent and privacy,
40:43
which are not necessarily the values that are prioritized on Facebook, certainly, and perhaps not on some of these other video sharing sites. So what comes next? I've presented a series of, I think, human rights questions and emerging opportunities.
41:03
They're around consent, they're around this idea of visual anonymity, they're around how we authenticate and prove things better. WITNESS is involved in a project which is trying to think on a number of levels about this. Next month, we have a report coming out with an initiative called Cameras Everywhere.
41:24
It's a report written alongside Sameer Padania, who was the manager of the Hub project that I mentioned earlier, this media sharing site for human rights video. And it looks to try to think: what is the right way to address some of these challenges around privacy, consent, and authentication, in this environment where so many of the spaces that we're using are commercial ones?
41:49
And I think the answer is actually a combination of things. Firstly, it's actually important that we try and think about how you engage with those commercial providers to improve the way the sites work for some of these human rights purposes.
42:04
And we have a series of ideas that we're recommending and proposing and encouraging people to sign on to. One of them is how to think about visual privacy, how to build that better into upload processes. So if you tag something as human rights, should something nudge you to say, would you like to anonymize this person?
42:26
A second question is: if you look at a human rights user online, how do you better protect their anonymity?
42:48
If you look at something like Facebook, how do you have more anonymous accounts for people who are uploading footage or using social media in a human rights context? And then, how do you think about curating that footage in those human rights spaces?
43:01
How do you make sure that human rights footage moves from being a niche, often isolated except when it surges up in moments of popular protest, into the mainstream? At the same time, we also know that human rights footage will continue to be a very small part of what you find on YouTube and Facebook.
43:22
So here, actually, are the three recommendations we've been working around: changing usage policies, incorporating tools and technologies, and building dedicated spaces. At the same time, we know that the commercial spaces will never address some of these questions.
43:42
So I wanted to highlight one project we're working on, and I'm putting this out really as an invitation for your potential involvement: to think about what autonomous tools we need to develop to address some of these questions. This is a project we're working on with a group called the Guardian Project, who are developing a secure phone for human rights.
44:05
We've been thinking with them, what would it look like to have a secure human rights friendly smartphone camera app for human rights? What would it mean if you tried to think about some of these issues of consent and privacy and authentication right from the start?
44:22
It would probably mean that you could anonymize as you film. Anyone here who's a video editor and has had to anonymize footage knows it usually means going into After Effects when you get home after you've filmed a piece of video. You put the layer over someone's face. Why not do that right as you're filming, and cut that element out of the process?
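As a thought experiment, anonymizing at capture could be as simple as pixelating whatever region a face detector reports, frame by frame, before the frame is ever written to storage. This is a minimal sketch in plain Python, not the Guardian Project's implementation; the frame is a grid of grayscale values, and the bounding box is assumed to come from an on-device face detector.

```python
def pixelate_region(frame, box, block=8):
    """Redact a rectangular region of a frame by replacing each
    block x block tile with its average value (simple pixelation).

    `frame` is a list of rows of grayscale pixel values; `box` is
    (x, y, w, h) in the form a face detector would report it.
    Mutates and returns the frame, so it can run per-frame at capture.
    """
    x, y, w, h = box
    for ty in range(y, y + h, block):
        for tx in range(x, x + w, block):
            # Average the tile, clipped to the box edges.
            tile = [frame[r][c]
                    for r in range(ty, min(ty + block, y + h))
                    for c in range(tx, min(tx + block, x + w))]
            avg = sum(tile) // len(tile)
            # Overwrite the tile with its average, destroying detail.
            for r in range(ty, min(ty + block, y + h)):
                for c in range(tx, min(tx + block, x + w)):
                    frame[r][c] = avg
    return frame
```

Because the original pixel values are overwritten in place before saving, the identifying detail never exists in the stored file, which is the point of doing it at the moment of creation rather than in After Effects afterwards.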
44:43
And if we're concerned about issues like consent, why not have that built into the filming process? So as you film someone, you could tap the screen to indicate whether they had consented, and to indicate what they said the image could be used for, and have that embedded in the metadata of the image, so it's not circulating in a different space.
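That tap-to-record-consent idea could look something like the sketch below: build a small consent record at capture time and attach it to the clip's metadata so the information travels with the image. All field names here are illustrative assumptions, not an existing standard; the digest simply lets a reviewer notice if the record was altered later.

```python
import hashlib
import json
import time


def consent_record(subject_id, purposes, consented, timestamp=None):
    """Build a consent record at capture time, to be embedded in a
    clip's metadata so it travels with the image.

    `purposes` lists what the subject agreed the footage may be used
    for. Field names are illustrative, not a standard.
    """
    rec = {
        "subject": subject_id,
        "consented": bool(consented),
        "allowed_uses": sorted(purposes),
        "recorded_at": timestamp if timestamp is not None else time.time(),
    }
    # Hash the canonical (sorted-key) JSON of the record so later
    # tampering with any field changes the digest.
    payload = {k: rec[k] for k in
               ("subject", "consented", "allowed_uses", "recorded_at")}
    rec["digest"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return rec
```

In a real camera app the record would be written into the video container or an EXIF-style field at save time, so that wherever the clip circulates, the consent terms circulate with it.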
45:00
And again, this goes back to this idea that images are going to circulate, so it's better that you embed the data in them, give the ability to anonymize at the moment of creation if you can. Other aspects of this project include the ability to securely wipe all the data, and also to gather the types of data you need to authenticate something.
45:23
So what is the material you can gather from a smartphone that tells you location, direction, what other cameras are nearby, so that you can authenticate and use smartphone video in a more effective way for evidence? So this is a way in which we're trying to think about it, knowing that there are some things you can hope that a
45:43
space like YouTube or Facebook will do, but we also need to think about what an autonomous social media and technology developer community does. And of course, neither of these exist in isolation from continuing to think about how you engage with user communities. The best way to help people create effective ethical human rights footage is to engage directly with
46:04
them to help them know how to do that, and that's something that within my own organizational work we do directly, thinking about how you translate some of these ideas into 30-second videos. How do you explain consent in 30 seconds? How do you explain how to film for evidence in a minute? All of those things that speak to this vastly expanding universe of people who are creating video.
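The evidence-gathering side of the camera app described earlier, recording location, compass direction, time, and nearby cameras, could be sketched as a capture manifest that binds those sensor readings to a hash of the frame itself, so a clip can later be cross-checked against other footage of the same event. The exact fields are assumptions for illustration, not the Guardian Project's actual format.

```python
import hashlib


def capture_manifest(frame_bytes, lat, lon, heading_deg,
                     captured_at, nearby_devices=()):
    """Bundle the sensor readings a smartphone can supply at capture
    time with a hash of the frame, for later authentication.

    The hash ties the readings to this exact frame; the readings let
    a reviewer check consistency with other footage of the event.
    """
    return {
        "frame_sha256": hashlib.sha256(frame_bytes).hexdigest(),
        "lat": lat,
        "lon": lon,
        "heading_deg": heading_deg % 360,  # normalize compass reading
        "captured_at": captured_at,        # device clock at capture
        "nearby": list(nearby_devices),    # IDs of other cameras seen
    }
```

Two manifests claiming the same place and time but facing different headings are exactly the multiple-viewpoint situation described above, and could in principle be matched automatically.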
46:28
Here are a couple of other screenshots of this camera app, showing how it identifies where people are. And as I said, this is a project we're right in the middle of working on. It lives in public. It's an open
46:40
source project. If you're interested, you can learn more about it at this web link, the github.com Guardian Project slash secure smart cam. So I want to end just by looping back to that first image. This was an image shot by a Libyan human rights activist and journalist called Mohammed Nabbous.
47:03
It was shot almost exactly 20 years after Rodney King; I looked for an image like that to think about where we're at. Mohammed was killed, probably by a sniper, in late March, but I think it's inspiring to see how people everywhere are now taking up video.
47:24
And that's obviously a whole continuing universe of how we think about liveness and human rights video and using it for change. And I think the challenge now is how do we make sure that they can do that embedded in human rights values safely, ethically, and of course most importantly, effectively, so that the video they create creates change. Thank you.