The Automated Public Sphere
Formal Metadata

Title: The Automated Public Sphere
Title of Series: re:publica 2017 (104 / 234)
Number of Parts: 234
License: CC Attribution - ShareAlike 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/33041 (DOI)
Language: English
Transcript: English (auto-generated)
00:18
Thank you so much Geraldine, and thank you to the media convention.
00:22
It's fantastic to be here. And I'm just going to dive right in because I have a lot to cover today. And just to sort of put my talk into segments, the way I'm going to start is by discussing really troubling applications of big data, algorithms, and artificial intelligence
00:42
to the public sphere, to politics and culture. After going through some of these problems, I'm next going to talk about seven regulatory approaches that could be deployed in order to address the problems caused. And finally I'm going to reflect on some situations where perhaps regulation is too late or counterproductive.
01:01
So I think with that type of presentation we'll have a balance and we'll try to understand what can government do, can it do anything to respond to the evolving media landscape that in many areas is very troubling. I want to start with sort of four ways in which recent developments in the public sphere over say the past five years or so
01:22
have reversed the common utopian story about the nature of media online. And I would summarize this by saying that we've moved from a wealth of networks to a black box society. How did that happen? Well it first happened because we were told that the ability of almost anyone to set up a website,
01:43
to say their own piece online would end big media domination and would empower everyone. But what we've also found is that it's enabled a great deal of fragmentation and extremism. Secondly we were told that the enabling of anonymity would give people extraordinary courage
02:01
and freedom to speak their mind. But what we're also seeing is that the most powerful plutocratic forces in societies can set up shell corporations. We saw that in the Panama Papers. They can set up other entities that essentially allow their influence to go unchecked and nearly unmonitored because it is so secretly funded and accomplished.
02:25
A third example of the utopian version of the public sphere via the internet was the idea that all would participate, all would have a right to speak. But we've seen the flip side of that is hashtag spam, where yes, if everyone has the right to speak, then you certainly also have the right to, say,
02:41
flood the hashtag Black Lives Matter with white supremacist propaganda. And to the extent that this is permitted, it is an almost immediate effort to strangle in the cradle the emancipatory potentials of the public sphere that we had all hoped and dreamed of ten years ago or so. And perhaps to make it even more topical: in the last week or so,
03:02
our original conception of WikiLeaks was this wonderful tool to discredit the surveillance state. Now it seems as though it's sort of really bent on discrediting anti-fascist candidates in elections. And so when we see this type of transformation we have to worry. And I'm going to go into some particular examples of this.
03:22
The proliferation of Twitter bots I think is not merely a reflection of the technical evolution that people like Tim Hwang were writing about years ago. It's also a reflection of a commitment to what I call free expression fundamentalism. Where the idea is that, say, if anybody can code up a program that wants to speak or tweet thousands of times online,
03:42
that that is a natural extension of their right to speak. I think we really have to combat that because we can't just allow say the romantic appeal of Isaac Asimov's talking robots to influence our idea of how software and automated methods of speaking can influence the public sphere because
04:01
I can never keep up with a bot that can tweet a thousand times an hour, and neither can anyone here. And the logical endpoint of such a completely unregulated and unmediated public sphere is one where those who are the wealthiest, with the best coders, are able to drown out everyone else.
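To make that asymmetry concrete, here is a minimal sketch in Python. The account names, posting rates, and threshold are purely illustrative assumptions, not any real platform's API; it shows how one scripted account comes to dominate an hour of messages, and how even a crude rate heuristic could flag such an account for an "automated" label.

```python
# Minimal sketch: one scripted account out-shouts every human poster, and a
# crude rate heuristic could flag it for an "automated" label. Account names
# and thresholds are illustrative assumptions, not a real platform API.
from collections import Counter

HUMAN_MAX_POSTS_PER_HOUR = 30   # generous assumption for a fast human typist
BOT_POSTS_PER_HOUR = 1000       # the rate mentioned in the talk

def flag_likely_automated(rates, threshold=HUMAN_MAX_POSTS_PER_HOUR):
    """Return accounts whose hourly posting rate exceeds a plausible human ceiling."""
    return [acct for acct, rate in rates.items() if rate > threshold]

hourly_posts = Counter(human_activist=12, newsroom=25, spam_bot=BOT_POSTS_PER_HOUR)
total = sum(hourly_posts.values())
for acct, rate in hourly_posts.items():
    print(f"{acct}: {rate / total:.0%} of this hour's messages")
print("label as automated:", flag_likely_automated(hourly_posts))
```

Even with generous assumptions for the humans, the single bot accounts for roughly 96% of the hour's messages, which is the "drowning out" the talk describes.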
04:23
I'd also say that fake content on Facebook is a huge fundamental challenge to the idea of a marketplace of ideas ultimately enabling good, true speech to drive out the bad. In fact what we're seeing is that dark ads and personalization are directly targeting certain types of content to those who are most susceptible to it, and those who would dispute it
04:41
will never see this content because it's in the dark. It's one of the themes of my book, The Black Box Society: that no one will be able to see this and counteract the speech, because it's simply been so tightly targeted on such a personal level. And what's really scary, I believe, about this is that we have a problem of human subject experimentation
05:01
in many social media. We have constant efforts at A/B testing to see what works and what doesn't, and this can just as easily be a way of accelerating propaganda as it is a way of accelerating truth. Now a lot of sophisticated observers will ask you: why does this matter? Who believes the fake news? Who actually believes that Hillary Clinton had five people killed
05:22
during the 2016 election, etc.? Who would believe these other lies? Well, that does make some sense for a lot of the folks that run in highly sophisticated, highly educated circles. But we also have to worry a great deal about floating voters, low-information voters, voters that are marginally attached to the political process
05:42
being particularly susceptible to a lot of efforts at propaganda and online manipulation. And when we consider the possibility of low-information voters being the key ones who decide an election, and I think you can think of this in your own experience, I mean most people one knows, they know exactly what their political commitments are, but there is this sort of core of low-information voters
06:02
that could be really disproportionately influenced. I'd also say that you know the promise of Facebook was that of a unified, national, supranational even global public sphere where we could all talk and debate and that still is part say of the 6,000 word Zuckerberg manifesto with respect to Facebook.
06:21
But the problem is that a sort of radical egalitarianism as to source can lead to a total lack of educative effect or branding as to the validity of the content. So when you have a situation where automation homogenizes the presentation of information so that everything looks the same
06:40
something could be from the so-called Denver Guardian, a site that was just cooked up by a couple of people in a few days, and it could appear with the same level of authority as something, say, that was from a prize-winning newspaper investigative team. And this again, this idea of sort of debased egalitarianism,
07:01
a way of sort of making everything look the same is very troubling. I also think that one of the biggest problems here is that a lot of the people that run the biggest online platforms and intermediaries Google, Apple, Facebook, Amazon, Microsoft, etc. they come from the U.S. and the culture of the U.S. really has culminated
07:21
at least with respect to free expression, in something that I call First Amendment fundamentalism. The idea behind this is captured in a book called Courting the Abyss, where the strange value system is that the worse the speech is, the more misleading it is, the prouder we are to allow it, because it shows how radically committed we are
07:41
to let everybody say whatever they want. The culmination of this is in cases like U.S. v. Alvarez, the Stolen Valor case, where the Supreme Court says the government doesn't even have a right to make it illegal for people to lie about whether they had military honors. There's another case from the Washington State Supreme Court that essentially guarantees political candidates a right to lie.
08:00
It's really just that simple. We're just going to allow the candidates to lie. And when you have a public sphere that has this type of cultural, legal attitude toward free expression, of course you have viral stories that, you know, lie that there were three million illegal votes, or what have you. We also have to look at the material foundations,
08:22
not just the legal foundations, of this sort of automated public sphere. And I think these are really brilliantly investigated by Jodi Dean in her book Blog Theory, where she describes the circuits of drive and the profit that is made by simply outraging people, right? Rational opinion formation and deliberation is not a profit center.
08:42
Rapid sharing and engagement is. And that's one reason, for example, why even Dadaist interventions like Ben Grosser's effort to randomly automate your reactions to Facebook posts are so popular. Because people have a sense of this sort of out-of-control public sphere that really is very hard to understand, where it's going
09:01
or what one can do to even influence it in a positive way. Academic responses to this problem have been varied. So we have a situation where you have a lot of people trying to figure out: is there a role for regulators? Is there a role for consumer protection authorities? For media authorities? For other entities?
09:21
Unfortunately though, there's another aspect of U.S. American centrism that I think is overly influencing the internet debate which is a rhetorical commitment to something called one unified open internet or don't break the internet. And so often what happens is that when there are efforts at rational public interest regulation
09:40
of internet media, a lot of those at trade authorities from the U.S. and other entities say that if you allow different rules in, say, Germany or France or Argentina or Japan than U.S.-centered rules, you will break the internet. And this I think is something to be resisted, because we need culturally and nationally specific responses
10:02
to some of the problems that I've just gone through. The problem also is that we have an attitude from the U.S. Federal Communications Commission pioneered in the 1980s that television and now the internet under Ajit Pai, the current FCC chair is little more than a toaster with pictures. And if we trust the market
10:20
to give us the best toasters, we can trust it to give us the best internet, the best programming online. Of course even in the case of toasters that's not true. There are product safety commissions that actually try to ensure that your toaster won't explode in your face when you put bread in it. But nevertheless this is one of the problems in which we have much rhetoric from the U.S. saying:
10:41
just don't regulate, it will all work out in the end. But this is very disingenuous, because essentially deregulation is a lie. Once the state deregulates a given area, you essentially cede power to massive corporations to be the de facto regulators.
11:01
And in our case, especially the problems that we're discussing at this conference, this convention, Facebook and Google effectively are the regulators, but they're acting in regulatory ways without the normal modes of accountability, such as due process, public comment, and public responsiveness, that are the hallmarks of a legitimate regulatory authority.
11:21
And so as I go through seven different regulatory responses part of the inspiration for this is trying to figure out a way in which the de facto regulatory power of online entities could be translated into something that is more legitimate and more responsive to public ideals. This is in a nutshell
11:40
the seven items that I'll be going over over the next 15 minutes, which are: to label, monitor, and explain hate-driven search results; to ban certain particularly dangerous content; to audit the logs of data fed into algorithmic systems; to allow some limited outside annotations; to limit the predation possible by online intermediaries,
12:00
because I think a huge part of the story here is that as you see so much money, power, and attention being sucked up like a vortex, centripetally, into these online intermediaries, there is so much less to go to the standard media forces; the right to be forgotten, as one case study in this area of algorithmic accountability;
12:20
and finally a shout-out for media literacy. So one example here is labeling and monitoring biased search results, and an example that I'm putting here on the slide is one involving the rise of advocacy by the alt-right to promote a revisionist account of the Holocaust
12:42
and of history up to World War II. And what we see in this situation is that people searching on the term Hitler, on Holocaust, on the 1930s in Germany, etc., all of these types of searches are leading to extreme prominence for revisionist right-wing results
13:00
and/or far-right, Nazi results. And this was revealed in a piece by Carole Cadwalladr a few weeks ago; she did some empirical research, some anecdotal studies and stories. What's kind of ironic here is that we've actually moved backwards from where we were in 2004. I started writing about search engines and their arrangement of information
13:22
back in about 2003 or so, and when I was writing some of my first articles, like Rankings, Reductionism, and Responsibility, I found this example of Jew Watch, which was a Holocaust denial site that was in the top 10 results for the query Jew at the time. What Google did, since they were extremely worried by this and the Anti-Defamation League had complained to them,
13:41
was to put an asterisk next to it, and at the bottom of the page they said: we are very concerned about this result, we believe it's been manipulated, but just know that we can't really change it; we would like you to bear in mind the potentially illegitimate origins of this result. What Cadwalladr's reporting
14:02
revealed in the past year or so is that there was backsliding from that commitment by the company. And it's rather ironic, right, because if anything Google is ten times more powerful and richer than it was in 2004, but it backslid. Another problem that I think is really important, and that demands some response, is autocomplete bias.
14:21
So we have a problem where, for example, autocompletes about certain individuals are leading to clearly biased results. Now here's one where, because it's a public figure, I am less inclined to endorse any sort of intervention; but certainly when there are private figures who are having autocompletes going after them, that's clearly troubling.
14:40
And secondly, when we have whole racial groups having autocompletes that are derogatory, that are troubling, that are deriding them, that is something we have to address. And I'm really looking forward to this forthcoming book by Safiya Noble, who has explored in great detail exactly what is going on
15:01
in many of these algorithmic biases that show up in autocompletes, and has proposed some concrete ways in which large intermediaries can try to intervene to assure that there's not this kind of bias and influence. And certainly even governments that have anti-hate laws should seriously consider how far this could go, how far down the road we could go.
15:22
I say this in particular because one of the most troubling terrorist incidents in the United States over the past ten years or so was this man, Dylann Roof, who went to a predominantly African American church in South Carolina and killed nine people. When asked why he did it, he said he had googled black on white crime
15:42
and saw all of these results that led him to say that there was an epidemic, a race war happening, where essentially a white genocide was possible because of all of the black-on-white crime. Of course these things are entirely spurious; they're all hate-driven. And so the question is how a society committed to pluralism,
16:00
to tolerance, and to diversity tries to ensure that what we have committed to formally, what we commit to in education, is showing up in our online media. Because, to be frank, that's much more important to many people than many of the standard, educated media outlets; that's how people really are connected to each other and connecting to the web.
16:22
Now another thing that I think is so important here is audit logs of the data fed into algorithmic systems. And I want to explain the significance of this with reference to a scandal that happened in the United States, this bizarre situation, a story called Pizzagate, where essentially there were rumors,
16:41
ostensibly based on some of the WikiLeaks releases, that there was a child sex ring at a popular pizzeria in Washington that was connected to the Democratic establishment. Again, entirely false. But it took some time to get to the bottom of it, because the people who spread this news were savvy about
17:01
how they would disseminate it. If you had better audit logs, we could better understand exactly where certain stories that are quite suspect originate; this one originated in a white supremacist tweet. This type of public regulatory understanding of the black box of these algorithmic systems could be crucial to rapid responses,
17:22
responses that happen before, say, in this example, a gunman goes to the site and starts firing shots, saying I just want to see what's really going on here. This is where audit logs are really helpful. And they're also really helpful because we hear a lot from Silicon Valley lobbyists and other
17:41
tech advocates that the current modes of ordering information on search engines and social networks are so complicated that law can never catch up. There was actually one machine learning expert quoted in a recent story who said that trying to explain how Google or other sites arrange information for you,
18:00
for me as a computer scientist to explain that to you as a layman, would be like trying to explain Shakespeare to a dog. And I think it's very important that when we hear these metaphors we push back immediately. First we say: you know, the story about Pizzagate, that's not Shakespeare. And then we secondly say
18:20
you cannot adopt the sort of condescending attitude that the computational sphere is itself automatically superior to modes of human self-interpretation and human understanding. But that is precisely the core normative value in most of the deregulatory efforts here. And one small pushback against
18:40
this is to say: even if we can't necessarily understand or explain in a humanly relatable way how machine learning works on thousands of variables with thousands of parameters, we can at least have logs of the data that are influencing certain results that we see on Facebook and Google, and we could try to isolate and find suspect sources of information.
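As a rough illustration of what such an audit log could look like, here is a hypothetical sketch in Python; the field names, the AuditLog class, and the trace step are assumptions made for the example, not a description of any platform's actual logging.

```python
# Sketch of an append-only provenance log for shared stories, so that a
# suspect item can be traced back to where it entered the network. All field
# names and the trace function are illustrative assumptions, not a real system.
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ShareEvent:
    story_id: str
    account: str
    shared_from: str | None   # account it was propagated from; None = origin
    at: datetime

class AuditLog:
    def __init__(self) -> None:
        self._events: list[ShareEvent] = []   # append-only in spirit

    def record(self, event: ShareEvent) -> None:
        self._events.append(event)

    def trace_origin(self, story_id: str) -> ShareEvent | None:
        """Earliest recorded share of a story: where it entered the network."""
        shares = (e for e in self._events if e.story_id == story_id)
        return min(shares, key=lambda e: e.at, default=None)

log = AuditLog()
t0 = datetime(2016, 11, 1, tzinfo=timezone.utc)
log.record(ShareEvent("pizzeria-rumor", "origin_account", None, t0))
log.record(ShareEvent("pizzeria-rumor", "amplifier_1", "origin_account", t0.replace(day=2)))
print(log.trace_origin("pizzeria-rumor"))   # the earliest, suspect source
```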
19:01
Those suspect sources of information could, for example, be part of the annotations on certain posts. So if you recall the Jew Watch example I gave from the beginning, about how white supremacists were gaming the search engine to lead to antisemitic results: that could be a much faster process if we have data from audit logs that allow us to identify,
19:20
say, suspect efforts to manipulate news feeds, to manipulate search engine results, and then immediately, or very close to immediately, allow annotation of those results. And I think this is a very important step forward in how we regulate the economy,
19:40
because we have already committed to the idea that we need to have labeling of, say, food and drugs, so we understand how many calories and how much fat are in food. What we need to take as a second step in an information economy is this: we not only need information about the goods and services we label, we have to have information about the information we get.
20:02
We have to be able to understand that there might be certain suspect or troubling influences on the data we're getting, the data that helps us figure out what shows we're going to watch, what news we're going to trust, what feed we're going to follow, etc. And that's a premise of Mark Patterson's book on antitrust law and the new economy; it's an effort to sort of step
20:22
to a second level of awareness, and to break out of our self-imposed tutelage about black boxes, to break out, to really enlighten the public sphere in these scenarios. I also think that part of this is inevitably going to involve restoring human editors. And when I say restoring human editors, I don't mean doing this in the
20:42
classic Facebook style of creating a caste of untouchable journalists who are out in a trailer somewhere, who are just contract workers who can be immediately removed or vetoed by anyone who's an engineer in the company. We have to realize that this sort of caste system, very explicitly described in
21:02
Kate Losse's book about Facebook, is a reality about the way in which a lot of entities in Silicon Valley see the relationship between journalists and engineers. What we instead need is to really make sure that there's this robust and substantive commitment to respecting journalism as a profession, not merely a source of piecework, propaganda, or PR,
21:21
and to ensure that human editors can, for example, look at breaking news stories and just knock out ones that clearly have no basis in reality. Google's fact-checking initiative is one small step in this direction. They have partnered with entities like Snopes.com and factcheck.org. They're trying to ensure that
21:41
there are some rigid parameters to qualify those who serve as fact-checkers, to have some level of accountability and responsibility there. I do think there are some issues. One is sort of an interface problem. One problem here might be that even though Snopes.com is debunking the story that Pope Francis endorses
22:01
Donald Trump for President, we nevertheless have the headline "Pope endorses Donald Trump for President." We have to be sensitive to a world of, as Hartmut Rosa characterizes it, acceleration and social acceleration, and ensure that entities have the ability to respond to the possibility that
22:20
people may only be reading the headlines. So maybe even headlines have to be sort of altered to respond to the real activity of media reception by new audiences, fast audiences. Same with the story about Clinton campaign chairman involved in satanic spirit cooking. Again, false story. Let's perhaps not
22:41
require people to click to the second page. Let's have a better sense of what an accurate headline would look like. I also want to say that the groups involved first of all, I question are they being adequately supported by the large intermediaries? The second question I have is, have we really opened it up to all potential groups? Or do we have a very narrow concept
23:00
of what fact checking could entail? I think that this type of civil society initiative, trying to recognize its quasi-governmental nature, trying to recognize that there are questions of legitimacy, of representation involved, is very important, and trying to be open about that, right? And I think that even in the Google response to the right to be forgotten decision, there is not quite enough pluralism
23:21
in terms of how they established a board to try to respond to that. There could be more. In going further, what I want to say is that we have to, when I talk about funding these other entities, part of that funding probably has to come from intermediaries themselves. And we had a proposal back in 2006 or so, by Harvard professor
23:40
Terry Fisher, to assure that content was getting some share of the profits being made from the growing internet connections of people around the world, and he actually had a proposal that would allow lots of access to content after having a tax
24:01
on internet connections. Now, of course, I think that is far too regressive a tax. I would make it a much more progressive tax. I think that is more than the vision of someone like Jaron Lanier, but he also has some other ideas about how to make sure that content is actually compensated. But part of the idea here is that we have to recognize that
24:20
as more and more revenue goes to the intermediary, less and less to the source that actually is enabling the intermediary to be something of value, we have to respond to that. Another example I'll give, number six of my examples is the right to be forgotten, and I think that the example here is one where I know it's controversial, I know there are certainly concerns in the press about potential
24:40
censorship or collateral censorship would be a more accurate term, but I think that when we look at the demands of individuals, say, and when you have a woman who says, whenever people search my name, the first result they see is the fact that my husband was murdered 20 years ago. I don't want that to be the first result everyone sees about my name
25:01
for the rest of my life, into perpetuity. It seems that a humane, compassionate response to that really is for the intermediary to help that person, say, obscure the story, you know, at least when it's a search on her name, right? It reminds me a lot of a credit report or a dossier or a private investigator's report on somebody. They really need the ability to respond to that.
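One way to picture how such name-scoped obscuring could be operationalized, loosely in the spirit of how the Google Spain ruling has been applied to searches on a person's name, is sketched below; the sample data and the exact-match rule are simplifying assumptions for illustration only.

```python
# Sketch of name-scoped delisting: the page stays in the general index but is
# suppressed only for queries on the delisted person's name. The sample data
# and the exact-match rule are simplifying assumptions, not any real system.
delistings = {
    "anna example": {"https://news.example/1997-murder-story"},  # hypothetical
}

def filter_results(query: str, results: list) -> list:
    suppressed = delistings.get(query.strip().lower(), set())
    return [url for url in results if url not in suppressed]

results = ["https://news.example/1997-murder-story", "https://annas-bakery.example"]
print(filter_results("Anna Example", results))      # story suppressed on her name
print(filter_results("1997 murder case", results))  # other queries unaffected
```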
25:21
And this, I think, is a first step toward assuring that we are not just sort of drifting into an artificially intelligent future, but rather into a wiser future, where there's some artificial wisdom that's informed by a compassionate response and a requirement under law that intermediaries be
25:40
somewhat responsive to individual concerns. Can it be abused? Of course. But the answer to this abuse, I think, is better, more legitimate operationalization rather than ending the right in itself. And I think that this is part of, you know, it fits in together with some of the ideas that I've already been talking about with respect to
26:00
obscuring certain content, banning certain content, because it really is quite possible that we are on the cusp of a social media genocide. You know, if you have the ability of individuals to spread hatred, to spread damaging stereotypes or fake news about, say, a minority group in a community, we could be very close to a
26:21
scenario where that group in itself could have accelerated hatred directed against it. And we need to have mechanisms in place to try to at least retard that, or slow down the acceleration of damaging content. A final example that I would give, and this steps back from that sort of disaster scenario I was describing on the earlier slide, is
26:40
educating users in media literacy. One of the concrete examples here that I think is most compelling is that there are lots of people that do not understand the difference between a sponsored story on Facebook and organic content. They don't understand the difference between sponsored ads on Google and real content or content that's ostensibly organic. And just
27:02
have these basic levels of media literacy taught maybe in fourth, fifth, or sixth grade, so that individuals know what they are being shown because someone paid for it, and what is arising out of a less commercial process, although certainly still a commercial process. This type of media literacy is really crucial, and it's also crucial even in the
27:22
smallest settings. So for example, there are lots of scandals now in the US about celebrities promoting not just the Fyre Festival, but promoting drugs online. And we have, like Kim Kardashian was actually warned by the Food and Drug Administration not to promote a morning sickness drug without at least acknowledging this is an ad,
27:41
right? Because people should have some level more of skepticism about advertising than about organic content. Ultimately, these seven ideas that I put out here, these seven modes of regulation, they are something that I think would enable us to move a bit closer to
28:01
a better modulated, more representative, more legitimate online public sphere. I'm not saying there's the solution to everything. Certainly there are many problems with the legacy media that I've not even touched on here to this point. I'm going to go through in the last third of the talk. But they at least give some sense that there can be
28:22
democratic will formation by a polity that is coherent and that wants to set certain rules of the road with respect to how information is disseminated and consumed online. But I also want to think about what happens in public spheres that are what I call post deliberative.
28:42
And I have on this slide two social theorists where I think there's almost a rivalry between their thought, over whose thought will most influence the politics of the 21st century: Jürgen Habermas and Carl Schmitt.
29:00
And when we think about Jürgen Habermas's work over a lifetime, he has put forward an idea of an ideal speech situation and a public sphere based on rational will formation, on the unforced force of the better argument. And many people say, well, this is a pretty idealistic notion, right?
29:22
And it's ironic when we think about the intellectual history of German social theory that he himself was almost acting to be a more realistic voice in response to the Frankfurt School. But I think it's very important to have these as regulative ideals for how we enter into political debate and dialogue. The alternative of course is the
29:41
crown jurist of the Third Reich, Carl Schmitt, and his concept of politics, which is that politics is essentially a battle between friend and enemy. And we have to try to win, and we really have nothing to learn from the other. The other is the enemy. And why is this social theory relevant?
30:00
Well, I think it's relevant because we should do a thought experiment about any reform that, say, is based on the concept of a filter bubble. So let's imagine that we're really concerned about Eli Pariser's concept of the filter bubble, and we ensure that everyone on their news feed, all people who are identified as left, see some right-wing stuff on their news feed,
30:22
and all people who are identified as right see some left wing stuff on their news feed. If everyone is committed to a relatively Habermasian ideal of deliberative democratic politics, maybe this is going to lead us to moderation, right? Maybe this will lead us to consensus to societal harmony. However,
30:40
if we can just imagine a society where, say, you have liberals on the left who are very open-minded, and you have those on the right who are completely closed-minded, then if we iterate over time, we could imagine a scenario where there are some folks on the left, liberals, who say: yeah, you know, these sorts of neoliberal policies with
31:02
respect to immigration, with respect to taxation, et cetera, maybe they make some sense, maybe I'll move a little bit closer to the right. And then you might have a right that says: we will never move. We are here. If you have this type of scenario, it's pretty easy to imagine what happens iteratively over time, right?
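That iteration can be made concrete with a toy simulation. The numbers and update rule below are illustrative assumptions, not an empirical claim; they simply show how the open-minded side's average position drifts toward the immovable side over repeated rounds.

```python
# Toy iteration of the thought experiment: opinions sit on a -1 (left) to
# +1 (right) axis. "Open" agents move a step toward the mean opinion their
# mixed feed shows them; "closed" agents never move. All parameters here are
# illustrative assumptions, not an empirical model.
def iterate(open_agents, closed_agents, openness=0.2, rounds=50):
    opinions = list(open_agents)
    for _ in range(rounds):
        mean = (sum(opinions) + sum(closed_agents)) / (len(opinions) + len(closed_agents))
        # only the open-minded side updates toward what the shared feed shows
        opinions = [x + openness * (mean - x) for x in opinions]
    return sum(opinions) / len(opinions)

left = [-0.8, -0.6, -0.5, -0.7]   # open-minded, starts well left of center
right = [0.8, 0.8, 0.9, 0.7]      # closed-minded, never updates
print(f"open side drifts from {sum(left) / len(left):+.2f} to {iterate(left, right):+.2f}")
# under these assumptions the open side converges toward the immovable side
```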
31:20
And I don't think this is just a thought experiment, really. I think it's actually something that's coming somewhat close to reality in the United States at the present time. And this is where I want to draw a distinction between, say, being concerned about algorithms and being concerned about legacy media. In a lot of the United States, we get sort of media that is presented,
31:42
so, for example, in this slide, the jobless rate for four presidents is compared, and we're meant to assume that essentially Trump is one of the best of the four and Obama's the worst, because after three months, look, Trump's fixed all the problems. It gets even worse, where you have, essentially, 100-day job creation figures, and you have, say, Trump again
32:01
looking like the best and the others not. If this is the media diet that is, say, being fed to one side of the political spectrum, and part of the filter bubble is a commitment to ensuring that more and more people see this media diet, that, to me, would probably be a perversion of the fairness doctrine, right? Or at least,
32:21
one would not necessarily want to sign up for it. So I think we really have to think hard in the future about exactly what is the public sphere we're dealing with. Another example that I think is relatively telling, at least from the U.S. context here, is if you look, for example, on free trade, on the left-hand side on this slide,
32:40
there was a huge Republican shift on free trade. So at the beginning, in 2009, you had relatively stable support for free trade. Once there was a candidate who was against it, Republicans were suddenly against it. It's even more dramatic if you look at the Syria strikes, where you have, among Democrats, a relatively stable opposition, roughly, to getting into more
33:02
Middle East adventures for the U.S., whereas you have, among the GOP, 22% supporting the Democratic president, 86% supporting the Republican president. So again, the problem here is one of, if your model of public sphere regulation is one that presumes rational deliberation among large portions of the polity,
33:22
and it seems as though there might be, say, portions of the polity that really don't want it, that are committed to a certain way of viewing the world, you've got to think very deeply about how that is either improving the public sphere or potentially not improving it, potentially deforming it and playing into the hands of one side or the other. My bottom line for this talk is that
33:42
the U.S. and free expression advocates really should not try to stop well-ordered societies from imposing reasonable consumer protection and competition regulation. I think that the seven items that I went over earlier could be quite helpful in assuring that places that have a relatively well-ordered
34:01
and deliberative political system can maintain that, and how they could be used to nip in the bud certain forms of extremism or ways of undermining the political common understanding, the imaginary of politics, to use Charles Taylor's term, that undergirds robust democracies.
34:21
But I'd also say that after a certain amount of time, in some public spheres, there's essentially a loss of this common understanding. And in those public spheres, it's relatively easy to see the regulatory bodies being captured, or regulation itself being put to ends that it was
34:41
never intended for. And so this sort of cultural sensitivity, to whether we are in a Habermasian or Schmittian public sphere, will be central to how committed we are to the seven types of reform of the automated public sphere that I discussed earlier. And so with that,
35:01
I just wanted to thank you so much and I really look forward to questions and dialogue from this group. Thank you. Thank you so much, Frank. We have left some time for questions
35:22
and I'm pretty sure there are going to be some. We have our ladies with the yellow t-shirts to come around and give the microphones to you, if you can give me and Frank a hand signal if you'd like to raise a question or comment. There's one over there.
35:48
Hey, first of all, thank you very much. I enjoyed it very much. And I have a question. You had this idea about information about information. Like labeling the information.
36:01
I was thinking, when you said that: what kind of thoughts do you have about how people would come to trust this label? Because there's so much mistrust, there are so many conspiracy theories that people really believe in, as you talked about. Like, what is your idea about establishing trust for these
36:22
kinds of senders, that label? Right. That's a fantastic question. And I think that the problem of achieving some sort of self-reflexivity, self-reflection, about the public sphere is a really critical one. I know that certainly the organization factcheck.org
36:42
was widely distrusted by a lot of people who felt that it had an extremely narrow approach to public financing, for example. And so that is something where enabling many groups to be part of the dialogue should be a first step toward allowing more legitimate
37:00
forms of labeling information online. I think this is going to be part of a consultative process that should be a cooperative endeavor, say, between governmental authorities and the intermediaries that sort of have the most effect here. I just think that it can't be left entirely up to the intermediaries themselves, because if it is,
37:20
then I certainly am distrusting. And I want to see more involvement by civil society. Now, part of that involvement could involve, say, looking at the past record and applications from the groups, setting up a commission to go through the applications to figure out who has done the most work in this area, the most credible work, and to allow
37:40
challenges. So, for example, in the administrative rulemaking process, you have comment periods, and then sometimes there's an allowance for comments on the comments. So that can be part of the issue as well. But I also have to acknowledge that you don't want to create a situation where it's simply the person that is most doggedly interested that wins, because that is sometimes what
38:01
you see in, like, Wikipedia edit wars. And so you have to be careful to make sure that you're not setting up a system where it's simply about whoever expends the most effort. There has to be a somewhat more closed process, but a process that is open at the beginning, but then also allows some contestation, and some renewal, and some circulation too. You don't want
38:21
the same organizations doing it over, say, the past five or ten years or something. But all of those things I think have been addressed by, say, intergovernmental advisory panels or other panels. And all that I'm really focusing on here is trying to ensure that these methods are recognized, that something that is like state action is
38:41
recognized in what intermediaries are doing. That they are acting like governments, and that they need to act like it. And one last example: my colleague Danielle Citron has an article called Technological Due Process, a path-breaking article, where she says that when they make these decisions, there should also be due process, a right of contestation, a right to be heard.
39:01
And that's where I think the worry about algorithms and artificial intelligence I think really bites, is because so often in tech firms, it's seen that anything that involves a human is a defect that indicates a failure of code. And what we need to do is to reverse that and say, in fact, humans are essential to governance, and involving them
39:20
is a sign of the legitimacy of the process, not of its deviation from a platonic ideal of an entirely automated public sphere. Thanks. Are there more questions? You're going to have to wave and make yourself very visible over there. Thanks. Right there.
39:40
Hello? Here? Over here? Sorry. Okay, then you first please, and you after that. Thank you. I wanted to ask about this little scenario that you sketched: kind of the hard right that has a clear position, and the beautifully liberal liberals that are so
40:00
open, and that would kind of shift the whole society to the right. Now I'm kind of part of, like, a mixed tribal marriage. My wife's family is conservative evangelical Kansas. My own family comes from the liberal East Coast. I hear from both sides, especially after the election, I hear so much ignorant stuff
40:20
on both sides, and I just don't quite buy the kind of beautiful, idyllic picture that you painted of the liberals in the United States. Oh sure, I mean I would say certainly you could model it the other way. My point is basically the same about the filter bubble, right? You could model it the exact opposite way and you could say, yeah, a society could go
40:42
completely to the left, maybe that's Venezuela, I don't know, I mean that could be the society where there's one hardcore side. But I just sort of thought that was a convenient slide because it was an image that I think is quite compelling in terms of rebutting the filter bubble model. The bottom line is that
41:00
democracy is fragile. And that if you don't have a real commitment to openness by all the major entities in a political spectrum, you really have to worry about that. And I think that is very troubling. So that's just the point. The point is that you have to have a general, beneath political commitments, you have to have sort of a Habermasian commitment
41:22
I think, to actually listening to the other. Over there. Yeah, hi. Good afternoon, my name is Craig Fagan, I'm with the World Wide Web Foundation, which was set up to basically keep the web an open and safe space for everyone. And obviously algorithms are key; as you use the term social media genocide, they're affecting how this
41:41
vision is kept alive. But in a lot of your remarks you focused on the US, which obviously is a good example. How have you seen this play out in other countries? Because one of the things that we're very concerned about is it's very much just as you were talking about Silicon Valley driving the agenda in the US it's Silicon Valley driving the agenda in Lagos, in Delhi, in other countries
42:01
where there isn't that type of debate; the concerns you talk about, the filter bubble, are on a totally different scale. And there's also a thing which I think you alluded to, which is around algorithmic harm, the idea that there is a social harm being created by algorithms, so we need to do something about it. Which is similar, to me, to the environmental debate, where the
42:21
drive of the agenda in Germany or the US is not going to be at the same level as what's happening in Nigeria or India. So I wanted to know how you're looking at this broader spectrum, and to bring it back to how you see these two things playing out in the debate in other countries. Thank you. Thank you. So I would say that one of the key messages of
42:42
the first part of the talk, and the ending, was that I am very worried that the US-centered view of the world is overly influencing international bodies that might misinterpret a culturally specific US idea of what
43:01
free expression or free trade is, in order to eliminate the possibility for many other countries to develop culturally and nationally specific responses. And just to turn the question around a bit, I guess I would say that perhaps my focus on US examples performatively embodies that
43:21
attitude by indirectly confessing that that's what I'm familiar with. I don't read all of the German papers each day. I don't read all of the British papers each day. I don't read the other countries' papers each day. But I do look quite specifically and look quite closely at what the US experience is and I think that what it shows is that
43:41
in a country with very advanced penetration and adoption of some of the most advanced automated public sphere methods that you have a lot of opportunities for distortion. So all that my bottom line would be is I'm trying to essentially use US parochialism against itself.
44:01
I'm trying to say essentially that I don't necessarily trust what is being advanced by interests largely identified with culturally specific ideas of what free expression or free trade is. Be open to other sort of experiences. So yeah, that would be my response. There's one more question in the front
44:21
row here. And one more over there. Hello. I'm a member of the trade union ver.di, and my question is whether it would be helpful to open the discussion up more, not only talking about the filter bubble but
44:40
talking about the necessity of transparency of algorithms in all the spheres of democratic decision making, but also about security. I think we have to have much more transparency of algorithms in all the basic infrastructures. It's extremely opaque, unfortunately, and
45:00
therefore it's a security issue. But also, for example, in the trade union we are talking about different laws in different nations. In Germany we have co-decision making, and we need to have transparency of algorithms, or at least criteria for what's going on with
45:20
the software, the functioning, the decision-making processes of software, to be able to co-decide what kind of technology can be introduced. So, you know, I think that in general, for democratic procedures and safe infrastructures, we do need much more
45:40
transparency and it's part of the filter bubble and part of other issues too. Maybe if we discuss it as a general problem of democracy and security maybe then we have a little bit more strength in pushing it. Thank you. Well I would completely reiterate that point. I think that in particular in my Black Box Society book I was really
46:01
focused on the data brokers, the media sector, and the finance sector. And I thought particularly if you look at the crash of 2008, the global financial crisis, much of that was down to the extreme opacity of algorithms but what's so important is that we have to emphasize that this opacity was a social construction. A lot of firms say oh it's just too complicated for people to understand.
46:21
In fact it's because of trade secrecy requirements etc. And one thing I'm very concerned about in the European context is that the right of explanation that is guaranteed under the GDPR, that that will be essentially cut down to nothing thanks to trade secrecy assertion by large algorithmically driven entities. So we really have
46:41
to watch trade secrecy assertion. I'd also say, and here I have to preview my next book, which is on robotics, that in the areas of medical robotics, education robotics, law enforcement, and the military, there's extreme opacity about the guts of robotic systems. And if you think it's bad
47:01
when it's opaque as to how we're getting our news feeds and how we're getting search engine results, imagine when it's a robot deciding whether to arrest you or not, or how it's going to be teaching, or how it's going to be diagnosing illness. So as these algorithms power more and more parts of our lives, we're going to have to really recommit ourselves to exactly the type of
47:21
commitments that you articulated. So thank you. And I think one more question from the lady there, if that's okay with you, Frank. Thanks. Well thank you very much. First of all, this was a very inspiring talk and I look forward to following this up with your book. I'm not quite sure whether I really can put my finger
47:41
on my question precisely now, but I would like you to connect what you just said to the notion of democracy because what struck me is that a lot of what you suggested is actually very top down. You were talking about regulation, you were talking about the old notion
48:01
of gatekeepers to information, about labeling information. Now the concept of democracy says that people actually participate from the bottom up. That seems for me to be a bit at odds and one of the good things that I think that the internet has brought about and especially
48:20
also in the context of the Trump election is that we've been seeing a lot of grassroots journalistic outfits, startups, etc. So how do you put this together? Sorry, I'll just throw this at you. No, it's an excellent question and I think that this approach
48:42
that I'm putting forward in response, the set of seven proposals for regulation, or in some cases self-regulation: it really is a plea for structure. And here's how I would root this; it actually comes from a James Q. Wilson book about bureaucracy. He's a great administrative-law sort of
49:01
political scientist, and he compares ideal types of how political struggle is done. And he says, and again this is very crude but I think it's illuminating, he says that in Europe many decision procedures seem like they follow something like the Marquess of Queensberry rules; it's like a boxing match.
49:21
There's a fight, people contest it. There's an argument. There's a decision on a winner and people sort of are going along with it generally. He compared that to the American political process which he analogized to a barroom brawl. He said that the fight there, it never ends. There might be a gang that goes out of the bar and tries to get somebody else to come in and beat up their enemies et cetera and he said that's
49:41
essentially what the administrative process in the US reminded him of: this sort of complete free-for-all. And my argument is that ten years ago, say, when we were most influenced by a Wealth of Networks type of utopianism about the potential for networks online, we thought that this would bring everybody in and that that would sort of automatically
50:01
conduce to democracy, to free markets et cetera. It was a very Hayekian vision of spontaneous order. I think that over the past ten years what we've seen is that what's really happening is that there is, there are large intermediaries that have enormous power. So first of all it's not like regulation
50:21
is intervening on or interfering with a process that is now an open ended and grassroots one. It's really one where the process right now, the default is one of extreme centralized corporate power and secondly that the sort of grassroots vision
50:40
is vulnerable: grassroots are very easily replaced by astroturf, by fake grass, right? And so to even be supportive of a grassroots group you have to have some concept of what's grassroots and what isn't; the Twitter bots I showed simply purchased time from very good coders, or from others, to manipulate the public
51:01
sphere. And without that type of basic differentiation between authentic civil society institutions and fake ones that are essentially made up by manipulators we're essentially discrediting all of civil society. It would be the same as if we allowed counterfeit money in circulation, right? Gresham's law says that counterfeit
51:21
money, the bad money, drives out the good. In many respects we would have bad civil society entities driving out the good, the fake ones driving out authentic ones, etc. How do you make that distinction? That's difficult. But at the very least, entities should be extremely transparent about their funding, about how they were formed, about their governance structures, etc.
51:41
There are unfortunately, you know, in many contexts there's lots of think tanks that say, hey we're just a civil society group, we're just trying to represent the grassroots. And it's obvious where the funding is coming from but it's never noted on say reports, news media, etc. So this sort of self-protection of civil society demands some forms of structure. And without it we're going to see that type of barroom
52:01
brawl model of politics that really ends up exactly where the US is right now.