
Fake, hate, and propaganda - What can technology do?


Formal Metadata

Title
Fake, hate, and propaganda - What can technology do?
License
CC Attribution - ShareAlike 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
Join us for a session with Nicklas Lundblad on how Google is approaching fake news, hate speech and other policy challenges through partnerships and technology.
Transcript: English (auto-generated)
Thank you so much. It's great being here today and thank you all for being here at 10 o'clock in the morning.
I commend you. I hope you have coffee in your hands because I think that this is going to be a really interesting discussion. We chatted a bit in preparation for this and I think you'll find that we'll really try to tackle some of the big issues today. So I'm going to start out with what I think is one of the defining questions
and something that Frank Pasquale talked about yesterday in his talk on stage one, if folks saw it. Which is the role of a corporation in the 21st century with respect to human rights, freedom of expression, privacy. Full disclosure, in my work I look at a number of different companies, including Google and various
Google properties, in terms of how individuals feel about their content being taken down on social networks. It's called onlinecensorship.org and we've been looking at that for a few years now. But I think it is a really big question that covers all of these different issues. So I would be curious just to hear, to start off, what you feel
that the role of a corporation is today. It's a huge issue, as you point out, and I think that the role of a corporation is also constantly changing because of changing laws and changing balances, et cetera. But I think at the heart of it, the role of a corporation is to provide as good a service as it can to its users. That's the first sort of baseline expectation that people have. And then on top of that, try to find a way to maximize and encourage free expression,
creativity and the open debate that the public discourse requires on top of that platform. Now as you well know, that responsibility, and the ability to maneuver, is influenced by a number of different factors.
So I think it's a good question to be asking but it should be asked in context. So what is the shared, distributed responsibility for open debate in our societies? If you ask about the role of the corporation, you also necessarily have to ask about the role of citizens but also the role of the state and the role of the regulator and try to figure out how they all fit together.
And that's a negotiation that we've been in for the last 20 years, ever since we were founded back in 1998. And I think it's one that's evolving, sometimes in a good way, sometimes not so much. Well then let me ask more specifically with respect to Google: where do you see Google fitting into that landscape? Because I know when it comes to, for example, Facebook,
they've been really staunchly opposed to being seen as a publisher. They've said no, we're neutral, we're a technology company and I would vehemently disagree with that personally. And so I'd be curious to see where you see Google's role in particular. So I think, I mean, we have a number of different roles because we have a number of different properties that we offer. One is YouTube, another is search. And I think that they are also different in terms of how they relate to
the overall ecosystem. I think it's too simple to say that it's either a question of you being a publisher or not being a publisher. If you look at the way the public sphere has changed with the introduction of technology, you realize that entirely new roles have actually emerged. One of the things that I think is interesting is that when you have a situation of enormous information wealth, you have so
much information out there, what is needed is somebody to curate it, certainly, somebody to host it, somebody to publish it with a sense of liability and editorial responsibility. But you also have a vast space that is information discovery. Search, for example, solidly sits in the information discovery niche, in my view.
Which means that search necessarily needs to find out what is the best way to construct the public sphere information discovery mechanism. And that is by maximizing access to knowledge, to information, to whatever people are looking for. Not necessarily becoming a publisher with all of what that means. Information discovery and publishing are different.
And I think that once we start understanding and studying this ecosystem and the different ways in which we all interact with a wealth of content out there, we see that different rules apply. So for search, that's very clearly the case. YouTube has more of a community approach to this and essentially is trying to create a platform for users to create,
you know, speak with their own voice, etc. And within that mission, it's also trying to create community guidelines that will allow the users to shape in different ways the kind of platform they want. So it's going to be different for different kinds of platforms. And I don't think it's a binary choice of publishing or not publishing.
Ah, I really like that answer. And I think I want to move to what we were chatting about just before we started, which is search. So if you don't mind, I'm going to share just a quick story. A few days ago, a friend of mine asked, you know, how many countries are there in the world? And a couple of us gave different answers and I thought, okay, well, good thing we have Google.
And we searched it and what came back was what I now know is called a knowledge panel, those boxes that you see at the top of search. And it had a particular answer that was not incorrect but not necessarily comprehensive. So obviously how many countries is a complex question that we're not up here to answer today. But I think that the knowledge panel is a really interesting thing.
And I liked what you were saying before, this kind of question around the objectivity of information and, you know, whether that even can provide a complete answer. So what's your take on that? How do you see those as serving? And do you see it as kind of a policy issue when these questions are more complex?
Oh yes, it's a little bit of a question of what is a good response to a search. When you're asking a question, what's a good answer, essentially. That's the kind of thing that our engineers are always thinking about, trying to figure out new ways to answer that question in turn. I think one of the things that we have certainly seen over the last ten years or so is that there's no natural necessity that the answer to a search is ten blue links. In fact, this needs to evolve because people become more advanced, they become better at using information, and they want different kinds of responses. And on top of that, the vast amount of information that you have to search through increases all the time.
According to some figures, and this is quite interesting, the amount of information digitally stored doubles every 12 months. It's a staggering observation if you think about it. If you believe that search quality is in some way related to the set of information you have to search through, you would then say that if you do nothing for this period of time, the quality of your searches will halve, right? So the essential challenge for anyone involved in search is to figure out ways to combat the enormous information explosion and try to get better answers to the questions people are asking. One way that we've done this is through the knowledge panels. The knowledge panels essentially build on a kind of technology you can describe as a knowledge graph, where you try to map out what we know and how it relates to every single other thing out there. And this is an enormous undertaking, because you have to start answering questions like, you know, how many facts are there? How do you determine them? What is sort of a good, solid basis for determining fact? And it is tremendously hard.
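A minimal sketch of the triple-store idea behind a knowledge graph, assuming only the description above; this is illustrative Python, not Google's actual Knowledge Graph, and the facts are placeholders.

```python
from collections import defaultdict

# Illustrative triple store: facts kept as subject-predicate-object triples,
# indexed by (subject, predicate) so a question becomes a direct lookup.
class KnowledgeGraph:
    def __init__(self):
        self.triples = defaultdict(set)

    def add_fact(self, subject, predicate, obj):
        """Record one subject-predicate-object triple."""
        self.triples[(subject, predicate)].add(obj)

    def lookup(self, subject, predicate):
        """Answer a question like ('Germany', 'capital') by lookup."""
        return self.triples.get((subject, predicate), set())

kg = KnowledgeGraph()
kg.add_fact("Germany", "capital", "Berlin")
kg.add_fact("Germany", "member_of", "European Union")

print(kg.lookup("Germany", "capital"))  # {'Berlin'}
```

The data structure is the easy part; as the speaker notes, deciding which triples count as facts and keeping them current is where the difficulty lives. The "how many countries" question from earlier is a case in point: several defensible sets of triples exist.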
There are a couple of startups out there, too, that are working on this. The mission of one of them, reported on in the New York Times about a year ago or so, was essentially to determine all the facts in the world, and, you know, to ask how many there were. The answer proved elusive, because I think it's really hard to do that.
Just look at search. 15% of the searches we get every single day are new. We haven't seen them before. These are questions that we don't see as recurring. So that's a real challenge. And I believe that for a search engine engaged in information discovery, you have to pursue several different avenues of innovation in order to become better in a space that's quickly exploding in information.
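For what it's worth, the doubling claim a moment ago can be made precise, under the strong assumption (mine, for illustration) that search quality scales inversely with the size of the corpus being searched:

$$N(t) = N_0 \, 2^{t/T}, \qquad Q(t) \propto \frac{1}{N(t)} \;\Rightarrow\; Q(t) = Q(0)\, 2^{-t/T}, \qquad T = 12 \text{ months},$$

so a search system that stands still loses half its effective quality every year, which is the halving referred to above. The speaker only claims quality is "in some way related" to corpus size; inverse proportionality is simply the simplest such relation.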
I think that the knowledge panels, I mean, they also have the possibility of returning incorrect information. And I know that that's something that Google has worked on and is always actively working to fix. But I think that's a good segue to talk about the issue that everyone's calling fake news.
And full disclosure here: I personally feel that this binary of fake news and real news is part of the problem. And I've heard people say that here in the past day as well. That by kind of drawing that line in the sand and saying all of this is correct, all of this is not correct, we're really just perpetuating the issue. So I guess first, for those in the room who aren't familiar, where does Google see its role in tackling this fake news crisis? And then I'd also just love to hear your thoughts on how you see this as an issue.
So fake news in a sense is not new for a company that's been engaged in working on information quality for most of its time. I mean, fake news or other kinds of low-quality information has always been
a problem for search. The key issue to observe is that the web is an adversarial environment. Not everyone is working together collaboratively to get to the best possible environment; people are working to further their own interests. That's why, you know, we've had these enormous problems with search spam for the longest time.
If you go back to the introduction of Google search and do a brief overview of the history of search, you can see that the first kinds of search engines out there were very simple. They did linguistic frequency analysis: they would look at a page, count how many times a word actually occurs, and use that as the basis of their ranking. Once that method was tried in an adversarial environment,
it completely broke down. If you go into archive.org and look at old web pages, you will find pages with this much text and this much scrolling area where there's white text on a white background to attract people. Which is really interesting, right? It's like having, you know, Britney Spears in white text on a white background for the entire page.
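A toy version of the frequency-based ranking being described, in Python; the pages are invented, but it shows concretely why hidden keyword stuffing beats an honest page under this scoring.

```python
# Toy "linguistic frequency" ranker: score = how often the query term occurs.
def term_frequency_score(page_text: str, query: str) -> int:
    return page_text.lower().split().count(query.lower())

honest_page = "Britney Spears announces a new album and a world tour."
# Keyword stuffing: white text on a white background is invisible to
# readers but fully visible to a scorer that only counts words.
spam_page = "Totally unrelated page. " + "britney " * 500

for name, page in [("honest", honest_page), ("spam", spam_page)]:
    print(name, term_frequency_score(page, "Britney"))
# honest 1
# spam 500  <- the spam page outranks the honest one
```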
And it completely broke the linguistic frequency search model. And what we've seen over time is that this adversarial tension is moving from spam up into more and more semantic layers. So people are constantly competing for attention in different ways, because attention is something that can change societies,
that can change purchasing patterns, etc. And then in the midst of all of that, you have to figure out how you retain information quality, which is increasingly a hard problem. So I think the way we see it is that this is an extension of the information quality problem, but with an added order of difficulty, because you're trying to decide semantically if something is fake or
not, if it's valuable or not. And that's really, really hard to do. Now, there are some simple things we can do. When somebody is obviously misrepresenting themselves as a news organization, you can shut off the ads for that organization. Things like that, that sort of follow the money and stop the money flow to those who are willfully just spewing noise into the system. So those are certainly things that we do.
And the other thing we are trying to do is to very pointedly show that this is not something that a single actor in this ecosystem can solve. Just as we came back to the question of responsibility earlier, it's a question of how you build this ecosystem out. So we are, for example, enabling different kinds of standards and information markups for fact checkers.
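The fact-check markup alluded to here is, as far as I know, the schema.org ClaimReview vocabulary; below is a sketch of what such an annotation looks like, built in Python for consistency with the other examples. The outlet, URLs, and the claim itself are invented placeholders.

```python
import json

# Sketch of a schema.org ClaimReview annotation that a fact checker can
# embed in a page. All names and URLs here are illustrative, not real.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://fact-checker.example/checks/moon-cheese",
    "claimReviewed": "The moon is made of cheese.",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,          # on this outlet's own scale: 1 = false
        "bestRating": 5,
        "alternateName": "False",
    },
    "itemReviewed": {
        "@type": "Claim",
        "appearance": {"@type": "CreativeWork",
                       "url": "https://social.example/viral-post"},
    },
}

# Embedded in a page as <script type="application/ld+json">, this lets
# crawlers surface the verdict alongside the original claim.
print(json.dumps(claim_review, indent=2))
```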
So they can work within this ecosystem to mark up information quality as well. And so it's a complex problem, it is not new, and it stems from the fact that the web is a deeply adversarial environment. Like, the web is a deeply adversarial environment, that's quite quotable. And I don't think anyone here would disagree, yeah.
Great, well then I guess I would like to follow up on that. I'm starting to feel like we live in a society where evidence-based thought is kind of on the decline. It feels that way at least. It feels that way in the sort of polarization of politics that we're seeing in Europe, the United States, and a number of different places in the world. Is fact checking even going to be enough?
Do facts hold the same meaning if everything is a construct? And so I guess I would say, what more can we as humans do? Not just as companies, not just in our role in the world, but what more can we as humans do to solve this issue? And it's such a good question, because at the end of the day,
we spoke in the beginning about there being a responsibility for citizens as well. I recently finished a biography of Justice Brandeis, written by Jeffrey Rosen, I think, I may be misquoting this. And in this biography, there's this fascinating set of quotes from Justice Brandeis in which he says that leisure time is the time you spend becoming a better citizen.
That's how you engage in a democratic society, how you invest in the institutions of democracy in order to strengthen it. And it's your duty as a citizen to do that. It's a way of viewing how we participate in democracy that is almost completely outdated in a sense.
It's very hard to find the corresponding views to those of Justice Brandeis today. And I think it's Justice Brandeis' point that we do have an individual responsibility when it comes to determination of fact, participation in democracy, the production of knowledge, for example, is quite right. And there's no simple fix to this.
There's no technical fix to what you're describing. I think that's really important to remember. There's no dial to turn up or down. There's no switch to flip that will make this kind of problem go away. This is about how we view our democracy and how we want to participate in it and how we invest in it in time and effort and that is going to be up to us.
That's why if you look at Aristotle, the ethics precedes the politics in Aristotle's work. There's a very real reason for that because you have to have your ethics right first, your personal responsibilities, and then you can start with your politics. I like that as well. I think that the question of participating in democracy and
what sorts of things we expect from our democracies in this current era, where I think we're all feeling this kind of tension, brings me to something on which I think we, you personally but also Google, and myself and my organization, probably share an opinion, which is the hate speech bill here in Germany.
I just want to pull up my phone to make sure I get this quote right. So there was a recent civil society letter that I understand some of the industry associations that you're part of signed on to, but that was also signed by the Chaos Computer Club and a number of other German organizations. And in that letter it said that internet service providers play an important role
in combating illegal content by deleting or blocking it. However, they, meaning internet service providers, should not be entrusted with the governmental task of making decisions on the legality of content. What are your thoughts on that?
I agree with that. I think it's also a question of how you actually want the information flow in a democracy to evolve over time. I don't think that it's a good idea to put that decision onto corporations. Specifically not if you look at the way the hate speech law is construed, with enormous penalties and very short response times. If you have enormous penalties and short response times,
it is quite clear that you are creating a risk that companies that don't have a lot of resources, or are not able to put a lot of thought into this or push a lot of reviewers towards it, are simply just going to default to takedown. Which means that the overreach that such a legal construction leads to will be quite significant. And I think that is a real risk to free expression.
And some of these things, some of these institutions, some of these decisions the state needs to retain and needs to still have as a part of what it means to be a democratic core institution. I think that's very, very accurate. I mean, I absolutely agree with that.
I think though, when it comes down to the role of companies engaging with states on censorship, I want to not really push back, but kind of broaden this out a little bit. To say that a lot of companies, YouTube in this case, Facebook, do comply with laws that people in those companies do not necessarily feel are just.
And I think Turkey's a really good example of this. I know that there are a couple of talks on that here this week. Where do you feel that the company's role is in compliance in that sense? Where do you feel that they should draw a line and say, okay, enough, we're not going to be complicit in this kind of censorship? So I think in many ways, what you're asking is, what do you, as a company,
say about a legal decision in a country where you're present? If you are present in a country, if you made the choice to be present in a country, you've also implicitly made the choice to abide by that country's laws and rules to a high degree. And I think that is sort of a basic consideration that all
companies need to bring into this discussion. Now, there are some decisions where you can certainly push back. And you can push back against the legal basis of it, and you can challenge it in court. And we've done that in several cases in several different countries. And I think what you do at that point is that you try to combat the decision within the legal framework in which it was taken. And I think that's something that companies actually should do,
to a large degree. And they should challenge these decisions within that legal framework. At the end of the day, though, if you look at the way that democracies are expected to work, a democratically elected government has the right to restrict information to its citizens according to its constitutional framework and according to law.
So what I think would be good, given that there is so much discussion about this, is to provide much more accountability and transparency in how that is done. We have a transparency report, we were the first company to launch one, where we report on all of these different things. But really, it shouldn't just be companies doing that, governments should do that.
Governments should say: here are the different ways in which we restrict information; here are the different ways in which we request information about you, as our citizens; here is how many times we've used them; and here are the different ways in which you can change those rules, should you disagree with them. That should be a national reporting duty for governments, because it would provide transparency into those regimes.
And give citizens a chance to discuss them, and a chance to change them if they don't agree with them.
Now I'm excited to talk about transparency. No, I agree with you. I think that governments should, but we also know that in a lot of cases they won't. To give a story about a different company, there was a scenario a couple of years ago where EFF saw that a company was complying with orders from prisons in the United States. And that was a real problem, because there was no transparency around it; it was not the same kind of legal request that had existed previously in that company's transparency report. Eventually things were settled and the policy was changed. But I think that shows that if even the US, which is seeming less and less democratic these days, is not doing this, then it's maybe unreasonable to expect Turkey at this point to do so. And so in that sense, I would say I agree. Google's been an absolute pioneer in transparency reporting; in our Who Has Your Back report that we've been putting out for quite some time, I think Google has all the stars.
I didn't check the most recent one. But do you see now as companies are sort of expected to take on more when it comes to content moderation, to regulate content that is not necessarily illegal? I know Google doesn't do this quite as much as some other companies, but there are still areas where that happens. Do you see a role in these transparency reports for
more transparency around that kind of issue, around terms-of-service censorship?
Yes, I do. I think that's a very reasonable question. I will not be revealing any state secrets if I tell you we have been discussing for a long time the right way to do it, so that it becomes meaningful, informative, and something that users can use to make their own decisions about whether or not it's right or wrong. This is an evolving topic, and I think that if we have this discussion in a couple of years, you will see that new governance institutions like that, for example, have evolved within companies. And I think that is probably the right way to go about this. Companies, I think, can only gain from providing more transparency into how these decisions are made.
Because at the end of the day, users should judge them on their past behavior and try to decide whether or not they trust them based on that behavior. That is the only way in which you can generate the kind of trust you need to continue delivering your services. Then I think that, again, that should be complemented by transparency
from governments about how these different rules are applied. Which is also when it comes to things like the German hate speech law, why I believe it's a bad idea to push all of this over to companies. Because then the state doesn't need to take responsibility nor accountability for the kinds of decisions that are made. Which is, again, something that stops citizens from changing those decisions.
Absolutely, so beyond the issue of speech regulation, I think that there's something here, too, around hate speech and around counter-extremism. And so I wanted to raise Jigsaw, which is a part of, I feel like I'm getting this wrong these days, so Alphabet, Google, Jigsaw. Am I understanding the- It's Alphabet, Jigsaw, Google. Jigsaw, Google, okay, I apologize.
I need the flow chart, we should have put that up behind us, it's complicated. So this whole space. So I know that Jigsaw is focused on anti-extremism, that's one of their areas of focus. And I think this is a place where companies maybe feel that they have to
step up in some way beyond just regulating the content, beyond just taking down the videos. Because that may not even be the best solution. For example, we've heard from some police departments in a number of places in the world that the taking down of the content without any further context is not even necessarily helpful. That it doesn't allow them to track, it doesn't allow archiving, etc.
And so I think that this is admirable work. But at the same time, right now, given the rise of white supremacy and fascist ideologies in Europe and the US in a number of places, where does Jigsaw, where does Google see its role in countering or combating those ideas as well?
Not just Islamic extremism, but also these other really prevalent forms of extremism. Well, I think that obviously it's something that a company can't do alone. Even Jigsaw acknowledges that radicalization is a complex, multi-layered process, right? We have no evidence that it happens just because somebody sits in front of a screen and sees a number of videos or a number of posts in
a social network and then decides to become an extremist. Usually that is about the identity building part and the radicalization happens offline in a small community where somebody is recruited. So the radicalization process and the de-radicalization process are deeply complex processes. And Jigsaw has chosen to, for now, focus on the Islamic radicalization process.
Now there's a reason that Jigsaw is not a part of Google. They want to try all these different things, and it was decided early on that it's better for them to do that as a free-standing company. And they should speak for themselves; they have their own ideas about how to put this forward. Jared Cohen and Scott Carpenter are more than happy to come talk about these issues.
The reason we did the radicalization project with them from Google's side was that we felt there was a clear use case there for the Redirect project we did, where somebody looking for content that is essentially radicalizing is redirected to counter-speech. Because we felt that was a clear case where we could test counter-speech in this particular environment.
Now will we do that for other kinds of extremism as well? I don't know, I mean we're still evaluating the results of the redirect experiment and what we believe it led to. We do know that it led to a lot of exposure for people who were looking for radicalization content, but we don't necessarily know what the impact of that is. As you point out, it's a deeply complex process because
if you're feeling that the entire world is conspiring against you and stopping you from getting to the content that you really need and that there's this paternalistic element to it, then maybe you actually end up making the situation worse. So you want to be very careful, you want to look at limited cases, and you want to understand how they work, and that's essentially where we are now. So it's very early in the process.
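A deliberately simplified sketch of the redirect idea just described, assuming nothing beyond what was said above: match queries against curated markers of radicalizing intent and surface counter-speech. The markers and links below are invented; the real Redirect Method reportedly worked with curated keyword lists and counter-narrative video playlists.

```python
# Toy sketch: route queries that match radicalization markers to
# counter-speech, instead of (or alongside) the organic results.
COUNTER_SPEECH_PLAYLISTS = {
    "join the caliphate": ["https://example.org/defector-testimonies"],
    "martyrdom operations": ["https://example.org/clerics-against-violence"],
}

def counter_speech_for(query: str) -> list[str]:
    """Return curated counter-speech links if the query matches a marker."""
    q = query.lower()
    hits: list[str] = []
    for marker, links in COUNTER_SPEECH_PLAYLISTS.items():
        if marker in q:
            hits.extend(links)
    return hits

print(counter_speech_for("how to join the caliphate"))
# ['https://example.org/defector-testimonies']
```

As the surrounding discussion notes, the hard questions, such as whether exposure changes minds or paternalism backfires, are empirical and sit entirely outside a sketch like this.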
Now, Jigsaw may well have another answer to that question. I think they should duly be asked that in their own right. Excellent, well I will do my best to ask them in the future. I'm also really curious about your thoughts on where the internet is headed. I know that this is a really big question,
but we've been kind of talking around these ideas when it comes to hate speech, when it comes to the different types of knowledge that people have access to, and censorship. And I think that now we're seeing an increasing, I'm not gonna use the word balkanization, fragmentation of the internet. What do you see as your role there in ensuring that people do have
some sort of universalized access to information? I mean, from my perspective, governments are working together a lot of times to collude in oppression. And I think that people also have this responsibility to work together across borders and across boundaries in solidarity with each other.
But when our access to information differs, and sometimes even with the help of companies, that can be a difficult task. Our mission statement is to organize all the world's information and make it universally accessible and useful. And we are committed to trying to do that as best we can within existing laws and legal regulations, and where we can, to have an open debate and a policy debate
about how those regulations can be changed in order to achieve that goal. That's very clear to us. I think that universal information access is actually a very important point when it comes to how we can change democracies and how we can get to a shared worldview, etc. On where the internet is heading, though, I mean, there's a couple of different things happening
that I think are really difficult. Take hate speech, for example, just as one single factor that we can study in order to understand what's happening right now. There is a push in a lot of Western democracies, for good reason, because this is felt to be a real problem, to regulate hate speech and to regulate it in quite a heavy-handed way, which means that you end up having regulation
that might deter companies from allowing any kind of spirited debate, if you will, on their platform. And spirited debate is very different from hate speech, but the regulation may have a chilling effect on it. And what will happen then is not that this will disappear. I mean, there was a research report recently from Pew, published on the 29th of March, where they asked a number of experts in the internet community
what they think the next steps are. And they say, we think we will see more hate speech, but we think that we will also see it move from a public stage to a semi-public stage, into other kinds of networks. And it will fragment the internet, because different kinds of content rules will be applied differently in different places.
And I think that the content regulation we're discussing now in several different parts of the world actually has the ability to create content islands on the internet. So you get an archipelago of different kinds of discussions, and then they are separated off because of regulatory concerns. So of course you can see that happening. And then you can ask yourself, look, if you have a guy who says,
I really, really dislike Swedes on an open arena, right? That is not nice. I think that's pretty bad. But if he goes into a small room with like-minded people and says, I really dislike Swedes, what is most dangerous for our society? To have him in that closed room together with like-minded people, or to have him on an open arena?
We haven't had that discussion. We just assume that we can sort of remove hate speech, which I think is a little bit like fighting a symptom rather than thinking about the core causes. Yeah, I absolutely agree. It's like putting a Band-Aid on this open and terrible wound and thinking, oh, okay, this is going to go away, but it doesn't. And I think that we've seen this in the way that imagery is regulated.
I mean, I know that when I go to Budapest, for example, I can see, you know, stand-in symbols that indicate that this shop is owned by Nazis. The Confederate flag, the American Confederate flag is commonly used there as a stand-in symbol because this other one has been made illegal. So I may not recognize that immediately. And I think that there is a real danger there to siloing off networks of hate.
I want to change tack just a little bit, because that was just a superb answer, and move toward automation. Not to advertise this, but I'm running off to my next talk after this, where I'll be talking a lot about automation and algorithms. And so I have to take this opportunity and ask for your thoughts, particularly on the role of automation in content moderation.
I think right now there's a lot of talk about the labor involved, the post-traumatic stress disorder involved with content moderators who have to look at this terrible imagery every day. And there's a lot of empathy for them and a lot of attempts to create something different. But at the same time, I'm concerned that automation
will bring in the same sorts of human biases that we see now. And so I'm curious where you see this headed. Well, I think that first and foremost, you're absolutely right. The people who are forced to review this stuff, they are doing a really, really difficult job. And as a company, you have a responsibility to make sure
that they get the kind of support they need in that very difficult job. So I think that's important to say first. Then I think this is a problem that can be separated into two different things. The first is: can we? You know, is it possible within existing computer science knowledge to build a system that can automatically do content regulation
and decide that this little piece of content here is hate speech and this little piece of content over here should stay up? I think that we are still far, far away from being able to do that. And I think that the notion of context, the ability to detect context, is really hard. It's an open computer science problem, if you want.
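To make the "can we" question concrete, here is a toy thresholded classifier; the scores are invented and no real model is implied. The point it illustrates, picked up just below, is that the whole trade-off lives in the threshold: lower it and counter-speech gets swept up as false positives, raise it and more abuse stays up.

```python
# Toy moderation pipeline: a model emits a hate-speech confidence score,
# and the platform auto-removes above a threshold. Scores are invented.
posts = [
    ("quoting a slur in order to condemn it",    0.81),  # counter-speech
    ("actual targeted harassment",               0.93),
    ("heated but legitimate political argument", 0.40),
]

REMOVAL_THRESHOLD = 0.75

for text, score in posts:
    action = "auto-remove" if score >= REMOVAL_THRESHOLD else "keep / human review"
    print(f"{score:.2f}  {action:<18}  {text}")
# Note the first line: condemnation of hate is removed along with hate.
# That false positive is exactly the problem discussed below.
```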
So that's the sort of can we perspective. I don't think that it's possible to do within any kind of confidence interval that we would feel roughly comfortable with. Now, assuming we can, just for the sake of argument, assume that we have a system that has learned to identify
some content that is hate speech and some that isn't, would we still want to delegate this decision to a system or not? That is a much more difficult question, I think, because it might be true for some kinds of content. This is where we need to get nuanced and this is where it gets messy. If you run a platform and you don't want pornography on that platform, you might be able to build a system
that could detect pornography with a certain high level of confidence. And the false positives of that detection may be people on a beach or people participating in some kind of nude activity, right? And those false positives may not be high-value content
and if they are actually removed from the public debate, you may not think that that's a huge loss and you could correct it with an appeals process and a reinstatement process, et cetera. Now, if you go to something like hate speech and you start looking at the nature of the false positives, which is sort of the research problem that you have to engage in,
you realize that the nature of the false positives when it comes to hate speech is much more difficult to address because you may be filtering out counter speech, for example. You may be filtering out other things like that. So I think that we're still at the can we stage and no, we can't. So that sort of settles the issue for now. But moving ahead, the question is,
will we ever be able to? I don't know what the answer to that is. It depends a little bit on how complex this problem is. It may well be really complex. I personally think it's so complex because, again, the web is an adversarial environment. You made a really, really good point about the flag. If I build a system today that essentially allows me to detect certain kinds of markers for white supremacists, for example,
and that system picks out all of that speech. Now, what will they do? They will change the words, right? This is the exact same kind of fight that you see in spam: the code words, et cetera, will change over time, and that system again is a system in evolution and competition, and you're going to end up
having to try to push the boundaries. If that change is always going to be fluid enough to keep confidence intervals lower than we want them to be, we're never going to be able to do it. So there's a will we ever be able to question too, but assuming that we will be able to, should we, I think still is a really big issue and I don't have a good answer for you
and I won't pretend that we are not all discussing this in the internet community. I had a discussion last week with a professor about this as well, and he essentially said that it is a decision that also needs to be informed by the risk of future discovery of bias. Because if you assume that you had this really good system in the 1930s and you put it in place, everybody would agree it's not biased, it's working really well, it's sorting out all of the anti-whatever stuff. And then you look at it now, 60 years in hindsight, and you say, my God, it's terribly biased, right? Because our view of bias changes over time. So there is a whole set of issues there
that are unresolved and I think we should, as a community, continue to debate them because I think that we will soon see somebody out there saying, oh, it's quite possible to do because they've built a system that they have faith in and they want to be able to do it. Is there anything right now for which Google is using algorithms to pull things
or even just to flag them rather than having human moderators? Mostly human moderators right now. There are a couple of very simple ways of identifying when flags become enough flags for a particular subject, or when something has been flagged before. There's automation there, but nothing on the scale of what you're discussing, this idea of sort of applying it to content moderation
which is essentially a human task today. Absolutely, no, I agree that it's a really tricky problem and I don't have the right answer either. I want to go to the audience in just a moment but I think I had one last question that I wanted to ask first and perhaps I don't. Well, I have so many things that I want to ask you
but I also want to make sure that I give plenty of time to the audience to ask questions. So do we have any just yet? Otherwise I can keep going up here. We do. Yeah, I wanted to ask, I was interested in what you were saying
about the citizen's responsibility to essentially discriminate between content, and that it's our responsibility as citizens to engage. I just wonder, given that the education we have received as citizens, even amongst people much younger than me, is not in step with the technological change, so as citizens we are equipped to engage in a society which is gone, whether there is a responsibility on the producers of the technology that is changing our society, so people like yourselves, to help equip the citizens to make those choices. Thanks.
So we are engaged, as you probably know, in a number of different digital literacy projects, and those are addressed at kids but also at grown-ups, and we are doing something called the growth engine project, which teaches digital skills across Europe, and I think that's part of it. Now I actually think that it's really important to talk about education overall
and how education needs to change in our society and why it might be important for us to constantly re-educate ourselves vis-a-vis technology. So we provide a lot of online content et cetera but at the end of the day that's also dependent on the individual's will to seek that content out and to change it. The key I think is as you say trying to figure out how you fit this
into basic education. Now there are some signs of hope. I'm a native Swede, and my kids have this enormously annoying thing that they're doing through classes four to six, called check the source. So what happens is that every time a proposition is being brought forward, they're taught to say check the source,
and then they go off and turn into fact checkers, and they check a lot of different sources, et cetera, and they are supposed to do this 150 times over the course of a year. So it's really ingrained into them. Now the annoying part of this is that every time I tell them to clean their room, they say check the source and they disappear, which is a bit of a bother, but the overall sentiment there is really, really good
and I do believe that this is classical critical thinking and something that needs to be embedded in education. I don't know if you read science fiction, but Vernor Vinge wrote a really interesting book a while back, I think it was 1999, called Rainbows End, in which this guy who has Alzheimer's is cured of Alzheimer's
and has to go back to school, because he has to relearn everything. And there are only two subjects in school in Vinge's version of the future. One is search and analytics, which is essentially critical thinking, and the other is visualization, in order to be able to visualize complex information. Those are the only two school subjects, and maybe we should think about educational reform
in order to equip us with better tools to deal with this.
I would love to go back to the German law about hate speech that you've been talking about. I think the gist of what you were saying is that it might force companies with less resources to just default to deletion as the solution to this problem: let's get rid of it even if it's just controversial and not really hate speech, because it's much easier than to incur those kinds of very big fines. But the problem is, and I've been following this closely from here in Germany for a publication, this has been ongoing, especially with Facebook, for more than two years. Self-regulation was the first choice for the government, and there was a big push for self-regulation, but it didn't work at all.
It didn't work at all. For two years, Facebook, and I'm saying Facebook not because you're here, but because the justice ministry just showed the numbers, and they show that Google actually did a great job. Your numbers went up. Facebook's numbers in actually addressing illegal content, and we're not even talking about stuff that could be deemed merely controversial, it was clearly illegal, they were not able to pursue that properly. Their percentages of reported illegal content that was removed just went down.
So what is the government to do at some point? There's political push on that, so something has to be done, and what politicians have in their hands is just regulating in ways that... And yeah, so what do you think about this bigger issue
of how to tackle that? I know that you can't say this but I can. I mean I have to think at some point with Facebook that it's willful because they're excellent at taking down any image of a woman's breast but they cannot seem to deal with hate speech and I can assure you I've been banned for posting breast cancer campaigns on Facebook before. It's unfathomable that they can't tackle this issue
but from my perspective I do feel that since Google, and I've seen that as well, Google was capable of this. I think companies are capable of it. Look, I don't think that companies, for the most part, should be forced into this situation. I do understand where you're coming from with respect to this one company but I see this as a problem with one company,
not a problem across the board that needs to be solved with this sort of rapid law. But I'll turn it to you to see where you stand on it. Yeah, I want to comment on Facebook. Yes, you're not allowed to legislate for a company. But there are a couple of different things here that I believe are really important. One is that I think that the entire industry has said
that we're willing to come to the table and discuss, see what can be done, how this can be improved, what we can do. We believe this particular piece of legislation is not necessarily going to solve the problem that you're looking at and it's going to have a lot of adverse effects. So let's not push a law through very quickly. I understand the sense of desperation that a politician can feel.
Also because many of them are on the receiving end of this and they're standing up for democracy. They are trying to really participate and I think that if you look at what politicians are doing, they are exposing themselves to this problem a lot. So I can understand the sense of desperation. But I think there are some things that we can do that doesn't have to entail pushing through an overreaching law.
One thing that we can do, that actually should be done, is to look at the existing enforcement of the hate speech laws that are there. What kinds of resources are being poured into this? How much are we actually trying to take these laws and take these cases and bring them to court? What resources does the court system have
to deal with this? Are there special prosecutors that can work on this, for example, with transparent means and with revision, et cetera? All of those things should be discussed and can be discussed. So we have said several times, in meetings with anyone who will listen, that we're happy to come to the table and discuss what can be done. And there may be other things we can do in terms of shared responsibility
where we would discuss what the roles of companies and governments are. But this is not it. It is not going to have the kind of effect that you're looking for. It would be much better to look at enforcement and look at alternative solutions, I think. Do we have more questions?
Hi. If I'm not mistaken, Google actually makes 90% of its revenue from advertising, is that correct? That is correct. OK, so basically, Google is an advertising company whose business is to influence people on behalf of its advertisers,
which gets back to Gillian's original point about the role of a corporation. And I have to say, I'm very troubled by the idea of experimenting on people, even if it's for good, to try to move them away from radical sites. To me, this is human experimentation.
And Google is already doing this on behalf of advertisers. Well, it's a valid point. We make our money from advertisers, and we're not shy about it; it's not a secret. One of the things I believe is really important about the Jigsaw experiment is that it's done in a structured fashion, that we're very clear about what's happening, and that we're very transparent about it.
You may disagree, and you may feel that that is the kind of thing we should never do. I think that's a valid standpoint to take, and it's one that we don't share. We think that we can do this within the remit we have. On advertising generally, I think the idea of advertising and funding what we do with advertising is that we help people find what they're looking for.
We help them do it for organic search, and we help them do it for advertising. Now, you can quite legitimately take the position that all advertising is a bad thing, and there should be no advertising in society. I disagree with that, too. I think advertising is a really powerful way to reach consumers, and it's a part of a market economy.
We may have a fundamental disagreement about whether a market economy is a good thing, and then we could go on and discuss that and the basic tenets of how we build our society. But for us, we're a company. We provide advertising technology, and the revenue from that we pour into a lot of different projects. I think that in many senses we believe we are doing a lot of good with those projects,
and we will continue doing so. However, I understand and respect your position. Yeah, the two big players, Google and Facebook, have pretty much, for many people right now, replaced what the internet was. They have become the internet.
Especially Facebook really tries hard to be the internet for millions and billions of people. I'm somewhat troubled when these companies, which have moved past anything we've experienced from a company before, still claim: we as a company need to have a moral background, we need to have our moral guidelines. No, you don't. You're beyond the role of picking and choosing moral rules to implement, because looking at Facebook and the ban on images of women's breasts, that's a good example of where this goes. So you shouldn't be in the position to stop or enforce or do anything about any part of free speech, because this really gets out of hand, because you're already far too powerful. I think we're not the internet, and that's a very good point.
We realize that internally. We're absolutely not the internet. We provide certain services, and on those services we do believe we have the right to determine roughly what goes and what doesn't. And you could say, well, you are so large now that I disagree with that. I think that you should not make any decision about what goes on your platform or not. We disagree with that position. We believe that we should be able to decide
what goes and does not go on our platforms. We don't believe that extends to all of the internet, and we do think that the internet is changing quite fast, and there will be new services, and there will be other competitors, and that will go on. Then I think that all of those values can be questioned. If you disagree with our values, you should definitely let us know, because the way we try to shape our guidelines on YouTube,
for example, is by listening to users: what they want, what they don't want, what kinds of content they don't like. That's why the flagging system is so valuable to us. We see people flagging, and we try to respond to that. Now, we don't always act on the flags; we're not completely governed by the flags. If we were, for example, there are a couple of pieces of music by Justin Bieber that would be taken down, because people flag Justin Bieber all the time, which is very confusing. And so this is part of running a service: you have to decide what you actually want to have on that service or not.
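(A minimal sketch, in Python, of the moderation pattern described here; the threshold value and the review-queue mechanics are hypothetical illustrations, not YouTube's actual system. The point is that flags feed a human review step rather than triggering removal directly, which is why a heavily flagged but policy-compliant video survives.)

```python
# Hedged sketch, not YouTube's actual system: the threshold below is a
# made-up value. Flags queue content for human review instead of
# triggering automatic removal.
from collections import Counter

flag_counts = Counter()
REVIEW_THRESHOLD = 100  # hypothetical: flags needed before human review

def record_flag(video_id, review_queue):
    """Count a flag; enqueue the video for human review at the threshold."""
    flag_counts[video_id] += 1
    if flag_counts[video_id] == REVIEW_THRESHOLD:
        review_queue.append(video_id)  # a reviewer decides, not the counter

queue = []
for _ in range(150):  # simulate 150 users flagging the same video
    record_flag("bieber-baby-official", queue)

print(queue)  # ['bieber-baby-official'] -- queued for review, not removed
```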
There is a living debate that represents exactly what you say: people who say, well, your service has now crossed a threshold where it's so large that you shouldn't be able to do that anymore, and you should refrain from any moral judgment. I don't think that's possible. I think it's a moral judgment, or even a moral decision, to provide a service at all. So that's a point where I disagree. Can I follow up on that real quick and just say that I think that
a really interesting example of this that came about recently was YouTube adding a restricted mode that would segment some LGBTQ videos off to the side. I know YouTube has apologized for this and that most of those videos are now available, but at the same time, that is a company passing a judgment on what is, let's say, acceptable for adults versus children
or people who want adult content versus not, and to be clear, these were not all sexual in nature. Some were, some were not. From my perspective, I think that there's perhaps a diversity issue within the companies when these decisions are made, and I know that Google does produce a diversity report.
I know that in terms of global diversity at least, it ranks much higher than a lot of the other companies, but I still see this as perhaps an issue coming out of Silicon Valley, and I'd just be curious. Diversity is certainly important. I think diversity in making these decisions is absolutely essential. We did apologize. I think we made a mistake, and when we make mistakes, we back off of that,
and this was discussed at the very highest levels. Susan, who leads YouTube, was deeply involved in this, and of course, she was also deeply involved in walking this decision back. We will make mistakes. We will own up to them, and we will try to fix them. That is unfortunately the reality of a fairly complex system.
Hi. Thanks for your talk. I would like to come back to the technology aspect. In the announcement, Conversation AI and Perspective were mentioned in the text, and maybe I missed it, but I didn't hear anything about that.
Could you explain a little bit what these tools or pieces of software are? This is another Jigsaw project that is essentially about trying to help publishers decide what kind of tone of voice they want in the comment space of their website, so it's something provided for publishers to decide how they moderate their comments. It uses a basic AI to detect certain kinds of speech patterns, which can then be trained and judged to be appropriate or not appropriate by the publisher. So it's not a decision taken by us, but an AI applied to this comment space. And as we said before, the confidence rates here are very difficult, and the reports about this particular piece of technology have varied in terms of how effective it is. Yet publishers are really interested in it, because the ideal for them would be to have a way of setting the tone of voice in the discussions that follow on from the quality content that they produce. So Jigsaw tried to figure out whether there was a way to do that, and it's more of a Jigsaw question. I saw that it was in the announcement too, and I was hoping I wouldn't get any machine learning questions.
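(For readers who want to see what Perspective looks like in practice, here is a minimal sketch against the publicly documented v1alpha1 Comment Analyzer endpoint; the API key is a placeholder, the 0.8 threshold is an arbitrary illustration, and the request shape may have changed since this talk. As the answer above stresses, the score is only an input: the publisher decides what to do with it.)

```python
# Minimal sketch of scoring a comment with the Perspective API
# (Conversation AI). Endpoint and request shape follow the public
# v1alpha1 documentation; API_KEY is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: a real Google Cloud API key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Return the model's toxicity probability (0.0 to 1.0) for a comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# The publisher, not the API, sets the policy: here, a hypothetical
# threshold routes borderline comments to a human moderator.
if toxicity_score("You are a total idiot!") > 0.8:
    print("Route to moderator review")
```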
Thank you very much for this session. Usually we have definitions of what is legal and what is not legal to put out as expression, both hate speech and blasphemy and other things, and these are decided by parliaments and by courts. How would you look at integrating the decisions made by courts into automated systems? So, if I understand your question correctly,
you're asking whether or not you could take a decision by a court that in some way interprets a law on what's legal or not legal in terms of content, and then integrate that into an automated removal system.
Well, I think the challenge there would be that you have to code it in such a way that it accurately represents the gist of the conceptual analysis in the decision. That would require the analysis in court decisions to be somewhat standardized. Now, you can imagine a world in which you actually standardize the conceptual analysis of a piece of content in a way that's machine readable, so that you could immediately import it into a system. There have been discussions of that kind when it comes to courts, and I think there are a lot of really interesting research tracks on what are called cyber courts, where decisions are essentially coded in such a way that they're completely transferable between different kinds of systems. So if you have a court decision over here, you can import it into a moderation system over there. That asks a profound question about the nature of law and legal concepts: are law and legal concepts such that they're amenable to that kind of conceptual modeling or coding? I'm not sure they are. When it comes to legal philosophy, I'm a pragmatist.
I believe with Oliver Wendell Holmes and others that law changes, that societies change, which means that any codification of it in a particular moment will be a snapshot of what the law looks like in that particular moment, and it conceptually will not be stable over time. The language game around law changes, which means that it won't be possible
to actually import a coded decision from one instance to another. Another way of asking your question would be: should these decisions then be provided in a format that can be used for training machine learning systems, so that you train classifiers on them in order to have those classifiers moderate content? That might be quite interesting too, but yet again, it asks a profound question about the nature of legal concepts, where I tend to be on the skeptical side of whether it would work or not. But it's a very valuable research question.
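(A minimal sketch of that second idea, the training-data route, under heavy assumptions: the four labelled texts are invented toy examples standing in for a real corpus of court rulings, and the pragmatist objection above still applies, since the model freezes the case law at training time.)

```python
# Toy sketch: court decisions as labelled training data for a content
# classifier. Texts and labels are invented stand-ins; a real system would
# need an actual corpus of rulings and periodic retraining as case law
# shifts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Texts a court has ruled on, labelled 1 = illegal, 0 = legal (toy data).
texts = [
    "call for violence against a named group",
    "harsh but lawful criticism of a public figure",
    "explicit threat directed at an individual",
    "satirical commentary on a politician",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus a linear classifier: deliberately simple, since
# the point is the pipeline from rulings to a trained moderation model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new piece of content falls on the illegal side.
new_text = ["aggressive threat aimed at a group"]
print(model.predict_proba(new_text)[0][1])
```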
Hello, thank you for the very interesting panel. I'm changing the topic slightly. I know that Google has been very proactive in promoting sustainability, such as solar panels to power your offices and data centers. Are you able to share a little bit more on that and other initiatives that you are actively
introducing or practicing to enable sustainable development? Thank you. Yes, of course. We do believe this is a really important question. We have been carbon neutral for several years now. You can discuss the efficiency of the offset system, but we buy carbon offsets for when people like me travel across the world, et cetera,
so we're truly a carbon neutral company. We're looking at different kinds of sustainable energy investments across the field, and especially when it comes to our data centers, we've been very careful about how we locate them in order to be able to look at new ways of utilizing energy. For example, there's a data center in Hamina that uses offshore wind power in different ways
in order to power the data center as such. It is a key question, and for that reason the sustainability work sits in Google proper, not in Alphabet; it's been decided that it's a core part of what Google does. There are several other projects ongoing, and we would be happy to provide you with an overview. I don't have them all in my head, but an overview, if that can be helpful to you in some way.
That's very cool, I didn't know that Google was carbon neutral. I think we've got time for probably one or two more questions. I will come back to the algorithms and machine learning. I'm sorry, but it really seems very relevant to this discussion.
My feeling is that many companies propose algorithms as the solution to a problem that was possibly created by the use of algorithms in the first place. So we see how, for example, fake news gets visibility online because the algorithm behind it promotes content that triggers more interactions. That's less a Google example and more a Facebook example, I think. And then a company like Facebook responds with the same idea: now we will teach our algorithms to ignore this kind of content. So my question to you would be: how much do you believe in transparency and opening those black boxes? Do you see any risks inherent in doing that? For example, we open the algorithms, and then people start gaming them even more. On the other hand, if they are being gamed by botnets anyway, maybe we should be opening them. What's your take on algorithm transparency in the context of fake news and similar problems? So there are a couple of layers to your question that I would like to address. The first one is: are algorithms really
generating a filter bubble? This is a real research question that people are looking into. Just a couple of days ago, the University of Michigan, Oxford University, and the University of Ottawa released a 203-page report in which they looked at the use of search and the formation of political opinion. It's a really interesting report, and there are a couple of things in there
that I think are salient to what you're asking. One of them is that people actually do not only look at content that confirms their own prejudices. They look at 4.5 sources on average, and they seek out sources that are different from the ones they typically see. Maybe this behavior is different in social networks; I don't know, the study doesn't say. But the notion of a filter bubble being generated by your search pattern seems to be one that is not proven by research. It's rather challenged by it. So that's an important first part of the question to answer. The second part is: how do you think about algorithmic transparency? I think there are two answers to that. One is the one you gave yourself. If you put the algorithm out there in an adversarial environment, what will happen
is that it will be gamed, and you will be lost in the noise. And that's not a great idea. But that answer is only partially true. I think the more important point is that, if you look at what is happening today, it's actually better to try to explain outcomes than to explain the algorithms, because the algorithms are constantly becoming more and more complex. It's not certain that you would be able to make any decisions or do any informed analysis just by looking at a series of neural nets and how they're trained on a certain data set. You would be able to see the algorithm, but you would not be able to predict the outcomes, because of the level of complexity inherent in the system that you're trying to audit. So at that point, what becomes important
is that you are able to do statistical analysis of the outcomes and that you're able to talk about the outcomes. For search, for example, what we want to be very transparent about is how you can rank higher, how you can provide better quality content. So we have something called How Search Works, which is a broad campaign that goes in depth on all of the different things we look at. At this point, it's more than 250 different signals we look at when we try to determine the quality of content, and that number is going to grow, which means that every single piece of content we look at will be judged by a lot of different signals. So when we can talk about how that happens, it's actually more useful than just talking
about the algorithm behind it. And this has to do with levels of explanation. If you want to understand how a car works, and I start teaching you about quantum physics, you're not necessarily going to be helped by that because quantum physics does describe how the car works, but the explanation is going to be really long and complex. But if I talk about mechanics,
certainly you have another understanding of the car, and then you can do things with the car. And it's a bit the same with computer systems. Talking about the algorithms is a little like talking about quantum physics, when what you actually want is to talk about what the algorithm does: the outcome analysis of the algorithm. So it's about levels of explanation, and it's levels of explanation that help you be as transparent as possible about what you're doing.
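(To make the outcome-versus-algorithm distinction concrete, here is an illustrative sketch; it is emphatically not Google's ranking code. The 250 signals and their weights are random placeholders. The point is that even when the scoring function is fully visible, what you can usefully audit is the statistical behavior of its outcomes.)

```python
# Illustrative only: a toy "ranker" combining many per-document signals,
# with made-up random weights. Not Google's algorithm.
import random
import statistics

random.seed(42)
NUM_SIGNALS = 250  # the talk mentions 250+ signals
weights = [random.uniform(0.0, 1.0) for _ in range(NUM_SIGNALS)]

def quality_score(signals):
    """Combine one document's signals into a single score (toy linear model)."""
    return sum(w * s for w, s in zip(weights, signals))

# Outcome-level audit: instead of reasoning about 250 weights directly,
# score a corpus and describe the distribution of results.
corpus = [[random.random() for _ in range(NUM_SIGNALS)] for _ in range(1000)]
scores = [quality_score(doc) for doc in corpus]
print(f"mean={statistics.mean(scores):.1f}, stdev={statistics.stdev(scores):.1f}")
```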
Thank you so much, Nicklas. I think we're out of time, unfortunately. Thank you, everyone. I hope that you enjoyed this discussion as much as I did. Thank you very much. Thank you, Judith.