Encryption At Scale
Formal Metadata

Title: Encryption At Scale
Title of Series: re:publica 2015
Part Number: 72
Number of Parts: 177
License: CC Attribution - ShareAlike 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/31877 (DOI)
Production Place: Berlin
Transcript: English (auto-generated)
00:01
Thanks very much. So first of all, can I get a quick show of hands: who here is English-only in terms of language? Okay.
00:32
So for Q&A later, you can pick a language, and then we'll pick a language to answer. So to set a bit of context, it would be very tempting to just have a talk about encryption algorithms and symmetric key lengths and things like that,
00:47
but fundamentally, when you build systems at the scale that we do, it's very important to think slightly unconventionally. And so one of my favorite questions that I tend to pose to people who ask, how does Google approach security,
01:02
is I tend to inquire of an audience such as you, who would like to hazard a guess as to who is the world's largest tire manufacturer by unit volume? Anybody? Wow, deafening silence. It is in fact Lego. 318 million tires a year.
01:27
And so it's very important to think ever so slightly askew, because that's what gives you a general idea of how an attacker might be pondering how to do unpleasant things to your systems. Now I am just one guy up here. The Google core security team and privacy
01:44
team are over 500 people worldwide, and I stand on the shoulders of these giants. I am just one part of an extremely large team that includes cryptographers and code auditors and software engineers with a wide mixture of skill sets, nationalities, and locations all over the place.
02:07
Now an often overlooked aspect of security is the human factor; after all, encryption is a tool intended to be used by humans,
02:20
yet to date we have not been terribly successful at making encryption usable for ordinary mortals. This is our daily reality. People reuse passwords all the time, and if people cannot follow even relatively basic data hygiene practices,
02:43
how can we expect them to understand the importance of careful key management? And so we have to constantly remind ourselves that it is straightforward to teach other engineers about high technology, but the people we are really building systems for are the ones who intuitively
03:03
will just choose to reuse their passwords because it's the most convenient thing to do. One of the ways that we applied cryptography to this particular problem is by working on a set of technologies that ultimately became the FIDO Alliance's U2F protocol.
03:21
And this is a device. The category is called a U2F authenticator. This one happens to be produced by Yubico, but there are multiple other companies in the world making these. A French company called Plug-up makes them as well. And even though it looks like a very innocuous little piece of plastic with some metal on it,
03:40
we actually think that this is going to break the economics of phishing fairly substantially. Because if you think about how phishing works, it's a scale attack. It is high volume, low margin. The attackers only need relatively few successful phishes in order to monetize those who are successfully attacked.
04:03
Once they have these credentials that they've stolen from users, they will reuse them at a time of their choosing. So if we can limit the ability of the attacker to use these credentials, we break the scalability of the attack. And that's why that circle in the middle is so important. That's a touch sensor.
04:21
So in order to authenticate yourself, all you have to do is touch that little circle. It's not biometric. It's not a fingerprint. And as long as you're as conductive as a typical human being, it will register as somebody in front of this machine was physically present and chose to authenticate at that time.
04:40
And something that simple, one touch, is actually a significant security innovation. There's all kinds of cryptography built in. Inside that device is a secure element. It does key pair generation. There's a bunch of protocols involved, some of which are in the browser, some of which are on the server side. There's a lot of complexity.
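To make that concrete: at its core, U2F is a challenge-response signature scheme. The following is a deliberately simplified Python sketch, with the cryptography library standing in for the secure element; the real protocol additionally binds the origin, keeps a signature counter, and attests the hardware, none of which is shown here.

```python
# Simplified sketch of the U2F core: a per-origin key pair lives in the
# token, and login is a signature over a fresh server challenge.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the token mints a fresh P-256 key pair for this origin.
token_private = ec.generate_private_key(ec.SECP256R1())  # stays in the token
server_stored_public = token_private.public_key()        # sent to the server

# Authentication: the server sends a random challenge; the token signs it
# only after the user physically touches the sensor.
challenge = os.urandom(32)
signature = token_private.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies against the public key stored at registration;
# a stolen password alone can no longer complete the login.
server_stored_public.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("user presence verified")
```

Because the private key never leaves the token and signing requires a physical touch, credentials phished at scale lose their value, which is exactly the economic point made above.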
05:01
But if we do our job right, all that complexity is hidden from the user. And it's this kind of complexity that the typical user feels confronted with every day when we alleged security professionals tell them, this is what you have to do to stay safe online. This is the flight deck of the Space Shuttle Endeavour. It takes a little while to learn how to fly one of these.
05:24
Ideally, we should be in a position as practitioners to make the learning curve as flat and as short as it possibly can be to allow people to use the internet as a whole as securely as possible. And indeed, this is the ideal outcome. If we do our job, we are invisible, there's nothing going on.
05:47
And the only time people tend to notice us is when we demonstrate that we are humans and make mistakes. Now, one of the important human factors aspects is how do you communicate complex technical information
06:01
to a non-technical user. And my colleague, Adrienne Porter Felt, is a top researcher in the field of security UX specifically. And she pondered whether it was possible to fix SSL warnings within our Chrome browser. And this is the old SSL warning.
06:24
You get the sentence that explains to you in technically comprehensible English what's going on, what you ideally should do, but it's kind of down in the middle and it's at a very small point size. We then give you two options. They're visually identical, so we provide no visual cue whatsoever for what the user would ideally do.
06:48
So we decided to redesign this with the goal of guiding the user and providing an opinion about what the user should be doing to stay safe. So in this case, we very clearly highlighted the thing that the user should be doing.
07:05
It is the most visually prominent element other than the big red lock with the X. We then offer a smaller option, the advanced choice, which is what the user would need to do if they did want to proceed despite this warning. And if you click on that, you get more explanation about what's actually going on,
07:23
something that you don't get by default, because we found that users don't read it anyway. And then we say at the very bottom, you may proceed, and we even say in parentheses that it's unsafe. These relatively straightforward changes reduced the click-through rate on these warnings by over half.
07:43
And we are now well below a 10% click-through rate in the presence of an SSL warning. The other slightly unconventional part of encryption is the applied economic aspect of it. We have been working very steadily on putting HTTPS everywhere throughout our
08:04
infrastructure and providing as much of our services to users in HTTPS form. This goes all the way back. Gmail, when it first launched on April 1, 2004, had HTTPS, but it wasn't the default option.
08:21
In 2008, we actually created a sticky preference because we had heard so clearly from our users that they wanted to have that be the default. And ultimately in 2010, we set HTTPS as the default for the user as a result. So we have evolved over time. Some of this evolution was bounded by technology.
08:43
It used to be the case that providing services over HTTPS was slower, had higher latency, and caused more CPU burden on the server side. Over time, all these problems have gone away. And if you still have questions about whether or not you can produce a performant website that delivers via
09:03
HTTPS, I encourage you to visit istlsfastyet.com, which is a site put together by some of my colleagues. Now, the other thing that making HTTPS pervasive allows us to do is to lead by example. And it has the benefit of allowing us then to compare ourselves and use the data available to us and produce transparency reports like this.
09:28
Last June, we released the Safer email transparency report. And you'll notice that when it started, the amount of email that we sent to other servers
09:41
that could accept STARTTLS, standard SMTP email over TLS, was on a sharp upward climb. And a number of very prominent mail providers worldwide, after we launched this report, suddenly enabled TLS.
10:01
Because they noticed that it was being noticed that they didn't provide it, and in some cases their users also told them very clearly that this stuff matters. It's been a very slow arc since then. We wish it were higher, and we're going to continue working with our peers in the industry worldwide to try to get this number up as high as possible, both on the outbound from Gmail and on the inbound in terms of what we receive.
10:21
And for those of you wondering what the sawtooth pattern is on the inbound side: that's weekends. There's less email being sent.
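As a rough illustration of the capability the report measures, here is a minimal probe, using Python's standard smtplib and a placeholder MX hostname, of whether a mail server advertises STARTTLS:

```python
# Minimal STARTTLS capability probe; mx.example.org is a placeholder.
import smtplib

def supports_starttls(mx_host: str) -> bool:
    """Return True if the SMTP server advertises the STARTTLS extension."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")

print(supports_starttls("mx.example.org"))
```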
10:44
We also announced late last year that we would use HTTPS as a search ranking signal, thereby providing an economic incentive for website operators to actually encrypt their traffic, even if they initially felt that it might not be necessary to their business. This is some data that was put together by builtwith.com.
11:02
And this was the last data point before we announced that we're using it as a ranking signal. And ever since then, with one small dip that I can't explain at the moment, the progression of the top 1 million sites of the internet has been very, very consistent, going steadily up.
11:24
We also are aware of the fact that until we can deliver ads via HTTPS, we run the risk of generating mixed content warnings in browsers if ads are unencrypted but content is encrypted.
11:40
So we announced in January that we were beginning to very systematically update all of our systems to deliver everything, including ads, over HTTPS. And the big milestone that we set for ourselves is mid-year this year, on June 30th. Another way that we apply economics to this problem is by providing money
12:07
to encourage researchers and engineers out in the world to tell us about bugs in our stuff. The Vulnerability Rewards program has been around since 2010. It was initially somewhat controversial. It is now accepted as an industry best practice.
12:22
So much so that there's actually a company, HackerOne, that manages Vulnerability Rewards programs on behalf of firms who don't necessarily have the staffing or the desire to manage their own. So it is something that is so useful that it's now provided as a service. The Patch Rewards program allows us to pay out in situations where somebody provides a security related patch to a widely used open source project.
12:50
It need not necessarily be a Google project. We also offer Vulnerability Research grants and we also have a separate Chrome Awards program. In 2014 alone, we paid out over a million and a half US dollars to over 200 researchers for over 500 bugs.
13:08
This has been a very successful program and it has changed the economics of discovery and reporting security vulnerabilities out there.
13:21
Now the final facet of how we approach security is attention to detail because a lot of this is very non-obvious. First and foremost, the math is solid. No matter what you hear about the number of vulnerabilities out there, about whether or not a given entity can or cannot brute force a given algorithm, the math is provable.
13:45
Trust the math. At the same time, implementation does matter. Humans write the code, and humans are very good at generating bugs. Heartbleed was a very prominent vulnerability in the widely used OpenSSL library out in the world.
14:04
Similarly, POODLE was another vulnerability that our researchers found once we started spending more and more time thinking about what the subtle implementation issues are in some of these widely used crypto libraries and how we could make them better.
14:22
As a result, we've chosen to create BoringSSL, with the aspirational hope that we won't generate a whole lot of very easily marketable and nameable vulnerabilities. And we are slowly but surely rewriting our own crypto and providing it as open source so that everybody benefits, not just us.
14:46
It's important to bear in mind that there are a great number of interconnected parts that all have dependencies. And we saw in the summer of 2011 an attack that we hadn't previously encountered in the wild but began to be reported by Chrome users in Iran.
15:05
This was ultimately known as the DigiNotar incident. It turns out that a legitimate root CA named DigiNotar had been issuing certificates on behalf of Google even though they were not authorized to do so in any way. These certificates were being used in Iran, and Chrome was configured at the time to take note
15:28
of any certificate claiming to be from Google and to check how it ultimately chained up to a root: whether it chained up the way we thought it should, to the root that we use for our certificates.
15:42
And when, for the first time, this happened not to be the case, users got this error message. We believed this attack was plausible enough to put some code into Chrome to detect it and report it, but we'd never seen it. Today, if you look into the Chromium open source repository, you will see that certificate pinning has a lot
16:02
more code dedicated to it, because we now know this is something that does happen in the real world. And indeed it's happened on multiple occasions. It happened with DigiNotar in 2011. There was an incident with TurkTrust, a CA in Turkey, in 2012. The French government agency ANSSI had an incident in 2013.
16:21
And we've effectively had one pretty much every year where we discovered a problem with issuance on the part of a root CA. Every single time this happens we publicize it, because transparency is very important to the overall herd immunity of the internet. There is no benefit to keeping this type of thing quiet, and there's a great deal of benefit to making sure everyone is aware of the risks.
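The pinning check described here can be sketched roughly as follows: fingerprint the public key (SPKI) of the certificate the server actually presents and compare it against an allow-list shipped with the client. A simplified sketch using Python's ssl module and the cryptography package; the pin set is a placeholder, not Chrome's real pin list:

```python
# Sketch of SPKI pinning: hash the server's SubjectPublicKeyInfo and
# compare against pins baked into the client. Placeholder pin set.
import hashlib
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import serialization

PINNED_SPKI_SHA256 = {"<hex-pin-placeholder>"}

def spki_fingerprint(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    spki = x509.load_der_x509_certificate(der_cert).public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

# Even a certificate that chains to a trusted root fails if the pin
# does not match, which is the property that caught the rogue certs.
print(spki_fingerprint("example.com") in PINNED_SPKI_SHA256)
```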
16:47
We also announced last year that we were working on end-to-end encryption and integrating it into Gmail. We released it as open source on day one and we've been very encouraged not only by the offers of collaboration from organizations like Yahoo but also by the bug reports that we've gotten.
17:06
To date there haven't been any serious crypto bugs in the crypto library that we developed, nor in the OpenPGP implementation. But there have been very valid reports that in some cases have actually resulted in payouts from the vulnerability rewards program.
17:24
We believe end-to-end encryption is a very relevant and important tool, and that's why we're working on it and will continue to work on it. Other crypto-related things: in 2008, when Chrome was first announced, we introduced the notion of auto-update.
17:41
It was initially very unpopular, particularly in enterprises, but this is by far the fastest way to get software updates, including crypto-related ones, to users who might not otherwise take them. We announced Chromebooks and Chrome OS in 2010, providing a lot of encryption on
18:01
what little local storage there was, but also a degree of assurance about the environment. We enabled forward secrecy in our TLS by default in 2011. And after 12 years of SHA-2 being out, we finally announced in 2014 that we were sunsetting support for SHA-1 certificates.
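Whether a given server actually negotiates a forward-secret key exchange is easy to check from the outside. A small sketch with Python's standard ssl module; the hostname is a placeholder:

```python
# Probe which cipher suite a server negotiates by default and whether
# it provides forward secrecy (ECDHE/DHE, or any TLS 1.3 suite).
import socket
import ssl

host = "example.com"  # placeholder
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        name, version, _bits = tls.cipher()

print(version, name)
print("forward secret:", version == "TLSv1.3" or "DHE" in name)
```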
18:22
And most recently we've seen a number of pieces of software out there that engage in ad injection. What's worse is that they do so by fundamentally breaking the trust of TLS in the local browser, in a way that allows all of a user's TLS-protected data to be intercepted without any warnings.
18:46
So there are unintended consequences of some of this type of unwanted software. Fundamentally we believe you have to be holistic about security. You can't over focus on just the encryption.
19:01
You have to look at the authentication. You have to look at how they play together. You have to look at all the facets of a system in order to make sure that it is secure. And of all of these facets, encryption is of course an important one. And with that, I'm going to ask Anna to come up and we're going to have a chat about various encryption-related matters.
19:29
One of the first questions that came up while you were talking is: what is the internal structure? You said over 500 people on the team, cryptographers and engineers.
19:40
Are there psychologists to run these user tests or assess what people need, or human interface designers? Can you estimate the share of such people? I don't know the exact breakdown by percentage, but we certainly have teams that include user experience researchers as well as user experience designers.
20:01
So we do have people who are formally trained in understanding how user interface works. And all of the ones that I can think of right now at some point decided to specialize in the security and privacy aspects of this. So we actually have user experience researchers focused specifically on the security and privacy related outcomes.
20:21
And there are several handfuls of them. We have a large number of engineers who develop code. We also have a large number of engineers who audit code. Because one of the best ways of assuring that your own infrastructure is secure is to actually have that code be audited. It's very labor intensive, it's very detail oriented work and we have a fair number of people who do that.
20:42
We have an assortment of people who manage these various programs, such as the vulnerability rewards program. And then we have them all over the world, because given that we have a global audience, we have to be operational 24-7, 365. And you also have many projects on GitHub, where you really tell people they can collaborate and contribute to the product.
21:08
But how do you ensure that it's trustworthy and secure if you have contributors from outside? And what is the internal process? Even if you commit something to the software, how is it audited? What is the three-stage process, or whatever it is?
21:22
Typically, you can't just as a single individual commit any code changes. It has to go through a code review, somebody else has to review it, check it and approve it. And that is just a common practice. The way Chromium works, and I don't know it in detail, but also it takes a while until somebody is allowed to be a committer. And this is very similar to a standard open source type project.
21:43
You don't overnight become a Debian committer or an Ubuntu committer or whatever. The systems aren't actually fundamentally that much different. Okay, so how much access do people get? Like for Google End-to-End, did you get many external commits?
22:01
Or do many people try to commit or is it kind of really such a huge system? We don't. We actually, at the moment, in terms of external committers, we've mainly had external bug reports from people who are downloading the source, looking through it, trying to get it to run. The primary external committers right now to end-to-end are Yahoo.
22:20
And they actually have their own repo. We reconcile them at various intervals because they certainly have had contributions into our core code and certainly they take a bunch of the changes that we have. And that's how we work. In terms of individual contributors, we'd actually like to encourage more of them. And you also say you do a lot of software yourself.
22:41
So there is one philosophy of using software that is quite well known and quite widely used, so it's tested, or you can assume it's tested. How did you decide to build most of your products on your own? A lot of the development that we do relies on infrastructure that is unique to Google. That's how we do stuff at the scale that we do.
23:03
When we can use existing open source software, we do. And we also very frequently contribute back to that open source software. In instances where we cannot find an open source tool to do what it is that we want to do, or that does it in a way that we want it to happen, then we inevitably wind up writing our own.
23:22
And as often as we can, we'll make it open source, which is exactly what we did with end-to-end as well. We, at the time that we started development, were unable to find a JavaScript crypto library with the characteristics that we wanted. So we decided to write our own and then we made it open source. And now we have a bunch of external people also beginning to integrate it into their tools.
23:45
And how do you see your responsibility? Because you provide services to a lot of people, so you have a really big point of failure if there is a security flaw in your software. How do you manage this? We try to create the incentive for people to notify us through the VRP.
24:01
So I guess that's one way of trying to manage that particular risk. We have internal mechanisms, including code reviews, that try to manage the risk along a different axis. And we also have a lot of standard engineering practices that, in aggregate, are designed to try to minimize the risk
24:20
of something actually launching with a security vulnerability as much as possible. Then to another topic: how do you weigh giving the user a neutral choice, one he can make by himself, not influenced by anyone, with options which are kind of equal,
24:42
against playing the parent and telling the user what he should do, or influencing him so that he doesn't even know he is doing what you want him to do? We also had this in the Android system we already talked about: previously you had the ability to manage which permissions your app should get,
25:01
and this feature disappeared and there was a lot of criticism, I think, and people didn't understand why it went away, because it gave you a lot more control; but you chose to give the user a simple choice instead. How do you weigh this? We try to do as much applied user research as we can. We do as much A/B testing as we can, we do generally as much testing as we can,
25:24
in situations specifically around security where we know that there is a more correct path or an absolutely correct path. If we know there's something fundamentally wrong with an SSL cert that isn't as simple as the local clock is wrong, we feel it entirely appropriate to guide the user as forcefully as we can,
25:45
which is what you saw in the contrast of those two designs. It makes sense that if you can't trust the SSL, it's our responsibility to let you know. We give you the option of choosing the far less secure path, but most people don't have the technical depth to really understand the subtleties.
26:04
They don't even know what a cryptographic certificate is. They don't know what it means to be untrusted. They don't know what the various technical terms are. So in those situations where we are the experts and we do know, we can guide them in the right direction. It's a much trickier proposition when it's a value judgment where you really do need the user's input.
26:25
And that's why we're trying to shape these types of decisions, whether in Android or in the Chrome Web Store, which has a similar permissions model, so that you provide enough information for the user to make a choice; but it turns out it is also very easy to overwhelm the user.
26:43
And they stop reading these choices and they stop looking at the permissions, and very often they'll just say, I want this app, I don't care, and they click through. So I don't think that we have the ultimate answer yet for how to communicate best this type of risk to the user,
27:00
but we're certainly continuing to try and work in that direction, and we'd like to keep getting better. Okay, you said that users try to click through, but with this slightly more difficult way of clicking through, do you see some kind of adaptation effect? Because as soon as a user has found the button he has to click to get rid of this unnerving warning,
27:23
do you see, some weeks after you released this version, a decline in the percentage of users who choose the right option? Not to date; also, these types of warnings are infrequent. It would be very depressing if people encountered these sorts of things daily or even weekly.
27:44
Actually I do. Then maybe you surf to a lot of sites with self-signed certificates, I don't know. Might be, yeah. Yeah, maybe. So I suppose if you encounter it very frequently then maybe yes, there's a certain amount of habituation that allows you to click through,
28:01
but then there's the question of what the majority of the world sees out there, and the majority of the world shouldn't see those warnings on a daily basis. Okay, another thing about the autonomy of the user and what you're doing: Google Cloud Storage, which you announced in 2013,
28:20
is, I think, now encrypted on the server side, and there was already criticism again because you keep the private keys; you have the keys to encrypt and decrypt the user's data, and then it's transport encryption to get it to him. And I think, in my experience, the user doesn't know that you hold the keys.
28:43
The user really has no idea whom he has to trust, and can't make a really conscious decision about whom to trust. So why do you do this? Why do you choose not to let the user keep the keys for decrypting the data before it's sent over the wire?
29:01
I think much depends on what the threat model is. I mean in order for a virtual machine to run in our cloud infrastructure, it has to have access to the data. What we know from our cloud customers is they value reliability, they value the security against external threats, and to date it hasn't been a significant customer requirement
29:24
to get into the gory details of precisely what kind of key management is done. That adds a degree of complexity that if customers really wanted that level of control, they might very well buy their own infrastructure. They're using and taking advantage of cloud infrastructure
29:41
because they're trying to simplify their lives. I think ultimately as the product set evolves, we are going to provide as many options for more finely grained security as we can, but for the moment, we're aiming for the largest number of people, and the majority of cloud users right now seem to be entirely content with the fact
30:03
that we provide storage encryption by default, but beyond that, they're not burdened with a whole lot of key management. That may change.
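The distinction at issue, server-side encryption where the provider holds the key versus client-side encryption where the user does, fits in a few lines. An illustrative toy using AES-GCM from the cryptography package, not a description of how Cloud Storage is actually implemented:

```python
# Toy contrast of server-side vs. client-side encryption at rest.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

data = b"customer object"

# Server-side: the provider generates and holds the key, so it can
# transparently decrypt, e.g. to serve the object back or index it.
provider_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
stored = AESGCM(provider_key).encrypt(nonce, data, None)
assert AESGCM(provider_key).decrypt(nonce, stored, None) == data

# Client-side: the key never leaves the customer; the provider only
# ever stores ciphertext it cannot read, and all the key management
# burden shifts onto the user.
customer_key = AESGCM.generate_key(bit_length=256)
nonce2 = os.urandom(12)
uploaded = AESGCM(customer_key).encrypt(nonce2, data, None)
```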
30:24
If you mention key management: there's End-to-End, which you mentioned already in your presentation. Will this be a universe of Google users, of Gmail users, because you have some separate key server infrastructure, or how will you integrate End-to-End into the normal PGP key world? It turns out that really the hardest part about using OpenPGP anywhere is the key management, both local key management and key distribution.
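The end-to-end property itself is simple to state in code: key pairs are generated on the user's machine, only the public half is ever shared, and the provider relays ciphertext it cannot read. A minimal sketch, with PyNaCl standing in for OpenPGP purely for brevity:

```python
# Minimal end-to-end sketch: private keys stay local, the provider
# only ever sees public keys and ciphertext.
from nacl.public import PrivateKey, SealedBox

alice = PrivateKey.generate()   # generated locally; never uploaded
alice_pub = alice.public_key    # the only thing that gets distributed

# A correspondent encrypts to Alice's public key...
ciphertext = SealedBox(alice_pub).encrypt(b"meet at six")

# ...and only Alice's local private key can open it.
assert SealedBox(alice).decrypt(ciphertext) == b"meet at six"
```

Everything hard lives around this sketch: discovering, distributing, and verifying those public keys at scale is exactly the key management problem being described.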
30:44
We haven't actually announced how we're going to do that yet because we're still working on it, because we realize that we don't want to make the same mistakes that we've made historically. So our intent is not only to make it easy for Gmail users to use end-to-end to send email to other Gmail users,
31:01
but of course we want to interoperate with the outside world. If we didn't want to interoperate, we wouldn't have chosen OpenPGP to begin with. So given that it's a matter of public knowledge that Yahoo is our collaborator, I think it's very clear that some kind of system has to be devised that allows Google users of End-to-End to exchange keys with Yahoo users of End-to-End,
31:23
and this must not be a painful process, because then we're back to where we started with the MIT key server that has an infinite number of extremely ancient keys that no one else has the password to anymore. One last question from my side. Google is really big, and Google can spend a lot of money on security,
31:43
and for sure it gives a good image, and we always say we want privacy to be a market advantage, an incentive for companies to come back to privacy and protect the data they store. Do you think this is viable? Because Google is really a large player.
32:02
Do you think privacy could be a market advantage in general, or is it just your pole position, the position of big companies who can build on the trust they already have and make themselves a bit more trusted? I think making sure that you earn user trust and keep user trust
32:22
is one of the most important characteristics of any company, regardless of size. I spend some of my time with startups, and they frequently ask me how big do I need to be until I need a security team? What do I need to do? Is security important early on? Should I not be spending my resources on something more important?
32:43
And ultimately trust is a positive brand characteristic of any successful brand, and maintaining user data in a secure way is part of that trust. I agree we're very fortunate that we can hire this many people to secure all of Google services, and that puts us in a great position,
33:02
but it doesn't mean we're the only ones able to do that. If you're a startup, think about what your threats are, think about what your risks are, and then figure out what the appropriate amount of security is. What type of information do you have about your users, and how much do they care about it? Is it trivial information? Is it a collection of cat pictures?
33:21
Versus is it much more sensitive information? And depending on the answer to that, the rest of the answers sort of fall out from there. Do you see a solution for how to create this incentive to apply really good security to products? Is it more of a self-regulating question,
33:41
if people become more aware that security is important and choose services which provide them security? Or does it have to be a political solution? How do you think you can really bring this into reality? I don't think that there is a single path, but we've seen certain things work. If you can make the consumer care about security,
34:02
companies will listen. So, for example, starting with, I believe it was Jelly Bean, Android hardware manufacturers started explicitly saying that they would keep their hardware up to date with the latest Android release within some period of time,
34:22
because suddenly this became something that they could compete against other handset manufacturers on. And it seemed to be something that the users did in fact care about, because the users did want to always have the latest and greatest, because they had become accustomed to having these updates. So that's a case where the market decided
34:40
that there was value to software currency, where previously that value wasn't as obvious to people. That's why software updates are really, really important, not just for currency, but also to get security updates. So I think it's a combination of things that individually, as long as the users of the products or the devices care,
35:03
they will find a way to communicate that to the technology providers. Okay, those were basically my questions, so we'd like to open up the discussion to Q&A.
35:22
Alright, thank you very much for the interesting talk, and also for the interview session that you did. We fortunately have another 20 minutes for Q&A. So if you have a question for the speaker, for Stephan, just raise your hand and I will bring you the microphone. There you go.
35:41
Hello. I really think Google does a lot of good in terms of IT security, but then there are also situations that I think are contradictory to that. One small example would be that the Google security team blog is not HTTPS by default, which is kind of weird.
36:00
And a maybe bigger issue is that a lot of people have Android phones with old Android versions that don't get any security updates, which is, I think, a very, very big problem. And I don't see Google working on a solution for that. I don't have the solution, I don't know what it would look like, but something like providing updates for a longer time
36:23
or providing more possibilities to upgrade old phones; I think this is a very obvious, very big security issue that Google is kind of responsible for, because it's Android, and there's no solution for it. So those were two things. The first thing: Blogger, no HTTPS yet.
36:40
We know it makes us sad. We look forward to the future. As far as the Android question, Android is an open ecosystem. And so with that, Google is not in a position of control. There are many users of Android who take the Android open source project,
37:01
build it, add their own specific facets, and put it on to their hardware. We will provide software updates and particularly security patches for as long as it's practical to do so. But over time, we have also realized that it is very burdensome for the Android hardware producers to do full builds of the OS to update.
37:26
So over the last couple of releases, you will have noticed that there has been much modularization of the services delivered on Android. So for example, with Lollipop, for the first time we've delivered a version of WebView, which is the HTML system renderer.
37:41
That can be updated independently of the OS. So it's very clear that we're aware of exactly the problem that you describe, and in the vast majority of the cases, we actually do provide the security patches, but it is up to the various other participants in this ecosystem to actually implement them. And for a lot of these vendors,
38:02
it is fairly burdensome to do an entire OS build versus support just a relatively modular update. So as time goes on, I think things are going to get a lot better. I think if we had perfect hindsight and a time machine, we maybe could have made things a lot better five years ago, ten years ago,
38:23
when a lot of the infrastructure for this was built. But at the same time, there also comes a point beyond which it is not practical for Google to continue work on this. Nothing stops a hardware manufacturer from continuing to update the base OS if that's what it is that they feel their customers demand.
38:46
All right, any other questions? Please raise your hand. Hey, Stephan. Brennan here.
39:06
So it's kind of a two-part question, and the first is a hypothetical. Considering that Google controls the entire stack: the browser in this case, the extension app store,
39:20
and the automatic update chain of both the browser and the extensions; and since the security of end-to-end PGP encryption lives in the browser extension, in theory that could be compromised: if Google were compelled by a state-level actor
39:40
under a gag order and such, such as by the NSA, they could be compelled to insert a vulnerability that allows that person's keys to be compromised, all without the user being aware of it. Is that in theory correct, or am I missing some part of the extension? So, a couple of different things. First of all, End-to-End is designed to keep the private key material local to the machine.
40:03
Google does not have that private key material. So, short of compromising that user's endpoint, your scenario won't work. But the flip side to part of that is also, we have in many ways the luxury of having some fairly assurable platforms.
40:24
So if you are a truly at-risk user, and you want to be as careful as you possibly can, you would probably be running end-to-end in Chrome on a Chromebook or any other Chrome OS device, because the Chrome OS device gives you assurance of the integrity of the OS effectively from the metal on up.
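In rough code terms, the assurance chain being described rests on signature checks like the one below. A toy sketch with Ed25519 from the cryptography package; the keys and update image are hypothetical, and Chrome OS's actual verified boot is considerably more involved:

```python
# Toy verified-update check: accept an image only if it verifies
# against a vendor public key baked into the device.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()     # held by the vendor
device_trusted_pub = vendor_key.public_key()  # baked into the firmware

image = b"os-update-image"                    # hypothetical payload
signature = vendor_key.sign(image)            # produced at build time

try:
    device_trusted_pub.verify(signature, image)
    print("update accepted")
except InvalidSignature:
    print("update rejected: unsigned or tampered")
```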
40:42
And so you have various assurances. There are cryptographic signatures applied, both in terms of the boot image, the updates that get pushed down from Google are also signature checked, and then any extensions that you install off of the Chrome Web Store, again, are also verified. So you can feel very good about the integrity of the software
41:01
so that intrinsically the software isn't compromised. And in terms of the private key, as I said, the private key is stored locally. But it sounds like Eric wants to pipe up, so by all means. Another question? A comment. Sorry, sorry. No, he's helping answer the question.
41:22
All right. I'll let it pass. Eric Grosse, also from the Google security team. I think I understood your question, which is: what if the government ordered us to put a backdoor into that extension that leaked the keys back some way?
41:41
And that's been part of our threat model since the dawn of that project. I insisted that the project could not proceed unless they had a solid answer to that, because we know from the Hushmail case in Canada, that's exactly what would happen. So all of that source code is in GitHub, and the part that is operating on this
42:04
is kept sufficiently separate from the rest of the browser that the part you have to inspect is just that part that's in GitHub, really. And so that's our best effort at showing that, yes, we could be compelled to put a backdoor in, but it would be visible to the world,
42:21
and they're never going to ask us for that. When they want us to do something in secret, and they can see that a backdoor would be externally visible, they don't even ask us. That's our intended defense.
42:40
Any other questions? Encryption is great if you have third-party threats, but it doesn't protect me from you. It doesn't protect the user from the provider. So I was just wondering whether in your threat modeling you include yourselves as a threat to the users,
43:03
because that would be a massive step forward. And second, I can't help being a bit skeptical about Google's commitment to privacy, because when we submitted access requests, which is one of the basic data protection rights in Europe, you have a right to ask a company or a government
43:22
what data they have on you and what they do with that data. You not only sent us to the US to ask that question, which goes against EU law, but then the US never got back to us.
43:42
how am I to believe that you care about my privacy? So first of all, I am not an attorney, so it is very difficult for me to respond to issues specifically around EU law, despite being an EU citizen. As far as our threat models and our trust models, end-to-end is specifically designed in a way
44:03
that it trusts Google as little as possible. In the current implementation that's been out there since June, and which was updated in December, we don't even trust our own DOM. So we actually go to very significant lengths, provably so, because it's open source, to not trust our own infrastructure,
44:21
because that was the design goal for this particular tool. But at some point, you also need to consider whether you as an individual, what is your threat model and what is your trust model. What are you trying to do? Maybe for you, the right threat model involves keeping all of your systems local,
44:41
all your data local. But then you contrast that to how much assurance do you have that you can keep up to date with all the software patches and updates that you need to operate. There's a cost associated with that. And that's not a decision I can make for you, that's a decision that everyone can make for themselves. Various of us on the Google security team run our own servers at home
45:01
for a variety of reasons, to do a variety of different things across a wide range of platforms, across lots of different OSs and for whatever reason, some are hobby projects or not. So do we include ourselves in general as part of the threat model? We look at it as making sure that the systems operate as designed,
45:21
that access controls to data internally are correctly defined, and that no unauthorized internal users have access to particular data. But I'm not really going to get into whatever information access request you submitted to some Google subsidiary in Europe
45:41
who then referred you to the US, because I'm simply not familiar with that particular case. And it would be foolish of me to try and comment without a great deal more information. And I'm not an attorney either. All right, we still have time for more questions, if there are any. You all good? You all satisfied? No more questions?
46:03
Well, if that's the case, please give another warm round of applause to Stephan. Thank you very much for the talk and the time for Q&A. Thank you.