
Quo vadis Cyber Security?


Formal Metadata

Title
Quo vadis Cyber Security?
Part Number
96
Number of Parts
177
License
CC Attribution - ShareAlike 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Production Place
Berlin

Content Metadata

Abstract
Eric Grosse from Google’s Security Team in conversation with Jillian York from the EFF about Cyber Security.
Transcript: English (auto-generated)
How's this working?
Yes, excellent. Good morning. How are you this morning? Good morning. Excellent. So did you want to kick off a little bit and just tell us a bit about the work that
you do so they've got a basis for our conversation? That's a good idea. So, Eric Grosse: I have been growing a team in security and privacy at Google over the past eight years, and it has grown quite a bit. We started off probably around 30 people and now we're over 500 people.
And it's really been exciting because over that period of time we've had all kinds of challenges, some of which you've heard about in newspapers. And the important thing I would say is I've found that what really makes a difference in a security and privacy team is the people that are in that team. If you don't have people that are really skilled and motivated the right way and have
the right executive support, this is an almost impossible challenge. We're up against extremely skilled adversaries and it takes a really strong team to do so. I'm blessed with a really wonderful team that's been able to accomplish a lot.
So that's the most important thing I would say when it comes to this is getting the right people. And I think that's really been great. There are also, as I say, challenging adversaries and that's partly meant government espionage surveillance-type agencies and I think that's probably a topic that's especially of interest
to people here. At least that's the sense I've had in talking with some of you already. And so that's really been a big challenge and we can go into that some more. But there are also others who may also be able to snoop on the network or even modify
content in the network. So one of my first projects, actually over the past 10 years, I've been preaching pervasive crypto and I'm happy to say the world today in terms of network security is far better than it was back then. We now have SSL pretty widely deployed, not just at Google but in many websites.
And that's measurably given those other groups a harder time as they try to modify the content that you see. You can now see the real material that the author intended you to see and not some modified version. So that's been good. But we have other adversaries as well. There are criminal elements that want to take over your account and do things.
So we're defending against a lot of different adversaries and encryption helps a lot but there are other things that help with that as well. So I'd love to get into some of those topics with you. Absolutely. I want to come back to encryption. But first I want to ask you one thing. You've got such a great team. What do you think that Google is doing really well and what's one thing that you
think that Google is failing at in terms of security? So as I say, this promotion of higher standard of security both in encryption on the network but also in fighting off malware, making products that are more resistant to abuse has really been a remarkable accomplishment.
So I'd say we're doing pretty well on that front. Where it's more challenging is giving people, the ordinary consumer, a clear understanding of the various choices they have and what they need to do to defend themselves against
being socially engineered. And we've made progress there but that remains a challenge. I suppose it's natural. It is hard to explain technical things to a general audience. So I believe that ideally we build security in so well the natural easy way to do something
also turns out to be the secure way and we'll keep working to make that more possible. But that's where I would like to see us improve. Make things simpler and yet also more secure at the same time. Great, well speaking of that, there was an announcement last fall about Google implementing end-to-end encryption in some of its products including mail. What's the status of that and where do you think, do you think that that's going
to solve the problem that we're talking about? Okay, so that's a good topic. What we have, and it's open source, it's in GitHub, is some code that allows you to do end-to-end encryption directly in your browser so that if you have a particularly well-hardened endpoint, and I use a Chromebook for example, that's about the only thing I type on,
if you have a device like that where you're not running a command line client, you can still do end-to-end encryption where the keys are just under your control in that endpoint. They're never in any of the intermediate systems. They're not in Google for example.
And that's a workable system. I've been using PGP within Gmail for many years, so that's a workable system. It's not yet to the point that I think it's ready for prime time all consumers to use. We're still iterating on different user interfaces to try to come up with something
that we're confident the average person will correctly use and will be sufficiently interoperable with other tools that they'll be able to communicate with people. And I think that's an ongoing research project. One of the leaders of a part of my team, Alma Whitten, actually did her PhD thesis
on just this problem of why Johnny can't encrypt. That was a number of years ago, and we're still working on that problem. So we're not done. Fortunately, most people don't need that amount of encryption. Most people today, if you're using a system like Gmail, have solid encryption from the browser to the Google server,
have encryption of the data as it's sitting on a disk at Google, have encryption as the mail goes from Google to Yahoo, say, if you're exchanging with others. So we actually have pretty good encryption for mail today in that practical sense, and people aren't having to worry about how do I get the right public key for this person
that I haven't met before. So we have actually pretty good protection now. If you're one of these dissidents in a country where you're fighting against your own government and you're worried about even a court order coming to Google, then you need to go to this more extreme measure of end-to-end encryption.
There's a downside to it, though. If you turn that on, then you lose things like translation. I mean, I find I'm not fluent in German. I get an email in German. It's just great that I can push a button and I see it in English well enough that I can actually communicate with someone where we don't actually share a language.
I think that, I mean, in the history of mankind, when has that been true? So that's a tremendous value that one loses if you go to end-to-end encryption. So I don't ever think that end-to-end encryption will likely take over 100% of mail,
but good protection on what's there. That sort of higher-protection option is coming. So we're making good progress there. Great. Any estimated ETA? Well, the JavaScript code is out there now, so people are using that today. We have these various experiments going on inside,
and I won't predict when that will be available. I actually was delighted when I first got to Google to learn that we have this engineering culture that we don't promise things in advance. When the engineers feel the code is ready, then it ships, and not until then.
And if you don't tie yourself to a date, then you don't force yourself into a bind of shipping something before it's ready. So I'm going to stick with that. All right, fair enough. So you talked a little bit about the struggle between wanting to ensure that one's mail is secure, but also losing some features in the process. And from an engineering perspective, do you feel that there are other ways in which Google is forced to compromise
between providing user security and making money? No, I don't see that compromise happening at all, actually. So this idea that I can subscribe to Gmail, so I'm paying so much per year, and Google offers that service,
and it's engineered in such a way that the cost to Google of supplying that service is fairly low. So ads or something don't even come into it. No need for ads to be part of that. Subscription service works just fine there. And I think that shows that, no, there's not a conflict. On the contrary, we think that it's important that we add security and privacy.
And we're actually probably further on that front, just because of the passion of the people on my team, than the demands coming from our users. It's almost like we are so constantly seeing these threats
more clearly than our users do. We're probably ahead of our users in trying to add those extra protections. So no, I actually don't feel a conflict there at all. Across all of the products, what would you say that Google is doing to raise the cost of intrusion by state actors such as the NSA, or any state actors, really?
Yeah, it's more than the NSA, I promise you. Yeah, maybe we can come back to which governments are busy. So what do we do? The simplest way that people can get access to all of your data today is to trick you into giving them your login credentials.
That's probably the number one way that people lose their data. You know, when there was this celebrity photos hack, that's apparently how it happened, right? That the login credentials could get cracked. That had nothing to do with Google, thank goodness, right? I try to make sure that things like that can't ever happen.
But that is still the biggest problem that our users have. And it's, as we say, it's not our fault, but if it's their issue, then it's our problem. So we try to improve the ways in which people can authenticate to keep that kind of account hijacking from happening.
Huge engineering effort into that. So passwords are a thing that we wish people didn't have to use, because they're a nuisance, and we'd like our security to be simpler. And because a bearer token like that is inherently insecure, there are various technical ways in which it's just a thing
that you can accidentally lose and not notice. So we've switched to things that are stronger. For some years now, here I can show you, there's a thing we call a security key. So this is now a consumer. Anyone with a Gmail account should absolutely be protecting it with a security key.
By security key, I mean one that uses elliptic curves. Public key cryptography really works dramatically better than passwords. So using modern cryptography, you can authenticate securely from a distance in a way that passwords are never going to be able to achieve.
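What he's describing is challenge-response authentication: the server sends a fresh random challenge and the key signs it, so nothing reusable ever crosses the wire. Real security keys (U2F/FIDO) do this with an elliptic-curve private key that never leaves the device; the stdlib-only sketch below substitutes HMAC with a shared secret purely to show the shape of the flow. That substitution is an assumption for illustration, since actual security keys use public-key signatures and the server stores no secret at all.

```python
import hashlib
import hmac
import os

class ToySecurityKey:
    """Holds a secret that never leaves the 'device'."""
    def __init__(self):
        self._secret = os.urandom(32)  # provisioned at manufacture

    def enroll(self):
        # A real U2F key would hand back a *public* key here; this toy
        # shares the HMAC secret with the server instead (illustration only).
        return self._secret

    def sign(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, enrolled_secret):
        self._secret = enrolled_secret

    def make_challenge(self):
        return os.urandom(16)  # fresh per login attempt: defeats replay

    def verify(self, challenge, response):
        expected = hmac.new(self._secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = ToySecurityKey()
server = Server(key.enroll())

challenge = server.make_challenge()
response = key.sign(challenge)
assert server.verify(challenge, response)

# A captured response is useless against the next, fresh challenge:
assert not server.verify(server.make_challenge(), response)
```

Because each login signs a fresh challenge, a phisher who captures one response cannot reuse it, which is exactly why this resists attacks that defeat passwords.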
As long as you're still using a password, we're offering other things, like we recently announced a password alert where you can load an extension into Chrome, and if you by accident type your password in the wrong place, we'll warn you, and then you can go change your password. And we've been using that within Google for our employee population for several years,
and I can't tell you how many times people will just by accident type their corporate password into a non-corporate site. It's usually not malicious, usually it's just finger memory; you just accidentally type the password in the wrong window. But we also have been able to stop some phishing attacks.
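The password-alert mechanism he describes can be sketched with nothing but a salted hash: the extension never stores the password itself, only a fingerprint it can compare against what was just typed on a non-corporate page. A minimal Python sketch follows; the domain names and iteration count are illustrative assumptions, not details of the real Chrome extension.

```python
import hashlib
import hmac
import os

SALT = os.urandom(16)  # kept locally by the extension

def fingerprint(password):
    """Salted, slow hash: recognizes the password without storing it."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

# Recorded once when the user sets up the alert:
corporate_fp = fingerprint("hunter2-corp")

def check_typed(text, site):
    """True = warn: the corporate password was typed on a foreign site."""
    if site.endswith(".corp.example.com"):
        return False  # typing it on a corporate page is expected
    return hmac.compare_digest(fingerprint(text), corporate_fp)

assert check_typed("hunter2-corp", "phish.example.net")
assert not check_typed("hunter2-corp", "sso.corp.example.com")
assert not check_typed("wrong-guess", "phish.example.net")
```

Storing only a salted fingerprint means a compromise of the extension's data does not directly reveal the password, while still letting it catch the finger-memory mistake described above.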
There are groups in the Mideast that have been trying to break into Google by sending phishing emails to Google employees, and those emails are so well done that it would work, even against members of my team who are more security aware than the average person on the street. It's really hard to avoid that.
So these two things, the security key and this password alert, have practically stopped, as far as we can tell, practically stopped phishing of Googlers. So I'm enthusiastic about the fact that same technology is now available to just the average consumer with just a free Gmail account. Great, great. I'm sure that we have quite a few entrepreneur types in the audience.
What kind of advice, coming from a huge company such as Google, what kind of advice would you provide to smaller companies that are trying to ensure the security of their platforms, their sites, but also their users? Well, first of all, I appreciate the challenge that a startup is under. They have a limited time window,
they are very limited on engineering resources. If they spend a lot of time on security, they may not make it to their main objective, so I get that. On the flip side, when I'm purchasing products from startups, I have had to actually yank some software out of our fleet
because we discovered vulnerabilities. So when a startup skimps too much on security in their product, that can be bad for their direct business. So step one is, you know, make sure you don't introduce bugs. And that's especially true of a security startup. It's extra embarrassing when a security startup sells you software that is introducing more vulnerabilities
than it blocks, and we've observed that. For a startup that's not selling a product, they're just doing something else, it is a challenge, because they can't afford to have the size team that I have. So to some extent, they can benefit from my team by deploying to the cloud.
They get a lot of protection. Let's say denial of service attacks are a very common problem. The average company, even a large company, cannot defend against the scale of denial of service attacks that we now see coming from some countries in the world.
It's saturating fibers. It's really dramatic. Fortunately, if you're in a large cloud provider, they have so much bandwidth, they can soak up these attacks, and you just benefit from that. More sophisticated actors who are using sophisticated techniques to break in may also be knocked out by those cloud providers.
So I can see some advantages to many companies in moving things into the cloud and sort of outsourcing part of their security to us, and we're happy to help with that. It doesn't solve 100% of the problem if you're getting phished, if your data's getting lost because you're giving your password away to people.
That could be true even in the cloud. But if you take advantage of all the capabilities we offer, then yes, we can make things pretty secure for them. Great. So I've got some questions that I collected from Twitter, but I've got one more personal one that I'm curious about. How do you, from the engineering team, how do you work with designers to ensure that user failure
is not part of the equation in terms of security? Yes. Very hard problem and very important one. Like I was saying, we often find that's where we are. We're stuck because we've built a solution in the security industry. We've built a solution that's so complicated the average person doesn't know how to set all the knobs and they give up in frustration
and leave things at the default, or they set what they thought was secure only they actually got it wrong. So we hire people who are experts in user interface design and we run experiments. A good example would be in the browsers today, when you go to some website and the certificate is not valid,
maybe it's expired or it's self-signed or it's using weak crypto. I mean, there's lots of ways in which companies get the security wrong. The browser can help you out. It can warn you that connection's not secure. And we know there are actors who will exploit your connection
if you don't have strong security. How can we give a message to the user that they can understand and do something with? I mean, too many of our messages have really been cryptic. So we've put a lot of effort into iterating where we actually put up a certain message.
We see how people click through that. Do they click away the certificate warning or do they understand what it means? And over time, we're getting better messages. So we're not done by any means, but actually hiring people who have focused on that and giving them the resources to actually iterate is what makes a difference, I think.
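The certificate warnings he mentions hinge on checks like expiry. Python's standard library can parse the `notAfter` field that a TLS handshake returns via `ssl.SSLSocket.getpeercert()`; the certificate dict below is fabricated for illustration rather than taken from a real site.

```python
import ssl
import time

def cert_is_expired(cert, now=None):
    """Check the notAfter field of a cert dict in the format
    returned by ssl.SSLSocket.getpeercert()."""
    if now is None:
        now = time.time()
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return now > not_after

# Fabricated certificate metadata for illustration:
cert = {
    "subject": ((("commonName", "example.org"),),),
    "notAfter": "Jan 1 00:00:00 2020 GMT",
}

assert cert_is_expired(cert)  # expired for any clock after 2020
```

Expiry is only one of the failure modes he lists (self-signed chains and weak ciphers are others), but it shows how mechanical the check is; the hard part, as he says, is wording the warning so people act on it.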
So I'm going to pop to some of those questions from Twitter, most of which do have to do with state actors and surveillance. OK, as I'd expect. So not that this is a cop-out, but one of the questions that came to me was what Google's stance is toward Tor exit nodes and IP reputation. So this one's actually a little bit different: rate limiting and plain-password authentication to Google services.
I've noticed lately that when I'm trying to browse using Tor, sometimes I come across some issues. That's right. So one of the problems I see in the popular media and books and so forth, when they're talking about these problems, is that they don't appreciate, because it's natural,
they're not written by people who run large services, they don't appreciate the magnitude of the abuse problem out there. If you launch a service today, I promise you there are a host of people who will be looking for any loophole to take advantage of that service, to either break into your service,
or to just leverage your service to attack some innocent third party. And so anything you deploy today at scale has to think about how it can be abused. And that's a real problem. I mean, I just, I can't convey what a serious problem that is to people who aren't running these things at scale.
And that's what's challenging about Tor exit nodes. There are going to be lots of people who try to hide their identity through a Tor exit node. And I'm all in favor, right? I mean, I'm probably one of the more passionate privacy advocates even in this room. I think that's saying something.
But yeah, if we talk about the individual things that we do as people, I think you'd discover, yeah, Eric seems pretty paranoid. So okay, I totally get why one would want to use the Tor node. The trouble is, you now have the same identity as far as we can tell on the server. You're coming from the same IP address. You have the same user agent.
All the characteristics for you look the same as this person who's trying to use us to attack someone else. So it becomes a challenge. How do we know which is you and which is not? Now, if you're coming through Tor and you're logged in as you or something, fine, we know who you are and we can do the right thing.
But if you're at the very early stage where you haven't even authenticated yet, we really can't tell whether you're an attacker or a good guy. And that's sort of the engineering challenge of anonymity on the internet. How do you build systems that preserve anonymity,
which I'm definitely strongly in favor of, and yet don't enable large-scale abuse? And it's a challenge. So one of the ways in which we can do that is, I say, well, if you're well authenticated, then we know you're not a bad guy. Authenticated at Google doesn't mean we have to know your real-world identity. You can be anonymous.
You can make up an account. It's got no connection to your real-world identity. You didn't give us your true name. You didn't give us your true location or anything. We'll try to support that. You shouldn't even have to give us a phone number. We have some internal debates about whether phone numbers are the right thing to use, and so forth. Phone numbers came in to deter abuse.
It's not because we wanted your phone number to call you up or something. I mean, we don't have enough manpower to pick up the phone and call you, right? This was just a way we found we could knock down the number of people creating fictitious accounts just to use them to cause harm to somebody else. And there's got to be a better way than asking for a phone number.
But okay, you can replace a bad way with a better way. You can't just throw away a bad way. So anyway, that's what I would say is sort of the answer there. We have tried to work with the folks from the Tor project to see is there something better we could do to enable that.
And I'm not satisfied with where we are right now. So we'll keep at that. Yeah, because it does seem as though there's some tension there: you can create an anonymous account, but the phone number becomes an impediment to anonymity. But I'm glad to hear that you're working on it. So to the extent we can find some way that ensures anonymity while keeping the abuse down, I think it's going to be good for everyone.
There was a second part to the question about password, length of time. Yes, I'm not sure that I totally understand the rest of the question. 140 characters is also an impediment to asking questions. Yes. Another one that came to me. There was a great quote from a Congress person a few days ago,
Ted Lieu from California. He said that it is clear to him that creating a pathway for decryption only for good guys is technologically stupid. Would you agree with that? I got a charge out of seeing that. Yes, I saw him at the 50th anniversary of the founding of the computer science department at Stanford. So we were together just a week ago.
So it's super cool to see Congress people with CS degrees. Yes. OK, so he's right. It is really hard to design a system that can be used by good people and not by bad people. I mean, that's just intrinsic. So fundamentally, we're in agreement.
I think that whole issue has been kind of overblown. I think law enforcement has some pretty good methods for getting the data they need to do their investigations without all of this breaking encryption stuff. So I just disagree with law enforcement on this particular topic.
But we're having a healthy discussion. You know, folks come to our campus. We see them. We go through this exploring. We try to explain. We try not to just sort of blindly push back. And we're trying to explain what the genuine problems are and genuinely understand what their needs are that are legitimate.
So there are actually evil people in the world. So when there's a kidnapping or child abuse or something, we do not want people using our systems to do that. So we're actually trying to figure out ways that we can help law enforcement stop really bad behavior of that kind,
the kind of stuff that's repugnant to all of mankind. That's clear-cut. Law enforcement sometimes bundles that together with other kinds of things they want to do that are not quite as clear-cut. And that's why we need to have this discussion about what people sometimes call front doors and back doors, right? My team's goal is to build such strong engineering methods
that law enforcement or criminals or anyone else cannot break in through the site. If they want the data, they go to a judge, make their case, and come to us with a warrant. It's properly, narrowly scoped. Our lawyers look at that and say, yes, you've made a good case for that data.
OK, we turn over that data. I think that's basically the right way to run the world. So actually, I would be remiss, and I know we've only got a couple minutes left and two questions, if I didn't ask you a Europe-focused question. Since you brought that up, I recall that last fall Google, I believe, was involved in conversations with politicians in the EU around curbing terrorism,
specifically on some of the more social platforms of Google. What do you feel about those types of conversations? Because it does seem as though politicians both in Europe and the US and probably elsewhere are trying to push for something that's more algorithmic or proactive rather than just the government requests.
Yeah, I don't believe that it's within our power to prevent all terrorism in the world. I mean, Google's good, but it's not reasonable. I think it's something that we want to try to help.
As I say, we don't want folks abusing our systems to do real harm in the world, so no fundamental conflict there. I don't think it's necessary to weaken security in order to enable that, and that's where we continue to stand, and we just stand firm on that. So, governments do get fairly pushy.
All governments, I don't mean just the US, right? European governments come to us and say, we demand you put our classified box on your network or in your data center. And I have just absolutely firmly said, under no circumstances, we have never done it, we're not going to do it, that's not reasonable. If you have a legitimate need for some data,
tell us what that is, and let's talk about how we can get you that data properly scoped. But the notion that we're going to give you some sort of direct tap-in, that's a non-starter, not going to do that. I know that we're almost out of time, and I don't have time for audience questions. I'm going to ask one more question, though, but before I do that, I recall that you're going to be available to people.
Can you tell us where that will be? I'm sorry, I've forgotten exactly, but there's a Google booth outside there, and I'll be at that. Stephan Somogyi has a session, I think later today, also talking in a little more technical detail about some of these security things we do.
But I'll be around the next couple of days, happy to talk with people. And so for my final question, what's the one piece of advice in 2015 that you would give to the average user to improve their personal security? Absolutely. If you have a Google account, turn on two-step verification. Ideally, also get one of these security keys, but at least turn on the two-step.
That's your best way of telling us that you really care about the security of your account, and it measurably helps against hijacking. We definitely observe that. So you can certainly protect yourself well that way. Give us one non-Google thing, too. A non-Google thing. Probably keeping whatever device you use
patched up to date is the single best thing you can do to fight off malware. We observe that the time from when a vulnerability is published to when bad guys start exploiting it is getting shorter and shorter. And so staying patched is really important.
We try to do that as much as we can, because, again, as I say, ideal security doesn't require the user to do anything. It just happens automatically. So we try to do that automatic update where we can. But to some extent, you participate. So don't give away your password and stay patched. Great. Thank you so much.
And he'll be available to you guys if you want to ask the tougher questions. Happy to do that.