Web Application Security: Lessons Learned
Formal Metadata
Title: Web Application Security: Lessons Learned
Number of Parts: 96
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported. You are free to use, adapt, copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared, also in adapted form, only under the conditions of this license.
Identifiers: 10.5446/51856 (DOI)
Language: English
Transcript: English (auto-generated)
00:04
All right, thank you, people. Thank you for hanging back. This is my fourth NDC. Very proud to be here and thrilled to be here. Also very thrilled that there is so much great security content at this conference.
00:21
So there have been very amazing talks, some about hardware security aspects, some about systems that are open and what you can do with them, a lot of IoT security. And the one topic that's specifically missing so far is web application security.
00:40
And instead of doing a kind of regular web application security intro talk, I thought, okay, we could try this differently this time. We could not just theoretically talk about security risks or attacks or mistakes developers make. We could look at high profile targets that have been successfully attacked
01:02
and then we discuss how they were attacked and then we draw some conclusions from that. So even if you already know some of the attacks, and I'm pretty sure almost all of you will, I still can promise you that these 55 minutes will be worth your time
01:20
because I think that's quite some interesting content. So it's supposed to be kind of a fun talk, although we talk about something that's very severe. And that's what web application security lessons learned is all about. Before we commence, let me just say one thing that's really important to me. So we will have a look at successful attacks and, well, then kind of see what went wrong.
01:48
I will only use stuff that's publicly known. And most of the examples, most of those incidents happened some time ago.
02:01
The reason for that is that it's always easy to point at someone who isn't in the room and can't defend themselves. I don't want to do that. I also would feel very uncomfortable if, in the adjacent session room, someone were ranting about me and I wasn't there, right? So that's what I won't be doing. So let me give you one example.
02:22
For those of you who are watching this on video later, we now have early June 2016, and about two and a half weeks ago, one of the most widely used open source content management systems issued a patch for almost all versions they have, even those that were not supported any longer
02:40
because there was a severe security vulnerability. I would have loved to look at this today, but we won't, right? Why not? Because not everyone has installed the patch so far, and thus it would probably be bad timing, right? So we want to learn something, and what we want to learn is how to make our applications more secure. We probably do not want to learn how to kind of attack someone else's applications.
03:05
Well, actually we kind of do, right? But the main purpose really is to learn about our applications. So I'm not making fun of those companies or examples I picked, right? So just have a look, okay, what went wrong, what's the attack behind that, and what's the defense against this attack, right?
03:22
So that's the basic idea here, right? So we're supposed to have fun, but not at the expense of someone else, just because it's kind of an interesting topic. Okay, so here are a couple of the companies I picked. We'll probably have a look at most of them today. They use different sets of technologies, and most web application security risks
03:45
are independent of the technology used on the server, but some of them are special. So if you see code snippets today, they will also be in a variety of programming languages and server technologies. And I'd like to start with an example that, well, kind of happened five years ago,
04:04
when several Sony assets had some issues, stuff like that. I mean, depending on your musical taste, that might not be an issue, actually: Tupac still being alive, and, I mean, it's beautiful in New Zealand. So, I mean, if I were to vanish, I would probably go there as well, right?
04:21
But unfortunately, that was not the case. And on the other hand, at about the same time, Sony was hacked, so that one million passwords were exposed. And if you've watched or followed the news closely over the last couple of, well, actually weeks,
04:41
so many password leaks, so many password leaks. Even early this week, it was announced that Mark Zuckerberg, of Facebook fame, had his password stolen as well. It was, I think, "dadada", but with a capital D, so heightened security. And he was using it for several accounts, right? So that you don't have to remember so many things,
05:02
because if you're busy, you can't do it, right? So it might happen to anyone, right? But anyway. So how did that work? Now, there are some pointers, and, well, this article states the obvious. Okay, SQL injection it was, right? Now, SQL injection is a strange beast.
05:20
If you look at the OWASP Top Ten, the list of the most prevalent security risks that gets updated roughly every three years, SQL injection in the current list of 2013 is number one. I've been thinking, okay, when I do audits, I almost never find SQL injection,
05:41
because nowadays we have tools and program libraries that even prevent us from writing SQL, right? So, for instance, I think the majority of developers here in the room use the Microsoft stack, .NET, and so Microsoft kind of forces us to use Entity Framework, which is their O/R mapper.
06:02
Well, it's really hard to write SQL there, right? Because we just work with objects. And the main problem with SQL is that commands and data are in the same string. And usually, formerly, when we still string-concatenated our SQL, we added user data into the SQL, thinking it was data.
06:22
But depending on how we did it, maybe an attacker could change the context from data back to command, and then execute SQL on our database. And that's exactly what happened here with Sony. So the hacker team, they kind of gave some pointers as to what they did. So here's one example, and actually there were several incidents,
06:43
but this is probably the most telling example. So the Japanese website had URLs that looked like this. Now, this is perfectly fine. It uses PHP, but it's not that PHP is insecure. But, well, it was using GET parameters, which is also fine. What was probably not so fine is that those GET parameters
07:00
were then kind of concatenated into an SQL statement, especially here, where those parameters all seem to be integers or numbers. It's easy, it's easy to validate, but, well, they obviously didn't, and so the attackers found out the database structure and then could retrieve a lot of data. I mean, that's an attack that has existed since, I don't know, the 1980s probably, right?
07:25
Not in the web in the 1980s, but still. The attack is not new, but still they are victims. And as I said, I mean, if you use a modern technology stack, it's really hard to get SQL injection. But why is it still number one in the OWASP top ten?
07:40
Well, one of the reasons is legacy code just doesn't use the modern stack. And in legacy code, I mean, don't get me wrong, right? So we talk about some other people messing it up, but to be honest, if I look at the code I wrote ten years ago, ten days ago, I don't know, but especially ten years ago, I mean, it's horrific, it's horrific.
08:00
I can't show you this, right? You would boo me off stage. And I mean, everyone has made mistakes, right? But it's harder to do those mistakes now. But still, SQL injection is a very dangerous thing, and some of those leaks of recently, they were also caused by SQL injection.
08:21
And again, the problem is, SQL is a combination of data and commands, and adding user input into the SQL could add commands into the SQL. Defending against SQL injection is actually pretty easy, as we all know, right? So we could use an O/R mapper, we could, well, escape special characters,
08:42
or we could use something like prepared statements. So the technology is there, the knowledge should be there, but, you know, I still find code where people think, well, you know, it's a tiny bit faster. O/R mappers are slow. It's a tiny bit faster if I do this myself. Yes, good idea, but not for you, for the attacker. So this happens all of the time.
09:00
So let's think about prepared statements for a bit. So prepared statements, as you all know, are a mechanism where you have placeholders in your SQL, and in a second step, you assign values to those placeholders. So you kind of separate the concept of an SQL command and data. Fine, problem solved.
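(As a minimal sketch of that idea in C#, not code from the talk, and assuming System.Data.SqlClient is imported and that connectionString, userInput and a Users table exist, a parameterized query keeps the command text and the user data apart:)

    // The user-supplied value never becomes part of the SQL text;
    // it is shipped to the database separately as a parameter value.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Id, Name FROM Users WHERE Name = @name", connection))
    {
        command.Parameters.AddWithValue("@name", userInput);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine(reader.GetString(1));
            }
        }
    }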
09:22
Some of the time. Where it didn't really work was in the very successful and well-known PHP-based, in that case, open source project Drupal. And, well, Drupal did prepared statements, but still they had a problem with SQL injection
09:41
one and a half years ago. So highly critical pre-authentication SQL injection vulnerability. Very, very high score. So you see here, towards the bottom, that on a scale from 0 to 25, that risk was classified as 25. So really, really, really high.
10:02
And, well, a lot of websites had that problem: SQL injection, site wide open, so install the patch right away. And what happened? This was taken from the advisory of the company that found the vulnerability, because the vulnerability was found and then reported.
10:21
So this is PHP code, right? But I'll guide you through it if you're not working with PHP, because the mistake made is independent of the language being used. So what they did in the code is they did a prepared statement. So you've seen the first code line, the SQL statement, and the colon name, that's the placeholder, right?
10:43
And they had a function called db underscore query, and the first argument is the SQL string with the placeholder, and then an array of all the placeholder values. Now we have here an in statement in SQL. So we get a list. So for that case, the Drupal developers thought, OK, perfect. So colon name, we put it in a hash table,
11:02
and the value of that is an array consisting of, let's say, user one and user two. And the mechanism that was used would then convert it into the SQL statement, which you see in the bottom lines, like SELECT stuff FROM users WHERE name IN, and then we had the placeholders name_0, name_1, et cetera, et cetera. And the values that were then assigned to those placeholders
11:23
were, in that case, user one, user two. That looks good, right? And, I mean, of course PHP knows prepared statements, or rather the database extensions PHP uses support prepared statements. And what they were actually using was a specific abstraction library in PHP called PDO,
11:42
PHP Data Objects, which kind of is a shim, so it can work with a lot of different database types, but the API is always the same. And that extension did support prepared statements, but they were looking for something clever for in. Well, was it that clever?
12:01
Well, that's part of the code. So the first code block is part of the code. So they were kind of iterating over the values you receive, and then they were creating those keys. So you see the line with key, and then the underscore character, and then a number, so this kind of created the SQL, and then they did some magic regular expression,
12:21
then they were doing some replacing within SQL, awesome, and finally they had the SQL. Now the problem is they were assuming that, let me go back one slide for that, they were assuming that this array, which assigns a value to dollar name, well, receives an array with just values, right? A regular array, not an associative array,
12:42
an array with a numerical index. But what you could do is provide not a numerical index but a string index, right? So you could do a hash table. It's kind of like object literal syntax in JavaScript.
13:00
And if you did that, the way the above code worked was, look at the last line, that's what the result is. So we get SELECT stuff FROM users WHERE name = :name_test --, and the double dash is a comment in some SQL dialects, right? So that means we can end the SQL statement here, and since we control the "test --" part,
13:25
we could, you know, add another SQL statement, like DROP TABLE. I think DROP TABLE is kind of boring because we all have backups, right? Please, please not, please not, no? Okay, yeah, but most of the time you have backups, right? And of course we have a plan on how to restore from those backups, right?
13:43
But in general, dropping a table is kind of boring, right? Because you see the effect more or less immediately, but there are like other things, adding new users, giving additional rights to some users. And that was the problem here. So that was not easy to spot, but the problem here was they were rolling their own security, right?
14:05
There is a prepared statement mechanism, but they didn't use it. They used their own, right? Because that's more clever. Well, it turns out it wasn't. And I mean, SQL injection kind of sounds boring and a thing of the 1980s. Not everything is bad, so at tonight's party, for instance,
14:22
there's a band that's playing a lot of 1980s music, right? So that's great. But what typically happens when you have SQL injection is, it's not only, you know, dropping some table or stuff like that. Sometimes you can inject backdoors; at the session this morning, you could see how that can be done. It might even lead to code execution,
14:41
and this article linked here shows quite a lot of, well, kind of nice effects. And quite often companies call me after they've been attacked, and then it's always like, okay, we have kind of been attacked, and there's kind of code from someone else on our server. Could you just remove the code, and then the site runs like before? And of course you have to say, oh, that's not possible.
15:01
I mean, if there is external code on your site, it's not your site any longer, right? But still, you have to have a quick response team for that, and sometimes we just try to remove it as well as we can. But still, SQL injection is dangerous. So follow best practices: you escape output, you use prepared statements, you use an O/R mapper,
15:22
but you update it, right? OWASP Top Ten, number nine: do not use outdated components. So some of the O/R mappers did have security issues in the past and have thus been updated. So do update those components you're using as well, but do not roll your own security, okay?
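(To tie that back to the Drupal story: if you do need an IN clause, let the code generate one real placeholder per value instead of splicing user data into the string. A hedged C# sketch, assuming System.Linq and System.Data.SqlClient are imported and that connectionString and a Users table exist:)

    // Build "IN (@name0, @name1, ...)" with one parameter per value;
    // only generated placeholder names go into the SQL string, never the values.
    string[] names = { "user one", "user two" };
    var placeholders = names.Select((_, i) => "@name" + i);
    var sql = "SELECT Id, Name FROM Users WHERE Name IN ("
              + string.Join(", ", placeholders) + ")";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        for (var i = 0; i < names.Length; i++)
        {
            command.Parameters.AddWithValue("@name" + i, names[i]);
        }
        connection.Open();
        // ... execute and read as usual
    }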
15:42
So old attack, but still happening, and sometimes it's obvious, but sometimes it's not as obvious. But, well, it's getting more complicated. Let's continue with Apple. Now, I mean, we all know, right, that Apple doesn't have any security issues, right?
16:01
At least according to Apple. And, yeah, it's kind of interesting, right? So, for instance, let me just give you one example. There is a, you know, the private browsing mode, right? Which exists in all browsers, and it's kind of interesting in Safari. So I was at the hotel, true story, I was at the hotel and wanted to print
16:21
my boarding pass for tomorrow, if the airport is open, which is still under debate. And I went into private browsing mode, and then I went to a webmail provider site, and I was logged in as someone else in the private browsing mode, right? So private browsing mode, in that case, is not just an empty cookie jar like, you know, everywhere else.
16:41
It just seems to be, you know, a different context that's preserved between restarts of the browser. Wow. Okay. But still, we have no security issues here, unless the Apple developer site looks like this. Took the screenshot myself: "We'll be back soon." On Thursday, an intruder attempted to secure personal information
17:01
of our registered developers from the developer site, including some personal information as well. Now, I mean, there have been breaches, and some companies have been really, really bad at reacting to those breaches, right? So they were informed, and then they said, oh, yeah, you know, thousands were affected. Oh, maybe 10,000, maybe 100,000.
17:21
Oh, well, just add some zeros. Eventually, you'll get the right number. But what they did is they closed down the developer site. They closed it down. And why? Because there was a cross-site scripting vulnerability. Now, when I talk about cross-site scripting, right, many people really start to yawn because they say, yeah, you know, cross-site scripting.
17:41
It's been there since probably 1999 or something. Really old, and, I mean, what happens with cross-site scripting? alert(1), modal warning windows. But of course, that's not all. You can do so much more once you inject JavaScript, because the security mechanism of JavaScript, the security concept, is origin-based, right?
18:00
So you have the origin: protocol, fully qualified domain name, and port. And JavaScript code has an origin and runs in the security context of that origin. So if someone can inject JavaScript code on your site or on your page, then it runs with the security context of your page, or of the origin of your page. So, maximum catastrophe, right?
18:22
And indeed, really, Apple responded quite well. That's the dashboard, right? So that's all the services that were running: two out of 15. Just because there was cross-site scripting. Cross-site scripting is really, really, really bad. There are many dangers associated with cross-site scripting.
18:42
Most commonly known is cookie theft. Right? Because we can kinda access cookies that have the same origin as the JavaScript code, thus enabling attacks like session hijacking. We can do redirection, right? So we redirect from yourbank.no
19:00
to yourbank.whatever, some of these new domain extensions. And the target site looks exactly like the bank site, except for that you have to enter your password again. DOM manipulation. In my book, that's the most dangerous one of those, right? Just imagine you have a cross-site scripting vulnerability
19:20
on the login page of, whatever, say, your webmail provider, right? The page looks exactly as before. So you say, well, the domain name is correct, the page looks as before, of course I type in my password. But what you didn't notice is that due to DOM manipulation, the target of the form,
19:41
so the action attribute, was changed so that your data is sent somewhere else. So really, really, really dangerous. I mean, cross-site scripting has been there, I think, since the very first version of the OWASP Top Ten in 2003. Always in the top three risks, always. And, you know, every year, every year,
20:01
I talk to people and say, okay, you know, I have this insight: cross-site scripting will eventually die. Because, I mean, everyone knows about cross-site scripting. Cross-site scripting is going to die. And, yeah, I've been saying this since forever, almost 15 years now. And still, cross-site scripting is still in the top three.
20:23
But, well, maybe eventually I'll be lucky because it's kind of easy to defend against cross-site scripting. I mean, there are only five characters in HTML that change the context. The angle brackets, because they delimit tags. The single and double quotes, because they delimit attribute values. And the ampersand character as the escape sequence, right?
20:44
So just escaping them is good enough. And all technologies have built-in functionality for that. And if you're, say, using, and I'm sure many of you in this room do, ASP.NET MVC, if you're using the Razor view engine, your output is automatically HTML-escaped.
21:00
You have to put extra effort into your code to get cross-site scripting. So, really, you have to help the attacker with the cross-site scripting, because the output is automatically escaped. So it's harder to get cross-site scripting if you have a modern technology stack, but still, many people do.
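(If you ever have to escape output yourself, outside a view engine that does it for you, the built-in encoder covers exactly those context-changing characters. A minimal C# sketch, assuming System.Net.WebUtility:)

    using System;
    using System.Net;

    class HtmlEscapeDemo
    {
        static void Main()
        {
            // Untrusted input that tries to switch from the data context to the code context.
            var userInput = "<script>alert(1)</script>";

            // HtmlEncode turns the special HTML characters into entities,
            // so the browser renders the value as text instead of markup.
            Console.WriteLine(WebUtility.HtmlEncode(userInput));
            // Output: &lt;script&gt;alert(1)&lt;/script&gt;
        }
    }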
21:22
And to defend against cookie theft, you could try to make accessing cookies a bit harder. With the HttpOnly cookie flag, JavaScript doesn't have direct access to those cookies, unless you misconfigure your server. And if you use HTTPS and set the Secure flag,
21:41
then in an open network like the conference network, still no one should be able to access your cookies, right? So that's a good idea. And there's one more thing, CSP, and we'll talk about CSP in a few minutes. And, I mean, I've been saying it for 15 years, but 2016 is the year where cross-site scripting is going down. It's going down for good, and we'll see why in a bit.
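(A hedged sketch of those two flags in classic ASP.NET, using System.Web's HttpCookie; the cookie name and value are made up:)

    // Somewhere in a request handler:
    var cookie = new HttpCookie("session", sessionId)
    {
        HttpOnly = true,  // not readable from JavaScript
        Secure = true,    // only sent over HTTPS
        Expires = DateTime.UtcNow.AddHours(1)
    };
    Response.Cookies.Add(cookie);

    // The same defaults can also be set globally in web.config:
    // <httpCookies httpOnlyCookies="true" requireSSL="true" />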
22:01
I would like to briefly talk to the older people in the room, right? Who still remembers Myspace? So, for the younger people in the room, you're like, oh, okay, wow, 50%? Ah, okay, okay. Because your parents have been on Myspace, right? They got to know each other there, right? No, so Myspace was kind of like the predecessor of Facebook.
22:23
I think currently it's owned by Justin Timberlake or something. I don't know why, but I read that it's owned by Justin Timberlake. And Myspace is kind of like the canonical example for one very specific attack. And this then looked like this. Ten years ago, right? A bit more than ten years ago,
22:41
but this is the very first kind of brilliant example for the attack I'm about to talk about. So here, Myspace knocked off the web. Or, and I mean, buzzword bingo continues. Remember, it's 2005. Ajax prepares for battle on the dark side. Awesome. So, I really have to read that.
23:00
One of the newest web technologies has a sneaky power. It can access pages from your browser without you knowing about it. Yeah, actually that's what the browser can do as well, right? By preloading content. But anyway, so really, really dangerous stuff. Okay, let's get a bit more specific. Cross-site scripting worm hits Myspace. Ah, okay.
23:21
So, what exactly happened? So, Myspace had a kind of RESTful API. Very modern, right? REST API. And one of those API calls was add this user as my friend. Right? So, if you did an HTTP call to, let's just assume,
23:43
slash API, slash add friend, then the user your profile page you were on was added as your friend. Or add friend accepted an argument which user is supposed to be added as a friend, right? Now, this API ran on the same origin.
24:01
So, it ran on Myspace.com. That's fine. But what then happened is that the successful attacker was Sami Kamkar who detailed and described his attack at great length at the URL I gave you, but I just give you the gist. He found a cross-site scripting vulnerability on Myspace.
24:22
As you know, cross-site scripting is bad. Cookie theft, DOM manipulation, stuff like that. What we can also do, of course, when we inject JavaScript code is we can do HTTP requests using AJAX, so the XML HTTP request JavaScript object. Now, usually that's not that big a problem when we have cross-site scripting
24:41
because thanks to the same origin policy we more or less can only talk to our own server. But in that case, that was good enough because there was cross-site scripting on the profile page of Myspace. So, if I went to Myspace, went to someone else's, to Kamkar's profile page, the JavaScript code that was injected thanks to cross-site scripting
25:02
was executed and was doing an HTTP request to slash API slash add friend. Now, my browser was doing that HTTP request, right? Because it's JavaScript. And my browser, nice and helpful as the browser always is, sends along my authentication cookie I have with Myspace.
25:21
Why? Because cookies are bound to a domain, maybe even to a path, depending on how you can configure the cookie. So it's sent along automatically. So now slash API slash ad friend got an HTTP request from my browser using my authentication cookie. I mean, what's Myspace supposed to do? Myspace says, oh, Christian is just sending me a request.
25:42
He would like to add Kamkar as a friend. Yes, of course, do that. And exactly that's what happened. And after, I think, 19 and a bit hours, Kamkar had one million friends and then Myspace pulled the plug. It's always great to have one million friends, right? Especially if you try to lend money from them, right?
26:00
Could I have one krona from all of you or something? But still, so that was the attack. That was the attack. Now, I mean, that's interesting enough, right? But what I find even more appealing here for us today is: how did that happen? Because, I mean, they weren't stupid at Myspace. They did have security precautions in place.
26:21
So for instance, Myspace was whitelisting HTML. As we all know, whitelist is good, blacklist is bad. What's the only one exception? Well, the blacklist for the five characters that have a special meaning in HTML. So that's the only blacklist kind of that works. So Myspace was whitelisting HTML. So yes, people could enter or add HTML to their profile pages,
26:43
but only a certain amount: div elements, bold, italics, stuff like that. However, for attributes, they were using a blacklist. So they had a blacklist: onclick, onmouseover, onmouseout. I mean, that idea is bad enough. Just imagine, my machine here has a touch screen, right?
27:04
So if I'm using that machine, I have a lot of new attributes that maybe were not part of the original Myspace blacklist: ontap, onpinch, onzoom, stuff like that. So that list is never complete. That's a problem with a blacklist anyway, right? So somehow Kamkar had to inject JavaScript code,
27:21
avoid the filter, and then finally send the XMLHttpRequest. And I have here some examples of what he did. Not all of this still works in modern browsers, but it just shows you that what seems like a good idea is not always a good idea. So, for instance, that's what works in older browsers, right? So div was on the whitelist,
27:44
and the style attribute was not on the blacklist. So he could add styles, including setting the background to the following image, and the image has the URL javascript:alert(1), which doesn't make sense at all, but the browser executes it as JavaScript because it says so. So let's do it. Okay.
28:00
So let's add just some more code. Remember, we have a blacklist for attributes. So he could just add random attributes like EXPR here for expression, and then use the same trick from above to then evaluate the code in that attribute. Now, there were some additional filters in place.
28:22
So for instance, if we want to do DOM manipulation, we quite often resort to the innerHTML property in JavaScript, right? innerHTML was filtered, right? So if you wanted to write innerHTML on your profile, that was stripped out. What was not stripped out, though, was eval.
28:41
So stuff like that works, right? So he could kind of concatenate his way all through the code. Also, when using XMLHttpRequest back then, you needed to subscribe to the readystatechange event. onreadystatechange was blocked, but, I mean, you get the idea, right? So he could inject code very efficiently,
29:01
and again, one million friends after a bit less than 20 hours. And what makes this attack so dangerous is that it combines actually two risks, two attacks. We have cross site scripting, which we discussed before with Apple, but we also had cross site request forgery. And the idea behind cross site request forgery,
29:21
as the name already suggests, is that we forge a request from somewhere else. So that's what cross-site is. And the idea here is always, very often at least, that we use something that's also called session riding, because we assume that the browser sends along the session cookie
29:43
as part of the request, because that's how cookies work. Thus, we have an authenticated HTTP request. That's, by the way, one of the reasons why a GET request should never change the state of an application, right? Never, never should, because a GET request, to forge a GET request is really very trivial,
30:00
but this attack was using POST, and POST also works, it's just a tiny bit harder. I think a year ago... I mean, there are many vendors for routers, right? And for those consumer routers, there are two big vendors in Europe, or even worldwide, and they both issued a lot of firmware updates, because of things like changing your WPA2 key.
30:23
Think about your router UI, your WPA2 key. What happens if you just click and change it to another key? Well, it's an HTTP POST request, right? So if the airport is on strike tomorrow and I visit you at your house, I'll just see what kind of router you have,
30:41
and then kind of change the WPA2 key, because then the Wi-Fi is so much faster. I'm the only one who knows the new key. So that would kind of work, right? So that's the issue here. Now, what do we do against those attacks? Escaping the output is the prime countermeasure against cross-site scripting, as we already know,
31:01
and for cross-site request forgery, we just have to make sure the attacker cannot predict the HTTP request. The problem is, with the API, it was public. Well, the attacker knew exactly how the HTTP request would look like, except for the session cookie, because that's individual to each user, but that one came for free, because the browser sent it.
31:21
So what you have to do is, we have to add a token, or a nonce, to the request, which the website knows, and which the browser, or at least the HTML of the browser knows, to secure it. If cross-site scripting and cross-site request forgery are both in place, there is no real defense, right? Because if you have cross-site scripting,
31:42
we can use Ajax, within the limits of the same-origin policy, to load a page with a token, then we know the token, then we can send the request, right? So we have to avoid both. Now, cross-site request forgery is only number eight, I believe, in the OWASP Top Ten, because OWASP says,
32:03
well, you know, all the frameworks have kind of built-in protection against cross-site request forgery. Yeah, but still, I mean, my audit results tell otherwise. I mean, you still have to use those measures. You have to use a token. So, for instance, with ASP.NET MVC, there is a very simple HTML helper and an authorization attribute which do exactly that: they create an anti-forgery token and validate it.
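(A minimal sketch of that pairing in ASP.NET MVC 5: the Razor form renders @Html.AntiForgeryToken(), and the POST action validates it. The controller and model names here are made up:)

    using System.Web.Mvc;

    public class TransferViewModel
    {
        public decimal Amount { get; set; }
    }

    public class TransferController : Controller
    {
        // The corresponding Razor form contains @Html.AntiForgeryToken(),
        // which emits a hidden form field plus a matching cookie.

        [HttpPost]
        [ValidateAntiForgeryToken] // rejects requests without a valid token
        public ActionResult Transfer(TransferViewModel model)
        {
            // ... perform the state-changing action only for legitimate requests
            return RedirectToAction("Index");
        }
    }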
32:23
But again, the problem is, if you have cross-site scripting, there's no good defense against cross-site request forgery. So we have to get rid of cross-site scripting. I think I mentioned that half a dozen times today, so now I'll tell you how we do this. And the solution is a W3C standard,
32:42
still being developed, so there is version number one, which is kind of supported by most up-to-date browsers. There is version number two, which is kind of good, has kind of got support in Firefox and Chrome, and there's version number three, which no one supports yet. And the standard is called content security policy.
33:01
And the idea is, you have an HTTP header called Content-Security-Policy, and in that header you provide a kind of policy, a policy like: only load style sheets from that server, only load JavaScript code from that server, only load web fonts from that server, only open up WebSocket connections to that server or to these servers.
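(Setting that header from ASP.NET could look roughly like this, a hedged sketch in Global.asax; the policy is just the simple one used in the demo a bit later:)

    using System;

    // In Global.asax.cs:
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Allow scripts, styles, images and so on only from our own origin;
        // under this policy, inline script and eval() are disabled by default.
        Response.AppendHeader("Content-Security-Policy", "default-src 'self'");
    }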
33:21
And actually, it works pretty well. It even works in IE 10 and 11. Yes, yes, surprise, surprise. Well, the other surprise here is that IE 10 and 11 support exactly one feature of content security policy, and that's the feature almost no one uses. So in those two browser versions, kind of bad luck.
33:40
I mean, content security policy is an additional layer of security, an additional safeguard against cross-site scripting. So if a browser doesn't support it, they just ignore it. But still, we have an additional layer of defense. Edge supports content security policy version one, and as you know, there's supposed to be an update to Windows 10 this summer,
34:05
and there are rumors that the updated Edge version there will also support content security policy version number two. And so the... Actually, I can show you this. I have some simple code here.
34:22
So I found an HTML page, right? And in that HTML page, I have a little bit of JavaScript code. So I have some inline code here, and I have an external JavaScript file here, right? And I have no idea how this JavaScript came in here. So it could have been me intentionally,
34:40
or it could have been an attacker using a cross-site scripting vulnerability. But this is kind of the end result. And if I execute that in the browser... Let me have another example here. But let's just call this one...
35:00
If I call this in the browser, I get two prompts. So the first one says inline, that's the inline JavaScript code, and the second one says external, because it's the external code. And then we have the best graphics design I personally can do. We have orange colors for the bulleted list, and we have the NDC logo, which I scrapped from the website.
35:20
Now what happens if I click on the first link is we load the page again, but this time something has changed. And what we did there is... Let me go to the handler here. So what we do here is we send out a policy.
35:42
I do a very simple policy here, default-src 'self', which kind of means, okay, the master policy for all external resources I'm using is: use the same origin the HTML page is served from. So in that case it's my localhost server, my port 55271 or something, and regular HTTP.
36:01
So that's all there is. And you might see a subtle change once I click here. I get the external popup. I do not get the inline popup, because content security policy automatically disables inline code, inline style sheets, if it is applied to styles,
36:22
and dynamic code evaluation. So all calls to eval in JavaScript, new Function, setTimeout with a string value as the first argument, stuff like that. All that is disabled, and also inline styles, so my beautiful orange color is gone. Probably that's nothing you cry too much about,
36:43
but the other things are probably pretty interesting. So if you can manage to externalize all your JavaScript code, if you can manage to do that, then content security policy is something for you, if you ask me.
37:01
Because there's fine grained configuration for that, so you can have a set of rules for style sheets, and for JavaScript, and there's some work involved, because if you have an ad server included, that ad server might load some JavaScript from some other domain, so you have to add these domains into the policy. So there is some effort involved. But still, you could, at least in theory,
37:23
close many, many attack vectors, if your technology allows you to externalize JavaScript code, and the set up of your websites and pages allows that. So if you're using ASP.NET MVC, it's very well possible. If you use ASP.NET Web Forms, which renders quite a bit of inline styles
37:41
and inline JavaScript code, there is some extra work involved. And recently I went to a customer site, and they're using Wicket, the Wicket framework. And the Wicket framework, I've never seen anything like this. They're using inline JavaScript as if there were a reward for doing that. And yeah, basically they have had bug tracker tickets open
38:02
for almost two years, and they say, oh, no, inline JavaScript code is fine, because CSP, no one's going to use that. Yeah, I think someone will use this. All right, so that's basically, in my opinion, the best approach to eventually get rid of cross-site scripting,
38:21
so I invite you all to evaluate content security policy and maybe to get rid of cross-site scripting once and for all. Okay, so far so good. Now, I've talked to the other people in the room now for the younger people, Facebook. I don't know if people are still on Facebook, but yeah. So Facebook had some issues as well, and I picked a specific issue,
38:41
so this is not a forged screenshot. So this Stefan, I know this Stefan personally. Well, actually, I know two Steffans on Facebook with the same last name, and they both fell in the same trap. That was really, really funny. So I suddenly saw that Stefan was kind of supporting this page
39:01
he obviously found, which, let me just say, irritated me a bit with Stefan. So I kind of started to research, and then I called. And what happened is he went on a site, and that site said, please click here to get access to content, and he clicked.
39:24
But what he didn't see was that the click here button was placed on a page exactly over an iframe, and in that iframe was the Facebook like button. So he did click the Facebook like button,
39:41
but he didn't see it. I mean, with cross-site request forgery, we saw how that's done, but, I mean, how can you defend against that, because the victim is clicking the button, right? So if, in that case, Facebook looked in their logs, they would see: come on, you clicked, you clicked on that button. The request was perfectly fine.
40:02
And yeah, that's called click-jacking. It's like a combination of clicking and hijacking. Because you click, but well, you do not know where you're clicking. And that attack only works because the page with the Facebook like button can be put into an iframe.
40:22
And for certain variants of that Like button, that's still a necessity. How can you really defend, or how can Facebook defend, against this kind of attack? Well, the attack works because we can frame the page. And again, the older people in the room still remember the 1990s and remember frames, right?
40:41
So sometimes other pages were put in a frame, thus kind of stealing content. And back then, frame-busting scripts were kind of the hype because they allowed people to kind of escape frames. And that's possible again, but there are more modern approaches to that. So for instance, there's an HTTP header,
41:02
X-Frame-Options, supported since IE8, so kind of old stuff. But this allows us to prevent a page from being loaded in an iframe, if you use the value DENY. If you use the value SAMEORIGIN, you can load the page in an iframe, but the outer page has to have the same origin as the inner page.
41:20
So you can frame your own content, but no one else can frame your content. And once CSP 2 is more widespread, it will get even better, because there's a directive, frame-ancestors, and then you can provide a kind of whitelist of servers, of URLs, where this can be done.
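(A hedged ASP.NET sketch of both headers, the established one and its CSP 2 successor:)

    // Somewhere in the response pipeline, for example in Global.asax:
    Response.AppendHeader("X-Frame-Options", "SAMEORIGIN");  // or "DENY" to forbid framing entirely

    // CSP 2 equivalent for browsers that support the frame-ancestors directive:
    Response.AppendHeader("Content-Security-Policy", "frame-ancestors 'self'");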
41:41
Okay. So far I think we've seen more or less the classical examples, but there are a few more I would like to show you in the last part. And I'd like to start... Actually, how many of you are .NET developers? Just a show of hands. Okay, so like 80%, just like the opposite, like the market,
42:01
where it's kind of 20%. Great, because I have a great example. It has nothing to do with .NET, but the kind of vulnerability exists in .NET as well. So I'd like to show you, and I'll talk about GitHub. And there once was a GitHub bug tracker entry: I'm Bender from Futurama.
42:24
This issue was opened in 1001 years. Okay, that's interesting. I mean, personally, I would be really interested in time travel. But how did the guy do that? And, well, they're using a different technology on GitHub.
42:44
But it could happen if you're not careful if you're using ASP.NET by yourself. So what I did is I created a simple model. So let's say I write a blog, and I have a very simple model for that blog.
43:01
I have an ID. And then I have properties, title, text, and date, all public. And in the constructor, I set the date to the current date, right? Nothing special. And what I then did is I used the scaffolding feature of Visual Studio to get this ASP.NET MVC complete CRUD controller
43:25
and the views for listing, editing, inserting, et cetera. So really, I did nothing wrong, so to speak. And this is the end result, right? So I have here a kind of list of entries, title, text, and date.
43:43
And when I create a new entry, then, well, I can only add the title and the text. So what I did is I went to the razor template and removed the input field for the date, right? Because I want to add an entry, the date is determined automatically, so I do not need the UI.
44:01
And so what happens here is I activate Tamper Data. Okay, you don't see Tamper Data, which is a shame. So I activate it here, so I start tampering. And then I add something, whatever: MVC 2016, and then, whatever, party tonight.
44:24
And when I click on create, I can tamper the data. And well, you see that I have, well, you can kind of see. Let me try to zoom in a little bit. So we have a field for title, and we have a field for text.
44:41
And what happens then in MVC is model binding. Model binding means we have our model with the properties title and text. And in our action handler method, by default, we expect something of a type, blog entry in my case. And model binding then uses POST parameters and also routing info and GET parameters, whatever's available,
45:02
and there is a list in which order this is done to kind of create a blog entry instance. So far, so good. But the problem is, if you look at the code again, and we happen to have a look at the code, there's also a property date. So in theory, we could just add a new entry.
45:25
Oh, sorry for that. And I just say date. Maybe you can even look while I'm typing. Date equals, and I never know how the order is, so let's just try something like that. 12, 34, 56. Now, this is now being sent back to the server,
45:44
so we now have a date field also as part of the HTTP request. We don't have a UI for that, but I can forge the HTTP request. And yeah, then I go back to the list of entries. Let me, yes, yes, yes, let me end that real quick.
46:03
Let's go back to the browser. And here at the end, we see, oh yeah, there's a party tonight, which I will be writing on the 11th of November 2016. So the same vulnerability if you use the old templates. In the current templates, what you'll find out is
46:22
that you have a whitelist or a blacklist in the action method for which values should be bound. The blacklist is the Exclude attribute; the whitelist is the Include attribute. So nowadays Visual Studio automatically adds those attributes. If you've ever wondered why, now you know.
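(A hedged sketch of what such an action looks like with a whitelist; the BlogEntry model mirrors the demo, with assumed property names:)

    using System;
    using System.Web.Mvc;

    public class BlogEntry
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Text { get; set; }
        public DateTime Date { get; set; }

        public BlogEntry()
        {
            Date = DateTime.Now; // set by the server, never by the client
        }
    }

    public class BlogEntriesController : Controller
    {
        [HttpPost]
        [ValidateAntiForgeryToken]
        // Only Title and Text take part in model binding; a forged Date field is ignored.
        public ActionResult Create([Bind(Include = "Title,Text")] BlogEntry entry)
        {
            // ... persist the entry and redirect
            return RedirectToAction("Index");
        }
    }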
46:40
Because then we are only binding the data that's allowed. And again, I mean, if you're careless, then this is what might happen. And as convenient as model binding is, it can also be dangerous. Well, you just take it for granted. I like model binding. I really do.
47:01
But yeah, make sure that you know exactly which values are bound, and whether there are any values you do not want people to bind. So yes, use a whitelist or a blacklist. All right, since we had a kind of Microsoft-y example here, let's move over to Microsoft directly. And I'd like to show you an indirect web vulnerability.
47:25
I found this really fascinating, so I'd like to present it to you today. And that happened last year. It's already fixed, so all good. And as you know, we have data annotations. And those annotations can then be used for automated validation of input data.
47:43
In HTML5, we have input field types, email, URL, and phone. And they kind of can get validated. And that idea was also put into MVC. And well, this is exactly how it worked.
48:01
I happen to have the source code open. And let me actually bump up the font size a little bit. So we have here the C# code. And what basically happens is we have our data type, EmailAddress, right? And it's just defined somewhere else.
48:20
And then we have here a very long, and I really mean it, a very long regular expression, right? So if you have the Mastering Regular Expressions book by Jeffrey Friedl, where he kind of reads the RFC and turns it into a regular expression, I think it's four printed pages, so it's a lot, right? And you know how I validate email addresses?
48:42
I validate them by just checking: is there an @ character in it, and is there a dot to the right of it, right? That should be good enough. I mean, if someone intentionally gives bad input to your site, I mean, what can you do? But if there's an unintentional error, whatever, someone mistypes the @ character, whatever, then you want to help them.
49:00
Okay, anyway, so this is what they did. And then someone found out something. I reference the blog here, but I'll again show you the gist. So here's one of my favorite tools, RegexBuddy. What RegexBuddy does is it kind of helps you decipher and also set up regular expressions, because it explains those regular expressions to you
49:22
and also lets you test and debug those regular expressions. So up here is the regular expression I copied from the reference source of ASP.NET MVC, and here in the middle, unfortunately I can't make it much larger with the screen, but here is a weird input.
49:41
It's T at T dot T dot T dot T dot T dot, and so on and so forth. And well, if I validate this input, I can go here on debug, you see, well, you know, they are checking, checking. Whenever you see something red, it means backtracking, right, okay? And you see, well, at the end, there's a little bit of backtracking,
50:00
but after 179 steps, the validation of that email address is done. Now, actually, this is the updated code. If you go back here, maybe to the beginning, you'll see that they have the regular expression, and what they then do is, before they match it,
50:21
they set up a timeout of two seconds. Seriously, if you do regular expression matching in your application, have you ever set a timeout for that? No, why? I can tell you why they did, because what I happen to have in my clipboard is the old regular expression as of last May.
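(For reference, a match timeout in .NET looks roughly like this: a minimal sketch, not the framework's actual validation code, with a deliberately simple pattern:)

    using System;
    using System.Text.RegularExpressions;

    class RegexTimeoutDemo
    {
        static void Main()
        {
            // The point here is the timeout, not the pattern itself.
            var email = new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$",
                                  RegexOptions.None,
                                  TimeSpan.FromSeconds(2)); // give up after two seconds

            try
            {
                Console.WriteLine(email.IsMatch("t@t.t.t.t.t.t.t.t.t.t.t.t."));
            }
            catch (RegexMatchTimeoutException)
            {
                // Treat a timeout as "not valid" instead of burning CPU forever.
                Console.WriteLine(false);
            }
        }
    }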
50:42
And I will now go to RegexBuddy, I paste it, so you see, it's a little bit longer, and if you looked at that a bit closer, you would see there's some backtracking involved, right? So lookahead, lookbehind, stuff like that. All that's fun about regular expressions, not really. And so we go in here, and then I click on debug,
51:01
and I mean, this is a really fast machine, but you see, there was this kind of one second it took, and we start backtracking early, then, oh, we're making progress, progress, backtrack, backtrack, backtrack, backtrack, oh, forth, back, forth, back, forth, back, forth, back, forth, back, and so on and so forth, and so on and so forth. Unfortunately, I don't have so much time left, but rest assured, after one million attempts, they gave up.
51:23
Now imagine what happened on the server if that regular expression was run. Yeah, exactly, it ran forever, or kind of forever. Also, the validation regex would be handed to JavaScript as well, for client-side validation. What happened with the JavaScript code?
51:42
It ran forever. Well, not forever, because eventually the browser said, oh, a slow-running script is kind of hindering your browser. Do you want to kill it? So they changed the regular expression, and they added the timeout. And I thought this was a very novel kind of attack. I found this really interesting, and just want to show you,
52:03
I mean, there are the usual attacks, and we have to defend against them, but still, all input might be evil, might be unexpected. And if you have a complex regular expression, this complexity might mean you need a lot of resources. All right, looking at the clock, I think we still have time for one more,
52:24
which I would be more than happy to show you, and what I would like to pick is Adobe. Now, Adobe was also a victim of one of those leaks. And, yeah, so here's one of the pages.
52:42
There are some interesting things in here. Nearly three million user accounts. Originally. Then they said, oh, maybe it's 38 million. Then eventually it was 150 million user accounts. But what Adobe did is, the passwords were encrypted. And the key they encrypted the passwords with is not known.
53:05
Thumbs up. Unfortunately, there was no salting involved, and always the same key. So if two of us had the same password in the database that was leaked, the same encrypted value could be found.
53:22
So what, of course, did security researchers do once the leak was out? Well, they did a group by select statement. What's the most common password? Well, I guess one, two, three, four, five, six. So, okay, you could try, okay? Fair enough. But there was another problem.
53:41
There were password hints. I don't like password hints. Password resets, I'm fine with that, but password hints, I mean, your mother's maiden name. If you are kind of like a C-list celebrity, then maybe Wikipedia knows your mother's maiden name. So that's just not secure.
54:01
And so what Adobe did is, you could provide your own remember value, right? So for your password. I mean, Windows has that too, right? So if you have a Windows password, you can say, okay, here's some string or some text that will remind you of the Windows password. Those were not encrypted.
54:21
So those password hints were unencrypted in the leak. And then someone had a great idea. It was based on an XKCD comic or cartoon. And they created this. The Adobe crossword puzzle.
54:40
Okay, so let's start with, let's say, one down. One down was among the top 100 passwords, right? One down is encrypted 2AZL4 and so on and so forth. No idea what it is. But the password has been used very often. But the password hints are unencrypted.
55:01
So the password hints for one down, 10 letters, is Adobe. Adobe X2, Adobe 2, Adobe twice, twice Adobe. I don't know, maybe you have an idea. I'm still pondering about that one, but let's try this. Okay, what's next? 10 across. Okay, 10 across. Let me in. Let.
55:22
Usual. Knock, knock. LMI. The usual. Normal. And yes, indeed: let me in. I think "open sesame" was in there as well. That's awesome. I mean, that really shows if there's such a leak.
55:43
We really have a problem. I mean, at least there was a leak where the passwords were encrypted, right? But still, thanks to the password hints, a lot of those passwords could be found out. This just shows us, I mean, it can happen to anybody. There are now rumors going around this week that Twitter was hacked.
56:02
You may have read that, right? Because some user data passwords were found out. But the way it looks as of this morning, that was the last time I could check and research, is the problem was mostly that users had malware on their machines.
56:21
And that malware then kind of read out the password store of their browsers and sent it along. And, well, if you're a pro like Mark Zuckerberg, then maybe you have used your password with some other services as well. And that's what the problem really was. So, well, for passwords: do not encrypt those passwords.
56:44
Use a hash, but don't use MD5 or SHA-1, right? Use something real. And do not provide password hints. Just let users reset their passwords with, let's say, with an email. Maybe even use two-factor authentication on your websites.
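(One common way to follow that advice is PBKDF2, a hedged C# sketch via Rfc2898DeriveBytes; the SHA-256 overload assumes a newer framework version, and the iteration count is only an example:)

    using System;
    using System.Security.Cryptography;

    static class PasswordHashing
    {
        const int SaltSize = 16, HashSize = 32, Iterations = 100000;

        public static Tuple<byte[], byte[]> Hash(string password)
        {
            // A fresh random salt per user, so equal passwords do not produce equal hashes.
            var salt = new byte[SaltSize];
            using (var rng = RandomNumberGenerator.Create())
            {
                rng.GetBytes(salt);
            }

            // Many iterations make brute-forcing a leaked database expensive.
            using (var kdf = new Rfc2898DeriveBytes(
                password, salt, Iterations, HashAlgorithmName.SHA256))
            {
                return Tuple.Create(salt, kdf.GetBytes(HashSize));
            }
        }
    }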
57:03
There are, by the way, good sessions about how to do that at this conference as well. So, sometimes it's the user's fault. More often than not, it's the developer's fault. We all can make mistakes. I've made a lot of mistakes in the past. I will continue making mistakes, no matter how hard I try. But, I mean, in the end, it's our responsibility
57:22
to make our applications as secure as we can. And also the big companies, they fail. And, well, I hope that you enjoyed the talk and that you won't fail as spectacularly as those big companies. And, with that, enjoy the rest of the day. Enjoy tonight's party and safe travels back home tomorrow and to next time.
57:41
Thank you very much. Thank you.