Using Ruby In Security Critical Applications
Series: Ruby Conference 2015 (talk 6 of 66)
License: CC Attribution 3.0 Unported
DOI: 10.5446/37569
Production Place: San Antonio
Transcript: English (auto-generated)
00:16
Welcome everybody to my talk. Thanks for coming, hope everyone's having a good conference.
00:20
I know I am. Is everybody learning a lot? Excellent. I try to leave a few minutes when I do talks, because I learn so much at conferences that I want to talk about the stuff I'm learning in other people's talks as much as what I came to talk about. So bear with me if I get on a side note about something I just heard; the last talk I was at was phenomenal. But anyway,
00:41
I hope you guys get a lot out of my talk today. Before I say anything else, let me get this disclaimer out of the way, I work for the US Naval Research Laboratory, but my talk today is my opinions based on my professional experience and my personal experience. My opinions don't represent those of the US Navy, the US government, anything like that.
01:00
As a matter of fact, if you do any research, you'll probably find that there's a lot of people in the government who disagree with me on a lot of things. Also, another disclaimer, I say we a lot when I talk because I have a really close-knit team and it's an awesome team and we argue about stuff we don't always agree, but when I say we, I'm not talking about Big Brother or all the developers I work with, I'm just kind of subconsciously referring to the fact that we try to make
01:20
as many decisions as we can as a team, so I apologize in advance when I say we. So enough about that; a little bit about me. I consider myself a good programmer, not a great programmer, but a good programmer, and I like to keep things simple. I study a martial art called Aikido, and in Aikido we have a lot of sayings, and one of them is that an advanced technique is just a simple technique
01:41
done better, and I like to apply that not just in martial arts, but in all aspects of my life, and programming is no exception. So everything I do, everything I talk about, the underlying theme is keep things as simple as you possibly can. So just a little bit about this Naval Research Lab thing. It was started in 1923 by Congress
02:01
by the recommendation of this guy, Thomas Edison, who said we needed a Naval Research Lab, and so we have one. The group I work in, the systems group, has come up with some pretty cool technology you may have used. Most notably, the onion router, Tor, came out of NRL, and a lot of the foundational technologies in virtual private networking were developed
02:20
by Catherine Meadows and Randall Atkinson, who were two doctors at NRL. The Vanguard satellite program came out of NRL, which was America's first satellite program. Of course, Sputnik was first, out of the Soviet Union. And there's a great paper from 1985 called Reasoning About Security Models. It was written by Dr. John McLean, who's my boss's boss's boss's boss's boss's boss, but anyway, it's a great paper. It talks about System Z, and if you're into academics,
02:42
it's a really cool theory about security. So all that said, my talk is not about anything military related. It's not academia. It's not buzzword bingo. I had a really cool buzzword bingo slide, but I took it out because CC's was way better, so anyway. What am I going to be talking about?
03:01
Well, I want to spend some time unpacking what I mean by security critical, like we just heard in the last talk. People throw phrases around, and it means different things to different people, so I want to unpack what I mean by it. Sorry about that. I also want to work through a use case. Now, this use case isn't an actual use case, but it's kind of a composite of experiences I've had,
03:21
so it borrows from systems I've worked on and developed in, but it's not actually representative of any system we've ever built, but the main reason I'm here is this last point, next steps. We've got a lot of initiatives we are interested in pursuing to improve our ability to use Ruby in security critical applications, and some of them we know how to do well.
03:41
Others we have an idea how we'd do it, but we probably wouldn't do it well, and others we know we can't do, and so if anything you see on my next step slides rings a bell with you, please come talk to me after the talk because we're interested in getting help from people who want to do cool stuff with security in Ruby. So anyway, there's a great talk that I saw
04:03
that influenced my thinking about this subject with Ruby. Back in 2012, I was at a conference called Software Craftsmanship North America. I really recommend you go sometime if you haven't; it's a great conference. Uncle Bob gave this talk there called Reasonable Expectations of the CTO. You probably haven't seen it. It's on Vimeo. If you haven't seen it, look it up. I'm not gonna summarize it for you, but watch it,
04:22
and as you watch it, just add security to the list of problems that systems have. It's very applicable to the security problem as well, and it rings even more true today than when he gave the talk in 2012. So when we talk about computer security, one of the things we talk about a lot is assurance, and assurance is usually used as a verb: it
04:41
is something that I do to assure you that everything is gonna be okay, that there's no problem. Well, when I talk about assurance, I'm not talking about telling you everything is gonna be okay, because what's the first thing you think when I tell you everything's gonna be okay? Something's wrong. So I don't want to assure you of anything. What I want to do is talk about giving you assurances
05:01
that allow you to make a decision of your own, and even if you don't like the assurances that you get when you do a security analysis on something, at least you know where you stand, and that's really useful. So when I talk about assurances, I'm not trying to tell you everything's gonna be okay. I'm talking about evidence. We've all seen this chart before, and whether you're trying to make money
05:21
or make the world a better place or solve a security problem, this chart is not avoidable to my knowledge, and when we go about solving a security problem, we bump into it too, and we look at it and go, well, we got a few choices. We can do something really clever that's gonna outsmart the attackers. We could go buy a really cool library
05:41
that's gonna provide us all this super awesome security and solve all our problems, or we could hire some outside consultant who's gonna assure us that everything's gonna be okay. Well, don't do any of that, because attackers are really, really clever. They're more clever than me. They're more clever than you, and what's more is there's lots of them, and they have lots of time. You build a feature. It's onto the next feature.
06:01
They are out there hammering on your stuff day after day. Sometimes teams of them, if you're unlucky enough to be a target, most of you aren't, but we're going to make mistakes in our code. It's just a fact of life. There are going to be bugs. There are going to be security bugs, so I'm gonna talk about what we can do to defend ourselves.
06:21
A key point I wanna make today is that a security-critical system should have the right security controls in the right places and with the right assurances. I'll say that again: a security-critical system should have the right security controls, in the right places, with the right assurances. Now, I like to do that with architecture. We construct architecture,
06:41
and a lot of times when we're building code, the principles that make code awesome are the same principles that make code secure. We wanna reduce complexity. We wanna localize functionality. We wanna improve test coverage, things like that. But we also wanna make sure we have the right controls in the right places. A firewall at the front door isn't gonna keep all the bad guys out,
07:01
just like a guy with a gun in your server room isn't gonna keep hackers out of your server. You've gotta not only consider architecture of your code and design and test coverage, but you also need to think about what controls you're using where, and how, more specifically, we layer those controls in our system. Some of these acronyms you may not recognize.
07:20
I'll explain them later, but these are really the security control layers that you should consider at a minimum in your application. You have your operating system. You have your security framework, and then you have your application framework. These are what we're gonna layer in our assurances. But what are these assurances? Are they something squishy that we can't measure? Well, kind of, but we do have the ability
07:40
to talk about them in a semi-structured way, and I like to talk about them in terms of this NEAT principle. NEAT stands for non-bypassable, evaluatable, always invoked, and tamper-evident. The more of these questions you can answer about your security controls nodding your head instead of shaking it, the more security you're gonna have in your controls.
08:01
So I just want to go through these real quick. Non-bypassable is pretty easy to describe. If you've got a circuit breaker, it keeps your electronics from getting fried when too much electricity is going over the wire. It's going to trip the breaker and keep the electricity from flowing. But if there's a wire going around the circuit breaker, directly from the power grid to your laptop, it's not going to do you any good
08:21
even if it does trip because you're gonna get fried. So for a good security control to work, it has to be the only way from point A to point B. Evaluatable is a little harder to talk about. There's a lot of things like symbolic execution engines and static analysis tools that we can use to measure and evaluate code security. But for most of you here, I think a great thing to do
08:41
if you haven't done it, is to follow the instructions on the screen, and you can get a good idea of how readable, how evaluatable your code is. The lower the score the better, but if it's code that needs to be really secure, you should definitely be below 20.
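The on-screen instructions aren't captured in the transcript, but the score being described matches the flog gem, which comes up again later in the talk. A minimal sketch of getting a score, with hypothetical paths:

```ruby
# gem install flog, then from a shell: flog app/security/
# or programmatically:
require 'flog'

flog = Flog.new
flog.flog 'app/security/write_router.rb'  # hypothetical security-enforcing file
flog.report                               # prints a complexity score per method;
                                          # for security-enforcing code, aim below 20
```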
09:00
So keeping things small, reducing branches, and not using things like eval or call are good things to do when you're in a piece of code you consider security enforcing in your application. Always invoked: I think the HtmlSanitizer in ActionView is a great example. When it first came out, it was something you could call if you wanted, but you could also forget to call it really easily. At some point, they brought it into ActionView, I think,
09:22
and made it the default, so you'd have to go out of your way to bypass it. I haven't used Rails in a while. I'm one of those weird Ruby people that doesn't use Rails very much. But this next one is a C example, actually, and I like having things like this littered in the headers, because it makes the compiler insult people who do dumb things. Not to be judgmental.
09:40
It's a learning experience for all of us, and we all get a good laugh out of it. But type this in and see what your compiler says to you. And then tamper-evident. This is another one that's a little tricky to describe. But this guy here, he's a coal miner, and he's got a little canary with him. Back in the day, coal miners used to bring these canaries into the coal mine with them, because when toxic gas leaked out of the rocks,
10:01
it would kill the canary well before it would kill them. And while it's kind of gruesome to think about, it was a good way for this guy to get home to his family with the technology he had. We do something similar in binaries every day. We put these little cookies on the stack, so if there's a buffer overrun in your application, the A's, the NOP sled, whatever it is, is going to crush that cookie if the attacker's not careful,
10:21
and we can exit the program safely. It's a huge oversimplification, and it's not bulletproof, but if you're interested in more, I've got a little link on my slide here. And if you just search for stack canaries or stack-smashing protection, you can learn a lot more about the ways we protect binaries. So I've got my checklist.
10:40
I've got some controls I want to talk about today, and I've got some assurances I want to apply to those controls and see how we're doing. So we're going to use this checklist as we go through the rest of this brief. Like I said, the use cases I'm going through, it's one example in three parts, and it's not a system I've actually built. It's little pieces here and there from different projects I've worked on that I think represent good explanations
11:01
of these security principles. So at the base of your system is your operating system controls. No matter how secure your code is, if your operating system is not configured properly, you're screwed. And the main security feature in your operating system is access control. You have the right security control to security geekery, talk about mandatory access controls,
11:22
and they can get complex, but they're actually pretty simple. It just means something that the administrator sets up at boot, and it can't be changed. So the neat thing about mandatory access controls is they're nice and reliable. They don't change. They also have a pretty static code base supporting them, because they're not changing. They get set at boot, and they don't change, so it's easy to make sure that code works well.
11:42
So at the base of your system, it's good to use your operating system's access control mechanisms, preferably in a mandatory way as opposed to a discretionary way. It keeps your system simple. So a use case might be: you've got multiple databases, and you wanna be really sure that people
12:01
on different networks can only read from the database they're authorized for, and you wanna be really, really careful about what gets into those databases. There are all sorts of examples of why we'd wanna do that. But rather than trusting our code, which does the very best it can to make sure there's no SQL injection in our POSTs, we basically give our applications
12:20
read-only access to some of these databases. And that way, no matter how bad our network-facing application is, there's no way it's going to be able to read from the databases it's not allowed to see, and it's not gonna be able to write to any of the databases. And then we simply implement a piece of glue, a little router, and we can do that very securely. All it does is make sure that the right requests go to the right places.
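A minimal sketch of that glue (all names hypothetical; the real enforcement is the read-only grant the administrator sets up, not this Ruby):

```ruby
# Reader apps connect with database roles that were granted SELECT only,
# so even a fully compromised reader can't write. Writes funnel through
# this tiny router, which is small enough to review line by line.
WRITE_DESTINATIONS = {
  'reports' => 'db-reports.internal',
  'audit'   => 'db-audit.internal'
}.freeze

def route_write(db_name, payload)
  host = WRITE_DESTINATIONS.fetch(db_name)  # unknown database: fail loudly
  forward(host, payload)                    # hypothetical hand-off to the one writer
end
```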
12:41
This way, we can ensure that our information flows are set up in a very secure manner. So let's validate that. Non-bypassable: the fact that these databases have read-only permissions means an attacker is basically gonna have to own your box to get around it. So you have reasonable assurance that a write is gonna have to go through the data flow pipeline I've created.
13:01
Evaluatable: well, the security-critical piece of code here is just the simple little router that takes write requests and sends them to the database owners. So I can keep that pretty small, pretty evaluatable, and I can use a type-safe language like Rust or something like that. Always invoked: every single file system call you make has to go through the kernel.
13:21
That's pretty reliably always invoked. And then I try to come up with a good example for tamper evident. Making sure your operating system isn't being tampered with is kind of outside the scope of this talk. So if you're interested, let me know, but I skipped it because I didn't wanna bore you all to death with it. So, some takeaways. Use access control mechanisms in the operating system
13:41
if you can, and then wrap them directly into your application, maybe with FFI. But don't stop there, because it's a pain in the butt to develop your application with these things in place, and more to the point, if you screw up, you can crash your development box. You don't wanna do that. So do your day-to-day development with a stub.
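A sketch of that stub pattern (the FFI binding below wraps an ordinary libc call purely for illustration; the real MAC wrappers aren't shown in the talk):

```ruby
require 'ffi'  # the ffi gem

# Production: a thin FFI wrapper over the OS call we care about.
module OsPolicy
  extend FFI::Library
  ffi_lib FFI::Library::LIBC
  attach_function :getuid, [], :uint  # stand-in for a real MAC-related call
end

# Day-to-day development: same interface, no privileges required,
# and no way to wedge your dev box.
module OsPolicyStub
  def self.getuid
    1000
  end
end

Policy = ENV['REAL_MAC'] ? OsPolicy : OsPolicyStub
```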
14:01
We ended up in a situation where we had a third party that was gonna help us write our application, and they didn't have our MAC infrastructure, so we just gave them this stub. They wrote a really cool app that we couldn't have written, then they sent us back the code, and it was relatively easy for us to take the stub out and integrate the code with our application. And then finally, it's only mandatory access control
14:20
if the application doesn't have the ability to change the policy. So if you can, avoid giving your application system privileges. If you look at Stagefright in Android, it's a really cool library. It does lots of awesome stuff. I could never have written it, and I don't blame them in the least for making a small mistake. It's inevitable. But if it hadn't had system privileges, and maybe it had to,
14:41
I don't know, I haven't researched it that well, but it wouldn't have been as catastrophic as it was, had it only had user privileges. So think, I'm not saying never give your software system privileges, but think really hard about it, because if you make a mistake, it's no longer your box, it's their box. So some of the things we wanna do is make it easier to make test doubles
15:02
for our file system objects, because we use the operating system so much in security. I've talked to some of the guys from Test Double, and I don't know if it's such a good idea, but it's something we've been playing with, and if you wanna convince me it's a bad idea, or you wanna help, let me know. The other thing is, like I said, we're looking at using Rust more for our type-safe security-critical code
15:22
and our performance-critical code, and everything I've learned about Rust-Ruby integration, I learned from this blog entry I have here. I'm not very good at it, so if there are more resources that any of you know of, please let me know. Huh? Heh heh heh. Good to know.
15:41
So moving on through the layers of the onion of our use case is what I am calling our services security framework, and I didn't have a really good name for this, but basically if we're gonna separate our application into a bunch of processes, we're gonna have to integrate them together in some way, and those integration points are great places for attackers to break your system.
16:00
Things like inter-process communication or database access, these are great places where attackers are able to get in and do things like CSRF, internationalization attacks, SQL injection, and a lot of the time it's hard for us to get our paying customers to understand the sorts of things that can happen if we don't do a really good job with this. And, I don't know, have any of you guys ever read Ender's Shadow?
16:21
I know a lot of people have read Ender's Game, but Ender's Shadow's a little less well read. There's a great scene in there where Bean's talking to his boss, and he basically points out that as your attack surface grows, defense becomes impossible, and with these sorts of systems that we're building, our attack surface is growing. Fortunately, not as big as the scope of the aliens
16:41
in Ender's Game and Ender's Shadow. So it's not hopeless, but it's bad, and I don't have enough confidence in my ability and my team's ability, even though we've been doing this for a long time, to cover every nook and cranny such that our code can't be changed in 10 years to allow this stuff through. So we stick with this principle of separate, isolate, and integrate, and essentially what we're trying
17:04
to do is, every time a process component that's been separated ingests data, it uses some sort of domain-specific language to enforce a security policy such that it's protected, and then when it sends data out, it also tries to protect the data
17:22
and protect the next process. So that doesn't make much sense. I tried to come up with a better way to explain it, but I think I'm just gonna have to use an example. So let's take a really, really oversimplified example and say that we wanna make sure that no semicolons make it into storage. Now there's a lot of web attacks that require semicolons, and so I'm not saying you wanna do this or not,
17:42
but it might be a useful policy to use in trying to blunt a lot of web attacks. The example I'm about to give does not take into account internationalization considerations, so don't just use it as-is. Internationalization is important for apps, but it's also important for security, so I just wanted to throw that in there. Keep internationalization in mind
18:01
when you're building your app, especially with regard to how it impacts security. So let's look at some of this application-layer pre- and post-processing we do. What's this code doing? Well, it's not entirely clear. It looks like it's doing some sort of escaping to turn the semicolons into something else, because the semicolons might be legitimate, and then it's doing some sort of resolution
18:20
to make sure that the semicolon escape key doesn't show up in there, and then it sends it off, and then when it goes to render the data, it goes to resolve it back to what it was, and do I trust this code? Hmm, kind of, but it's also kind of ugly, and this is just one policy. Imagine an application with 500 or 600 policies that you have to apply. This is gonna get kind of ugly.
18:42
Well, let's look at the other side, the storage side. I trust this code a lot more. Its job is very simple: it looks for semicolons. There's not supposed to be any semicolons in the data there, and what's more, if you look at line nine, it doesn't trust the caller to check the return code before it moves on. If code is security critical, you don't want to trust
19:01
that the caller is gonna check the return code, because maybe you're checking it now, but maybe someone's gonna introduce a problem in two years that blocks the check. So if it's actually security critical, don't rely on the caller to check your return code. Handle it right there.
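The slide code isn't in the transcript, but the storage-side check being described has roughly this shape (a hedged reconstruction, not the actual slide):

```ruby
class Storage
  def self.persist(data)
    # The app layer was supposed to have escaped every semicolon already,
    # so seeing one here means a serious bug or an active attack.
    # Don't return an error code and hope the caller checks it.
    abort 'SECURITY: semicolon reached the storage layer' if data.include?(';')
    write_to_disk(data)  # hypothetical
  end
end
```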
19:22
Die might seem a little extreme, but like I said, the application was supposed to have gotten rid of all the semicolons, so if there's a semicolon there, either there's a horrible problem with our application or somebody's taken it over. So this is an example of how you can do tamper evidence without using fancy things like stack canaries or anything like that. Ruby's really awesome at monkey patching code into classes, so there's all sorts of ways
19:40
you can trigger these hooks so that they automagically get called. We learned in the last talk, which was really cool, how refinements could even be used to make sure these things get called. And again, the point is that the check was very small. The normalization of the data in preparation for storage may have been complex, but it allowed for a very simple check at the time of storage,
20:02
and that brings up a point that I want to get to in just a minute, but first I wanted to talk about some other cool technologies that are related. I don't know if any of you guys have used ANTLR or Parslet or anything like that. Parsers are cool, but it's always hard to figure out what to do with a parser once you've got that AST. Tools like ANTLR and Treetop and Parslet and others
20:20
make it really easy to hook behaviors into your code when the parser hits certain things, so if you need to do content validation or content parsing, you should take a look at those projects.
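For instance, sticking with the toy semicolon policy, a tiny Parslet grammar can reject disallowed content at ingest (my sketch, not a slide from the talk):

```ruby
require 'parslet'

class SemicolonFree < Parslet::Parser
  rule(:safe)     { match['^;'] }  # any single character except a semicolon
  rule(:document) { safe.repeat }
  root(:document)
end

SemicolonFree.new.parse(untrusted_input)  # raises Parslet::ParseFailed on a ';'
```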
20:42
Another really cool tool is checksec.sh. It's literally just a bash script, and it analyzes your binaries to look for what sorts of exploit mitigations have been compiled into them. We use it all the time, and not only that, if you go to the website, there are all sorts of links to all the security wonk stuff; if you're interested, you can learn a tremendous amount about binary exploitation just by following the links on that site. And then PoC||GTFO. Has anyone read PoC||GTFO? Think of it as _why the lucky stiff,
21:00
but for security geeks. It's really funny. Sometimes it's hard to follow because it gets pretty technical, but it's really funny. And then finally, thespanner.co.uk is a really neat blog where he talks about ways he breaks web applications. I haven't found any better than that one. So anyway, a couple things. I don't know if any of you guys have ever worked with SELinux or XACML, but they're really complex policy languages
21:23
that can do everything. They're very powerful, and they're very good, and people do great things with them, but I have trouble keeping all this data in my head when I'm trying to write policies. So I try to keep things simple and use DSLs that are kind of custom-oriented towards the problem I'm trying to solve. I think that's a very Ruby way to look at it, and it's a good way to look at it.
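A toy version of the kind of problem-oriented DSL being described might look like this (entirely illustrative):

```ruby
class Policy
  RULES = {}

  def self.forbid(name, &check)
    RULES[name] = check
  end

  def self.enforce!(data)
    RULES.each do |name, check|
      raise SecurityError, "policy violation: #{name}" if check.call(data)
    end
    data
  end
end

Policy.forbid(:semicolons) { |d| d.include?(';') }
Policy.forbid(:null_bytes) { |d| d.include?("\0") }
```

Each rule stays small enough to evaluate at a glance, which is the whole point.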
21:41
The other thing is to keep those checks as simple as you can at enforcement, not just for the evaluatable thing, but because there's this other class of bugs called time-of-check, time-of-use bugs. They're kind of obscure, but basically it means you do a check, you do some other stuff, and then you write to the database, and the data can change in the meantime. They're really, really hard to detect. Really good hackers are great at finding them and causing them to occur.
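The shape of the bug, as an illustration (not from the slides):

```ruby
# Time of check:
raise 'refusing to follow a symlink' if File.symlink?(path)
# ...other work happens here, and an attacker swaps a symlink into place...
# Time of use:
File.write(path, data)  # writes through a symlink the check never saw
```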
22:01
Your unit tests will almost never run into them, and if they do, you'll just assume it was a glitch and skip them. So if you keep your check simple, you'll avoid this whole really, really ugly class of security bug. And then a really good example, I have a link here: there's a guy named Mat Honan who works for Wired magazine. In 2012, there was this terrible hack, or a great hack depending on your perspective,
22:21
but they did a bunch of little things and then chained all of these hacks together. So if you ever hear the argument, well, they'd have to break this, and then they'd have to break this, and then they'd have to break this. Well, that happened to this guy, and so these things do happen, and it's a really interesting story. So next steps.
22:40
We, well, if I can get it to... So there was a talk that Tom Stuart gave in Barcelona in 2014 called Refactoring Ruby with Monads, and I like the idea of monads right now more than the practice, because I'm not particularly good at them, but I do believe that we can use monads to wrap the content we ingest from untrusted sources and ensure that it's properly validated before we store it.
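As a toy sketch of that wrapping idea (my code, not Tom Stuart's, and not a real monad):

```ruby
# Untrusted content stays wrapped; the only way to get the raw value out
# is to pass a validation, so "forgot to validate" can't happen silently.
class Untrusted
  def initialize(raw)
    @raw = raw
  end

  def validated(&policy)
    raise SecurityError, 'content failed validation' unless policy.call(@raw)
    @raw
  end
end

body = Untrusted.new(request_body)             # hypothetical untrusted input
store(body.validated { |d| !d.include?(';') }) # storage sees validated data only
```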
23:02
There was a good talk on Hamster; we've been looking at it because immutability also provides security properties, not just performance and code quality. So there's a lot we can do to improve the mechanization in our code to enforce that content is properly validated. The other thing is, we spend a lot of time writing security rulesets, and it gets rather mundane
23:20
if you take something like the SMTP specification. It takes a tremendous amount of time, and it's very boring to go through and write those. So we're looking to build out our toolset to automatically generate our rulesets from those things, and yeah, anyway, enough on that. So now we're to the part that affects most of you most of the time, which is writing applications,
23:41
and unfortunately, there's a lot of security decisions that have to be made in the app. They can't happen at the services and integration layer. They can't happen in the OS. They are in the app, and a great use case is XML. I try to avoid XML when I can, but sometimes it's unavoidable, and XML processing is very complicated.
24:00
So how do we build a high-assurance, secure XML processor? Well, we don't. It's really complex. If you've looked at all of the different XML libraries out there, some of them are really great, but they are complex. We're not possibly going to be able to get them to meet that evaluatable construct at the very least. So how do we do it?
24:21
Well, we use the same strategy I've been talking about the whole time. We break our goal into smaller pieces, and we separate them, and then we integrate them with well-understood mechanisms that the OS can enforce. Another thing we're introducing here is what I call binary diversity. There's a lot of different forms of binary diversity. It's a great new research area, but the simple act of using different libraries
24:42
for different functions makes an attacker's job much harder. So if you can do the separation and use different libraries, it gives you some level of protection. Again, it's not bulletproof, but it's very good, and like Justin Searls was talking about, by breaking down your functionality into smaller units, it's easier to test them.
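He returns to this in the Q&A: one library, REXML, does nothing but check well-formedness, and a different library, Nokogiri, does the deep parse. A sketch of that split:

```ruby
require 'rexml/document'
require 'nokogiri'

# Stage one: REXML only establishes well-formedness.
def well_formed!(xml)
  REXML::Document.new(xml)  # raises REXML::ParseException if malformed
  xml
end

# Stage two: Nokogiri parses for real, with well-formed input as a precondition.
def ingest(xml)
  Nokogiri::XML(well_formed!(xml)) { |config| config.strict.nonet }
end
```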
25:00
And this brings up another good point, which is you can have really secure code, but you might be using some library like Psych, which is a good library, but it has some obscure vulnerability in the underlying C native code, and you're screwed when it breaks. So these things are going to happen. We can do all the code analysis we want, but your application is going to break. So make sure that you've got fault isolation
25:22
built into your system. So how are we doing with, let's see, and it's not ... Okay, there we go. How do we do? Well, we've got that great non-bypassable pipeline. The only way to the data storage is through, or the next step in the application
25:41
is through my pipeline, so we've got non-bypassability. Evaluatable, we've got a big code base. There's really nothing we can do except for do our best to keep our flog scores low, make sure our unit test coverage is good. We've got good pen testers, whatever we want to do. There's only so much we can do to evaluate our applications. You can do a lot with Ruby
26:01
to make sure that your code is always invoked. For example, if you're using Rails, you could instrument your checks into to_xml so that they're automatically called.
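One generic way to get that property in Ruby is Module#prepend (my sketch, not necessarily the approach on the slide):

```ruby
# Prepending puts the check in front of every call to to_xml, so callers
# can't reach the unchecked method: the control is always invoked.
module CheckedSerialization
  def to_xml(*args)
    Policy.enforce!(super)  # Policy from the earlier DSL sketch
  end
end

Record.prepend(CheckedSerialization)  # hypothetical model class
```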
26:21
And those little brick walls I had on the last slide, they weren't just for decoration. There's a really cool tool called seccomp that we use a lot. Think of it as a firewall for your system calls. Every time you go to read a file, your process calls down into the operating system, which turns that into a bunch of opcodes and things that very few of us understand very well. There's a few of those we really all need, but there's a bunch of them you should never be using in production, like the high-performance instrumentation
26:41
to see how you're doing at the microsecond level, or ptrace, which is used for gdb. These are the system calls that you probably don't even know are there, but the attackers sure do, and that's where most of the kernel vulnerabilities lie. So you can use tools like seccomp to protect your application so that if there is a security failure, it can't be used to attack the operating system as a whole.
27:00
Even more controversial is this grsecurity tool. It's a patch, and you have to apply the patch and recompile the kernel, but it provides security controls that protect against whole classes of bugs. It's very controversial in the Linux community, but there was a really good article in the Washington Post on November 5th that gave a reasonable explanation
27:20
of spender's perspective on grsecurity versus Linus's perspective, so if you look it up in the Washington Post from November 5th, it's great. Also, if you're interested in the Internet of Things, there are a lot of tools, OpenEmbedded, Yocto, but we really like Buildroot in my shop. Buildroot's a great tool for building your own Linux distributions. It makes it real easy to select the things you want.
27:41
Ruby is provided in Buildroot. So some of the things we want to do moving forward is we want to make it easier for other people to use seccomp. I want to build a gem that makes it easy for people to block the system calls that they're not going to need in production. This will greatly reduce the attack surface of any application that uses this gem
28:01
or uses Linux. Obviously, it's not going to work on Windows, but it does provide real protection; it just needs to be easier to use than it is, certainly easier than the setup I use in my day job.
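To be clear, that gem is aspirational; an API might look something like this (purely hypothetical, nothing like it shipped at the time of the talk):

```ruby
Seccomp.sandbox do |policy|
  policy.allow :read, :write, :close, :futex  # the syscalls the app really needs
  policy.deny  :ptrace, :perf_event_open      # debugger/profiler calls prod never needs
  policy.default :kill                        # any other syscall kills the process
end
```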
28:21
Again, the point is the importance, even within your application, of separating things into separate processes, isolating them, and then integrating them with assured security controls, and of being relentless in making sure every little piece of code you have is well-tested and well-designed. So, like I said, I want to do a better job of making seccomp available to users,
28:41
and another really cool technology that's come out recently is this Robusta. I don't know too much about it. I've read the paper, but I haven't actually downloaded and tried to use it. But basically, Robusta is a container that lives inside the Java virtual machine, and if there's a security failure in a native extension, a Java native extension, Robusta actually isolates it so it can't break out and take over your whole JVM, which is kind of cool,
29:02
and if you look at most of the vulnerabilities that happen in Ruby, it's usually not in Ruby itself. It's in some sort of native extension that we all use and love. It's some gem. It's buried. We don't even know we're using it, so this could be a real winner for a lot of people in the Ruby community. Another thing is we like mRuby, and we're trying to learn more about it. Unfortunately, there's a Birds of a Feather talk
29:20
going on on mRuby right now that I couldn't go to, which is kind of a bummer. But mRuby allows us to put better weaponized, sorry, I shouldn't use words like that, I work for the Navy, more robust security controls into your binaries, to make it much, much harder for attackers to break them,
29:41
and like I said, when you learn about GCC and Clang, there are all these little compiler flags you can use that make your binaries stronger, and they're really, really awesome. So, I could put a picture of my cat up, but I always like it when I see Zach in Corey's briefs. I don't know if you've ever sat through one of Corey's talks, but he's a great presenter, and so it's just kind of an homage to him,
30:02
my picture of Zach, and I'm a little ahead of schedule, so I want to take a non-sequitur very briefly into security penetration testing. Like I said, I do that sometimes when called upon. It's not my primary duty, but it is a duty that I do, and there's a lot of mystery around penetration testing
30:21
for people who don't work in this. I don't know if anyone recognizes this picture, but this is the little grate in Helm's Deep from The Lord of the Rings where the bad guys brought the bomb in and blew the whole thing up. The obvious lesson is that you don't want to have this little hole in your outer wall, but there's another lesson that a lot of people don't know about. The way castles are designed, that outer wall is just designed to make it kind of a pain in the butt
30:41
for people to get through. It's not really a defensive mechanism; it's just a way to make attacking much harder. So when they put all their eggs into guarding that outer perimeter, and the bad guys blew it up because it had a water drain, that was really the mistake they made. They should have been guarding the keep, which had a two-by-two entrance. No matter how strong the orcs were, they would have been coming in two by two,
31:01
and they could have fought them all off. But anyway, enough geeking about security, and sorry about that. So I don't want to talk to you about whether you should buy penetration testing or not. Often it's money well spent; sometimes it's not. But if you're going to buy penetration testing services,
31:22
give information to them. If you make them find the information, they will find it, and it's just going to take them longer and maybe annoy them, and so if you give them information, you're going to get more value for your money, and along those lines, build relationships with your pen testers because you're going to write these things called rules of engagement, like what they can and can't do. Well, there's always ambiguity in that,
31:41
and the better relationship you guys have, the more you're going to be able to work with them to have a more granular understanding of what those rules of engagement are. And don't just test from the outside. We know intuitively in the Ruby community that we write unit tests for all of our classes, no matter how deeply embedded in the application or how close to the outside they are. In fact, a lot of times, those core libraries that we rely on the most,
32:02
we put a lot of work into testing those. If we make our pen testers come in from outside the firewall, really they're testing your firewall much more than your app. So that's not a bad place to start, but maybe give them script console access and see if they can get around the access control mechanisms in your application. So, just like I had those controls at the different layers, have your pen testers do testing from different layers,
32:23
obviously not on your production network, but in a lab or something like that. So with that, I want to thank you for coming to my talk. I hope it was modestly entertaining. A little link I have here, Kim Cameron's Seven Laws of Identity. Who's read that?
32:40
Wow. This was written in 2005, and it was a treatise on what the identity management software community should do to protect the rights of consumers. I really recommend it. It's very relevant today. So it just seems to be coming up a lot in the talks, so it's a good read. It's timeless.
33:00
So with that, thank you for coming to my talk. I have 10 minutes for questions if anybody has any questions. Oh, so the question is, what's my opinion on getting third-party penetration testers coming in
33:21
versus just doing your own automated vulnerability scanning? Well, it depends on what your goals are, and that's a really lame answer, and I'm sorry for giving it, but I have to. I would say that I'd recommend using automated scanning, just like you have Travis or TeamCity. We use TeamCity in my shop. Just like you do continuous integration,
33:40
you should pretty regularly run some automated vulnerability scanner against your application, both in production and in the lab. It makes sense. The cool thing about pen testers is they're humans. They're not automated tools, and if you get ones that know what they're doing, they'll know what to look for that's not in that tool suite. But a lot of times, the usual suspects are the problem,
34:00
and you can get a lot of mileage just out of using automated tools yourself. Does that answer the question? Well, that's a really good point. So the question, if I got it right, and jump up and down if I didn't, is that now I'm using two libraries instead of one, and the attack surface just got a lot bigger. And in a way of thinking, absolutely, it's true,
34:20
but if you look at the example that I gave, all REXML was doing was checking for well-formedness, so it was serving a very specific purpose, and we evaluated that REXML was going to do it well. Whereas Nokogiri could assume up front that the input was already well-formed, so mass assignment bugs would be a lot less likely to be applicable to it, but it's going to do a lot more of a deep dive on the content,
34:41
and the math term for this is Floyd-Hoare precondition-postcondition analysis. Basically, when data comes into REXML, your precondition is nothing, and your postcondition is that it's well-formed; then with Nokogiri, you've got a precondition that it's well-formed data, and maybe that the comments have been sanitized or something like that. Using Floyd-Hoare,
35:01
it's easy to compose a secure system using preconditions and postconditions. That probably wasn't a great answer, but that's kind of our take on it. Any other questions? All right, cool, well, thank you all for coming.