Docker, Docker, Give Me The News, I Got A Bad Case Of Securing You
Formal Metadata
Title: Docker, Docker, Give Me The News, I Got A Bad Case Of Securing You
Title of Series: DEF CON 23
Number of Parts: 109
Author: David Mortman
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/36355 (DOI)
Language: English
Transcript: English (auto-generated)
00:00
Welcome to DEF CON Sunday. How are we doing? Yeah. That is a disturbing level of enthusiasm. Wow. Welcome to the most coveted speaking slot. First thing in the morning on Sunday. Funny story, a couple years ago I spoke at Black
00:21
Hat and I spoke at the same time that Dan Kaminsky was giving his talk on DNS. Hardly anybody came to that so here I am and David was my speaker handler there so I'm now returning the favor introducing him in a highly coveted speaking slot as well. So it's going to be an interesting
00:41
talk. I'm really excited to hear some more about this stuff. Let's give David Mortman a big hand. Good morning everyone. Thank you for coming out at this really stupidly early hour. I appreciate the effort. So today we're going to talk about
01:04
Docker, containers in general and that whole security thing with regards to that. So a little bit about me. In my day job I'm the chief security architect for Dell Software. In spite of that they do let me use a Mac for
01:20
the most part. Unless I'm going to customers in which case I have to pull out the Windows thing. It's kind of scary when that happens. Anyway, so I do cloudy stuff most of the time and I've been poking around a bit at Docker. So it seems to have gotten a lot of publicity in the last year. Everyone is like oh my God, Docker this, Docker that. You can't go anywhere near a tech blog without someone
01:42
talking about how awesome Docker is. So what is the really big deal about Docker? In some sense it's not a big deal at all. It's a container. Or for those of us who have been around for a while, remember jails and chroot? Yeah. I remember setting up chroot, like an FTP server in a chroot.
02:01
Because hey, that's more secure. The cool thing about containers is that you're taking basically standard chroot or jails or LXC, which is the modern version of that stuff, and you're wrapping it with metadata. So now you're giving it some context of what's inside the container. So now you're not just saying hey, I've sort of contained this
02:22
executable, but I can tell the rest of the operating system what's inside it. And so now I can say hey, this is now a portable format. What you've actually done is take a container and make it a package format. So now it's just like any other packaging format except for rather than being a single executable with a list of
02:42
dependencies you need to download yourself or rely on apt-get or one of your favorite package managers of choice, all the dependencies are self-contained in this little package. So that's pretty cool. It's very effective from an operational perspective. Life gets a lot easier. And especially when you start looking at things like hey, I'm developing something and I need to hand it off to QA who then hands it off to some other security team
03:05
for evaluation who then hands it off to production. If you're lucky that's the order it goes in. If you're not lucky we get called three weeks later and they say well, it's in production, can you scan it? But you know, in theory that's the way it works. But the great thing that means is that what actually goes from dev all the way to production is
03:21
the actual same exact code. So you actually avoid things like it worked on my laptop or well, we thought you had this version of the library in production and actually in dev we're three versions later. So it's really convenient that way. So from an operational perspective it's awesome. But the problem is of course that everything has security issues
03:43
in it. Because you know, what doesn't? And in the last year people have gone to lots of effort and they'll say oh my God, containers don't contain. They're not secure. They're not like a VM. Because VMs are secure, we know that, right? And containers in some absolute sense are not as secure
04:07
as a VM. They're much lighter weight in terms of security, in terms of isolation, but they're pretty good. And the fact is, for the most part, they actually do contain. There's a couple of issues, I'll get to those in a little bit, about
04:21
the places where they don't do full containment. But even in their current state, even if you looked at what it was like 20 years ago with chroots and jails, you've significantly reduced the attack surface that someone can go after. And realistically if you think about it, if you escape the container, well, you're just where you would be if you were running on bare metal. So it's not actually a
04:42
huge loss in security that you get at that point. And in particular, there were a few blog posts over the last year since Docker was released where people were like, oh look, trivial escape from the container. I can just do this. And there was a beautiful one where you could actually, as a Docker user, launch a container, create a SUID bash
05:01
shell, copy it out of the container and get root on the host OS. Oops. And so I was like, oh, that's scary. I should validate that. So I was sitting around the other week and I took all the posts of container escapes that people have done in the last year, and Docker has fixed all of them. Mostly just by changing the
05:23
default configurations. Funny how that works, you know. I'll get more into this a little bit later, but it's things like, you know what? Don't run Docker as root. Okay. Most of them fall under don't run Docker as root and do a few other
05:41
basic sort of hygiene type things. The sysadmin equivalent of washing your hands, putting away the trash. So that's good. Escapes aren't trivial anymore. However, there's still a lot to do. So let's start off, where are we today? What do we get from Docker? What does Docker give you or other
06:02
containers, frankly? They're all more or less the same. There's appc and there's the Intel Clear Linux thing that's not quite a container, but they all have the same basic structure going on. They all have, you know, some sort of, where is it? You know, some sort of basic
06:21
container management to limit what you can do. I'm getting ahead of myself here. So they all have, like, you know, cgroups, and they all have namespaces, mostly. Most of the areas are namespaced. This is good. This means that if you're in one network stack, you can't see another container's network stack. Generally a good idea. Excuse me. You know, they all have
06:45
things like iptables. Every single thing, you know, the file system is its own namespace, processes are their own namespaces. This gives you a fair amount of protection. There's two key places that are not yet namespaced that are being fixed. The first is there's not a user
07:01
namespace yet. This means that if you're operating as a particular user in a container and you escape the container somehow, you get to operate as that same user outside the container. Not so good. They're fixing that in the next release of Docker. I'll talk a little bit more about that later; they're implementing namespaces in the underlying
07:21
structure. So pretty soon we'll actually have user namespaces. Another big issue, however, is that the kernel keyring, you know that place where you put crypto secrets or passphrases that need to live in memory, is not namespaced at all. This means that as the host OS, if you put
07:41
say a critical credential into that kernel keyring, any of the containers can see that. Not so good. Also, if you have containers running, and any one container happens to put something into the kernel keyring, all the containers can see it. So if you need to use containers, be really
08:03
careful about what you're putting, what kind of key management, what kind of secrets you're dealing with. And similarly, be careful about user space stuff. So what you want to do, this is why the state of the art is to run one container per VM or one container per bare metal. You still get a lot of benefits of containers, especially
08:20
production, without running risks around that, especially around the keyring situation. So that's a really useful thing to consider. The keyring stuff is addressed by running SELinux. Does anyone here actually run SELinux? Okay. Keep your hands up. Okay. Keep your hands up if using SELinux means the first thing you do is turn it off
08:42
when you get your operating system up. Exactly. So SELinux is a really cool tool. For most of us, if we're not Dan Walsh, we're not actually capable of using it to the full extent of its capabilities. So this is one of the pain points still in containers. Running SELinux by default actually solves this
09:03
particular keyring issue, is my understanding. But to really get the benefit out of SELinux takes a lot of time and effort. So there's dedicated network stacks, as I mentioned. Now, when Docker first came out, there was no ability, there
09:22
was no way of validating that the container you were downloading from a registry was actually the container you thought you were getting. Nothing. That's comforting, kind of. Okay. It's not comforting at all. It's terrible. So, let's see, in Docker, I think 1.3, maybe 1.4, they started offering signed manifests for official Docker
09:44
containers. So if you were going to download a container from Docker, Docker.org that has the official Docker stamp of approval on it, the manifest that described the container had a signature on it. That's a good step forward. Except for the part where the container itself isn't signed, so there's
10:01
no way to actually validate that what's in the manifest is in the container. But boy howdy, is that manifest signed. So I was like, okay. So what does this get me? It gets me the validated manifest. So I feel real comfortable. Okay, I don't. But they're fixing that. I'll talk about that a little
10:24
bit later as well because that's kind of cool. What Docker has done, the folks, they've hired some really smart people in the last six months to a year to work on securing Docker. I've spoken to them several times now. And they basically released Docker knowing there were these security issues. They're like this is beta code. We have a
10:43
road map for fixing the security issues. And every single release adds extra functionality on the security front. So this is good. We're getting better. That's the trend we want to see, right? Definitely not the opposite direction. So they just recently released a really great white paper, a high level white paper on securing Docker. I'll
11:01
be posting the new version of the slides online, and there will be a whole section with all the links to the various resources I'm mentioning over the course of the talk. So they have a great high level white paper on how Docker works, how containers work in general and some high level security things you can do. They also recently released with CIS a document on how to harden
11:23
Docker. It's 190 pages. So I had a lot of spare time apparently and I read it all. And I've pulled out some highlights for you so that way you don't need to read it all but it is worth going through. And one thing you're going to
11:41
find here is that as you go to lock down Docker, this list is going to sound very familiar to locking down anything else really. I mean there's a few special things around Docker but realistically speaking it's an application and has some special corner cases but in the end there's a lot to do
12:03
just like anything else. So one thing they recommend, and it's a good idea, is to restrict network traffic between containers. If you're running multiple containers on your host, don't allow the containers to talk through internal buses, through the internal operating system
12:21
guts. Make it go over the network. That's a great bet. Make sure everything goes across the network because then you maintain that network namespace and you maintain the integrity of those separate network stacks. As soon as you start allowing the containers to communicate through the host OS, then you start losing protection. So always, always,
12:41
make containers talk across the network. Even if it's just loopback, they'll generally just use loopback anyway, but at least that way it's going out the stack and back through, and then any network controls you have in place, like iptables, also take effect. Here's a clever one. Turn on auditd for all of the Docker files and the network
13:01
itself. So this way you know. And then here's the radical part. You actually have to read the logs. I know, I know. We don't generally do that in this industry. We just collect logs or spray them to /dev/null but please, for me, it's 10 a.m. on Sunday. Most of us are somewhat hungover.
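A rough sketch of those two recommendations on a Linux host; the daemon invocation and audit paths follow the CIS benchmark's suggestions and are examples, not something shown in the talk:

```bash
# Disable inter-container communication on the default bridge, so containers
# only reach each other over ports you explicitly publish.
docker daemon --icc=false

# Watch the Docker binary, data directory, and config with auditd,
# then actually go read what shows up.
auditctl -w /usr/bin/docker -k docker
auditctl -w /var/lib/docker -k docker
auditctl -w /etc/docker -k docker
ausearch -k docker | less
```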
13:20
Please review the logs. It'll make your auditors happy at least and then they'll be nicer to you. So that's worth something right there I think. The other thing, this is a good default. Don't turn this off. Only use SSL or TLS when you're connecting to Docker registries. I think we all know this but don't turn it off anyway. And in fact, don't
13:44
let the Docker daemon itself listen on the network. If you're in production you may not be able to avoid that, but if you're doing local development there's no reason to actually have your Docker daemon listening on the network. That provides an immense amount of protection. The Docker client is sitting right there on your machine anyway. Don't have the Docker daemon listening on the
14:01
network. That gives you a lot of protection. Especially because, by the way, the Docker API has no authentication. It has no concept of identity yet. It has no concept of roles. It's just wide open sitting there saying, use me. Abuse me. So please don't put it on the
14:20
network. And then if you have to put it on the network, at least enable some sort of certificate based authentication on top of it. Using nginx or something like that. So at least that way you'll get some comfort level that only the people you know are actually using it. Since the API itself has no authentication, proxy something on top of it. Just give yourself some safety if you
14:42
have to put it on the network. This happens realistically in any sort of larger environment, or if you're doing some sort of orchestration using third party tools or something. You're going to have to put it on the network. Which sucks, but give yourself some protection.
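If the daemon does have to listen on TCP, a minimal sketch looks something like this; the certificate paths are placeholders, and you would still want an authenticating proxy such as nginx in front of it:

```bash
# Expose the API only over mutually authenticated TLS (2376 is the
# conventional TLS port); clients must present a cert signed by ca.pem.
docker daemon \
  -H unix:///var/run/docker.sock \
  -H tcp://0.0.0.0:2376 \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem
```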
15:02
Another radical idea: lock down all the config files to root only. Make the ownership root:root. The config files generally don't contain critical information, so they shouldn't be writable by anyone else, but they can be readable by the public. Make sure your certs, if you're using any certs or keys, make sure they're owned by root, perms of 400.
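As a sketch, assuming the daemon config and TLS material live under /etc/docker (the file names here are illustrative, not from the talk):

```bash
# Config: owned by root, not writable by anyone else, world-readable is fine.
chown root:root /etc/docker/daemon.json
chmod 644 /etc/docker/daemon.json

# Keys and certs: root-only, mode 400.
chown root:root /etc/docker/server-key.pem /etc/docker/server-cert.pem
chmod 400 /etc/docker/server-key.pem /etc/docker/server-cert.pem
```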
15:24
This is more or less obvious, but I've seen multiple test installs of Docker where they're like, oh, I put a cert there, and they left it perms 666. So check those things. This is not rocket science. That's a different talk. Don't run Docker, don't run your containers, as root. Run them as
15:44
that user. Just like we do with Apache, just like we do with Tomcat, just like we do with MySQL. Basic stuff here. And then only use trusted images. This is kind of a weird thing, I know. Just don't download random shit off the Internet and
16:01
click on it, right people? Come on. That was funny. I'll get more to that later though because this is actually a general problem space around trusted images. It's not an operational issue but I'll get to that a little bit later. Minimize your package installs. Again, basic sysadmin 101. Don't install shit you don't need in your container.
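Pulling the last couple of recommendations together, a hypothetical build might look like this; the base image, package, and user names are placeholders:

```bash
# Minimal Dockerfile: small base image, only the packages the app needs,
# and an unprivileged user instead of root.
cat > Dockerfile <<'EOF'
FROM debian:jessie
RUN apt-get update \
 && apt-get install -y --no-install-recommends myapp \
 && rm -rf /var/lib/apt/lists/* \
 && useradd -r -s /usr/sbin/nologin appuser
USER appuser
CMD ["myapp"]
EOF
docker build -t myorg/myapp .
```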
16:25
One app, one process, one parent process per container. Keep it simple. Containers are fast to spin up. Applications are increasingly getting distributed, you know, Webified, SOA, things like that. So just have, if you
16:42
have a container, just have one app running inside that thing. If it's a microservice, fine. If it's a web server, fine. You don't need to build your entire application stack top to bottom in one container. It's tempting especially in dev to be like, oh, I'll put my web server, my app server, my database. It's all cute in one little package. Well, it's not much harder to spin up three
17:00
separate containers and keep those communications more secure. It's also easier to audit that package, and it's much easier to avoid dependency conflicts and security issues brought in by third party libraries, which I'll again get to a little bit later. Take advantage of kernel capabilities. Linux has this concept of kernel capabilities. Take advantage of those. Restrict that
17:23
container to only have the capabilities at the kernel level that it needs. The benchmark, the CIS benchmark, actually has a great list of what all those capabilities are, and the defaults are actually pretty good. The capabilities are pretty good. If you start
17:41
getting to some weird raw packet stuff, you might need to adjust that a bit but the defaults are pretty good. This is one where I'll say trust the defaults but be aware of certain things. Ping, ICMP in general does funky things in this network stack in general so that might break. They're working on fixing that with the capabilities thing as well. The slide's blinking for everyone. And
18:07
generally speaking when you're talking, you know, the default capabilities are NET_ADMIN, SYS_ADMIN, SYS_MODULE, that generally does it. That's pretty much all you need. Don't use privileged containers. So a privileged container is one that actually has, like, root level access, lets you do root level
18:23
functionality. Generally speaking, if you're doing root level privileged containers, you're actively abusing the point of containers and negating them as well. So that's not so useful. So avoid privileged containers unless you really, really can't.
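A hedged sketch of the drop-everything-then-add-back approach; the capability names and the nginx image are just examples:

```bash
# Start from zero capabilities and add back only what this workload needs;
# never reach for --privileged just to make something work.
docker run -d \
  --cap-drop=ALL \
  --cap-add=CHOWN \
  --cap-add=SETUID \
  --cap-add=SETGID \
  --cap-add=NET_BIND_SERVICE \
  --name web nginx
```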
19:01
So another rocket science item: don't mount sensitive host file systems, directories, et cetera, in your containers. You know what? Your container doesn't need the actual /etc mounted. I know. It doesn't need /dev. And it really doesn't need /proc. So don't mount that shit. Another thing is, and this was one that
19:25
surprised me, is don't SSH into containers. Don't put SSH into your containers. You don't need it. If you need to access a container, log into the host operating system and use nsenter, which basically has the ability to jump into your container. But generally speaking, SSH is hard to
19:42
secure. It's hard to manage. And it's kind of funky. It does interesting, bizarre things with the stack. It gets in there and you need to greatly expand your capabilities to make it work properly. So avoid SSH if at all possible; it adds complexity you really don't need.
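For getting a shell without sshd, something like the following works on a host of that era; the container name is an example, and docker exec has been around since Docker 1.3:

```bash
# Preferred: exec a shell in the running container from the host.
docker exec -it web /bin/bash

# Or use nsenter to join the namespaces of the container's init process.
PID=$(docker inspect --format '{{.State.Pid}}' web)
nsenter --target "$PID" --mount --uts --ipc --net --pid
```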
20:01
Also, if at all possible, don't use privileged ports. Obviously sometimes you can't avoid that. If you're running Apache or a similar application that needs to run below port 1024, you can't avoid it. But generally speaking, anything other than those front facing
20:22
services, don't run them on privileged ports. Anything that has to run on a privileged port needs greater access to the kernel. It needs more capabilities. It's adding to your attack surface. So your mid-tier stuff, don't run it on a privileged port. Your database, don't run it on a privileged port in that container. Just don't do that shit. Again, set reasonable limits for memory usage. So
20:45
anyone here ever had the pleasure of configuring Java on the JVM, max memory, min memory, all that shit? Yes. I've got the front row doing this, and yes. You're going to have that same joy as you start running containers. But this is a good
21:02
idea. Particularly if you're going to a production environment. Set those maximums. Give yourself some protection from DoS attacks. Or even runaway processes. There's no reason, for the most part, there's no reason a container needs all the memory on a box. If you're running something that's memory intensive, you probably want to be running bare metal anyway. You're possibly not even
21:21
in a VM. Containers, that's not your best suit. So set reasonable CPU priority. Again, this makes sense. Make sure that you're not going to have a container go awry and kill your entire machine. Not crazy rocket science stuff. Set
21:41
reasonable ulimits. Does anyone here actually like ulimits? I mean, they were generally speaking the bane of my existence when I was a sysadmin, especially when running databases of either the SQL or NoSQL variety; you always end up tuning ulimits constantly. Pay attention to those. Again, that's a great way of protecting yourself. Container goes awry. Make sure you have a reasonable limit on
22:02
that ulimit, and at least that way you see what happens. You see that pain and suffering coming. It prevents that container from getting out of control.
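A rough sketch of those resource caps on the command line; all of the numbers are illustrative:

```bash
# Cap memory, CPU weight, and the open-file ulimit so one runaway
# container can't take the whole box with it.
docker run -d \
  --memory=512m \
  --cpu-shares=512 \
  --ulimit nofile=1024:2048 \
  --name web nginx
```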
22:25
For the most part, with containers, you can mount the root file system read-only, and there's no reason not to. There's literally no reason to ever mount your root file system as read-write in a container, especially. If you need to make changes to your container, what you're actually going to do is take a copy of that container offline, make the changes you want, generate a new container image and then launch it. I'll talk a little bit more about configuration management later and the ways in which
22:42
containers really change configuration management from a security perspective. Only bind your containers to the appropriate network interfaces. Don't go with the default of having them bind to every network interface on the box. For the most part, most of your containers can just be hooked to loopback. And there's no reason for them to ever be
23:04
exposed to a network interface off the box unless you're actually having them talk, unless obviously you have multiple boxes running. But in a dev environment especially, there's no reason for containers to ever be listening on anything other than loopback. This is an exciting one, which is: limit how many times your
23:21
containers will automatically restart when they die. That's a cool feature. You want to limit that to like three or five, maybe a few more than that. But the last thing you want is your container constantly restarting and hosing your box. It's generally good practice even in dev environments. The last thing you want is, instead of having a DoS attack
23:42
take down your box, to have a DoS attack force a constant reboot cycle and take down your box. That's just as painful. Don't share namespaces. By default the namespaces are separated between the host and the containers and devices. They're there for a reason. If you share namespaces, you've just destroyed the point of
24:02
namespaces. So don't share namespaces. The default is that they're not shared, but sometimes people think, I know, we'll make this easier, let's share that namespace. Well, you could do that, but then you may as well run everything in one container or just not use containers at all. You've got backups. I know. I know. It's weird. Back up your shit.
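Several of the last few recommendations fit on one docker run line; a sketch with illustrative values (apps that need scratch space will also want a writable volume mounted in):

```bash
# Read-only root filesystem, publish the port on loopback only,
# and cap automatic restarts so a crashing container can't hose the host.
docker run -d \
  --read-only \
  -p 127.0.0.1:8080:80 \
  --restart=on-failure:5 \
  --name web nginx
```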
24:25
I know. No, man, don't do that. Get logs. I know. Logs. So logging is a little bit tricky still. The last release of Docker just added syslog hooks finally. So that makes
24:40
it a bit easier. Every single major SIEM and log correlation vendor now has a mini tutorial about how to enable logging for Docker containers in their product posted on their website. So that's easy. I mean, it's not ideal yet. It's still a little tricky, but there are directions posted, so it's like programming off Stack Overflow: cut and paste and you're probably in good shape in that regard.
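A minimal example of the syslog hook he mentions (a log driver flag added around Docker 1.6); where syslog forwards things is up to your existing logging pipeline:

```bash
# Send the container's stdout/stderr to the host's syslog instead of
# leaving it in per-container JSON files on disk.
docker run -d --log-driver=syslog --name web nginx
```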
25:02
Work with a minimal number of images. Anyone remember when we first started doing VMs, and people would generate a VM for every single application they had, as opposed to having a base level of like three or four host VMs and then adding the applications on
25:22
using like Chef or Puppet? Don't get yourself in the same situation with Docker. Image sprawl is a huge problem already for folks, especially when you get to maintenance windows and things like that. So really, minimal number of images. Start with a base image and you can always add things on as you need to. Every time you add an image your problem gets not quite exponentially harder, especially
25:43
once you get above like 12 or 13. People are really bad managing large numbers like 12 and 13 it turns out. Minimal number of containers per host. I recommend anything remotely production oriented, one container per host. Keeps it simple. If someone escapes it's not the end of the world. If
26:02
I'm going to run multiple containers per host they're going to be like services. So it's going to be like, okay, I have a big honking box rather than running 12 VMs on the box I'm going to run 12 containers all running the web server or something like that. And then still you want to make sure you have some diversity across boxes just like you do with VMs. That way if you lose your hardware you're not down.
26:21
Just like anywhere else. So I talked a bit about trusted containers. So you want to actually know that the container you're using is the container you think it is. So this becomes a supply chain problem. How do you know that you actually have what you think you have? So as I said earlier, right now Docker-published images have a manifest
26:44
associated with each image and that manifest is signed. That's a start. It's not ideal because, like I said, the container itself isn't signed, so you don't actually have any proof that what's in the container is actually what's in the manifest. They're fixing that in the 1.8 release, which is due out any second now and
27:04
may have already been released. I've been off the internet mostly this week because there was a security conference or two going on. So my laptop has not been on the wireless here. I didn't want to be on the Wall of Sheep for some strange reason. So supply chains: you want to watch that supply chain. You want to validate that your containers
27:23
are what you think they are. If at all possible, given the current state, don't use public repositories. Instead use a private repository, and validate that you know the image is what you think it is. Run that repository TLS only and then just continue to validate; you regularly sort of double check
27:41
things, and especially for that repository server, that's kind of the keys to the kingdom there. So audit, monitor, have appropriate protections in place to make sure that those containers have not been violated in any way.
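A sketch of standing up a private registry, TLS only, using the open source registry:2 image; the certificate paths and any domain you put in front of it are placeholders:

```bash
# Run the v2 registry with TLS enabled; clients then pull through it
# instead of the public hub.
docker run -d \
  -p 443:5000 \
  -v /etc/docker/registry-certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  --name registry registry:2
```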
28:01
There was a recent blog post, about a month or maybe six weeks ago, where someone went ahead and claimed, you know what, 30% of the images on the public Docker repository are insecure. This proves that Docker is insecure. I was like, that seems like a really big number. I bet it's kind of on the small side. And
28:20
so I did a little bit of research and poked around, and some other folks did some deeper analysis, and it turned out what they meant was that 30% of the containers they found had a library or an application inside that was vulnerable to some exploit. So yeah, and so, if you're going to use
28:41
one of those containers, you do what you do with every container, which is you run apt-get upgrade after you download the container to make sure you're running the latest versions of the code, and you move on. Just because it has a vulnerability or out-of-date code in it isn't the end of the world. But it does mean you can't just assume that your container is up to date, which is why I said earlier, patch. You
29:02
have to actually pay attention to this stuff. You have to patch your containers and keep them up to date just like anything else. I'm going to go out on a limb here and do something a little bit radical. This one was not actually recommended by the CIS benchmark, but don't use Chef with your containers. Don't use Puppet. In fact, don't use
29:21
any online configuration management with containers. I know, I'm getting some quizzical looks in the front row. I can't actually see the second row, so that may be why. The reason I'm telling you this is Docker and containers in general are the ideal candidates for immutable servers. The problem with using configuration management is,
29:42
well, the reason configuration management was invented was the concept of configuration drift. And configuration drift is, you have the binder on the shelf or the Excel spreadsheet that says this is the configuration of my web server. And over time, you make changes to that configuration but it doesn't get copied to the spreadsheet.
30:02
It doesn't get printed out and put in the binder for your disaster recovery. And three years later when you have an issue, no one actually knows what the configuration looks like. So tools like Chef and Puppet were invented, and one of the benefits they have is not only do they automate everything so you get consistent configurations across all your boxes, but now you have basically a CMDB. You
30:23
actually know that what Chef or Puppet thinks is the configuration is the configuration. And in fact, if you're running these tools and someone changes the configuration on the box, Chef and Puppet do a tripwire type thing and they say uh-uh-uh and shift it back to the way it was. So any changes that happen outside
30:43
that space get pushed back. Well, Chef and Puppet are kind of heavy clients. And in the container world, as I say, you don't want to run extra shit in your container. You want to run one process and that's not going to be Chef. It's kind of pointless to have a container just sitting there running Chef or Puppet, right? You're not doing anything at that point. But your configuration is good. So instead, because containers are
31:03
so fast to spin up, we're talking milliseconds in most cases, instead what you want to do when you need to make a change is you create a new container and spin that up and then shut down the old container. And then if there's an issue, the new container doesn't work, well you shut that down and bring the old one back up or run them in parallel, classic AB things. So you might use your load balancer
31:24
and start shifting load over to the new containers. But any change you make generates a new container. So this is what Netflix does. They don't actually ever make configuration changes to running VMs on Amazon. They burn a new AMI and they spin up a whole new instance or
31:41
hundreds of instances and then transition to load balancers. Facebook does similar things, as does Amazon. So the great thing here is that you then have a history of what everything looks like, you're not worried about configuration management failing, and you keep your container nice and tight and clean.
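In practice the immutable pattern he's describing looks roughly like this; the image names, tags, and single-host flow are simplified placeholders:

```bash
# Never patch a running container: rebuild the image, start the new one,
# then retire the old one (or shift load over gradually behind a balancer).
docker build --pull --no-cache -t myorg/myapp:1.0.1 .
docker run -d --name myapp-new myorg/myapp:1.0.1
docker stop myapp-old && docker rm myapp-old
```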
32:02
Related to this issue, of course, in terms of trusted containers, is that because you only have that signature on the manifest, how do you actually have any sort of attribution for the life span of that container? So there's some interesting stuff coming. So there's some other things we can do beyond these
32:22
basics, which is you can run AppArmor. Actually, the Docker folks recommend you run both AppArmor and SELinux. The cool thing is if you run SELinux, once you get your configuration right, it actually lives with the container. So you don't need to track that separately. You figure out what your ideal configuration looks like. It's built into the
32:43
container. So as you transition across your infrastructure, it goes with it. So at least, again, it's still consistent. There's a cool tool called seccomp. I just found out about this a few weeks ago. This is really cool. It lets you limit syscalls and syscall arguments on a case-by-case basis. So now you can get some really tight
33:03
control over what those system calls are doing back to the kernel, back to the operating system in general. So that's pretty cool.
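Docker itself only exposed seccomp profiles a little after this talk (around 1.10), but the per-container syscall filtering he's describing ends up looking like this; the profile path is a placeholder:

```bash
# Apply a JSON seccomp profile that whitelists only the syscalls
# (and syscall arguments) this container is allowed to make.
docker run -d \
  --security-opt seccomp=/etc/docker/profiles/web-profile.json \
  --name web nginx
```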
33:21
The Docker folks released this cool tool called Docker Bench for Security. It's up on GitHub. I'll post the links to that along with the other stuff when I get the slides redone. And what Docker Bench does is it goes through your container and validates, or alerts you on, all the recommended configurations and settings for your Docker container. Most of the recommendations I made, checks for them are already built into Docker Bench. So you can just download that, run it against your container and make sure you're in reasonably good shape. So for the most part,
33:43
you know, although it's a complex list of things you need to do and, you know, checklists are really boring to go through every time you do something, Docker Bench automates that for you. So that's a win right there. There's also two third-party things you can do to lock down your containers more. The folks at Canonical
34:02
have released a project called LXD, Lima X-ray Delta. And what that is, is basically a container hypervisor. So rather than run your containers in a whole OS, they're building the container version of a VM hypervisor. So that way,
34:22
there's very little there, should you manage to escape your container; it's a very thin layer just like you have with a traditional hypervisor. So that's out. That's open source. And that's continuing to mature. So that looks promising. And there's a third-party commercial package from Apcera. They do policy and security of containers in general. They do both VMs and
34:44
containers. It was originally built with platforms as a service in mind, but containers have taken off in general. Basically, it's a policy-based language which lets you set permissions and what containers themselves can do. So this is looking promising. I haven't had a chance to do a deep dive on it. Derek Collison, who is
35:01
the primary author, was the primary author of Cloud Foundry, and he's been involved in the cloud and virtual computing space forever. So it looks very promising. That's another one to check out. They do have some free accessibility things. There's some cool stuff coming, though, from Docker. Docker is not, they're not sitting on
35:21
their laurels, saying it's good enough. They're continuing to add security. At DockerCon several months ago, they announced a new project called Notary. And what Notary is, is a secure package management system based on The Update Framework. And what they
35:40
have done is Notary, which is coming out in 1.8, which as I said is going to be out any day now, may be out, in fact. It gives you the ability to not only have signed manifests from Docker, it lets anyone have signed manifests. And more importantly, Notary, which is part of the V2 registry, is
36:03
a content addressable registry, which means that now your manifests contain a list of hashes of all the contents of your container. So you don't need to sign the container or encrypt your container, though you could do that if you wanted to. So now when you get the manifest, you validate the
36:24
signature on that manifest, and now you have a list of hashes of all the contents of the container. So now you can actually validate that what's in the container is what you think it is, and it's not restricted to official Docker containers at this point. So you can do this yourself in your private registry, you can have much more confidence that the container you thought ‑‑ that the container you downloaded
36:42
last week and validated as acceptable for your standards is still the same container. Now, The Update Framework is pretty cool because, with Notary using it, not only does it enable this content addressable file system, but it has this concept of freshness. So what
37:00
this means is that the signatures are unique enough that what happens is, when you go to the registry and say, hey, I want this container, and the registry says go to this mirror over here in the western U.S. or this one in England or Ireland, your client looks at what's on
37:22
the mirror, looks at what is on the master and validates it's the same thing. So you know you're actually getting the most recent version even if you're going to a mirror, because obviously for large mirror sites things often get out of sync, especially if there's recent updates, it takes a while for those mirrors to update. So now you actually know not only you're getting what's in the container, what the manifest
37:41
says is in the container, but you're actually getting the most recent version that you want. Or if you want a particular older version, you now know it's exactly the right version of the older one as well. So that's pretty cool. And it also has the concept of a snapshot. So again, like I was saying, you now have versioning of your container, so you can actually roll ‑‑ it makes it that much
38:00
easier to securely roll back to a different version of container or roll forward. And it's designed to be survivable to key compromise. The root key is supposed to be stored offline, but if any of the other keys get lost, the spec is designed to allow for survivable key compromise. So that's kind of cool. The folks at Docker are having this audited by a well‑known security firm. They did ask me
38:22
not to say who it is. They will announce it. When they release the results of the audit, they will say who did the work. But I know who they are, and they are actually very talented folks you've heard of. So that looks promising as well. So they're doing all the right things in terms of adding security in that space.
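On the client side, the Notary-backed signing surfaces in Docker 1.8 as content trust; a minimal sketch, with an example repository name:

```bash
# With content trust on, docker pull and push refuse unsigned or
# tampered tags; signatures are checked via the registry's Notary service.
export DOCKER_CONTENT_TRUST=1
docker pull myorg/myapp:1.0
```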
38:42
And speaking of space, they're adding user namespaces finally. So that's good. Once you have user namespaces, it means that you can have what the container thinks is a root user, but the host operating system thinks is just a general user, without doing complex things. And this is currently in runC in 1.8. runC is the underlying infrastructure that makes Docker containers work at all. It will bubble up all the
39:00
way through Docker in the next release or two. There are a few places that still need some help. I already talked to you about how the kernel keyring isn't namespaced. That's the whole problem with injecting secrets into your kernel where other people will see them. SELinux does solve that. Sort of.
39:21
Kind of. But it's not ideal yet. In terms of managing secrets, there's two open source products: Vault from HashiCorp and Keywhiz from Square are designed to help you manage secrets and keys, especially in a container environment. They're both open source. So check those out. As
39:41
I mentioned, the Docker API has no concept of authentication or authorization at this point. They're working on that, but be aware: if you have to use the API publicly or on the network, again, proxy something in front of it, at least so you can get SSL/TLS and certificate-based authentication or anything else layered on. I mentioned
40:00
seccomp, SELinux, and AppArmor. The people who are big fans will say, oh, it's easy. No, it's not. You know, at this point, my opinion is that seccomp, SELinux, AppArmor, they're really for the 1% for the most part. The tools for managing them are not there yet, especially at scale. This is
40:23
my biggest nervous point about Docker: the tools you need to use to make these things much safer are really hard to use, and they're really hard to use at scale, which means a lot of containers are going to get deployed in a less than ideal state because of that. I am totally one of those people who tries to run something with SELinux turned on
40:43
and then the solution to fixing it is to turn SELinux off, because that is the fastest route to doing it. So I am one of those people. Logging is getting better, but this still needs some help. Orchestration, again, if you do anything at
41:00
scale, people, you know, Kubernetes and the like, those are still, again, sort of for the 1%; they're still early on. If you're not Google or a handful of others, you're not using them yet and they're hard to use. So that's where things are, that's what's left at this point. And then like I said, I'll post the resources, I'll
41:22
send Nikita the latest slides along with all the resources because you really don't want to be trying to take screenshots of some of these URLs with all the tools and everything. And just to finish up, you know, basically it's not as bad as it used to be. A year ago it was horrible, six months ago it wasn't so bad. At this point we're pretty much in this place where Docker is usable. If
41:43
you are at that far right end of the curve, it's really usable. But it's relatively safe to use at this point. And again, if you own production, please, one container per VM at this point. And that's my story for the day. I have just a minute or two for questions if there's any questions.
42:01
Otherwise I'll give you five minutes back. What about Docker OS? I'm not familiar with the second one. Those are coming along. I haven't done a real deep dive into the
42:22
security of those in particular. I assume at this point they all have the same general issues to deal with. Containers are containers regardless of the OS at this point still. Last month Docker was ported to FreeBSD. They're implementing it in a FreeBSD jail on ZFS. That
42:46
combination itself should make for some interesting security issues. Or rather, fixes rather than issues. Just to make sure everyone heard that: our audience member said last month Docker was ported to FreeBSD and is working with jails. That
43:04
will lead to some interesting stuff. I agree. I wasn't aware of that. That sounds cool though. I'll definitely have to check that out. Thank you very much, everyone.