Getting Started with Habitat
Formal Metadata

Title: Getting Started with Habitat
Number of Parts: 47
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/37928 (DOI)
Transcript: English (auto-generated)
00:06
All right, awesome. Sorry for that. Thank you, everyone, for coming. I'm Jamie from the Habitat core team. I'm also the lead engineer of Habitat. Habitat is an open source project from Chef.
00:20
We've been building it for about two years. It's a collection of pieces that make up an overall application, with a focus on applying distributed systems to application automation. If any of you all are familiar with traditional configuration management, it typically takes an approach
00:41
of building a system that your application will live on. So it maps pretty well to a systems administrator persona. So you provision out a machine. It's a VM or a container or physical hardware. And then you choose a distro. That's part of the provisioning. And then from there, you layer software on top of it.
01:02
And you configure the machine the way that you want it to be. And then you put your applications on it. Habitat takes an approach where it applies distributed systems to make applications the focus instead, with applications as autonomous actors in a system that interact with each other to set state using choreography
01:22
versus orchestration. Orchestration would be micromanagement, top down. So I'm the CTO of this company. And I'm going to ask Phil to do a thing. And I'm going to ask you to do a thing. And I'm going to go down the line. And then I'm going to ask each person, did you do your thing? Did you do your thing? The problem with that is there's a network partition between every one of those requests.
01:44
And as the system gets more complicated to set up, you need to build in more logic into your orchestrator to handle failure cases through those network partitions or through those different systems. With Habitat, you set a goal for the system.
02:00
And then the actors in the system make that goal happen for you. And we do that using distributed systems. So Habitat itself is a process supervisor, a developer studio, and also a hosted build service.
02:21
Those are the three large pieces of what makes up Habitat. Habitat currently works on Linux, any 64-bit version of Linux with a 2.6 kernel and higher, I believe, and Windows Server 2008 R2 and higher. And currently, we have just support
02:42
for the client on macOS. The process supervisor is a large part of what makes this an interesting talk for All Systems Go. But the thing that I'd like to talk about first is the hosted build service. So we have our process supervisor, and it runs packages.
03:03
So Habitat has its own package management system that we'll go through. And the build service is the thing that actually builds those packages. The first thing I want to do is show you all the build service and get a job going so that way we can show the basics of where to get started.
03:26
So here I am at habitat.sh. If you log in, you'll see this screen. I'm logged in as me. These are the origins that I have. There's a bunch of them that are test origins. The one that we care about is my personal one here right now.
03:45
So what I would do normally is create a brand new origin when I land here. Origins are like an organization in GitHub. It's a way to segment packages into different containers. So instead of having just one Redis package, you could have one for Facebook.
04:02
You could have one for Chef. You could have one for yourself. Or you can consume from the core origin, which is where most of our software lives. You'll see on this previous page that the core origin has about 484 packages in it right now. So if you landed here, you'd create your own origin.
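As an aside, origins and their signing keys can also be managed from the `hab` CLI rather than the web UI; a minimal sketch, assuming the origin name `allsystemsgo` and an already-configured hab client:

```shell
# Generate a signing key pair for the origin; the private key signs
# packages at build time, the public key verifies them on download.
hab origin key generate allsystemsgo

# The generated keys land in the local key cache.
ls ~/.hab/cache/keys
```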
04:22
I'll create a new one for all systems go. We support public and private packages. So a public package is one that you want to share with the entire community. A private one is just like a private GitHub repository, where we'll store your packages for you, but without authorization, no one
04:40
will be able to download them. All the private data around the build output and things like that, as well, is quarantined off unless the person is a member of your origin. So let's just make a public one. So you'll see that we also generated some origin keys. These signing keys get used when
05:01
we build packages to identify where that package came from. So if you receive a package, it's got a signature associated with it. And as long as the public key matches, you know that it came from a verified source. So if I was to connect a new package to this,
05:25
or a new piece of software to build the package, I'd come connect a plan file. This is reaching out to GitHub. Sorry, the internet is giving me some trouble here,
05:40
so it's going to take a minute. But this is reaching out to GitHub to find a GitHub repository that contains not just my software, but a file describing how to build that package. So if we go to my GitHub, actually, I just want mine.
06:09
I've already forked a project to prepare for this. This is just a sample node application. In this directory is just a node app, but there's a Habitat directory as well that has a plan.sh file.
06:22
This is the entry point for how to build Habitat software. This might be a little small, sorry, guys. It's just nine lines of bash. We use your system's scripting language to represent how to build packages in the Habitat system.
06:40
This is nine lines long because we know really well how to build node applications. This is called scaffolding. You can build any software with Habitat. It's not just for Node, or Ruby, or Python, or a high-level language. We built an entire, I mean, it's not a Linux distro, but we rebuilt every piece of Linux from glibc up.
07:02
And we'll get to why we did that crazy ass shit in a minute, but we had to. So nine lines, and this describes exactly how to build the package that we're going to put through the build service.
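A plan of roughly that shape, using the Node scaffolding, might look like this (the origin, name, and version here are placeholders, not the exact file from the demo):

```shell
# habitat/plan.sh -- hypothetical Node app plan using scaffolding
pkg_origin=allsystemsgo
pkg_name=sample-node-app
pkg_version="1.0.2"
pkg_maintainer="Your Name <you@example.com>"
pkg_license=("MIT")
# The scaffolding detects package.json and wires up the build and
# run behavior for a Node application automatically.
pkg_scaffolding="core/scaffolding-node"
```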
07:25
I'm going to make it a public package, and I'm also going to export it to Docker. Oh, I haven't set up an integration yet. So part of the process here is, and this is optional, we have post processors for the build process.
07:41
So after we build a package, we'll upload it back into builder, where builder hosts your software. So your process supervisors come, they download the software from builder, and then they run them there. You may have a preference in your organization to use containers. So one of the things that we're going to do here is set up an integration with Docker Hub.
08:10
I just got to get my password, sorry about that. Whoops, live demos are great.
08:32
So I've got the integration set up. If I come back here and refresh, connect the plan again, let's put it out
09:05
to a container in that origin. OK, so I've set this up. What I'm going to do here is edit the file
09:20
to make sure that the origin matches the origin that I just created in builder, so it's all systems go. And yeah, this is good. Let's commit these changes.
09:44
And because I just committed those changes, it actually kicked the build off for me. So if we look at the output here, this is the Habitat build system, which is called plan-build. And it's running and just taking care of installing any
10:01
of the package dependencies that we have, building that software, uploading it into itself, and then exporting this all to Docker. I happen to have a Docker container already built of this, so I can skip ahead and do the cooking show thing for you.
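Done locally instead of in Builder, the same build-and-export cycle can be sketched with the hab CLI (the artifact name below is illustrative):

```shell
# Enter a clean, isolated studio and build the plan in this repo.
hab studio enter
build .

# Export the resulting .hart artifact as a Docker image.
hab pkg export docker ./results/allsystemsgo-sample-node-app-*.hart
```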
10:59
Yep, the cooking show thing is going to fail.
11:01
We're going to wait a minute while this actually does build. So this is a good moment to chat a little bit about while this is building, how the packages work, and why we went the route that we did.
11:24
So every Habitat package is immutable, atomic, and isolated. So the thing that we're building right now is isolated from the rest of the system entirely. This sample app that we're outputting depends on glibc, but it depends on a particular version of glibc that's a snapshot in time from the moment
11:41
that we issued the build. So because they're isolated in this way and because they never change, it gives you those superpowers from a container, but it doesn't just work with containers. You can use this for physical hardware, virtual machines, or in this example that we're going to do, a container.
12:02
And versus traditional configuration management, where you have an artifact that configures a machine that has global state, you don't really know if that thing will succeed in running a year from now. You don't even really know if it will tomorrow or today, because the machine that you run that configuration management
12:22
on is basically dealing with global state at all times. Because there we instead set up the machine and cared about that first, and the application is a citizen of the machine, when you try to configure the application, you just don't know what else happened on that machine before you came there. So we say that Habitat packages are immutable,
12:41
isolated, and atomic, meaning they never change. They're all-or-nothing atomic, and they're isolated from everything else in the system. An interesting part of why we went with this approach, and why things are isolated in the way that they are and only work against the thing
13:00
that they were built against: let's take a little step back in time and talk about a story from about 2000 or 2002, which was a day that Linux rebuilt the world because of a vulnerability in Zlib. Zlib is a compression library that pretty much every socket uses, which allowed for arbitrary code execution
13:20
if your machine was vulnerable. And because of that, basically every piece of software on the planet had to be rebuilt to remedy this issue. The problem with that is, at the time, people weren't dynamically linking to Zlib. Or some people were, but one of the larger issues at the moment was it was less common to dynamically link
13:41
to a shared system library, so you couldn't just patch Zlib and then restart all the services. Instead, you had to go to your build server, if you had one, because this is 2002. CruiseControl landed in 2000 or something like that, right? So it's a way-back time machine that we're in. So you had to rebuild basically every piece of software
14:00
if you'd vendored Zlib into it. And the problem with that is you just had no idea sometimes which applications were built with which versions of that software. And today, in 2017, we still have that problem with containers. People are trying to solve for what is in that container. Do you know?
14:20
Did you just go to Docker Hub, download Redis, and then run it and hope that everything in that container is copacetic? Do you know what version of OpenSSL is in there? Do you know what version of Zlib is in there? And the answer is no unless you built it into your build system, and Habitat gives that for you for free. All this is open source and free. That build service that's running right now, $0.
14:42
You don't give me a credit card. It builds your software for free. So actually, what I want to do right now is rebuild glibc in production as a live demo. I'm not going to promote it to stable because I don't actually want a problem, but I will rebuild the world right now and show you guys what that looks like.
15:01
One of the reasons that we built Builder to support this process supervisor and packaging manager is for this problem. We realized if we isolate things in the way that we do, it will be, you will constantly be rebuilding software manually trying to figure out how,
15:22
trying to figure out which software at which level needs to be rebuilt to remedy the issues that you have at the leafs or the applications of the dependency tree. Okay, so this failed because my password
15:42
wasn't any good, but we'll get to that in a second. For now, what we're going to do is,
16:16
looking at Fletcher and Chris's face, the people on my team that didn't know I was going to do this.
16:23
If it doesn't, it's live demos, man. I don't know if it's going to work. It should work, worked like a couple days ago. All right, so while this is building, I just want to show you quickly a dashboard that we have. This is private. This data we hope to get out to people so you can see what's going on in the system,
16:41
but right now, we just launched, so this is private and internal, but this is basically what's happening in the system right now, and glibc will start working through it in a minute. Okay, right now, I want to see why this failed.
17:11
I don't, thank you very much. All right, so what happened was my username on Docker is different than I thought it was.
17:26
That's right. Oh, I got this part right. This will just take a second. I'm sorry, guys.
17:55
Can I get a time check really quick?
18:00
How many more minutes? 20, excellent, thank you. We have to validate that input. We've got an issue on our board. Also, this entire project's open source, the build service included,
18:22
so if you go to Habitat on GitHub here, you can follow our project tracker, see exactly what we're working on right now. We also have a roadmap.
18:40
I can show you guys that in a minute, but it's all open source, so you can see 100% what we're doing. So this has kicked off now. This is that sample app. And we can see that we've got a number of jobs kicked off. glibc is building right now.
19:02
And as soon as glibc is done building, it looks at the rest of the dependency graphs, everything that depends upon glibc, kicks those builds off, and they happen in stages. So if 20 things depend on glibc, those will begin building. If 40 things begin to build off of those things, they'll start building, so on and so on until we get to your applications.
19:25
And why this matters is if you have your own origin, say you create Facebook, and you depend upon core glibc for anything, as soon as glibc rebuilds, we issue a command to your projects as well to rebuild,
19:43
so you'll have a message. It'll automatically rebuild your software, and it won't affect production. That's why I'm pretty okay rebuilding glibc right now. I hope my live demo doesn't fail, but I know that nothing's gonna happen, because inside of builder, we also have this concept called release channels. There's two by default.
20:01
One is called unstable, and one is called stable. Unstable is where all this is happening right now. If you depend upon unstable packages, I mean, more power to you, but I would not. Once we're done with this, if I wanted all these packages to be consumed by other people's projects,
20:22
or if I wanted to promote, say, our production environment of builder here, I would promote everything to the stable channel. Builder, this service that you're looking at right now, is built on, so my history is, I built video games for 10 years.
20:41
I worked on Guild Wars, I'm sorry, Guild Wars 2, Lord of the Rings Online, Dungeons and Dragons Online, League of Legends. Large online distributed games. In this whole build system, our goal is to have about five nines of uptime or more in a year. This is brand new, and it's in preview,
21:01
so I can't guarantee that at this moment, but the backend architecture that's under this is a distributed online game system, basically. And it's modeled really closely on the experience that I gained from working with the server programmers at Guild Wars 2, where their uptime was unbelievably ridiculous. Anyway, so this build is kicking off now,
21:21
and it's pushing it into Docker, where I'll be able to pull it in a moment. If we look, glibc is still building.
21:43
All right, so it's done. So I look at latest here, you'll see it's in the unstable channel. There's no latest stable build of this. If I wanted to put this into stable,
22:04
I take this package identifier, and I simply run this command to promote it. This will be in the UI at some point, but it's not right this moment. So if I push this into stable now, any supervisors that are running the software
22:20
and connected to the stable channel will automatically be updated. And it'll be automatically updated in a rolling update fashion, if you so want. We'll get to that in a moment, just as soon as we get to this Docker container bit. So if I look in Docker now, it created this for me.
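The promotion command has this shape (the fully qualified identifier below is a made-up example; yours comes from the build output):

```shell
# Promote a fully qualified package identifier to the stable channel:
# origin/name/version/release -- the release is a build timestamp.
hab pkg promote allsystemsgo/sample-node-app/1.0.2/20170101120000 stable
```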
22:48
Like I said, this is optional. You don't have to output a Docker if you don't want to.
23:13
Just trying to, yeah, there it is. Port 8080. So after I pull it, I'm gonna start the Docker container,
23:26
and forward some ports, port 8000, where the app is listening. What just output was the supervisor itself. Our process supervisor runs as PID 1. Why this is important is if you go back to the moment
23:40
where I described how our packaging system works, and we know every version of every piece of software running. Because we're process one, there's literally nothing else in here. This isn't even Alpine Linux or Busybox. This is nothing other than the process one. We know for sure every single piece of software that's running in your container, and what ports you're listening to.
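The pull-and-run step can be sketched like this (the image name is an assumption; the listen port follows the demo):

```shell
# Pull the exported image and run it, publishing the app's listen port
# so the sample app is reachable on localhost:8000.
docker pull allsystemsgo/sample-node-app
docker run -it -p 8000:8000 allsystemsgo/sample-node-app
```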
24:02
All that metadata about the application travels with the package, and because the package is immutable, it never changes, it always works. So if I look here, we'll see I'm running version 1.0.2 of this really, really basic sample app.
24:25
And if I wanted to change that, something about this app, I just commit something here, builder picks up the changes, auto builds it, and if I had that container set to auto updating, the container inside would auto update. If not, I could just pull down a new container
24:43
and deploy it if I'd like to. So let's take a peek at our, oh, glibc's still building. So we'll move on to the next bit of the slide deck, which talks about the Habitat supervisor itself. So I talked a lot about packaging, the build system.
25:01
Those are the sexy things. To this crowd, this might be the sexy thing, which is the process supervisor that runs all of this. So Habitat is a peer-to-peer process supervisor with a network thread that communicates
25:22
in a peer-to-peer fashion to spread rumors about the processes that it's running in an epidemic protocol. I like to play a game with everyone here. If you don't wanna play, it's okay. This usually works for a less experienced crowd. You guys might know how this all works already.
25:41
But I wanna simulate what it looks like to spread a rumor through a bunch of autonomous actors. So every one of you is a process supervisor running services right now. And the protocol that we're gonna speak is a fist bump. We're gonna do that for sanity reasons. If you don't wanna fist bump, you don't wanna play, you can stay sitting.
26:01
But you can high-five, you can wave, anything that you're comfortable with. What I want you to do is stand up and fist bump three people around you. So if you wanna play, let's do it. All right, whoa, whoa, whoa, not yet. We're gonna, I'm choreographing this, okay?
26:22
I'm the operator here, I get to say when it happens. Thank you for being ambitious, though. Okay, so I'm gonna issue a configuration change right now. So I tell Phil, and you start, and as soon as somebody gets fist bumps, you turn around and hit three other people. And if you've already received one,
26:41
just take it anyway and drop it on the floor, right? Okay, config. So three peers, everyone three peers. And as soon as you're done, sit down.
27:04
All right, so the rumor's pretty much been propagated to this room of 10,000 people. That was very fast, right? A lot of people came to this talk. So if I personally got off the podium and went and issued a command to every single person, it would be error-prone.
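The reason the fist-bump rumor covers the room so fast is gossip's exponential fanout; a back-of-the-envelope sketch, ignoring duplicate deliveries (so it's a best-case lower bound on rounds):

```shell
#!/usr/bin/env bash
# Each round, every infected peer tells 3 new peers, so the infected
# count multiplies by 4 per round in the best case.
peers=10000
fanout=3
infected=1
rounds=0
while [ "$infected" -lt "$peers" ]; do
  infected=$(( infected * (fanout + 1) ))
  rounds=$(( rounds + 1 ))
done
echo "rumor reaches $peers peers in about $rounds rounds"
```

Even with duplicates and failures, real gossip stays logarithmic in the number of peers, which is why the room of "10,000" settles in a handful of rounds.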
27:22
Maybe during that time, someone has to go to the bathroom or I couldn't even give the talk. I can't go do the rest of my work if I was going to each machine or person, which is a machine, and issuing this command. So what I just simulated here was the audience is a peer-to-peer network.
27:41
I suggested that a configuration change happens. You didn't have to play if you didn't want to, but if you did, what you did was establish a membership list with each other. So you know who's there, who's not there, and you also established information about yourself to each other. I mean, I'm abstracting a whole lot
28:00
of what that fist bump meant, but imagine that you were a supervisor telling each other about the processes that you run. So you established a membership list, and then in the future, if somebody went away, like say somebody had went to the bathroom, if you went to fist bump people and you realized that person was gone, well then you would send a message to your peers and say, Tom's gone, and they'd verify that he's gone,
28:25
detect there's a failure in the system, mark that person as confirmed dead, and then the world would keep on going. The person that's confirmed dead could come back and be like, what happened? I just went to the bathroom. Still, you're saving my seat, man. And they can come back and rejoin the ring.
28:42
If they come back and they start causing problems, one of the permanent members of the ring can permanently get rid of them and say, ah, that person's departed. Or as an operator, if I know that somebody's coming back and they're misbehaving, I can permanently depart a member as well and they're not allowed to join the ring again. It's a kick band. What I just described and simulated is something called the SWIM protocol.
29:03
Habitat's process supervisor has this built into it. It uses SWIM to establish a membership list with the peers around it. SWIM stands for Scalable Weakly-consistent Infection-style Process Group Membership protocol. All that means is mathematically, it scales indefinitely,
29:23
or at least linearly over time. And its job is to figure out who's alive, who's there, and who's not. Now, that doesn't have anything to do with the services in general, but it does have a lot to do with who's present and what supervisors are available. How we figure out what you're running
29:42
and what services you have is with something called Newscast, which we use to spread rumors on top. This is not a sub-protocol; it's the main protocol for how we issue rumors. There are sub-protocols built on top of it, which I'll chat about in a minute. So, spreading rumors: we spread information
30:01
about what services we're running, what their health is, what their state is, whether they're up or down, what configuration they have, and whether they're the leader or a follower. We build sub-protocols on top of this as well. One of those sub-protocols is leader election. So say you had a database server,
30:22
three of them running in a ring, and application servers connected to them. You automatically figure out where those databases are through service discovery, linking the services together. So the app servers find the database servers. Let's imagine that you shoot one of the database servers in the head and it was the leader.
30:40
Well, they'll perform a leader election, using SWIM to identify the failure and then spreading rumors to figure out who the new leader is. And the application servers will notice, oh, one of those database servers is gone, I'll stop talking to it, and they'll be reconfigured and sent a message. All this was done without operator interference. This happens while you're sleeping.
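The convergence step in that story can be sketched as a toy rumor-based election. This is an illustration of the idea, not Habitat's actual election protocol; the `(term, candidate)` tuples and the "highest rumor wins" rule are assumptions made for the example:

```python
# Toy sketch of leader election via rumor spreading (not Habitat's code).
# Each member holds a (term, candidate) rumor; gossip converges everyone
# onto the highest one.

def gossip(views):
    """One gossip sweep: every member adopts the best rumor seen so far."""
    best = max(views.values())
    return {member: best for member in views}

# Three database servers agree db1 is the leader in term 1.
views = {"db1": (1, "db1"), "db2": (1, "db1"), "db3": (1, "db1")}

# db1 is shot in the head; SWIM detects the failure, and each survivor
# proposes itself in a new term.
del views["db1"]
views = {m: (2, m) for m in views}

# Rumors spread until every survivor agrees on one winner.
views = gossip(views)
term, leader = views["db2"]
print(term, leader)   # -> 2 db3
```

Real elections need more care (quorum, tie-breaking on data freshness), but the shape, detect via SWIM, converge via rumors, is the same.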
31:02
As long as you build it into the packaging, which again is immutable, the packages never change, they always work, the system will just self-heal as long as you set the packages up correctly. This works with any software. You don't need to change your software. So this works with Postgres, it works with Redis,
31:21
it works with an unnamed, really terrible piece of enterprise software that we used as a proof two years ago to see if Habitat would work with the most garbage shit I could find, and it does. Let's check on glibc really quick.
31:41
Okay, so it's done building, and now it's kicked off a whole lot of stuff. So there's about 30 builds running that it's dispatched now, and there's 31 other builds that are pending and waiting until those are done. Now they go off in groups, and eventually, Builder will be rebuilt.
32:02
This thing, this app that you're looking at right now will be rebuilt because I rebuilt glibc. And if I promoted the entire world to stable, Builder will be automatically updated because Habitat supervisors are running Builder.
32:21
Builder is building Builder, and Builder is building Habitat supervisors. Habitat supervisors can also automatically update themselves without a service outage. When they do so, they'll update, reattach all the processes, and then eventually, Builder will be rebuilt, and the Builder components will auto-update themselves.
32:42
They'll do so in a rolling-update fashion, and we shard our data into 128 shards, separated by concern. So there's a session server, there's a job server, there's all the workers, and they'll all auto-update in a rolling fashion, so you won't notice a service outage either.
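The rolling-update behavior he describes can be sketched as a loop that takes members down one at a time and gates on health before moving on. A toy model, not Habitat's update logic; the member dicts and health callback are invented for the example:

```python
# Toy sketch of a rolling update (not Habitat's code): only one member
# restarts at a time, so the rest of the group keeps serving.

def rolling_update(members, new_version, is_healthy):
    for m in members:
        m["updating"] = True
        # Everyone except the member being restarted is still live.
        live = [x for x in members if not x.get("updating")]
        assert len(live) == len(members) - 1
        m["version"] = new_version
        if not is_healthy(m):
            raise RuntimeError(f"{m['name']} failed its health check; halting the rollout")
        m["updating"] = False
    return [m["version"] for m in members]

members = [{"name": n, "version": "1.0"} for n in ("session", "jobsrv", "worker")]
versions = rolling_update(members, "1.1", is_healthy=lambda m: True)
print(versions)   # -> ['1.1', '1.1', '1.1']
```

The key design choice is gating on health: a bad build halts the rollout after one member instead of taking down the whole tier.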
33:02
Unfortunately, maybe next year I'll do that on stage, but right now I'm not gonna update glibc to the world and show you that process. All right, back to the slides here. I've got a little bit more time. So the last bit I wanna show you is how to interact with the supervisor.
33:21
And what I'm gonna do is run a multi-tier application with Habitat. It all starts with the Habitat CLI. You can get it a couple of different ways: with Brew, with Chocolatey if you're on Windows, and there's a curl | bash that somebody in the audience is gonna complain about. If you don't want it,
33:41
you don't have to use it, you can go just fucking download it from Bintray and put it on your machine. I don't care how you get it. You can even install Habitat with Habitat, because Habitat, the CLI, is packaged with Habitat, but you gotta start somewhere. And then what we do is we enter something called the Studio. This is one of the large pillars of Habitat. So we have the distributed build service,
34:02
we have the supervisor, and then we have this development environment. Basically, it's an isolated chroot. It depends on your platform: if you're on Linux, it's a chroot; if you're on Windows or Mac, it's currently a Docker container. And it has just enough of what you need. It's a completely isolated, clean-room environment. So when you build software in here,
34:21
it doesn't pull in the dependencies of the operating system around you. You can only link to the things that you've depended upon. And what's really nice here is that this Studio is exactly what Builder is using to build your software. I'm sure everyone here has used a build system before, but what often sucks about them is troubleshooting them.
34:41
You make a change on your local machine, you're like, that's probably gonna work. You kick it to the build server, and fucking four hours later, you're still trying to figure out why the build server won't build your stuff, because the path is wrong, or it's missing a dependency, or somebody else is on the build server changing it while you're there. Anyway, I digress. We did this so we would avoid that. Basically, the Studio builds the software
35:01
the exact same way that Builder does, so you don't run into those problems. So let's kill this container, and let me bump up the font a little bit here. So I already have Habitat,
35:22
and I'm gonna enter something called the Studio. Once you're in the Studio, it gives you some information here. For instance, I'm already running a supervisor, and if I type sl, I'll see the output. It just went and grabbed some information from Builder itself.
35:42
I'm connected to the internet, so I just went out, connected to Builder, and downloaded some packages, and now the supervisor's running, but there are no processes in it. So if I run hab svc status, there are no services loaded. So the first thing I'm gonna do
36:00
is pull down the router for Builder. So it's downloading it from the stable channel, which is the exact same version that's running in production, and it also is downloading all of its dependencies, everything that the router has ever linked to.
36:21
In our infrastructure, we have gateways at the front, a router, then all the services in a service tier, and the databases in a data tier. So this is the message router between all of that and how we can stay online. If you look in the /hab directory here, you'll see that that's where the packages went.
36:41
So if I look in the router package at its bin directory, let me move this over a little bit, I'm gonna run ldd on the router binary
37:02
so you can see what I mean about the isolation. You are linked only to Habitat packages here. There is no system glibc that we're linked to. We're linking to a specific version of ZeroMQ and libsodium and libarchive.
37:20
All of those were built with Builder and with plan-build. So that's why I know everything that it's running. I'm gonna start the supervisor again because I hit Ctrl-C and killed it. Stupid bug that we have, sorry about that. So if I wanna start the service, I just type start.
37:48
If I look, it's now taken that package and begun to run it as a service. I check the status: it's running.
38:01
And I also have, on every single supervisor, an HTTP gateway running, which will give me information about what's on the system. I don't have curl because, as I mentioned, only what is needed, the very bare minimum, is in this Studio. So I'm gonna ask what provides it.
38:25
It's gonna tell me some bullshit, that's a bug. But normally it works, trust me. And I'm gonna install curl and pass it this -b flag, which binlinks it into my path. So if I say which curl, I've got it.
38:41
Okay, I don't have which, you get the point. I'm gonna ask what services are running. And it's a bunch of spew here, so let me get something to format that.
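The gateway's response is JSON describing each running service. Here is a trimmed, hypothetical example of the kind of payload you'd get back and how you might pull fields out of it; the field names and values are assumptions for illustration, not the gateway's exact schema:

```python
# Hypothetical, trimmed example of a supervisor HTTP gateway response.
# The field names here are illustrative; the real schema may differ.
import json

payload = json.loads("""
[{"service_group": "builder-router.default",
  "process": {"state": "up", "pid": 4242},
  "cfg": {"port": 5562}}]
""")

for svc in payload:
    proc = svc["process"]
    print(f'{svc["service_group"]}: {proc["state"]} (pid {proc["pid"]})')
```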
39:06
If I look, this is information about every single service that's running in the system, and what its path is. This is every piece of software that the service is running, or every piece of software
39:20
that it depends upon. You can hook this up to any system you want, to pull data from it or graph it, et cetera. The next piece that I'm gonna show you is running the application server that connects to this router, and then I'm gonna have to stop. But I can talk to anybody about this afterwards. So I'm gonna install the API server.
39:50
And if these were on two different machines, it would still work the same way; they're on one machine just for the sake of time.
40:03
And there's a bind that I have to connect this to. So I connect it to the builder-router service, which is running in the default service group. I don't have enough time to describe service groups, but there are tutorials online; there's a 10-minute tutorial.
40:21
There's everything you need to learn. We try to make Habitat simple enough to learn in two lunch breaks, so you can learn all this shit without me in front of you. So we start it. The API server started, and I'm gonna look at its config really quick, and you'll see that the routers
40:42
were automatically filled out in this configuration. This config is my app's config; I configure our servers in TOML, but this would work with yours too. And again, it's all open source, so you can see how any of this works too,
41:02
like how our whole live service works. This is the part of the plan for configuration. It's very simple, just handlebars. If you look at the routers, we went through each alive member of our router bind.
41:23
We iterated over them and pulled out each one's sys IP and its configured port. And it's a simple polymorphic relationship between the router and the API server: the router saying, I expose these things, and the API server saying, I wanna depend on something, and you must expose at least this port for me.
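The templating pattern he walks through might look roughly like this. This is a hypothetical sketch of a Habitat handlebars template, not Builder's actual config file; the bind name, value paths, and output keys are assumptions for illustration:

```handlebars
# config.toml template -- hypothetical sketch, not Builder's real file.
# Iterate over each alive member of the "router" bind and emit its
# gossiped IP and exported port.
{{~#eachAlive bind.router.members as |member|}}
[[routers]]
host = "{{member.sys.ip}}"
port = {{member.cfg.port}}
{{~/eachAlive}}
```

When a router dies or a new one joins, the supervisor re-renders this template from the rumored membership and reconfigures the service, which is the "automatically filled out" behavior shown on screen.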
41:43
Last thing that we should do here is check on our glibc build. It's still going. That's gonna be going for about another 25 minutes or so. But at the end of it, if I wanted, I could promote the entire world and software running anywhere that is depending upon our stable channels
42:02
and automatically updating would be updated. I have no more time. That's all I could show you. All this was completely unscripted. So I'm very sorry if it was terrible to listen to me talk for 45 minutes and stutter.
42:20
But I'm Jamie Winsor. You can find me on GitHub and on Twitter. Two of the core team members are here as well, Chris and Fletcher, if you guys wanna stand up. I think we're all wearing Habitat shirts. If anyone's interested in Habitat, in Builder, or in the dev Studio, we'd love to chat with you. Thank you very much for listening.