
Ephemeral Apps With Chef, Terraform, Nomad, and Habitat


Speech Transcript
Hello everyone, how are we doing today? I said, how are we doing today? That's better, there we go. I have four slides for you here today; you've seen them, they're rotating behind me, so this is entirely live demo, and that means that at the end of this talk I will also open source all the code and the script for it, so don't worry about writing things down: there's a link at the end. There's a lot of terminal and a lot of text in this, and I'm going to assume that you went to either or both of Jamie's and Fletcher's talks. I'm not going to talk about Habitat except at a very high level, so for those of you watching at home on the YouTube channel: stop, go watch Jamie's talk, go watch Fletcher's talk, then come back here. Cool. So, just to establish this: I wasn't kidding, there are four slides,
and then we're going to get started. Excited? All right.
Is this a good point size for everyone? No? I can't make it bigger, sorry. So, what I'm going to be doing today: I'm using a demo application to show the Habitat notions from the very beginning. The application is a Go binary, so I'm just going to go ahead and build that Go binary right now. I ran `go build`, and now I have a Go binary called http-echo. It's a really tiny in-memory HTTP server that echoes whatever I give it: if I give it some text, like "hi", it starts a server listening on a configurable port, 5678, and if I curl that port, localhost:5678, I get my text back, and the server prints a log message that says, hey, you got hit. That's all we're dealing with here today. No extra tooling; this is just a tiny little HTTP server. It could be a really complex application, like Rails or WordPress, but I wanted to limit the number of abstractions, so this is what we're working with.

In order to build this I needed a local Go environment, and I have all of that set up because I'm an engineer by trade, but you may not be a Go developer, or you may not need all of this. So let's jump into Habitat. Let me stop this server. Everything I have here... we're starting on a journey: this http-echo server is going to be the next MySpace, and I'm here to guide you through that journey. We have to start somewhere, right? We have to start out with our tiny little application in order for it to get big and be the next MySpace, and we need to do local development first. For now, just pretend that I'm running some off-the-shelf MySpace solution where the static binaries are precompiled and I just layer my own configuration on top; that's the first Habitat scenario we're walking through. Let me take a look at what I have here: inside my directory I have a `habitat` directory, and inside the `habitat` directory I have basically two things, a plan file and a `default.toml` ("toml" or "tom-ell", depending on how you pronounce it), plus a `hooks` directory. Let's go to the plan first.
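The layout, roughly (a sketch; the top-level directory name is an assumption):

```text
http-echo/
└── habitat/
    ├── plan.sh        # how to fetch/build/install the app
    ├── default.toml   # default configuration values
    └── hooks/
        └── run        # how the supervisor runs the app
```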
Inside this Habitat plan I'm specifying a number of configurable things: my name, the package name, where it's downloaded from. The important part, the thing you should get out of this, is the `pkg_source` on line 7: it's telling Habitat to go out, in this case to GitHub releases, and download a precompiled tarball. I've gone ahead and compiled this in advance, so I don't need Go, I don't need a build environment or any of that nonsense; it's already compiled into a single static binary with no dynamic dependencies. Then I'm using the shasum on line 9 to guarantee I get the right thing, so if the Wi-Fi craps out, or if I'm subject to a man-in-the-middle attack, this will prevent that from happening. I'm adding the resulting thing to my bin, which makes it available as an executable binary, and then I'm exporting and exposing a port; I'll show you what that is in a second. If you remember, when I started the service it ran on port 5678, but I might want to run it on 1234 or 16123, and the binary lets me do that; we can configure it in Habitat's configuration file. I also have a few things here like the build step, and you'll notice the build step is really complex: `return 0`. Really, really tough stuff. That's because by default Habitat wants to build my package, but I'm using a precompiled binary in this example, so I just override that: it's OK, don't worry about it, it's already built. Then in the install step I'm simply moving that compiled binary into the path. Habitat automatically un-archives the tarball for me (you noticed above that it was a tarball), puts it in the source path, and I just move it into `bin` and give it executable permissions.
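A hedged sketch of that `plan.sh`; the origin, version, and release URL here are assumptions, not necessarily Seth's exact file:

```sh
pkg_name=http-echo
pkg_origin=sethvargo            # assumed origin
pkg_version="0.2.1"             # assumed version
pkg_source="https://github.com/hashicorp/http-echo/releases/download/v${pkg_version}/http-echo_${pkg_version}_linux_amd64.tar.gz"
pkg_shasum="<sha256-of-the-tarball>"   # guards against flaky Wi-Fi and MITM
pkg_bin_dirs=(bin)              # makes bin/ available on the PATH
pkg_exports=([port]=port)       # export the configurable port...
pkg_exposes=(port)              # ...and expose it (e.g. for Docker export)

do_build() {
  return 0   # really, really tough stuff: nothing to compile
}

do_install() {
  # Habitat already fetched, verified, and unpacked the tarball.
  install -m 0755 http-echo "${pkg_prefix}/bin/http-echo"
}
```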
So what about that port thing? Well, if we take a look at the `default.toml`, you'll notice there are two configuration options here: the port to bind the service to, and the text to render. These are the default values: 5678 is the default for the port, and the text is "hello ChefConf!" with an exclamation point at the end. The last thing I have to show you is the run hook. The run hook is the thing that actually runs the program inside the supervisor: it says, run http-echo, which is already in the path as part of the `do_install` step; tell it to listen on a particular port, which comes from the configuration; and tell it to render some text, which also comes from the configuration. The things between those curly braces, the little squiggly lines, are dynamically filled in by Habitat and the Habitat supervisor: `sys.ip` is the current IP address, so I'm binding to the IP address of that instance, and `cfg.port` and `cfg.text` come from that `default.toml` file. As you might imagine, since it's called *default*.toml, we're actually able to override those things, which we'll get to in a little bit.
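Those two artifacts, sketched. First the defaults, with the values from the talk:

```toml
# habitat/default.toml
port = 5678
text = "hello ChefConf!"
```

And the run hook that consumes them; the `{{...}}` handlebars are rendered by the supervisor:

```sh
#!/bin/sh
# habitat/hooks/run -- a sketch; sys.ip and cfg.* come from the supervisor
exec http-echo \
  -listen="{{sys.ip}}:{{cfg.port}}" \
  -text="{{cfg.text}}"
```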
For now, let's see what it looks like to run this thing in Habitat. I've entered my studio; as an implementation detail, this started a Docker container and did some Docker magic, and now I'm going to build this binary. "Build" is a bit of an overstatement here, because it's really just downloading a tarball, unzipping it, and putting it inside a package. Done. That was fast, about 6 seconds; really, really tough stuff. There's other output here, but it's not important. What's important, at a high level, is that I had a pre-built binary on an archive service, in this case GitHub, I downloaded it, and now it's in a Habitat package and we can run it. So let me go ahead and run it: `hab svc start sethvargo/http-echo`. You can see the supervisor starting, and we can see that it's running by running `sup-log` (`sl` for short). OK, there's some output, and great: there's the log line from my server. It says it's listening on `sys.ip`, which was interpolated to a 172.x address, and it's running on port 5678 (on a new line there; thank you, new line, rather than the same one). So this is running, and we can curl this URL: copy this IP, curl it, and we get "hello ChefConf", which came from that `default.toml`. That text was interpolated into the run hook, and it's listening on the port and IP that were configured.

Now let's stop the service, since I'm done with it and we're moving on. I'm going to stop the http-echo service; that was pretty fast, so let's verify the logs. OK, we got a stopping signal, but what is this kind of scary line here, "unknown signal terminated"? It does say it shut down. (And I don't know about you... Fletcher told me that SV stands for "supervisor", but I believe they named it after me, Seth Vargo, so every time you see "SV", just know it's actually named after me.) Notice that the second-to-last log line is an error, and it's coming from my application, my Go binary. What's happening, I found out after some research, is that Habitat sends SIGTERM as the signal to gracefully terminate an application, but my app doesn't respond to SIGTERM; it responds to SIGINT to shut down gracefully, which is just another Unix signal, like SIGUSR1 or SIGQUIT or whatever. So I have to update my application, my Go binary, to also listen for SIGTERM. Since this is an HTTP server, and it's going to be as popular as MySpace, I can't be dropping connections all the time: once we reach massive scale, critical mass, we need to gracefully drain connections. My app already does that, I already wrote the code to do it, but only in response to the right signal, in this case SIGINT, and Habitat is sending SIGTERM. Let's exit the studio; when I exit the studio, everything's gone.

So let's move on. What I have to do now, in order to change this behavior, is compile from source, and that means a few changes to our plan. Again you'll notice I have that `habitat` folder, but now also a `src` directory; I downloaded as much as possible ahead of time, like that tarball. Inside the source directory I have my actual Go source, all the Go files: server, version, handlers, and so on; we can actually remove some of these, because I tested this earlier. So we have four files in here, and if you don't know Go, none of this actually matters. What matters is that I need to change the signals my application responds to, and I've done this ahead of time so you don't have to watch me type. Down here in this section I added this particular line of code, or half a line of code really: in case I receive SIGTERM, also do a graceful shutdown; you can see previously it was just SIGINT. Now, when the Habitat supervisor sends SIGTERM, my application drains its connections and shuts down. But this is source code, and I'd like to pretend that I don't have a Go environment ready to go, so how do I build it? With Habitat. We can update our plan to remove all that downloading stuff, because we have the source code here and don't need the tarball anymore. Instead, I'm adding a dependency on what's called `core/go`: a package dependency that is going to pull in all the packages I need to build this as a Go binary in an isolated environment.
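About that signal change: in Go it comes down to registering SIGTERM alongside SIGINT with the standard library's signal package. A minimal sketch, not Seth's exact code (the channel name and surrounding structure are assumed):

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigCh := make(chan os.Signal, 1)

	// Previously only os.Interrupt (SIGINT) was registered. Adding
	// syscall.SIGTERM lets the Habitat supervisor trigger the same
	// graceful-shutdown path.
	signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)

	// ... start the HTTP server here ...

	<-sigCh // block until either signal arrives
	// ... gracefully drain connections, then exit ...
}
```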
That means we also need to fill in our `do_build` step. Remember, before, `do_build` was just `return 0`; now we're copying our source code into the cache and building it, the same `go build` command as before. Our `do_install` step is exactly the same, no changes there. So we actually net lost two lines of code from the plan file: we added one line to `do_build` but removed three from the top, so it's actually a smaller plan, even though we're pulling in a whole Go context now. The process, the workflow, is still the same, and the hooks haven't changed.
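The updated plan, sketched; `PLAN_CONTEXT`, `HAB_CACHE_SRC_PATH`, and `pkg_dirname` are standard Habitat plan variables, but the exact paths here are assumptions:

```sh
# pkg_source/pkg_shasum are gone; we build from the local src/ instead.
pkg_build_deps=(core/go)

do_build() {
  # Copy the local source into Habitat's build cache and compile it:
  # the same `go build` as on my laptop, but inside the studio.
  cp -r "${PLAN_CONTEXT}/../src/." "${HAB_CACHE_SRC_PATH}/${pkg_dirname}/"
  ( cd "${HAB_CACHE_SRC_PATH}/${pkg_dirname}" && go build -o http-echo )
}

# do_install is unchanged: install the binary into ${pkg_prefix}/bin.
```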
I've gone ahead and entered my studio. This is a different studio from before; each time you enter the studio, it's fresh. You'll notice it downloaded some stuff at the beginning, like curl, but when I run the build, it's downloading `core/go`: that came from the package deps, and it downloads all the dependencies that `core/go` needs to compile from source, and then it starts building. It pulled everything down and built the application; this one took a bit longer, about 20 seconds, because we had to compile the thing from source and download some extra external dependencies. But now we have the service and we can start it again: `hab svc start sethvargo/http-echo`. It's running; we can check the logs and see that it just started, and we can curl to make sure it's running. Great, it's still running. Then the moment of truth: did we compile the right thing? I'll run `hab svc stop` and check the logs, and... there we go. We got a note from the service that said "received interrupt", and we got a graceful termination, which means my application gracefully exited and drained its connections. If that had been under really high load, if we'd reached MySpace scale, my application would have drained its connections before it terminated. We have a well-behaved application here. To a certain extent this also illustrates another point: Habitat is a really great tool, just like Docker and rkt and all these tools, but at the end of the day there are certain things that are application responsibilities. This isn't something Habitat can solve for you: if you have a server or a service, it should know how to drain connections when it receives a certain signal; that's a property of a well-behaved application.

Now let's move on to something a little more exciting. I don't know about you, but if we're going to be the next MySpace, we need more than one of these, right? One instance can't handle MySpace scale, so we need to do some dynamic scaling; we need to be able to build this thing up. Let's check it out: I'll go up a directory, to `dynamic`, and in here I have the same source and the same `habitat` directory, with a few differences inside. The first is in the run hook: solely for demonstration purposes, the run hook has a new line, so instead of just rendering the config that comes from `default.toml`, it also prints its hostname, its IP, and the port it's running on. That becomes important when we put this thing behind a load balancer, because we'll actually see that we're hitting different endpoints. If this were a real application you'd want every instance to behave the same, but for the purposes of the demo we're cheating a little to make it look as interactive as possible. That's the only real change; everything else is the same.
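The demo-only run hook tweak might look like this (a sketch; `sys.hostname` is from Habitat's template data, and the exact formatting is assumed):

```sh
#!/bin/sh
# habitat/hooks/run -- demo variant: each instance identifies itself
exec http-echo \
  -listen="{{sys.ip}}:{{cfg.port}}" \
  -text="{{cfg.text}} ({{sys.hostname}} {{sys.ip}}:{{cfg.port}})"
```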
So I enter the studio and build this thing. You'll notice this workflow is always the same, and I work at a workflow company, we focus a lot on workflows: this Habitat tool is also a workflow tool. I enter my studio, I run build; it doesn't matter whether I'm running Ruby, Go, Python, PHP, or COBOL, it's the same workflow every single time. (I'm just talking to kill time while this compiles... perfect, there we go; you can cut that from the stream.) Great, let's start one of these. The thing is running; `sup-log` shows it sitting there, running on its port, and this time it prints the hostname, the IP, and the port. I could grab this IP and the port if we needed them, but now I want to apply a configuration change. The way I do that is with `hab config apply`. I give it the target service group, which is going to be `http-echo.default`; I give it a monotonically incrementing integer, which is just going to be 2 in this case; and I give it the thing I want to change, in this case the text. If you remember, the current text says "hello ChefConf", so I'm going to change it to say "hello PuppetConf"... and when we run the log command you'll notice there's a "hooks recompiled" at the bottom, and if we curl port 5678 we get "hello PuppetConf". Well, this is awkward, but we should fix that: we can go back, and as long as we use a higher monotonically incrementing integer, I can change it right back, and if we curl, we're back to ChefConf. I don't have to restart the service or do any of that, and this gets more important when we put this thing behind a load balancer.

Now, in order to run this at scale: the Habitat supervisor currently can't run multiple instances of the same package, the same application, so I need to export this to Docker, which thankfully Habitat makes really easy. I can just say `hab pkg export docker` and the name I want to export, and this is going to download all the stuff it needs to export this as a Docker container. Again, I'm going to attempt to talk my way through it: it not only installs the things to rebuild my package, it then runs the whole Docker build process, so in a second you should see the familiar Docker output... there it is: sending the context to the Docker daemon, and here it's iterating over the Dockerfile. Notice that I didn't write a Dockerfile, and I don't have to touch Docker; it just happens to be installed, and we're now exporting a container, so I don't have to think about the whole Dockerfile process at all. Now I have a Docker image on my local laptop, and I'm going to open a new tab to show you how this works. When I start one of these outside of the studio, I do it with the familiar `docker run` command; I'm running in interactive mode so you can see the logs, but it could start as a background process, and it's just the name of the container. Here we go: notice the familiar output, but this is running in Docker, on my Mac, out of a container, and notice it's listening on 172.17.0.3.
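For reference, that apply-revert-export sequence, roughly (this assumes the stdin form of `hab config apply` and the `sethvargo` origin):

```sh
# Live config change: service group, monotonically increasing version, TOML
echo 'text = "hello PuppetConf!"' | hab config apply http-echo.default 2
# ...awkward. Revert with a higher version number:
echo 'text = "hello ChefConf!"'  | hab config apply http-echo.default 3

# Export the package as a Docker image, then run it like any container:
hab pkg export docker sethvargo/http-echo
docker run -it sethvargo/http-echo
```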
That's just the way Docker's IP networking works; nothing special about it. Now let's start a second one of these. How do I do that? I have to group them together into a peer group, like a service ring. The way I do that in this scenario: I open another tab and run the same command, but this time I specify the `--peer` flag and pass in the IP address of my already-running container, which in this case is 172.17.0.3. When I run this, it starts up the same, but you'll notice that "hooks recompiled" line, and if you look above it you'll see another "hooks recompiled": that's your indication that those two services have found each other. We can do this a third time, and I can join on any one of the other IP addresses; in this case I joined two of them together, but the way the gossip protocol works under the hood, they'll all find each other. All these services are now gossiping, communicating with each other. Let me clear all the screens here.

Now I need a way to address this. I have three of these things, three instances, running, and a really common pattern is to put them behind a load balancer, where traffic is round-robined (or some other routing algorithm) across all the services, and that makes the URL accessible to the users of our application at, you know, myspace.com or wherever our application is going to be. The way I do that is by opening yet another new tab and starting a Docker container that is a Habitat service I built ahead of time. This Habitat service is called nginx-lb; it's fully open source, it's on my GitHub, and what it does is accept a service bind: it binds to a backend and automatically creates an nginx load balancer based off of it. You're free to go look at it if you want, but it's just an implementation detail of this talk. I'm going to run that, again interactive so you can see the details, but I'm also publishing port 80, and I'll show you why: I'm binding port 80 on my host to port 80 on the container. So it's `sethvargo/nginx`, and since everything has to be in the same peer group, I pass in the `--peer` flag with one of those addresses, 172.17.0.3, but there's also a `--bind` argument, which is the backend service group to bind to, the one I want to route traffic to: in this case `backend:http-echo.default`, the backend service group. I start this up and you'll notice we get a couple of errors, like "hey, you went too fast, I don't see that service"... and, oh, there we go, it found it, because the ring completed and propagated that gossip membership. And there we go: nginx is now running, and it started the process; notice that it compiled an nginx.conf and a MIME-types file, and those configs are being managed by the Habitat supervisor. Now, the moment of truth: we'll do a side-by-side here. I open another tab over here and curl localhost. Boom. Not only am I hitting the load balancer, as you can see from the log message on the left, but let's run this in a loop.
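The ring-building and load-balancer commands from this section, roughly (image names and the peer IP are per the demo; supervisor flags are 0.x-era assumptions):

```sh
# Join each new container to the first one's gossip ring:
docker run -it sethvargo/http-echo --peer 172.17.0.3

# Publish port 80 and bind nginx to the backend service group:
docker run -it -p 80:80 sethvargo/nginx \
  --peer 172.17.0.3 \
  --bind backend:http-echo.default
```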
Running that curl command every half a second, you'll notice we're hitting different IP addresses, .3, .4, .5, running on different nodes, all behind the load balancer, all monitored by Habitat; all of the backend upstream dynamic load balancing was handled by nginx on top of a Habitat integration, the bind. All right... I can see you're not impressed, so let's change that. I'll close this and jump back over to my studio. The studio is still running, but this studio is not connected to that peer group: the studio runs its own supervisor, and it's not connected to those Docker containers, right? So what I'll do over here is run that same config change: `hab config apply`, the service group `http-echo.default`, a monotonically increasing integer, and this time the text is going to be "hello world". But I need to somehow target those other peers, and I do that by passing in the `--peer` flag here; I can give it any one of those peer addresses, in this case .0.4. I type `text = "hello world"`, and... config applied. And now it's updated, just like that: it says "hello world", and I can come over here and see the text has changed everywhere. Isn't this cool?
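That remote apply, sketched (any ring member's address works as the peer; the version number is illustrative):

```sh
# Target the external ring by naming any peer in it:
echo 'text = "hello world"' | \
  hab config apply http-echo.default 4 --peer 172.17.0.4
```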
The challenge I have now is that, I don't know about you, but I don't run production on my local laptop, at least not anymore. So let's quit all this: I'll exit the studio, quit these Docker containers, and quit the nginx one, and let's bring this straight up to production. As we go up to production, nothing has changed about the Habitat package; I haven't added or changed anything, it's the same `dynamic` one as before. What I have added into the mix are three tools: HashiCorp Terraform, HashiCorp Consul, and HashiCorp Nomad. I know, I'm sorry, that's a lot of tools, but all of them are free and open source, and they integrate really well, which is what I'm about to show. The first thing I did, in advance, was spin up a cluster. Inside this repo, which again will be open source, I have a bunch of Terraform configurations. Terraform is an open-source tool for provisioning infrastructure resources: Amazon, Google, Azure, DigitalOcean. It can manage things like compute instances, but it can also manage things like GitHub teams and permissions, or integrations with your public cloud, basically anything that has an API, with a declarative syntax. I'm not trying to teach Terraform here; what I'm trying to show you is that this whole cluster provisions in about 3 minutes, but since I wasn't sure how long my talk would take, I did it in advance. What I'm running here is just making sure I'm all up to date.
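As a rough sketch of the shape those configs take (HCL1-era syntax; the AMI variable and resource names are assumptions, the counts and instance types are from the talk):

```hcl
# Instances come from an image with Nomad and Consul preinstalled.
resource "aws_instance" "server" {
  count         = 3
  ami           = "${var.ami}"
  instance_type = "c3.2xlarge"   # servers
}

resource "aws_instance" "client" {
  count         = 5
  ami           = "${var.ami}"
  instance_type = "m3.xlarge"    # clients
}
```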
What I have here are five m3.xlarge instances running as clients, and three c3.2xlarge instances running as servers. If you don't know those, they're beefy instances that can run lots and lots of applications, running on Amazon Web Services, though this demo has nothing AWS-specific to it; you could run it anywhere, AWS is just for illustration purposes. These clients have software preinstalled: in particular, they have two HashiCorp tools, Nomad, which is an application scheduler, and Consul, which is a service-discovery tool and distributed key-value store. Nomad is akin to something like Mesos or Kubernetes, if you're familiar with that terminology, and Consul is similar to something like ZooKeeper, or other service-discovery primitives, a registry. That's important, and I'll talk about why in a second. So now I'm going to jump onto one of these clients... we log in here, hoping we get a nice "have a great day"... there we go.

One of the interesting things about schedulers is that most schedulers work by running multiple applications on the same physical host. But if you have an application that binds to a specific port, in this case 5678, you either have to use something like overlay networking, where you do a kind of firewall routing, or you end up with port mapping, where you use really high port numbers, like 43570, to dynamically allocate these resources on different ports, and then you need some sort of registry to keep track of which ports the services are running on. You'll also recall that in the previous section, in order to join those things together, those three http-echo instances and that one nginx load balancer, I had to copy-paste IP addresses. That's not really automation-friendly, and I want to be able to do this very rapidly; again, we're the next MySpace, we have to be able to handle that maximum scale. (I keep saying that as a joke... I'm kidding, but it's fun.)

In order to fix, or at least alleviate, this problem, I'm leveraging Consul. What Consul allows me to do is this: I'm going to start a service and register it with Consul; particularly, the service registers itself with Consul, so Consul knows its IP address, the port it's listening on, and a bunch of metadata about it, and then I can use Consul's DNS interface, so instead of passing around IP addresses I can pass around well-known DNS entries. Let me show you what I mean via these jobs. This is a Nomad job; I haven't told you a lot about Nomad yet, and that's OK, we'll talk about it in a second. This is a job specification. What I'm doing here is starting a service, you can see `type = "service"`, with a priority of 80, which is just an arbitrary priority relative to the other jobs that are running. I'm running it inside Docker, running the official Habitat supervisor, and I'm marking it as a permanent peer, which means it holds the ring together in the face of things like network partitions and transient peers; again, Habitat implementation details. The important part is these four lines of code: I am registering this service as `hab-sup`, so `name = "hab-sup"`, and I'm saying that it has an HTTP port (I'll show what that's actually for in a second; for now, it just has an HTTP port). What this says is that this service will be available at `hab-sup.service.consul`, just like www.myspace.com,
where `service.consul` is the TLD and Consul resolves it within your datacenter. Then I specify the resources that I need: 1 CPU and 1 gig of RAM, which is a little bit of overkill, but whatever; 20 megabits of network; and I'm allocating a static port, 9631, which is the port Habitat runs its HTTP server on, and a static port 9638, which is the gossip port. Pretty easy: this is basically encapsulating the `docker run` command into a text file, and the advantage of doing that is that now we get code review, we get peer review, we can version this text file, and we can very easily check it in and out of source control.
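Assembled, the job file looks roughly like this (HCL1, Nomad 0.5-era syntax; the image name and datacenter are assumptions):

```hcl
job "hab-sup" {
  datacenters = ["dc1"]
  type        = "service"
  priority    = 80

  group "sup" {
    task "hab-sup" {
      driver = "docker"

      config {
        image = "habitat/hab-sup"        # assumed image name
        args  = ["--permanent-peer"]     # survives partitions/transient peers

        port_map {
          http   = 9631                  # supervisor HTTP API
          gossip = 9638                  # supervisor gossip
        }
      }

      # The four important lines: register with Consul as "hab-sup",
      # reachable at hab-sup.service.consul
      service {
        name = "hab-sup"
        port = "http"
      }

      resources {
        cpu    = 1000   # MHz ("1 CPU")
        memory = 1024   # MB  ("1 gig of RAM" -- overkill, but whatever)

        network {
          mbits = 20
          port "http"   { static = 9631 }
          port "gossip" { static = 9638 }
        }
      }
    }
  }
}
```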
Now I'm going to go ahead and run this via the `nomad run` command, passing in the path to the file. What this does is trigger what's called an evaluation and an allocation: Nomad inspects the cluster, so it knows your cluster has, say, 1 terabyte of RAM and 1 petabyte of disk space, and it does bin packing to schedule the application in the most appropriate place. We don't really care where it went: on this host I'm on, we can check and see... it's not running on this one, it's on one of the other clients in the cluster, and it doesn't matter. But how do we find it if we don't know its IP address? That's where Consul comes in, because what I can do is hit `hab-sup.service.consul`, and that resolves to the IP address where this is running. What's nice is that unlike copying and pasting IP addresses around, I can hard-code `hab-sup.service.consul`, because that's a dynamically resolved endpoint. That means that when I pass in the `--peer` flag, instead of 1.2.3.4 I just pass in `hab-sup.service.consul`; we rely on the kernel to resolve that, the resolution actually happens via Consul, which exposes a DNS-like interface, and it resolves via service discovery.

Let's look at another job here, the echo server. This one is very similar: I'm creating a service, and I'm creating three of them this time, so I only have one supervisor but three instances of the server, again very similar to what I ran locally a minute ago. I'm giving it a really tiny ephemeral disk, because my service doesn't actually need a disk at all; logs are the only thing it needs disk space for. And then here is something a little different from what you might have seen before: I'm passing in this environment variable, `HAB_HTTP_ECHO`, which is the environment variable Habitat looks at for configuration, and the port I'm giving the service to listen on is Nomad's `http` port. Nomad, at runtime, dynamically populates the values between those curly braces with a really high port number that it allocated for this task. That means my server, instead of running on 5678 like it has been, is going to run on 23406 or whatever available port the scheduler decides; it does that for us, we don't have to think about it. Down here I'm passing some arguments to my container: remember, my container is running the Habitat supervisor, and the supervisor by default runs its HTTP port and gossip port on 9631 and 9638. But if I bind those ports on my host, I can't scale my application: there's only one port 9631 per host, and I can only bind it one time. So what we do here is let Nomad dynamically assign not only the port my service listens on, but also the ports Habitat listens on for HTTP and gossip, so each instance of the echo application gets assigned different dynamic high ports, and Habitat handles this seamlessly. Since this is a tiny little Go binary, we can give it very minimal resources: 1/20th of a CPU and 128 megs of RAM, almost all of which is just for Docker (the app itself uses about 5 megabytes of memory), and I'm allocating 3 dynamic ports. You can tell they're dynamic because there's nothing inside the braces: that tells Nomad "just pick me an available port", something that isn't privileged, so not 80, not 443, something higher than 1024 that I can bind very easily. And I'm telling it to do minimal logging so we keep the disk usage low.
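A sketch of that echo job; the image name and the 0.x supervisor flag names are assumptions, while `${NOMAD_PORT_*}` is Nomad's runtime interpolation:

```hcl
job "http-echo" {
  datacenters = ["dc1"]
  type        = "service"

  group "echo" {
    count = 3

    ephemeral_disk {
      size = 10   # MB -- logs are the only thing that touches disk
    }

    task "echo" {
      driver = "docker"

      config {
        image = "sethvargo/http-echo"   # the exported Habitat container
        args = [
          "--peer", "hab-sup.service.consul",   # no IPs to copy-paste
          "--listen-http",   "0.0.0.0:${NOMAD_PORT_hab_http}",
          "--listen-gossip", "0.0.0.0:${NOMAD_PORT_gossip}",
        ]
      }

      env {
        # Habitat reads TOML config overrides from HAB_<PACKAGE> variables
        HAB_HTTP_ECHO = "port = ${NOMAD_PORT_http}"
      }

      resources {
        cpu    = 50    # 1/20th of a core
        memory = 128   # MB; almost all of this is for Docker itself

        network {
          mbits = 1
          port "http"     {}   # empty braces: "pick me an available high port"
          port "hab_http" {}
          port "gossip"   {}
        }
      }

      logs {
        max_files     = 1
        max_file_size = 5   # keep disk usage low
      }
    }
  }
}
```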
So let me go ahead and run this now.
I run the http-echo job, and it's running. Let me jump back: when we run the `nomad status` command, we can see we have two jobs, two services, running: hab-sup and http-echo. We can get more information about a particular service by passing in its name, so here I can ask for the status of hab-sup in particular: we have one running, and it has an allocation. Each instance of an application or service gets an allocation, so I can get the status of that allocation, and here I can see that this particular Habitat supervisor is using 2 of its 1,000 megahertz of CPU, 66 megs of its gig of RAM, a bit of its disk, and it's bound to two addresses, on 10.0.1.126, on the two different static ports. We can get the logs for this service as well: instead of `alloc-status`, we ask for the logs, and that log output should look really familiar; it's what we saw in the studio and in the Docker container, so we can verify the thing is running. The same is true for http-echo: we can ask for the status of http-echo, and you'll notice the count for the whole thing; pretty much, we have three of these running, they all transitioned to the running status. I'll take the first one here and check that alloc's status, and here's where things are a little different: notice that the HTTP port is 56876, and it's running on a really high gossip port and a really high Habitat HTTP port; these are the ports the scheduler assigned to me. Notice also that it's a different IP address; this one is running on .2.175. I don't have to log in to the service: the same way we did before, I can grab this IP and curl it, and we can see the thing is actually running. This isn't BS; it's actually running on a different host in the cluster, and I just hit it across the network.

So, just like before, I'd like to put these behind a load balancer, because I don't want to copy-and-paste IP addresses. Let's take a look at the nginx load-balancer job. It's largely the same as the others, but the one difference is on line 3: the type is no longer `service`; the type is now `system`. This is a special type of job in Nomad that, instead of specifying a count, tells Nomad to make sure one of these is running on every host in the cluster, and as new hosts join, they automatically get this job. And I'm binding this one to port 80 on the cluster: everywhere else we used dynamic port allocation, but the important part is down here, where I bind port 80 statically. What that means is that every host in my cluster, in this case every AWS instance, is going to have an nginx service listening on port 80, bound to the host, so if you hit the public IP of the AWS instance, you'd be hitting this nginx load balancer; this is important in a second. For now: I have five clients, so how many nginx servers am I going to get scheduled? Five, right? We got five scheduled, and Nomad is handling the downloading of this container (it didn't exist on the hosts in advance); it's handling all the background network traffic and placement, and it's making sure there's one of these on every host, every client, in the cluster. We can check the status of the nginx job: perfect, all five of them running. If I run `docker ps`, we can see, look, that's one of the nginx containers. I can ask Nomad for the status of the nginx service, grab one of those allocations, and ask for the logs of that allocation, and we can see it's actually getting requests. (I'm giving something away here: there's an ELB health check coming in, which I'll show you in a second.) But those are the logs.
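That inspection loop, roughly (the allocation ID here is made up):

```sh
nomad status                # list jobs: hab-sup, http-echo, nginx
nomad status http-echo      # per-job detail, including allocations
nomad alloc-status 8f31c6b2 # resources plus the dynamically assigned ports
nomad logs 8f31c6b2         # task stdout -- same output we saw in the studio
curl 10.0.2.175:56876       # hit a dynamically bound instance directly
```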
Now what I can do is curl localhost, because this thing runs on localhost, port 80, on every host, and each time I hit it I'm getting round-robined to a different instance. Let's set up that half-second curl loop again, like before, and you can see that it's changing continuously. But like I said before, we need scale, right? How many of these do you think we can reasonably run? How many people think we can do 50? Let's check it out. One of the differences between Nomad and other schedulers on the market is its speed: with Nomad we've scheduled a million containers in under 5 minutes, so at 50, this should be pretty darn fast. I edited the count and timed `nomad run` on the http-echo job, and it ran the evaluation in a little over a second: we just told it 50, so it added 47 containers to the 3 already running. What's interesting is that these are public-facing, so I'm going to jump over to my laptop here, fire up a web browser, and hit nomad.hashicorp.rocks; you can do this on your phone if you'd like, it's a real URL, and it's actually load balancing across all those clients. If I sit here hitting refresh, all 50 of those instances are being load balanced; you're free to try it out, it's a public-facing thing. So what we have is a load balancer, an ELB, that hits port 80 on all of our clients; port 80 is nginx listening, and nginx routes traffic to the dynamically bound backend services, which dynamically registered with the Habitat supervisors. I have 2 seconds for this to keep going... I don't, really, but I have one more thing to show you: let's go from 50 to 200.
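The scale-up itself is just a count change and a resubmit; Nomad diffs the job and schedules only the delta (a sketch; the file name is assumed):

```sh
sed -i 's/count = 50/count = 200/' http-echo.nomad
time nomad run http-echo.nomad   # the evaluation returns in about a second
```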
Check it out: we'll time this and run it, and see how long it lingers... instead of 5 seconds... remember, we're scheduling 200... and it's over: 1.14 seconds to schedule 200 containers. We have now reached critical MySpace mass, and if we jump over here, we'll start seeing more and more and more of these as it runs. I don't have time for it, but the end of my demo was pushing a config change that said "thank you"... I only have negative 44 seconds left, so that's the end of my talk. Everything is public on GitHub, there's the link. I'm Seth Vargo; thank you!

Metadata

Formal Metadata

Title: Ephemeral Apps With Chef, Terraform, Nomad, and Habitat
Series Title: Chef Conf 2017
Author: Vargo, Seth
License: CC Attribution - ShareAlike 3.0 Unported:
You may use, adapt, and copy, distribute, and make the work or content publicly available, in unchanged or adapted form, for any legal and non-commercial purpose, provided you credit the author/rights holder in the manner they specify and pass on the work or content, including in adapted form, only under the terms of this license.
DOI: 10.5446/34579
Publisher: Confreaks, LLC
Publication Year: 2017
Language: English

Content Metadata

Subject Area: Computer Science
Abstract: In addition to composition and portability, one of the more commonly overlooked advantages of moving to microservices, containers, and Infrastructure-as-a-Service is the ability to create highly ephemeral, one-off environments for almost any purpose. Imagine a world where a code change can be tested in a completely isolated environment where 100% of the resources are ephemeral. Say goodbye to long-lived staging or QA environments and say hello to Chef, Terraform, Nomad, and Habitat. Terraform and Chef provide the foundation to build and provision infrastructure resources for your application. Running in parallel, these tools can often provision all the infrastructure required for a cluster in 2-3 minutes. Part of that process installs Nomad, an application scheduler akin to Mesos or Kubernetes, and the supporting resources for Habitat, which enables you to automate any app on any platform. Joined together, this toolset enables rapid development, testing, QA, staging, and more. This demo-driven talk will go from nothing to fully ephemeral at the press of a button.
