
How CoreOS is Built, Modified, and Updated

Just to satisfy my curiosity: who here has deployed CoreOS? That's what I like to see. We're going to be going through a lot of what's happening on the back side of the project. We still have another minute or so — I don't know how much of this is tied to the schedule; I know, like CCC, you start when they say you start, like clockwork. OK. And I'm happy to wait a little bit and, for folks in the back, if there's room, move up a bit more. Absolutely — so, we will start. My name is Brian "Redbeard" Harrington, and
I'm at CoreOS. I used to have a giant beard that hung down to my waist. There is this television show in the US called Duck Dynasty — you Europeans have probably not heard of it, and that's a good thing; it's a bunch of, frankly, racist jerks — and I was tired of being associated with them, so I hit the reset button, and now it's kind of reaching the desired length again, so I can allow it to come back. Just to give you a little bit more history, I'm going to start by saying I'm a recovering Slackware
user. I started out with Slackware 4.0, and it really introduced me to Linux by just throwing me into the deep end of the pool. It wasn't like having to do a full bootstrap from scratch, but it was enough to really introduce you to dependency hell: the familiar mantra of ./configure, make, make install for every package that you wanted to install — this being circa 1997, I was really not into binary package managers, at least at the time. Knowing that, and realizing that this becomes a bit untenable, especially if you want to maintain a system over time — and I'm not too proud to admit that I went down that path for a while — I was a BSD user for a long time, because the ports tree took care of that dependency management while still giving you the ability to compile everything from source, and that was important to me at the time. But then, as hardware got faster and faster, you didn't really have to squeeze every last drop out in order to have a usable system. And I
was a Red Hat, slash, Fedora user for a very long time — and still a Fedora user right now. A lot of that was helped along by the fact that I worked at Red Hat for a very long time as well; I was on the consulting team in North America. For anyone who is not aware of the lineage of the Red Hat consulting team: when VA Linux kind of disbanded and broke apart — like the American Bell System — Red Hat gladly took in VA's consulting group, brought the consulting team straight in the door, put a new name on it, and then just kind of let it ride for a long time. I was fortunate enough to work in that organization, the one with the lineage of VA consulting attached to it. But you know,
the big thing, if you're using Red Hat, is that you stop there and you become used to
package managers in general. Yum was kind of the solution to solving that dependency hell; other distros using dpkg — those with Debian roots — used apt to solve it; and then you start getting things like pacman, and ebuilds and Portage with emerge. And then you get to the point where a young, bearded individual like myself takes off the fedora and moves to the distro which is CoreOS. Now, to tell you a little bit more about CoreOS: there are a lot of folks here who have deployed it, and a lot of folks who have not. CoreOS was started by individuals, especially early on, with a pretty OK lineage of folks who understand this space: the CEO came from Rackspace, the CTO came from SUSE, the first employee was from Google, and we have a number of other ex-Red Hatters as well. That's just a little bit more background. But none of you came here to hear me give this history lesson; what you came to hear me talk about is specifically
how you actually do this bootstrap process of building a distro. So, in the case of
an RPM-based distro, the whole build system is Koji on the open-source side of things — a really awesome tool for managing a single package across a number of different architectures and guaranteeing that it builds correctly. There are similar tools on the dpkg side, and most distros have some flavor of bash glue; following on from the previous speaker, I'm also not too proud to admit that we at CoreOS live in a little bit of that as well. But to talk more about the CoreOS toolchain: one of the most important parts on the side of building this distro is a set of packages called depot_tools. depot_tools was created by Google, and it's — they would say — a git workflow management tool. There's a bunch of stuff in there as well for doing code review using Gerrit, Google's code review system, so there's a lot of useful stuff. The biggest thing that we at CoreOS use out of it is a tool called repo, and what repo does is allow you to manage a bunch of git repositories as one. So now we can manage actual source trees: developers on one project can just work on their own source tree while developers on another project — things like the etcd key-value store, and fleet, a kind of cluster abstraction on top of systemd — use their own. The fleet developers can really just care about fleet, the etcd developers can just care about etcd, and when they're ready to cut a release they can do that; each team can kind of manage themselves independently. repo uses this XML-based manifest, which I'm really not that jazzed about, but the nice thing about it is that only a small number of people ever need to touch it, because a lot of it happens in the back end. A copy of our manifest information is up on GitHub — that's the CoreOS manifest. To give you a little bit more of an idea of the types of things that you see when you go and do a repo sync:
you just start seeing it rip down repositories: you're pulling etcd, locksmith; it goes upstream to Docker to grab their stuff and pull it into our own system; and toolbox, bootengine, and a bunch of other stuff. This master XML file is just a manifest that breaks down and starts listing projects: the basic name of the project, the overall group that manages that project, the kind of subgroup, and then the name of the software. A lot of this will be "github" for the project, because GitHub is the overall host for a lot of our repositories, but one remote would be "kernel". "github" specifies a certain mechanism for how you retrieve updates from that specific source control system, while "kernel" says: hey, we're going to use kernel.org — because we use the upstream kernel, we push all of our patches upstream, and we run the current stable. All very different from the distros of my past, but it's a wonderful world: I know that when I'm building this new image, I'm running 3.18.4 and I've got all the new hardware support. So the group is going to be something like — in the case of GitHub, your organization — coreos, or docker, etc.,
and the software is the name of the package. Now, to get a little bit more into that — we were talking about these remotes — with a remote name of "github" it says: hey, jump up one directory from the overall manifest, then use this general strategy for retrieving information. Whereas if you're using something like cros, which we'll talk about, you should retrieve that from https://chromium.googlesource.com, and their whole review process uses Gerrit. (Wait a sec — is chromium in there? Yes, that was in there.) So, on top of depot_tools, Google wrote another really awesome piece of software. You know, it's a little bit of a tragedy — a travesty — that for all of their data mining and the horrible things Google does, they don't necessarily get the accolades or the commendations that they should for releasing open source software. One other fantastic thing they wrote is something called the cros SDK. It's used to build Chromium OS — the laptop distro — and it is just nasty: it's just a bunch of bash scripts which do things. But CoreOS had this vision for how we were going to build this operating system and how we were going to push updates, and the cros SDK happened to do that, so we were able to be consumers of this open-source upstream. In the same way that during the last talk you heard this argument for "use the upstream, push things into the upstream, converge on common patterns" — it didn't make sense for us to reinvent that wheel. Now, we don't use all of it, but we push patches where needed and we consume the upstream as often as possible. So it's an SDK: Chromium OS is built on this SDK; we are built from this SDK.
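To make the manifest structure concrete, here is a minimal Python sketch that parses a repo-style manifest. The element and attribute names (`remote`, `default`, `project`) follow the documented repo manifest format, but the concrete projects, paths, and fetch URLs are illustrative, not copied from the real CoreOS manifest.

```python
# Hypothetical repo manifest, loosely modeled on the coreos/manifest layout.
# Elements follow the repo manifest format; the entries are made up.
import xml.etree.ElementTree as ET

MANIFEST = """\
<manifest>
  <remote name="github" fetch="https://github.com/"/>
  <remote name="kernel" fetch="https://git.kernel.org/pub/scm/"/>
  <default remote="github" revision="refs/heads/master"/>
  <project name="coreos/etcd" path="src/third_party/etcd"/>
  <project name="coreos/locksmith" path="src/third_party/locksmith"/>
  <project name="linux/kernel/git/stable/linux-stable"
           path="src/third_party/kernel" remote="kernel"/>
</manifest>
"""

def list_projects(manifest_xml):
    """Return (project name, remote name, full fetch URL) per project."""
    root = ET.fromstring(manifest_xml)
    remotes = {r.get("name"): r.get("fetch") for r in root.findall("remote")}
    default = root.find("default").get("remote")
    out = []
    for p in root.findall("project"):
        remote = p.get("remote", default)   # fall back to <default remote=...>
        out.append((p.get("name"), remote, remotes[remote] + p.get("name")))
    return out

for name, remote, url in list_projects(MANIFEST):
    print(remote, url)
```

This mirrors what the talk describes: the remote name selects the retrieval strategy, and the project name hangs off that remote's base URL.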
Chromium OS uses the Omaha protocol as well, which I'll be getting to later. But there are other forces at work here — this is something we have started to hear an increasing amount about, so I'm happy to have a forum for it. Using kind of a tree model: you've got CoreOS, where we're doing our own thing; you've got Chromium OS, doing their own thing; and upstream of here and there is the main distribution work. We heavily, heavily, heavily use Gentoo: we use the same portage trees, and we maintain all of our own packages. We consume some Gentoo packages directly, because the Gentoo ebuilds then reference upstream sources and tarballs. So it's this weird situation: are we pulling source code from Gentoo? No. Are we pulling the equivalent of spec files from Gentoo? Many of them, yes. Do we maintain a bunch of those and patches ourselves? Yes. And we keep all of those facing the public. So we're headed back to this question of why this is interesting if we're just Gentoo with work on top. Well, we're not a Gentoo fork — in as much as Red Hat is not a fork of Fedora. There are a lot of individuals in that community who do really important work, and we're happy to be able to stand on their shoulders. But if we're just doing ebuilds, then let's talk a little bit more about what CoreOS is doing that makes a difference. We have a read-only /usr. That means everything in /usr is immutable: you cannot just go in, remount it read-write, and start changing things — and we've found some interesting things that you can do with the GPT partition table to lock that down. The idea is that you're going to be able to really force the userland into a state where you can always attest to it. We're doing these atomic updates to the entire operating system, but on top of that we're also building in a key-value store and some things for containerization. So it really comes down to this: if you are trying to run an application on CoreOS,
chances are it should be in some form of a container — we keep things as slimmed down as possible. Now, there's a bunch of talks going on in the virtualization track tomorrow that I highly recommend folks see: there's a talk on Docker, and Jonathan Boulle from the CoreOS team is talking about Rocket. If you're not familiar with what's going on in containerization, it's a really fascinating realm. But the biggest takeaway is that we are not a general-purpose distribution. We get lots of folks going: "cool, a new distro, I installed it — how do I run WordPress on it?" Well, you're going to have a hard time just doing the apt-get install httpd that you think you can, and there's no GCC, so you're not going to compile anything directly either — so, good luck with that. What it means is that we are comfortable leaving behind the general-purpose in favor of being a little bit opinionated, and this opinionation comes into the update model as well. If you've got this read-only /usr, where you can't actually change the overall content — you can't just remount it read-write — you have to have some mechanism for guaranteeing that you can update everything. So you've got to use our partitioning scheme, and I'll show you and explain why you have to do that. Now, you can bring additional disks — if you're going to build off of sda: we own sda; all the other areas, go nuts, that's great, by all means do that. But the way that we then make this happen is interesting. You have this partition table in the GPT, and the most interesting things here for you are going to be that we have two copies of the /usr partition, and that we define new UUIDs for the partition type. We're also using metadata on the GPT to start tracking a priority for which one of these is preferred, the number of times that we have tried to boot that partition, and the
number of times that it's been successful — which, when you're doing atomic updates, becomes important: what happens if I push an update and it fails? You have to have some recovery path. So, given that we have these defined UUIDs: if your UUID is this one, you are a CoreOS /usr A partition, and if your UUID matches that other string, you are a CoreOS /usr B partition. This is the same kind of methodology that Windows uses with the GPT, or that Mac OS uses with the GPT, to be able to identify their partitions among the sea of partition possibilities. And again, that metadata allows us introspection into what has been happening with the host. And... there it goes — LibreOffice has died on me.
I'll have to track down the LibreOffice guys to file a bug report here. But seriously, I'm told 2015 is the year of Linux on the desktop.
Of course, we've been hearing that for a decade. Anyway.
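The priority / tries / successful bookkeeping just described can be sketched as a small Python model. This is only an illustration of the selection rule, not the real bootloader code: on disk these fields live in per-partition GPT attribute bits, and the field names here are assumptions.

```python
# Sketch of the A/B /usr selection rule described above, modeled on the
# Chromium OS / CoreOS GPT attribute scheme (priority, tries-remaining,
# successful-boot flag). Names are illustrative; on a real disk these are
# GPT attribute bits read by the bootloader.
from dataclasses import dataclass

@dataclass
class UsrPartition:
    name: str
    priority: int      # higher is preferred
    tries_left: int    # boot attempts remaining before giving up
    successful: bool   # set once the OS booted and reported success

def pick_boot_partition(parts):
    """Pick a /usr partition the way a GRUB module might: highest priority
    wins, but only if it is known-good or still has attempts left."""
    candidates = [p for p in parts if p.successful or p.tries_left > 0]
    if not candidates:
        raise RuntimeError("no bootable /usr partition")
    best = max(candidates, key=lambda p: p.priority)
    if not best.successful:
        best.tries_left -= 1   # consume one attempt on an unproven update
    return best

# A freshly staged update on USR-B outranks the known-good USR-A...
a = UsrPartition("USR-A", priority=1, tries_left=0, successful=True)
b = UsrPartition("USR-B", priority=2, tries_left=1, successful=False)
print(pick_boot_partition([a, b]).name)   # USR-B gets its one chance
# ...but once its tries are exhausted without success, we fall back.
print(pick_boot_partition([a, b]).name)   # USR-A again
```

The point of the metadata is exactly this: the selection can be made from the disk alone, with no network and no working userland.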
So, where we left off: we were talking about how we've got these partitions actually sitting on disk. And we've got this SDK — which, by the way, let me get bounced over to the correct screen.
First, you end up with a chroot, and you end up doing all of your builds actually inside of it. You are compiling everything from scratch for your architecture, and, exactly as promised, we're using ebuilds. So, if I wanted to go through and actually make some change: I can go through and say, show all of my repo branches. It's a little hard to see, but I have this one called master, which is tracking the overall upstream, and then I've got this thing here — I'm doing some patches to our toolbox, so I created this branch for toolbox — and this is a branch that spans all of the repositories that repo has tracked. So if this particular fix needs to touch something in toolbox, needs to touch something in GRUB, and needs to touch something in Docker in order to work, we can make all of those changes in the individual spots. I can't really get too deep into submodules and everything else here, but repo is pretty easy to use: if you understand git commit, and you can kind of wrap your mind around branches, you know enough to be dangerous with it and can start making changes. The other side of it is that there are three basic stages once you get into cros_sdk. One, you have to build your packages: that's going to compile all the individual sources — in this case into binary packages specific for a machine. Two, we're going to build an image out of that, because, as I showed, we have this mechanism where you have a partition table that sits on a machine, and the concept of an image is just assembling everything into a specific flavor of what that overall image looks like. And three, that flavor: it might be a QEMU image, it might be an OpenStack image, it could be a raw disk image, or it could be assembled into an ISO so that it will really be read-only. To do those, you will just go into the directory for all of the cros scripts, and I'm going to
show you where you can find the full, detailed, step-by-step walkthrough for doing every one of these things. Actually, I can just do that right now. So, if you
go to coreos.com/docs/sdk — hmm, OK, I don't actually have network — well, that page actually has
four or five different complete sections on building production images, building development images, setting up the SDK from scratch, and tips and tricks — things like shortcuts showing you all the important partition-type UUIDs for the GPT. So you end up in a state where you just use this to say: I want to build all of my packages, and I want to do this for a production image, or a development image, or a scratch build, and it will just go through, make sure it has reviewed all the sources that repo has sucked down, and build everything individually. After you have done your build_packages, you would do a build_image. Now, I did a build_image here a little while ago — see, it actually just goes through and does an emerge of udev and D-Bus and all of these things directly into this disk image that it built up above. So it lays out the actual partitions and tells you what it's doing. A recent change that we made: we have abandoned btrfs. We had a good run, and we contributed what we could, but we needed to get back to ext4 for the time being, because at the moment we now have overlayfs to be able to use within containerization systems. From there, we go through and start emerging the binaries that we built previously into that disk partition table, and — as highlighted here — we go through and remove some of the firmware that users are never going to need; we go through and set all of our dm-verity hashes inside of the actual image, to make sure that you can actually verify and trust the content that is there; and then we go through and generate some of our upstream tooling information, so that we can publish things like a version text file. This is really nice, because we publish this into all of our repositories to make it easier both to develop around CoreOS and to write tooling: you can always depend on these basic text files being there, and you can start sucking them in and using them as environment variables inside scripts to
dynamically build everything else. So we've gone through and created this production image — a coreos_production_image.bin — and then we're going to convert it to a virtual machine image using image_to_vm. This now takes that disk image and imprints the idea of who it is on top: this is the process of saying, now you are OpenStack; you are a Hyper-V image; you are the ISO. So we do that — in this case I just built a QEMU image — and it even generates a shell script so that you can spin it up real fast, so I can show it behind here; it'll even do the full process over here. So, after I've done that, in my output directory: build, then amd64-usr, and then we have — hopefully this is the brand-new one that I built this morning... no, that one was from the 31st... but again, I just tore it down and rebuilt; this one is built from scratch. And my authorized_keys — it's going to inject that into the image using file-system sharing: it uses Plan 9 (9p) to share part of my host directory into the VM. And yes — that's one of those tricks where, if you're running CentOS 6, you're running a 2.6.32 kernel with things backported into it, so 9p is not something that the majority of users running Red Hat have; if you're on my Fedora, or I think RHEL 7, that'd be fine — RHEL 7 is a 3.10 kernel — and we're on 3.18.
4 right now, I believe. Because, as most folks know, the kernel doesn't break userspace — especially if you're running your content inside a container. (I just — my eyes are off — I see that you're Red Hat.) So, especially when you're getting into containerization, all the containers are going to share that same kernel. I've done a bunch of proof-of-concepts where I took an old Cobalt RaQ 3 — it's still within the Intel architecture, so it can use an Intel kernel — and took these programs that were written for a 2.2 kernel and ran them on a 3.17 kernel at the time. But this is the whole "building an image" thing: you can follow the directions on the website; that's not that interesting. It's our version of taking all of these things, kind of building a chroot, and doing a yum install into a tree — except that the output of it is an entire disk image; we don't give users the option to just pick and choose components from within. So, back two slides... the second half of everything... if the presentation would only come up... and now I've started closing all my terminal windows... but, yeah, OK.
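The version file mentioned a moment ago can be consumed by tooling with a few lines of Python. A minimal sketch: the shell-style KEY=value layout matches what the talk describes, but the exact key names and values here are assumptions, not the published file.

```python
# The build publishes a small shell-style KEY=value file (a version text
# file) that tooling can source. A minimal parser; key names below are
# assumptions based on the talk, not a spec.
VERSION_TXT = """\
COREOS_BUILD=575
COREOS_BRANCH=0
COREOS_PATCH=0
COREOS_VERSION=575.0.0
"""

def parse_version_file(text):
    """Turn KEY=value lines into a dict, skipping blanks and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key] = value
    return env

env = parse_version_file(VERSION_TXT)
print(env["COREOS_VERSION"])
```

Because the file is plain KEY=value, the same file can be sourced directly by shell scripts — which is the point the talk makes about depending on these basic text files.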
There we go — Omaha — and we're back. OK.
So, let's get to talking about the Omaha protocol. This is how we actually push updates out around the world. The Omaha protocol was
created by Google specifically to handle updates to the Chrome browser and Google Earth. It was something where they needed a secure update mechanism to handle touching all kinds of pieces of software. Within Omaha, you're providing an update to an application, and it doesn't matter what that application is: a client for the protocol is supposed to handle whether it's an update to a browser or an update to an entire operating system. So, one of the things that we did is write open-source bindings for it in Go, which you'll
hear about again a bit later. But to talk about Omaha, let's first look at what the design goals were. There are
two very simple ones. One: they needed to provide a more efficient alternative to SSL as far as actually securing the content. And two: they wanted to use HTTP as a transport, because it is very easy to pass through firewalls and proxies — if the curl libraries, or wget, are on a box, you can set environment variables that will allow pretty much any process to navigate through those proxies. And
equally important are the design non-goals.
These are going to be things like: you do not want to rely on SSL. So, when we said they wanted an alternative to SSL — SSL primarily gives you two things: assurance and privacy. The privacy comes from encrypting content; the assurance comes from guaranteeing that a user is allowed to run something, or that the content has come from the individual you think it's come from. Well, when it comes to privacy, encryption matters if it's not content that is just publicly sitting on the internet. Now, I would love to fill the NSA's deep-packet-inspection systems with encrypted but publicly available content — it's hundreds of megs — but it's a burden: there's CPU overhead for the customers running all of this, which, admittedly, we just push off to a CDN. As far as the assurance level: we found — and Google found — that it's a lot better to just rely on GPG itself. The mechanism for handling that assurance is that you hash all of your content, you sign the content, you sign the hashes themselves, and now you've got a mechanism where you can guarantee that content was not modified in transit. So you at least know that it's gotten down to you in a state where no one has touched it. And since we are pushing this entire operating-system image, we can bake the public keys for that trust into the OS. So now the OS itself has all of the key IDs for GPG to be able to validate that the updates are actually coming from CoreOS. And because we have this mechanism where we push everything down, and you cannot modify the content on that /usr partition, it means you can have a much higher degree of assurance that everything has been pushed in the way that you thought.
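A rough sketch of the "hash the content, sign the hashes" scheme in Python. The GPG step is deliberately stubbed out here: `signature_valid` stands in for a successful `gpg --verify` of the signed hash manifest against the keys baked into the image. Only the hash check is shown for real; the file name and manifest shape are assumptions.

```python
# Sketch of the assurance scheme described above: content is hashed, and
# the hash manifest is what gets GPG-signed. Verifying a payload then
# means (1) trusting the manifest via its signature, (2) comparing hashes.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_payload(payload: bytes, signed_manifest: dict,
                   signature_valid: bool) -> bool:
    """signed_manifest maps filename -> expected sha256. signature_valid
    stands in for a real GPG verification of the manifest's signature
    against the public keys shipped inside the OS image."""
    if not signature_valid:          # the manifest itself must be trusted first
        return False
    return sha256_hex(payload) == signed_manifest["update.bin"]

payload = b"entire /usr image goes here"
manifest = {"update.bin": sha256_hex(payload)}
print(verify_payload(payload, manifest, signature_valid=True))    # True
print(verify_payload(b"tampered", manifest, signature_valid=True))  # False
```

Note that nothing here requires a confidential channel: the payload can travel over plain HTTP from an untrusted mirror, which is exactly the design goal stated above.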
The other idea that Omaha really stresses is keeping your updates fresh. So the idea
is that the client is polling for updates, and as soon as the updates are there, the client pulls them down — the client is in control of this. More importantly, because of how it uses a semantic versioning system, the whole idea is that you can keep a user from being susceptible to downgrade attacks. You always want to make sure that a user either moves forward to a newer image or falls back to the same image that they're already on — never goes back in time. That's how you avoid things like getting hit anew by GHOST, or Heartbleed, or Shellshock — but I will try to stay on topic. So, as I said, the whole idea here is to avoid rewind or replay attacks.
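The "never go back in time" rule can be sketched as a trivial version comparison. This illustrates the policy only — it is not the actual update-engine logic, and real Omaha clients use a richer version grammar than plain dotted integers.

```python
# Downgrade protection as described: accept an offered version only if it
# moves the client forward (or keeps it where it is).
def parse_version(v: str):
    """'575.0.0' -> (575, 0, 0); tuples compare element-wise."""
    return tuple(int(x) for x in v.split("."))

def acceptable(current: str, offered: str) -> bool:
    """Never go back in time: reject anything older than what we run."""
    return parse_version(offered) >= parse_version(current)

print(acceptable("575.0.0", "584.0.0"))  # normal forward update -> True
print(acceptable("575.0.0", "575.0.0"))  # same image is fine -> True
print(acceptable("575.0.0", "522.4.0"))  # rewind/replay attempt -> False
```

With this check in the client, a captured-and-replayed old (vulnerable) update response is simply ignored, which is the attack the talk is describing.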
So, what it begins looking like with this atomic update is that a
client will send an application ID to the update server in an XML block: it says, hi, I'm here, I'm running version 5.44. The server responds back saying: oh, 5.44 is a little bit out of date; the current version is 5.75; here is an RSA signature over all of the content; here is the date stamp of the file; and here's a location on a CDN that you can pull it down from. The idea is that you want to be able to pull this out of band: you want to be able to push the content into something that can handle the distribution for you — a generic HTTP transit system — and just give the client the URL where they can pull it down from. You're giving them enough metadata and signed content to be able to validate, from that untrusted location, that the content is good. You're able to work through things in a way where you don't have to worry about, as I said, content being modified in transit — who cares if it's coming down from S3, or even a local nginx install. The client then downloads the data, verifies the hashes and the cryptographic signature, and applies the update. Now, remember that earlier, in the partition table, we had very specific geometries in place: we actually go through and just completely overwrite that portion of the block device to be able to push the update down. The updater then exits with a response code and reports that to the update server. So what's happening here is that you're getting telemetry back and forth: information saying, step one, here's the version I'm running; step two, I am asking for an update; step three, I'm retrieving the update. What this telemetry does is allow someone running the Omaha server to get an overall picture of the ecosystem as a whole — this is macro data, not micro data. So you're seeing things like: I see that 20,000 machines are running version 5.44; I'm about to push version 5.75; and you can watch that number change over time.
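The exchange just described might look like the following sketch, assembled with Python's standard library. The element names (`request`, `app`, `updatecheck`, `urls`, `manifest`) follow the public Omaha v3 protocol, but the app ID, version numbers, and URL are made up for illustration.

```python
# A minimal Omaha-style update check. Element names follow the public
# Omaha v3 protocol; ids, versions, and URLs are illustrative only.
import xml.etree.ElementTree as ET

def build_request(appid: str, version: str) -> str:
    """The client side: 'hi, I'm app X at version Y, anything for me?'"""
    req = ET.Element("request", protocol="3.0")
    app = ET.SubElement(req, "app", appid=appid, version=version)
    ET.SubElement(app, "updatecheck")
    return ET.tostring(req, encoding="unicode")

# What a server reply might look like: a newer version plus a CDN URL the
# client fetches the signed payload from, out of band.
RESPONSE = """\
<response protocol="3.0">
  <app appid="{00000000-0000-0000-0000-000000000001}" status="ok">
    <updatecheck status="ok">
      <urls><url codebase="https://update.example.com/575.0.0/"/></urls>
      <manifest version="575.0.0"/>
    </updatecheck>
  </app>
</response>
"""

def parse_response(xml_text: str):
    """Return (offered version, payload base URL), or None on 'noupdate'."""
    root = ET.fromstring(xml_text)
    uc = root.find("app/updatecheck")
    if uc is None or uc.get("status") != "ok":
        return None
    return (uc.find("manifest").get("version"),
            uc.find("urls/url").get("codebase"))

print(build_request("{00000000-0000-0000-0000-000000000001}", "544.0.0"))
print(parse_response(RESPONSE))
```

Note what the response does not contain: the payload itself. The server only hands out metadata and a URL, which is why an ordinary HTTP CDN can do the heavy lifting.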
And you can rate limit. Since it is this polling model, where the client is asking for an update, if the update servers are down it's not a problem — this is similar to Satellite servers in the Red Hat world: if your Satellite server goes down, it's not like any of your infrastructure breaks; the failure case is that you're just not receiving updates. It also allows for rate limiting, because now what we can do is similar to a mobile provider pushing out over-the-air updates to Android: you can rate limit in such a way where you say, OK, we're not letting more than N machines update every M minutes. So we will allow one machine to update every five minutes to start, and you can watch that trickle process, and the telemetry that's coming back tells you either: OK, the system has gone through a full, successful update, it's rebooted, and it's running the new version; or: the system ran through that update, there was a problem, and it had to revert back to the old version — because, using that A/B partition scheme, you can pivot between them. I've got slides about that next. But this is important because, as much as you want to be able to guarantee that every update is successful, that's never truly going to be the case. So, to talk about this: in our image, we started out with the OS installed to the /usr A partition, and we've got this giant data partition which resizes to the end of the geometry of the disk. That is how you deal with things like cloud providers that allow you to dynamically allocate the disk; it also means that if you have administrators who are used to running in a VM, where they just go in and randomly resize disks to fix things, this will handle that automatically for them and stretch out to the end of the disk. When a new update comes down from the update server, it is staged into a temp file; it's untrusted — we download it, we
verify the crypto on it, and, assuming that everything is good, we apply it to the B partition. If that is good — say right now we're running off the B partition — we're able to update the metadata on the disk, and we know which partition we should be running from. In the unlikely event that there is a problem and the update fails, we have a GRUB module which reads that metadata on the disk and can revert back to the partition we were running from before — which we know is good. And the only thing that we're working with — the only thing changing — is the userland. So it puts you into a state where you're always doing fast-forward updates, and the worst-case scenario is that you are running the version that you were already on.
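Putting the pieces together, the metadata transitions around an update can be sketched like this. Field names are illustrative: on CoreOS the equivalents are GPT attribute flags manipulated by the update engine and read by the GRUB module, not Python dicts.

```python
# Sketch of the metadata transitions around an update, as described:
# stage the verified image onto the inactive slot, give it top priority
# with a single boot attempt, then either mark it good or let the
# bootloader's fallback logic land you back on the known-good slot.
def stage_update(active: dict, inactive: dict, crypto_ok: bool) -> bool:
    """Write the verified image to the inactive slot and prefer it."""
    if not crypto_ok:
        return False                      # bad signature/hash: do nothing
    inactive.update(priority=active["priority"] + 1,
                    tries_left=1, successful=False)
    return True

def on_boot_outcome(part: dict, booted_ok: bool) -> None:
    """After reboot: a good boot is recorded; a bad one burns the last
    attempt, so the next boot falls back to the other, still-'successful',
    slot."""
    if booted_ok:
        part["successful"] = True
    else:
        part["tries_left"] = 0

usr_a = {"priority": 1, "tries_left": 0, "successful": True}
usr_b = {"priority": 0, "tries_left": 0, "successful": False}

stage_update(usr_a, usr_b, crypto_ok=True)   # B now preferred, one try
on_boot_outcome(usr_b, booted_ok=False)      # update blew up on boot
# usr_b has no tries left and was never successful, so the GRUB-side
# selection falls back to usr_a — the "worst case is the version you
# were already on" property from the talk.
print(usr_a, usr_b)
```

The key design point is that every transition is a small metadata write, so a power cut at any moment leaves the machine with at least one bootable, known-good /usr.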
It seems that I had packed examples of these in — not showing the actual Omaha content — and it looks like my graphics program has crashed and did
not recover, so I need to pull them up from a different one. While I'm doing that, I'll go ahead and start by asking: do folks have questions? Yes, in the center there. [Question from the audience, inaudible.] Let me repeat the question. The question was: given an attack like Thunderstrike, which was announced at CCC, where you are actually relying on a chain of trust, are there potential gaps in that chain of trust? Because the public key is embedded in the actual image, if there's some way that you can do a binary modification to the underlying content — effectively injecting your own values in there so that you can force a new update in place — well, a couple of things about that. What that is going to allow you to do is change the state of the update, and at that point you're going to have to change a lot of other things too, like where the updates are coming from. So I maintain that, over time, the most likely scenario when you do that is that the box is going to crash on a reboot. There is the potential for success with that, but I want to be completely clear that this does not completely mitigate against it: if you have a highly motivated attacker, they will be able to get into it — it's just a different attack scheme versus getting into a host and injecting your own GPG key. [Follow-up from the audience.] OK, so there's a gap — I misunderstood you there. The actual install and update scripts contain the entire public key, so let me actually enumerate for you the entire process you would
have to about 1 get previous access to the system and you're going to have to modify the contents of the disk directly once you have modified the contents of the disk directly to inject your own additional key you're also going to have to either do a DNS spoof to get it to go to a different location where you have content that's been signed by your different private key up or you're going to also have to modify the update location in the configuration on desperate on because 1 of the things that we do allow do is that you run your own private uptake server this is for the bike capital e enterprise customers that can just accept public updates so they need to slow things down so they have and in some instances of our update server just sitting behind a firewall on so what the attack actually looks like in practice is similar to you jailbreaking like an iPhone right something where you have to get find some other vulnerability that allows you to get into the underlying block device and modified directly in place 2 then install a permanent chain of factor so there's isn't another question Genopole assured the generative Lucia here and then I with glasses so start with German publisher that's not on I yes we do we I thought this was the so if you a set in invalid set of flags on the partition table it forces the into a lock state where it just goes I don't know what to do here you know we're we're working on a permanent set of patches where it's ratified into it out but it's just the party trick that we found at this point that allows for that to occur case so next up up front here so that the goal of this 1 the all right you can yes it would that's why we started putting Indian Bharati as well you think all of the no we do not do incremental updates everything is an entire atomic disk image so then there is a gentleman with classes there and then none of the other gentleman with the glasses and Bush's itself the yep the and so that you I guess so because we are are 
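The private update server mentioned a moment ago is normally selected through the host's update configuration. As a sketch — the file path and key names here are recalled from CoreOS documentation and should be verified against it, and the server URL is a made-up example:

```ini
# /etc/coreos/update.conf (sketch; path and key names assumed)
GROUP=stable
SERVER=https://updates.internal.example.com/v1/update/
```

Pointing `SERVER` at an in-house endpoint behind a firewall is how the enterprise deployments described above slow updates down without taking the public stream directly.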
Next, the gentleman with glasses there, and then the other gentleman with glasses.

[Question about pinning build inputs, inaudible]

Because we're using Gentoo ebuilds, within the ebuild you can specify the exact git commit you want to use. It doesn't have to be a tagged version; you can just say, "I want this exact commit." I was doing a bunch of work after a really tiny open-source conference in the US where they handed out MinnowBoard Maxes, and that's exactly how I figured out the exact drivers we'd need to support the MinnowBoard Max as an embedded platform: I pulled a different kernel from upstream, specified the exact commit I wanted for that newer kernel, and used a custom kernel configuration to add in all the additional drivers.

[Question, inaudible]

Yes, it is. So — we do not have docs that mention explicitly removing content; I think we have some material on adding additional content. I'll also say, real fast: most of our developers are in #coreos on freenode, so I'll throw Crawford under the bus and he can answer that. OK, the gentleman in the blue sweater.

[Question: is a reboot required for updates?]

Yes, a reboot is required. This gets into how opinionated we are about the types of workloads you should run. CoreOS is really geared toward situations where you tear down and rebuild machines all the time — where your application is stateless across a number of hosts so you can do a rolling reboot. One of the things we have to help with that, built on etcd, is a tool called locksmith, which gives you a semaphore for reboots across an entire cluster. You can guarantee that no more than a given number of machines go down for service at a time; the administrator can define that number, and by default it's one. Some of our users have five thousand machines or more, and rebooting one at a time just isn't an acceptable time frame — it would take way too long.

[Question about booting entirely from RAM, inaudible]

A lot of our users actually do that. MemSQL, for example, does all their QA testing by loading the image via PXE and running it entirely from resident RAM.
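The locksmith behavior described above — a cluster-wide semaphore that caps concurrent reboots — can be sketched in miniature. The real tool holds the counter in etcd and updates it with atomic compare-and-swap; this stand-in uses a local file purely to illustrate the counting logic, so none of these function names are locksmith's actual interface:

```shell
# Toy model of a reboot semaphore. A file stands in for the etcd key;
# locksmith itself does this with atomic compare-and-swap against etcd.
sem_file=$(mktemp)
echo 1 > "$sem_file"        # max machines allowed down at once (default 1)

take_lock() {
  count=$(cat "$sem_file")
  if [ "$count" -gt 0 ]; then
    echo $((count - 1)) > "$sem_file"   # claim a slot before rebooting
    return 0
  fi
  return 1                              # no slots free: wait, don't reboot
}

release_lock() {
  echo $(( $(cat "$sem_file") + 1 )) > "$sem_file"   # give the slot back after boot
}

take_lock && echo "host A may reboot"
take_lock || echo "host B must wait"
release_lock                            # host A came back up
take_lock && echo "host B may reboot now"
```

With the counter raised above one by the administrator, several machines can reboot concurrently — which is what makes rolling updates practical on a five-thousand-machine fleet.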
MemSQL's update model is: they have a process that watches our repos, and when a new update is published they just wget it down and reboot the machine; it picks up the new configuration from PXE and runs. There are some docs on that. Rackspace also uses it with the OpenStack project Ironic — CoreOS is the basis for it. If I recall, J. Faulkner on the Rackspace team was part of a group building a system called Teeth: it's CoreOS running from resident RAM with an embedded Ironic agent that calls home to the OpenStack APIs, so you can use the Ironic APIs to deploy a physical machine the same way you'd use the Nova APIs to deploy a virtual machine.

And, OK — I was looking at that code a little while ago and was a little confused, because coming from Red Hat, IPA to me means Identity, Policy, Audit, so I was sitting there reading it all wrong. But anyway: yes, it's licensed Apache 2, and it's written in Go. That's one of the things about CoreOS: we do not ship any interpreted languages in the underlying userland, with the exception of bash. You don't get Python, Ruby, Node, Perl — none of that. That mattered for things like cloud-init, because we use cloud-configs very, very heavily. One of the major pain points I saw — again, back at Red Hat I was on a consulting team — was a customer with a whole pile of Kickstarts for all their physical machines; as soon as they wanted to move to a cloud system, they had to rewrite all of that as cloud-configs. So what we did was make cloud-config the same configuration manifest everywhere: on bare metal, on a bare-metal install to disk, over PXE, and on the cloud platforms. On bare-metal machines you provide it as a kernel boot option: you say cloud-config-url= and give it an endpoint, and it retrieves that and uses it as the manifest for the run. That's how MemSQL does all their configuration for QA testing — they just have an Apache server sitting there with those YAML definitions. Now, what that means is that we had to rewrite cloud-init in Go, and we have to maintain a tracking implementation of someone else's spec in a different language, which on the one hand is really annoying — and that's putting it lightly. But it has also been interesting, because we have a lot of folks coming to us saying, "I'm working on an embedded platform; how can I strip this Go binary down even further so I can run it there and use cloud-init to configure the platform without bringing all of Python along just to support that?"

[Question about config drives, inaudible]

Yes, and we have everything configurable there. What we do is look for a disk named config-2, and through the wonders of systemd we auto-mount it based on that name; then we have units that look for the presence of a file at that exact path and apply it. So yes, that's exactly what we did.
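The kernel-parameter flow just described can be illustrated with a small sketch that pulls `cloud-config-url=` out of a kernel command line — roughly what the early-boot tooling has to do. The parsing here is illustrative, not CoreOS's actual implementation, and the URL is a made-up example:

```shell
# Illustrative only: extract cloud-config-url= from a kernel command line.
# On a real host the string would come from /proc/cmdline.
cmdline="console=ttyS0 coreos.autologin cloud-config-url=http://pxe.example.com/qa.yml"

url=""
for arg in $cmdline; do
  case "$arg" in
    cloud-config-url=*) url="${arg#cloud-config-url=}" ;;
  esac
done

echo "would fetch manifest from: $url"
```

The endpoint named there serves a plain YAML cloud-config, which is exactly the "Apache server with YAML definitions" setup described for the MemSQL QA case.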
We're getting dangerously close to time — I think we have time for one more question.

[Question about old images, inaudible]

So, we have all of the images we've ever created published on our content repository, going back to the beginning — they're all up there. As for the other part: it's not something I think we've considered, but it's not necessarily a bad idea either; the only issue becomes modifying those images after the fact to add some kind of crash flag.

[Follow-up, inaudible]

Yes — that's part of the Omaha protocol and how it pushes updates. If by "staged" you mean I logged into a machine, wget'd down an old image, and ran coreos-install to force a target against it: you can force it into place, but it's non-trivial. It's definitely an absolute disaster-recovery mechanism; it doesn't happen in any kind of automated fashion. You have to have access to the system, have root on the system, and escalate through a whole series of processes.

[Question, inaudible]

Reasonably, the answer to that is — ah, we're out of time. I'll talk to you offline about that. Thank you so much for your time. We have a table over in the AW building if folks want to ask questions about this, or Rocket, or any of the other stuff we're working on — don't be afraid of me. Thank you.
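To make the manual-rollback answer above concrete, here is a hedged sketch. The release-server URL layout is an assumption from memory, the version number is invented, and the `coreos-install` invocation in the comment is indicative only — check the current docs before relying on any of it:

```shell
# Hypothetical helper that builds an archive URL for a given channel/version.
# The host name and path layout here are assumptions, not a documented scheme.
image_url() {
  channel="$1"; version="$2"
  echo "https://${channel}.release.core-os.net/amd64-usr/${version}/coreos_production_image.bin.bz2"
}

image_url stable 494.1.0

# A manual rollback would then look roughly like (run as root, at your own risk):
#   wget "$(image_url stable 494.1.0)"
#   coreos-install -d /dev/sda ...   # exact flags vary; see coreos-install --help
```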

Metadata

Formal Metadata

Title How CoreOS is Built, Modified, and Updated
Alternative Title Distributions - How CoreOS is built, modified, and updated
Series Title FOSDEM 2015
Author Harrington, Brian
License CC Attribution 2.0 Belgium:
You may use, modify, reproduce, distribute, and make the work or its content publicly available, in unaltered or altered form, for any legal purpose, provided you credit the author/rights holder in the manner they have specified.
DOI 10.5446/34345
Publisher FOSDEM VZW
Publication Year 2016
Language English
Production Year 2015

Content Metadata

Subject Area Computer Science