
Skynet your Infrastructure with QUADS


Transcript
[Greeting in several languages.] My language skills are terrible, as you can tell, and I apologize if I've offended anybody in the audience just now. So, my name's Will Foster, I'm a DevOps engineer at Red Hat, and I'll be talking about an exciting new Python-based framework called QUADS that we've developed in-house to solve some of the problems that we have, and I'm just going to get right into it.
So before I can explain what QUADS is and how we built this sort of framework to solve some of our infrastructure and automation problems, I want to explain what I do at Red Hat on a very small team of two people. There aren't enough car analogies on the internet; in fact, a car analogy has never been used for open source, it's never happened, so I'm going to use a car analogy here to explain what I do and what my colleague does on the DevOps side. Think of high-performance computer servers as race cars: very high performance, the latest Intel and AMD chipsets. High-performance networks would be the race tracks, and the races that run on these tracks would be the performance and scale testing of the various open source products that Red Hat develops, and also upstream things like OpenStack, OpenShift, and Kubernetes, the different types of technologies that we want to test at a very large scale. The race car drivers, the people driving the servers that race on these fast 40-gig and 100-gig tracks, are the performance and scale engineers, and that's a pretty cool analogy to have: if someone asks what they do, they get to say they're the race car drivers. My colleague and I, on the DevOps side, are the pit crew and the track engineers, and our goal is to make as many races happen, all the time, as efficiently as possible, without any wrecks or explosions (and those do happen, which I'll get to). This tool, QUADS, helps us automate the entire thing, including writing the documentation for us, configuring the VLANs on Juniper and Cisco switches, and the full life cycle of provisioning bare-metal servers: spinning them up, passing them to an engineering group for product and scale testing, and then spinning them down when they're done. So that was both a terrible analogy and a very simplified one; this is the Reader's Digest version.
Basically, we manage 300 or so high-performance servers and switches in a large infrastructure, and this infrastructure accommodates parallel product testing. It's comprised of isolated sets of machines (we refer to them as "clouds", because we're not very creative) for different workloads that happen simultaneously. With QUADS we have basically automated our entire jobs; we've automated ourselves out of a job. Instead of spending our time being the network engineers or the systems folks who have to deploy servers, we've automated all of this with Python, and we spend the time on actually improving the automation. So what is QUADS, and what isn't it? What sort of things does it do and not do? Well, it's not an installer or a provisioning system; it bridges several interchangeable tools together. I mentioned Foreman, because that is the back-end provisioning vehicle that we use, but we designed QUADS in a way that if you have an existing provisioning system or workflow that you're used to, you can plug that into QUADS, and QUADS will simply call out to your provisioning system to do things like re-kicking machines, re-provisioning machines, or pushing image-based deployments across a lot of servers. It also helps us with the boring things, the things that maybe you want to do once or twice because they're exciting but you never want to do again. I love network engineering, I love connectivity, I like switches and firewalls, but I don't want to do that for a living; I would rather automation do it for me, because it makes a lot fewer mistakes and is strictly better at it. Basically, our goal is to build a system that orchestrates and builds all the other systems, so we can spend our time maintaining that automation framework and waste as little time as possible being hardware people or network people, because it gets boring when you do just one thing.
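The pluggable idea described above, where QUADS only decides when hosts change hands and delegates the actual re-provisioning to whatever backend you register, can be sketched roughly like this (the class and function names are illustrative, not the actual QUADS API):

```python
# Sketch of a pluggable provisioning call-out: the scheduler only decides
# *when* hosts move; the actual re-provisioning is delegated to whatever
# backend is registered. Names are illustrative, not the real QUADS API.

def foreman_backend(host, cloud):
    # A real backend would call out to Foreman (or your own tool) here.
    return f"foreman: rebuild {host} into {cloud}"

class Mover:
    def __init__(self):
        self.backends = {}

    def register(self, name, fn):
        self.backends[name] = fn

    def move_host(self, backend, host, cloud):
        # Look up and invoke the registered provisioning backend.
        return self.backends[backend](host, cloud)

mover = Mover()
mover.register("foreman", foreman_backend)
print(mover.move_host("foreman", "c01-h01", "cloud02"))
# -> foreman: rebuild c01-h01 into cloud02
```

In the real tool the registered backend shells out to Foreman (or your own system); the point is that the scheduler and the provisioner stay decoupled.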
So what does this look like from a high level? We drive everything with YAML. The top-level idea is that for every asset in our infrastructure we have a YAML-based schedule that tells it what it's supposed to be doing, from what start date to what end date, and what isolated workgroup it's assigned to. This way we can programmatically schedule things in the future. In an ideal world you would know the development schedule of all of your engineering groups; you would know that in November team X is going to be releasing project Y, and team Z is going to be doing the same thing, and if you knew what people's needs were ahead of time, you could schedule compute and network resources in advance. The reality is that none of that ever happens like you would like it to: deadlines shift, there are different hold-ups and blockers in different projects, and how you perceive the world will ideally be is never how it actually turns out. So we baked a little bit of resilience into how we schedule things in the future.

A little more detail about how QUADS actually manages this programmatic, YAML-driven scheduling and provisioning: we set up the YAML schedule for server assets, and then we automate basically the entire life cycle of a set of machines from beginning to end. Say on a Sunday at 22:00 you're going to be receiving 200 servers in what we call a "cloud", a specific VLAN configuration that we support. Your machines would automatically stand up, they would re-provision, the tooling would go out to each of the switches and configure the VLANs so you have one isolated environment separate from the other engineering groups, a Foreman service account would be created, you would have your IPMI credentials to get into the machines out-of-band, and then our documentation would be automatically generated to reflect the current state of what your machines are doing and who is using all the assets inside the environment.
So how do we use this internally at Red Hat? We have a large R&D environment called the Scale Lab, and this is where we test all of our products: Red Hat Enterprise Linux, OpenStack, oVirt, Satellite, I don't even know how many products we test there. It's a very special place, because all the hardware in it is genuinely high-performance scale hardware; we use 100-gig networking across the board and it's all very high-end servers. It's not a place for you to run a small development test. It's a place where, if you're doing development work and you hit what could potentially be a scale issue with any part of the application stack, and you're able to reproduce the issue at a smaller scale, you go to run it at a very large scale. That way we can anticipate how customers using our software would fare in the real world, and ideally we identify issues before our customers do, though again, that's not always the case. In our Scale Lab we have about 300 servers and 40 to 50 high-performance Juniper switches, and right now we run about 16 to 20 different isolated scale and performance workloads on these systems, for up to four weeks at a time. QUADS helps us stand these machines up, hand them over to the appropriate people on a short-term lease, and then spin them back down again.
Let me give an example, and let's look at a picture, because everyone likes pictures. This is an example of some of the scheduling that we do automatically, from February to May of this year. You can see how we have very efficiently done back-to-back scheduling of all the machines in the environment: four to five parallel running workloads, testing different products, different scales, different aspects of different products, all of it scheduled in advance and happening automatically. As a two-person team with a lot of infrastructure, we never have to waste time manually setting any of this up, nor would you. And here's an example of some of the metrics that we've gotten out of the lab; I just picked this one at random, it's a storage workload. These are some of the results that come out of the Scale Lab, and QUADS is what empowers us to do this sort of work. So we've talked a little bit about the time savings and efficiency, but I want to drill down more into what problems we're actually solving, besides the obvious.
The first one is server hugging. Does anyone know what server hugging is? OK, so server hugging is the idea that if you give someone a resource, they're going to hold on to it as long as possible until you pull it back from them. Developers are very bad at this, and usually there's more demand for resources than there are actual resources to give people. If you're lucky, your manager is particularly savvy and can fight for a budget for dev and test hardware, and he's better at it than the other managers, so you're going to have more gear. But the sad reality is that there's never enough bare-metal hardware: unless you're Facebook or Google, you're not going to have heaps of bare-metal, high-end server hardware to run your code and test against any time you want. Server hugging is this tendency for people to hold onto things tenaciously. It's a natural human thing: you're given something, you want to use it, and you become protective of it. It's a pet, it's yours. But it's not really yours; you share it with other people. By having automated scheduling of server and network resources, you sort of force people to be more efficient in their planning, you force them to maximize the time they have on a set of hardware, and you can save a whole lot of money and time by doing it this way.
What are the other things that we're solving? There's less human error, which is good, with more automation, and to a certain extent you can hand control over to the machines, because, you know, what's the worst that could happen? Well, the problem is that bugs in QUADS can, and have, caused trouble; there are obviously some rough edges in our software as it evolves. The idea is that you automate as much as possible, but the biggest area of devastation is on the network side: it takes one typo configuring the wrong port to take a machine completely offline, or just cause general chaos. So simply automating the network administration is a huge boon. The downside is that when automation fails, it normally fails in glorious ways; it's a slow-motion train wreck when your automation actually fails, because it's so efficient at doing something, and if that something has errors, it's going to be catastrophic. So Dave Watson, if you're watching this, I'm really sorry about your 50 machines that got eaten by our network bug; it's fixed now. What else are we solving here? We also don't want to waste power on idle machines: electricity is expensive, carbon footprint is always an issue, and we don't want a pile of machines sitting powered on with nothing to run. So we power machines off when they're not in use, and only when they have an active schedule in the YAML config do they come alive and participate in some sort of automated workload. Sometimes they don't wake up like you'd like them to, but that's the double-edged sword of automating things. Next, we want to
solve the scheduling problem, and we do that with short-term reservations: if you request, say, 100 machines through QUADS and you get assigned 100 machines in a particular VLAN configuration for your development or your testing at scale, the maximum you can keep them is four weeks. We only have so many servers, and we have almost a month of queued-up wait time to get at those resources. We want to be more like an Airbnb and less like a flophouse: Airbnb has very defined guidelines, you can't stay longer than your booking, and for the most part it's uniform, you know what you're going to get. Maybe not always, but it's a little more polished and professional than, you know, a flophouse. And the last thing that we really save here, and this was the impetus for us to first automate our jobs and then keep working on the automation, is the time savings and the cost savings. We did some back-of-the-envelope math, using 100 machines as an example: if 100 machines changed hands tomorrow and went from one development group to a set of developers with a specific scale or performance problem they're trying to fix, what would be the cost and time involved if someone did that manually? Now, granted, in 2017 I hope no one is doing all of this by hand; I hope people aren't inserting an ISO into a server somewhere, or sitting on an SSH console on a switch. Maybe some people are. But assuming you did everything manually, it would take roughly 90 hours of work to provision 100 servers and pass them off. For our current two-person team, that would be about 45 hours apiece, over a week of work. If we tripled our team, it would be about 15 hours each, two working days, and if we had a 12-person team we could maybe get it done in a day. Instead, QUADS does all of this on Sunday, when most people aren't working, and automates the entire thing in a span of two or three hours. So when Monday morning rolls around, the machines have already been handed off, notifications have already been sent to the users, they already have their own private credentials to access the machines, and the clock starts ticking on the reservation. I'm not going to drill into this slide, but this is how we came up with these figures, and they're pretty conservative estimates.
Alright, so how does QUADS actually do all of this? We've talked about the problems we're solving with it and the level of automation and efficiency we're able to get, but how does it actually look on the back end? I have made some rather grotesque topographical images for you; this is not going to win any website awards. We all remember Milton from Office Space: let's say he's your typical scale engineer, and he needs hardware. So this is the QUADS architecture at a very high level. We now have a JSON API in front of it, and generally speaking there's also a daemon and a CLI you can interact with. At its most basic construct, there is a YAML schedule that is constantly modified by Python (I'll give you an example of that a couple of slides later), and this is at the heart of how things are automated, in the present and in the future. There are also some provisioning elements; if we want to do any graphing we have hooks into collectd and Grafana, and we can send results to Elasticsearch as well, but that's ancillary stuff that we set up after the fact, automated of course. Lastly, the consumable here, besides the actual machines, is the documentation. At any point, anyone can look inside this QUADS-managed environment and see what all the machines are doing, what utilization looks like, who has the machines, what they're working on, and for how long. When you provide transparency like this it's easier for people: they don't have to ask "hey, have you got any spare machines?" or "I have this project a month from now, what's the schedule going to look like?". All of this is already published and available to anyone who wants to look at it. And there are tie-ins to the actual provisioning, which is the next slide.
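The YAML schedule at the heart of the system can be approximated with a small lookup like the following; the data layout, hostnames, and dates here are illustrative, not the real QUADS on-disk format:

```python
from datetime import datetime

# Illustrative schedule: each host carries (start, end, cloud) entries.
schedule = {
    "c01-h01.example.com": [
        {"start": "2017-06-28 22:00", "end": "2017-07-06 22:00", "cloud": "cloud02"},
        {"start": "2017-07-06 22:00", "end": "2017-08-03 22:00", "cloud": "cloud05"},
    ],
}

def current_cloud(host, now):
    """Return the cloud a host is assigned to at `now`, or None if idle."""
    fmt = "%Y-%m-%d %H:%M"
    for entry in schedule.get(host, []):
        start = datetime.strptime(entry["start"], fmt)
        end = datetime.strptime(entry["end"], fmt)
        if start <= now < end:
            return entry["cloud"]
    return None

print(current_cloud("c01-h01.example.com", datetime(2017, 7, 1)))  # -> cloud02
```

The same lookup, run against a future date, is what makes scheduling in advance possible: the daemon just asks this question for every host on every pass.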
There's a plug-in point in QUADS, basically an open-ended command that we call move-hosts, just a simple --move-hosts option, and this is where you tie in your provisioning system. However you run the life cycle of re-provisioning an operating system or laying down an image, whatever your method is, you plug it into the move-host command. You define this in QUADS and it can be whatever you want; in our case we just use Foreman, because that's one of the tools we enjoy using and it saves us time. The quads move-hosts command basically spits out, if a schedule is due right now for a set of machines to change hands and go to another environment with another VLAN configuration, something like "this server is moving from one cloud environment to another", and on the back end Foreman does the heavy lifting for us. This is the provisioning workflow: we tie Foreman in to add and remove role-based access for the hosts, we change the IPMI passwords so that users are isolated in their own environment, we do a full provision of the operating system, we lay down any post-configuration, and then we actually move the VLANs on the physical switches, depending on which VLAN design is in play. And lastly, I think most importantly, there's automated network validation. We don't want to pass off a set of machines to people when they're not ready to use, or when something is wrong with them. So we run automated validation checks: if any one of the machines doesn't pass network validation, we get notified, and it continues to check at intervals until we fix the problem. Then it finally passes validation, and the consumers of the hardware, of the isolated environment, get notified through a couple of different mechanisms, usually IRC or email.
So we talked about the YAML schedule for hosts; this is what it looks like. This is the basic construction of how the YAML drives current and future scheduling of machines and networks. We have a defined schedule here, and there's a very simple command, --ls-schedule, that lists all of the past, current, and future schedules that QUADS knows about for a particular host. At the very bottom here, number 5, would be the current allocation: something that started June 28 and ends on the 6th of July for that host. We keep a historical record of this in the YAML file, because it drives the documentation and also the visualizations that we generate. Now I want to talk about documentation. I just love writing documentation. In fact, no, nobody really likes writing documentation, but it's one of those things that is so critical to any sort of project or endeavor, and it's also usually the most lacking aspect. So we decided that one of the pillars of the QUADS framework would be to automate all the things that we either don't want to do or are going to screw up at some point. Even out-of-date documentation is still better than no documentation, but it's still terrible. So our goal was to have
absolutely up-to-date, by-the-minute documentation. The way we do that is we query our provisioning source, in this case Foreman (it could be anything else; it could be Ansible facts, for example), and then we query QUADS, because it knows about the past, current, and future schedule of everything in our infrastructure. We render that into markdown format, and then we use an XML-RPC Python library to push it up to a wiki. This is continually updated: any time there's a change in an environment, any time bare-metal servers are added or removed, the infrastructure documentation gets updated. In this case we use WordPress, which has a nice API for this, but it could easily be MediaWiki or anything that supports some programmatic input of markdown. And it doesn't even have to be markdown; that's just what we use.
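A minimal sketch of that documentation pipeline, rendering host facts to markdown and stubbing out the wiki push, might look like this (the field names, and the XML-RPC method in the comment, are assumptions, since the exact calls depend on the wiki):

```python
# Render host facts to a markdown table, then hand it to a (stubbed)
# wiki-push step. Field names are illustrative.
hosts = [
    {"hostname": "c01-h01.example.com", "serial": "ABC123", "ip": "10.1.1.11", "workload": "cloud02"},
    {"hostname": "c01-h02.example.com", "serial": "ABC124", "ip": "10.1.1.12", "workload": "cloud02"},
]

def render_markdown(hosts):
    lines = ["| Hostname | Serial | IP | Workload |", "|---|---|---|---|"]
    for h in hosts:
        lines.append(f"| {h['hostname']} | {h['serial']} | {h['ip']} | {h['workload']} |")
    return "\n".join(lines)

def push_to_wiki(url, page_id, markdown):
    # The real pipeline uses Python's xmlrpc.client, roughly:
    #   proxy = xmlrpc.client.ServerProxy(url)
    #   proxy.wp.editPost(0, user, password, page_id, {"post_content": markdown})
    # (method names depend on the wiki backend; treat them as assumptions).
    return len(markdown) > 0  # stub: pretend the push succeeded

md = render_markdown(hosts)
print(md.splitlines()[0])  # -> | Hostname | Serial | IP | Workload |
```

Because the render step is pure data-in, text-out, swapping WordPress for MediaWiki only means replacing the push function.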
So this is an example of what it actually generates. This is the Foreman-sourced infrastructure documentation for a set of machines, and it's continually regenerated over time. We have the typical things you would expect: hostname, serial number, MAC address, IP address, and a link to the out-of-band console. What's different from something that someone edits by hand is that on the right-hand side we have the workload: if you click on that workload link, you drill down to exactly what that set of machines is doing, what, say, cloud06 is doing, and we know Brian is the owner of that set of machines. This is an older image, but the graph link would redirect you to a Grafana dashboard that has all the historical bandwidth and throughput of all the interfaces per machine, which is useful. And again, the serial numbers are real servers, but they're out of support, so you're not going to gain anything by grabbing them from my talk, though you can certainly pay the bill for us if you feel so inclined.
Along with the general structural layout of the documentation, we drill down into assignments, what each machine is doing right now. This is just a snapshot taken a few months back of various internal testing of products: we see a lot of OpenShift stuff in here, elements of OpenStack, and some software-defined networking running, things of that nature. You can drill down further into the workload and you'll see how long it has had the assignment, how long it's going to run, and what the remaining time is. Again, this just gives people an added level of transparency. There's no more black box, no "what's going on with this server here?". It's very clear: it's a service available, I can request it, and I get this nice automatically generated rundown. And then we also
tag faulty machines. If there's something wrong with the hardware, we simply assign a key-value pair of faulty to it, and it goes into the spare pool, the faulty pool, that our local lab people can take a look at and fix.
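The faulty tagging can be pictured as a simple filter over a key-value store of hosts; everything here is illustrative:

```python
# Faulty hosts are just a key-value tag; tagged hosts drop out of the
# schedulable pool into a spare/faulty pool for the lab folks to fix.
hosts = {
    "c01-h01.example.com": {"faulty": False},
    "c01-h02.example.com": {"faulty": True},   # e.g. a failed disk
    "c01-h03.example.com": {"faulty": False},
}

def schedulable(hosts):
    return sorted(h for h, tags in hosts.items() if not tags["faulty"])

def faulty_pool(hosts):
    return sorted(h for h, tags in hosts.items() if tags["faulty"])

print(schedulable(hosts))  # the two healthy hosts
print(faulty_pool(hosts))  # the one awaiting repair
```

Clearing the tag after repair puts the host straight back into the schedulable pool, which matches the shuffle-a-spare-in workflow described later in the Q&A.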
On top of the documentation we have visualizations as well. We generate a calendar, so at any point you can see what kinds of tests are running inside the Scale Lab. And lastly we have a heat map visualization. It doesn't look like it's going to win any website awards either; it looks like the old Windows 95 defrag program, if you remember that, with the big blue grid and the colors changing. But it's incredibly useful from a scheduling perspective, because it's generated three months in advance, six months in advance, and we can very quickly see what's available on a given day of the month, or at a longer view, and then use that to schedule spare servers for people on request. Again, this is all automatically generated for you.
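A toy version of that defrag-style heat map, one row per host and one column per day, could be built like this (the booking data is made up):

```python
# One row per host, one column per day: '.' is free, '#' is booked.
bookings = {
    "c01-h01": [(1, 5)],   # booked days 1-4 (start inclusive, end exclusive)
    "c01-h02": [(3, 9)],
    "c01-h03": [],         # free the whole window
}

def heatmap(bookings, days=10):
    rows = []
    for host in sorted(bookings):
        row = "".join(
            "#" if any(s <= day < e for s, e in bookings[host]) else "."
            for day in range(1, days + 1)
        )
        rows.append(f"{host} {row}")
    return "\n".join(rows)

print(heatmap(bookings))
# c01-h01 ####......
# c01-h02 ..######..
# c01-h03 ..........
```

Scanning a column top to bottom answers "what's free on day N?", which is exactly the question the real visualization exists to answer.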
cool so as you see earlier we definitely need more testing and CI is very important so what we do have testing is not good enough but it's it's getting there but we used here for a code review and then we use Jenkins for BCI and were were working on now kind of a fully instantiated virtual sandbox using of the switch and some other stuff to emulate the switch ports right now we're using flaky reason shall check for some of the the shall kind of glue that we have project and we need to but probably get proper testing so we're getting there quads is about 11 months old it's been running our R&D environment for about 8 of those called so what's
working right now this long list of stuff that I'm not gonna read to you now this is available in all the documentation that we have if you're curious you can you can ask me after the talk but it's gonna do a whole lot of stuff for it where we working on this is even more important so what are some things that we have planned to introduce with courts about we the major thing right now is we will introduce the idea of like a configure and what that would do is it's 1 thing to provision the networks the storage the server for people and hand them off and documented and then reclaimed them when they're done but you probably 1 or the other stuff on top of it so we have a lot of folks testing OpenStack for example and you might get you know say 50 servers to do an OpenStack deployment and then you're testing a specific part of the exact 1 and you could easily burn 1 or 2 days getting up is that pointing getting just right so we wanna offer the option of kind of an open-ended model that whether you're laying down an Infrastructure-as-a-Service other software or you wanna lay down some cover contain orchestration on your host we we want offer the option that that is also automatically done for you so when developers come in on Monday or Tuesday or whatever they want work yeah they all they don't only have the servers their documented that they have the credentials and ready to go but they also have any ancillary software stacks they need to test on top of already set up for and it's again it's about saving time insufficient possible were also working on a flask web interface for quads to have enabled some self-service scheduling so if you're no developer Jane Doe and you want to schedule yourself 100 machines for a week and the available we can go fast interfacing request them in a week's time when everyone a start your machine social for you if you have 1st so that would be really cool feature that but we just put in place the Jason API so that's been pretty useful but 
we haven't quite were not using internally but it does work pretty well that's been a kind of paved the way for the flask interface work-energy Winslow but moving blocks that way but we just got and placed on Internet validation that talked about and we also want to support more resource backends decides I like to have an interval back and that all you have to do is run interval gets a set a host you yield all of the facts from discovery and then that is what formulates the information that's in the wiki that automatically generated it so again you know the the overall theme of of quads of of this kind of loose framework to we put together is being as efficient as possible with very the the few resources that you do have and you you kind of see this so in our case in in this is really constructive any sort of company that's larger than you know 20 or 30 people if you have here no assets were so you have machines and status in a rented resources is that over time development groups tend to silo the resources so you know like department a is going to have their servers and they're going to be entirely different making model than department being there going to be bought at a different time and the depreciation dates can be different maybe the part where profiles differ so a company ends up spending a lot of money maintaining these little silo areas and pockets of infrastructure and the idea behind wives is that you put everything in 1 giant buckets and then you let wives do all the provisioning all of all of the the scheduling and take care of the whole thing for you no obviously there's down sides of automated scheduling and provision and that sometimes people's deadlines slip or sometimes they have something come up and take vacation or the k is our so we we've built and things into by animal and playing was very good at this and that we simply just need to modify the schedule and then quads framework does the right thing from provisioning perspective the but again 
this is a lot of very little tools little little small things to do 1 thing and do 1 thing well Source sticking to the Unix kiss Percival if you will that we just keep it simple and selfishly from kind of the Dev Ops operations side is that we don't want to do the same thing more than once if effects can be automated we want automate the crap out if it's boring or we messed up a lot we want automated and if it's something that's just we just don't wanna do obviously machines can probably do better for us so that was kind of the drive to initially get this thing so the parts of quads might be more useful than the sum of its parts some people like to use just on the documentation for example of the of the scheduling aspect might be useful there's parts of the framework that you could consume yourself because it's not all tied together it's very much of so I also got some extra in the quads as well there's some ongoing collaboration with doing with Boston University and MIT but in the Massachusetts open cloud so work can emerging parts of quads with fair scheduler that they've written called hill h i l and we've had some kind of large public companies show interest in using it for the dentist environments for so that's basically it but thanks for coming to my talk at a family time left that could open up for questions the ways got some few in if you wanna read more everything's open source its own get help and we certainly welcome patches because were just not that good Python but we won't be so I welcome for way gives a shout and look at the code yes you use and you see framework of for the testing of the cluster for or for whatever doesn't make major taste so may as native uh I mean if you some of them also called a long term use as such as the scenario but I don't know what yeah but it's it's really it's really a memory now for and bare-metal servers and kind scheduling provisioning and resources based on all of future date it for example what it was like to 1 of 2 
modes of doesn't start and the stage of a lot of time just focus on you know go and these by hand the but if you're talking about OpenStack specifically here for middle-class like in use case you like the notes so we took station hot and final time table spot opponents 1 I don't know I don't have to kind of think about it in university in so I mean you can wear on any sort of post provision automation that you like in you know orchestrate that yourself the it's designed in a way that we don't want to dictate the use case for you but we just we we find great utility and efficiency and having have an open-ended framework and if the certain parts that you can reuse it's designed in a way that there's inputs for that 1 so we ship like a configuration file that on just a key value pair them if you want a different wiki for example or or if you don't care about the provisioning aspect you could only do the scheduling or you could only use the documentation part yeah I can give a talk on the maintenance of the of the week scheduled and provisions services weapons when they call like the users of the service and then 2 of them gold like we have an analogy in terms of their endowments their servers but that policy that has I far from the area with his right and then all the rank of time hold that agreement it and also in the 2nd 1 is set the reason that the sum was also things at Bergen always break and there's there's you can you can build then on any level of validation of pre provision validation but we found when you're doing very large deployments like is several hundred servers at there's going to be always be 1 or 2 stragglers where you know maybe they don't pick C. 
[Audience] Can you talk about the maintenance of the scheduling and provisioning services? The users of a service like this have their own environments and servers, and things always break. [Will] Right, things always break. You can build in any level of pre-provision validation, but we found that when you're doing very large deployments, like several hundred servers, there are always going to be one or two stragglers: maybe they don't PXE boot correctly into kickstart, maybe there's a failed disk that gets picked up by the monitoring system. So the best you can do is bake in as much automated validation as you can, and right now that's only on the network side; we don't really do it on the system side. Generally we have enough servers that if there is a serious hardware failure, we can easily shuffle that machine out of the pile, throw another one in to take its place, and sort it out later. That's why we flag the faulty servers in the generated documentation. I don't know a good complete solution, because you're fighting against so many factors at that point inside the data center: there's the network layer, there are power issues, anything you can think of can go wrong at large scale. So we build in redundancy instead, keeping at least one or two spares of each hardware type that we can quickly shuffle in, and deal with the failure later, rather than trying to build an extremely long, tenuous validation pipeline. But we can always do better on the system-side validation, so that's a great question. [Audience] I'm wondering about the network side and how you deal with connecting up the devices. Do you have any issues with vendor interoperability, like needing to buy specific hardware from certain vendors? [Will] We have a pretty heterogeneous environment. We have a lot of Supermicro machines and a lot of Dell machines, and they're not made equal when it comes to out-of-band interfaces and things like that, so we do have to do some one-off things. It's even more apparent on the network side: you interface with Junos completely differently than you interface with IOS, so we have per-vendor tooling.
That tooling goes out and changes the VLANs, and everything is driven from a flat file structure: there's one file per server, with several fields, and it maps, say, an Ethernet interface to a MAC address, and there's a field for the switch vendor. So if we were dealing with a Cisco switch rather than a Juniper switch, we'd make sure the vendor field actually says Cisco, and if it doesn't, the tooling knows to use the other path that does the equivalent work. It's not elegant, but it gets the job done. Ideally, and this is more of a near-future thing, Ansible is working on network orchestration done at an abstraction layer. You would have Ansible drive all of your switch changes: you tell it, I don't care what the vendor is, I want the VLAN for this interface to go from this to this, and if it's already in its desired state it does nothing, so it's idempotent. You don't need to care about the vendor semantics; the abstraction in Ansible takes care of that. I don't know how mature that is, I think it's very actively being worked on, and it's what we're going to move to, but right now we maintain per-vendor network automation scripts. Luckily, right now we're almost all Juniper across the board, so that makes it easy. Any more questions? Feel free to find me after the conference, and if you're wearing a Terminator mask, please don't attack me. Thank you for your time, and I appreciate you coming to the talk.
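The per-server flat file and vendor dispatch described above could be sketched roughly like this in Python. The file layout, field names, and tool interface are assumptions for illustration, not the actual QUADS code:

```python
def parse_host_file(path):
    """Parse a per-server flat file of 'key: value' fields.

    Hypothetical layout: one file per server, e.g.
        em1: 52:54:00:aa:bb:cc
        switch_vendor: Juniper
    """
    fields = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Split on the first colon only, so MAC addresses survive intact.
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def change_vlan(host_fields, interface, vlan, vendor_tools):
    """Dispatch a VLAN change to vendor-specific tooling.

    vendor_tools maps a lowercase vendor name to an object with
    current_vlan(mac) and set_vlan(mac, vlan) methods (assumed interface).
    """
    vendor = host_fields.get("switch_vendor", "").lower()
    if vendor not in vendor_tools:
        raise ValueError("no tooling for switch vendor: %r" % vendor)
    tool = vendor_tools[vendor]
    mac = host_fields[interface]
    # Idempotence: if the port is already on the desired VLAN, do nothing.
    if tool.current_vlan(mac) == vlan:
        return
    tool.set_vlan(mac, vlan)
```

A real Juniper tool would drive Junos while a Cisco one drives IOS, but the dispatch on the vendor field and the desired-state check stay the same, which is essentially what the Ansible abstraction layer promises to absorb.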

Metadata

Formal Metadata

Title: Skynet your Infrastructure with QUADS
Series Title: EuroPython 2017
Author: Foster, Will
License: CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify, and reproduce the work or its content in unmodified or modified form for any legal, non-commercial purpose, and distribute and make it publicly available, provided that you credit the author/rights holder in the manner they specify and pass on the work or content, including in modified form, only under the terms of this license
DOI: 10.5446/33814
Publisher: EuroPython
Publication Year: 2017
Language: English

Content Metadata

Subject Area: Computer Science
Abstract: Skynet your Infrastructure with QUADS [EuroPython 2017 - Talk - 2017-07-11 - Anfiteatro 1] [Rimini, Italy] The very small 2-person DevOps team within Red Hat Performance/Scale Engineering has developed a set of Open Source Python-based systems and network automation provisioning tools designed to end-to-end automate the provisioning of large-scale systems and network switches using tools like Foreman, Ansible, and other Open Source bits. QUADS – or “quick and dirty scheduler” – allows a normally overburdened DevOps warrior to fully automate large swaths of systems and network devices based on a schedule, even set systems provisioning to fire off in the future, so they can focus on important things like Netflix and popcorn, or not reading your emails while your datacenter burns in an inferno of rapid, automated skynet provisioning. QUADS will also auto-generate up-to-date infrastructure documentation, track scheduling, systems assignments and more. In this talk we’ll show you how we’re using QUADS (backed by Foreman) to empower rapid, meaningful performance and scale testing of Red Hat products and technologies. While QUADS is a new project and under constant development, the design approach to handling large-scale systems provisioning as well as the current codebase is consumable for others interested in improving the efficiency and level of automation within their infrastructure.
