End-to-End Django on Kubernetes

Transcript
Hi everybody. As has been stated — and this may come as a surprise to you — I am not Josh. The more I thought about it, the more I realized you could easily be confused, because Josh's name is on the program. We're similar in appearance, we both like Django, PostgreSQL, and Kubernetes, we even wear the same glasses. But there are some differences. Over here on the left we've got the average user's level of knowledge; I know a little bit more than the average user, and Josh knows a whole lot more. Now you might ask yourself what the hell this has to do with anything about the talk, but there is one final, important difference that does pertain to it: my back works pretty okay, and Josh's is not so much.

Which is why I'm here to talk to you about Django on Kubernetes. You see, Josh managed to hurt his back making these lovely speaker gifts for all of the DjangoCon attendees — he's an excellent potter as well as an open source guy — and last week he hurt his back finishing them up, so he wasn't able to make it. The DjangoCon team was like, "Hey, any chance you want to give a talk on Tuesday?" and I said yes. So this is not going to be my most polished talk ever.
But on to the topic that you're all here for: Kubernetes. Kubernetes is arguably the best and most popular container orchestration system out there today. That's not to say it won't get replaced in a year by something cooler and better, but right now it's what everybody is playing with, for the most part. Before we dive too deeply into Kubernetes, though, we have to get through some terminology.
So, the name Kubernetes: what does it mean? Is it (1) Greek for "ship captain"; (2) a sign that Google learned its lesson about naming things, after Go; or (3) all of the above? If you picked 3, you were correct — it is Greek for "ship captain," and I do think they named it that because they had so much trouble with the name "Go."
Today most everybody is looking at, or moving toward, containers in some form or fashion. Containers are great for packaging all of the dependencies for an app into a thing that we can move around and share, but working with them on their own is not exactly user-friendly. Having to remember which ports am I using, what volumes do I mount, and where — that stuff gets complicated over time, and that's why container orchestration systems exist. Some examples, for those of you not familiar: Docker Compose is really a container orchestration system, and Docker Swarm, Amazon's EC2 Container Service, and Kubernetes all handle orchestrating these containers that we've taken to using. So what's orchestration?
It's the event loop used by Kubernetes components to reconcile the state of local machines with the desired cluster state. What does that mean for us? We basically tell Kubernetes, "this is how I would like the world to look," and Kubernetes sits there, spins in a loop, and tries to make that happen. It can't always do it, but it will keep trying to make it happen.
The model — this is a diagram I reused from the documentation — is a control loop, which in robotics is the main process sitting there going: am I standing up? Am I about to fall over? Do I need to back up? What's happening right now? That's basically what Kubernetes is constantly doing: is everything that's supposed to be running, running? Can everything that's supposed to be talking to each other actually talk to each other? Now, I am not a liar, so I won't tell you Kubernetes is super easy to learn — we're definitely not going to learn all of it in 40 minutes today. It's a big, complicated system.
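To make the control-loop idea concrete, here is a rough, purely illustrative Python sketch of the reconcile pattern — this is not Kubernetes code, just the shape of the loop it runs:

```python
import time

def get_desired_state():
    # What the user asked for (e.g. "3 replicas of this container").
    return {"web": 3}

def get_actual_state():
    # What is really running right now, discovered from the nodes.
    return {"web": 2}

def reconcile(desired, actual):
    # Take the smallest action that moves actual toward desired.
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            print(f"starting {want - have} more copies of {name}")
        elif have > want:
            print(f"stopping {have - want} extra copies of {name}")

# The control loop: compare and correct, forever.
while True:
    reconcile(get_desired_state(), get_actual_state())
    time.sleep(5)
```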
It can look big and scary out there, but my goal today is to change your impression of it from this [scary diagram] to this [friendlier one].
When I started playing with Kubernetes, the terminology is what tripped me up. There are a bunch of new terms and a whole bunch of concepts — most of them things we've all dealt with before, but the way they're talked about can be confusing. So most of today will be coming to grips with what these different terms mean; then we'll piece it all together and have a working Django app at the end.
(Sorry, the slides move slowly when there's a big image.) Kubernetes has a concept of masters and nodes. The masters are where all the Kubernetes magic happens — deciding what should run where — and the nodes are where your containers actually run. You can have different numbers of each; it's not a one-to-one relationship. This would be a simple cluster with three masters and three workers; a more real-world scenario would be something like this:
Inside an AWS region, you have a master per availability zone — those three are clustered together — and then some number of worker nodes, also spread across the availability zones. Underneath, this is really just an etcd cluster with an API over top of it. When you say "I want you to run this set of containers," Kubernetes puts that data into etcd, which gets replicated between all the masters, and then kubelets — little daemons that run on the nodes — are constantly watching that and asking: am I running what I should be running? Is there something out there that needs to be running that isn't? Where should we run it?
One of the things that trips people up is authenticating to your cluster. Almost everything happens through a config file in your home directory, effectively known as your kubeconfig. There are multiple authentication schemes — Kubernetes is fairly pluggable, though not as pluggable and easy to use as Django's authentication backends — but out of the box you tend to have one user with a password and one set of SSL keys to talk to the cluster, and you share that amongst multiple people. That's the default configuration. It feels wrong and messy, and frankly it is kind of wrong, but it works. As for the other systems: you can authenticate against Google, so if you use Google Apps you can give just certain people access to the cluster, and there are other schemes you can employ that I won't go too far into. What's important is that you have a kubeconfig and that it is properly set up, pointing at your cluster. You can have multiple entries in it, so you can switch between multiple clusters and choose which one you're talking to.

You can also access your cluster by proxy. If you run `kubectl proxy`, it looks at your kubeconfig for the cluster you're currently pointing at — I have five or six clusters in my kubeconfig, so I pick which one I'm going to be playing with at a given moment — and then I'm able to access the API on the actual masters. And if your cluster is running the Kubernetes dashboard, you can access it the same way. [Fiddles with demo.] That's not going to let me in... there we go — the dashboard portion of the evening. The dashboard is a really decent web interface to the cluster as a whole: you can see all of the various components, see all the containers, make edits to the configuration, see resource utilization across your nodes — which ones have high CPU or high memory usage — and get a nice dashboard view of the state of your cluster. All of that information comes from the API, which is the exact same information you get from the command-line tools and that you can also access from Python and Go.

One of the ways you keep things sane in a Kubernetes cluster is by using namespaces. A namespace in Kubernetes is a fence between sets of containers: pods in a namespace can only be easily accessed by other containers in that same namespace. You can use it as a light mechanism for multi-tenancy, and you can also use namespaces as a lightweight way of separating dev from stage from production inside the same cluster. It's not as hard and fast a wall as true multi-tenant separation, though — your containers could still end up talking to each other. It's a balsa-wood fence, not an impenetrable wall.

In Kubernetes, you define resources using YAML. The little snippet at the top there is all the YAML you need to create a namespace called revsys-rocks. You create it in the cluster by running `kubectl apply` pointed at the file you created, and it will come back and say "I've created this namespace" — or, if it already exists, "I've configured this namespace," because it was already configured. You can reapply the same files: all you're doing is adding state to the system, and if the state is the same, nothing changes; if there's new state, action gets taken.
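The namespace manifest on the slide isn't captured in the transcript, but it would be something like this minimal sketch, assuming the name revsys-rocks:

```yaml
# namespace.yaml -- the entire definition of a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: revsys-rocks
```

Applied with `kubectl apply -f namespace.yaml`.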
Next: deployments. Deployments are the template for how you'd like the world to look. You say, "I want to run this particular container, with these environment variables, and I want three copies of it." This one is kind of hard to read, size-wise, but you can see we declare a kind of Deployment, we say we want two replicas, and the template is this particular container with its port opened up inside the cluster. Just like with the namespace, we run the exact same apply command to put it into the cluster.

Then there are Services. Services in Kubernetes are what we already think of as a service.
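The deployment slide isn't in the transcript either; reconstructed from the description, it would look roughly like this (the image name and port 8000 are assumptions for illustration):

```yaml
# deployment.yaml -- two replicas of a Django container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: revsys-rocks
  namespace: revsys-rocks
spec:
  replicas: 2
  selector:
    matchLabels:
      app: revsys-rocks
  template:
    metadata:
      labels:
        app: revsys-rocks
    spec:
      containers:
      - name: revsys-rocks
        image: revsys/revsys-rocks:latest    # hypothetical image name
        ports:
        - containerPort: 8000                # assumed Django/gunicorn port
        env:
        - name: DJANGO_SETTINGS_MODULE       # illustrative environment variable
          value: config.settings.production
```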
I've just created a Django app running in those pods, but now I need to tell the rest of the cluster that this web service exists, and I define that like this: we create a Service. Notice all of these are in the same namespace, revsys-rocks — I tend to use the same name for the namespace, the app, the service, and everything, just to keep myself sane. You can name them differently; I know one of my coworkers, Stephen, would probably call this service "http" or "wsgi," where I would call it the name of the actual service, because I'm thinking of it as the website. There's no hard and fast rule here. All we're doing is saying: hey, for this service, open up port 80, and it needs to go to that container port.
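Here is what that service would look like as a manifest — again a reconstruction, reusing the names and ports assumed above:

```yaml
# service.yaml -- make the pods reachable inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: revsys-rocks
  namespace: revsys-rocks
spec:
  selector:
    app: revsys-rocks   # route to pods carrying this label
  ports:
  - port: 80            # the port other things in the cluster use
    targetPort: 8000    # the container port (assumed earlier)
```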
Kubernetes also has a concept called an ingress controller. So far, everything we've done is available inside our cluster to other things running in the cluster, but it is not available to the outside world in any way, shape, or form — opening port 80 on that service did not open port 80 on an external IP address. Ingress controllers map outside-world things to inside-the-cluster things. Depending on where you host, what kind of ingress controller you use changes: on AWS it would use an ELB as the ingress controller, and it manages what points where. When you say "I want one of these," it goes out and creates one, and you can start pointing DNS at it — you don't have to go configure it yourself.

Which leads to a quick aside. We use a controller called kube-lego, which handles everything about Let's Encrypt certificates. You install it in your cluster, and then in these YAML definitions you can use what are called annotations — basically just keys in the YAML that Kubernetes itself is not particularly looking for. A controller is just something listening to the API, watching for state changes and taking some action; the default Kubernetes ones handle things like "I need to be running this container over on this node," but you can create your own annotations and take other actions. In this case, somebody created a system to handle Let's Encrypt certificates: you say "hey, I want a Let's Encrypt certificate for this particular host," and it goes out and registers it, hijacks the .well-known location, handles all the key management, stores the certificate inside the cluster as a secret, and presents it to the world as a Let's Encrypt SSL connection from then on. You literally have to write just a couple of lines of config.

So this is the ingress definition, and you'll see we have a similar kind of pattern — name, namespace — and then there are the rules. There's a host: the domain here is actually revsys.rocks, one of the new top-level domains, but it could be a .com or whatever. Then there's the little part I want to highlight, the kube-lego part: we just have annotations saying, hey, I want TLS via ACME, I'm using the nginx ingress controller, the host should be revsys.rocks, and I want you to store the certificate as a secret named revsys-rocks-tls. I didn't do anything else at all: when it comes up, it gets a cert; it manages renewal; and I don't have to deal with any of that on a per-application or even per-container basis. I could throw a couple of Rails projects and a Go project into this cluster, and they'd get Let's Encrypt certificates totally independently of whatever I'm doing in my container.
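Pieced together from that description, the ingress would look something like this — the annotation keys are the ones kube-lego documented at the time, and the API version matches that era of Kubernetes:

```yaml
# ingress.yaml -- route outside traffic in, with kube-lego managing TLS
apiVersion: extensions/v1beta1            # the Ingress API group circa Kubernetes 1.7
kind: Ingress
metadata:
  name: revsys-rocks
  namespace: revsys-rocks
  annotations:
    kubernetes.io/tls-acme: "true"        # ask kube-lego for a Let's Encrypt cert
    kubernetes.io/ingress.class: "nginx"  # use the nginx ingress controller
spec:
  tls:
  - hosts:
    - revsys.rocks
    secretName: revsys-rocks-tls          # where the certificate gets stored
  rules:
  - host: revsys.rocks
    http:
      paths:
      - path: /
        backend:
          serviceName: revsys-rocks
          servicePort: 80
```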
One of the last pieces of terminology is the pod. When we create deployments, the containers in a deployment form a pod: a set of containers that are deployed together on a host. If a pod has four containers, all four of them are going to be deployed on host A; if for some reason it can't deploy on host A, or host A dies, they will all be picked up and run together on host B. They always travel as a set — like a pod of whales. This is useful for lots of scenarios. In all the examples I have today we're only using one single container, so the difference doesn't become particularly apparent, but if you need additional containers that only talk to each other and not necessarily the outside world, this can be very efficient. You can do things like share a UNIX socket on the host — something you can only do because you can guarantee the containers are running on the same host. So if you wanted one Django container paired with one memcached instance, you could have them talk over a UNIX domain socket instead of a TCP socket and get a bit better performance, and you can only do that because you know they're always running on the same host.

So, a high-level view of Kubernetes: the masters run the API and store the cluster state; the nodes run pods, which provide services inside the cluster; and ingress controllers map the outside world to the inside world. Say we have three worker nodes, with some stuff running on host A and some stuff running on host B. If I shoot host A in the head — just terminate the instance — the masters are going to go: "Wait a second, the pods that were scheduled on host A are no longer running there, because we can no longer see host A. I need to schedule them somewhere they fit. OK, host C is pretty empty; I'm going to run them over here." All of the pointers — the different proxies and ports, the ingress controller, all of that — get changed over, and you're back up and running. So you can do things like upgrade your worker nodes from one AWS instance size to another and never have any downtime: nothing to change in your configurations, no waiting on DNS pointed at IP addresses or temporary host names. It makes things move a little more smoothly.

So how do you run Kubernetes in the real world? There are three different things you might interact with. One is called kops (K-O-P-S), a utility for spinning up Kubernetes clusters in AWS. It works really well and handles all the AWS-specific nature of Kubernetes. One of the hardest things about Kubernetes is getting a cluster started: it is not easy to turn on — though it is really hard to kill once you do turn it on, which is kind of its job — and getting one turned on is involved and prone to error, so people have created these wrapper utilities to make the process a little more turnkey for us mere mortals. The second option, and the one I would encourage you to play with first if you have an interest in Kubernetes, is Google Container Engine, a hosted version of Kubernetes at Google. The reason I suggest it is that you then know you have a well-working Kubernetes cluster to play with, so any problems you're having are your own misunderstanding of how Kubernetes works, or your configuration state — not, perhaps, how you set up the cluster. And third, there's Minikube, which runs a single-node Kubernetes cluster on your laptop using Vagrant or VirtualBox, or a Linux VM. That's a great way to play with Kubernetes in the small, for developer environments: you can use the same definitions — which containers to run, which services to expose — on your laptop, and then just move them over and use them in your production clusters.

One of the things you want to be able to do with containers is configure them — we're all twelve-factor apps now, right? So we want to be able to push configuration into these containers, and Kubernetes provides several different ways. Environment variables, of course: we can just define them in the YAML. Toward the bottom, highlighted so you can see it, we've defined an environment variable name, and we put in a value that gets injected into the container's environment. That's great, but a lot of times we don't want to expose all of that in our configuration, so we can also use what are called ConfigMaps. These let us map sets of variable-like things, whole files, or entire directories of files into our pods. So maybe we don't have to list every single environment variable in that deployment YAML: we can say, here's a ConfigMap of 25 environment variables — take these and apply them to this pod. You pick which map goes to which container, and it does it all for you. You can also do things like: I want to use this nginx configuration file, put it here — and Kubernetes will grab it from its configuration store and plop it into the pod at runtime.
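A sketch of that pattern, with hypothetical names: a ConfigMap holding settings, and (in comments) the `envFrom` fragment that would pull every key into a container's environment:

```yaml
# configmap.yaml -- settings kept out of the deployment definition itself
apiVersion: v1
kind: ConfigMap
metadata:
  name: revsys-rocks-config
  namespace: revsys-rocks
data:
  DJANGO_DEBUG: "false"
  DJANGO_ALLOWED_HOSTS: "revsys.rocks"

# Then, in the deployment's pod template:
#   containers:
#   - name: revsys-rocks
#     envFrom:
#     - configMapRef:
#         name: revsys-rocks-config
```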
Kubernetes also has a concept of Secrets, and these are great. We could obviously put our database password or API keys into those environment variables in our deployment YAML, but that means everybody gets to see them, and perhaps we don't want developers to know those — just the ops people. So Kubernetes lets you define Secrets. Secrets are available, like most things, only inside the namespace in which they're defined, so you can't share them across that fence. Unfortunately, Secrets are not particularly secure right now: Kubernetes stores them as base64-encoded text on the master, so they're not as secret as you might want. To be fair, the project is working toward real secrets, encrypted on the master, and this is a stepping stone to getting there. It does keep a secret that should not be on a node from getting to that node — if no pods are running there that need access to that secret, the secret won't exist on that node — so it keeps them out of places where they have absolutely no business; it's just that once they're there, they're not particularly secret. And this is how you use them: we say, hey, I want this database password environment variable to get its value from the secret named revsys-project, from the key in there named password. You could also — and this is all just a set of trade-offs — run a Vault cluster in your Kubernetes cluster and get your secrets from Vault, or some other kind of truly secure secrets store.
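The secret and the environment-variable wiring he describes would look roughly like this (the base64 value is an illustrative placeholder):

```yaml
# secret.yaml -- stored base64-encoded (not encrypted!) on the master
apiVersion: v1
kind: Secret
metadata:
  name: revsys-project
  namespace: revsys-rocks
data:
  password: cGFzc3dvcmQtZ29lcy1oZXJl   # base64 of "password-goes-here"

# Then, in the deployment's pod template:
#   env:
#   - name: DATABASE_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: revsys-project
#         key: password
```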
Because we don't know where containers are going to be running, centralized logging becomes terribly important for figuring out what's going on. If you use Google Container Engine, logs from your cluster go straight into Google's logging tools — Stackdriver, their hosted logging system — and that works fine. We've had good luck with the EFK stack, which is Elasticsearch, Fluentd — or Fluent Bit, a smaller C version of the Fluentd agent — and Kibana. Either way, you've got to have this; you need to be able to tell what's going on. I don't even know which host revsys.rocks is running on; I'd have to go dig around to find where it's running just to get on the host and look at a log. So having centralized logs is important. For part of that, we've lightly open-sourced something — we use it; how useful it will be for you all, I don't know — but it's a logging setup for Kubernetes: it configures gunicorn and your Python apps to emit JSON logging to standard out, and it includes Kubernetes-specific information — the pod's IP address inside the cluster, which host it was running on, the name of the pod — that kind of Kubernetes-specific metadata, added into the JSON that's emitted into your logging.

Data persistence is pretty important, and there are a couple of ways to handle it with Kubernetes. The hard way is with persistent volumes. This works, but it's kind of hard to manage and kind of hard to wrap your brain around — this is advanced Kubernetes. What you're doing is saying, "I have this volume, and it provides a certain amount of space," and then your apps claim it: they make a PersistentVolumeClaim for how much space they need, and Kubernetes tries to match up the claims with the volumes as efficiently as it can, and then mounts those volumes on the hosts for the pods with claims on them. If those pods get evicted for some reason, or the host dies, it then remounts that volume on the new host where those things land. In a perfect world, that's exactly how it works, and it works that smoothly. I have yet to experience that perfect world.
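For reference, the claim half of that dance looks something like this (size and access mode are illustrative):

```yaml
# pvc.yaml -- an app asking the cluster for persistent space
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: revsys-rocks-data
  namespace: revsys-rocks
spec:
  accessModes:
  - ReadWriteOnce      # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 10Gi    # how much space the app is claiming
```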
So the easiest solution is to do off-cluster storage, and this is where I encourage people to start. All this is, is the way you were already doing storage: you have a database server somewhere that all of your containers connect to, and you manage that database server as bare metal, or you use Amazon RDS or something like that for those kinds of persistent data stores. (This slide is out of order, sorry, but the idea is that your persistent data instance is just an instance outside of your actual Kubernetes cluster, inside the same VPC so the cluster can access it. It's not actually running on Kubernetes; it's just a bare-metal node you manage with Ansible or whatever you want to use, or by hand.)

One of the things Josh was going to talk about is Patroni, a templating system for highly available PostgreSQL. The idea is that you keep a master running in the cluster and replicas running in the cluster, replicate data from one to another, and as containers get killed off or nodes die, keep that replication chain working between the nodes. I've heard good things about it, but I've never actually played with it, so I wasn't comfortable showing you how to do it having never done it myself — but I did want to mention it in case you're interested in playing a little fast and loose with your data.
One thing that I do not have a ton of experience with, but that I know is useful, is Helm. Helm is a package management system for Kubernetes, and you can think of it as templating those boxes of YAML. It's useful in more complicated scenarios: you can say, "I want a Consul cluster, and I want it to have this many nodes," and it will figure out everything that needs to be applied to the Kubernetes API to get you an up-and-working Consul cluster — the federation, the leader election, all of that handled for you. You can build these templatable systems to the point where I should be able to take your system, `helm install` it, and just have it running and working without knowing anything else, other than maybe a little bit of secret management.

And because Kubernetes is really just an API, we can use that API from Python.
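The listing he shows next isn't legible in the transcript, but the idea is simple enough that a short sketch conveys it — hit the API through the local `kubectl proxy` (default port 8001) and print each pod's IP, namespace, and name:

```python
# list_pods.py -- list every pod in the cluster via the local kubectl proxy
import requests

# kubectl proxy handles authentication using your kubeconfig,
# so plain unauthenticated HTTP to localhost works here.
pods = requests.get("http://localhost:8001/api/v1/pods").json()

for pod in pods["items"]:
    print(
        pod["status"].get("podIP", "?"),     # pod IP inside the cluster
        pod["metadata"]["namespace"],        # namespace it lives in
        pod["metadata"]["name"],             # unique generated pod name
    )
```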
That's all you have to write — if you've got `kubectl proxy` running locally and a well-formed kubeconfig file in your home directory — to get a list of all the pods running in your cluster; you can see I'm just printing out the pod's IP address, the namespace, and the name of the pod. This API is generated off a Swagger spec and kept up to date with releases, so you should always have full access to the API from Python; you do not have to build your tooling in Go unless you want to. And here's the output for that: I ran this on our production cluster, and you can see the various namespaces we've created there in the middle, the pod IP addresses, and the names of the actual pods. You'll see that it takes the name I gave them, like revsys-rocks, and appends uniqueness to it — that's that particular instance of that pod. Every time a new pod comes up it gets its own unique name, and if it gets killed, a new one comes up, so you can differentiate in the logs, even when it's the same container: which one died, which one got created. You'll see that name change happen.

Everything in Kubernetes works with this operator/controller pattern. Why would you want to create your own? Well, like kube-lego: you can create your own that takes action when things happen. You add a little annotation of your own, you watch the cluster using a little bit of Python, and you say, "I'm seeing a new pod come up that's annotated 'Frank needs to do something to it'" — and I can see that and go take action, inside the cluster or out of the cluster, however I need to. Whatever I want to have happen when an annotation comes up, I can make happen. Here are some examples of operators you could build. Post a message in Slack any time somebody creates a new deployment, so everybody knows it happened. Or maybe you want a message in Slack any time pods come up or go down, for whatever reason — that's, you know, 10 or 15 lines of Python, nothing particularly hard; package it up in a container and tell Kubernetes to run it. You could watch your Django apps, look for the database connection information, and automatically back up any databases being used by your cluster, without having to go in and define each one: "oh, Frank's test system 47 just came up, it's annotated backup=true, so I'm going to go back it up" — and I have one centralized system for dealing with that. Just like with logging, we can have centralized control, because we've abstracted the whole concept of ops out into this API. Maybe you have really complicated rollout scenarios where six hours of collectstatic has to run before things finally come up; maybe you want to avoid downtime by spinning up an entirely new service and, once it's up, tearing down the old one and cutting traffic over to the new one. You can orchestrate that with just a little bit of Python — something Kubernetes itself doesn't really support.
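A toy version of such an operator, with everything beyond the watch mechanics stubbed out — the annotation name is made up, and `kubectl proxy` is assumed to be running as before:

```python
# watch_pods.py -- a tiny "operator": react to new pods with an annotation
import json
import requests

# The watch endpoint streams one JSON event per line as things change.
WATCH_URL = "http://localhost:8001/api/v1/pods?watch=true"

with requests.get(WATCH_URL, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        pod = event["object"]
        annotations = pod["metadata"].get("annotations", {})
        # Hypothetical annotation marking pods whose databases we back up.
        if event["type"] == "ADDED" and annotations.get("backup") == "true":
            # Take whatever action you like here: ping Slack, kick off
            # a pg_dump, update DNS...
            print("would back up:", pod["metadata"]["name"])
```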
Hopefully that's enough information to make you interested in Kubernetes — but I'm sure you have questions.

[Audience: How do you handle things like CPU? I have some services that use a lot of CPU, and I don't want to run two of those services on the same node, where they'd be a threat to each other.] In the interest of getting things onto slides without melting your brains too much, I left out resource quotas. These are just items in that same YAML where you say: this takes this much memory, with a soft limit of this and a hard limit of that. You can do it for CPU, memory, and storage, and Kubernetes will handle it much like any other kind of quota system. If a pod reaches its soft limit, information shows up in the API; the hard limit actually kills the pod and then recreates it. So you can tag hogs by how much resource they should use, and where you want to stop them if they grow beyond that. You can also target nodes: you might have a cluster with some memory-heavy AWS instances and some CPU-heavy AWS instances, and you can say "this pod needs to run on one of my memory-heavy instances; these should run on my CPU-heavy instances." Kubernetes will pack things as best it can into those nodes based on the values you've given it. If you don't put in any resource selectors — like in my examples — it will overflow: it will just keep packing nodes, and you will eventually get swapping and things like that. And if it can't find a spot for a pod because there aren't enough resources, it will continually try to find one, and you'll see messages in the dashboard — lots of "I can't run this pod, I do not have the resources to do it" — until you add another node to your cluster, and then it will put it right there.
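As a sketch, those knobs live in the pod spec; the label used for node targeting here is hypothetical, and the numbers are only illustrative:

```yaml
# Fragment of a deployment's pod template: quotas plus node targeting
spec:
  nodeSelector:
    workload: memory-heavy   # hypothetical label on the memory-heavy nodes
  containers:
  - name: revsys-rocks
    image: revsys/revsys-rocks:latest
    resources:
      requests:              # the "soft" numbers the scheduler packs against
        cpu: 250m
        memory: 256Mi
      limits:                # the hard ceiling; exceeding memory kills the pod
        cpu: 500m
        memory: 512Mi
```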
[Audience: I'm totally new to Kubernetes. On the ingress points — does traffic come to the nodes or to the pods?] It comes to the ingress controller, and it's proxied inside the cluster from there. You would think this adds an extra hop — and it does — but in practice, unless you're at really, really huge scale, it doesn't matter; in day-to-day use no one is ever going to notice that extra hop through the little Go proxy, and it's super effective.

[Audience: Considering the pods are all replicated, I guess the load is placed on a load balancer in front of it?] Yes — from the outside, the ingress controller is the load balancer. Traffic comes to the cluster, it load-balances from there, and it handles all of the "where is this pod running?" and just shoots requests to where they need to go inside the cluster. You don't have to think about it.

[Audience: Thinking about your application, what are some ways to decide whether Kubernetes solves more problems than it creates?] Whether it solves more problems than it creates — that's a tough call, like with any other tool; every tool has its pros and cons. On one level, I think it's sometimes easier to SSH into a box and do the thing by hand, but that's not reproducible. The thing I like about this one is that I'm going to be doing stuff with containers anyway, so I need some sort of container-based system, and most of the older tools — Ansible, Chef, and things like that — are not container-focused, so I don't see them as good tools for solving problems around containers. Which one you pick, or whether you use one at all, is the hard thing to weigh, but this one really does free me up from thinking about the mundane things — where is this going to run, which worker is available to run it on, how do I fail over that worker — and it gives me the power to listen on the API for when things change and take some sort of action, so I like it. If you have one app and you do a deploy once a week, this is probably overkill. If you're managing 15 microservices and you deploy 10 times a day, you probably already are building something like this — or should be using something like this.

[Audience: Thanks very much for the introduction. What might the next step be? How did you go about learning this — do you have resources you found particularly helpful that you could recommend?] Kubernetes is a very fast-moving beast. I first started playing with it around 1.2 and it's already at 1.7, and releases come out every few months. They actually have a fairly nice process: features come out marked as alpha, then they're marked as beta, and then they eventually become stable. Once they hit beta, the YAML configuration for the most part doesn't change, and you can pretty much just pick it up and move it forward; the alpha stuff is pretty alpha — a lot of it works, but the documentation often lags behind the release just a bit on the newest stuff, or the stuff that just recently changed a lot. The documentation should be an amazingly great resource, and it is, as long as you keep in mind that if a thing came out in the last version, the docs may be wrong, or if it just moved from alpha to beta, the docs may be slightly off. Beyond that, it's mostly hunting tutorials — let me see how somebody else went about this, let me go look at their Kubernetes configuration — and some playing around. There's no really great "here's the book on Kubernetes" that solves it all yet. Thank you.

Metadata

Formal Metadata

Title End-to-End Django on Kubernetes
Series Title DjangoCon US 2017
Part 23
Number of Parts 48
Author Wiles, Frank
Contributors Confreaks, LLC
License CC Attribution - ShareAlike 3.0 Unported:
You may use, adapt, copy, distribute, and make the work or its contents publicly available, in unaltered or altered form, for any legal and non-commercial purpose, provided that you credit the author/rights holder in the manner they specify and pass on the work or its contents, including in altered form, only under the terms of this license.
DOI 10.5446/33185
Publisher DjangoCon US
Release Year 2017
Language English

Content Metadata

Subject Area Computer Science
Abstract Not only is Kubernetes a great way to deploy Django and all of its dependencies, it's actually the easiest way! Really! Deploying multi-layer applications with multiple dependencies is exactly what Kubernetes is designed to do. You can replace pages of Django and PostgreSQL configuration templates with a simple Kubernetes config, OpenShift template, or Helm chart, and then stand up the entire stack for your application in a single command. In this presentation, we will walk you through the setup required to deploy and scale Django, including:
- Replicated PostgreSQL with persistent storage and automated failover
- Scalable Django application servers
- Front-ends and DNS routing
The templates covered in this presentation should be applicable to developing your own Kubernetes deployments, and the concepts will apply to anyone looking at any container orchestration platform.
