
Automate Kubernetes Workloads with Ansible

Formal Metadata

Title
Automate Kubernetes Workloads with Ansible
Subtitle
Easy deployment, self-service provisioning, and day-2 management!
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Have you longed for a simple yet powerful way to automate workloads on Kubernetes? Ansible, the familiar IT automation tool, makes it easier than ever to automate your cluster. This talk introduces two practical ways to define and provision complex applications using Ansible Automation with Kubernetes. The Automation Broker allows users to leverage Ansible Automation to define and orchestrate applications, making them available for self-service provisioning in the Service Catalog. The Ansible Operator allows users to actively manage the full lifecycle of an application by defining management behaviors with Ansible; the Operator handles everything else. In this session you will learn: how to define and deploy your application on Kubernetes using Ansible, how to publish your own applications in the Kubernetes Service Catalog, and how to create a Kubernetes Operator using Ansible.
Transcript: English(auto-generated)
All right. Hello. Thank you all for coming. Welcome to this talk. My name is Michael Hrivnak. I've been at Red Hat for almost seven years. This is my second time at FOSDEM and I love this conference. Super happy to be back. I worked for a long time on the Pulp team, which we actually heard a little bit about earlier this afternoon. I got involved in the container tooling at Red Hat from the very, very early days, when Docker started to become a thing there, and that sort of snowballed. I got involved in container orchestration, and now here I am working on using Ansible in particular to automate the workload side of Kubernetes.
So let's dig into that. So what is Kubernetes? The flip side of this question, for this crowd, is: what is the role of config management in a world that has Kubernetes? This is the experience of interacting with Kubernetes.
Kubernetes, I'm assuming most of you have some idea of what it is, but in the shortest recap, it's a system that takes some group of machines and turns them into a cluster and enables you to schedule containerized workloads into that cluster. And a bunch of extra stuff around that. But it's declarative and we generally interact with it using YAML.
So here we have the most basic example: I have a container image called companyname/example (I was feeling creative that day), and I want to run that container in my cluster. On the right we have a service. This is a Kubernetes primitive that gives us a network presence for a running container or a set of running containers that are all one and the same. So what's worth pointing out about this is, one, we're interacting with Kubernetes by writing YAML, either by hand or using some kind of tooling,
maybe something like Helm, which is familiar to many of you. We're creating YAML and stuffing this YAML into the Kubernetes API, and it's declarative. What that means is that we stuff it into the API, we stand back, and we allow the cluster to do whatever it thinks is necessary to make what we have asked for true. So in this case the cluster would see: ah, there's a pod, and it is requesting this container to be running. Let me go look at the world. I don't see that one running, so now I'm going to start it and do whatever else I need to do to make that happen.
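As a rough sketch of the kind of manifests being described here, the pod and service pair might look like this; the names, labels, and ports are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: companyname/example
        ports:
        - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example
    spec:
      selector:
        app: example
      ports:
      - port: 80
        targetPort: 8080

The service's selector matches the pod's labels, which is how the service knows which running containers to send traffic to.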
So now let's talk about Ansible, how it intersects with Kubernetes, and why that actually makes Ansible a really natural fit. So Ansible has a Kubernetes module, and it's a really wonderful module. If you have ever used Ansible pre-2.6 to interact with Kubernetes, I strongly encourage and invite you to take another look at this
new K8S module. It is light years better than what we had before. It's simple, it's elegant, and it doesn't get in your way. And here's the example to prove it. On the left we have a resource called a config map. It doesn't really matter what the resource does or why it exists. But just trust me, it's a Kubernetes resource that you can
create just by writing this little bit of YAML and pushing it into the API. On the right, we're creating the same thing. And maybe to take one step back, the experience we might have with Kubernetes is to run the command-line tool kubectl. So I could run kubectl create -f, give it a path to this little YAML file, and it would create this thing for me. Instead, we could create this simple Ansible task. You see the red part is exactly the same as the part on the left, the only difference being that I've taken the liberty of templatizing it, because now we have all of Ansible's template
ability at our disposal. So just right here, we can already see that the K8S module can be a very nice gateway to having a powerful and rich templating experience when interacting with Kubernetes.
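A hedged sketch of that kind of task, assuming Ansible 2.6+ with the K8S module; the variable names and namespace are illustrative:

    - name: Create a ConfigMap for the application
      k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: "{{ app_name }}-config"
            namespace: "{{ app_namespace | default('default') }}"
          data:
            greeting: "{{ greeting | default('hello') }}"

Everything under definition is exactly the YAML you would otherwise feed to kubectl, except that Jinja expressions can be dropped in anywhere.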
Now, if you don't want to inline your Kubernetes manifest in quite that way, here's another option. You can create manifest files, which is how people normally store these manifests, and just put them in the templates directory of your role. You can access them like this. So even for somebody who's never used Ansible before, but might be interested in learning it, or maybe is just looking for a good way to manage what's going on in their Kubernetes cluster, this makes it very, very accessible. Even if they just trust somebody, take these four lines of text, and then put their Kubernetes manifest in that file in their templates directory, they can really get a lot done and go a long way. Something like the sketch below is all it takes.
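Presumably those few lines of text are a task along these lines; the file name is illustrative:

    - name: Deploy the application from a templated manifest
      k8s:
        state: present
        definition: "{{ lookup('template', 'deployment.yaml.j2') }}"

The template lookup renders templates/deployment.yaml.j2 with whatever variables are in scope and hands the result to the K8S module.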
So this is the guts of an Ansible role. Most of you are probably familiar with this, I imagine. But here I want to highlight that a role is for packaging related Ansible code together, of course. Our goal in using Ansible to interact with Kubernetes is to create a single role that knows how to deploy a single application. So maybe we take an application like WordPress or MediaWiki or something like that; we would make a role that knows how to interact with Kubernetes to deploy that application, and then maybe even interact with that application itself after it's running inside the cluster. And once you buy into that idea of making a role that does that, then we're going to do some extra very interesting things toward the end of this talk to enable self-service provisioning, reconciliation, and continuous management in that sort of way. And then here in yellow, I've just highlighted the two things that a newbie to Ansible needs to know about and worry about in their brand new role. In our case, we have some tooling that we're going to look at in a minute that scaffolds this all out for you, including some other pieces. But even if you just use the Ansible Galaxy tool to create a brand new role, you get this directory structure for free, and all you have to do is worry about the templates directory: you can put some templates in there and then reference them from your main.yml file. So it's really very simple.
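For reference, this is roughly the layout that ansible-galaxy init my-app-role scaffolds out (a few files are omitted here for brevity):

    my-app-role/
        defaults/main.yml     # default variable values
        handlers/main.yml
        meta/main.yml
        tasks/main.yml        # reference your templates from here
        templates/            # put Kubernetes manifest templates here
        vars/main.yml

Only the tasks and templates pieces need attention for the use case described above.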
So why use Ansible with Kubernetes? It's not just that both Ansible and Kubernetes speak YAML; it's also that we talk about Ansible as being idempotent. You want to be able to rerun the same Ansible role or playbook over and over and get the same results at the end. Likewise, in Kubernetes, we have controllers that want to be able to run a reconcile function
over and over again and always end up at the same state at the end. They're always at least moving toward the same end state. So it's a very natural and similar pattern to bring together. A lot of people are already familiar with Ansible. I'm betting this room is no stranger to Ansible. Even if you're not,
it's really easy to learn. Jinja templating is something a lot of people are familiar with, even outside of Ansible. And then lastly, for these reasons, we get full, actually quite rich day 2 management out of Ansible. It's much more than just a templating engine. You can use it after you've deployed your application to do advanced things like
backing it up, restoring it, upgrading from one version to the next, whatever detailed steps or careful work might need to be done to facilitate an upgrade in some cases. You can repair things when they're broken. And you can scale things in custom ways. You can identify your own metrics.
Maybe it's a queue depth somewhere. Somebody was telling me about their use case a few weeks ago where they're trying to measure a queue depth of a microservice that's multiple services upstream from what they're actually trying to scale out.
So they're basically trying to get advance warning when there's a flood of work coming in upstream somewhere, and scale out the guts on the bottom so that they're prepared when the flood arrives. All of that is stuff you can do using Ansible. So now that we've bought into this idea, we've got an Ansible role, and this Ansible role can deploy and manage an application
in Kubernetes, we have some extra tooling available: two different ways, in particular, that we're going to look at right now, of taking that role and doing more with it. The first pattern we're looking at is Ansible Playbook Bundles. If you think about
provisioning an application just in general, forget about Ansible, forget about Kubernetes, however you're going to do it, whatever tool you're going to use, these are the kind of things you need to have in front of you when you're going to do that. You need just the Kubernetes manifest files, you need to know about any external services you're going to access and how to access them. Maybe you have some config data specific to this
instance you're provisioning, maybe you have some seed data you need to get in there, maybe you are actually restoring from a backup. You need some runtime tooling. So what technology do we know about that we could use to package all of these things, or at least most of these things, up into one place and move them around in an immutable form that's testable and all that stuff?
Well of course it's a container. Packaging, by the way, I think is the underrated side of containers. The fact that it's a process running in isolation is interesting, but the packaging aspect of shipping these images around is in many ways, I think, the more powerful side of it. So Ansible playbook bundles are really just a pattern
of taking all this stuff, using Ansible, and putting it into a container that can be run in a particular way with a very simple interface that we've defined. So this Ansible playbook bundle, it runs to completion as a pod in your Kubernetes cluster. If that sounded
foreign: it is a container that you will run in your Kubernetes cluster. You'll start it, you'll let it do whatever work it's going to do, and then it stops, it exits, and you clean up anything that's left of it. It's like an installer. It's very similar to just having an installer that you can run in your cluster, and out pops this application.
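As a rough sketch, an APB project of that era looked something like the layout below (scaffolded by the apb CLI tooling); treat the exact file names as illustrative rather than authoritative:

    my-app-apb/
        Dockerfile                  # builds on an APB base image
        apb.yml                     # metadata the broker reads: name, description, plans, parameters
        playbooks/provision.yml     # entry point run on provision
        playbooks/deprovision.yml   # entry point run on deprovision
        roles/                      # ordinary Ansible roles doing the real work

The provision and deprovision playbooks are just Ansible, typically calling the same kind of role discussed above.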
And the nice thing about being a container is that it's testable, it's reproducible, and you can put it through a full CI pipeline. But what else can we do with this Ansible Playbook Bundle? There is this idea of the Kubernetes service catalog, which is similar to how Amazon Web Services has their catalog of services in their cloud: in your Kubernetes cluster, you can have your own catalog of your services available in the Kubernetes service catalog, and Ansible Playbook Bundles are perhaps the easiest way to get one of your services exposed and available inside that service catalog. This is an example of the OpenShift user interface; it's just
basically the nicest user interface there is for the Kubernetes service catalog. (Come on in and find some seats.) It's what you'd expect out of this kind of experience: you point and click on one of these things, maybe you select MariaDB, it asks you some questions, you fill in the answers; it's just like an installer. And at the end you hit go, and
some work happens in the background, and that's it. Now you have a thing provisioned. Well how does that work? In a nutshell, on the right here you see these brokers, and each broker can advertise one or more services to a cluster, and say, hey, I know how to deploy MariaDB, or I know how
to deploy MediaWiki, or I know how to deploy Prometheus, or whatever it may be. You can provision these things, deprovision them, and do other actions. The point of this is that it enables self-service provisioning, so users of your cluster (perhaps you have dev teams, perhaps you have QE), when they need a database, can go and deploy, in a very
simple point and click kind of fashion, whatever services you've made available in that cluster on their own without needing to bother anybody else. Alright, so what's the last piece of this service catalog story? Chapter 5 here is the automation broker. So we just saw we have this series of brokers here that can plug into a
cluster and advertise services for provisioning. With the Automation Broker, we thought maybe we could do something a little simpler. We created one broker that uses APBs, Ansible Playbook Bundles, as the services that it advertises to the
cluster. So each APB that you make becomes available for provisioning, and in fact even in that screenshot we saw earlier of the OpenShift service catalog user interface, some of those icons were being powered by the automation broker. So you would click through and
ultimately what's happening is that the input the user provides in the wizard through the service catalog gets passed into Ansible at runtime and is then available as facts for use in your templates, or however else, while Ansible is running. And then you can do whatever work you need to do. The broker
takes care of running it in a secure, transient namespace, so at the end of a provision it throws away that namespace and cleans up after itself. So the end of this story is that it removes the need for you to make your own broker, for sure, but it also takes advantage of Ansible
and the K8S module, and makes it very, very easy to make your own services available for provisioning inside of a Kubernetes cluster. Don't squint too hard at this. This is an example of the user experience; just trust me that there's a command-line interface you can use to interact with the service catalog. It's not the most ideal experience, and this is hard to do from a command line, but it does a nice job of making the best of it. So that's available; a sketch of what it looks like follows below.
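A hedged sketch of that command-line interaction using the svcat tool; the class, plan, and parameter names here are purely illustrative:

    # see which services the brokers are advertising
    svcat get classes

    # provision an instance of one of them
    svcat provision my-db --class mariadb-apb --plan default --param mariadb_user=admin

    # check on it later
    svcat get instances

The real class and plan names come from whatever the brokers advertise, so checking the output of svcat get classes first is the practical starting point.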
Kubeapps is another option, which came out of Bitnami, that you can use and that will run on just about any Kubernetes cluster, I think. And then this is the OpenShift user experience. It's a very nice user experience. I do work on OpenShift, but I'm not going to lie, it's very nice. So if you're running OpenShift, or if you'd like to run OpenShift, you'll get a first-class service catalog experience. It's quite a nice story. So what's the status of this? On the service catalog side, it's a great path for self-service provisioning. It works today. It's a mature
ecosystem. You can just go out and do it. The best use case for this is off-cluster service integration. So say perhaps you have some appliance, or you've got an off-cluster database or some other thing like that you want to interact with, or maybe you're running a cloud, like if you're Amazon. In fact, Amazon themselves did use
this very automation broker. They wrote APBs. They wrote Ansible Playbook bundles to make a broker that ran inside of OpenShift when OpenShift was inside the Amazon cloud and exposed their services to their customers.
This lacks day-2 management. We're going to see how to get day-2 management in just a minute, but that's really the biggest drawback. It'll deploy something. So if you use this pattern to deploy an application that's running in your cluster, it deploys it great; it does a fantastic job of that. But then there's nothing else watching it. It's up to you now to own the pieces, manage it, monitor it, repair it, upgrade it, whatever. That's
mostly on you in this pattern. Unless, of course, you're interacting with an off-cluster service, in which case you probably have some other systems in place to deal with all of that stuff. So this thing called operators is really going to take over as the preferred solution, and that's in large part because it does have day-2 management as a first-class concept.
In fact, that's really what the core focus of operators is and we're going to dig into that in just a moment. So the bottom line is the service catalog is going to definitely stay around. It's a thriving part of Kubernetes. It's of course going to be part of OpenShift for the long term.
But I'm going to show you the operator pattern next as the second option and I think it's probably a better option for most people. Operators. What is an operator? So an operator is just a particular type of Kubernetes controller. A controller is a service that sits around running in your cluster watching for some resource to get created or
updated or deleted or something. And whenever something happens with a resource that it's interested in, it just wakes up and runs a reconcile function and does whatever it thinks is appropriate to move the state of the world closer to what that resource says the state of the world should be. And an operator is nothing more than a
controller that is purpose-built to deploy and manage an application of some kind in your cluster. And beyond that, the real highlight of operators is that you can use them to encode human operational knowledge into your cluster. So anything that you would do if your
pager went off or that you would do when you're doing an upgrade or doing backups or doing restores, we all love to automate ourselves out of a job. This is that mentality. Encode what you would normally otherwise have to do as a human typing on a keyboard into your controller so that it knows not only what to do
but when to do it, and can pretty much manage your services for you. So how do we make one of those things? And how is that even possible? Well, Kubernetes is interesting. It has a REST-ish API, as I'm sure you can imagine. Just think of a long list of endpoints with resources that you've seen even
tonight so far: a pod and a service and a ConfigMap and so on. The interesting thing about Kubernetes is that it allows you to add your own custom endpoints to its API. So in this example, we've created a Memcached resource type in Kubernetes. Now, in its list of API endpoints, there's a new Memcached one. Kubernetes gives you the opportunity to namespace that, but that's a topic for another night.
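A hedged sketch of what registering such a type looks like, using a CustomResourceDefinition; the group and version names are illustrative, and apiextensions.k8s.io/v1beta1 was the current CRD API around the time of this talk:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: memcacheds.cache.example.com
    spec:
      group: cache.example.com
      version: v1alpha1
      scope: Namespaced
      names:
        kind: Memcached
        listKind: MemcachedList
        plural: memcacheds
        singular: memcached

Once that's applied, the API server serves a new /apis/cache.example.com/v1alpha1/namespaces/<namespace>/memcacheds endpoint, even though nothing is acting on those resources yet.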
By starting from an Ansible role that we purpose-built to deploy a particular application, say Memcached, and then having an API to create, update, and delete resources that describe an application like Memcached, you can probably see where this story is going. So this is the pattern of how an operator works. We have this smiling face up on the top left. They interact with the Kubernetes API. They create their custom resource; in this case, it's a Memcached, let's say. They describe what they want their Memcached to look like. A controller in the middle wakes up, sees the event, and does whatever it thinks is necessary, which ends up being that it creates some pods, it creates a service, maybe it creates a persistent volume for some reason. Who knows what else it does, but it does all those things, and now the application exists. Then the controller sits around and just waits for anything else to happen to that resource. And if you change it, it'll go change the real world to reflect whatever changes you made.
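The custom resource that user creates might look something like this; the spec fields are entirely up to whoever builds the operator, and size here is just an illustrative parameter:

    apiVersion: cache.example.com/v1alpha1
    kind: Memcached
    metadata:
      name: my-memcached
    spec:
      size: 3

Changing spec.size later is exactly the kind of event the controller watches for and reconciles against.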
How does Ansible fit into this pattern? Well, we made an Ansible operator for you. Normally, before Ansible is involved, you are writing your own controller, in Go probably. Some people have strayed off that path and written them in another language or two that I won't mention. But for the most part, you're going to be on the hook for writing a software project, writing a controller that does that stuff. Instead, we've done a lot of that work for you. So just like in that broker story we saw
a minute ago, where we made a generic broker that can just run Ansible for you, here we've made a generic operator that will, likewise, run Ansible for you. So our Ansible operator is written in Go, and it's using all the Kubernetes client tooling, which is really nice stuff and gives you a lot of power
in terms of caching and queue management and that sort of stuff. So we've done that, and every time it gets one of these events and decides it needs to run a reconcile, all it does is run your Ansible role, or your Ansible playbook, which can run as many roles as it wants. And in the middle, we have this mapping
file. It's not too important a detail, but basically, for any resource type you define, you just tell this operator: if you see the Memcached resource, run this role; if you see some other resource, run this other role. And that's it. So it ends up being a very simple kind of pattern and experience.
This is what that file looks like. You can see that group, version, and kind are how we define a resource, and then we're just mapping that to the playbook. Pretty simple concept.
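In the Ansible operator that mapping file is watches.yaml, and a minimal sketch looks roughly like this; the group, version, and paths are illustrative:

    - version: v1alpha1
      group: cache.example.com
      kind: Memcached
      role: /opt/ansible/roles/memcached
      # alternatively, point at a playbook instead of a role:
      # playbook: /opt/ansible/playbook.yml

Each entry says: when you see an event for this kind of resource, run this role (or playbook).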
The Operator SDK is the project we have; it's the tooling that helps you build one of these operators. You could do it in Go, and we've got some great tools to help you do that. You could do it in Ansible; I think that's probably the easiest path, and certainly the best balance of ease of getting started and long-term power. You could also do it with Helm, if you have existing Helm charts and you want to get into this operator pattern right away. And I guess this bears emphasizing: a key benefit of the operator pattern is that it makes your application part of the Kubernetes API. So it's now Kubernetes-native, for whatever that means; in this case, it really means it's part of the API. You can provision, upgrade, manage, do everything natively through the normal Kubernetes API. So if you want to take Helm charts and make them
Kubernetes-native in that way, you can do that with an operator right now. It's a bit more limiting in terms of day-2 management; well, it's quite a bit more limited in day-2 management, but you can get started that way. And then there's a link to the Operator SDK. This is what your base image is going to look like. We provide the green parts: that includes Ansible, it includes Ansible Runner, and it includes the operator binary. All you have to provide is the yellow parts on top. So you provide one or more roles, you provide that mapping between them, and that's it. Now you've got your own operator. Otherwise, it's as easy as just writing Ansible to do whatever work it is you need to do.
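A hedged sketch of what building that image might look like; the base image tag is illustrative and would need to match the operator-sdk release in use:

    FROM quay.io/operator-framework/ansible-operator:v0.5.0
    COPY roles/ ${HOME}/roles/
    COPY watches.yaml ${HOME}/watches.yaml

The base image already contains Ansible, Ansible Runner, and the operator binary; the two COPY lines add the roles and the mapping file described above.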
So the bottom line of these two stories: on the one hand, we can make a service broker and plug into the Kubernetes service catalog by making an Ansible role that can deploy our application with Kubernetes, and we saw how easy it is just to interact with Kubernetes from Ansible. On the other hand, we can make an operator, which is a different kind of pattern and
certainly what seems to be the future of Kubernetes. And both of these are just made really, really easy by Ansible. I just love working with Ansible and Kubernetes. If you would like to put your hands on this stuff and try some labs, please don't do it right at this moment, because you will crush the site.
I've seen it happen. Ask me how recently. You can go here tonight, tomorrow, when you get home next week, and there are a number of exercises that you can run for free, no registration, we don't collect any contact info, nothing. You just go there, you click, you get your own environment
with Kubernetes running. It happens to be OpenShift, which is Red Hat's distribution of Kubernetes, but it's Kubernetes. You get exercises down the left, you get your environment on the right, and you can go through it: you can build your own operator using Go, you can build your own operator using Ansible, you can learn how to use the Ansible Kubernetes module, and all kinds of other stuff. It's a great way to get to know it.
If you want to dig into this more, one, you could go out for a beer with me tonight; otherwise, there's Config Management Camp. I'll be there too, doing a bit of a longer talk, and we'll probably have some more time there to dig into more detail on this whirlwind of stuff that I know I just threw at you. So with that,
do we have any time for questions? Is that a yes? 30 seconds. Do we have one question? Anybody have a pressing question? Right here, in the middle. (The question: how do you deal with different versions of your roles?) It's hard, but you deal with it the same way you deal with versions of operators in general.
So you're going to have a dichotomy of your application lifecycle versus your operator or APB lifecycle. So it's kind of up to you to have a project that's going to be your operator, and it's going to include one or more Ansible roles, and you can
just version that the normal way you would version containers that you build other ways. That's it. All right, we're out of time. Thank you, everyone.