
Application Provisioning with DSC and Octopus Deploy


Formal Metadata

Title
Application Provisioning with DSC and Octopus Deploy
Number of Parts
60
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Production Year
2018

Content Metadata

Abstract
Giving developers control of application provisioning through Desired State Configuration and Octopus Deploy. Learn how a DevOps team created a solution that gave developers control over their application provisioning by defining a JSON document. This talk will demonstrate how to use an open-source PowerShell module, InvokeDSC, with Octopus Deploy to create such a solution.
Transcript: English (auto-generated)
Welcome to application provisioning with DSC and Octopus Deploy, a world where developers manage their own configurations. So imagine that you're a developer,
and you just spent a bunch of time on a web app, and you're excited to push it out to your production systems. But the current process is that you need to provision those servers. And to do that, you need to submit a ticket. And you know that after you do that, you're going to be waiting probably three weeks before it gets provisioned. Now, on the flip side of that, imagine that you're a sysadmin,
and you just got paged at 4 in the morning, you had to fix a production outage, and you go into work at 7, and you get this request. You get a request to provision some IIS boxes, maybe 20 or 30 of them. And you think, this is a rather ridiculous request. There's got to be a better way.
And so if you're like me, you'd think, the solution to this is really just some PowerShell. It's really just a PowerShell script. Problem solved. I'll just run it every time I get this request. But there are really a couple of problems with that. Who still has to run that code? You do. And so this whole concept, this application provisioning
with DSC and Octopus Deploy, is really a story about how we offloaded that work to our development team. They're the people who understand how it needs to be provisioned. They know all the parameter values. And so this is kind of a discovery of a solution that allowed them to do that. And the main message from this is: automation is the answer.
We all know that. But who you write automation for and how they use it is actually more important than the automation itself. So to start off, I'm going to go through what I call my DSC Cave of Trials. I didn't want to just automate it with PowerShell; I wanted to use DSC. DSC is really cool, really awesome.
And it has the xWebAdministration module that takes care of all the IIS provisioning. And so why not use that? Well, bringing DSC into an organization was way more difficult than I had originally envisioned. And so my first implementation of that, my first failure, was monolithic control. From a sysadmin perspective, I had a tendency
to centralize everything: centralized logging, centralized domain controllers, that kind of thing. So I had one configuration file that, depending on the role of the servers, would generate a different MOF document. The problem with that, though, and the problem with DSC and the MOF delivery model, was any time any of the apps made a change, it recompiled every single configuration
and reapplied it, even though there was only one change to maybe five servers instead of 200 servers. And so that didn't scale. It didn't fit the needs of the developers. The other problem was it was written in DSC and in native PowerShell. And our user base is developers, and mostly Windows developers. And so they didn't really understand
that syntax and that DSL. So we moved on to a slightly more complex approach. We tried to use partial configurations. This fell flat on its face almost immediately, just because of the complexities where you've got this team writing this configuration and this one over here, and you're merging them together. But the thing is that they have to run at the same time.
They all compile into one and then they run together. So it still had a lot of the same issues that the first one did. And so from the ashes of these two failures, I kind of went on an uncharted path. I looked at a lot of the tools that were out there in the industry that were successful. I knew that I wasn't gonna get buy-in to buy one of these big management tools.
So I tried to emulate it as much as possible and create custom tooling around what would meet our specific needs. And so out of this uncharted path became a lot of requirements. Grab a sip here. So after those two failures, we got a lot of requirements.
And the requirements were derived from the things that they didn't like. The very first thing was they needed a friendly format. DSC and PowerShell weren't native to them. They didn't know how to use them. And so we asked them, you know, what would you prefer? And they came back with JSON. I was like, well, I don't think it would be too hard. Other people have done YAML abstractions of DSC.
I was like, I could probably do a JSON one. Why not? The next requirement was simplified execution. Again, that MOF document that ran the entire server config was a problem. They wanted to run just the provisioning for their app at deployment time, not the entire box. And so with that, I again looked to the community and I looked to other platforms that were using it.
And they were using Invoke-DscResource, which invokes resources one by one and gave us the flexibility to use the LCM, the Local Configuration Manager, more as an API instead of running everything through one big configuration document. They wanted it to deploy at provision time. So this was actually a really good segue into being self-contained.
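As a rough sketch of that Invoke-DscResource idea (this isn't the talk's actual code), driving a single resource through the LCM looks something like this, testing first and only setting when it's out of state:

# Minimal sketch: drive one DSC resource through the LCM instead of
# compiling and applying a whole MOF. Property values are made up.
$resource = @{
    Name       = 'File'
    ModuleName = 'PSDesiredStateConfiguration'
    Property   = @{
        DestinationPath = 'D:\Logs\WebApp'
        Type            = 'Directory'
        Ensure          = 'Present'
    }
}

$test = Invoke-DscResource @resource -Method Test -Verbose

if (-not $test.InDesiredState) {
    # Only apply the change when the test reports the resource is out of state
    Invoke-DscResource @resource -Method Set -Verbose
}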
So what we did was we took that friendly configuration document and we put it in their source code. And now when new applications and APIs get spun up, they also get a configuration document for their IS app. And that way, they could deploy at deployment time and everything would be self-contained and they're not dependent on some kind of bus
or a big configuration having to go through beforehand. It's just when the code runs, it gets provisioned and then it deploys the web app. So the design overview is this. It starts in source. So their source code, with all of their C# code or whatever code they want to write, also has a config.json. And this JSON is strictly just a file
that provisions our IIS stuff, maybe creates some directories for the logging, and also sets the ACLs for that. The next step is that source then kicks off CI and it gets built. The product of that is a build artifact. So now they have all of their code and their configurations for their provisioning inside a NuGet file or whatever they choose to compile to.
That is then sent over to our deployment tool which, as you can probably guess from the title of this talk, is Octopus Deploy. Octopus Deploy then does two things, really. It could do several things. But it provisions the IIS app and then it deploys the web app. This also gets us away from people getting ahead
of the deployment schedule for IIS and then ending up in inetpub and causing issues later on down the road with cleanup and so forth. So that is all the slides for now. But before we jump into the demo, we're first gonna look at the web app JSON configuration. So this is a DSL abstraction of DSC
to make it more friendly for the developers. And we'll take a look inside the NuGet artifact. This is pretty stripped down. There isn't actually any deployment code in there; it's just the artifact. But note, I'm not gonna be demoing any CI work here; just know that the source is compiled, tests are run, and then a build artifact is generated.
And then we're gonna be taking the demo from that point on. After that, we'll dive into Octopus Deploy. We'll take a look at the web app project, which is how our applications are set up to deploy. And then the Invoke DSC App Config step, which is a step template that wraps PowerShell that invokes the custom module that all this came from. So everything under the hood,
this JSON abstraction of DSC, is a custom module that's out in the community called InvokeDSC. Then we'll create and deploy a release and talk about the variable substitution that you get for free with Octopus. So let's dive into the demo.
So here we have the JSON DSL. It basically has two object constructs. It has the modules object. What we wanted to avoid was a dependency on a pull server; we're obviously not using one in this case. So how do we get modules around?
And we wanted to give developers the flexibility to add resources if they were so inclined to expand on DSC. So the modules array simply takes the name of the module and it takes the version. You can put null in here and it will grab the latest, but this is a string that is then used to go and fetch that module.
So there are some helper functions inside the InvokeDSC module that will go and reach out to the repository you specify and dynamically pull them down. There are specific modules that we do lock the version of, like PackageManagement. Mostly ones that have breaking changes without backward compatibility are what we exclude from that "grab the latest" behavior.
And you can have as many as you want. The next object is DSC resources to execute. So again, this is gonna look very familiar to anything else that's abstracted out — whether it's Puppet or Chef, or even some of the other custom stuff that's out in the community; there's an Ansible module that abstracts DSC out
into YAML. It's a very similar concept, where you have the name of the resource — this is an abstract name, so in this case it's web app folder, and that's just the name you're giving it. The dscResourceName is the actual DSC resource that you are calling. So in this case it's File.
And then after that, inside the same object, are the properties that you would get from DSC. So if you wanted to — can you see the terminal okay? — you could do Get-DscResource -Name File -Syntax.
And those same properties you can just carry over into the JSON. Now JSON has different types that you can use, so it is type-sensitive. If a property requires an int, you want to put an int in your JSON; not everything is text like it is in YAML, which made the comparisons a little bit easier. So DestinationPath is one of those properties.
You see this weird syntax here. People that have used Octopus Deploy will recognize this as an Octopus variable. So this is one of the two ways that you can replace variables: you can just put them in the code. Now there's a better way you can do that if you want to have offline compatibility outside of Octopus. But sometimes the escape sequence gets weird
and that's your only option. So that's one way. Per environment you can replace what's in the JSON document and get a different result. So if the password is different here versus over here, you can change that. The type is going to be a directory and it's ensuring that it's present. To anyone that's familiar with DSC it looks very familiar; it's just the properties.
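For reference, a stripped-down document in roughly that shape might look like the following. This is an illustrative sketch only — the key names approximate the InvokeDSC JSON DSL described here, not necessarily its exact schema — and it's wrapped in a PowerShell here-string, since that's a handy way to test these configurations without a file on disk:

# Illustrative only -- key names approximate the JSON DSL, values are made up.
$configJson = @'
{
    "Modules": [
        { "Name": "xWebAdministration", "Version": null }
    ],
    "DSCResourcesToExecute": {
        "WebAppFolder": {
            "dscResourceName": "File",
            "DestinationPath": "D:\\WebApps\\WebApp",
            "Type": "Directory",
            "Ensure": "Present"
        },
        "WebAppPool": {
            "dscResourceName": "xWebAppPool",
            "Name": "WebApp",
            "autoStart": true,
            "Ensure": "Present"
        }
    }
}
'@

The speaker mentions later that the module's functions also accept input objects, so a here-string like this can be tested locally without ever writing it to disk.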
So what this config consists of is creating two directories. It's using xWebAppPool to create an app pool and define some properties there. Again, you can see the types: some are string, some are bool, some are int. Then it will create a web app.
This is the fun part with the CIM types. So in DSC some resources use CIM types, and they can either be an array of CIM instances or just a single instance. And so that is why you see the array here for the authentication information. And this cNtfsPermissionEntry
is also one that uses a CIM type, and that's setting the ACL on the directory so that the app pool for that app can log to a specified directory. And so what you can do is use this as kind of a template. So when new applications come up you can just copy and paste whatever their names
or their folder paths are and keep it very consistent. That was another problem that we had: someone would submit a ticket and they would put in a different log directory, or they would name the app with hyphens or underscores, and this keeps it really consistent when you spin up new APIs with microservices. So any questions on the DSL itself?
Here? Yeah, it is. It's how I'm handling that — I do some detection later on in the convert function
to see if it needs to be a CIM type array or a CIM type single instance. And so that's why that's there. Because it could be an array, and so I build the PSObject accordingly. To give more context for this, this is the JSON that a converter function I've written takes and converts to a PowerShell custom object
to then hand off to Invoke-DscResource. And so it's probably not the perfect way to do it, but it's worked for us so far. A lot of people have done this — there's JSON, there's YAML, there have been others, PSD1, yep.
So we'll see in the demo — I think it'll work. But thanks for pointing that out. Any other questions on the DSL? So at this point, you've seen that this lives with the developer's code base, and what happens is the CI process builds that artifact and then pushes it to Octopus.
So we come over here to Octopus Deploy. And we can look — so in Octopus they have a packages library, and if you come in here, I have several of them uploaded, but one of them is the web app. So this would be the application that was pushed from our CI pipeline into Octopus. And so we can download this and open it in NuGet Package Explorer.
Now if this was a real web app, there would be more code in here, but you can see that this is just the configuration document. I don't know if that zooms in, but it lives alongside the code. And there are a few reasons we chose to do that. It gives more ownership to the development teams
but also gives them a lot easier access to changing their own configurations. So what we've seen now after we've implemented this is teams have been doing pull requests to each other's repos or to their own repo to fix their provisioning issues and we've seen almost all provisioning issues
completely go away for applications inside our ticketing system. And usually when the tickets do come in we say, hey, you know you can fix this yourself. Oh great, I'll take care of it. So that's been phenomenal to see — giving the customers a tool that allows them to help themselves.
And so that's the real value of what this has done so far. Okay so the next thing that we'll talk about is the actual project. So it's fine and dandy, there's the DSL, developers can write that but how are you leveraging Octopus to deliver these? And so we'll take a look at the web application.
So in Octopus, Octopus has the concept of a process, and these processes are run sequentially, and you can actually have conditions on which they will run. So the very first thing that it does is it deploys the web app. So this is a NuGet artifact that was uploaded to Octopus. Octopus has Tentacles —
those are the clients — and it pushes the package down and runs everything in the local context. It's not like a CI server that remotely invokes everything; it all runs locally. So it would deploy that down. An important thing to note here is how Octopus targets. So this project is gonna get deployed to an environment,
say development, but how does it know which machines to go to? The concept behind that is what are called roles, or tags, in Octopus, and we use them. So this is only gonna get deployed to the web server role, web-server. So if we were to look into infrastructure,
Tentacles healthy, good, good, the lab won't fail. You can see that we have web one and it's tagged with a target role of web-server. So that's how we can control what goes where. So if you wanted something to go everywhere, you could create a role called "all" or something like that and it would go across everything. But usually you want different types of configs to go to different places.
So if this was web app one, it would go to the servers that need web app one. And that way it gives you the ability to deploy to multiple different targets through roles. The other thing about this step is that it has two features enabled on the deploy. You can do JSON configuration variables,
which allows you to transform just the JSON object stream. That was a reason why there are objects at the top and bottom: so we can go object by object and then replace. So if we were to go back and look at that configuration — we'll take a closer look later — I could go DSC resources to execute, web app folder, destination, and change this.
Change it here without having this weird Octopus variable in there. So I could run it locally, or I could run it in Octopus in the context of the change. Like I said, the reason for doing the Octopus variables is that the JSON feature tries to interpret your JSON for you, and so if you have a lot of escape characters,
it gets messy. And then the other feature is to substitute the variables, and that's exactly what you saw there. So it's gonna find an Octopus variable and it's just gonna substitute it, so when the package gets deployed to the node, it has the variable that it needs. Any questions there? Feel free — questions are welcome
throughout the entire talk. So if we go back, the next thing is the Invoke DSC App Config step. So this is actually a step template, and in Octopus, a step template is a way to wrap commands in PowerShell or whatever language you wanna use.
And so you can see right here, I have two parameters. I have path and repository, and then I have a little link here to the actual step template. If you look at the path, it's got another weird Octopus variable. That's actually referring to the previous step and saying where did you put your payload,
and then it's adding the configuration file name here at the end so it knows what to invoke. So that way you don't have to push it to a custom location; Octopus has a default location you can use. Next, I'm pointing it to a repository. So this could be an internal Artifactory server, or it could be the PowerShell Gallery,
or some other feed that you could pull NuGet packages from. And this is very important, because if the node doesn't have the DSC resource modules that it needs, it needs a way to get them. And so that's where that modules object comes into play. It goes through there sequentially and then downloads them from whatever feed you have.
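As an illustration of how those two parameters might be filled in (the step name, the config file name, and the exact Octopus system variable here are assumptions, not taken from the demo):

# Illustrative parameter values only -- the step name ("Deploy Web App"), the
# config file name, and the output variable path are assumptions.
$Path       = "#{Octopus.Action[Deploy Web App].Output.Package.InstallationDirectoryPath}\webapp.iis.config.json"
$Repository = 'https://www.powershellgallery.com/api/v2'   # or an internal Artifactory/NuGet feed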
Someone asked about the PS Gallery. So the question was: can that repository field check against two sources? The way it's currently implemented, no, it can't. It only goes against the one. It just wraps Find-Module and Install-Module with some sorting logic to fix the weirdness.
So it doesn't do that. Something to mention is you can add those feeds to Octopus by going to library and external feeds. So you can see I have the external feed of PowerShell. I can also pull internally. So if you felt the need, you could upload them all to Octopus if you wanted.
If you don't have a NuGet server or any kind of artifact source that you could pull from, and you don't wanna pull from the gallery, you could just use Octopus itself. You would just upload them to packages like I have here. So if I didn't have internet access, I could have them all in here and use Octopus as an offline way to distribute them.
So great question, thank you. Back to the step template. So like I said, Octopus has these concepts of step templates. The idea there is that you can reuse the same code multiple times and share it across and update all of them. It's not easily source controlled at this point. So what we opt to do is kinda keep it to a minimum.
Can you see that okay? Is it kinda small? Is it okay? All right. So it's rather simple. It tries to import the InvokeDSC module. If it can't, it just stops; it throws. Because there's no point in trying to invoke the configuration if that module dependency isn't there. So the next thing that it does is
create a nice little splat hash table there and then pass that to Invoke-DscConfiguration. Again, here's where it's using that path variable and the repository. Invoke-DscConfiguration is a wrapper function inside that module that both does the install of the modules —
that's a public function — and then also invokes the DSC. So it basically wraps both of those, so you can point it at a configuration, pull down the module dependencies, and invoke the configuration. It also takes input objects. That was a pull request I got as soon as I put it up on GitHub: why doesn't this accept input objects?
So you can put JSON here-strings in here as well. So if you didn't actually want a file on disk, you could just pass it in from a code block and run it. We wanted to have some kind of historical record, and since it's coming from source as a file, it made sense. So that is all of that.
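Pieced together from that description, the step template body is roughly along these lines — a sketch, not the verbatim template:

# Rough sketch of the step template described above -- not the verbatim code.
# $Path and $Repository arrive as step template parameters from Octopus.
try {
    Import-Module -Name InvokeDSC -ErrorAction Stop
}
catch {
    # No point invoking the configuration if the module dependency isn't there
    throw "Unable to import the InvokeDSC module: $_"
}

$splat = @{
    Path       = $Path        # config.json laid down by the previous package step
    Repository = $Repository  # feed used to pull down any missing DSC resource modules
}

Invoke-DscConfiguration @splat -Verbose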
Go back, and then here's my cheating web app. I thought about learning some C# and getting a real web app going, but I didn't have enough time. So it's just gonna Write-Output "Deploying web app...". But ideally you'd have something like MSDeploy
or something like that there that would deploy your web app. So now what we'll do is go ahead and create a release for this. And notice here one thing that I should have mentioned about the artifact.
See how that NuGet is named 001 and there's a version here, 001? That's tied to the release. So you actually have — and that was a huge benefit here — the version of the configuration that was run against that node. And since it's in source now and their build pipeline versions all their NuGet packages, that configuration code is now versioned.
So you have an idea of what broke and in what version. And that's all captured in the Octopus release. So we'll go ahead and create the release for 001 and we'll deploy it to the development environment. It might take a couple — maybe a minute — to acquire the packages.
So questions will fill up the time. Right now it's acquiring the packages from Octopus. It's gonna deploy them, transform the variables, run the configuration, and deploy the web app. And we'll walk through this. The other benefit here is we now have logs, inside the same release that the code went out in, of how the app was provisioned.
So it went ahead and you can see right here it's transforming variables in this file because it found some Octopus variables in there that it's gonna transform. I have some JSON ones that replace the same value and then it's also using it to replace the Octopus value.
If we come down here to the app config step, the helper function, the InvokeDSC piece, is written the same way that DSC would execute this: it tests whether the condition is true or false. That was another huge benefit of this. Before, we just had brute-force partial scripts that go in like, hey, we're gonna delete this directory and then we're gonna create it again no matter what,
even if nothing's changed. So this is a softer touch with that. Again, an advantage of the DSC resource. Oh, no problem. I invoked the phone over there through Octopus.
I could do anything with an internet connection, right? But you can see here that it's executing on web one and it goes sequentially through those resources. So instead of using DependsOn — because Invoke-DscResource calls are independent, there's no way to tie those together — it's a sequential execution. So it goes through and tests the web app one,
or rather the web app folder, and then does the set. I'm still working on cleaning that up; hopefully I can steal some more of Brandon Owen's code to clean up Invoke-DscResource so it doesn't put out verbose output when I don't want it to. But it goes through and does test, set, test, set, test, set, because it was a new box and none of those resources had the changes yet.
So if that wasn't the case and nothing changed in their provisioning, on the next run we would just see tests and no sets. Any questions there? Yep, yep, native DSC would do the same thing.
I think native DSC tests again after it sets; that's not in here. But how I get around that is with some Pester tests — a sketch of that kind of check follows below. But you can see that it outputs; I tried to mimic the verbose logging of DSC as much as possible. Octopus is a little bit weird: you take the verbose of PowerShell and you verbose it again with Octopus, and it gives you a ton of stuff.
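Those Pester checks aren't shown in the demo, but a minimal example of that kind of post-provisioning test might look like this — the site name and log path are made up:

# Hypothetical post-provisioning checks -- site name and log path are made up.
Describe 'WebApp provisioning' {
    It 'created the log directory' {
        Test-Path 'D:\Logs\WebApp' | Should -Be $true
    }

    It 'created the IIS website' {
        Import-Module WebAdministration
        Get-Website -Name 'WebApp' | Should -Not -BeNullOrEmpty
    }
}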
So these are actually output messages that pull in what resource it was using and also the name you gave the resource. So it's "access log", but it's actually using the cNtfsPermissionEntry resource.
And then there's my fancy Write-Output. So for the next thing that we'll take a look at — I'm using containers here, so I actually have the volume mounted for the applications.
And if we go to applications. So this is the default location that Octopus would deploy on these artifacts. And what I'm gonna do is look at the artifact after it was deployed so you can see the variable substitution. So it goes underneath applications and then you can see all the projects
that were deployed to this. We're gonna go into the web app and we'll go into the most recent one. And we'll open up the web app IIS config. So here again, there was the old one without the transform and here is the new one
with the transform. So you can see it replaced the JSON in that way. So how did it do that? It did that through Octopus variables. And so if we come here, there are different scopes. There are different ways to have Octopus variables. You can have projects, you can have local. All of these are local variables to the project.
So if you delete the project, the variables go away. In Octopus you also have the concept of scoped variables. You can see the bottom two are scoped only to development and the rest are undefined, meaning they will apply to every environment that you deploy to. But the bottom two will only transform, or apply, when it's pushed to the development environment. And that's how you can get a config management
system out of this: by using these variables to change values as they progress through environments. Because oftentimes you don't want the same password in all your environments. So again, here is how you would navigate with the JSON transform. You have to specify your top-level object, which is DSC resources to execute.
The next is the app pool. And we can see that path here: we go DSC resources to execute, then web app pool, and then idle timeout action. So idleTimeoutAction is what we're transforming when this gets deployed, and it's being set to Terminate.
The same thing for the app pool name; it's just gonna be web app. The log application — this is the name of the web app in IIS. And then down below are the destination paths. Now notice how these have double slashes and this one doesn't.
And that's because Octopus will literally replace the text with whatever is there. Because I'm using a JSON transform here, it handles some of that escaping for me. So for this one, if I were to put it in as straight JSON, I would need two backslashes for the IIS app pool \ web app path. But because it knows that I'm going to transform
a JSON file, it does that for me. And that's where, if you have a lot of escape characters, it gets weird. Like if you were gonna try to put a regular expression in here — which I did do. Didn't work very well.
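To make the two replacement mechanisms concrete, the project variables behind this transform look roughly like this — the names and values here only approximate what's shown on screen:

JSON configuration variables (colon-separated path into the document):
    DSCResourcesToExecute:AppPool:idleTimeoutAction = Terminate
    DSCResourcesToExecute:AppPool:Name = WebApp
Straight substitution variable, referenced in the JSON as #{WebAppLogPath}:
    WebAppLogPath = D:\Logs\WebApp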
So the next thing that we'll take a look at is how InvokeDSC got there in the first place and how I handle the distribution of that module. So this was another fun learning experience that I got out of this project. We're now heavily dependent on InvokeDSC —
it's provisioning all of our IIS apps. And one time I introduced a breaking change and it broke three of our testing environments, our QA environments. And I said, you know what, there's probably a better way to push this around instead of just blasting it out to everything that needs it.
And one of my teammates came up to me and he was like, well, why don't you just use Octopus Deploy to deploy your module across the environment and bootstrap your dependencies through that. And I was like, you know what, that's a fantastic idea. So that's exactly what I did here. So this actually has a whole pipeline to it as well. So it has its source internally, it goes through CI builds,
it runs all my Pester tests against this module, and then it pushes a new version to Octopus and it creates a release from TeamCity in here and it goes to our dev environments for vetting. So if you look at the process here, there are some hard dependencies that this module has. It needs a specific version to handle the bootstrapping
of PackageManagement. There were a lot of breaking changes in the early versions of PackageManagement where the cmdlets — the parameters — changed. And so we needed to control at least a few modules. We use Pester to validate the changes that this has made, since we're not using Get-DscConfiguration or a MOF that we can pull all that information out of. So we're using Pester to validate that.
So we wanted to make sure the latest version of Pester is out there so we don't have to worry about syntax errors in our Pester tests. It does, of course, deploy InvokeDSC. And we had some fun with the max envelope size returning data back to Octopus, so we set that. And then for my lab, I got lazy and I wanted to make sure that the PowerShell Gallery was trusted,
so when it tried to reach out for modules it didn't stop and give me a prompt. So how do I version-lock these? Like, we want a specific version of PackageManagement, a specific version of Pester, and so forth. How do we do that? And the answer is, in Octopus
you can set versions and channels. So the channel here is the default channel. You can have multiple channels, and that's how things deploy. So a channel, for instance, might be: you have to go to dev before you can go to stage, and you have to go to stage before you can go to production. Those channels and lifecycles will define that.
But an advantage of this is that you get to define version rules. So whenever a release gets created, it's never going to allow you to pull down an artifact that is higher than version 1.1.7.0 for PackageManagement.
The same is true for Pester; I'm using version 4.1.1. And then for InvokeDSC, it's actually tied to whatever the latest release from the build is. So if version 2.0.93 came out, this would have to get updated.
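Octopus channel version rules use NuGet-style version ranges, so the rules just described would look roughly like this — illustrative only, since the exact ranges aren't shown in the demo:

    PackageManagement    (,1.1.7.0]    anything up to and including 1.1.7.0
    Pester               [4.1.1]       pinned to exactly 4.1.1
    InvokeDSC            (no rule; it always takes the latest release from the build)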
Any questions so far? So let's take a look at this deployment here.
So it goes through and copies those. Let's look at the process, actually. So here's a nice little trick that you can do to place it in the directory it needs to go to. DSC pulls everything from the PowerShell module path under Program Files,
and so I'm just giving it the Octopus NuGet package version here. That way I don't have to come into these steps every time and change the version; it's just gonna take whatever that NuGet version is, create a directory, and put it in there. I think I'm way under on time, so we get bonus material.
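Concretely, the trick is a custom install location on that step built from the package version variable — something like the path below. The exact variable name depends on the Octopus version (older versions call it Octopus.Action.Package.NuGetPackageVersion), so treat it as an assumption:

    C:\Program Files\WindowsPowerShell\Modules\InvokeDSC\#{Octopus.Action.Package.PackageVersion}

PowerShell resolves version-named module folders under that path automatically, which is why dropping each release into its own version directory just works.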
So any questions on the IIS application provisioning itself, how developers can contribute to that, and the infrastructure around that? Okay. Okay. Very good. So, bonus time.
So with this, we ran into some issues where — okay, this is working great, it's working for all our existing web servers — but what happens when the infrastructure team introduces a new node and it doesn't have the web server feature installed? Failure. That's what happens — I'll fill in the gaps for you. So I looked out to, again, the community,
and I kind of — so we've got this nice tool that abstracts to JSON; we like that. I personally started to enjoy writing DSC more in JSON than I did in native DSC, mainly because I didn't have to compile the MOF and go through that rigmarole. It was just really handy for me, and plus I could test really quickly with the hash tables — or with the here-strings, rather —
and just put it in there and test locally on my systems. So it was fantastic, and we took that a little bit further and we said, how can we, because we're not gonna get approval, or maybe it's not that we wouldn't get approval, we kind of didn't feel like fighting to get approval across the entire organization for a CM tool. How can we take what we've written and use it to do config management?
And what we came up with was Ansible-ish. So, because it was heavily, heavily influenced by a lot of the stuff that Ansible has done. And before we get into the process, because it's really simple, let's take a look at what the library looks like.
That is not fun to read. This is all bonus and on the fly. This is on GitHub; it's part of the same project that all of this actually lives in.
This will work great. So what we did was we decided to mimic Ansible as much as possible. And we have these JSON documents and they're all defined by roles. And so if we come in here, we can see that we have two different roles. The problem we wanted to solve is, this box doesn't have the web server feature installed.
We don't want to remote into it and install it, or invoke PowerShell remotely, or put it as part of the application deployment, because that would be silly. Why does the application deployment need to check that the web server is installed every single time? So we came up with this. We devised the common.web.json. And what this does is it uses that same DSL
and goes out and installs the web server feature and then just some random other web stuff that I wanted to have on here. So this one is obviously more encompassing. It kind of bootstraps some directories and some file permissions. Anything that we really want, we can do in here. And we can deploy it with Octopus. We also have another common one.
And we gotta put Chocolatey in there, because that's how we're gonna get our software. We're not gonna have random PowerShell installers or have to log into the box. A lot of the use cases were for build agents, and they just required a bunch of software to compile code. So we just used Chocolatey for that. So how does this all work?
Most of it is inside this inventory file. Inside this inventory file, you have a hosts object, and then inside there, you define all your hosts. If you've used Ansible, it looks really, really similar. So I have web one, which is the Tentacle that I have in my lab. And then I have a few different environments.
The Octopus environment is gonna be development. So this can be read and hooked into something like PowerCLI, which is what we have for the VMware stuff. So it could hook into that, or it could hook into Hyper-V if you wanted to write a module to link it up. Because really, now you have a document that defines the way your infrastructure should look. A basic one.
So you could at least create those VMs, or modify those VMs, from it. So it uses those two variables to define how it's gonna be put into Octopus: it's gonna be in the development environment and it's gonna have the role of web server. And it's gonna have these roles — because we wanted to make it as agnostic to Octopus as possible. We didn't know if we were gonna continue to use this, and we didn't know if we were gonna continue
to use Octopus for it. So it goes through and it will invoke these roles in that order. And the only thing that Octopus is doing is orchestrating this and defining deployment targets. So now that you have an idea of what the code base looks like, we can take a look at the app deploy.
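Before that, here's a rough sketch of the inventory shape just described — the key names are illustrative, not the project's exact schema:

# Illustrative sketch of the inventory document -- key names approximate the
# Ansible-style layout described here, not the project's exact format.
$inventoryJson = @'
{
    "hosts": {
        "web01": {
            "octopus_environment": "Development",
            "octopus_role": "web-server",
            "roles": [
                "common",
                "common.web"
            ]
        }
    }
}
'@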
So here's Ansible-ish. It deploys the package down. It can use the same variable transforms as before, so that's a non-issue. And then it goes through the invoke playbook step. The invoke playbook is a little bit more complicated than the last one, because as we ran through this, we iterated and said, you know what? Sometimes we gotta reboot these things. Sometimes we wanna exclude roles.
Sometimes we wanna say, only run this one. And so that's what you can see here: it grabs the Octopus machine name. You then specify the path to the file that you're gonna call. You give it a repository so it can obtain modules. And then you can do include and exclude,
and then you give it the inventory name. So this one step will run against all the servers in that call and then invoke the role. So it's independent of Octopus. If we look at this PowerShell code, it's a little bit more complicated.
So the invoke playbook function is actually dot-sourced in, so we can source-control that step template. And then the rest of it is basically just the construction of the call; the parameters are being fed into it. So we're doing the if/else and include logic,
and we're splitting the string into an array as it comes from Octopus in a variable. And then we're invoking the playbook command. So that playbook is actually source-controlled, along with the inventory file and all that good stuff. So we can take a look at this in action.
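Assembled from that description, the step template does something roughly like the following — the function and parameter names are approximations of the speaker's tooling, not its verbatim code:

# Approximation only -- Invoke-Playbook and these parameter names are
# assumptions based on the description, not the project's actual code.

# Dot-source the playbook function so the real logic stays source-controlled
# alongside the inventory file, and the step template itself stays small.
. "$PSScriptRoot\Invoke-Playbook.ps1"

# Octopus hands the role lists over as strings; split them into arrays.
$include = $IncludeRoles -split ','
$exclude = $ExcludeRoles -split ','

$playbookArgs = @{
    Name       = $MachineName     # the Octopus machine name, e.g. #{Octopus.Machine.Name}
    Path       = $InventoryPath   # path to the inventory file in the deployed package
    Repository = $Repository      # feed used to pull DSC resource modules
    Include    = $include
    Exclude    = $exclude
}
Invoke-Playbook @playbookArgs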
I'll just redeploy version 001.
So we get to the same process. The community lightning demo yesterday showed how you could use DSC through that, and they were running a bunch more steps than I am here.
I just chose to extract more of it out into PowerShell and into a code repo than to have Octopus handle all those different steps. But this is a way that you can use Octopus, a tool that maybe you have. We chose to use it because it was a tool that we had and my entire team was familiar with it. And it made a lot of sense for us
to at least experiment and prove out the theory of config management in an environment and get a quick win, so to speak. But you can see here that it reports back to you: hey, I found this configuration that you were expecting to run. It runs it, it tested it, it passed, so it didn't set it. And then it went and found the web common — or the common web — and ran the test against the features.
So whenever we ran into that issue, we would just run this project versus remoting into the box, and now everything is self-contained in our CI and CD tools. Bring this back up.
Shift F5 would have been better. So the moral of the story. It's okay to build new tools. You can use familiar systems, and that's gonna be a great advantage because your entire team doesn't need to skill up on a new tool.
This was just using PowerShell and Octopus. We were the deployment engineers, so we knew Octopus intimately. So it was just a good fit. It made a lot of sense. Help them help themselves. So we're reducing those tickets, and we actually freed up a lot of bandwidth. Automating the provisioning gave us the bandwidth
to actually do the config management tool stuff. So you gotta help your customers help themselves. And who you build automation for and how they use it is actually way more important than the automation itself. If you write a really cool function and you gotta run it all the time, you still gotta run it all the time. Put that extra effort in and make it more usable from your customer standpoint.
So this has been application provisioning with DSC and Octopus Deploy. Thank you for attending.