
One MOF to rule them all, and in the Azure bind them


Formal Metadata

Title
One MOF to rule them all, and in the Azure bind them
Title of Series
Number of Parts
60
Author
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers
Publisher
Release Date
Language
Producer
Production Year
2018

Content Metadata

Subject Area
Genre
Abstract
DSC is typically node specific. This talk explores a case example where both standard and custom DSC resources will be leveraged to break out of managing many MOF files or partial DSC configurations. One MOF in Azure can be used to dynamically configure all devices throughout your environment. Desired State Configuration (DSC) is a powerful DevOps tool enabling you to provide a consistent, standardized configuration throughout your environment. On its own DSC has a narrow focus on individual target nodes requiring you to author many Management Object Format (MOF) files, or get creative with partial DSC configurations. This talk aims to strip back some of the mystery of DSC and explore a case example where we’ll leverage both standard and custom DSC resources to create one “smart” MOF capable of dynamically evaluating devices throughout your environment, and configuring them appropriately. We’ll tie all of this into Azure Automation to show you how you can leverage its capability to rapidly create new devices and ensure their standardization moving forward.
Transcript: English (auto-generated)
Alright, so good morning. My name is Jacob Morrison. I am a private cloud engineer from Rackspace and I came to Summit last year to learn about DSC. I was very excited to
bring it back to our organization to start doing some stuff with it. Because we're a managed service and hosting provider, we have a scale problem. And I was looking down the barrel of creating on the order of around 100,000 MOFs to do what we were trying to do. And I don't know about you, but I didn't want to be responsible for managing 100,000 MOFs. And so basically out of sheer
desperation, I kind of came up with a way to get it down to one MOF, one dynamic MOF. And that is our goal today: discussing how to achieve that. My goal is that you're going to leave this room with the capability, armed with the ability, to do that. This is not a 400-level DSC class, but it's also not a 101 DSC class. So I'm
hoping that you have some familiarity, as we approach the subject, with the basic authoring process and the general workflow of authoring a MOF, staging it, and then executing it so that the LCM can consume it, creating a pending MOF and a current MOF. If you're not familiar with this particular process, you're going to be a little bit lost, I think. You're still welcome to attend; I think you'll still get through it, but it'll seem like we're going a little fast, since I'm not going to be going over the intricacies of DSC 101. So with that being said, this is probably what you're doing today if you're interested in my particular talk. You're either using a push model where you're authoring
MOFs and pushing those respective host name MOFs to their respective end user computers or servers, or you're using some type of pull method, either on-prem or in Azure Automation, and again authoring those MOFs and they're pulling down their respective MOFs from that location. And you have something that probably looks a little bit like this,
depending on how many devices you have in your environment: a MOF for every single node or host name in your environment. Now, through some simple trickery, it's easy to get this down to a single MOF. You can use localhost and other things and get it down to one MOF to manage. But the issue is that when you go down to that single MOF, you have to
stick with very generic configurations, and it's very complicated and oftentimes implausible to get that single MOF to do some more dynamic things and start treating different servers differently. So the single MOF isn't enough. We have to take it a level further and create a smart MOF. This gives us more dynamic capabilities and
makes the MOF intelligent, makes it more flexible and allows us to do things with multiple devices. How can we do that? Here it is right here, the secret recipe. We're going to obviously use localhost so we can bypass the requirement of the host name. We're going to use some custom DSC resources. We're going to leverage some locally sourced configuration
data and we're going to tie all that up right now into Azure Automation DSC. This is going to be very demo heavy. That's basically the end of my slide deck and we're going to now cross our fingers for the demo gods as we progress through the rest of this. We're going to take this nice and slow and do this in stages. We're going to start very basic and then ramp up the complexity level and get a little bit
more interesting. So here we go. Basic configuration, DSC, nothing special. I've got a node declared here in my configuration for the first demo, and we're going to use a very basic setup, the WindowsFeature resource, and we're going to not allow
the Telnet client to be installed in our environment. We're going to ensure that that's absent. We also have a basic requirement for a directory to be present, and that's going to be a required directory on the C: drive. This is the simplest DSC that you could possibly configure: ensuring that one role is not installed and that one folder is present. We could make this very complicated, but for the purposes of the demo we're going to keep it nice and
simple. If you compile this, it would create a server one MOF. Pretty simple. Now you could of course copy this code and run it with a different server name, server two, server three, server four, server five, so on and so forth, and make many MOFs to deploy out to your environment. Or you could do something like this, where you
simply declare a second server using the same exact code base inside of the same DSC configuration. You can compile this out, and let's look at what that actually does. If we look in here and take a look at our test folder, where we're going to compile that out, go ahead and clean this up. If we run this, because we declared both
DSC server one and server two inside this particular configuration, you'll note that two MOFs are generated. This is pretty basic. Let's go ahead and execute a push on that, and push that out to our server, removing that role and creating that configuration. We'll do this with a CIM session. We'll
go ahead and declare our server name into a variable. Can everybody see okay by the way? Text is good? Okay. We'll go ahead and give some credentials so that we can actually authenticate to that device and push out to it. We'll declare our CIM session, and then we'll go ahead and initiate a start DSC configuration against that CIM session that we're about to create against that device.
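The configuration and push flow just described might look something like the following sketch. The server name, credentials, directory path, and output path are all illustrative assumptions, not the speaker's exact demo code:

```powershell
# Sketch of the demo flow; names and paths are assumptions.
Configuration DemoConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'Server1' {
        # Ensure the Telnet client role is NOT installed
        WindowsFeature TelnetClient {
            Name   = 'Telnet-Client'
            Ensure = 'Absent'
        }
        # Ensure the required directory exists
        File RequiredDirectory {
            DestinationPath = 'C:\RequiredDirectory'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Compile: produces .\DemoConfig\Server1.mof
DemoConfig -OutputPath .\DemoConfig

# Push it to the target over a CIM session
$server  = 'Server1'
$cred    = Get-Credential
$session = New-CimSession -ComputerName $server -Credential $cred
Start-DscConfiguration -Path .\DemoConfig -CimSession $session -Wait -Verbose
```

Note that `Start-DscConfiguration` takes only the folder path; as the speaker explains next, it picks the MOF whose file name matches the target's host name.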
Let's go ahead and execute that now. We'll get prompted for those credentials for that device. You'll notice that it immediately kicks off the LCM, passing along that DSC config. Our telnet is not allowed. It verifies that it's not installed.
If it doesn't find the required directory, it goes ahead and creates it. This is DSC being amazing and doing the thing that DSC does. The thing to note here, and why I demo this, is that we did not declare the name of the MOF here. It is implied by the name of the server itself.
When we initiated the start DSC configuration, it went to find the name of the server that it was talking to, and it expects to find an exact match for that host name inside of that location. Because it did, it successfully pushed that out to that particular server name. That's very important and something to keep track of as we move forward. That's a basic push-out with a
like-for-like host name, but we have not achieved a single MOF to rule them all. In fact, we have multiple MOFs, and we also have a very fat configuration file because we had duplicate code. If we wanted to do this again for server 2, our DSC had the same code duplicated for both server 1 and server 2.
So let's see if we can improve upon this particular method right here. We're going to leverage a very special variable called $AllNodes. $AllNodes is awesome because it allows you to remove the duplicate code from your configuration. Note here that I don't have this declared twice in this example.
Instead, what I do is I declare an array and populate all of the server names that are going to be compiled for that particular config. So here for my configuration, again, the code is declared once, and my config has now had a diet and is much slimmer. So if we clean up our test example right here, and if we load up this array and compile this out to the same location, you'll note that because we declared three servers this time, we now have three MOFs at the end of our compilation.
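One way to express that de-duplicated configuration is with configuration data and the $AllNodes variable. This is a sketch with illustrative node names, not the speaker's exact code:

```powershell
# Configuration data: one entry per server (names are illustrative)
$configData = @{
    AllNodes = @(
        @{ NodeName = 'Server1' }
        @{ NodeName = 'Server2' }
        @{ NodeName = 'Server3' }
    )
}

Configuration DemoConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # $AllNodes comes from the configuration data; the body is declared once
    Node $AllNodes.NodeName {
        WindowsFeature TelnetClient {
            Name   = 'Telnet-Client'
            Ensure = 'Absent'
        }
        File RequiredDirectory {
            DestinationPath = 'C:\RequiredDirectory'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Produces Server1.mof, Server2.mof, and Server3.mof
DemoConfig -ConfigurationData $configData -OutputPath .\DemoConfig
```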
For some of you, this may be enough. At any point as we stage up this complexity, you may find yourself thinking like, that will work for me, and that's cool. At any point in time, you may find that magical component that seems to make sense. So again, we have simplified the configuration, but we still do not have
a single MOF yet. So we could go ahead and push this out again, and again, it would do the exact same thing that we just demoed in the initial push. I see a few people taking pictures. I forgot to mention at the beginning, we're going to be demoing a lot of code today, and all of it is on my Git, which I'll link at the very end.
So you'll have access to all of this code. You'll be able to run it in your own environment, and the slide deck is also included. And I'll have that, again, linked at the end. So feel free to focus on the demo if you like. I promise you, I'm going to give you every single thing that we're showing up here today. Okay. So again, we put our guy on a diet,
but we still got three MOFs, or as many MOFs as we need in our environment. We haven't achieved what we set out to do. Enter localhost. localhost is very special in DSC land because everyone is localhost. Every server or every device you've ever used has that name. And because DSC is very sensitive to the actual host name,
it likes localhost, and the server likes localhost as well. And so we can work with this. And you should be wondering at this point, like, why did Don give me a one-hour-and-15-minute session to tell you about this? Because that seems incredibly simple, but we're going to talk about some trade-offs here as we engage with localhost.
So we'll go ahead and clean up our compilation here and remove our previously compiled MOFs. And you'll notice here that I'm no longer declaring new servers. We'll go ahead and declare a new directory because we got a new boss and he wants a new directory in our environment. And you'll notice that when we compile this out that we no longer get a
specific server name. We get one MOF, named localhost. And this is what I like to call the single dumb MOF, because this is great and could function. And if you're doing a very generic deployment in your environment, let's just say something very simple like every single server in my environment
requires the SCOM agent. So I'm going to use DSC to deploy SCOM throughout the entire environment. Great. This one localhost MOF can do that because it'll do the same thing on every single server in your environment. You can deploy it out as much as you like and it will do the same thing to every device. But if you want to start becoming more dynamic and doing other things, this is not sufficient. Let me explain why.
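First, here's a sketch of the localhost configuration just compiled. The directory name is an illustrative guess at the "new boss" folder from the demo:

```powershell
Configuration DemoConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # 'localhost' replaces any specific host name
    Node 'localhost' {
        File NewBossDirectory {
            DestinationPath = 'C:\NewBossRequest'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Produces a single localhost.mof, no matter how many targets you have
DemoConfig -OutputPath .\DemoConfig
```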
Okay. So first, before I get into that, let's talk about some things we lose with this method from a push perspective. So if we go back to my execute push, and let's see if we still have that active CIM session engaged. We do. You can see that here in the console. What I can do is I can go
ahead and push that out again. And as you can probably imagine, we're going to get a bit of a bloodbath here in the console window. Why is that? Well, the reason that we can't push this any longer is because the computer-specific MOF file for server one does not exist in the current directory. Again, and as I pointed out on our
first demonstration, this path, you're not allowed to specify the name of the MOF. You can't specify localhost here. It expects to see the name of the server. Now, there are some workarounds here. So what I'm trying to articulate is that when we renamed this to localhost,
we achieved the ability to use one MOF, but we lost the ability to natively push it, if that makes sense. Okay, now there's some workarounds here, right? You could get creative with renaming this localhost MOF to the names of the servers that you're going to be engaging with. Well, that seems really not efficient. So you could do that, but don't do that.
Here's another workaround that you could engage in this particular scenario. Again, we're going to declare server one as the server we're going to be interacting with, and we're going to give it credentials again. Same thing, no difference there. But instead of creating a CIM session, we can create a PS session. Well, why would I want to do that?
Well, because I can copy stuff over a PS session. I can copy that localhost MOF to the server. And while I cannot execute the start DSC configuration from my push device any longer, I can get tricky and invoke it to run locally
on the server itself. So follow the process here as I execute this. Again, we're just establishing a normal PS session to this device. It's going to prompt us for creds down here. When it establishes that PS
session, we will copy over the localhost.mof file and then run the start DSC configuration on the device itself, not from my laptop. Notice that during this push method, I get the exact same process, and it looks almost identical. You still see the LCM engaged,
you still see the LCM reporting, you still see the LCM not finding that new boss request folder, and it went ahead and created it. So this is a workaround from the localhost perspective where you can still use push in your environment if you want to utilizing this one MOF method. Questions on that so far?
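The workaround just described, copying the MOF over a PSSession and invoking the configuration locally on the target, might be sketched as follows. The server name and staging path are illustrative assumptions:

```powershell
$server  = 'Server1'
$cred    = Get-Credential
$session = New-PSSession -ComputerName $server -Credential $cred

# Copy the compiled localhost MOF to the target over the session
# (assumes a C:\DSC staging folder already exists on the target)
Copy-Item -Path .\DemoConfig\localhost.mof `
          -Destination 'C:\DSC\localhost.mof' -ToSession $session

# Run Start-DscConfiguration on the target itself, not from the push box
Invoke-Command -Session $session -ScriptBlock {
    Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose -Force
}
```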
We feel good? We like this? Okay. Again, ratcheting up a little bit more. So let's pop over, and things start getting interesting right about here. Okay. So at this point, we're feeling good. We've got it down to one MOF. We're doing things like removing that telnet. We've got those required
directories, but now we want to do something fun. We want to set an IP address using DSC. Now, we're not here to get into the debate of, like, should you do that? What I've picked is something really challenging here, which is: what will this compile down into? Can I compile this? Sure, I can compile this, right? This is a localhost configuration, just like we had a moment ago. It's referencing the xNetworking module, which allows me to set IP addresses with DSC. And if I clean this up real quick, and I compile this, I get this thing. And what that actually is,
is a pink slip. Because if you push this out to your environment, you're probably going to be let go because you've just IP'd every single device in your environment to the same IP address. This is the rigidness and the inflexibility that I'm talking about that DSC gives you out of the box.
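The kind of configuration being described, one that compiles cleanly but would stamp the same address on every node, might look like this. The address and interface alias are illustrative, and parameter shapes vary between xNetworking versions:

```powershell
Configuration BadIPConfig {
    Import-DscResource -ModuleName xNetworking

    Node 'localhost' {
        # DANGER: every node that receives localhost.mof gets this same IP
        xIPAddress StaticIP {
            IPAddress      = '10.0.0.10/24'
            InterfaceAlias = 'Ethernet'
            AddressFamily  = 'IPv4'
        }
    }
}

# Compiles fine to localhost.mof; pushing it everywhere is the pink slip
BadIPConfig -OutputPath .\BadIPConfig
```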
You can't utilize this methodology with that single MOF because an IP has to be unique. It's a unique thing to a server. Now, this is at the point where most people say, well, I guess I'm going to go back to a multi-MOF world. I'm going to create different MOFs because I do need to use this to IP my environment. This is the way that we've decided to go, and so I'm going to
go back to a multi-MOF world and start creating separate MOFs or partial DSC configurations in order to satisfy this problem. And now we're going to show you how you don't actually have to do that. Okay, so what am I doing here? I'm using xIPAddress. What is that? Okay. It's part of the xNetworking DSC module. The xNetworking DSC module is a community module, which means it's open source. It can do all kinds of neat stuff. It can set IP addresses. It can set DHCP settings. It can set firewall settings, RSC, RSS. Again, it's a great networking piece for your DSC
environment, but it is open source. Because it's open source, I can download it and I can see the code. Let's do that now. It is experimental, and so let me qualify that. So the question was, isn't that experimental?
That doesn't sound like something I should use. Okay, so it is experimental, but just experimental from the fact that you can't pop a SEV-A case at Microsoft and say, I'm having some problems with this module. Microsoft's going to be like, that's great. I can't help you. But it is still production code, if that makes sense. It's just not something that they're going to actively support if you run into any problems. It's production code that is supported
by you. You're always welcome to open a Git issue, though, and the community will definitely help you out if you run into any problems. Yeah, sometimes. Yeah. Yes. So the comment was that Microsoft has announced that they're dropping the x prefix, and that is absolutely true. The x prefix is going away, and we're
rolling it all up into normal community modules. Pretty excited about that. Okay, because we downloaded this and we have this module on our box now, we can actually explore this xNetworking module. And if we open up that folder and go into DSCResources, we'll see all the various different functions and
modules that are inside of here. And the one that our boss has given us the requirement for is IPing devices inside DSC, so we're going to be looking at xIPAddress. There are two things inside every DSC resource, and those are the PSM1 and the schema. We're going to look at both of these.
Most importantly, we're going to start off with the schema. See, the problem as we look at this configuration is that I've got to give this thing some information in order for DSC to work. And we can look in the schema and see that there are three things that are required in order for this to do its DSC job. The first is the IP address that you are going to be setting. Makes sense.
The second is going to be the alias of the network interface, so whether your interface is Ethernet or Public or whatever it may be. Then finally, the IP address family: are you setting an IPv4 address or are you setting an IPv6 address? These are the three things that we're going to have to provide in order for this to operate. Okay, so stay with me here.
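For reference, the relevant part of such a schema file looks roughly like this. This is an abridged, from-memory sketch of the shape of a DSC schema.mof, not the exact file shipped in the module:

```mof
// Abridged sketch of a schema.mof for xIPAddress. [Key] properties
// are the ones the configuration MUST supply for the resource to work.
[ClassVersion("1.0.0"), FriendlyName("xIPAddress")]
class MSFT_xIPAddress : OMI_BaseResource
{
    [Key] string IPAddress;
    [Key] string InterfaceAlias;
    [Key, ValueMap{"IPv4","IPv6"}, Values{"IPv4","IPv6"}] string AddressFamily;
};
```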
Snover, if you've listened to him yesterday or seen him talk previously, will tell you about open source and how it's great and how you should submit bugs and make help better and things of this nature, and all that's great and you should absolutely do those things. But one thing that I'm going to stress is
open source allows you and empowers you to bend this to do your bidding. You can make this do whatever you want. And so when you look at a problem like this and you're like, man, somebody's written this fantastic module, it sets the IP address for me, I don't want to write a custom module, but this isn't good enough. I want it to be more dynamic and this isn't allowing
me to do that. Well, look at it from a top-down level and think to yourself, like, man, I don't want to give it an IP address because if I do that, it locks it in. Cool. Do that. Just delete it. It's so easy.
I also don't want to give it the alias of the network interface because the alias might be different on every different server and that frustrates me, so I'm also going to delete that too. I'll keep the IPv4, IPv6, that seems cool because I probably know in my environment if I'm using IPv4,
IPv6, so I have to have a key, so I'll leave that around. We feeling good? What happens over here now? The module gets angry, right? Because if we look at the get, the set, and the test, the parameters that are required for these are going to be identical
to the schema. So not coincidentally, we see the address family, the interface alias, and the IP address. Again, these frustrate me. They're not allowing me to do what I want to do, so I'm going to do this. It's so easy. Make the thing do the thing that you want it to do.
That's the power of open source. In fact, I'm going to do that with every single one, the set, the test, and the get. I'm just going to remove them because it's not allowing me to do what I wanted to do. So we'll come down to the last one, and we'll remove that as well.
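After that trim, each of the Get, Set, and Test functions keeps only the one remaining parameter. A sketch of the shape (the real module has more ceremony than this):

```powershell
# Test-TargetResource after the trim (sketch): only AddressFamily remains.
# IPAddress and InterfaceAlias are no longer passed in from the MOF --
# they'll be resolved inside the function instead.
function Test-TargetResource {
    param (
        [Parameter(Mandatory = $true)]
        [ValidateSet('IPv4', 'IPv6')]
        [string]$AddressFamily
    )

    # ...body to follow: source IP and interface alias locally,
    # then compare against the adapter's current state.
}
```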
One thing to caution you about as you're editing these types of community modules is that Microsoft loves to use this thing: $PSBoundParameters. If you've never used $PSBoundParameters before, it's really cool. What it does is it takes the parameters that you gave the parent function and passes those same parameters on to the
next function. But because we deleted some of those parameters, this isn't going to work anymore in that next function. Does that make sense to everyone? So instead of using $PSBoundParameters, we'll actually have to declare those parameters individually. So we'll have to give it the IP address itself. We'll have to give it the
interface alias itself, and we'll have to give it the address family itself. Any questions on that change that I just made? They like to snake that in on you and it will break things if you're not careful. And I'll save this. So based on what we know so far, we're good, right? Because now I can just come
back here and we should be good to go. What's our confidence level that this will work properly? Well, it's not going to, right? Because what we've essentially done here is we've removed these two requirements.
So let me kind of explain what I mean by that. So if we copy those changes that we just made to our actual modules folder, so we'll blow this away. And if we come back up here and copy in the edit we just made, you'll notice that if I reimport the module, one way to do that is to
kind of comment it out for a second and then bring it back in. That re-associates everything. Notice that it's angry now that those two things are present. Well, this makes sense. We deleted them so it doesn't want them anymore.
But when I remove that from the config, this is going to run just fine now, but it's not going to IP anything because that IP address is no longer provided in the configuration. And this is where we're going to flip everything upside down. Okay, because we always start at the top of DSC and we tell the server through the LCM what to do.
Now we're going to flip that model on its head with a piece of code. I'm going to copy this. And if we come back to our custom module here,
inside of our test and set and get, I'm going to paste a particular code block. And I want to spend just a moment as we digest this code block. We're declaring a file name, a JSON file. We're going to import it and we're going
to load an IP address from that config file and it looks like a MAC address from that config file. And then we're going to load the net adapter from that MAC address and get an interface alias based off of that.
This is the dynamic piece in sourcing a local configuration, which is driving the config from the server up to the configuration, not from the configuration down to the server. Does that make sense to everyone at this point in time? So we can take something like this and put this on the device. The module can pick it up.
This can have any IP you want. And you may be asking yourself at this moment in time, well, where do I get that from? That's a great question for your environment.
Everyone's environment in this room is different. How will you get the configuration that you need onto the local device? It'll be part of your deployment process, part of your imaging process. At Rackspace, we query the switch, pull the MAC address, and pop the config file onto the server, because we have API capability on our switches. So we know which MAC address each individual server has. Does your environment have that
capability? Maybe it does, maybe it doesn't, but you're all very smart people, and you can figure out how to get creative with populating a configuration file like this. If JSON scares you and you don't like JSON, that's fine. You don't have to configure this in JSON. It could be a batch file. It could be a notepad file. Don't
do that. It could be something in the registry. You can source anything locally from the device to make DSC do whatever you want based on the criteria that you specify. Okay. We'll paste the same code on every single block for the Git, the set,
and the test. Again, this code is available to you today, and you can start utilizing it as soon as you get the link to the Git. This is now production ready. We can roll this, and it will source the local configuration file on the device and IP the device. So
let's go ahead and pray to the demo gods and push some buttons. Okay. So if we come over to the push, we've got our compilation. We've got our newly configured X networking. And how long did that take us to edit that? That was nothing.
We took away the pieces that were frustrating us. We replaced them with a simple import from a JSON file. No time at all. Okay. So we'll go ahead and compile this down, clear out anything that's inside of our test location, and we'll go ahead and run this,
and it compiles. Now, because we created a custom module, we have to deploy a little bit differently. We have to copy over the module itself when we're doing a push. If you're using pull, this is not a problem, and we'll cover that in a moment. But when
you create a custom module like this and are referencing it in your MOF, you have to copy over that particular networking module to the destination device, and then we'll execute this command as we normally would. Let's go ahead and load that up. It prompts for creds. It will copy over the module. It will copy over the localhost MOF,
and then it will tell the LCM to begin processing that. And you see in the xIPAddress, because of what we copied in there, we see our IP info is this. Remember, it's sourcing that locally on the server. It says the MAC address is this. It went ahead and checked to verify the IP address was correct. So we had to change very little. We leveraged
the community module to do all the heavy lifting. I didn't have to write anything. I just engaged the community function that was already there, and I just bent it to my will a little bit and tweaked it so that it sourced something local on the server to actually be smarter and more dynamic. So this is the part of the presentation where
you should be kind of thinking to yourself, there's no such thing as a smart MOF. What there is is a MOF armed with smart modules. Does that make sense to everyone? So if anyone tells you that there's a smart MOF, that's not true. There's no such thing. But you can make your modules extremely intelligent and wrap that up. Any questions so far on anything we've covered up to this point? Yes.
Yes, so the comment was that you're shifting the problem. You're shifting the problem from managing many MOFs to instead managing many configuration files.
And that is absolutely true. However, I'm going to show you in a moment why that can actually be a really good thing. And why managing many MOFs is very complicated and creates workflow issues, whereas managing many configuration problems can actually be beneficial to your team. So that's a very good insight.
I hope to address that here in a moment. We feel good? We like this so far. Cool. Okay. Now Don has kind of put the fear of God into all of us presenter people that if we, like, connect to the internet, our laptops are basically going to blue screen and die. So this next part is pre-recorded, and it's pre-recorded because, like I said,
he's made us really paranoid because he said the Wi-Fi was going to really be horrible here. So I apologize for the pre-recording. But it was a necessary thing, unfortunately. So let's get fun with this and start rolling it up into Azure.
Okay. We created a custom module. And I want to caution you, as you edit community modules, that there are some pieces you may want to consider. And that is versioning. That community module for xNetworking has a version number, and we changed the module, right? That has implications, especially if you want to update it later.
We have essentially forked that module, and you need to be cognizant of all the pros and cons that come along with that. So it may be beneficial in some circumstances to break out of that, maybe remove just the xIPAddress piece, and separate it out and rename it. That's all up to you and how you want to approach that. Just be cognizant that if you do leave it in the default xNetworking format,
there is a possibility, once you roll it up to Azure, and I'll show that in a minute, that you may overwrite your changes if you upgrade the version number inside Azure. Okay, because we made a custom module, we're going to have to zip that up and put it up into Azure. Now, for whatever reason,
you can't do that through a normal zip process. You can't right-click and add it to a zip file. You also can't use 7-Zip. It's very, very sensitive: when you zip up the module, it has to be done in a very specific way, and you have to use the xPSDesiredStateConfiguration module, specifically its Publish-ModuleToPullServer cmdlet,
to zip up any custom modules that you create. So again, this isn't super complicated. You have the code today. You don't have to do anything fancy. You just adjust it to zip up your module. So it's a pretty thoughtless process; you've just got to do it in this manner. So install the xPSDesiredStateConfiguration module if you haven't done so already, and
then we're going to go ahead and import that module so that our code will work. Once that's done, you'll go ahead and run Publish-ModuleToPullServer, and notice on the right there that when we run that code, it pops out a zip file containing our new module set, and then we'll be able to upload that to Azure
without any problems. Again, failure to do it in this way results in Azure getting really upset about the zip file. Okay, another quirk to this process is that you can't leave the version number in the name of the zip file. I put in a bug to Microsoft. Hopefully they'll fix it one day, but you have to rename it to just the core name of the module. Very important to remember if you're designing and
editing your own stuff. So we're going to rename that back to xNetworking, and now it's ready to upload up into Azure. And I'm going to do this from scratch like we've never done it before. I'm just going to Google, not Google, but search Azure for automation. I'm going to click on the automation account and
I'm going to click create and I'm going to give it a name. We're going to call it PS Summit. I'm going to associate that with a subscription and a resource group. So pretty basic. We're just going to create a brand new never before used automation account to load up our stuff.
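As an aside, the zip-and-rename preparation we just walked through might look like this in script form. The parameter names on Publish-ModuleToPullServer are from memory and worth verifying with Get-Help against your installed module version; the paths and version number are invented:

```powershell
# Sketch: package the edited module the way Azure expects (verify params).
Install-Module -Name xPSDesiredStateConfiguration
Import-Module  -Name xPSDesiredStateConfiguration

Publish-ModuleToPullServer `
    -Name             'xNetworking' `
    -Version          '5.7.0.0' `
    -ModuleBase       'C:\Program Files\WindowsPowerShell\Modules\xNetworking' `
    -OutputFolderPath 'C:\DSC\Upload'

# The quirk: strip the version from the file name before uploading.
Rename-Item 'C:\DSC\Upload\xNetworking_5.7.0.0.zip' 'xNetworking.zip'
```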
When that is complete, you'll notice that it'll start populating some of the default runbooks and other things associated with Azure automation and in a few short moments in Azure time, five minutes, you will have your automation account. The first thing we're going to do is upload that module that we just zipped up a moment ago. Notice again that this comes with some default modules installed and because it has an interlink with the gallery,
you could actually technically install xNetworking from the gallery itself. If you quit and leave and someone comes behind you, they could potentially upload or update xNetworking, thereby breaking the changes that you made to that default xNetworking. Does that make sense to everyone?
So make sure that if you are editing a community module and forking it, you're taking the appropriate precautions as you do that. So we're going to go ahead and upload that xNetworking module that we edited, which has our dynamic IP capabilities, and we're going to stick it up inside the module area inside of our brand new Azure Automation account. That will go into
pending and then available, and you'll see that the xNetworking module is now there with all of the DSC resources that are associated with it, so we know we have a nice healthy module upload. And so we're looking good and ready to start utilizing this. The next step is going to be to update our
configuration. So before I show that, one important difference as we look at push versus pull: when we go up to Azure, we're not using the push method anymore, and so we're not going to compile locally. And so for our one-MOF setup,
notice at the bottom that we no longer compile that out. That's because it's going to be compiled inside the Azure Automation account itself. So make sure that you remove your compile code at the bottom of your configuration prior to uploading it to Azure. You don't want to leave that in there.
So it is not on this one. Okay, so now we're going to upload that one MOF configuration to Azure, and this will publish that configuration to the Azure Automation account.
We'll give it a description, "to rule them all." And this will publish it, but it's not ready for consumption yet. It's just published. We can click on it. We can do things like see the code. But we can't actually utilize it until we get it compiled down.
So I'll go ahead and click on View configuration source, and notice that it's exactly the same as we had before, with our dynamic xIPAddress setup, our two folders, and our telnet client. And I'll go ahead and compile that inside of the Azure Automation account. This will actually generate the piece that we need, which is what's going to be pulled down to your devices.
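The same publish-and-compile steps can be scripted instead of clicked, roughly like this with the AzureRM.Automation cmdlets of that era. The resource group, account name, and source path are invented:

```powershell
# Sketch: publish the one-MOF configuration and compile it in Azure.
$rg   = 'PSSummit-RG'
$acct = 'PSSummit'

# Publish the configuration source (with the compile block removed!).
Import-AzureRmAutomationDscConfiguration `
    -SourcePath            'C:\DSC\OneMof.ps1' `
    -ResourceGroupName     $rg `
    -AutomationAccountName $acct `
    -Published

# Compile it in Azure -- this produces the OneMof.localhost node configuration.
Start-AzureRmAutomationDscCompilationJob `
    -ConfigurationName     'OneMof' `
    -ResourceGroupName     $rg `
    -AutomationAccountName $acct
```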
Notice that it clicks off a compilation job. It's going to go ahead and process that. Make sure you have the necessary modules. Make sure all your code is aligned correctly. It takes a little bit of time. But if you get that green check mark, you're good to go and you've got everything set up the way that you need to.
So with that compiled, we now have a one MOF localhost. And we have the dot localhost because our config had localhost in the name. This is very important and this is what empowers us in Azure automation to allow every device in our environment on-prem or in Azure to consume this particular localhost MOF.
Notice that on my end device server that I do have that local config already populated on the device. And now what we're going to do is we're going to adjust the LCM on the end device to go up to Azure.
So every server in your environment running an LCM is sitting in push mode today, right? But you can edit it with this DSC metaconfig file and point it up to Azure. Now, you're looking at this probably thinking, well, it's like a lot of code and he's not really talking about it in any depth. That's because I don't have to.
Microsoft publishes this script on TechNet. I also stuck it up on my GitHub, but I didn't write this code. Microsoft did. It works. It's perfect. You just run it and it moves your LCM from being in push mode to pull mode and brings it down. I'm going to show you the only pieces that you need to care about in this complicated setup.
And that's down towards the very bottom of the script. All the parameters that we care about are down towards the bottom. Stuff like where we're going to get the MOF from, what is our Azure account, what are our credentials, all these types of things. So again, if we look at PowerShell and we run Get-DscLocalConfigurationManager,
we'll see that our LCM is currently set to push mode. This is the default. All LCMs by default are sitting in push mode, hoping one day to become associated with a pull server. And we're going to make that happen right now. So all those settings, ApplyAndMonitor, how often it'll reboot, all those types of things are all done in this parameter stack.
So you can set up your LCM with the appropriate settings for your environment. How often it will refresh, check for a new config, all this type of stuff that you actually care about. That's the only block that is a big deal. And when you compile that down, it will create a meta-MOF. MOFs are for the server. Meta-MOFs are for the LCM.
So the MOF makes changes to the actual device, but the meta-MOF makes changes to the LCM itself. And that's what we're about to generate right now: a meta-MOF. In order to do that, we do need some stuff out of the Azure Automation account that we just created.
If you scroll down the left side, you're going to see a key thing. And it has a handy-dandy copy button on the right there for the URL of your new Azure Automation account, as well as the primary access key that will give you access to that location. We'll throw those into this script that Microsoft has provided, and this is what allows the LCM to become one
with our Azure Automation account and start pulling stuff down and reporting into it. Notice that you could also set the host name and the computer name, but because we're using localhost, it doesn't matter; we don't need it. And we are also going to tell it what config to pull down from our Azure Automation account.
So we'll go up to our node configurations that we just uploaded and compiled. We'll copy that name, and we'll stick that inside of the script to create our metamoff and our LCM.
This is all the information you need. All the other settings, like ApplyAndMonitor, or whether it should check in every 15 minutes versus 90, those are all things that you can decide on in your own environment. Notice that when we run this script, it's going to compile down into a DscMetaConfigs folder with a meta-MOF file.
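In spirit, the heart of that Microsoft script is a [DSCLocalConfigurationManager()] meta-configuration pointing the LCM at Azure Automation. The URL, key, and node configuration name are the three values you paste in; everything else is tuning. All values below are placeholders, and the exact script Microsoft publishes has more options than this sketch:

```powershell
[DSCLocalConfigurationManager()]
Configuration DscMetaConfigs {
    Node 'localhost' {
        Settings {
            RefreshMode          = 'Pull'            # the big switch: push -> pull
            ConfigurationMode    = 'ApplyAndMonitor'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded   = $false
        }
        ConfigurationRepositoryWeb AzureAutomationDSC {
            ServerUrl          = 'https://<region>-agentservice-prod-1.azure-automation.net/accounts/<id>'
            RegistrationKey    = '<primary access key>'
            ConfigurationNames = @('OneMof.localhost')   # the node config to consume
        }
        ReportServerWeb AzureAutomationReport {
            ServerUrl       = 'https://<region>-agentservice-prod-1.azure-automation.net/accounts/<id>'
            RegistrationKey = '<primary access key>'
        }
    }
}

DscMetaConfigs   # compiling emits .\DscMetaConfigs\localhost.meta.mof
```

Applying it is then one command from the folder that holds the meta-MOF: `Set-DscLocalConfigurationManager -Path . -Verbose`.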
We see the folder pop on the right side there, and if we drill into that, we'll see a clear-text meta-MOF for our device. If we look inside that meta-MOF, there are no surprises here. This is the server URL for the Azure Automation account we just populated, along with a key and the name of the config that we're going to go up and consume and bring down.
It's cool, right? Pretty neat. The last step is that we need to change our directory to where that meta-MOF is located.
Then we can use Set-DscLocalConfigurationManager and specify the path to that particular meta-MOF. The first thing I'm showing you right here is that the IP address on this server is 10.0.2.76. Just keep that in mind as we finish up the demo here.
And notice that it is not currently in Azure. We look in DSC nodes, and there is no node reporting in at this time. So I want to give you some context of how fast this is. So we'll run Set-DscLocalConfigurationManager. We'll specify the path, which is just going to be dot because we've already navigated to that directory. We'll give it a -Verbose and watch what happens.
The LCM is consuming the meta-MOF. It's associating itself with Azure, and it's going to go ahead and apply and report up. Notice that it registers the new DSC agent and is communicating back into scus-agentservice-prod-1.azure-automation.net, which is our account.
Azure Automation takes care of everything else. If there are modules required, they will be downloaded to this device. It will download the MOF. It will take care of everything transparently. Notice we click refresh in Azure. The server is already there and reported in. It's not sending any data to Azure,
so there's no sensitivity here. If you're afraid to be in the public cloud, you're not in the public cloud. You're just pulling down the configuration from the public cloud. This is going to bake for a second. The bake time is important, and when it's done baking
it's going to obey whatever we put inside of that configuration up inside Azure. This does work on-prem. This does work in Azure. Any device in the world that has 443 access can utilize this technology, and you can manage every single device that you own. Notice, if we navigate to the C:\Program Files\WindowsPowerShell\Modules folder, that xNetworking has already been downloaded to this device.
Azure is providing that service to you and is pre-populating all required modules for this particular configuration to run. So you upload to one location, and modules flow down to every device in your environment. Notice, if we run Test-DscConfiguration, that we get a True back saying this device has finished
compiling and has run through the config and made all the necessary adjustments. It removed the telnet client if required. It created the directories that we had inside of our config. So your local end-user helpdesk folks or junior admins can run Test-DscConfiguration if they don't have portal access and can still get that goodness to see
if it's compliant and obeying everything that DSC has set. Notice that we were at 10.0.2.76 before, and now we're at 10.0.5.100. The IP address on this device has changed because our locally sourced
configuration file was set to do so. So back to your original comment that happened a little while ago about pushing this down to a gazillion configuration files. So in your DevOps process and your pipeline process and deployment processes when you shuffle servers around or repurpose things or if things need to be adjusted
sometimes in DevOps land that creates a little bit of complexity as you now have to engage a more senior team or a more senior person to revamp the configuration to deploy down the necessary changes. But now with a very simplified config file in the box, you're empowering the opposite side of the spectrum.
You're empowering your junior folks to now be able to make those changes without having to loop back to the original development team to make that change and push it down to a device. So if someone needs to repurpose the server, or for whatever reason an IP needs to be adjusted, instead of that becoming a
mass ticket thing that has to, you know, go through a million people to get resolved, you simply open up a notepad file, change the IP address and recompile on the local device itself. Does that make sense to everyone? Can you see how this could do some neat things in your environment and how you can empower your help desk or junior admins
to make necessary adjustments without having to know a lot of DSC code or a lot about Azure, anything like that. So notice I just ran Start-DscConfiguration with -UseExisting, and because I changed the configuration that's local to the device, it changed to that new 10.0.5.101 IP address.
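The whole junior-admin workflow, sketched end to end (the file path and JSON key names are the same invented ones as before):

```powershell
# 1. Edit the local config file -- a notepad-level change, no DSC knowledge needed.
$cfg = Get-Content -Path 'C:\DSC\NodeConfig.json' -Raw | ConvertFrom-Json
$cfg.IPAddress = '10.0.5.101'
$cfg | ConvertTo-Json | Set-Content -Path 'C:\DSC\NodeConfig.json'

# 2. Re-run the MOF that's already on the box -- the one command to learn.
Start-DscConfiguration -UseExisting -Wait -Verbose
```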
No muss, no fuss. Your junior admin had to learn one command instead of the entire process. Went ahead and tested that, working great. So notice, sometimes in Azure Automation, and what I'm showing you right here is that the refresh doesn't always work.
Sometimes you have to close the window, because what I'm trying to do is see this go compliant inside Azure. A very important and very cool piece of the Azure Automation process, but it's not going compliant. It's stuck in "in progress" mode. Now, you could wait, but I'm impatient. I want to see my green check mark right now. So what I just ran right here is Update-DscConfiguration,
and what that's doing is it's telling the box to go up to Azure, see if there's anything new, and give Azure its latest and greatest report. With Update-DscConfiguration run, notice that if I click refresh, I instantly get a compliant check mark, and I can now see that that server is compliant with my DSC code
inside of Azure. This is super empowering. I can't stress that enough. Try to imagine this on every single device in your environment. You can see which devices are compliant. You can see which devices have not been responsive for a while because Azure will keep track of that.
You can see which devices have failed to successfully apply the configuration and warrant further investigation from an L2 or an L3 perspective. It's not just an overarching console that you're seeing here. This allows very powerful drill down capabilities into your configuration. If we click on that failed server,
for example, we can see how long it's been failing for, down to the minute. You can see exactly, based on security logins, who made what change and when. You can also see exactly where the problem is. If you're doing a lot of cool stuff with DSC, like installing SCOM, setting WinRM,
doing your domain join, all of these different types of things, you can drill down to focus on the exact problem. Someone here has adjusted the cluster-aware updating settings. How many of us have ever met a SQL admin who was like, my SQL boxes don't need a patch, I'll just turn this crap off? You'll be able to see that inside Azure Automation and take the appropriate measures.
Soft skills, right? So lots of troubleshooting time saved with this type of setup.
Okay, this is the part of the presentation where you should be feeling pretty good and have a good solid idea of like how all that comes together. I've been sitting in your shoes before and normally at this point what I'm thinking to myself is I'm thinking, well, that's great. That seems like it could work. And he gave me one example of how
I could edit a custom resource to kind of make it more dynamic. But that's never gonna work in my environment. That's the way I always feel when I attend these presentations. And so what I tried to do on our next demo is I tried to come up with a scenario that was just ludicrously impossible because I can't obviously,
you know, try to demo every single niche case in your environment because you all have your own challenges. What I tried to do instead was I tried to come up with something really challenging that just seems absolutely impossible and we'll work through that right now. So in this example,
a hospital. And let's say we have different floors in this hospital and let's say for our demo here that this is ER and requires a different set of software packages to be deployed. This is, you know, nursing, radiology, so on and so forth. You get the idea. And every single one of these floors
requires a different software suite to be deployed. But how can a MOF possibly know what floor it's on? It can't, right? You can't make a MOF that smart. Or can you? So this is the example that I decided to work through.
Is there a community module for identifying which floor your computer is on? No, there is not. So you will not always be able to lean on a community module to solve your problem like we just did a minute ago where we were very quickly able to edit that and utilize a local configuration source.
So now I'm going to show you how easy it is to create your own custom resource from scratch and solve this problem. There's only one script you need to do this. And I put it over in the ISE because, for whatever reason, VS Code doesn't like running this particular one.
This is the xDSCResourceDesigner. Making custom modules is hard. If you've never made one before from scratch, there are a lot of different steps, there's a lot of different syntax, and you will fail. You'll get lots of bloodbath red and you will hate your life. Don't do that. Use this thing. It takes all the guesswork out.
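In script form, the designer run might look like this. The resource and property names match the demo's intent; the output path is invented, and the exact cmdlet parameters are worth verifying with Get-Help for your module version:

```powershell
# Sketch: scaffold a custom DSC resource with xDSCResourceDesigner.
Install-Module -Name xDSCResourceDesigner
Import-Module  -Name xDSCResourceDesigner

# One Key property called Install; add more properties with additional
# New-xDscResourceProperty calls (e.g. -Attribute Write) as needed.
$install = New-xDscResourceProperty -Name 'Install' -Type 'String' -Attribute 'Key'

# Generates the folder structure, PSD1, PSM1 stub, and schema.mof for you.
New-xDscResource -Name 'SoftwareByFloor' `
                 -FriendlyName 'SoftwareByFloor' `
                 -Property $install `
                 -Path 'C:\DSC\Modules'
```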
You install the module off of the PowerShell Gallery. Nice and easy. If you don't already have it, you import it. You give your resource a name; we're going to be calling it SoftwareByFloor. And you give it a property and a resource name. These names are not important. These names are whatever you decide is fitting for whatever you're trying to accomplish.
We'll give it a path where we would like it to actually be. And then we'll use New-xDscResourceProperty to create it with the necessary keys. If you want this to have Write properties, I've given you an example here in the comments. So you can give it many properties; it doesn't just have to have one. In our example, though,
we're just going to have one Key property, which is going to be called Install. And if I click play on this, it's just going to take all the guesswork out, and it's just going to work. So what we'll do real quick is we'll bring this over here, and we will look over here and see
our outside resource get created. So I'm going to click play on this, and a whole bunch of stuff just happened. And I don't care about all that stuff. That's the whole point of this is it abstracts all the complexity away, and it did things like it created the PSD1 file for me.
There's a lot in there. You would not do well doing this by hand. It also created the proper folder structure to ensure the success of this module, and
it also created the PSM1 and the schema file that is necessary to make this particular module a success. Cool. The scaffolding is there. You have that scaffolding right now. Again, you're going to get this code. You can go home. You can utilize it. This will successfully create resources for you, and you can uncomment this, copy this line, and create as many
properties as you want. If you want to be able to pass six or seven or eight things into that particular module, have at it. Very easy to make that happen. Okay, but now we actually have to make it something to do something. So we're going to do that inside of the PSM1.
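Pulling the steps just described together, the designer script might look roughly like this. This is a sketch: the module path and the commented-out extra property are illustrative assumptions, and the resource and property names simply follow the demo.

```powershell
# Sketch of the xDscResourceDesigner script walked through above.
Install-Module -Name xDSCResourceDesigner   # from the PowerShell Gallery
Import-Module  -Name xDSCResourceDesigner

# One Key property named Install, a Boolean.
$install = New-xDscResourceProperty -Name 'Install' -Type 'Boolean' -Attribute Key

# Uncomment and duplicate to pass more properties into the resource:
# $package = New-xDscResourceProperty -Name 'Package' -Type 'String' -Attribute Write

# Generates the folder structure, PSD1, PSM1, and schema MOF for you.
New-xDscResource -Name 'SoftwareByFloor' `
                 -Property $install `
                 -Path 'C:\Program Files\WindowsPowerShell\Modules' `
                 -ModuleName 'SoftwareByFloor'
```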
As we demonstrated with the IP, it's not the MOF that has the intelligence to determine these dynamic type of things. It is the module that is intelligent. It is going to make the determination about what floor it's on. How can we make a module know what floor it's on?
Again, this is a question for you. And you are very creative and can come up with all kinds of ways to uniquely identify which floor a machine is on. You could be subnetting by floor, and you could simply ask the module to go out and
determine the IP address and do some fancy array work and say, if I'm a 172.16.90.x address, that means I'm on floor four. Simple. Or you could do something even more basic. You could do something like this. And again, we'll add this to all of our
functions, but I'm just going to, for the purpose of this demo, add it to Test. I'm determining what floor I'm on. I'm going to add a reg key path to evaluate. I'm going to look at the reg key.
And I'm going to tell you what floor I'm on. How do you get the reg key with the right floor number? That's up to you. Is it part of your PXE deployment process, where if it's deployed in a certain subnet, then that registry key gets a three instead of a four? There are ways to solve these types of technical problems. What I'm trying to articulate to you
is that you can utilize this bottom-up method to source that local information to make really powerful dynamic changes to your environment in DSC. DSC is not the rigid box that we've all come to kind of have a love-hate relationship with.
Because of the power of localhost and locally sourced configuration information, you can funnel that information up to the module and make intelligent decisions. So with this in place, we can push this to that server now. It will evaluate the registry piece and determine which floor it's on and take the appropriate action. Let's pray to the demo gods and see if we can make that happen.
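A sketch of what that Test logic inside the PSM1 might look like. The registry path, the value name, and the floor-4-means-radiology mapping are all assumptions for the demo:

```powershell
# Inside the SoftwareByFloor resource's PSM1 (sketch). The registry
# location and the floor-to-software mapping are demo assumptions.
function Test-TargetResource {
    [CmdletBinding()]
    [OutputType([System.Boolean])]
    param (
        [Parameter(Mandatory = $true)]
        [System.Boolean]
        $Install
    )

    # Locally sourced intelligence: read the floor number off the box.
    $floor = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Hospital' -Name 'Floor').Floor

    switch ($floor) {
        4       { return (Test-Path 'C:\Program Files\RadiologySuite') }  # radiology floor
        default { return $true }  # no floor-specific software to verify
    }
}
```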
So we'll pop over here. And again, because this is a custom module, we will need to deploy this in a fashion that allows it to be copied over. So we will pop into
C:\DSC and let's see if we can find, yep, this one. Again, nothing changes. Declare the server credentials, create a PSSession. In addition to copying the localhost MOF, we will also copy over
our module for SoftwareByFloor. Now our compilation will look like this in this example. So we could take something like this one. We could remove this
and give it the names that we gave it a moment ago, creating our module, so on and so forth. So if we look back at our module creation, our resource name was SoftwareByFloor.
The subsequent name doesn't matter, and our key was Install, a Boolean value. We can, of course,
copy that out and compile it. Go ahead and remove this localhost. And compile this particular configuration. It's unhappy. The reason it's unhappy is because we failed to import our new module up here. Make sense?
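The configuration being compiled here would look roughly like the following; the configuration name is a placeholder, but the Import-DscResource line is the piece the first compile attempt was missing.

```powershell
Configuration SoftwareByFloorDemo {
    # This import is what the first compile attempt was missing.
    Import-DscResource -ModuleName SoftwareByFloor

    Node localhost {
        SoftwareByFloor DeploySoftware {
            Install = $true   # our one Key property, a Boolean
        }
    }
}

# Compiling drops .\SoftwareByFloorDemo\localhost.mof
SoftwareByFloorDemo
```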
We'll do that now. And with that now properly imported, this should successfully compile. Let's see if it's actually inside of that location.
So if you haven't done this before, it's in C. Thank you very much. I love it when people help. Thank you very much.
Perfect. Okay. So with that imported and compiled out, we now have our localhost MOF. And we should at this point be able to copy that module over and run it against our device. We'll go ahead and do that now. Again,
session creation, copying the localhost, copying our custom module, and then executing that against that device. Prompts for creds.
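The push sequence just narrated might look something like this; the server name and paths are placeholders, and it assumes the target folder already exists on the remote box.

```powershell
$cred    = Get-Credential                      # prompts for creds
$session = New-PSSession -ComputerName 'SERVER01' -Credential $cred

# Copy the compiled MOF and the custom module over the session.
# (Assumes C:\DSC already exists on the target.)
Copy-Item -Path '.\SoftwareByFloorDemo\localhost.mof' `
          -Destination 'C:\DSC\localhost.mof' -ToSession $session
Copy-Item -Path 'C:\Program Files\WindowsPowerShell\Modules\SoftwareByFloor' `
          -Destination 'C:\Program Files\WindowsPowerShell\Modules\SoftwareByFloor' `
          -Recurse -ToSession $session

# Engage the LCM against the copied MOF.
Invoke-Command -Session $session -ScriptBlock {
    Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose -Force
}
```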
The LCM is engaged. And notice SoftwareByFloor kicks off, determines what floor it's on, finds that it's on floor 4, and tests that the radiology software is installed.
I've armed you with the scaffolding. The software is obviously part of your own unique environment, and the tests that you create in that section would need to be appropriate for the software that you're trying to evaluate. My hope is that, seeing that, you should be able to do literally anything armed with localhost,
the configuration setup, all of that. Questions so far? Still feeling good? Okay. Again,
when rolling this up into Azure Automation, incredibly powerful. This gives you that capability to see line by line, resource by resource, which is compliant and which is not. This is incredibly powerful from a troubleshooting perspective.
and incredibly powerful for you in doing compliance reporting. You can see, again, things like this, where some servers have not reported in. Are those actionable? Have some servers failed and do they need further investigation? And you get that minute-by-minute updating from your on-prem or in-Azure devices
reporting into Azure Automation, letting you know how long they've been compliant or when they went non-compliant. Again, we use this very effectively to trace this back to security logins and can determine, based on that security login, who made the change, what they changed, and when. And then we can take appropriate actions as necessary. Very empowering inside of our environment.
Your environment today may look something like this, where you have what we call snowflakes, correct? Some of your servers follow a standard configuration of some kind, but Frank in Accounting, you know the drill, has a little bit of a different configuration.
And, of course, there's always one group in your environment that requires some extra special care and requires some type of one-off, and their whole department is a little bit different. I get it. You do, though, need to realize that this is what we call micromanagement, right? We're micromanaging our special servers.
And I wanna talk to you about how, as you engage this and make your MOFs more dynamic and more powerful, that will enable you to achieve a true macro capability. Why am I harping on this? Because this will change your life. And let me kind of touch on why that is.
When you get a call or a ticket of some kind from this example on the left, and it could be something very generic, right? Something like, my server is running slow. We love these, correct? Those are always the best ones to troubleshoot because it's like, oh, okay. That helps me with nothing.
Because this micromanaged environment is very different, you have to take micromanaging steps in order to troubleshoot that issue. Is it because of the way the server is configured? Is it because of the way the cards are set up? We'll use Hyper-V as an example. Is it because VMQ is not set correctly? Is it because the virtual switches
are not configured to our standards? On and on and on. You don't know. So you spend an enormous amount of time troubleshooting this guy to determine like, what is it about this guy that's making him slow? And the rest of the environment seems to be fine. Now let's have that conversation on this side, with our dynamic MOF, our dynamic setting that's flexible enough to still kind of achieve
some of this, but is giving us that base level of configuration where all of the important settings are what they need to be. This instills and empowers your support team with confidence to say things like, that's a layer two problem. Why is it a layer two problem?
Well, because everything else is fine. How do you know that? Because everything is the same. This, for our environment at Rackspace, has changed the way that we approach ticketing, for the better. I can't stress that enough. We don't have these types of conversations anymore about maybe something's different about the server. There is nothing different about the server.
The dynamic MOF capabilities have allowed us to flexibly control the environment enough where I know with absolute 100% confidence, because it's green checkmarked in Azure, that this server is exactly the same as this server, which is exactly the same as this server. Now does that rule me out from hardware problems?
No, this guy could still be having a NIC issue, but here's the rub. How many of you have ever changed a registry setting to troubleshoot a problem? That should be everyone in the room, right? We've all done that. Like maybe I found some TechNet article, right, and it's got this registry key. I don't know what it does, but I'm gonna change it. Oh, I fixed the problem. You do that all the time over here.
We all do. And what you need to do as senior leaders, or more technically strong leaders, in your organization is make sure that never happens here. If you successfully go home after this summit and are able to achieve this using some of the stuff that I demoed today,
you're gonna start having a conversation with your junior admins that goes like, why did you change that registry key? Did you change it here and here and here and here and here? If the answer is no, then you do not change that registry key here. You operate in a macro environment. And if you're not changing for the macro,
you never change for the micro. This goes away. And that will change the way that you approach things for the better. This is a big decision that you're gonna make as you go home and start to implement this in your environment.
There are three options in Azure Automation. There's monitor only. This is what we call the toothless option. You hope that it gets set. You'll get the red X. You'll get the reporting capabilities, but Azure Automation will never make a change to your environment. The LCM will never actually do anything.
That's maybe a safe way to start. I don't recommend it. The middle ground, well, before we talk about the middle ground, the no-kidding way is apply, right? Apply, we could argue about it, and I'm happy to talk about it after the session, but I'm gonna suggest today that you don't use apply.
The reason is because sometimes, against all odds, we do find ourselves in a situation where we are having to troubleshoot, and what we found, or learned at Rackspace, is that when my guy is troubleshooting and maybe making a slight change to see, well, maybe this is the problem, when you have it set to apply, it's like, were you troubleshooting that?
Did you change that setting to see that? Don't care, and it changes it back right in the middle of the troubleshooting process, so really sharp razor fangs, but maybe not always that great for day-to-day operations. Also, if the change requires a reboot, LCM don't care. LCM is honey badger mode. LCM will, in apply mode, take any changes necessary
in order to make that a thing. So, I encourage the use of apply and monitor. Now, you may be thinking like, well, that seems like kind of a middle ground. What happens in this example is that we'll first get a successful apply, we'll go green check mark in Azure Automation, but then we'll switch to monitor mode, and then we're toothless for the rest of the time.
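For reference, these options correspond to the LCM's ConfigurationMode values — ApplyOnly, ApplyAndMonitor, and ApplyAndAutoCorrect — and the "apply and monitor" behavior described here is ApplyAndMonitor. A minimal meta-configuration sketch, with an assumed re-test interval:

```powershell
[DSCLocalConfigurationManager()]
Configuration LcmSettings {
    Node localhost {
        Settings {
            # 'ApplyOnly', 'ApplyAndMonitor', or 'ApplyAndAutoCorrect'
            ConfigurationMode              = 'ApplyAndMonitor'
            ConfigurationModeFrequencyMins = 90   # assumed: how often the LCM re-tests
        }
    }
}
```

When registering nodes with Azure Automation, the same mode is picked at registration time rather than compiled by hand like this.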
Well, not so, I say. There's kind of a better way to approach this, and it comes back to that one MOF to rule them all. Because we're utilizing one MOF, we find ourselves in a very unique predicament here, which kinda gives us the best of both worlds of apply, but also monitor. So, when we stick this in here, we will go to monitor mode, and these servers will live the rest of their lives
in monitor mode, unless we make a change to that one MOF. The act of recompiling the MOF inside of Azure, if we were to add a new directory, or a new Windows feature that we wanted to add, changes the checksum of the MOF. The Azure Automation utility
is constantly evaluating the checksum and saying, is it the same, cool, I'm in monitor mode? Is it the same, cool, I'm in monitor mode? Is it the same, cool, I'm in monitor mode? But when you recompile the MOF, and make a change to that one MOF to rule them all inside Azure, you will change the checksum, and your entire fleet will go back to apply mode. This is a double-edged sword.
It is extremely empowering. I, with my thousands of servers at Rackspace, can recompile one MOF inside of Azure, and push out a change to the entire environment in seconds. Extremely amazing. So, if I get a compliance finding, if I need to make an adjustment, if I need a registry key to be changed to change the way that we handle dump files,
you name it, I can click that compile button, which really ought to be red, because it's dangerous in this scenario, and it will make that change. That is the beauty of the one intelligent dynamic MOF. If your security department comes down and says, I need your servers to be blue now, you make the change to the one MOF,
you click compile, and every single server in your environment becomes blue. It's that simple. So, this is why I encourage you, or am making the case for, apply and monitor, because it'll sit you in a nice middle ground. You don't have to worry about some of the hardships that come along with just monitor only, which is really a toothless solution. You don't have to worry about some of the intricacies
of apply only, which can be really aggressive. This, I think, plays really well for production. Okay, I've left lots of time for questions, because this was a complicated topic. So please, give me lots of feedback, and I'm happy to talk about some of the things
that I've run into challenge-wise. You've got the QR codes up there. I'm on the left, if you wanna get in touch with me, or if you wanna scan the right, that will give you access to every single piece of code that we showed today, which is up on GitHub. Please use it. So the question was, when I'm determining my compliance and reporting that up to Azure,
where does the actual configuration file come in that's locally sourced from the device? And that all depends on your module. So it has to do with the testing of the module, because the eventual compliance doesn't come out of Set or Get, it comes out of Test. So your Test needs to evaluate and determine whether it's compliant or not.
So if we look at this JSON file, the Test is going to import this, right? It's gonna populate the IP address variable inside the module with that IP. And I would assume that, in the example we demoed earlier, the Test will evaluate: is that IP applied?
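Sketched out, that locally sourced Test might look like this; the JSON path and property name are assumptions:

```powershell
# Inside the module's Test-TargetResource (sketch): pull the desired
# IP from the locally dropped JSON file and compare it to live state.
$config    = Get-Content -Path 'C:\DSC\NodeConfig.json' -Raw | ConvertFrom-Json
$desiredIp = $config.IPAddress

# Gather every IPv4 address currently bound on the machine.
$actualIps = (Get-NetIPAddress -AddressFamily IPv4).IPAddress

# The compliance reported up to Azure comes straight from this Boolean.
return ($actualIps -contains $desiredIp)
```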
And if it's not, it will go red in Azure. So it's inferred through the imports of the actual configuration whether or not the device is compliant. It all boils down to your Test. Make sense? Great question. So the question is, what if we change the MAC address to something that maybe doesn't exist
on the destination device? Where does this start falling apart? That comes back to the intelligence and the robustness of the actual module itself. So in our example, we leveraged that community module from Microsoft, but that community module had no concept of utilizing the MAC address. So we would have to, if we wanted to,
roll this to production, put some error control in there, right? We would have to say, does this MAC address exist? If not, then we need to handle that situation inside the code. This comes back to just PowerShell 101, right? This isn't a DSC question so much as it is a "you need error logic inside of the actual PowerShell script to handle that situation" question. So for example, in our floor example,
when we had our floor set up, and we were getting DSC to determine which floor it was on, what if the registry reports a floor that doesn't exist? Well, we need code inside of our script, inside of our module, to handle that particular circumstance. It all comes back to error control. Make sense?
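As a sketch of that error control, again with placeholder registry details and an assumed set of valid floors:

```powershell
# Guard against a missing or unknown floor value before acting on it.
try {
    $floor = (Get-ItemProperty -Path 'HKLM:\SOFTWARE\Hospital' `
                               -Name 'Floor' -ErrorAction Stop).Floor
}
catch {
    Write-Verbose 'No floor registry value found; reporting non-compliant.'
    return $false
}

if ($floor -notin 1..6) {   # assumed set of valid floors
    Write-Verbose "Registry reports unknown floor '$floor'."
    return $false
}
```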
Cool. What other questions do we have? Yeah, so you're still operating from the top-down method, which is fine. There's no wrong way to do that. It's the beauty of having these types of discussions. Do you wanna go top-down? Are you comfortable going top-down? Great. Do you wanna come bottom-up? That worked better for us. The reason it worked better for us is because our switching APIs have direct capability access to the server.
So I can query the switch, and I can say, oh, I see that you're connected to these switch ports. We have that capability. And because I see that, I can drop the config file locally to the device. Happy day. Yeah, once our provisioning process is done, the server is kicked, has a baseline operating system on the actual physical chassis, the last thing that happens during the kick process
is it reaches out and sends a call. That call initiates an automation process, which goes and talks to the switch, evaluates which switch ports it's on, and dumps the configuration file into the local directory on the device. From there, the last step that happens is the LCM kicks off, goes up to our Azure account, pulls down the configuration, and the server builds itself. That's why, because we operate at a tiered level
at Rackspace, and it may be different in your environment, but we have the concept of L4s, L3s, L2s, L1s, because when we had it in our top-down perspective, that really limited us to, those changes were all being funneled to L3s and L4s, which it was a more complicated change to make it from a top-down perspective. But now with this in the mix,
my L1s can make this change, recompile at the local device, and they only need to know one command and where to do that. It's really empowering, and it's changed the way that we approach DSC troubleshooting. Yes, so the question is, if I fat finger this IP address, what happens? We're locally sourcing. So yes, a locally sourced configuration file
injects the possibility of a fat finger. So the compile will not catch it. Azure in this example, with the changes that we made to that module, would apply whatever IP address was stuck in here, no matter if it was right or wrong. Does that make sense? Right, we have another system at Rackspace that deals with the IP scheme and lets us know if something like that were to happen,
but that's external, outside of the DSC space. Again, you have to evaluate the concepts that you learned today and figure out what makes the most sense for your environment. You may be looking at the IP example, thinking to yourself, it's a terrible idea for me to IP using DSC, and that's cool. Don't use DSC to set your IPs. Use DHCP and use other methods
for IPing your environment. I'm just showing you something that seems as impossible as IPing a server using DSC in a dynamic fashion is possible, floors are possible, anything is possible utilizing this technology format. Right, so yes and no. For a moment in time, depending on how long the refresh is,
it is from Azure's perspective compliant still, because the last time it checked in with the original IP address, it was compliant. But on the next check-in, if we were to change this and it was in monitor mode, it would do the test, pull in the new config, see that it has a new IP address, bump that against the test,
and it would notice that the Test is no longer passing, the IP address is not applied to the adapter, and it would go to fail mode on the next check-in to Azure Automation. So not immediately, but very soon. Each time the Test runs, the module is being engaged. When Test, Set, and Get run, when those things are running, those modules are being actively engaged
every single time they kick off. And when they do that, they're pulling in the config file every single time. So if you set Azure Automation up for, for example, a 90-minute check-in period, every 90 minutes the config file is gonna be actively re-sourced. Yes, if you have an SMB share,
for example, let's go crazy. Let's say you have an SMB share somewhere that everything has access to, and you wanna control the IP scheme through that CSV file. I mean, that doesn't sound like a super great idea, but you could do it. The modules can literally source from anywhere in the environment any type of information in the environment. So if you wanted to say server one has this IP and server two has this IP
and server three has this IP in that CSV document, the module can source that document and assign IPs like there's no tomorrow. That's the beauty of changing these modules and making them more dynamic so they can make intelligent decisions like that. So the question is, the MOF file is assuming that everything is the same across the environment.
And you're not wrong. That's kind of DSC at its core, right? When we set up this config, every device in the entire environment is going to get the telnet client removed. Cool. But maybe not. This is a module, and it's open source.
It's part of the PSDesiredStateConfiguration module. So if you wanna crack that bad boy open like we just did a minute ago, and you wanna say, except for SQL1, you can adjust that module, and it would evaluate and be like, oh, I'm on SQL1 right now, and it would not remove the telnet client.
You can make the modules bend to your will. That's the beauty of having all this code open source. Oh yeah, yep, for sure. Yeah, so I'll close out with kind of one more example. You saw on our demonstration
that one of the things that we configure is cluster-aware updating. Yeah, cluster-aware updating. Cluster-aware updating can only be configured on a cluster. So this module has the intelligence to evaluate
whether the server is a standalone server or not. It will only apply cluster-aware updating settings if it's actually on an active cluster. Make sense? That's the beauty. Again, no smart MOF, very intelligent modules. The modules can literally evaluate any criteria you throw at them.
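A sketch of that evaluation; inside the module it's just a plain PowerShell check, and the verbose messages are illustrative:

```powershell
# Only act on cluster-aware-updating settings when the node is
# actually a member of an active failover cluster (sketch).
$onCluster = $false
if (Get-Command -Name Get-Cluster -ErrorAction SilentlyContinue) {
    # Get-Cluster returns the local cluster, or errors on a standalone box.
    $onCluster = [bool](Get-Cluster -ErrorAction SilentlyContinue)
}

if ($onCluster) {
    Write-Verbose 'Cluster detected; applying CAU settings.'
    # cluster-aware updating configuration goes here
}
else {
    Write-Verbose 'Standalone server; skipping CAU settings.'
}
```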
I mean, the sky's the limit, guys. If you wanted to evaluate if the server is blue or not, if it's a Dell, do this. If it's an HP, do this. If it's Linux, do this. If it's Windows, do this. It is there for you to bend. When I left last year, I had just had this kind of thought process that DSC was so rigid it seemed almost unusable,
so inflexible, but it's not. It's as buttery and as loose as you would like it to be. It all boils down to what you stick inside those modules. And again, don't overcode. There's probably a community resource module for you to gobble up today, and we edited that one in less than a couple minutes. It doesn't take a lot.
You keep all the meat, and you just change the one thing that you wanted to make it a little bit friendlier to your environment. Yeah, so Azure Automation is gonna require 443 access. There's no getting around that. You've got really two options that I know of today. First is, and I don't know if it supports it or not, I'd have to check, but Snover's in the building,
and he runs it, so you can go ask him: Azure Stack. Azure Stack is local. I don't know if it has Azure Automation capabilities off the top of my head. I'm seeing a no back there, so that's not an option. The other option is Tug. Tug is an open-source project for having a pull server on-premises. It's not quite as fancy as Azure Automation,
and it's kind of IIS-y, so you get to have all the fun that you normally would with that, and it's not as polished as Azure Automation, but if you're truly in a disconnected situation like that, Tug is there to solve that problem. Last point that I'll close out with is that
lots of resistance, it seems like, with sticking things in the public cloud, and I get it. When you go back, try to articulate to your leadership and to your senior architects that you're not sticking your environment into the cloud. If you're using this in an on-prem scenario, we use this on-prem to great effect, but the only thing that's actually up inside Azure
is the actual configuration itself. So if you make a brilliant recipe to make a Hyper-V server, at the worst, you have a recipe to make a great Hyper-V server up in Azure, and it's doing great things for your environment. So you're not actually having to put any data up into a public cloud.
Everything is in pull mode. Your on-prem servers go up to Azure Automation and pull the config down, and the processing happens at the local device layer. So if you have compliance reasons or other things that are keeping you out of Azure today, you shouldn't run into any problems. It's all over 443, it's encrypted. It's done some really great things for us in our environment.
No, no hybrid worker required. Hybrid workers are more for if you want to run a script on an on-prem device. Really fantastic thing too, but no, you don't need hybrid workers for this to work. You literally just have to click those buttons that I showed you in the video earlier, which is create an Azure Automation account, upload the modules, upload your config, literally done.
All the rest of it is just telling the LCM where to go. So, I don't know, there's probably some Microsoft person that'll throw something at me in a second. Have you tried any of this using anything non-Azure? Are there other ways you can do it with Google or AWS or any of the other cloud vendors? I mean, you could. You'd have to stand up your own pull server, I'm not sure.
Yeah, not from a native SaaS perspective that I know of, no. But you can, through IIS, in any cloud you wished, set up a pull server. You could do that today. But in my opinion, and again, this is just my opinion, Azure Automation just gives you those icing layers that really make it the complete package.
It's zero maintenance, I don't have to do anything, I get a lot of really powerful reporting capabilities. You can tie this into OMS and your ticketing system, and when one of those things goes red, you'll instantly get a ticket. So there are some really powerful feature capabilities that Tug and IIS pull servers just don't give you. Cool, and again, right side, if you want all that code that we demoed today,
thank you so much for coming. I hope you enjoy it and have a really great, fantastic lunch. And reach out to me if I can help you at all. I've been doing this for about a year now. I've seen some things. So feel free to hit me up on Twitter or email me.