How to Love Unit Testing
Formal Metadata

Title: How to Love Unit Testing
Number of Parts: 60
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/37385 (DOI)
Production Year: 2018
Transcript: English (auto-generated)
00:10
Okay, that's probably good enough. We can let others run in. Let's get started. Hello! My name is Brian Bunke. You will never guess: I love unit testing, and I'm here
00:24
today to spread the love. When we talk about testing our PowerShell code, we're talking about Pester. We love to tell you, you should learn Pester. You need to know Pester. Sure, you should, but you also need to learn the entire freaking
00:47
cloud. You should know. You need to know. I know I come from an IT ops background. Pester's on the to learn list, but you won't prioritize it until you feel the
01:02
pain point that brings it to the top of your list. For me, that was when other people started using my code. Probably, for most of you, it'll be the same way. So when I published my first PowerShell module to GitHub a few years ago,
01:21
it did not take very long at all for me to go, oh, wait, maintaining and improving this code is kind of a headache when other people are banging against it. Maybe I should look into this Pester test thing. So I started by writing what we commonly refer to as integration tests, and I did that because
01:40
I have an IT operations background. I assume most of you do as well, and integration tests really are just a repurposing of manual testing. So in the dark ages, and definitely not today, nobody would do this today, you write a script, you save a script, you run the script, and you deal with any red text
02:01
that comes out of the console, right? Well, adopting integration tests is just taking all those manual tests that you sometimes remember to run, putting them in a common testing framework, and saving them in a file so you don't forget them for later. But you're still hitting all those end points,
02:21
you still need your network share, you still need Active Directory, whatever your code is interacting with. So when I started writing integration tests, I ran into some very, very common questions. I ran into these, and I know lots of you run into these because these questions continue to
02:41
come up all the time. Am I doing this right? What does a good test even look like? What's not enough? What is too much testing? And when I hit these with integration tests and figured them out to a good enough state, once you decide, hey, that mocking thing sounds pretty cool, maybe I should learn
03:01
about unit tests, you hit the same uphill slope. It is the exact same questions. You feel like you're starting on day one again. So these come up all the time. If they were easy to answer, I wouldn't have to give this talk today. But I am up here because I want to help provide some examples for this. We can say testing is easy and do it this
03:26
way, but often it takes putting some examples in front of you. So I'll give you some, hopefully you can take these and run with them when you go back to your home. So with that said, today's goals, number one,
03:43
I want you to leave feeling more comfortable reading and writing unit tests in Pester. And number two, when I learned unit testing, I went way into the deep end. I got too far, I wrote too many tests, and it just creates a big headache
04:03
for future you. So the other goal today is I'm going to evangelize the love of Pester, but I also want to point out some potholes that I hit that you can keep an eye out for on your journey. So quickly, if you haven't fully adopted
04:24
Pester yet, why do we love test suites? Well, they're fully automated. So through the magic of source control and continuous integration, or a CI pipeline, your tests now run automatically. You don't forget to run some of them. You don't say, oh, all I did was delete a period.
04:40
I don't need to run the tests this time. No more of that. Tests are self-documenting: they require you to state, right in the tests file, for my code, given input X, I expect output Y. And because you have that type of self-documentation, that now allows support for your product to scale. Because when you are the only one writing
05:05
your code, you know where all the bodies are buried. But when other people come in, now they can look at your tests file, look at that common framework, and have a better jumping off point for knowing what their changes might be affecting in your code. And it improves code reviews.
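To make that "given input X, I expect output Y" idea concrete, a trivial test might read like this (a made-up function, purely for illustration):

function ConvertTo-UpperCase ([string]$String) { $String.ToUpper() }

Describe 'ConvertTo-UpperCase' {
    It 'given the input "pester", it returns "PESTER"' {
        ConvertTo-UpperCase -String 'pester' | Should -Be 'PESTER'
    }
}

The It name doubles as documentation: anyone skimming the tests file can see what the function is expected to do.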
05:23
So it feels awesome to have test coverage for very basic stuff like linting. And now you have your tests nitpicking any pull requests that come in. You no longer have to play the bad guy. The tests are bad cop now. And the promise of automation, humans are freed up to make the important decisions.
05:46
For this pull request, does the feature make sense in my code base? I don't have to check if it's tabs or spaces or whatever nonsense, right? So all of those things, go back. All of those things apply to an integration
06:03
test suite, and that's great. But there are pain points with integration tests as well. So this is a snippet of test output in one of my public modules. These are integration tests. And I'm going to point out two things. Number one in red, these tests take a long time to run. So this product is a
06:24
publicly available cloud instance. So my integration tests interact with that public instance and actually delete a test object. So it goes out, nine seconds, eight, 11, 22 seconds for one test. That takes a long time. And the other thing to point out is because these are remove functions,
06:44
there are also correlated add functions. And you can't remove a test object that doesn't exist, so your adds always have to run at the top so that your removes have something to come into after. And if you change your remove code and something fails, then your add can get messed up when it's trying
07:04
to create a duplicate object. It can get messy. So when you run into these pain points, integration testing, they require infrastructure because we're operations, we're contacting a lot of other services. They're slow or in
07:21
testing parlance, they're expensive both in time and in resources. They're subject to service interruptions. So if Office 365 is down, your tests fail and you don't know why, that's never happened. The order of tests matters and it's tough to isolate your root cause
07:41
sometimes when a test fails for some innocuous reason. So I have a bullet list of pain points here. You will never guess what I'm proposing the solution is to all these pain points. I wonder why we're here. So unit tests. Let's talk about
08:01
falling in love. And I really believe that unit tests, I mean we're at the PowerShell and DevOps Global Summit. Unit tests are... DevOps is nothing without unit tests. People talk about DevOps and deploying 10 times a day. That's not happening without unit tests. You're not just flinging crap into
08:23
production over and over and over again, right? It's all got to be tested, and unit tests make it happen quickly. So what are we actually talking about when I say that? Let me just give you my definition real quick. Number one, we're talking about making our test suite discrete. So for one
08:41
function in your module, you have one correlated test that tests that function. You keep everything separate so the tests don't need to know about each other. That's where the term unit testing came from. And number two, we're talking about isolating external dependencies from your tests. So we no longer need the
09:04
Windows domain or to connect to VMware, an internet connection for our cloud instance. We strip all that out. It feels super awesome to just grab the nearest knife and go, nope, I only care about this single system under test that
09:21
I am working on right now. So when you see mocking, that is an indicator of this behavior. We are trying to remove the external dependencies because, for example, if your test is running on a build runner, have you ever tried to install Active Directory on just random endpoints somewhere?
09:41
Or if you're so motivated here at the summit that you want to code on the flight home, if you have a good proper unit test that is fully isolated, you can run those tests in airplane mode. You don't need to connect to anything, and that encourages better development practices because your tests run quickly. You don't have to wait for internet. You don't have to wait for
10:03
an entire test suite to complete. And really, to phrase this another way, how would you prepare to come up here and talk? Do you think I created a slide deck and decided, okay, I'm going to stand in front of a mirror and go,
10:21
hello, my name is Brian Bunke, and talk for 45 minutes. And then, 30 minutes in, I go, you know, what would be super awesome is if I took these two slides and rearranged them, because that sets things up better. Okay, I'm good. Reset the clock to 45 minutes. Hello, my name is Brian Bunke. No, you don't do that. That's crazy. You chunk it up into small
10:43
digestible portions, and you work on tightening each section. You don't have to give the whole talk every time, right? So, that's the same type of pattern. So, if this is so great, let's walk through some basic code. We have, and we'll do mocking 101,
11:02
is it? I assume we can see this. Maybe I should. Can you still see it in the back? We're good? Okay. Okay, so we have a very basic function, and the best example I could come up with for this, again, for my IT operations friends. Hello world in IT operations is
11:22
user retirement. Okay. So, we have a Remove-ContosoUser. It takes a string username and an optional credential, and it just does two things: it removes the directory, and it removes the user, right? And we're calling Get-Item and
11:41
piping it to Remove-Item, and then we call Remove-ADUser. Small code path here.
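Roughly, the function being described (a sketch only; the share path and the way the optional credential is consumed are assumptions, not shown in the transcript):

function Remove-ContosoUser {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        [string]$UserName,

        [pscredential]$Credential
    )

    # Remove the user's home folder (hypothetical share path)...
    Get-Item -Path "\\contoso\users\$UserName" |
        Remove-Item -Recurse -Force

    # ...and then remove the AD account itself
    if ($Credential) {
        Remove-ADUser -Identity $UserName -Credential $Credential
    } else {
        Remove-ADUser -Identity $UserName
    }
}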
12:04
The integration test for this looks like this: we require the ActiveDirectory module up top, you have to create folders and sub-folders and files so that you make sure your recursive delete is good, and you have to create the AD user just to delete the AD user, and you act on it down here, blah, blah, blah. So, for the unit test, we're stripping out all that. What we need to do here is,
12:20
I like to tag Unit or Integration. We're following arrange-act-assert; this is a common pattern outside of the PowerShell world. You set things up front and group things together this way. Dot-source your function, set a username, and then we're mocking. So, we mock Get-Item, we mock Remove-Item, and we mock Remove-ADUser.
12:41
So, what we're saying here for Remove-ADUser: any time my code calls Remove-ADUser, do absolutely nothing. Just do nothing. And this Verifiable parameter says: okay, I said do nothing, but really, Pester, you should track that we actually called this mock.
13:03
We did nothing, but the counter goes to one; we called Remove-ADUser. So, down here, we act on our Remove-ContosoUser. And then Pester has a command called Assert-VerifiableMock, and all that does is it says, for
13:20
each mock that you have marked as Verifiable, check that it ran at least once. So, because this function has no output (we're just removing things, and that commonly has no output), we're just tracking that we called the things we expect to call,
13:41
to perform these two actions: remove the folder, remove the user. You'll notice I have output here in Get-Item. That is because Get-Item is piping into Remove-Item, and you can't pipe absolutely nothing. So, you have to put something in there. I usually do an abbreviated string, but you can just put a $true boolean or
14:03
whatever. So, if we get tab complete, I will not spend my talk complaining about tab complete. So, and we fail.
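The unit test just walked through looks roughly like this, a sketch in Pester v4 syntax, with a hypothetical path and the dummy-function trick that comes up again later in the talk:

Describe 'Remove-ContosoUser' -Tag 'Unit' {
    # Arrange
    . "$PSScriptRoot\Remove-ContosoUser.ps1"    # dot-source the function under test
    $UserName = 'jdoe'

    # You can't mock a command that doesn't exist, so if the ActiveDirectory
    # module isn't installed on this machine, a dummy stands in for it
    function Remove-ADUser { }

    # Get-Item must emit *something*, because it pipes into Remove-Item
    Mock Get-Item      { 'dir' } -Verifiable
    Mock Remove-Item   { }       -Verifiable
    Mock Remove-ADUser { }       -Verifiable

    # Act
    Remove-ContosoUser -UserName $UserName

    # Assert
    It 'removes the folder and the AD user' {
        Assert-VerifiableMock    # every Verifiable mock ran at least once
    }
}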
14:39
I broke something.
14:55
So, it's supposed to isolate the AD module.
15:01
Yeah, and we're mocking it away. All right, I gotta go on. Apologies. That code will be up. It worked every single time until now. You'll love this. I woke up this morning. I'm like, okay, I don't have to worry about any conference internet or anything.
15:22
Demo gods are gonna be kind to me. I wake up this morning and the integrated terminal in VS code won't even load. It's just like, starting PowerShell. What? What? Come on. Okay, so that is mocking, kind of.
15:42
Mocking, mocking, yes. So, okay, this is actually, this has a whole new context now that my demo failed. I want to talk about loving unit testing.
16:05
But, I also want, I don't want you to feel suckered into it, right? I want you to come back next year still loving unit testing. So, we need to talk about problems we have through memes. My favorite meme, two unit tests, zero integration tests.
16:25
Just think about how that would work for a second. But really what we're saying here is we don't want to completely eliminate integration tests. Even if unit tests are great, we can't completely cut integration tests out of our lives.
16:40
If APIs change, we need to know that. We just need to know it eventually. So, if you take on Active Directory as a dependency, when was the last time Active Directory changed on you? I mean, it's been ten years, right? So, you need to trust at some point that the APIs won't change.
17:03
So, what we're advocating here is not completely eliminating integration tests, but just limiting them. So, for example, your integration test suite is smaller now, but it can still exist and just run on every
17:22
version bump that you do instead of every single commit. So, we need to talk about a few drawbacks, and this first one is drawback in quotes. So, you need to write testable code for mocking to work properly. Mocking works with functions and modules primarily.
17:42
If you have just basic scripts that aren't parameterized, or if you have C# or .NET, there are plenty of well-documented problems with that and Pester tests, most of which you solve by just wrapping them in PowerShell functions. So, you need to write parameterized functions. And writing tests makes you write more testable code.
18:05
That's a great phrase that doesn't mean anything concrete, right? But let's look through a quick example of what that actually means. When we say that, what we're getting to is,
18:21
you need to break your functions down, right? So, if we go into a module, this is a pretty basic module layout, and we've taken our Remove-ContosoUser and moved it into a module.
18:41
CLS, that never happened. We've moved our function into a module, and then management came back and said: hey, that retirement script you wrote is great, but we need to add one simple thing to it. That executive we terminated last week, we actually needed to archive
19:05
her user folder instead of just straight deleting it. Okay, we'll add one thing. But because you're accepting a string for username, you have to actually go out to active directory and query for department. And then you have to act on department. If it's an executive, then you zip the folder.
19:22
Now we're at four things in this function: query AD, conditionally zip the folder, remove the folder, remove the AD user object. Break that down. We need to break it down into more digestible chunks. So if we close this, we have Remove-ContosoUser2, and
19:44
we have Remove-ContosoUserFolder. So now in our parameters, we're no longer looking for just a string (I'll get to the typing of that later), but we're accepting a richer user object, and we can now just check that department without having to go out to AD.
20:03
We still have the code path: conditionally zip the folder, and we still remove the directory. So when we talk about writing more testable code, we're kinda talking about the single responsibility principle of one function does one thing, and everyone in the crowd is going,
20:23
your function still does two things, dude, and it does. And you should break it down further, but we don't have time, so we'll just deal with this. It does two things, you can shoot me, that's all right. And now our adapted Remove-ContosoUser function
20:44
will also take an AD object if we need it. But it's just ultimately, we can get to the Active Directory part later, but it's just ultimately doing a conditional Remove-ADUser. And when you break it down, you decide: do you want it public
21:03
because other people can call it, or do you just want to break it down into a more private function that abstracts some of that code out of the basics. Okay, so: one function, one purpose.
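A rough reconstruction of where that refactor lands; the parameter names, the share path, and the 'Executive' department check are assumptions, since the transcript only describes the code:

function Remove-ContosoUserFolder {
    [CmdletBinding()]
    param (
        # A richer user object (e.g. Get-ADUser output), not just a string
        [Parameter(Mandatory)]
        $User
    )

    $Path = "\\contoso\users\$($User.SamAccountName)"

    # Executives get their folder archived before it goes away
    if ($User.Department -eq 'Executive') {
        Compress-Archive -Path $Path -DestinationPath "$Path.zip"
    }

    Remove-Item -Path $Path -Recurse
}

function Remove-ContosoUser2 {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory)]
        $User,                      # deliberately not strongly typed; see below

        [pscredential]$Credential
    )

    # Ultimately just a conditional Remove-ADUser
    if ($Credential) {
        Remove-ADUser -Identity $User.SamAccountName -Credential $Credential
    } else {
        Remove-ADUser -Identity $User.SamAccountName
    }
}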
21:24
Let's talk about parameter types. People that know me know that I have had trouble with strongly typed parameters in the past. So if you're writing a function that accepts an AD user object, like this one could, or if you're writing a function that accepts a VMware PowerCLI object, PowerShell best practice is to strongly type that parameter.
21:43
You type it, you let PowerShell do the heavy lifting for you. And that is great, except when we talk about isolating our dependencies. Because if you do not want Active Directory on your test runner, if you do not want PowerCLI installed, those types aren't available.
22:04
You try to import the module, and it looks for all required modules, and it checks all parameter types to make sure those types are available at runtime. And if they're not there, you're just gonna fail. So one of the ways you can get around that is by isolating
22:21
that call to your dependency into a private function. And this is moving away a little bit from that PowerShell best practice we talk about. But when we talk about falling in love, we talk about unconditional, warts and all, and this is just kind of a Pester limit.
22:40
Well, a PowerShell limitation that we want to work around, really. So for example, in our Remove-ContosoUser2, best practice would dictate that I strongly type this right here as an ADUser. And I have the comment up here: it should be this type. But if we want to run a unit test that fully isolates all these dependencies,
23:03
we can't do that, or it's gonna fail at runtime. So instead, you can pop a private function down here, call it Import-ActiveDirectory, and you can even pass the user if you want. And all this code is doing is it takes in the user.
23:20
It loads the ActiveDirectory module if it's not there, and then it explicitly casts the user object that you supplied into an ADUser to make sure that it's still a valid type. It doesn't even return anything; it just Out-Nulls all of it. All it is doing is validating that AD is available and
23:40
your user object is valid. And now you have Active Directory when you need it, but you can also completely isolate it in your tests. So we'll look at those tests in a second.
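A minimal sketch of that private helper, as described: load the module if needed, validate the object by casting it, return nothing. The function name matches the one mocked later in the talk; everything else is assumed:

function Import-ActiveDirectory {
    [CmdletBinding()]
    param (
        $User
    )

    # Load the dependency only when this helper actually runs
    if (-not (Get-Module -Name ActiveDirectory)) {
        Import-Module -Name ActiveDirectory -ErrorAction Stop
    }

    # Explicitly cast to prove the supplied object is a valid ADUser;
    # this throws if it isn't, and otherwise the result is discarded
    [Microsoft.ActiveDirectory.Management.ADUser]$User | Out-Null
}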
24:05
Drawback number three: can you use New-MockObject? I will show you an example of that real quick, in just a minute. Sorry: for strongly typed objects, not usually.
24:21
So for some types that are available without importing that module, absolutely you can. For others, if you don't have that module available, PowerShell doesn't know about the type, and New-MockObject cannot do anything with it.
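For instance (a sketch, not from the talk):

# Fine: PowerShell always knows about PSCredential, even with no extra modules
New-MockObject -Type 'System.Management.Automation.PSCredential'

# Not possible on a runner without the ActiveDirectory module: the type below
# can't be resolved, so New-MockObject has nothing to work with
# New-MockObject -Type 'Microsoft.ActiveDirectory.Management.ADUser'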
24:42
Okay, try hard mode. I said, when I learned unit tests, I went way into the deep end. I tried to test absolutely everything. I tried to fix 100% of the bugs, this test suite's gonna be great. Don't do that. You will never catch 100% of the bugs.
25:02
And one of the things that will help, I guess, keep your love with unit tests going, is that you exercise restraint in your test suite. So, Glenn Sarti talked yesterday about black box and white box tests. And basically, a white box test assumes that you have the source code, and
25:24
you know every single method and implementation that is happening inside of that code. In black box, pretty much you know the function, and the inputs, and the outputs, and that's it. As a default, all things equal. If you can black box test, please black box test.
25:43
And what that means in practice is, do not test your private functions. Now, are there exceptions to that? Of course there are. It's IT, there's exceptions to absolutely everything. But when you're writing tests, start by not testing your private functions.
26:04
Keep it simple, because if we posit that tests are meant to maintain your code, improve your code, and to help you refactor your code, well, you're gonna run into a lot of problems refactoring.
26:21
If you test every single step in your private functions, your refactors are going to be very painful. They're gonna punish you for trying to improve your code. So anywhere you can exercise that restraint, future you will thank you. Cuz we all hate our code in six months, right? It's not if the refactor will happen, it's when.
26:44
So let's look at the unit tests for our new functions, and some examples of parameter filter. Okay, so we have a test folder, and so
27:03
for our test file, we want to import our ContosoRetire module. We're using InModuleScope, and that's just basically: you need to be in the module scope if you want to test your private functions. And not just test; even if you wanna mock your private functions,
27:22
you need to be in module scope. You can be more targeted at the mock level (there's a ModuleName parameter), but it's up to you. I just hit the big easy button and wrap everything in InModuleScope. We're still tagging Unit.
27:40
So we create a mock object up here that we're going to act on later: Jane Doe is an executive. And again, because we've isolated Active Directory from this, we can just pass in a PSCustomObject very easily. And New-MockObject is a Pester command that works for known types.
28:02
In this case, it creates an empty credential object that you can just pass in, even to a strongly typed parameter. And now we're talking about guard mocks and worker mocks, and ParameterFilter down in the worker mocks. So if we wanna run Remove-ADUser, a worker mock provides a ParameterFilter.
28:25
And it says: this mock only occurs if you call Remove-ADUser with the Identity parameter and supply the mock user's SamAccountName. A worker mock, because it only runs in that scenario,
28:44
if you change your code and whiff on your worker mock, which happens a lot, you want the guard mock up top to say: okay, actually, at the beginning, for Import-ActiveDirectory and Remove-ADUser, do nothing. And then when we get into these conditions of specific ParameterFilters,
29:06
then we wanna do something. So we're mocking away Active Directory, because we don't care about the AD call at all; we trust it works. And because AD's not available, Remove-ADUser doesn't exist as a function.
29:20
So you need to create it. And probably that was a scoping problem in my first test. But you cannot mock what doesn't exist, so you need to create a dummy Remove-ADUser here.
29:43
And that acts as our guard mock in this instance. Because down here, we're testing a simple code path. We're saying: if we pass an Identity, we want to return the string 'no cred', and if we pass both Identity and Credential, we wanna return the string 'cred'.
30:06
The reason we use ParameterFilters is because, when you think about Remove-ADUser, it doesn't run without an identity. So one simple check you can do is just to make sure both that you are calling Remove-ADUser (with Verifiable) and that you're supplying the mandatory parameter.
30:24
Identity. So when we act on this, both of our worker mocks output strings, so we capture that in variables, $Act1 and $Act2. We Assert-VerifiableMock again to make sure that our worker mocks ran.
30:43
We don't put Verifiable on the guard mocks, 'cause they're just there to not break anything if we miss. And then we assert on our actions: code path one should return 'no cred', and code path two should return 'cred'.
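Put together, that test looks roughly like this: Pester v4 syntax, with the module and parameter names carried over from the earlier sketches (all assumptions):

InModuleScope ContosoRetire {

    Describe 'Remove-ContosoUser2' -Tag 'Unit' {
        # Arrange: a fake AD user, and an empty credential of a known type
        $MockUser = [PSCustomObject]@{
            SamAccountName = 'jdoe'
            Department     = 'Executive'
        }
        $MockCred = New-MockObject -Type 'System.Management.Automation.PSCredential'

        # You can't mock what doesn't exist, so stand in for Remove-ADUser
        function Remove-ADUser { }

        # Guard mocks: if a worker mock's filter misses, do nothing rather than break
        Mock Import-ActiveDirectory { }
        Mock Remove-ADUser { }

        # Worker mocks: only fire for the specific parameters we expect
        Mock Remove-ADUser { 'no cred' } -Verifiable -ParameterFilter {
            $Identity -eq $MockUser.SamAccountName -and -not $Credential
        }
        Mock Remove-ADUser { 'cred' } -Verifiable -ParameterFilter {
            $Identity -eq $MockUser.SamAccountName -and $Credential
        }

        # Act
        $Act1 = Remove-ContosoUser2 -User $MockUser
        $Act2 = Remove-ContosoUser2 -User $MockUser -Credential $MockCred

        # Assert
        It 'calls Remove-ADUser with the mandatory Identity parameter' {
            Assert-VerifiableMock
        }
        It 'takes the credential-free code path' {
            $Act1 | Should -Be 'no cred'
        }
        It 'takes the credential code path' {
            $Act2 | Should -Be 'cred'
        }
    }
}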
31:07
And if we quickly jump over to the user folder test: we have two mock objects now, because this is our code path, we care about the department. So we have Jane Doe as an executive, and Max Mustermann from IT. Your guard mocks here are now Compress-Archive and Remove-Item.
31:21
You don't have to put the brackets; that's the MockWith parameter, which is totally optional. So we'll do nothing with them until they hit a worker mock. Excuse me. This worker mock: Compress-Archive requires a Path and a DestinationPath.
31:40
But we don't care where those are. I don't care what was supplied to it, because we can change our user shares without breaking our code; we don't wanna break our code just because we migrated a user folder. So we just check that they're there, and you can return a string on that. And the worker mock for Remove-Item just ensures that you've
32:03
specified any Path and the Recurse parameter. So your acts down here: again, capture the variables, Assert-VerifiableMock, and your Compress-Archive mock returned a string, so that should be 'zip'. The other mock did not return a string (or you could, either way), so you just check Should -BeNullOrEmpty.
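And the user folder test, sketched the same way (hypothetical names again; note the dummy function for Compress-Archive, which the failed demo below runs into):

InModuleScope ContosoRetire {

    Describe 'Remove-ContosoUserFolder' -Tag 'Unit' {
        $ExecUser = [PSCustomObject]@{ SamAccountName = 'jdoe';        Department = 'Executive' }
        $ITUser   = [PSCustomObject]@{ SamAccountName = 'mmustermann'; Department = 'IT' }

        # If Compress-Archive isn't loaded on this runner, create it before mocking it
        if (-not (Get-Command Compress-Archive -ErrorAction SilentlyContinue)) {
            function Compress-Archive { }
        }

        # Guard mocks
        Mock Compress-Archive { }
        Mock Remove-Item { }

        # Worker mocks: we only care that the right parameters were supplied,
        # not what the actual paths were
        Mock Compress-Archive { 'zip' } -Verifiable -ParameterFilter {
            $Path -and $DestinationPath
        }
        Mock Remove-Item { } -Verifiable -ParameterFilter {
            $Path -and $Recurse
        }

        # Act
        $Act1 = Remove-ContosoUserFolder -User $ExecUser
        $Act2 = Remove-ContosoUserFolder -User $ITUser

        # Assert
        It 'calls the expected commands with the expected parameters' {
            Assert-VerifiableMock
        }
        It 'archives an executive folder before removing it' {
            $Act1 | Should -Be 'zip'
        }
        It 'just removes a non-executive folder' {
            $Act2 | Should -BeNullOrEmpty
        }
    }
}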
32:25
And, boy, again. Are you gonna be nice to me?
32:45
You're not gonna be nice to me. Could not find command Compress-Archive. Okay, so for whatever reason in VS Code, Compress-Archive isn't even loading into my session. That's the same example as what we did here with Remove-ADUser.
33:04
It couldn't find the command, so we created it and then mocked on top of it. I don't know why it doesn't see Compress-Archive; I made an assumption that it would. So maybe you need to add a function there as well,
33:20
so the worker mock can act on it. All right, when you start using ParameterFilter mocks, a couple of gotchas with Pester right now: parameters must be explicitly named. So if you are calling functions with positional parameters,
33:41
ParameterFilter will not figure that out. And if you are piping things in to a parameter, ParameterFilter also will not figure that out. So you need to know that going in, at least with the current version of Pester. And a tidbit from the Pester book: common parameter names may collide with Pester internals. I haven't run into that, so I don't know which names it is, but you can leverage $PSBoundParameters if you hit that.
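A small illustration of those gotchas (hypothetical path; the behavior described is the talk's, as of the Pester version current at the time):

Describe 'ParameterFilter gotchas' -Tag 'Unit' {
    # Guard mock first, then a worker mock that keys off the named -Path parameter
    Mock Remove-Item { }
    Mock Remove-Item { 'matched' } -ParameterFilter { $Path -eq 'C:\Users\jdoe' }

    It 'matches when the parameter is explicitly named' {
        Remove-Item -Path 'C:\Users\jdoe' | Should -Be 'matched'
    }
    # Per the talk, a positional call (Remove-Item 'C:\Users\jdoe') or a
    # pipeline-bound value may not be seen by the filter, so name it explicitly.
}

# And if a parameter name collides with Pester internals, the Pester book's
# suggested workaround is to read it from $PSBoundParameters instead:
#   -ParameterFilter { $PSBoundParameters['Path'] -eq 'C:\Users\jdoe' }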
34:02
I haven't run into that, so I don't know which names it is, but you can leverage psbound parameters if you hit that. So last drawback, number four, we're talking about consistency. So when we talk about unit tests,
34:22
and stripping away all external dependencies, we are trusting that that dependency will not inflict any unnecessary breaking changes on us. So if we're optimistic about that and trusting, we also need to look inward, because as users begin to consume our product,
34:43
we need to make sure that we're not inflicting unnecessary breaking changes on them. And sometimes you notice that, and sometimes you don't. Large pull requests happen, and human code reviews are fallible. If you have a small product, you might think, okay, I'll just review every pull request,
35:03
and it'll be fine, I'm a smart person, I can catch everything. As pull requests increase, you won't give 100% to every single code review. Because we are all in various stages
35:21
of being total Git noobs, large pull requests will just come through, and there will be hundreds or thousands of lines of code that you will just skim; it happens. So the idea here really comes down to negative tests. A negative test expects that you should fail,
35:42
you should throw an exception. So your function has positional parameters. When I give it a string and then an integer, it should work, but if I give it an integer and then a string, it should throw. Because if I change the order, if I change those positional parameters,
36:03
that is a breaking change. People are writing scripts that may depend on those positions. But when you think about it, some of these, when we talk about parameter metadata, we don't want to write tests like this. Don't put these into every single unit test,
36:22
because you will do a better job of catching edge cases than your coworker, but you'll both miss some stuff. This just makes your unit test longer, harder to read. We want to codify that and test for it once at the top level.
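For what it's worth, such a negative test might look like this (a made-up function; and again, you'd rather codify this once at the module level than paste it into every unit test):

function Get-Widget {
    param (
        [Parameter(Position = 0)] [string]$Name,
        [Parameter(Position = 1)] [int]$Count
    )
    "$Name x $Count"
}

Describe 'Get-Widget parameter contract' -Tag 'Unit' {
    It 'works with a string and then an integer' {
        { Get-Widget 'doodad' 3 } | Should -Not -Throw
    }
    It 'throws with an integer and then a string' {
        # Reversing the positional order is a binding failure,
        # which is exactly the breaking change we want to guard against
        { Get-Widget 3 'doodad' } | Should -Throw
    }
}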
36:43
So there are many ways you can solve this problem. The way I solved it is by writing a module, and I called it oops, because you should say oops before you deploy and not after. So the idea here is we have a module,
37:02
and we want to, if your module is at a known good working state, you want to record the relevant metadata of its commands and parameters. And then if you store that in JSON with the tests file,
37:22
and then every single commit, or whatever interval, you can run, you can compare the current state of the module against your saved JSON. And that asserts things like, okay, we want to know,
37:40
because if our function supports pipeline input, removing that would be a breaking change, right? Well, sure, that might be obvious to catch in a code review, but it might just be really easy for a robot to catch as well. If we remove pipeline input, we do want to fail a test
38:01
and say, hey, you're making a known breaking change. But if we add pipeline input, that's a feature. We're doing good. We do not want the test to fail. We just want to say, hey, we're shipping good stuff. It shouldn't feel painful to ship good stuff. So it records conditionally some of these parameters.
38:22
So if we import this module and get commands, going from the bottom: we've got Get-Parameter, Export-Parameter, and Assert-Parameter. So if you store the commands in a variable with Get-Command, and we pipe that in to Get-Parameter,
38:45
our module has four commands, and some of them have an array of parameters, and then we record the relevant metadata of those parameters. So let's look at that. If we pipe Get-Parameter to Export-Parameter,
39:01
and then we just open that in code, this is what the JSON could look like. So for example, the Assert-Parameter command is stored here, and if you remove Assert-Parameter (also a breaking change), it's in the JSON, it should exist,
39:20
but we also care about some of these parameters. So we always care about name and type and position, because if you change those, that's always a potentially breaking change you should know about. But there's also a small number of other things that we can conditionally check. So if a parameter is not mandatory
39:41
and you make it mandatory, breaking change. But if it is and you remove it, we don't care. So there are some conditional additions down here. So we store all four of these commands in our JSON, and we have that just live side by side
40:01
in our tests folder with our other tests. So now, think one week, one month, one year in the future: you have that known-good JSON, and you wanna check the current state of your module. So we get the current state of the module (Get-Command piped to Get-Parameter), and then you pipe it in
40:22
and Assert-Parameter against your known-good JSON. And this just runs a set of Pester tests that asserts all of the data in your JSON matches what's happening currently in the console.
40:41
And of course it passes because I just created it, but down the line, you still want it to pass, right? So you record what's good about that. And then we wanna store this in one module level file. Instead of gumming up all of your unit tests, you do it once at the top level and forget about it.
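His module does the heavy lifting, but the bare idea can be sketched without it, something like this, with assumed file, module, and property choices (not the Oops module's actual output format):

# Run once at a known-good state to record the contract beside your tests:
$Snapshot = "$PSScriptRoot\ContosoRetire.parameters.json"
Get-Command -Module ContosoRetire |
    ForEach-Object {
        [PSCustomObject]@{
            Command    = $_.Name
            Parameters = @($_.Parameters.Keys)
        }
    } | ConvertTo-Json -Depth 5 | Set-Content -Path $Snapshot

# Then a module-level test compares the live module against the snapshot:
Describe 'Public command contract' -Tag 'Unit' {
    $Known = Get-Content -Path $Snapshot -Raw | ConvertFrom-Json

    foreach ($Command in $Known) {
        It "still exports $($Command.Command) with its recorded parameters" {
            $Live = Get-Command -Module ContosoRetire -Name $Command.Command
            foreach ($Name in $Command.Parameters) {
                $Live.Parameters.Keys | Should -Contain $Name
            }
        }
    }
}

Adding a brand-new parameter still passes (shipping good stuff shouldn't hurt); removing or renaming a recorded one fails the test.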
41:02
So if you get your command data here, going in from the bottom up, this is the same thing: we Get-Parameter and assert against the JSON. But for a bonus round, we can also assert help coverage. So two years ago, June Blender gave a talk here about Pester and help-driven development.
41:22
And one of the things that came out of that is she wrote a nice script file to make sure that all of your functions have proper help coverage. So we can easily fail against things like that. So I just repurposed that and parameterized it for use in the module. I did not incorporate PSScriptAnalyzer,
41:42
but this is an example of how you can check your commands against the default rule set.
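In the same spirit, a minimal module-level sketch of those checks (not June's actual script; the module name, folder layout, and rule selection are assumptions):

Describe 'Module-level quality checks' -Tag 'Unit' {
    $Commands = Get-Command -Module ContosoRetire -CommandType Function

    foreach ($Command in $Commands) {
        It "$($Command.Name) has a help description" {
            (Get-Help $Command.Name -Full).Description | Should -Not -BeNullOrEmpty
        }
    }

    It 'passes the default PSScriptAnalyzer rule set' {
        # Requires the PSScriptAnalyzer module on the test runner
        Invoke-ScriptAnalyzer -Path "$PSScriptRoot\..\ContosoRetire" -Recurse |
            Should -BeNullOrEmpty
    }
}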
42:05
So if we invoke Pester on our module-level file, of course, I get 132 exceptions. The network path to my C drive was not found. And you'd think there's no possible way the demo gods could screw me on this one.
42:24
Yeah, I chose to use VS code, that's true. It's getting better, but I unfortunately am reaping the rewards right now.
42:43
That's true. Okay, we're gonna move on,
43:00
because it's doing the starting PowerShell thing again and not starting for me. Well, maybe we can jump to, now I'm really asking for it is right. Yeah, all right. Oh, I don't even have pestering.
43:21
All right, well, we wrap up. So, the Oops module: the idea there is basically you can very easily run a check right at the beginning. My parameters haven't changed.
43:41
My help coverage is valid. It passes all the PSScriptAnalyzer rules. You do all that linting up front; you keep it out of your unit tests, keep your unit tests simple. So those are the drawbacks, but it's also kind of about making your love last. Because when we talk about this, I want you to know the potential drawbacks
44:04
of unit testing going in, so that you don't overdo it like I did, and get into something that becomes unmaintainable. So how can you love unit tests? Write as few of them as possible.
44:21
Absence makes the heart grow fonder. No, really: just exercise restraint in your test suite. You don't have to test all of the things. Exercise some restraint. Trust is free. If you trust your external dependencies,
44:42
you test a lot faster. You only care about your logic, your function. Your unit tests become much faster. They become independent of other tests and your infrastructure and endpoints, and they're repeatable.
45:01
So again, these tests become portable. You can run them anywhere. There's no longer any, well, worked on my machine. And how to keep loving unit tests. Single responsibility functions. One function, one thing. Know the caveats about strongly typed parameters
45:23
and what you can do with them. Push everything possible toward the module level. Everything you can bump to the module level keeps your functions, unit tests, short, sweet, to the point. All that linting and stuff up front.
45:42
And when in doubt, keep it simple. If you're gonna err to one side or the other when you start writing your tests, please consider going black box mode and not testing your private functions until you have a reason to otherwise. And now that you know the basic outline
46:00
of what a unit test looks like, you can consider test-driven development. You don't do it right away, but once you're comfortable unit testing, once you go home and just start writing, now you feel more comfortable. Maybe next time I'll write the tests first and backfill the code to do red-green. And some quick resources.
46:21
xUnit Test Patterns: I tried to keep this as independent from the PowerShell community as I could, to bring in some wider software testing concepts. This is about 10 years old, but a great resource for these concepts. The Practical Test Pyramid is an article that was posted on Martin Fowler's website earlier this year.
46:42
It's a great update and kind of a CliffsNotes to xUnit Test Patterns, and because it's an update, it's refreshed for DevOps patterns and CI pipelines, things like that. So you can go check that out. Certainly the Pester wiki for all your syntax and command needs.
47:00
Adam Bertram's Pester book. Don and Jeff's Scripting and Toolmaking book has a Pester section now. And Summit resources: my stuff's up on GitHub and Speaker Deck. Glenn talked yesterday about tending to your unit test suite, not letting it get too bloated. Dovetails perfectly with what I'm talking about here.
47:21
Just keep everything tight. You don't want to be testing things, testing too much and letting it get out of hand. Last year Chris Hunt did a great deep dive into mocking. If you have questions about how do I test X, go check out Chris Hunt's repo. And June Blender two years ago on YouTube about Pester and help-driven development
47:42
was great as well. Thank you very much for being here. This was almost super awesome. Next time that I give this talk, the demos will work for sure. I hope that this has motivated you to be more comfortable exploring unit tests
48:02
when you go home. And I hope that you will share the love. Thank you very much. Thank you very much.