
How I Learned to Stop Worrying and Love Unit Testing


Formal Metadata

Title
How I Learned to Stop Worrying and Love Unit Testing
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
We all know that testing is important. But it's also hard to get right. We'll talk about how to write effective tests that not only protect against defects in our code, but encourage us to write better quality code to begin with. You'll leave this talk with ideas on the philosophies that should inform your tests, and a good idea of what makes a good test suite.
Transcript: English (auto-generated)
Welcome to my talk, entitled How I Learned to Stop Worrying and Love Unit Testing. I'm Valerie Woolard Srinivasan. I'm a software engineer at Panoply,
where I help build tools for your favorite podcasts. You can find me in the hallway later or on Twitter. My handle is ValerieCodes for podcast recommendations. But right now, we're going to talk a little bit about unit testing. Before we get into the talk, I also want to take a moment to appreciate
our location here in the beautiful city of New Orleans, and to borrow an Australian tradition called the Acknowledgement of Country, which is something I first saw Pat Allan do at RubyNation, to acknowledge the native people who first lived on this land. I take time to acknowledge the Choctaw, Houma, and other tribes, the traditional custodians of this land,
and I extend that respect to other indigenous people who are present. Thank you for taking the time to appreciate their contributions to this land and this culture with me, and for coming to attend this talk. Let's get started. I'm happy to be kicking off the testing track.
I hope that this can teach you something if you're already doing testing, and give you an idea of where to start if that's something that's new to you. I chose this GIF of Kermit the Frog, partly because I love Kermit the Frog, and it made me laugh. But the other reason is that I love the carefree nature of Kermit the Frog.
I think that a lot of the power that testing gives us is confidence. And good tests allow you to be confident that you won't accidentally break something. Good tests allow you to verify that a change in your code isn't actually changing the functionality of your program.
And good tests let you worry less. And maybe they don't let you be as carefree as Kermit, but I'm hoping that they can come close. So let's start right off the bat with what are tests even for in the first place? Some of you may be fairly new to software, and just know that testing is probably something
that you should do without really knowing why. I talked a little bit about confidence, and one way you can think about tests is also as an act of kindness. When you write a test, you are taking the time to verify that your code works. This is a favor to your teammates and your future self. Think of it as an investment.
Coming into a new code base or starting on a new project often feels familiar in certain ways. Maybe it's a language you're familiar with, or a kind of program that you've written before, but there may be little gaps or assumptions that you make that are incorrect. It's kind of like playing croquet with a flamingo and a hedgehog.
There are ways in which things might seem familiar, but there are ways that they are really not at all what you expect. Your old assumptions may not hold true, and you have to figure out where the gaps are between your understanding and reality, and tests can help you bridge that gap. Who are tests for?
So let's take a moment to talk about that. There's a bit of a story behind this quote. My parents are both in the art world and got me this book, written by a friend of theirs, about being a professional artist. I'm pretty sure they were hoping I'd be a professional artist. I'm not, but I really love this book, which is called Art and Fear,
because I think that it's incredibly relevant to programming and all sorts of creative work. I'm really obsessed with it. This will not be the last time I quote it in this talk. I have an idea for this talk where I just lecture about this book in the context of programming, but that's for another day. I had to sneak it in somehow.
So: "To all viewers but yourself, what matters is the product, the finished artwork. To you, and you alone, what matters is the process." In the context of software, the process of creating your artwork is often a collaborative one, so it's definitely a little bit different from being a solo artist,
unless you're just working on a side project by yourself. That said, writing tests is part of the process of writing software, not the finished product. A user of your application doesn't care about your tests as long as the product works. You should care about writing tests, because they will help you build a product that works.
If you're a user, you'd probably rather use a product that works perfectly and has no tests, but as a software developer, you'd much rather work on a product that has some issues but is well tested. So the first and perhaps most obvious function of tests is protecting against bugs, or at least catching those bugs
before they make their way to production. The software that you're working on is likely to be a very complex system. You're probably not going to be the only one modifying it, and it's not going to be possible to keep track of or understand what's going on in every part of the application at any given time. A test, therefore, can serve as an audit trail
of how something is supposed to work or used to work. When a test fails, you should have some proof or paper trail of when that test used to work so that you have an easier time identifying what broke it. You can use tests to prove that your code
does what it says it does when you write it. Ideally, you can use a continuous integration process that will test your code every time you make a new commit. That way, when your test is green one commit and red the next commit, you can be pretty confident
that some code that you changed in between those two commits has broken it. Good tests coupled with continuous integration will allow you to pinpoint and quickly correct any code that breaks those tests. Another great asset that tests can provide for you is documentation.
You may be more familiar with writing documentation in the form of comments or internal wikis, but that can get out of date very quickly and cause you some trouble. For example, say I've written a method that always returns the number four, along with a comment saying that my method only returns the number four. However, let's say I then change the method so that it returns the number five. There is nothing that is going to force me to now update that comment to make it reflect the actual state of the code base.
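The slide code isn't captured in the transcript, so here is a minimal reconstruction of the idea; the method name is made up:

```ruby
# Returns the number four
def magic_number
  5   # the return value changed, but nothing forced the comment above to change
end

# A test, unlike the comment, fails loudly as soon as the behavior changes:
RSpec.describe "#magic_number" do
  it "returns the number four" do
    expect(magic_number).to eq(4)   # goes red once the method starts returning 5
  end
end
```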
And if I change the return value of the method, the comment remains there as this terrible, blatant lie. Tests, on the other hand, have to be updated when they fail, especially if you're using continuous integration. So they're more likely to reflect the current state of the code base than your comments. Take, as a counterexample, the test I've written
for this same method that always returns the number four. In the above example, nothing forces the comment to be updated when the output of the function is actually switched to five. And in a less trivial case, this could lead to comments giving future developers, including yourself,
misleading information about the actual state of your code. Another classic is the comment that says "talk to so-and-so before you change this," when so-and-so hasn't worked at the company for six months. Good tests can serve as an introduction to your code. Well-written tests, along with well-named functions
and variables, should explain the desired behavior of your code as well as testing its functionality. Tests can also help prompt you when your code is getting too complex and help encourage good practices. As Sandi Metz said in Practical Object-Oriented Design
in Ruby, tests are the canary in the coal mine. When the design is bad, testing is hard. Writing tests as you write your code will help give you an idea of where the complexities in your code are. If you're not sure how to test a method, that method might be too complex and need to be broken up into smaller functionality.
If you find yourself having to write lots of scaffolding in order to even run your tests, that can help to signal that you may have too many external dependencies. The Law of Demeter in Object-Oriented Programming states that each unit should only have limited knowledge of other units and that a unit should only talk
to its immediate friends. So unit testing is a great way to sort of enforce that methodology. How much does your class actually need from its neighboring classes? And if it's too much, then writing unit tests will be more complicated, which should serve as a hint to you to go back and simplify the code that you're testing.
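Here's a hedged sketch of what that looks like in practice, using hypothetical Order and Customer objects that aren't from the talk:

```ruby
# A Demeter violation: Order reaches through Customer into Address, so a unit
# test for #shipping_city has to build or stub the whole chain.
class Order
  def initialize(customer)
    @customer = customer
  end

  def shipping_city
    @customer.address.city   # order -> customer -> address
  end
end

# Simpler to test: the order only talks to its immediate friend.
class SimplerOrder
  def initialize(customer)
    @customer = customer
  end

  def shipping_city
    @customer.city           # the customer answers #city itself
  end
end

RSpec.describe SimplerOrder do
  it "asks the customer directly for its city" do
    customer = double("customer", city: "New Orleans")
    expect(SimplerOrder.new(customer).shipping_city).to eq("New Orleans")
  end
end
```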
Another thing that tests do is allow for confidence in refactoring. If you've got some code that's written in a way that makes it very difficult to reason about, you can't rewrite it if you can't reason about it in the first place. How are you going to preserve all that functionality without knowing quite what all that functionality is?
Without tests, it's impossible to make changes to your code and ensure that you haven't broken other things. You are indeed living very dangerously because your only conception of how your code is supposed to work is within your own head, and you have no way of verifying
that this actually lines up with how it actually works or how it was originally intended to work, which could be three completely separate things. Efficient refactoring is only possible with a well-written test suite. This is an example of how taking the time to write good tests will save you time in the future.
Refactoring is a hugely important part of writing good code, just as revision is an important part of writing. Remember in high school when you were writing an essay and you would type and type and type until you reached the word count, and then you would print it out
and never look at it again? That's not the way that you wanna write code. You're not only depriving yourself of the chance to introspect a bit more about the code you've written while it's fresh in your mind, you're depriving future developers of more insight into how your code is supposed to work
and how to tell if it is working. This knowledge is essential to making changes later. Here's Art and Fear again. This quote struck me in the context of testing because I feel like it's also kind of about refactoring: you make good work by, among other things, making lots of work that isn't very good
and gradually weeding out the parts that aren't good. I think this is absolutely true about programming as well. I do most of my work by just putting my fingers to the keyboard and reasoning out the problem at hand in whatever way first comes to me. This is often pretty repetitive and sometimes needlessly complicated,
but this rough draft and starting point is essential to growth. Being able to edit and make improvements to your code over time is an essential part of improving as a developer, but you can't remove the cruft without understanding exactly how your code is supposed to work and crucially, when it stops working.
That's the insight testing gives you. When you write tests, that gives you a chance to look through your code to get a sense of how you feel about it. Are the variables well named? Is the logic clear? Do you feel like you're doing something hacky? Do a gut check and make changes where you see fit.
After all, you've got some tests now so that shouldn't feel quite as scary. So these are the things that tests can do for you, but what exactly makes a good test? Obviously not all tests are created equal, but what differentiates them? A good test suite really only has to do two things.
It needs to break when code that you've changed has broken your app and not break the rest of the time. Both of these things turn out to be much easier said than done. As it turns out, it's very difficult to predict what parts of an application are most likely to break with future changes and write tests in a way that exposes those things.
It's also quite easy to write tests that break in ways that don't actually indicate failures in your code, like a copy change, a timeout because something takes too long, or a change in the formatting of an output. You should also be wary of tests that might be prone to time out or that assume anything that could change,
such as say the year or anything about the environment that it's being run in. So this talk is about unit testing, but we haven't really differentiated the different types of tests and what they really mean. You'll hear slightly different definitions
from different sources. These are the definitions that are used in the Ruby on Rails guides. There are unit, functional, integration, and system tests. A unit test tests the smallest functional unit of code that it can, such as a single method on a single class.
An integration test tests aspects of a particular workflow and thus tests multiple modules and units of your software at once. You might create an integration test to make sure that users can log in or create accounts. Functional tests look at controller logic and interactions. They are testing that your application handles
and responds to requests correctly. These are, for example, the tests on your controllers in a Rails app. System tests test user interactions with your application, running in either a real or headless browser, again in the case of Rails. System tests are like an automated QA script
and probably most closely mirror the way you would perform manual QA in an automated environment. So out of all these types of tests, I'm choosing to focus on unit tests. So why is that? Of the test types I talked about, you'll notice that unit tests are by far the simplest. I like them because they're easy to write,
easy to run, easy to reason about, and also help encourage modularity and cleanliness in your code as we discussed earlier. Individual methods are probably the things in your code that you have the best understanding of, making them easiest to write good tests for. And if your code is clean and modular,
well-tested units should lead to a functional application. Unit tests are also very fast to run since they have the fewest dependencies and don't require spinning up something like a headless browser. That said, there will be times when you have to write other types of tests, but you should be thoughtful about when those times are
because of the overhead involved. It's probably a good idea to have integration tests for the most critical workflows of your app, for example, or in the case of a company, the things that would be most likely to lose you a lot of money quickly if they were broken and shipped to production. System tests can be used to test important workflows
but can be slow and brittle, so should be used with caution to supplement a robust unit test suite. So here are some of the things that you might be interested in testing. This is by no means an exhaustive list, and there's plenty of documentation to be found online
as to how to test different things, but these are some of the things that I find myself testing for most often. The most important thing to keep in mind as you start testing is to keep things simple. Each test should look at only one very small thing.
Each of the bullet points I've listed here is an example of what's called an assertion, and for the most common one, at least for me, I've given an example. You're establishing an idea of what your code should do, and when you run the test, the computer will tell you if it actually does or not. I've added in an example of using RSpec
to run a few tests in just a simple IRB console. It may be a little too small to read, but these slides will be online if you wanna take a look. It's basically, I'm saying expect two plus two to equal four. That returns true. I then expect two plus two to equal five,
and I get an error, expectation not met error, expected five, got four. So that's kind of the syntax that you'll be looking at. So the first time I run the test, it passes, and when the test fails, the failure message includes the expectation that was not met,
the expected value, and the actual value. It's important as you first start testing to make sure that you glean as much information as you can from these error messages, because they can tell you a lot. Don't just say, "oh, my test failed," and go back to the drawing board; really take a moment to think about what it's trying to tell you.
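The IRB example from the slide isn't in the transcript, but it looks roughly like this, using RSpec's expectation syntax standalone:

```ruby
require "rspec/expectations"
include RSpec::Matchers

expect(2 + 2).to eq(4)   # passes (returns true)
expect(2 + 2).to eq(5)   # raises RSpec::Expectations::ExpectationNotMetError
                         #   expected: 5
                         #        got: 4
```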
For example, you can test the exact value of a return value. Incidentally, you can also use array matching, greater than, less than, includes, the whole deal. You can make sure that a method causes another method to be called.
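Here's a hedged sketch of a few of these assertion styles; the Checkout, FollowUpJob, and Order names are hypothetical, and the job and record matchers assume rspec-rails:

```ruby
RSpec.describe "completing a checkout" do
  it "tells the mailer to send a receipt" do
    mailer = double("mailer")
    expect(mailer).to receive(:send_receipt)        # one method causes another call
    Checkout.new(mailer: mailer).complete
  end

  it "enqueues a follow-up job" do
    expect { Checkout.new.complete }
      .to have_enqueued_job(FollowUpJob)            # rspec-rails ActiveJob matcher
  end

  it "creates an order record" do
    expect { Checkout.new.complete }
      .to change(Order, :count).by(1)               # database object is created
  end

  it "raises a descriptive error on an empty cart" do
    expect { Checkout.new(cart: []).complete }
      .to raise_error(ArgumentError, /empty cart/)  # exception type and message
  end
end
```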
You can check that a job is enqueued. You can check that a database object is created or destroyed. You can see if something is truthy or falsy or whether running a particular bit of code throws an exception and the content of that exception. This is, as I mentioned, not an exhaustive list,
but instead meant to get you thinking about how you might start writing unit tests for your code. So what is a unit test? As we noted, a unit test looks at the smallest possible unit of code. In the case of Ruby, that's a single method on a single class.
In unit testing a method, you should think about all the reasonable inputs to that function, as well as how the method should respond to invalid inputs. If your method uses branching or conditional logic, you should have a unit test that hits each possible branch or combination of branches.
Here's an example of how I might approach unit testing this method that I wrote called greet. It's a fairly trivial method. It takes someone's name and says hello to them. If it gets a weird input, like anything that's not a string, it just says hello and mentions that it didn't catch the person's name.
That's a design decision on my part. I could also throw an exception or just call to_s on any input that I received. The code examples here use RSpec, but the general ideas should be applicable to whatever testing framework you're using. Because I have two possible conditions
in my return values, as I mentioned, I wanna at the very least test every branch of any conditional logic that I'm using, so I know that I need at least two unit tests. If I wanted to be especially thorough, I might test for other edge cases, like different types of non-string input. You can see how if you have lots of branching logic
that this can multiply and get complicated very quickly, so let that serve as yet another incentive not to nest too many conditions in a single method. I've written a simple test for each of the possible conditions in this case. I've written two unit tests,
one that calls the method using the expected input. This is something you might see called a happy path. I call the method using my name, and it says, hi, Valerie. And then I call the method using the number two, and I make sure that I get the response saying, hi, I didn't catch your name. Just as a syntactical note here,
I apologize for having to use a ternary operator here. I did it so I could fit everything on the slide. If folks are not familiar with that syntax, the basic idea is that it's just an if-else statement: the condition comes before the question mark, the return value for the if comes right after it, and then the return value for the else condition comes after the colon. So the site Better Specs has tons of resources on testing best practices, and also on ways to write your tests, down to the grammar of how they should be written,
so that the printout of what failed and what passed in your test suite is as easy as possible for you to read. The same principles of readability that apply to your code are probably even more important in your tests. My general rule is that an English speaker
who doesn't know Ruby should be able to read your test and have an idea of what it's doing. In RSpec, things are named very deliberately to allow for this. You'll notice that the test syntax is it, then a string, then do,
and in the string you can give an English description for what you're actually testing for, and then the syntax here, expect this thing to equal this thing, should be fairly easy to parse. The less readable your code is, the more straightforward your test should be,
although both should ideally be as straightforward as possible. So we've talked about what's good about tests and why you should write them, but we've probably all been in or will be in situations where test writing is skipped or overlooked.
So why is that? What makes testing hard or causes it to be passed over? A lot of the challenges around testing boil down to time: developer time, computational time, the passage of time. Writing tests takes time. The first time you're writing something,
you're probably testing all your code manually in a development environment. You've convinced yourself pretty well that it worked and it doesn't seem like it's worth it to write tests then. Why write automated tests? That takes time that you could be spending writing your next feature. Tests also take time to run.
And if you've configured a continuous deployment environment where your tests have to all pass before you can deploy to production, this in one way is very deliberately slowing down your deploy process. This can make it more frustrating to push, say, an urgent fix through,
especially if those tests take a long time to run or have spurious failures. You can also mitigate this by writing fast tests, such as unit tests as we talked about, keeping those tests simple, and using tools like Zeus to help load your tests faster.
Testing anything that involves time can also be challenging. Let's say you wanna check that a timestamp is correctly recorded, but fractions of a second elapse between the time the object is saved and the time the test is run, so the matching test actually fails. What if your test server is in a different time zone than your development and production environments?
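Here's a minimal sketch of one mitigation, freezing time with the Timecop gem; the Order model and its placed_at timestamp are hypothetical:

```ruby
require "timecop"

RSpec.describe "recording when an order is placed" do
  it "stores the exact placement time" do
    frozen_time = Time.utc(2018, 11, 13, 12, 0, 0)   # arbitrary instant
    Timecop.freeze(frozen_time) do
      order = Order.create!                          # hypothetical model
      expect(order.placed_at).to eq(frozen_time)     # no sub-second drift, no time zone surprises
    end
  end
end
```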
These issues can be mitigated by using gems like Timecop, which allow you to freeze time in your testing environment, as in the sketch above. And the final aspect of time that can be difficult is knowing at what point in the process to write your tests. And if you're waiting to write them until the end
or trying to write them long after writing the code under test, it can be hard to remember what exactly you were doing and what things are most important to test. We tend to overestimate our own abilities to remember things, and you're likely to forget the context for your code and decisions very soon after writing it.
I think we've probably all experienced the moment of coming to a piece of code that we wrote maybe six months ago and not remembering anything about it or the state of mind that we were in when we wrote it. Even if you're not using true test-driven development, you should be writing tests alongside your code,
and writing out an idea of what you want your test to look like before you begin writing code can be a helpful exercise in thinking about how you wanna structure your code. Another really challenging part of testing, if you're faced with an app or code base with no tests,
is trying to figure out where to even start testing it. Instead of trying to take on the monumental task of writing tons of tests at once, instead make sure that every new piece of code that you add is well tested. You can choose a testing framework and use tools to get an idea of your test coverage, and then you can just start chipping away at it.
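One way to get that coverage picture in Ruby, assumed here rather than named in the talk (Coveralls, mentioned a bit later, builds on the same data), is the SimpleCov gem:

```ruby
# spec/spec_helper.rb -- SimpleCov has to start before application code is loaded
require "simplecov"
SimpleCov.start do
  add_filter "/spec/"        # don't count the specs themselves toward coverage
  minimum_coverage 80        # optional: fail the suite if coverage drops below 80%
end
```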
I like to think about the Boy Scout rule, where you leave your campsite cleaner than you found it when you got there, and leave the code cleaner and better tested than you found it with every pull request. Are you fixing a bug? Stop before you fix it. Write a test that exposes it and fails.
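As a hedged sketch of that bug-first workflow, with a made-up subtotal method:

```ruby
# Step 1: a spec that exposes the bug. Against the buggy version it is red,
# because [10, nil, 5].sum raises a TypeError when it hits the nil.
RSpec.describe "#subtotal" do
  it "ignores items with no price" do
    expect(subtotal([10, nil, 5])).to eq(15)
  end
end

# Step 2: the fix. Dropping the nils turns the spec green and guards the behavior.
def subtotal(prices)
  prices.compact.sum
end
```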
Now go and fix the code and turn it green. Are you writing a method? Make sure that you have a test, a separate test, for every branch of its conditionals. Think about edge cases, as we were talking about. Let's say the method gets passed a nil value, a string instead of a number, a negative number,
zero, a really big number. Think about the ways you want that method to respond in those conditions, and write tests that validate that behavior. Favor tests over comments as a means of explaining your code to future engineers. Get everyone on your team to agree
that testing is important, and agree on a strategy. You can use tools like Coveralls to give you feedback about what code is and isn't covered by your existing test suite. Once you have a starting point, you can set a goal, for example, to increase code coverage by 5%. You can also decide, for example,
that all PRs need to include tests in order to be merged. Now, another thing that's really hard to test is external systems or dependencies. Let's say that your code makes a call to an external API and then parses the response. It's inefficient and clunky to actually make that call every time you run your tests, so what can you do? You have a few options. You can mock out a valid response in your test and just make sure that you're doing the correct operations on it. You can use a tool like WebMock to construct fake HTTP responses, or a tool like VCR to make a real request once and record the result, performing all future tests from the recording. Keep in mind that both of these methods rely on the current state, or your interpretation of the current state, of the API response format, and if that changes, it has the potential to break your live app but not your tests.
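Hedged sketches of both approaches; the endpoint, the JSON body, and the FeedClient class are all made up for illustration:

```ruby
require "webmock/rspec"

RSpec.describe "fetching a show from the API" do
  it "parses the title out of the response" do
    stub_request(:get, "https://api.example.com/shows/42")
      .to_return(status: 200, body: '{"title":"My Show"}',
                 headers: { "Content-Type" => "application/json" })

    expect(FeedClient.new.title_for(42)).to eq("My Show")
  end
end

# Or record a real response once with VCR and replay it on every later run:
require "vcr"

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"
  config.hook_into :webmock
end

VCR.use_cassette("show_42") do
  FeedClient.new.title_for(42)   # first run hits the API and records; later runs replay
end
```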
And with that, I hope that this talk has given you some ideas for how to start testing if you haven't already, and some things to think about in terms of your test suite if testing is something you're doing already.
Feel free to find me later or on Twitter with any questions, and if you're interested in working for my company, Panoply, we're hiring and I'd be happy to chat about that as well. And go forth and conquer. I'll see you later.