Introduction to pytest
Formal Metadata
Title: Introduction to pytest
Title of Series: EuroPython 2014
Part Number: 41
Number of Parts: 119
Author: Andreas Pelme
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/19941 (DOI)
Production Place: Berlin
Language: English
Transcript: English (auto-generated)
00:15
Hi, good morning, so the first session today is going to be an introduction to Pytest
00:21
with Andreas Pelme, who is also the maintainer of the pytest-django library. So, Andreas. Thank you. Hi, so I usually work at a place called Personalkollen, where we build a time tracking
00:45
and salary application, and in order to manage that we write a lot of tests and we use Pytest, so today I'm going to try to show you some good things with Pytest and how you can make use of that when you write your tests.
01:02
So these are kind of the topics I want to cover today, I'm going to show you what Pytest is, I'm going to show you how you can write tests with less boilerplate code, I'm going to show some useful plugins and the plugin system of Pytest, and I'm
01:20
also going to discuss how the test discovery is done, and then I'm going to show a feature of Pytest, it's called fixtures, which is really cool, and then I'm going to discuss how you can port your existing projects to make use of Pytest in an easy way. So first, testing.
01:40
The kind of testing that I'm going to talk about today is automated software tests that verify that your software works correctly, and this can be anything from small, fast-running unit tests to more high-level tests that test your entire system, or anything in between there.
02:01
So, before we get started, how many of you are familiar with the unittest module in the standard library that ships with Python? Yeah, you've used it, you know roughly how it works, okay, good.
02:21
And how many of you are already pytest users, or have used pytest? Okay, that's good. So pytest is a full-featured testing tool for Python, so that means it does everything from test collection to running the tests to giving you output on which tests failed
02:44
and passed, and it also has some other nice features that help you maintain a test suite and help you organise it in a good way. And you can run anything from those short unit tests to more system-level testing
03:01
with pytest; it scales in both directions there. You can also use it if you do TDD, it fits very nicely, and it's easy to integrate with continuous integration systems such as Jenkins; that works out of the box.
03:21
So here are some people that are happy with pytest, and it's also used in some bigger organisations, in both commercial and open source projects. So, one thing that I find really nice about pytest is that it allows
03:41
you to write tests with less boilerplate code. And to show you what I mean by this, I'm going to show you a little demo. So let's see. So I have a test file here which contains a very, very simple test.
04:03
Let me make it bigger. Yes, it uses unittest. So this is probably familiar to most of you. We basically just make sure that the return value of this function is what we expect it to be. And I can run this by invoking the unittest module, and it collects the test and
04:27
runs it. So if I instead use pytest, we invoke the test suite with the py.test command. So if I run that, it finds the test and runs it.
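A rough sketch of the kind of unittest-style test from the demo (the exact code from the talk is not reproduced here, so the function and names below are illustrative):

    # test_demo.py -- illustrative reconstruction, not the speaker's actual file
    import unittest

    def double(x):
        return x * 2

    class DoubleTest(unittest.TestCase):
        def test_double(self):
            self.assertEqual(double(2), 4)

This can be run either with python -m unittest test_demo or, unchanged, with the py.test command.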
04:42
The output is slightly different, but it's the same thing. So I want you to notice that pytest can run existing unit tests just as they are, but we're going to change this test to show some features of pytest.
05:01
So the first thing I'm going to change is the assertion. Instead of using assertEqual, I'm just going to use the plain assert statement that's built into Python. So I'm changing this to just use the equality operator to make sure that the output is
05:23
the expected one. And the next thing is that we no longer need to subclass from unittest.TestCase. So we can simply remove that. And in fact, we're no longer required to wrap all our test cases
05:42
in a class. So we can just remove the class too. Since we don't use TestCase anymore, we can just remove the unittest import. So this is what we're left with. So let's just save this and run it.
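After those changes, the same illustrative test is just a plain function with a plain assert:

    # test_demo.py -- the same illustrative test after the changes described above
    def double(x):
        return x * 2

    def test_double():
        assert double(2) == 4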
06:03
So it works the same. I just want to show you what happens if we get a failure. So it's really important that you get useful feedback on your failures when your test does not pass. So let's assume we introduce a bug here, just add some garbage to the output, and
06:24
we save, and we run the test again. So what you can see here is that you get a very nice, detailed output, and you even get some colors in it that show you exactly what the wrong value was and where it failed.
06:49
So this was the code before, and this is the code after. What I did was use an assert statement instead of assertEqual. And pytest can handle pretty much anything you throw at the assert statement.
07:04
It's very smart. If, for instance, you check for a dictionary key or something like that and it doesn't exist, it will give you a very nice output. So it's hard to find a case where it doesn't work well. Also, we did not have to subclass from TestCase anymore, and we didn't have to put our tests in a class at all.
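For example, a plain assert against a dictionary (a hypothetical test, not from the demo) fails with output that includes the dictionary's contents:

    def test_settings():
        settings = {"debug": False}
        # on failure, pytest's assertion introspection shows the expression and the dict contents
        assert "secret_key" in settings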
07:26
But we can still put our tests in a class if we like. That can be kind of nice to group your tests. And we invoke the test suite by running the py.test command, which collects and runs
07:43
the tests. So one thing about pytest is it has a very powerful plugin system. So in your own project, you have a lot of hooks into pytest that you can use to customize how tests are collected, how they are picked up, and how they are invoked.
08:05
And you have the same kind of hooks into pytest in your own project that third-party plugins have access to. So these are some plugins that extend the capabilities of pytest. There are a lot more, but these are probably the most popular ones.
08:24
The pytest-xdist plugin provides distributed testing. So you can actually run your tests and have them distributed to remote machines. Or you can run them locally on your own machine in parallel processes to get some
08:41
nice speedups. I'll come back to xdist later. And there's the pytest-django plugin, which helps to integrate with Django and with Django test suites and makes them run directly. There's also support for running Twisted tests.
09:02
And there are lots of other plugins. For instance, if you do log capturing, you can install the capturelog plugin. So, how to actually run the tests. I showed you in the demo: by just invoking pytest, you run all the tests.
09:22
And pytest will recursively search all directories to find test files. So this is usually what you do to run your entire test suite. But then you can also limit the tests you run in one invocation by just specifying the file names of the test files that you want to run.
09:42
And this, of course, works very nicely with the standard tab completion in your shell. So it's very convenient to use. You can also select tests based on the test's name with the -k option. So -k stands for keyword.
10:01
This can be used to find a specific test if you don't want to type the directory. So as I told you, pytest recurses into directories and finds test files. And by default, it looks into files that match these patterns.
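As a hedged illustration of those stock defaults: files named test_*.py or *_test.py are collected, and inside them test_-prefixed functions and Test-prefixed classes are picked up:

    # tests/test_invoices.py -- the file name matches the default test_*.py pattern
    def test_total():              # test_-prefixed functions are collected
        assert 2 + 2 == 4

    class TestRounding:            # Test-prefixed classes (without __init__) are collected too
        def test_floor(self):
            assert int(2.7) == 2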
10:20
And you can, of course, change that if you like, if your project structure looks different. But that's kind of a sane default. There is another way of organizing your tests with pytest: you can use the marking functionality. So you can make up arbitrary marks.
10:43
I just made up this slow mark. For instance, if you want to mark all the tests that are very slow, then you may want to run only them, or you may want to filter them out. You can put multiple marks on one test if you like.
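A minimal sketch of such a made-up mark (the test itself is a placeholder):

    import pytest

    @pytest.mark.slow                     # an arbitrary mark invented for this project
    def test_generate_yearly_report():
        pass                              # placeholder body for illustration

Running py.test -m slow then selects only the tests carrying that mark, and -m "not slow" filters them out.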
11:04
So this is how you run only the tests with that specific mark. And in this case, this might actually be more useful because you may not want to run them as often. There are also markers that are built into pytest.
11:22
For instance, the skipif marker. So you can also annotate some extra data within the marker for each test. So in this case, this test will simply be skipped if you run it on Mac OS. But you can implement your own logic with markers by yourself.
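A hedged sketch of the built-in skipif marker as described here (the condition and the test are illustrative):

    import sys
    import pytest

    @pytest.mark.skipif(sys.platform == "darwin",
                        reason="not exercised on Mac OS")
    def test_non_mac_behaviour():
        pass                              # placeholder body for illustration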
11:42
They live in your project, for things specific to your project. So I'd like to take some time to go a bit into a feature in pytest called fixtures.
12:01
So fixtures are a mechanism for injecting objects that you need for your tests in a very structured way. You probably have no idea what I'm talking about right now, so I'm going to try to give a demo of this.
12:35
So we have two tests here which use, do you know what selenium is, by the way?
12:43
Yeah, so it's a way to remote control a web browser. And that can be very useful for tests. So I'm using that as an example here. So I have two tests. They will navigate to these websites. They will look at the title of the websites and verify that Nix is in the Nix package manager website
13:07
and that pytest is in the pytest website. So I'll walk you through what happens here. So when pytest finds this test with the WebDriver argument in the argument list,
13:25
it will try to construct the value for this argument. So in order to do that, it looks for any fixtures that are named webdriver. So it works by looking at the name of the argument itself.
13:45
And in order to register how these fixtures are constructed, you create a function with the same name. And then you annotate that function with the pytest.fixture decorator.
14:01
So this function should return the object that we want to have passed into the test. So in this instance, I just create a Firefox WebDriver instance. And I also add some code that will run in the process of a tear-down
14:21
when the test is finished, to clean up after this test. And so I add driver.quit; it's a method that will be invoked after the test. And then I simply return the driver. So this is what happens for the first test.
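Pieced together from that description, the fixture and one of the tests look roughly like this (a sketch assuming the selenium package; the exact URLs and titles from the demo may differ):

    import pytest
    from selenium import webdriver as selenium_webdriver

    @pytest.fixture
    def webdriver(request):
        driver = selenium_webdriver.Firefox()     # build the object the tests need
        request.addfinalizer(driver.quit)         # tear-down: quit the browser after the test
        return driver

    def test_pytest_website(webdriver):
        webdriver.get("http://pytest.org")
        assert "pytest" in webdriver.title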
14:42
And then for the next test, the same thing will happen. It will be constructed once again and passed in, then the test will run, and then the WebDriver will quit. Okay, so let's just run this and see what happens, and hope the wireless is with me.
15:02
Okay, so we can see, okay, so it seemed to work well. It was very fast. So we had Firefox start up, close down, and start up for the other test. So that's what we expected. One very important point of fixtures is that you move the dependency construction
15:26
out of the test cases themselves. So the test case is not concerned about how the test fixtures are set up. And this has some nice implications. So, for instance, you saw that Firefox started twice.
15:44
Let's say we want to use the same instance of Firefox for both of our tests. That is really simple. Actually, everything we have to do is tell pytest how this fixture should be scoped,
16:00
or how it should be cached. So we just add this, and PyTest will then cache this fixture object for the entire test session. So this means that if we try to run it again, it might be very fast, but we should only see one Firefox window showing up.
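In code, that single added detail is the scope argument on the fixture decorator (same illustrative fixture as above):

    @pytest.fixture(scope="session")      # construct once, cache for the whole test session
    def webdriver(request):
        driver = selenium_webdriver.Firefox()
        request.addfinalizer(driver.quit)
        return driver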
16:26
Okay, oh yeah, it was very fast. But you see, it can certainly speed your tests up very nicely in a lot of cases. For instance, creating a database.
16:40
Let's imagine your tests need a database with the schemas and everything in them. Then you can just create that once before all tests run, and then have it reused for every test. And the other thing when we separate the creation of the fixtures
17:00
is that these tests don't need to be concerned about what kind of browser we use. We can, for instance, change this to Chrome and we'll have it just work. And we can even generate more test invocations. We can generate new tests by parametrizing this fixture. So we can actually trigger each test to run in multiple browsers
17:23
just by changing this fixture definition. So another thing I want to talk about: I mentioned the pytest-xdist plugin earlier.
17:40
It is a way to run your tests in parallel. So that can give you a very nice speedup because most people have multi-core machines these days. So the good thing is it's really easy to use it. So if we want to speed up these tests, we can just run them in parallel.
18:02
So I already installed the pytest-xdist plugin. So if we specify "-n", we can instruct pytest to run these in parallel. So I just start the process now, and this will probably be very fast, but let's see if we can... Yeah, so there are two Firefox browsers at the same time.
18:24
So these tests now run in parallel. So that's a very easy way to speed up the runtime of your test suite. You just install pytest-xdist, and you run it with "-n 2", for instance. So my own test suite,
18:40
it usually takes about two minutes to run, and when I run it on my machine in four processes, it's about 30 seconds. So it can give you very nice speedups. So once again, it's the dependency injection,
19:02
and it's based on the name of the arguments and the name of the fixture function. That's how they map together and how they are found. Question about fixtures: can you maintain a set of different fixtures in the same file,
19:22
and use the different fixtures at the same time? So can you define some fixtures for Firefox and some for Chrome, for example, and make them run sequentially, or put both in the same file? Yes. So in this case,
19:40
if you want to have both Firefox and Chrome run, you specify that in the implementation of this fixture itself. So you instruct... I don't have time to show you that now, but you can tell pytest, each time a test needs this fixture, to invoke it twice, once with Firefox and once with Chrome.
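A hedged sketch of what such a parametrized fixture could look like, assuming selenium's Firefox and Chrome drivers:

    @pytest.fixture(params=["firefox", "chrome"])
    def webdriver(request):
        # every test that uses this fixture now runs once per param value
        if request.param == "firefox":
            driver = selenium_webdriver.Firefox()
        else:
            driver = selenium_webdriver.Chrome()
        request.addfinalizer(driver.quit)
        return driver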
20:06
And you can also use multiple fixtures, just add more fixtures to the argument list. You might want to have a database, you might want to have access to something else too, or just another object. So you can use multiple fixtures in the argument list.
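For example (both fixture names here are hypothetical):

    def test_signup_flow(webdriver, database):
        # pytest builds and injects every fixture named in the argument list
        pass                              # test body omitted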
20:23
And you can also... So the WebDriver fixture, in this case, it can depend on other fixtures too. But I won't go more into detail about fixtures. It's very powerful, and it can be used in a lot of different scenarios.
20:44
Yes, yes, I'm going to tell you about it. So, I also showed xdist. It doesn't really have anything to do with fixtures, but it's a nice way to speed up your tests.
21:01
And this is how easy it is to get started with it. Just install it and then run with the -n command line option. And you can do all sorts of remote distributed testing with it too. But this is the simple case. So I want to talk a bit about how you can port...
21:20
If you want to switch to pytest, a good strategy for doing that. So I showed you that the unittest TestCase was just picked up by pytest. And there's also actually support for nose-style tests. So if you use nose currently, you can probably just switch to pytest
21:42
and it should mostly work out of the box. And if you install the pytest-django plugin, your Django test suites should also run as they are. So in that way, you can just start using the py.test command to run your tests. And then you can gradually write new tests in this lighter style.
22:04
And you can make use of fixtures. And then you can change your existing tests over time, or you can just leave them as they are. But even unittest and nose tests all get benefits such as parallelization from xdist.
22:21
So you don't have to go and throw all your tests away and start over in order to start using pytest. You can switch gradually. Yes, so I highly recommend, if you want to know more about fixtures, that you go visit Floris's talk this afternoon about fixtures.
22:41
I just scratched the surface of what you can do with fixtures, and he will show you more advanced uses of them. So don't miss that talk. And there will also be a training with Holger Krekel, who is the main author of pytest. That will be on Friday. I'm not sure if it's still possible to sign up for the trainings,
23:03
but check that out if you're interested. And I will do some sprinting on the pytest-django plugin in the sprints, so feel free to join in there if you'd like to get started with it,
23:21
or see what it's all about. So yeah, feel free to just come and talk to me if you have any questions after, or I guess we might have time for some questions. Five minutes? Okay, cool. When you started to discuss fixtures,
23:43
where does the request argument come from? Oh yeah, that's a good question. So the request argument, it is a special fixture that you can use to get information. You can introspect the surroundings.
24:01
You can, for instance, if you need access to a command line argument from within a fixture, get it on the request object. You can leave it out of the fixture definition if you like, if you don't have any use for it, but it provides access to pytest, basically, so that you can communicate with it. So it's a special fixture that gets passed into your fixture.
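For instance, a fixture could read a command line option through request (the --browser option here is hypothetical and would have to be registered elsewhere, e.g. in a conftest.py):

    @pytest.fixture
    def webdriver(request):
        # "request" exposes the surrounding test context, including the pytest config
        browser_name = request.config.getoption("--browser")
        if browser_name == "chrome":
            driver = selenium_webdriver.Chrome()
        else:
            driver = selenium_webdriver.Firefox()
        request.addfinalizer(driver.quit)
        return driver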
24:22
More questions? Right, I'm not very convinced, I must say.
24:42
So my question is, what features of pytest would you say you use every day that you wouldn't find in other things like unittest or nose and stuff like that? Because just not having a class inheriting from unittest.TestCase is not a very strong argument for me, at least.
25:01
I would say that fixtures are the really unique feature, and it's hard to show you how powerful they are in this short time, but you can combine them in different ways, combine the caching and the scoping of them, and you can parametrize them,
25:20
and you can really have a very good structure of how your tests are invoked. So I would say that is the absolutely biggest feature of PyTest if you compare it to other frameworks. Hi, I have a question.
25:44
How much more difficult does it get to debug your tests with pytest? So in unittest you need to do much more work; you don't have these nice features, but you do everything explicitly. And here you have a lot of features that are nice,
26:01
but they hide things, so the work is done implicitly. But you use it every day, so does it get more difficult to find problems in your tests?
26:21
Well, I've used it for a while, but I wouldn't say it's harder to debug. And there are all sorts of... You can run py.test --collect-only and it will show you how it collects the tests, and you can have all sorts of different outputs. You can have very verbose outputs
26:41
if you really... You can get a lot of information about the failures so in that sense I guess you can get more information out of PyTest than with standard unit tests. But I don't see it as a problem. I don't really see that it makes it harder to debug anything.
27:01
Thanks. Any more questions? So I've used PyTest a little bit recently and one of the problems I've found with it is the test discovery. So it's one of the advantages that you don't need
27:23
a lot of kind of info in your class for it to find the test. But if there's something wrong, like one of the examples is I had a mock.patch on the test but I missed out the t in patch and it couldn't find the test so it wasn't running and I didn't realise that it wasn't running.
27:40
Is there anything where you can find out tests that haven't run? Well it should run tests that are mocked. You used a mock decorator? Yeah, it was just because I had a mistake in it though so I had misspelled patch so it hadn't found it but I didn't realise that it hadn't run
28:02
because I didn't check every kind of test. I think you should have gotten an exception then, an error, if you misspelled the patch decorator or something like that. It should have failed during the collection phase. Yeah, there was nothing. It just didn't even try to run. Oh, okay.
28:21
I'm not sure in that particular case, but if there are errors during things like importing, or standard errors like that, they should be shown. Okay. Oh, so there's a flag to show the skipped tests?
28:45
Yeah, so there's a report-skipped command line argument, I guess. Yeah, then you can show all the skipped tests. Any more questions?
29:01
Okay, thank you very much Andreas.