
Plone Conference 2022 - Fast tests


Formal Metadata

Title
Plone Conference 2022 - Fast tests
Number of Parts
44
Contributors
N. N.
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Slow tests suck. They are annoying, and they slow you and your team down. Hours and hours of engineer time are lost waiting for the CI to finish. I'll go through a number of approaches and principles to help you write fast tests.
Transcript: English (auto-generated)
Right, I present to you Nejc Zupan. He has been a geek since he was able to walk, and he will talk about fast tests. So yeah, as many of you know, I'm Nejc. I'm from Slovenia,
but I spend my winters in the Canary Islands doing any type of surfing: windsurfing, stand-up paddle surfing, boogie boarding, foiling, winging, whatever gets me on the water. But when I'm not doing that, primarily I am a geek. I've been in the Plone community since around 2008; PloneConf in Budapest was my first one, and my biggest contribution to Plone is probably plone.api, which has since been merged into core years back. Thank you. Since then I've been mostly active in Pyramid, trying to bring Pyramid closer to the Plone community, and I also built an API package for Pyramid. So these days, if you want to build a robust REST API with Pyramid, that's the package you should check out. The other thing that I've been super active in within Plone are sprints. I've organized
20 to 30 sprints, probably, in the past 15 years, anywhere from the Balkans up to the north of Finland. This is a picture from a sauna sprint; we were sprinting in a sauna, yes, it was fun. Speaking of sprints, there's a sprint I'm organizing in Lanzarote in November. The topic is going to be Nix/NixOS. Some of you already use Nix and might be interested; that's why the slide is here.
We have this really nice villa in Lanzarote and I'm organizing sprints there and I'd love to have a Plone sprint. Like I don't do any Plone these days, but you know, I can organize and get everything together. You just come up with a topic and
someone to cover the catering, and that's it, I can organize it. So let's talk about that; let's get you guys to Lanzarote. Right, before we begin: these are not my pants. But yeah, to get to the meat of what I'm trying to talk about today: for the last year I've been developing a macOS app that checks for basic security on your device. It lives up there in your menu bar and yells at you if you don't have disk encryption enabled or your firewall is disabled. In a team setting it sends this data to a web dashboard where, as a team admin,
you can see that everybody on your team has it installed and all the checks are passing. It's a very simple Pyramid application. I was focusing on marketing this app for a few months, so I didn't do any development and didn't really review the pull requests, and when I came back after, you know, a couple of weeks, a couple of months, I saw that the tests were running for three minutes, and I completely flipped out, because we only have about 300 tests, and three minutes for 300 tests is just way too slow. So I started looking at why the tests were slow, and it was just really basic things that I implicitly have in my mind but didn't realize other people on my team might not. So I started writing an internal document on what to check if your tests are slow, and then realized this is actually a good talk subject. Just around that time the call for proposals for PloneConf came up, I submitted, and here we are.
First I want to get you excited about investing time into making your tests faster. So here's some anecdotal data. There's a person that cut their test runtime by 50%. There's another person with a 75% decrease in runtime, that is, an increase in speed. Another one cut it by more than half; a third; a half; a half. I'm going to give you nine or ten items or steps you can try today, and one team did just one of them and got an 82% decrease in runtime. Crazy. So, you know, let's imagine that a 30 to 50% speed-up is possible without too much of an investment, like, you know, a couple of hours.
That means you're saving a lot of money on your CI, because every minute your CI runs is direct dollars or euros; so you're saving money directly. You get much faster deployments, because faster CI means faster deployment, which means your company and your customers get bug fixes and features faster. When you have a problem and you need to do a couple of quick deployments because something's wrong in production, the roof is on fire, it's way nicer to have fast deployments and not wait 5, 10, 15 minutes for the fix to come up, going: oh, is it gonna fix it, is it gonna fix it? And when you're doing local development, faster iteration time makes you more productive. It's very simple. And, you know, finally, it's just this:
I hate when people are just waiting for stuff to happen and not being productive, and this is definitely the case for me. If I know the CI is going to take 15 minutes, I'm just going to start browsing YouTube, and half an hour later I'm like: what was I doing? Because 15 minutes is long enough that I'm not going to stare at a wall and wait, and it's not long enough to start something meaningful. So I'm just going to start browsing the internet, and that kills my train of thought. Ideally, for me, CI should take two to three minutes, maybe up to five. If it's over five, I already start acting on it.
Yeah, I've already talked about this with a couple of you over these last two days, and this is the usual excuse: but I don't have time. And I can give you the usual anecdotes and quotes, you know, Abraham Lincoln's: if you give me an hour to chop down a tree, I will spend the first 45 minutes sharpening the axe. And there's truth to that. But instead I want to convince you that speeding up tests is actually easy, and it's also fun to do, because you get immediate results.
Speaking of which, coming back to plone.api: I think the biggest reason plone.api was so successful is that it has amazing documentation, so let's see if we can fix slow tests by having amazing documentation too. So yeah, like I said, I'm going to give you a list of things you can try. You don't have to take photos during the talk, because the last slide has all the links; just snap the last slide and that's it. Yeah, I see Érico recognizes the location of this photo. So, we're going to talk about how to run your tests versus how to write your tests. But before we do, I
know that a lot of you want to get your hands dirty, apply all the fixes at once, and end up with an amazingly fast test suite, but I urge you not to do that. Go one by one: change just one thing, measure it locally, measure it in your CI. If there's a difference, push it to main or master and then wait for a few days; maybe some test breaks for someone else, maybe it doesn't work for someone else. If after a few days everything is still as it should be, do another thing, and rinse and repeat. Don't do everything at the same time. Obviously, you want to measure the full suite, how long it takes; for this I suggest... is it running yet?
There we go: hyperfine, which is a super nice CLI benchmarking tool. You give it a command, or actually two commands, and it runs both and compares them, telling you whether there's a statistically significant difference between one and the other. It's super nice, it has this nice progress bar; hyperfine, amazing, use it.
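For example, a minimal comparison might look like this (the test path and the plugin being toggled are just illustrative):

```shell
# hyperfine runs each command several times and reports
# mean +/- stddev, plus which command is faster.
hyperfine --warmup 1 \
  'pytest tests/' \
  'pytest -p no:doctest tests/'
```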
And then, if you want to drill down on a test-by-test basis and you're using pytest, you just give it --durations and a number, and it's going to print out the 10 slowest tests.
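For example (a minimal invocation; --durations-min is optional and hides anything faster than the given threshold):

```shell
# Print the 10 slowest tests, skipping anything under 1 second.
pytest --durations=10 --durations-min=1.0
```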
You always want to start fixing the slowest tests first, of course, because if you have 10 slow tests and a thousand really fast ones, it's the slow ones you want to fix; otherwise you're not gaining anything. Then, moving forward, maybe at some point you'll also get to the profiling stage, again with pytest, which is what I use these days and is the most common test runner in Python. You get nice profiling integration: for every test, it gives you the exact call counts of all the functions and sub-functions that the test and the code under test call, and it also gives you a nice graph so you can visualize where the time is spent in your tests. But profiling is very specific to a product; it's hard to talk about in a general sense, you have to look at each code base and see what applies. So I'm just going to give you a bunch of pointers, and you go through them and see which ones apply to your code base.
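One plugin that provides the per-test call counts and call graphs described here (an assumption on my part; the talk doesn't name the exact tool) is pytest-profiling:

```shell
# Profile tests with cProfile; --profile-svg additionally renders
# a call-graph SVG (via gprof2dot and graphviz) under prof/.
pytest --profile-svg tests/
```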
Right, now we're actually going back to, like I said, how you run your tests. First: just throw money at the problem. Modern CPUs, Macs, Intels, are super fast. Make sure everybody on your team has an up-to-date computer, laptop, whatever they have. If you can, buy bigger machines on the CI: bigger images, bigger runners. Or, if you can afford it, maybe think about self-hosted runners. CircleCI and GitHub Actions allow you to run the actions on your own hardware, and, you know, a Mac mini costs under a thousand euros and is super, super, crazy fast. If you have one or two of those as CI runners, it's going to be much faster than some virtual CPU in the cloud. Right, with that out of the way: the hardware just needs to be there.
The first step of running a test suite is finding out which tests need to be run, and this is a super important step, because even if you run just one or three tests, this collection of tests still happens every time you run them. If collection takes 10 seconds, then even if you run a test that usually takes a second, you're still going to wait 10 seconds for the collection to happen. In pytest it's very easy to check whether collection is fast, because you can give it the --collect-only flag and it will do just the collection. If it finishes in a second, you're like: okay, good. If it takes a few seconds, or 10, 15, 20 seconds: okay, this is where we have a problem.
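For example:

```shell
# Run only the collection step; -q keeps the output short.
pytest --collect-only -q
```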
Sorry? Yeah, bad smell, yeah. And my ballpark figure is around a second: collection should take about a second per thousand tests. If it takes more, there's probably something in there that
we can improve. Usually the culprit is that you're pointing pytest at too big a directory tree, so it scans way too many files, and that's why it's slow. With configuration you can tell it: don't look at .git, don't look at the .tox directory, don't look at this large machine-learning model that we have here. Or, alternatively, you can tell pytest to scan only one really specific tests directory (both options sketched below), which sometimes isn't possible if you have tests split out into different folders; that's when you use the first approach.
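A minimal sketch of both options, assuming a pyproject.toml-based setup (the directory names are just examples):

```toml
[tool.pytest.ini_options]
# Option 1: never recurse into these directories during collection.
norecursedirs = [".git", ".tox", "models", "node_modules"]
# Option 2: only ever collect from here.
testpaths = ["tests"]
```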
Another common problem with collection is that it also runs conftest.py,
which defines our fixtures in pytest, and it also imports all the modules. So you might have big top-level imports: for example, if you import, again, machine-learning models or libraries, or Django, or Plone, as a top-level import in your tests, that import happens during test collection, and if that import is slow, your test collection is going to be slow. To figure out whether this is the case for you, give pytest --noconftest, so it won't load the conftest and won't do those imports. If there's a big difference between --collect-only alone and --collect-only with --noconftest, the problem is in the conftest file. There's also a tip there on how to visualize import time (for example, with python -X importtime). Here you can see that I had problems with lxml: one test had a top-level import of lxml, and that slowed down our tests. The solution is simple: move the import into the test function, so it runs when the test function runs, not when tests are collected.
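A minimal sketch of that fix (the test itself is hypothetical):

```python
# Before: a module-level import runs at collection time.
# from lxml import etree

def test_parse_document():
    # After: the deferred import only runs when this test executes.
    from lxml import etree

    root = etree.fromstring("<doc><p>hi</p></doc>")
    assert root.findtext("p") == "hi"
```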
So yeah, very easy fix. Another easy fix is to prevent Python from generating bytecode, because you don't need that in development, and you almost surely don't need it in CI, where you're always starting with a fresh system. Why prepare this cache if you're never going to use it? It's probably not going to give you a huge improvement, but, anecdotally, I know people for whom this really helped, especially people using network drives or somewhat more complex CI systems.
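Either of these does it:

```shell
# Skip writing .pyc bytecode caches entirely.
PYTHONDONTWRITEBYTECODE=1 pytest
# Equivalent, via the interpreter flag:
python -B -m pytest
```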
Pytest comes with about 30 built-in plugins, and potentially you don't need all of them. You can list them with --trace-config and disable them with -p no:<name>. I usually disable the doctest, nose and pastebin plugins, because I just never use them; so why load them every time I run pytest?
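For example:

```shell
# See which plugins are loaded...
pytest --trace-config
# ...and disable the ones you never use.
pytest -p no:doctest -p no:nose -p no:pastebin
```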
Right. So collection is now sorted; all our tests are collected super fast, and it's time we get a bit picky. By this I mean that we usually have some tests in our test suite that we know are slow, and maybe it doesn't make sense to run them every time on your local machine. So you can mark them as slow, and locally they won't get run, while in CI you say: run all tests, including the slow ones (a sketch follows below). This is just a small improvement to iteration speed for your local development.
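A minimal sketch of the slow marker (register the marker in your pytest configuration so it doesn't warn):

```python
import time

import pytest

@pytest.mark.slow
def test_expensive_report():
    time.sleep(5)  # stand-in for a genuinely slow test

# Locally: pytest -m "not slow"
# In CI:   pytest
```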
You can be a bit smarter and use pytest-incremental, which looks at git diffs to determine which tests need to run based on what you changed. Or you can use pytest-testmon, which goes a step further: it checks the coverage report to see, aha, these five tests touch this line of your code, and this line was changed, so we need to run these five tests. It picks up even more tests than pytest-incremental, but it's a bit more involved to set up, because you need to save the coverage report from the previous run so the tool has some data to work with. This can also be used in CI: for example, you can do incremental testing with these two tools on branches and run the full suite only on the main branch, or you could do the incremental run first and, if that passes, then do the full run, and maybe save some CPU credits on the CI, because you never run the long tests until you've checked the things you've actually changed. Right, so we now know how to run our tests; let's see how we should write our tests. Unit tests rarely need to access the internet. Usually what we do is mock all the requests to the internet, so that
we can test the various different responses and failure modes, and also to make the tests fast, because they're not actually waiting for responses; they just get them, because they're mocked. However, we often don't realize that the code under test is doing network connections. For example, what happened to us specifically: we added support for Gravatar profiles, and those were added to the main endpoint for the user. The user endpoint is called on basically every page, because you need to display the user name and profile, and that meant the majority of our tests started sending requests to the Gravatar API to get the profile URL or whatever, and we didn't realize it. Our test suite got about 10% slower for me locally, on a really, really fast internet connection; at a conference like this it would be super slow.
By using pytest-socket, these tests started failing because internet access was blocked, and then I realized: oh wow, yeah, we're not mocking the Gravatar requests. Let's mock them. We mocked them, and you get safer, more robust and faster tests. pytest-socket: very nice to use (a sketch below).
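A minimal invocation, assuming pytest-socket is installed (the allowed host is an example):

```shell
# Fail any test that opens a network connection,
# except to hosts you explicitly allow (e.g. a local database).
pytest --disable-socket --allow-hosts=127.0.0.1
```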
Similar to network access, I would also say that disk access should very rarely be done in tests, because it's error-prone and, again, it's slow. Disks are slow; they're faster than the network, but they're still slow.
One option is that you mock everywhere in your test code where you have open file pointers or whatever; or you can use pyfakefs, which builds you an in-memory file system. It can also map actual files from your file system, but it will never write the changes back. So again, it's safe, because you're not touching the real file system. It's more robust, because you're not picking up context from the current file system that could break tests in CI or for someone else. And it's faster, because an in-memory file system never touches your disk. On CI, if you don't have SSDs, this could be a really big speed improvement.
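A minimal sketch using pyfakefs's fs fixture (the file path is hypothetical):

```python
# With pyfakefs installed, requesting the `fs` fixture swaps in an
# in-memory file system; nothing below touches the real disk.
def test_reads_config(fs):
    fs.create_file("/app/config.ini", contents="[report]\nenabled = true\n")

    with open("/app/config.ini") as f:
        assert "enabled = true" in f.read()
```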
Yeah, cool to use. And then we get to databases. These days, if you're testing a real-world web app, you usually have database access in your tests; so we're not really writing unit tests, we're mostly writing integration tests.
But still, some tests might not require a database. If you're testing a function that calculates the age of a user, you don't have to fetch the date of birth from the database; you can just provide a hard-coded value to the test function, and then that test doesn't have to set up a database (a sketch follows after this paragraph). So you can probably organize your test suite, your test cases, your fixtures, so that tests that do require a database are separate from tests that don't, and the latter can run faster. That's the first approach to speeding things up. The second one: maybe not all tests require the entire database. If you're just testing user profiles, maybe you just need the tables and data for user profiles and the related tables, not everything else you have.
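A minimal sketch of the pure-function case (the age helper is a hypothetical example):

```python
from datetime import date

def age(born: date, today: date) -> int:
    """Full years between `born` and `today`."""
    had_birthday = (today.month, today.day) >= (born.month, born.day)
    return today.year - born.year - (0 if had_birthday else 1)

def test_age_needs_no_database():
    # Hard-coded dates instead of a user row fetched from the database.
    assert age(date(1990, 6, 15), date(2022, 10, 17)) == 32
```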
You can also ask yourself: can you create the database only once? When people start testing, they usually create, populate and destroy the database for every test. So if you have a hundred tests, you do that a hundred times: create, destroy, create, destroy. Probably you can create it just once and, at the end of each test, truncate all the tables, which empties their data. Truncate is a fast operation, so this is much faster than create, destroy, create, destroy: you create once, and then truncate, truncate, truncate after every test. To be even faster, you can make sure that your tests never commit. I'm doing this and I like the approach: you create the database once, populate it with some dummy data, run your test, and at the end of the test you just roll back, so you never commit the transaction. Sometimes you still need to commit, because, for example, you have to test something where the commit is necessary. That's a bit more involved, because you have to remember that whenever you do commit, you then have to manually revert the data you committed. But for us, maybe one or two percent of tests are like that, and the rest can use the rollback mechanism, which is way faster than truncating and repopulating on every test run.
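A minimal sketch of the rollback approach with SQLAlchemy (an assumption; the talk doesn't name the ORM, and the engine URL and fixture names are hypothetical):

```python
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

@pytest.fixture(scope="session")
def engine():
    # Created (and populated) once for the whole test run.
    return create_engine("postgresql:///app_test")

@pytest.fixture
def db(engine):
    # Each test runs inside a transaction that is never committed.
    connection = engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)
    yield session
    session.close()
    transaction.rollback()  # undo everything the test did
    connection.close()
```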
So, a couple of approaches there. Right, we've covered both how to run and how to write your tests. But there's more: there's one part where you actually have to combine both, and that's parallelization. This is where the big wins are: if you want really, really fast CI, you need to look into parallelization. But it's a bit more involved and takes a bit more time, so definitely start with the other things first. In the pytest world, the de facto tool is pytest-xdist. It allows you to split your tests across multiple CPUs, multiple sandboxed subprocesses, even remote workers. The problem is that it usually doesn't work out of the box with real-world web apps, where there are very complex fixtures and tight integration with databases. The main reason is that pytest basically runs the test setup on every worker, so you get database conflicts; the workers have no synchronization between them. So, again, if you're doing populate-and-truncate, and two tests are running simultaneously, and one truncates while the other is still running, it's just going to fail. pytest-django works around that by creating a database for each worker, suffixing the database name with the worker name. That's one possible solution. It takes a bit more memory, because with five workers you need to spin up five databases, but it still runs faster than running the entire suite serially.
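If you're not on Django, a minimal sketch of the same idea by hand, using the worker_id fixture that pytest-xdist provides (the database naming is hypothetical):

```python
import pytest
from sqlalchemy import create_engine

@pytest.fixture(scope="session")
def engine(worker_id):
    # worker_id is "gw0", "gw1", ... under pytest-xdist,
    # or "master" when running without -n.
    suffix = "" if worker_id == "master" else f"_{worker_id}"
    return create_engine(f"postgresql:///app_test{suffix}")
```

You'd then run the suite with something like pytest -n auto to use all available CPUs.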
The same goes for the file system, yeah, exactly. pytest-split is potentially a better first try: it's much easier to start with, and usually you don't need to change anything in your code base. The drawback is that it doesn't help your local development; this is only for the CI. What it does is keep the durations of all your tests in a file, and then it looks into that file to decide how to split the tests into similarly sized chunks based on duration. So it's not alphabetical; it's based on the previous run's durations. Then, if you want five workers, you just tell pytest-split you have five workers, and it splits the tests across them in your CI. You then have to make sure, and this is more of a DevOps task than a developer task, that CI is configured correctly to run those five workers, save the coverage for every worker, and have another CI step at the end that merges the coverage reports, runs your total coverage, and maybe fails the build if it's not a hundred percent. I like this one a lot, because no matter how legacy or poorly built your test suite is, you can probably still split it this way. Potentially a lot of work, but with this it should be fast (a sketch below).
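A minimal sketch of the pytest-split commands (the flag names come from its documentation; the worker count is an example):

```shell
# Once (or periodically), record how long each test takes:
pytest --store-durations

# In CI, worker 1 of 5 then runs only its share:
pytest --splits 5 --group 1 --durations-path .test_durations
```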
Now that your tests are fast, how do you keep them fast? Because this is why I even started this talk: if you remember, the tests were fast, I didn't work on the project for a few months, I came back, and they were two or three times as slow. So
my answer is this: I created a GitHub application that, for every pull request, measures whether the tests got slower or not, and then annoys you, or even blocks the merging of the pull request, saying: your tests have gotten slower. You just go on GitHub and click install. There's a very short installation procedure: basically, you tell pytest to dump the test durations into a CSV file, you upload that CSV file for the app to use, and, like I said, you get a comment like this on every pull request. At the moment I support pytest and GitHub Actions, but it should be quite trivial to support Plone as well, because Plone uses Jenkins, and Jenkins has JUnit output, right? And JUnit records the test durations. Yeah, that's what I meant, sorry. So if there's any interest, I would be very glad to support this. And the app is free for personal use and for open source, and I'm also absolutely making it free for the Plone Foundation. So let's get this going; check it out and let me know what's missing. One more thing: at the moment we're just measuring the full-suite duration, but for the future I already have ideas for also measuring CPU cycles and memory consumption, and I'm thinking it's probably a good idea, if you add five tests and those tests land among the ten slowest, to make that visible too. So it's going to grow a bit. It's a bit basic now, but it already works: we've been using it in our company for about a month, and it's already catching test performance regressions, so it's definitely usable.
A few extra tips, because we have time. For local development, I suggest using last-failed: you run the test suite, three tests fail, and with this pytest re-runs just the failed tests, so again your iteration is faster (see below). And there are five or six pytest plugins that I use in every project;
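For example:

```shell
# Re-run only the tests that failed last time...
pytest --lf
# ...or run last time's failures first, then everything else.
pytest --ff
```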
I've listed them in that blog post, so you can take a look later. And, like I said, find this repo: it's called Awesome Pytest Speedup. There's a checklist there that you can copy-paste into your own ticketing system and then go through, one by one. Just one a week, or one a month, or, you know, two hours a month: go through them. Those are all links to descriptions of how to use each item, with more details. There are some general guidelines, like how to measure stuff, and some extra tips at the end. I'm very open to requests and comments, and I'm also potentially considering doing a bit of consulting around this, because, you know, it's exciting to me to help people be more productive, and I also see it as a climate-conscious action: if we're saving CPU executions in the cloud, that's definitely saving some carbon emissions. So yeah, if you'd like some help with applying these techniques today, let me know and we'll have a chat. That'll be all.
Any questions? So, I was not here yesterday and the day before, so I don't know if it was mentioned, but your talk is about pytest, which most of the Plone community does not use, even though they should. Well, there's a plugin, I mean, there's an egg, that enables you to run the whole Plone test suite with pytest, so you can reuse all the nice hints from this talk with Plone as well. As David said yesterday, there are a number of Plone add-ons that are already tested with pytest, and there's a fixture from gocept that allows you to run... Oh yeah, yeah, okay. So I would say there is definitely movement toward pytest in the Plone community. And a lot of people doing Plone are also doing other stuff, like Pyramid and other frameworks, where you probably already use pytest. I'd say it's maybe early days; not a lot of projects are using it. If you try it and have problems, don't give up: post about it on the community forum and we'll discuss how to make it work. Yep. Any more questions? Okay, it's time.
Thanks.