
Beyond 100% test coverage


Formal Metadata

Title
Beyond 100% test coverage
Series Title
Number of Parts
53
Author
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Abstract
At Niteo, we've enforced 100% test coverage on our projects since 2014. I'll explain what else we do to make our code robust and as bug-free as humanly possible.
Transcript: English (automatically generated)
Hi. So all the code is on GitHub — you don't need to write things down or take photos or whatnot, everything is there. So you're on that path: everybody says that you need to have 100% test coverage, you're working towards it, and then after weeks, months or years you reach that summit — and then what? What happens then? I had this happen to me four or five years ago, when we transformed our company from a consulting company into basically developing our own products. On the 1st of January 2014 I sent all of our clients, some of you in this room, an email like: sorry, no consulting anymore from us, we're going to be doing our own products from now on. That kind of put me as the CTO in a position where I made the requirements all of a sudden, and one of the requirements was: I want to have 100% test coverage on everything we do. That was one of the best decisions, and today, when I have to work with a code base that does not have that, it feels icky, because when I change things I'm like: how do I know this works? That said, 100% test coverage does not give you bug-free code, but it's still better than, you know, 60% or 70%. So how many of you have at least some of your packages covered 100%? Oh, okay. Good, good, good. So what do you do when you get to that point?
You have this huge test suite — and then you actually start to remove tests. So how does that work? You still keep the test coverage, but you start removing tests, because the more tests you have, the slower and the more annoying they are to run. Your developers are not going to run them, because if the test suite takes 20 minutes, who cares, I'm just going to commit and push. Your builds are going to be faster, so your deploys are going to be faster, and it's going to be way faster to refactor code — because if you change just a tiny part of your app and then have to fix 37 tests, every refactoring is painful, so you don't do them, and if you don't refactor, after two or three years you're in a huge amount of technical debt and you're feeling the pain. So take a katana and slice it up. Before we begin, who knows what this is? It's a... Yes. How many colors could you order your Model T in? Any color, as long as it's black. So there's a great Python package now available called black. It has zero configuration options and basically formats your code one way, and since adopting black half a year ago my life has improved so much that I just want everybody to start using it. It does not allow you to configure anything; it basically says: yeah, this is how your code is going to look now, and I don't care how you want it to look. Mostly true, yeah — I know, mostly true. And I actually went back and looked at our pull requests from about a year ago, and I think something like 30 to 50 percent of the comments on every pull request were about styling — and they disappeared. So it's just amazing. And you're going to see it in action now; you're going to see things moving around. So that's black. Enough talking, show me the code. Oh yeah, I have to minimize this one. Okay, so what we're going to do now is we're going to have a very simple script that just prints something to the console in one of the colors that you want to print in. So "roses are red", this is a hack. And we have tests — let's confirm that the tests pass. Okay, two passed. We're going to open the test coverage: 100% test coverage, right? And then we run the thing — "roses are red", this is a... blah, blah — why is this not black? I have 100% test coverage, why do I have a bug? This is not fair. Well, let's see. So what we're going to do now is we're going to add another flag to the testing command, which is --cov-branch. What this does is it also checks that you've covered all the code branches in your code base — all the if statements. And if you run it again, you see we're now at 93%; our code is actually not entirely tested. And there's a new thing here: it says that line four didn't jump to line seven, because the condition on line four was never false.
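The demo files themselves aren't reproduced in the transcript, but a minimal sketch of the setup being described — assuming pytest with the pytest-cov plugin, and hypothetical names — could look like this:

```python
# A single test file sketching the demo (hypothetical names).
def print_color(text, color):
    if color == "red":
        print("\033[31m" + text + "\033[0m")
    elif color == "blue":
        print("\033[34m" + text + "\033[0m")
    # No else branch: an unknown color silently prints nothing.


def test_red(capsys):
    print_color("roses are red", "red")
    assert "roses are red" in capsys.readouterr().out


def test_blue(capsys):
    print_color("violets are blue", "blue")
    assert "violets are blue" in capsys.readouterr().out

# Plain line coverage reports 100%, because every line runs at least once.
# With branch coverage enabled, e.g.
#     pytest --cov=. --cov-branch
# the report drops below 100%: the case where neither condition matches
# (say, "black") is never exercised.
```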
And if you look at the tests, that's true: we test for red, we test for blue, but we don't test for anything else. So this is a small thing you can do to improve your test coverage reports — set them up so that they also report on branches. And now that we know that, we can add an exception for unknown colors, and we can also test with black, some other color that is not defined up here. Let's see — yep, back up to 100%. And now if I run it, we at least get an expected failure. The most annoying bugs are the ones where things just kind of seem to work, but not really. Here at least I know where to start debugging. It's still a lot of code and a lot of tests for such a simple thing. Can we improve it? Okay.
So let's look at the diff between the previous version and the new one. We removed all of these ifs, we removed a bunch of code, we just added an enumeration, and we have one test instead of two. So what changed? Instead of having a big if saying: if red do this, if blue do this, if black do this — you define an enumeration and say: these are the colors we support, red is this, blue is this. And then in your method you say that this color parameter needs to come from that enumeration. That allows you to skip all of those checks in your code and in your tests and use mypy for them instead. So if I now do this, it's going to tell me that this color enumeration has no attribute black — and here it is, right? So anywhere in the code where we would call this method and give it something that does not exist, mypy will report on that. mypy is a static type analysis tool for Python that was inspired by all the latest developments in the functional programming space, by Haskell and languages like that. I've had the privilege to work with Domon this summer — he's now a full-time Haskell developer. I dragged him back into Python for three months and he really inspired a lot of the packages that we're publishing these days, also for Pyramid. I also started looking into Elm, which is a fantastically designed front-end language that is functional and statically typed. And now my code has started changing. This is now 15 lines of code, compared to over 40 lines before, and it has way more guarantees that it's going to work as one would expect. So I really think we should start using enumerations more — and we'll come back to them.
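The refactored version isn't shown in the transcript either; a minimal sketch of the enum-plus-mypy idea, with an illustrative Color enum, might be:

```python
from enum import Enum


class Color(Enum):
    RED = "\033[31m"
    BLUE = "\033[34m"


def print_color(text: str, color: Color) -> None:
    # No if/elif chain and no runtime validation: the annotation says
    # only Color members are accepted.
    print(color.value + text + "\033[0m")


print_color("roses are red", Color.RED)

# A call like print_color("hi", Color.BLACK) never even has to run:
# mypy reports that Color has no attribute "BLACK".
```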
If we now fix this to blue, for example — yay, it works. Any questions at this point? Yes. Okay, so one of the biggest organizations driving the development behind mypy is Dropbox, and they're still on 2.7. So everything works on 2.7, just with comments instead of inline syntax — you write the annotations as comments next to the code. So it works. And Dropbox — yeah, the point stands — also developed a tool where you run a wrapper around your application in production and it records what kinds of types, what kind of data, actually get sent around your code base; you can store that information, download it, and then apply it to your code base. They've used that to cover, I think, all of their code base with types, and they did it in a couple of weeks with that tool — and they have a million and a half lines of code. So it's not something you have to do manually: you can record production data and then just inject the types into comments.
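For reference, and purely as a generic illustration rather than the speaker's code, Python 2.7-compatible type comments that mypy understands look roughly like this:

```python
from typing import List  # the typing backport also works on Python 2.7


def average(numbers):
    # type: (List[float]) -> float
    """Same meaning as the inline `def average(numbers: List[float]) -> float:`."""
    return sum(numbers) / len(numbers)


total = 0  # type: int
```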
Okay — who's using flake8? Awesome, everyone. I'm just going to go quickly through a couple of flake8 plugins. So again, a very simple function that gets you the max value from a list, and here's a test. Obviously from here you would get a five, from here you would get a seven — but then there's some strange error: "'list' object is not callable". If I had run flake8 before that, it would tell me that "list" is used as an argument and shadows the Python built-in, and that maybe I should consider renaming it — and that gives me the idea that this here just won't work. So again, inspired by what Haskell and Elm are doing: there are bugs that a computer can prevent us from writing, and computers are really good at that. So let's use computers to do that. Use as many checks as viable for your use case and don't rely on your mental power to always recognize all the little nasty bugs.
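A reconstruction of the shadowed-builtin demo (hypothetical names; the flake8-builtins plugin reports this kind of shadowing, though the talk doesn't name the specific plugin):

```python
def get_max(list):
    # flake8-builtins: argument "list" is shadowing a Python builtin (A002)
    values = list(list)   # intended to call the built-in list(), but "list" is now the argument,
    return max(values)    # so this raises: TypeError: 'list' object is not callable


def test_get_max():
    assert get_max([1, 5, 3]) == 5
```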
flake8 has a number of plugins; you should definitely look into them. And there's another one here. Basically a class with some stupid names, and then we just test that these two parameters are of the same length. And I get this strange error where one is 29 and one is 1 — but how, what? It's the same amount of text. Ah, there's a comma here. Who has had this happen to them? Yeah — stupid, invisible commas at the end. And again, if you run flake8, you see that on line 23 — well, sorry, not line 23, line 5 — you get notified. Or if you just use black: when you save, you're going to get parentheses around this text, and you go, oh, oh, now I see. Done. So use black. Okay, we don't need this anymore.
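A reconstruction of the invisible-comma gotcha (names are made up):

```python
class Person:
    first_name = "Terry"
    last_name = "Gilliam",   # <- stray trailing comma: this is now the tuple ("Gilliam",)

    # After formatting with black the comma becomes obvious,
    # because the value gets rewritten as ("Gilliam",).


def test_name_lengths():
    # One length is the number of characters, the other is 1 (a tuple with one element).
    assert len(Person.first_name) == len(Person.last_name)   # fails: 5 != 1
```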
Okay. Any reasonably sized test suite has a lot of mocks. I review a lot of code and I see people abusing mocks a lot, so I'm going to go through a couple of examples of how not to abuse mocks. A very simple function to get the current microsecond, and then how we test it. Let's see that the test runs — it does. What we do is patch this datetime object, and then we say: if now() is called, then .microsecond returns 999, and then we assert that this function returns 999. And it does, because we mocked it. Everything's fine. But then a week later we add another function that gets yesterday's microsecond, and now we also need timedelta. So instead of "from datetime import datetime" we just "import datetime" and then use datetime.datetime and datetime.timedelta. And it's the same — nothing changed except we added this one function and changed the import — but the test now breaks, and it breaks in a really bad way: "assert <MagicMock ...>", whatever. This is a prime example of a really bad test, because you basically didn't change anything and the test broke. Imagine this code base were several hundred thousand lines of code: probably 37 tests would fail in a similar way, and you would spend an hour fixing them for a very simple refactor. So what to do? The reason the test failed is that we're still patching the datetime object, but now we're patching this other object — so this would actually have to be like this.
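A sketch of the kind of brittle test being criticized — the actual demo code isn't in the transcript, so names are illustrative:

```python
import datetime
from unittest import mock


def get_microsecond():
    return datetime.datetime.now().microsecond


def test_get_microsecond():
    # Brittle: this patches the name "datetime" *inside this module*, so the
    # test is wired to the exact import style of the code under test.
    # Switching between `from datetime import datetime` and `import datetime`
    # breaks the test even though the behaviour did not change.
    with mock.patch(__name__ + ".datetime") as fake_dt:
        fake_dt.datetime.now.return_value.microsecond = 999
        assert get_microsecond() == 999
```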
But you have to find it, and you have to know how to do it better. Who uses freezegun? All right, cool — so not everyone. freezegun is a fantastic package that basically allows your tests to travel in time and removes mocks from your test code. So if we go back to our test code: instead of mocking the actual calls that your functions are making, you just say, in this test, this is the time. Any code that uses now() is going to get that value back — and then you just assert. Let's see if this works. Okay. So even if we now change these imports back to where they were, this still works. Even if we used utcnow() or whatever, that would still work, because we're not mocking the internal calls in this function; we're mocking the value that Python gets from the time libraries. Super useful.
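And a sketch of the freezegun version of the same test:

```python
import datetime

from freezegun import freeze_time


def get_microsecond():
    return datetime.datetime.now().microsecond


@freeze_time("2018-10-24 10:00:00.000999")
def test_get_microsecond():
    # No patching of import paths: freezegun freezes the clock itself,
    # so the test keeps working no matter how the code imports datetime.
    assert get_microsecond() == 999
```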
Any questions at this point? Yes? No, not really — you would then have to use context managers instead, so not the entire test function but just certain lines of code, or rewrite the underlying code that smells; not off the top of my head, yeah.
If you give me an example, maybe I'll come up with something. A similar thing that is abused all the time is mocking logs. Here we have a function that processes something and writes to the logs, and we want to make sure that this log output makes sense, because this is what the user sees — and it's really nice in tests to have all the logs spelled out, so you can visualize how the output will look. And again, we patch this object and then we say: was it called two times? Yes, it was. And what was the second call? Let's see if it runs. Cool. But then we change an info to a warning, just because the product owner wanted it to be more visible, and we run the tests because we want to see what broke — and we get this; everything explodes, because again the calls are a bit different. How we can improve this is by using LogCapture from the testfixtures library. This one is for pytest; I know for a fact that something like this exists for the Zope test runner, I'm just not sure what the name is, but it exists — I've used it in the past. What you do is basically capture the log and then just say: do the logs look like this? Sure, they do. And now even if I make that change, the error is much better: this was expected, this was actual, and you immediately see what the difference is. You're much faster at fixing the test. This might sound trivial in 25 lines of code, but when the logs span hundreds and hundreds of lines in a backend processing task, it's really useful to have error reports that actually help you instead of confusing you.
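A sketch of the LogCapture approach (hypothetical function and logger names):

```python
import logging

from testfixtures import LogCapture

log = logging.getLogger("processor")


def process(items):
    log.info("Processing %d items", len(items))
    log.warning("Done")


def test_process_logging():
    with LogCapture() as captured:
        process([1, 2, 3])
    # Compare the whole captured log in one readable assertion;
    # on mismatch the report shows expected vs. actual side by side.
    captured.check(
        ("processor", "INFO", "Processing 3 items"),
        ("processor", "WARNING", "Done"),
    )
```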
Another flake8 plugin. I've seen this happen too: people write assertTrue, assertTrue, assertTrue, and then an expression. And again, this runs just fine, and you say: yeah, we have everything covered. Then you change something super trivial — I change this from one to two — and I want to see what broke, and it's "False is not true", "False is not true", "False is not true". All the failures look the same. They are failures, so you know where to start, but it's not as good as it could be. Again, if you had run flake8 beforehand, it would tell you that instead of doing an == comparison inside assertTrue it's better to use assertEqual, because the error message is going to be better. Let's look at those: instead of "True is not False" or whatever the message was, these error messages are way better — two is not less than two, two not found in [0, 1], two equals two and it shouldn't. We can do better still and be more Pythonic, especially if you're using pytest. Yay — only one person in the room? There's two of us. So instead of having a class that inherits from unittest.TestCase with these self.assert*, Java-style asserts, you just write Python — and it still generates nice error messages, like so: two is less than two... here was the function we called, here was the result. So yeah, it's always good to use a more specific assert rather than a less specific one, because when things go wrong, the developer who's going to look at that code is probably you, three months down the road, when you don't remember why you wrote something, and you're bitching about it, and then you run git blame and — oops, that was me. Use specific asserts.
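A sketch of the progression described, from assertTrue to a specific assert to a plain pytest assert:

```python
import unittest


def smallest(values):
    return min(values)


class TestSmallestUnittestStyle(unittest.TestCase):
    def test_smallest(self):
        # Weakest form: on failure it only says "False is not true".
        self.assertTrue(smallest([3, 1, 2]) == 1)
        # Better: a specific assert reports both operands, e.g. "2 != 1".
        self.assertEqual(smallest([3, 2]), 2)


# With pytest you just write Python; assertion rewriting still produces
# a detailed report, e.g. "assert 2 == 1" with both values shown.
def test_smallest_pytest_style():
    assert smallest([3, 1, 2]) == 1
```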
And there's another plugin called flake8-bugbear that helps you prevent all sorts of nasty bugs that we're aware of. So this is supposed to increment a number by one, and this one appends to a list. We have two tests to confirm it; we run the tests and they break — because this in Python is actually just something like +(+n) and doesn't really do anything, and the list is the known Python gotcha that you shouldn't use mutable default arguments, because they get reused over and over again: you don't get a new list every time you call the function, it's only created once, when the function is defined. And again, if you had run flake8 on this code before writing tests or running it in production, you would be told that Python does not support this kind of increment and that you shouldn't use mutable data structures as defaults.
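A reconstruction of the two gotchas; mapping them to flake8-bugbear's B002 and B006 checks is my reading, the talk doesn't name the codes:

```python
def increment(n):
    ++n          # bugbear B002: Python has no ++ operator; this is just +(+n), a no-op
    return n


def append_item(item, items=[]):   # bugbear B006: mutable default argument
    items.append(item)
    return items


def test_append_item():
    assert append_item("a") == ["a"]
    # Fails: the same default list is reused across calls,
    # so this call returns ["a", "b"], not ["b"].
    assert append_item("b") == ["b"]
```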
Sure, a lot of people know this, but maybe someone on your team does not, and then you're going to have to catch it in code review — and maybe you won't. Why risk it, if it takes a couple of milliseconds for your computer or the CI system to catch it? Ooh, another one where everyone gets stuck with mocks: requests. So we want to see how we ended last month, so we do a request for the first of October and one for the last of October, and then we return the first-of-month and last-of-month values and print them out, and we have a test that confirms that. Cool. What we did here is we mocked requests — this object here — and we say: when requests.get(...).json() is called, return this thing. And here we're lazy and we're just mocking it once, saying for every GET request return the same value, because this is simple code, we don't need to complicate it, and it's the same value anyway — it doesn't matter. You could actually set it up to return different values based on what was passed into get(), but you would need to use side effects, and if you haven't worked with side effects in mock before, good luck figuring that out. So this is 100% test covered, everything's fine. Let's see how we can do it better. The code here is exactly the same; what changed is that we started using responses, which is from the same author that wrote requests. Who uses responses? Cool — people are going to learn something. So you say: there's going to be a GET request to exactly this URL, please return that; and there's going to be another GET request to this exact URL — you see there's a different date here — please return that. You just specify what the requests are going to be and what the return values should be, and you can really nicely mock the network input and output. And we run the test and it fails — 1, 1, 1, 2 — and it fails because here we said response and response_two, but here we didn't. Good — because the previous code was perfectly 100% test covered, and there was still a really nasty bug in there. So 100% test coverage is not everything; your tests need to be good.
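A sketch of the responses-based version (URLs and payloads are invented for illustration):

```python
import requests
import responses


def month_boundaries():
    first = requests.get("https://api.example.com/rates/2018-10-01").json()
    last = requests.get("https://api.example.com/rates/2018-10-31").json()
    return first["value"], last["value"]


@responses.activate
def test_month_boundaries():
    # Each expected request is declared explicitly with its own payload,
    # instead of one blanket mock.patch("requests.get") that returns the
    # same value for every call.
    responses.add(responses.GET, "https://api.example.com/rates/2018-10-01", json={"value": 1.1})
    responses.add(responses.GET, "https://api.example.com/rates/2018-10-31", json={"value": 1.2})
    assert month_boundaries() == (1.1, 1.2)
```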
I think that's the last one. Yes. So we have a function that basically describes a car: this car has X wheels and Y doors, and we have a test for it. We give it a dictionary — this car has four wheels and five doors — and we also check for bad parameters, because we're not using enumerations here, it's a more complex data structure: we assert that if we don't give it wheels, and if we don't give it doors, this function will complain, and complain with a descriptive error message, so the developer knows what they did wrong. We run it: 100% test coverage. Do I get a yay? Everyone knows what's going to happen next. Boom: "'str' object has no attribute 'keys'". What? And then you're digging through the code and you realize what you put here — I mean, sure, you'll figure it out, but not in an instant. So save time, be lazy. Oh, I have unsaved changes — okay. So how to do this better: it's a similar principle to enumerations, which are basically a list of key-value pairs. You can use a namedtuple, which constructs a very simple object that can have only these parameters, and you say, again, that for this describe_car function the parameter needs to be of this type. We can run the tests again — everything's covered — but now we run mypy before running the code, and it says that argument 1 to describe_car has incompatible type "str", expected "Car". You go there, and we see that we actually passed in a string instead of a Car. This is what I said before: the computer caught this in an instant; you just need to be specific about what you expect this car parameter to be. And again, we don't need the whole if structure here to check whether certain parameters exist, and we don't need to test for that, because using a namedtuple already gives us the guarantee that the object that was constructed and passed to this method has all the parameters we need.
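A sketch of the dict-versus-NamedTuple contrast (typed NamedTuple syntax, Python 3.6+; names are illustrative):

```python
from typing import NamedTuple


class Car(NamedTuple):
    wheels: int
    doors: int


def describe_car(car: Car) -> str:
    # No need to validate that "wheels" and "doors" exist: a Car cannot be
    # constructed without them, and mypy rejects anything that isn't a Car.
    return f"This car has {car.wheels} wheels and {car.doors} doors."


print(describe_car(Car(wheels=4, doors=5)))

# describe_car("wheels: 4, doors: 5") is caught before the code ever runs:
# mypy: Argument 1 to "describe_car" has incompatible type "str"; expected "Car"
```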
Yes. And going further with that: namedtuples have been around since Python 2.5 or so, at least as an add-on package, and now in Python 3.6, I think, or 3.7, you also have data classes, which are like namedtuples. So if you go from enumerations to namedtuples, data classes are the next thing. I'm going to showcase data classes, possibly tomorrow, in a lightning talk about how to do REST APIs with Pyramid, and I'm going to use data classes there. Basically, with a data class you can also create methods on that object, and they have really good support for static type analysis built in. So, look into those.
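A small illustration of the same idea as a data class (Python 3.7, or 3.6 with the backport), showing the method support mentioned:

```python
from dataclasses import dataclass


@dataclass
class Car:
    wheels: int
    doors: int

    def describe(self) -> str:
        # Unlike a plain NamedTuple, behaviour lives right on the object.
        return f"This car has {self.wheels} wheels and {self.doors} doors."


print(Car(wheels=4, doors=5).describe())
```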
That's it — demo over, we're done. Okay, recap. Use black. A lot of the issues that we pointed out today would come up in your editor, because when you hit save your code is going to go "whoop" and you're like: whoa, something strange is happening, what am I doing? Or you're just writing things down — using black feels like you're a mad scientist with a butler cleaning up after you. Even today I don't remember how we're supposed to format things; I just hit save — whoop — and everything is nicely in place. Done. It just takes that mental burden off. And if you're writing something and the code does not get moved around, that's also a signal that my code is not good: something should have happened there and didn't. freezegun, LogCapture and responses are fantastic tools to make your mocked code much more readable and much more robust. And then just go on PyPI and search for flake8 — there are, I think, over 100 plugins; a lot of them are not usable, but still, maybe every week or every month take one or two and put them into your build process, and your code is going to get better and better all the time. And it's not just that you're improving the code — you're making sure that you're not degrading the new code that's coming in. And then really, really think about how to use enumerations, namedtuples and data classes along with mypy. And to do that, I strongly recommend going to this talk that's happening on Friday. Elm is, like I said, a fantastically designed language for the front end; it has all this functional programming and static typing built in. I started looking into Elm in the summer and I don't think I've written more than 100 lines of Elm to date, but it still completely transformed how I think about writing code in Python, because it gives you the idea that you need to think about how the data will look in your code, and then you describe it, and then you have all these guarantees that I've shown you. And just going to a talk and thinking about how these functional programming languages are designed is going to improve your code-writing abilities. Shameless plug: we're hiring. That's it. Questions, please.
Oh — so if you go here, to the readme, there's an alias for that and how to do it. Yeah, at Niteo we use pre-commit, and pre-commit works in two stages: you have the commit stage and the push stage. On the commit stage you can say: just run this couple of checks that are really fast, and only run them on the diff, on the files that you changed; and on the push stage, when you're actually ready to push, run everything, on all the files — that's the .pre-commit-config.yaml file. pre-commit is a library to do that; I have it linked. No, that's not it. Config. So you say isort, you say flake8 — I want to run isort and run flake8 — and this would run on every commit, this would run on every commit because it's fast. check-merge-conflict is slow, so I just run it on push. Yeah, this works really well — pre-commit is a fantastic tool.
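The config file isn't legible in the transcript, but a .pre-commit-config.yaml along the lines described — fast hooks on every commit, the slower merge-conflict check only on push — might look roughly like this (repository URLs and revisions are illustrative, not the speaker's exact setup):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-merge-conflict
        stages: [push]          # slower, so only run when pushing
  - repo: https://github.com/PyCQA/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-isort
    rev: v4.3.21
    hooks:
      - id: isort
```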
Well, it installs itself as a git hook. Yeah, let me verify that. Can we shut down the camera? I think we have it here. Yeah, it's a Python package; just do this, and then — I'm not sure this is going to run now, but we can try. So yeah, it's running isort — I think my environment is out of date; no, it's not — and then it's running flake8, and then it's trimming trailing whitespace, and then it's fixing end-of-file newlines, and it's checking for merge conflicts so you don't accidentally push garbage. We also do spell checking on our code base, because my eyes hurt when I see typos — and that's it. So you see, this is 30,000 lines of code and it's pretty fast. More questions? Okay, thank you.