Who let the robot out?
Formal Metadata

Title: Who let the robot out?
Title of Series: Plone Conference 2013 (talk 23 of 39)
Number of Parts: 39
License: CC Attribution 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/47847 (DOI)
Language: English
Transcript: English (auto-generated)
00:10
Hello, everybody. My name is Timo Stollenwerk. I'm a Plone core developer. I have been working with Plone for nine years. And I'm leading the Plone continuous integration and testing
00:27
team, so we make sure that Plone, the software, keeps up the good quality that we have in there. And the reason why I got interested in topics like testing and continuous integration was that, like everybody else, I had projects over the
00:46
years, and some went well, some went badly. I made mistakes like everybody else did, and I wanted to improve myself. And continuous integration and testing were a way of doing that. I still really
01:02
like my job, as probably everybody else in the Plone community does, at least that's my feeling, so I'm still very enthusiastic when I start a new project. And every time I start a new project, you have the best intentions, right? What you want your software to be:
01:20
first thing, it should work, right? That's a plus, definitely. It should be bug-free, whatever that means. It should be fast, of course. It should be maintainable, because we want to be flexible. Agile approaches are pretty common, so we want to refactor our code and change
01:44
it according to the demands of the customer. The code should be readable, because it should be easy to grasp what the code does. It should be, of course, well-documented. And in the end, it should be in time and in budget. So this is the wishful thinking, what everybody
02:05
wants at the beginning. I never heard any developer say I don't want any of those. And then there's the real world. The real world is like this, and these are all true stories. So you go, as a
02:22
developer Monday morning, you come into the office and, you know, fresh start and you check out your Git repository and the build is broken. So what do you do? You look at the code, and, like, after 15 minutes or half an hour or whatever, you see a commit that might have broken
02:43
your build. So you call the developer or write him an email and say, hey, I think you broke my build. I can't start my Zope instance any longer. So then the developer says, no, it works on my machine. It wasn't me. So you start discussing, like, for half an hour or whatever.
03:01
So you look into the code, and you will figure it out at some point. But it will take time, okay? Then you're finally able to, like, maybe deploy the software, and then it has, like, really bad performance. Or you run into lots of bugs. So your project manager really gets angry because, like, yeah, you ship buggy software and everything,
03:24
the customer gets angry, and you want to maybe fix code. I mean, if the performance is bad, you want to fix it. So then you look at parts of the code where you might see or might expect that this is responsible for the slow performance.
03:47
But you can't really, the code is pretty unreadable. You can't really refactor it because if you start, like, changing something, everything breaks down. So you can't really refactor things. So in the end, this leads to projects that are over time and over budget
04:03
because nobody expects you to, like, spend two weeks or four months or whatever just fixing bugs, right? Oh, yeah, I forgot those. So what's the problem between those two, like, between the wishful thinking
04:23
and real life? It's assumptions, basically. Because when we start a new project, we just assume that we have all smart programmers and they don't introduce any bugs; you expect them to write clean code that is maintainable
04:41
and that is bug-free, that developers don't make mistakes, and that the software works as expected. But this is what most people do, actually. They say, okay, at the start of a project, we want to have bug-free code and everything, because everybody wants that, but they just assume that it will magically happen.
05:02
And this is unfortunately not how it goes. So building high-quality software is really hard. I won't present you any silver bullets, saying, hey, I have this cool software you have to buy and then all your problems will be gone. It's hard work, but I will try to show you how
05:24
continuous integration can help you to build better software, because it helped me to build better software. So what should continuous integration do? It should help you to reduce assumptions and to reduce risk
05:41
and to replace them with some kind of proof. Whatever that means, I will come to that later. So what do you need? What are the basic ideas of continuous integration? The idea is that you have an automated build and set of tests that can actually tell you if your software breaks.
06:01
These are the basics. Sorry, there's no way around it. You have to write tests if you want continuous integration, right? And if you want to have bug-free software, then you have to write some tests. There's just no way around it. The goal of continuous integration is pushing this idea of having
06:21
an automated build and test system a bit further by having the goal that the software is proven to work with every new change. This means that every time a developer does a commit, you want to make sure that the software works as expected, that it's not broken, and that you can somehow prove that this is the case, right?
06:41
So you don't break things. One important thing is that continuous integration is a practice, not a tool. It's not Jenkins or Travis; those are tools, but continuous integration is really a practice. And for that, you need an agreement on the team.
07:01
So before you start a project, you have to sit together with the team and think about ways how you can build quality software. And continuous integration just helps you with that, but there's no way around having an agreement on the team because you have to decide what you want before you can actually make a continuous integration server or whatever test what you want, right?
07:23
So this is the first step. And if you want to do this practice of continuous integration, there are a few really simple rules. The first rule is do not break things. Makes sense, but if you have an automated build
07:42
and a test, you actually can prove that you don't break things, right? You don't expect that if you commit something, you don't just expect that you don't break things, but you see if you break things. So what you should do is, before you push something to a repository,
08:02
you should run your tests locally first so you don't break things, right? Then you should wait until the automated build and test set run through. And one very important lesson that I had to learn
08:21
the hard way is: never go home on a broken build. Do not push something and then go home and expect that the build will be green, right? Because then you will break it for everybody else and that kind of sucks, especially if you have distributed teams across time zones and continents and everything, then you can't just expect that everybody goes home, right?
08:43
So the second rule is if things are broken, don't make it more complicated. This is related to some of the things that I said before. Don't check in on a broken build. I mean, the only way to achieve that is if you wait until the build passes
09:00
and then commit on it. What you need for that is, of course, like a fast build and fast feedback. In Plone, for instance, the core dev builds, they take an hour. Right now, we are working on that. We will improve that, I promise. But this is how it is right now. So what I do if I do core dev development
09:21
or what I used to do, is I don't want to wait an hour, so I was pushing and pushing, and then you see at some point, oh, the build broke. But then you have six additional commits, so you have to look at them and figure out what actually broke, right? That kind of sucks. So try not to check in on a broken build.
09:42
That's easier if you have a fast CI system. And the third rule is that if you broke the build, fix it as soon as possible. First thing is that if you break the build, you should take responsibility, no matter if you're really responsible for it or anything else broke.
10:01
I mean, it's hard to write tests that are really reliable. We have those tests that are unreliable in Plone still. I worked a lot on that, but sometimes it happens that the test just fails. But if it was your commit, you are responsible, because if you're not responsible, then nobody is, right? And you have a broken build,
10:21
so we have a problem that we don't want to have. If you broke something, that's not a really bad thing in itself. I mean, the CI server is there to tell you that you broke something, right? But the problem is that if you broke the build,
10:43
then you should be prepared to revert your commit if that's necessary. There are different approaches, but most people say that you should fix things within 10 minutes or half an hour or something like that. And if that's not possible, then just revert your change
11:01
to make sure that other people can work, because since they shouldn't commit on a broken build, they essentially can't work. And it's also a best practice in CI that if someone broke a build, the entire team stops working and works on fixing that build. That's not always necessary, because sometimes these are easy fixes
11:21
and you see immediately what you did wrong. But if there's a serious problem, then everybody in the team should stop and work on fixing the build, because we want a green build all the time. And the last rule is don't comment out failing tests, because it's really tempting. And this is what happens often, that you break the build
11:41
and you know that the other developers want to go further. So you comment out the test, and then you forget about it. And this is also something you should really not do. So, how to get started with continuous integration? I will show one example of a setup that I usually use.
12:02
There are plenty of CI servers and plenty of different systems out there; this is my setup, but the pieces are replaceable. So what you of course need
12:21
is a central repository for everybody, but I expect that everybody works with version control nowadays. I chose Bitbucket because they offer private repositories for free, in contrast to GitHub. And what you need is a post-commit hook.
12:42
If you read the documentation for Jenkins, for instance, it tells you that you should run your tests on a regular schedule, like every 10 minutes or something. But there's a problem: if you run your tests every 10 minutes, then you might not be able to see which commit broke the build, right?
13:02
Because if two people commit within those 10 minutes and the build runs, you have two commits, one of them broke the build, and you have no way of seeing which. So this post-commit hook is really essential for a working CI system. And it's really easy to set up for every system: there's a post-commit hook in SVN, in Git, in Mercurial, and so on.
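Such a hook can be a small script. For illustration, a minimal Python sketch of a Git post-commit hook, assuming the Jenkins Git plugin's notifyCommit endpoint; the server and repository URLs are placeholders:

    #!/usr/bin/env python
    # .git/hooks/post-commit -- ping Jenkins after every commit.
    # Assumes the Jenkins Git plugin's notifyCommit endpoint; the URLs
    # below are placeholders for your own setup.
    import urllib.parse
    import urllib.request

    JENKINS_URL = "http://jenkins.example.com"
    REPO_URL = "git@bitbucket.org:example/project.git"

    query = urllib.parse.urlencode({"url": REPO_URL})
    urllib.request.urlopen("%s/git/notifyCommit?%s" % (JENKINS_URL, query))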
13:23
In Bitbucket, it's very easy. You just go to the admin page, to the hooks, and add Jenkins there. If you have a private Jenkins instance, it's a bit more work because you have to add a token and everything, but that's also not too hard to do. Then you have the Jenkins server,
13:40
which I will cover in detail in a minute. So this is basically where you run your build and your tests. And then something that's also very, very important is to give feedback to the developers. If you have a CI system that requires you to go there and look whether the build is green, it's no use.
14:05
So you have to notify the developers. Usually you do that by email. There are other options. You can have it like push a message on IRC or have a big monitor in your office or whatever, but an email is the usual way of doing things.
14:22
And that's also pretty easy, for instance, to do this with Jenkins because Jenkins out of the box gives you the ability to just send emails. So you can send the email to a mailing list or you can send the email to a single developer that broke the build to let him know that he broke the build.
14:41
So this is really the most basic setup, and it's really easy to set up, right? So what does Jenkins, or the CI server, actually do? The most important thing is, of course, that you run your tests. Because without tests, you just assume that your system works,
15:02
and we want proof, right? We don't want wishful thinking. We want proof that our system works. So if we run tests, we can prove that our system works. And that's also pretty easy to set up for Plone. We have the collective.xmltestreport plugin for the test runner that spits out
15:24
an output that Jenkins can read and process, and then you have those nice statistics and graphs.
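As an illustration, a minimal sketch of such a test, using plone.app.testing; the test class and assertion are hypothetical, and with collective.xmltestreport in the buildout you would run it with something like bin/test --xml so that Jenkins can pick up the results:

    # A minimal integration test sketch; the checked behaviour is a
    # placeholder for your own assertions.
    import unittest

    from plone.app.testing import PLONE_INTEGRATION_TESTING


    class TestSiteSanity(unittest.TestCase):

        layer = PLONE_INTEGRATION_TESTING

        def test_portal_has_a_title(self):
            # The layer gives every test a freshly set-up Plone site.
            portal = self.layer['portal']
            self.assertTrue(portal.Title())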
15:40
Something that's also important is, of course, test coverage, because it's not enough just to have tests. You want to know that you have decent test coverage, so that your tests will tell you if things break. If you have only 10% test coverage, there's no way those tests will really tell you if the system fails. I mean, 10% test coverage is better than having no tests at all, but still, a big part of the system
16:01
is untested, basically, so you will not notice if you broke anything.
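As a sketch of producing a coverage report that Jenkins can chart, using the plain coverage.py API; the package name and file names are placeholders, and in a Plone buildout you would more likely wrap the bin/test script instead:

    # Measure coverage around a test run and emit Cobertura-style XML,
    # which Jenkins coverage plugins can read.
    import coverage

    cov = coverage.Coverage(source=["example.policy"])  # placeholder package
    cov.start()
    # ... run the test suite here ...
    cov.stop()
    cov.save()
    cov.xml_report(outfile="coverage.xml")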
16:20
The second thing is acceptance tests. Acceptance tests differ from unit tests, integration tests, and even functional tests: those tell you if you break things, but the idea of acceptance tests is that you have a specification. You get a specification from your customer, and you test that actual specification. So, if you want to test a login form, for instance, you have a text like this.
16:42
You say: given a login form, when I enter valid credentials, then I'm logged in. This is something that everybody can read and even write. And thanks to Asko Soukka, we have Robot Framework in Plone that you can use to write those kinds of tests.
17:00
And if you have an acceptance test for every specification, for every user story, say, then you have proof that your system does what your customer expects it to do, right? You have a formal specification, and when you run those tests continuously, then you have proof that the system
17:22
does what it should do, right? Without acceptance tests, you don't have that; you have proof that the system does what the developer expected it to do. But with acceptance tests, you really have the proof. And if the customer then comes along and says, hey, our system does not work as expected,
17:40
then you can go with the customer to your acceptance tests and discuss: is this what the system is supposed to do, and do we have to change it? And if not, then I have the proof that the system works as we agreed. We replace assumptions with proof. Another thing is static code analysis.
18:04
In the Python world, we have tools like pep8 and pyflakes, and flake8, which combines both. For JavaScript, we have jslint, and csslint, and many other tools. So, where we can,
18:22
we can measure the quality of our code. The thing about code analysis is that you can have really crappy code that is PEP8-compliant. So if your code is PEP8-compliant,
18:41
it does not mean that you have clean code that is easily readable, right? The only thing that code analysis can really give you is pointers to where your code might have a problem. With Hector yesterday, I discussed the famous 80-character limit in PEP8.
19:00
I don't want to discuss this here, but my point was basically that if you go beyond this 80-character limit, this might show that you have too many nested ifs, for instance, or an if with clause after clause that is unreadable. So you might have to do something about it.
19:22
That doesn't mean that if your line is 81 characters long, there's a problem, right? There's a possible problem, but code analysis does not give you proof that there's a problem. It just gives you a pointer to where you might have one. One other thing that's important about code analysis
19:41
is readability. Python code is pretty readable; we all agree on that, and this is why we like Python. But if you have different personal preferences in code, that makes code less readable. So you need an agreement on the team, and you can't just assume that all programmers
20:02
have the same style; you need an agreement. And we have this kind of agreement in the Python community, which is PEP8, right? So this is also something where we replace assumptions with proof where we can.
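For illustration, a sketch of a Jenkins build step that runs those checks through flake8's Python API; this assumes flake8 3 or later, and the src directory is a placeholder:

    # Run flake8 over the source tree and fail the build step when the
    # agreed style rules are violated.
    import sys

    from flake8.api import legacy as flake8

    style_guide = flake8.get_style_guide(max_line_length=80)
    report = style_guide.check_files(["src"])
    sys.exit(1 if report.total_errors else 0)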
20:23
Another kind of problem that I usually run into in projects is performance. I mean, it's hard to just say I want a fast system, right? Because you need a goal. If you have a contract
20:40
with a customer, sometimes they say, okay, we want every page rendered within 500 milliseconds or something; then you need proof of that. But usually the customer just comes and says, yeah, we want the system to be fast, right? Whatever that means. So this is usually also assumptions. But you can have proof: you can run a performance test.
21:06
The tool that I use these days is JMeter. I also used the Grinder and tried some other tools. But what's nice about JMeter is that it comes with a desktop application where you can basically click together your test plan. It's really easy to do.
21:21
And you can not only run those JMeter tests from your desktop system, you can also make Jenkins run them. So you can really click together your test plan, push it to your repository, and then have Jenkins run it on a continuous basis.
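As a sketch of such a Jenkins build step, calling JMeter in non-GUI mode from Python; the test plan and results file names are placeholders:

    # Run a JMeter test plan headlessly; the .jtl results file can then
    # be charted by Jenkins, e.g. via the Performance plugin.
    import subprocess

    subprocess.check_call([
        "jmeter",
        "-n",                  # non-GUI mode, suitable for CI
        "-t", "testplan.jmx",  # the plan clicked together in the desktop app
        "-l", "results.jtl",   # where to write the results
    ])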
21:42
The problem with performance tests is that they usually take some time, especially if you want to hit the system with a lot of users. So you need a dedicated system to test against. And you can't use the Jenkins machine if other jobs run there, right? Because then you won't get decent results.
22:00
But if you run those performance tests on a regular basis on your continuous integration server, then you can replace the assumption that your system is fast with the proof that it is fast, or that it responds within a certain time.
22:23
So the next thing is documentation. Everybody loves documentation, everybody wants to write documentation. If you do test-driven development, you basically get developer documentation, right? Because if you want to know how a system works,
22:41
what I usually do is look at the tests, and if there are decent tests, then you know how to work with the system. So this lower-level developer documentation you can achieve by writing tests. But there are also other kinds of documentation needs in a project.
23:01
For instance, if you have Robot Framework acceptance tests, you might want to have them in a nice format which you can give to your customers so they can have a look. We have the technology, for instance, to include Robot Framework tests, even with screenshots, in the Sphinx documentation. This is what I do in projects these days.
23:21
I start a new user story and implement it. I write the acceptance tests. I make sure that the robot tests take some screenshots. I integrate those screenshots into Sphinx. So my project manager can just go to a URL, look at those, and see if that's what he expects me to do.
23:44
So this is a kind of documentation. There's, of course, other documentation that you need to write that has no tight binding to the code. But what Jenkins can do for us is auto-generate this Sphinx documentation, right?
24:01
It's really easy to just have a Jenkins job that runs a script that builds the Sphinx documentation and uploads it automatically to a server. That's really two lines of Bash script, so that's really easy to do.
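For illustration, the same job expressed as a Python sketch; the host and paths are placeholders:

    # Build the Sphinx docs and upload them to a web server.
    import subprocess

    subprocess.check_call(
        ["sphinx-build", "-b", "html", "docs", "docs/_build/html"])
    subprocess.check_call(
        ["rsync", "-az", "docs/_build/html/",
         "docs@docs.example.com:/var/www/docs/"])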
24:21
Then the last thing is notifications. I already talked a bit about that. There is the Jenkins email-ext plugin, for instance, that you can use, which allows you to send emails in a really flexible way. You can decide, for instance, what to do if a build breaks, or if the build is fixed again, or if you made things worse
24:42
by introducing new test failures, or if you improved it. So you can choose for everything that might happen on your Jenkins instance to send an email, and you can choose what to include, for instance, in the body of the email. And you can choose where to send those emails. You can say that you want to send emails to the developer,
25:03
your internal developer mailing list only if the build breaks, but always send an email to the developer who committed, to let him know that everything went okay. So it's a really flexible system. Notifications are really essential so that you get rapid feedback, as I said before.
25:22
You don't want to wait an hour as a developer, because it's like test-driven development, where you want really short cycles. It's the same in continuous integration: you want short cycles. At best, the CI system should give you a response within 10 minutes.
25:41
More than that is hard to work with. And beyond the emails, Jenkins can become your communication platform for all kinds of users in your system. You not only have developers, but also project managers who might want to know about the software quality, and you give them a way to measure that
26:02
by your code coverage or your static code analysis and stuff like this so that the project manager can go there. Or you can tell your project manager, hey, we had a real tight schedule, and we are a bit low on our test coverage, so we want to catch up. So let us work for a week on our test coverage
26:23
so we can improve that. And a project manager can then actually see that you improved things. The project manager can go to Jenkins to look up the latest acceptance tests and stuff like this. Also, of course, for the developers, developers can go there and get tons of information.
26:46
Taking continuous integration a bit further is, you might have heard the term already, continuous deployment, or DevOps. The idea is that if you have an automated build with good tests and decent test coverage,
27:01
then you can tell, if the tests pass, that the system works as expected, right? Then you can just push it a bit further and say, okay, if we are sure that our software does what it is supposed to do, then we can just push a button and deploy,
27:22
just like this, right? Because if we know that our software works, why should we wait to deploy things? So you can even extend that and say, okay, this is not the end. The continuous integration is not the end, but push it further. So what I usually have is,
27:42
what Jenkins allows you to do with the build pipeline plugin is basically to create a custom build pipeline. What I usually have as the first step is the tests, because they have to run fast. Usually I run only the integration tests, not the acceptance tests, because those take too long, and I want to give the developers quick feedback.
28:03
The next step is then the acceptance tests, the code analysis, and the test coverage. The good thing is that with this pipeline, you can run them in parallel, right? So if you have a multi-core machine, that really speeds up the process. So you run code analysis, test coverage,
28:20
the acceptance tests. And if all those passed, then I have a really simple Jenkins job that just says release, for instance. With zest.releaser or jarn.mkrelease, you can easily make releases from Jenkins. So you push a button and it does the release and uploads it to your internal PyPI server.
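As a sketch of such a release job, assuming zest.releaser's fullrelease script is available on the Jenkins machine; where it uploads to depends on your package index configuration:

    # Tag, bump the version, build and upload the package without prompting.
    import subprocess

    subprocess.check_call(["fullrelease", "--no-input"])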
28:42
And then you have another job where you say deploy to staging, or you have a button that says deploy to live, right? The thing is that this build pipeline system doesn't force you to do that. Using it does not mean that for every commit that is green,
29:01
it will deploy to the live server. You still can do this manually. But the thing is that usually if you have system administrators in a team, then there's usually one person that is responsible for doing the deployment and everything. And if you automate those things, others can also click that button. Even your project manager can click that button, right?
29:20
I mean, somebody has to be around. But no, I mean, it's up to you to decide that. But that's really a nice thing. I mean, what I usually do is we have a live server and a staging server and a playground or dev server. And the Jenkins job always pushes to the dev server
29:40
because it's just for the developers. So you can easily do that, right? And I usually have a button to deploy to staging and one to deploy to live. So what do we get from my first slide, from the wishful thinking?
30:01
It's basically really the same slide, so you know that I don't trick you. We want a working system, right? So if we have enough tests and we run them on a continuous basis, we have really proved that we have a working system actually. Oops, that was too far.
30:21
I don't have a working system here. Yeah, it's not tested, that's a problem. Oh, I can just use the first slide. So we have a working system if we have tests. We will never have a bug-free system.
30:41
I think we all know that if you have some experience with programming, it will never be bug-free. But what a continuous integration system or what testing can give you is catching the bugs early. So they're easy to fix and cheap to fix. We all know that if you catch a bug early, that is far easier to fix than if you catch it after you deploy it and the customer yells at you, right?
31:05
Then we can have a fast system if we have a definition of what fast means for us and we can run our performance tests on a continuous basis to monitor that. And that's pretty handy actually, even in the development phase.
31:22
Because then you see at what point you might have introduced code that leads to bad performance, right? And where the problems lie. As I said, there's no silver bullet in that. If you see that, you still have to go to a profiler at some point
31:40
and find out where the real problem is. But you see it earlier, so you can react. You see when you introduced the problem. What I often had in projects was that we were working for a long time developing the software, and then we deployed it in an alpha phase for a couple of users. And then we said, oh, the performance is bad.
32:00
So we started to write some performance tests. But by then you have spent a year or something working on the software, and you have to figure out where the problem lies. If you see immediately where the problem was introduced, while you are working on it, then it's really easy to fix and way cheaper to fix.
32:20
So we can have fast software. If we do code analysis, we can have maintainable code. Tests also play a big role in that. If you have testable code, then you usually have a better architecture than without tests, because tests force you to have a decent architecture.
32:43
And since we're testing the team's agreement in terms of coding best practices, we have more readable code, of course. Then, as I said before, we have documentation, at least for developers.
33:00
And if we have acceptance tests, we have documentation about what the system should do. I had projects where I came in that I did not develop, where I had to extend a system, and it was not working as expected, more or less. The customer was telling me,
33:21
no, this is not what we wanted. So I looked at the tests, and the tests were actually failing. And then I looked at the code, and the code was doing something else. So I had three options to go with, right? And the reason was that they even had tests in this project, but they were not run on a continuous basis. So I could never see when it actually broke.
33:40
So if you have a continuous integration system and you run tests continuously, then you have documentation of what the system is supposed to do. So it is documented. And if you have all these things, then it's easier for your project manager. Whether you do an agile approach or a more classic approach,
34:01
then it's easier to finish a project in time, right? Because you don't have to add at the end like a couple of weeks or months to fix bugs because you fixed them right from the start. And it helps you to finish the project in budget.
34:22
Now I'm behind again. Okay, this was the last slide. Yeah, this was it. And I hope I could show you how you can improve things.
34:43
If you are maybe still a bit new to testing, I don't expect you to just go ahead and implement everything. I did this one step after another: I got into testing, then I ran my tests on a CI server, and then I thought about acceptance tests and performance tests. And you have the CI server for everything, basically.
35:03
And it really becomes the central piece of your software development. My experience is that it can really improve your software a lot and help you a lot. It shouldn't scare you; just do whatever you think makes sense,
35:21
or work on the problems you have. I said that I had those problems, right? The ones I talked about, these were real problems; I was not lying. So I wanted to work on that and improve, and CI helped me to do that. And I hope I could convince you
35:43
that it is worth investing some time to work on it. Thanks. Any questions?
36:07
A lot of us work on systems that have databases. How would you handle the transitions and migrations of the database schemas?
36:26
This is one of the reasons why I'm really, really happy that we work with Zope and Plone, because it's really easy: we have the Zope test runner, so you can set up a database from scratch for each test.
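For illustration, a minimal sketch of what "a database from scratch for each test" looks like with the plain ZODB API, outside of Plone's test layers:

    # Each test gets its own throwaway in-memory ZODB, so no state leaks
    # between tests and there is no schema to migrate.
    import unittest

    import transaction
    from ZODB import DB
    from ZODB.DemoStorage import DemoStorage


    class FreshDatabaseTest(unittest.TestCase):

        def setUp(self):
            self.db = DB(DemoStorage())
            self.conn = self.db.open()
            self.root = self.conn.root()

        def tearDown(self):
            transaction.abort()
            self.conn.close()
            self.db.close()

        def test_store_and_read(self):
            self.root['answer'] = 42
            transaction.commit()
            self.assertEqual(self.root['answer'], 42)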
36:42
So you can be sure that you test the right thing and not something that is only on your deployment. And we don't have this problem of having to migrate schemas. If you're interested, there are a couple of really good books about the topic.
37:01
One is called Continuous Delivery, which has an entire chapter about migrating relational database schemas. And to be honest, I just skipped that chapter, because as I said, we don't have the problem. That's really nice. If you have the ZODB,
37:21
you don't have to migrate the schema. You just change something and push it, and that's it. I mean, it's just attributes on an object. You can remove them, you can add new ones, and the ZODB will work that out. So that's a pretty specific problem, but I'm not sure if CI can even help you,
37:41
because what the continuous deployment people say is that you should automate everything, right? So you need a system that helps you do these migrations, and you should automate them so you just click a button. But that's pretty hard to do if you have relational databases; it's a lot harder than when you work with the ZODB.
38:05
So sorry that I can't really help you with that. Oh, and I forgot to add a final slide about two books that I read that are really good. They're both from the Martin Fowler Signature Series.
38:24
They're called Continuous Delivery and Continuous Integration. I will tweet those. They're not about any particular technology; they're really just about the process. But they're really good, and I can really recommend them.
38:59
Yeah, I told you the other day
39:01
when we were discussing this over email that we should just drink beer and then discuss this, because there's no use discussing PEP8 things without getting drunk at the same time. Thank you.
39:37
Do you have any results,
39:41
like some statistics or numbers on how the adoption of continuous integration leads to a lower rate of defects or a lower time to deliver?
40:02
Lower time to deliver, I'm not sure. The thing is, there are some studies from a couple of universities about test-driven development and how you can reduce the number of bugs in your software, and what they're all basically saying is that the more experienced
40:21
your programmers are, the more you can gain from test-driven development. But usually you will gain something; you will not really lose time. If you talk with people about test-driven development, they say, oh, I can't do this because my customer does not pay for it. My experience was really the opposite:
40:43
test-driven development can really speed up your development, and all those studies more or less say that. They say that if you're inexperienced, it will more or less take the same time in the end, and if you are experienced, it will really speed up your process. But beyond that, I can't tell you any numbers.
41:15
Okay, then thank you.