Better Software — No Matter What - Part 6
Formal Metadata
Title | Better Software — No Matter What - Part 6
Series | NDC Oslo 2013 (part 34 of 150)
License | CC Attribution - NonCommercial - ShareAlike 3.0 Unported: You may use, adapt, copy, distribute and publicly transmit the work or content in unchanged or adapted form for any legal, non-commercial purpose, as long as you credit the author/rights holder in the manner they specify and distribute the work or content, including in adapted form, only under the terms of this license.
Identifiers | 10.5446/51508 (DOI)
Language | English
Transcript: English (auto-generated)
00:05
All righty. Thank you. We just finished talking about the importance of unit tests,
00:23
and I haven't talked about TDD or anything else because I want to get across the idea that unit tests by themselves are worthwhile, independent of whether you're using TDD or something like that. And that becomes important because if you have legacy code and it doesn't have unit tests, it's not too late to add those unit tests or to start introducing unit tests.
00:44
You don't have to use some other fancy methodology. Now, the next thing I want to talk about is automated unit testing, and automated unit tests are those that are run automatically, usually inside a testing framework that will check to make sure that they have succeeded or they have failed.
01:01
It makes unit tests a lot more useful because you get to run them very frequently, ideally after any non-trivial code change, which hooks into the idea that you want to make sure that they can be run very, very quickly with almost no overhead. I mentioned this once before, but it bears repeating that unit tests are software, and as such, they need to be maintained.
01:23
They're not just kind of temporary scaffolding. And what this means is they have to be kept up to date with the code that they test. They should be made up of good code, which is readable, which is maintainable. And remember, one of the roles of unit tests is to serve as documentation for interface clients.
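To make that documentation role concrete, here's a minimal sketch of such a test; the BoundedStack class and the test itself are hypothetical, invented for illustration rather than taken from the talk:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical interface under test: a stack with a fixed capacity.
class BoundedStack {
public:
    explicit BoundedStack(std::size_t capacity) : capacity_(capacity) {}
    void push(int v) {
        if (data_.size() == capacity_) throw std::overflow_error("stack full");
        data_.push_back(v);
    }
    int pop() {
        if (data_.empty()) throw std::underflow_error("stack empty");
        int v = data_.back();
        data_.pop_back();
        return v;
    }
    bool empty() const { return data_.empty(); }
private:
    std::size_t capacity_;
    std::vector<int> data_;
};

// The test doubles as documentation: it shows a client exactly how the
// interface is meant to be used, and it can be rerun after every change.
void test_push_then_pop_returns_values_in_lifo_order() {
    BoundedStack s(2);
    s.push(1);
    s.push(2);
    assert(s.pop() == 2);
    assert(s.pop() == 1);
    assert(s.empty());
}

int main() {
    test_push_then_pop_returns_values_in_lifo_order();
    return 0;  // a failed assert aborts, so automation can detect failure
}
```

Because a test like this is just code with a pass/fail outcome, a framework or even a plain build script can rerun it automatically after any change.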
01:40
So that's another reason why they need to be really nice to look at. So now I want to talk a little bit about test-driven development, and I'm going to assume, well, let me ask, how many people are familiar with the basics of test-driven development? As expected, especially at a conference like this, pretty much everybody. So you already know the basics.
02:02
Fundamentally, test-driven development means what you do is you write a test for functionality before you implement it. This is also called test-first programming. And then we've got a TDD loop that you execute to improve the functionality of the system. So you write a small test for the new functionality. You confirm that the test fails to make sure that the test is being executed,
02:24
so you want it to fail right away. You write enough code for the new functionality to pass the test. Now you know you have the new functionality that you wanted. You then refactor to get rid of any code smells, because while you were in step three, you wrote just enough code to get the new functionality to work.
02:41
Now you're going to clean it up: in step four you confirm that the new test passes, in step five you refactor to get the code smells out, and then in step six you reconfirm that the system still passes all of the tests. So that's the basic TDD loop. This essentially elaborates on what I just told you,
03:03
but since most people here are familiar with TDD, I'm not going to go through this in great detail. Basically these are, again, the steps along with their motivation. What makes TDD work is it relies on automated test execution.
03:23
This is usually in some kind of test execution framework. Almost all the frameworks, at some level, mimic the original one, which was JUnit. Basically the idea is that as you're running tests, usually you've got a green bar, and as long as the bar stays green, that means you're passing all the tests, which is good. If any of your tests fail, then the bar turns red,
03:41
and that indicates that at least one of your tests failed, and there's usually information below to tell you exactly which tests failed. So people doing TDD often talk about red-green refactor, which means first you write a test before you've written the code that will make it succeed, and then you run the test.
04:00
The test is supposed to fail. That's to make sure you really are running the test. A lot of people have thought that their systems were passing all the tests, only to discover they weren't actually executing the tests. So that gives you the red bar. Then you write the code to make it succeed. Then you get a green bar, ideally to show that you are now passing all of your tests, and then you refactor and rerun the tests.
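As a sketch of one trip around that red-green-refactor loop, using plain asserts instead of a real framework; trim_leading_zeros is a made-up example function, not something from the talk:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Step 1 (red): the test is written first, and in writing it we have
// also decided what the interface looks like. At this point the
// function below doesn't exist yet, so the test run fails: a red bar.
std::string trim_leading_zeros(const std::string& s);

void test_trims_leading_zeros() {
    assert(trim_leading_zeros("007") == "7");
    assert(trim_leading_zeros("0")   == "0");   // don't trim the only digit
    assert(trim_leading_zeros("42")  == "42");  // nothing to trim
}

// Step 3 (green): just enough implementation to make the test pass.
std::string trim_leading_zeros(const std::string& s) {
    std::size_t i = s.find_first_not_of('0');
    if (i == std::string::npos) return "0";     // input was all zeros
    return s.substr(i);
}

// Steps 4-6: confirm the test passes, refactor away any smells,
// and rerun everything to confirm the bar is green again.
int main() {
    test_trims_leading_zeros();
    return 0;
}
```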
04:23
There are a lot of benefits of TDD, which I want to summarize for you, because I do think that they're important, and these are benefits that are separate from just using unit tests. First, it addresses both external and internal quality, because you are making sure that you do provide the functionality the interface promises.
04:42
That's external quality. But an express part of the methodology is to refactor to eliminate code smells. That is internal quality. So what I like about the methodology is it addresses both aspects of code quality. Defects are detected sooner than they otherwise would be in many cases.
05:03
You can find problems in the specification because it's really hard to write tests for what's not well specified. And remember, you write the tests before you write the code. If you don't know what the test is supposed to do, then you get to go back and have the specification become clarified. If you find a problem in the code, if you get a red bar, you know it immediately.
05:24
And in practice, this means that TDD programmers tend to spend a lot less time debugging. If you get a red bar, it has to be because of some change you just made. And if you're working in small increments, then you didn't change very much most recently. So most people don't even bother to fire up a debugger. They just go back and look at the most recent code change that they made.
05:45
Another thing I like a lot, interfaces are created before implementations. This I just think is great. The interface that you have to a function or the interface you have to a class is going to be determined by the person writing the test case, rather than by the person who's doing the implementation.
06:03
And the test person is going to want to have the clearest, most straightforward interface that they can imagine. You tend to get better interfaces this way. On the other hand, if you have the person implementing the function determine the interface, they're going to try to produce an interface that is easy to implement, which may not be an interface that is easy to use correctly
06:22
and hard to use incorrectly. Another good thing of TDD is that code reuse is facilitated. Now, this is what we know. We know that any time you write code, if you want to use it in a different context, that's a huge amount of work.
06:40
Almost always, going from one user to two users is a hard amount of work, because you didn't really understand how to make it general enough to be usable in multiple contexts. But going from two clients to three clients is typically a lot easier, because you've already done some necessary generalization. Well, with TDD, the code is born with two clients,
07:02
the original application and the test code. So you've already got to the first two clients, which means using it in new contexts should be relatively easy. Another advantage is that gold plating is discouraged. Fundamentally, developers are less likely to add a lot of unnecessary functionality to the system
07:23
if they know they have to write unit tests to confirm that it passes all the time. So it discourages people from doing a lot of work that doesn't otherwise need to be done. Now, using TDD means that we are asking developers
07:41
to be active in some kind of a test role. It is important to recognize that most developers are better programmers than they are testers. These are separate skills. Good developers are not necessarily the same people as good testers, and vice versa. For example, a lot of developers don't use code coverage tools
08:04
and they don't use metrics, and there's been a lot of empirical evidence that shows that if you say to a developer, okay, you've written some tests, what percentage of your code do you think it covers? They'll go, ah, I'm covering 80, 90 percent of my code for sure, when it's actually covering 22 percent of their code, something like that.
08:20
Remember I said earlier, programmers are optimists, which you kind of have to be to be a programmer. Really, it's a miracle anything ever works. I mean, if you think of all the things that can go wrong. I mentioned this morning, programmers tend to focus on clean tests. Clean tests, these are positive tests. They exercise normal functionality.
08:42
I gave the appropriate inputs, I checked to make sure I've got the proper output. Those are clean tests. Now, dirty tests, those are negative tests. They exercise exceptional use. For example, I give it invalid inputs. I set things up so that the heap is going to be exhausted. I set things up so that I'm going to get numeric overflow on some of my computations.
09:03
All of the things that are no fun to test because they're not supposed to actually occur while the program is running, but they actually do have to be tested. Steve McConnell says that mature testing organizations, they have five times as many dirty tests as clean ones. In other words, they have five times as many tests around the fringe,
09:21
checking to make sure that all the weird situations which aren't supposed to occur are correctly handled, as they have tests for the situations that are supposed to occur. Programmers usually like to test things which are going to work as opposed to things that are going to fail. Furthermore, if you have the test author be the same as the person who wrote the code,
09:41
they bring the same interpretation of the spec both to the test and to the code, which means an outside tester might interpret the specification differently, and that could help expose weaknesses in the specification. Programmers need to write unit tests, and they do a fine job of doing it. It's just important to bear in mind they don't do the role of professional testers.
10:01
Testers tend to focus on these other things. Furthermore, unit tests can't replace independent testing. They're actually complementary. Fundamentally, developers do white-box testing. They know how the code works. Testers can do black-box testing. And we also know that if I have multiple methodologies
10:24
for identifying defects, such as testing, I'm going to get more defects discovered if I combine them than if I use any one of them independently. So if you have programmers doing some testing and you have testers doing some testing, you're going to get better coverage in terms of identifying defects
10:41
than if you have only programmers or only testers identifying defects. Having said that, unit tests could reduce the time and the cost of conventional testing simply because fewer bugs should be downstream for testing to identify, assuming that developers have done a decent job of unit testing upstream.
11:15
Okay, so the question is that a lot of the dirty things that I talked about, for example,
11:20
invalid inputs, forced overflow, heap exhaustion, yes, ideally those should be part of the unit tests. It just turns out that programmers in general don't tend to want to test those edge case conditions as well. But they certainly should be part of it. In order to convince yourself as a developer that your component works correctly, you have to check those edge conditions as well.
11:41
And you would ideally like to also test the exceptional conditions like I run out of memory or I run out of threads or whatever happens to me. So yes, I agree with you. So the guideline is to embrace automated unit testing.
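And to give a feel for what running "inside a testing framework" means without pulling a real one in, here is a toy, JUnit-flavored runner sketch; the registration mechanism and the add_ints function are invented for illustration, not any real framework's API:

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// A toy test framework: tests self-register, and the runner reports
// each failure plus an overall red/green result, like a JUnit bar.
struct Test { std::string name; std::function<bool()> body; };

static std::vector<Test>& registry() { static std::vector<Test> r; return r; }
static bool add_test(std::string name, std::function<bool()> body) {
    registry().push_back({std::move(name), std::move(body)});
    return true;
}

// Hypothetical code under test.
static int add_ints(int a, int b) { return a + b; }

static bool t1 = add_test("adds_small_ints", [] { return add_ints(2, 2) == 4; });
static bool t2 = add_test("adds_negatives",  [] { return add_ints(-2, 1) == -1; });

int main() {
    int failed = 0;
    for (const auto& t : registry()) {
        if (!t.body()) { ++failed; std::printf("FAILED: %s\n", t.name.c_str()); }
    }
    if (failed) std::printf("RED: %d test(s) failed\n", failed);
    else        std::printf("GREEN: all tests passed\n");
    return failed ? 1 : 0;  // nonzero exit lets a build script see the red bar
}
```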
12:02
Any questions about unit tests, TDD, the advantages that you can accrue from them? Yeah. I'm sorry, can you please speak louder?
12:31
Okay. So the question is, my manager comes to this meeting,
12:41
he goes, great, unit tests are wonderful, tells me I need to write unit tests for the 100,000 functions in the code base that we've been maintaining for the past 20 years. What should you do? So you heard my advice. Let me generalize your question slightly, which is, I have an existing code base, a very large code base. It wasn't developed with unit tests,
13:01
so what should our approach be to adding unit tests to an existing code base? Is that a reasonable alternative question? Okay. For things like this, I really like the approach that Michael Feathers takes in his book. He wrote a book called Working Effectively with Legacy Code,
13:20
and he has an interesting definition of legacy code. His definition of legacy code is code that has no unit tests. So if you wrote the code 30 seconds ago and it has no unit tests, that's legacy code from his perspective. And so essentially his argument is if you have code that's already an existing code base
13:42
and as far as you know it's working fine, then there's no compelling reason to go and add a bunch of unit tests to it because there's been no evidence that it's going to need to be modified. But from time to time, you're going to discover bugs in the code or you're going to find places where you'd like to do some refactoring or you're going to want to add features. I mean, you're going to go back
14:00
and you're going to revisit this code from time to time. And his argument is at the time when you are going to make some changes to the legacy code, then what you want to do is start imposing unit tests around the part of the code that you're going to change because you want to make sure that when you change something, you don't break any other functionality inadvertently.
14:20
And he describes this as you have a sea of legacy code with no tests and over time little islands begin to pop up where they have put in some unit tests. I think this is a very pragmatic, practical approach to introducing unit tests to an existing code base, which is basically you only do it at a time
14:41
when you're going to need to modify the code anyway, because once you've modified the code, you're going to have to verify that it does the right thing. So does that seem reasonable? Okay. I'm pretty sure that the book is listed in the readings, but if not, it's by Michael Feathers. It's called Working Effectively with Legacy Code.
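A small sketch of that Feathers-style move, using what his book calls characterization tests; the shipping_cost function is a hypothetical piece of legacy code:

```cpp
#include <cassert>

// Hypothetical legacy function that we now need to change. Before
// touching it, we pin down its current observable behavior with tests.
double shipping_cost(double weight_kg) {
    if (weight_kg <= 0)  return 0.0;
    if (weight_kg < 5.0) return 49.0;
    return 49.0 + (weight_kg - 5.0) * 10.0;
}

// Characterization tests: they assert what the code does today, not
// what a spec says it should do. They become one of the "islands" of
// tests in the sea of legacy code, so that the planned change can't
// silently break existing behavior.
void characterize_shipping_cost() {
    assert(shipping_cost(-1.0) == 0.0);  // current behavior, kept as-is
    assert(shipping_cost(1.0) == 49.0);
    assert(shipping_cost(7.5) == 74.0);  // 49 + 2.5 * 10
}

int main() {
    characterize_shipping_cost();  // should be green before and after the change
    return 0;
}
```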
15:08
Okay, so the next topic I want to talk about is to perform retrospectives. A retrospective is a mechanism for learning from something you've done in the past.
15:20
Basically, the idea is you're developing software, and maybe you're at a point now, for kicks we'll talk about the end of a project. The project is done, and now what you want to do is you want to take some time and step back and say, what can we learn about software development based on our most recent experience? And typically, what you're trying to figure out
15:40
is what worked well, because this is stuff that we want to make sure we're going to do again in the future. So what were some ideas that we came up with that we think are worth keeping, ideally focusing on ideas that might otherwise be forgotten? Or you also want to identify things that should be done differently. For example, what did we do that really didn't buy us anything?
16:01
Because we want to stop doing that. Or what did we do that didn't work as well as we would have liked, because we want to change the way that that works. Ideally, you want to identify what worked so we don't forget, what didn't work so we don't keep doing it, and what needs to be changed. Now, those are the technical aspects of retrospectives.
16:24
Everybody agrees on those. If you look in the agile community, everybody's talking about that kind of stuff as well, which is completely applicable. But there is an important component that also has to be taken into account, which is a social component. At the end of a large work unit, so for example, the end of a project or a major release,
16:42
in addition to having a technical artifact, you have people who worked on the technical artifact. And especially if the process of producing it was not necessarily as pleasant as it might have been. Those people need to deal with those issues in some way so it can help the participants
17:01
effectively achieve closure in one way or another. Work is a really important part of people's lives. And so if you finish a project that was very, very successful, it's kind of like having your child go off to college. And if you work on a project and it didn't go very well, it's kind of like having your child flunk college, I guess.
17:22
So that's something which needs to be addressed in some way if you want those people to continue to perform at the highest possible level in their software development. So retrospectives allow you to address not just technical issues but also social issues. Retrospectives, the purpose of them
17:42
is to lay the groundwork for an improved software process in the future. You want to improve the development process both technically and socially. Norm Kerth, who wrote a book on retrospectives, and whose treatment I'm largely basing this presentation on, calls it the single most important step in process improvement,
18:02
which may be a little bit self-serving since he wrote a book on it, but it does enable people to focus on the future and put their problems in the past. Now, retrospectives have a really good cross-disciplinary track record. They're used in athletics all the time. Why did we win the game? Why did we lose the game? The military, why did we win the war? Why did we lose the war?
18:21
Medicine, why did we lose the patient? The notion of experience reports is all about what worked and what didn't. And they're also one of the 12 Agile Manifesto principles, which says that at regular intervals, the team reflects on how to become more effective, fine-tunes and adjusts its behavior accordingly.
18:40
So there's a lot of evidence that retrospectives are a positive way to improve software development. They help through the process of learning. Now, experience is something which comes automatically. If you do some stuff, by definition you get experience. But learning is something that comes about through reflection.
19:03
Retrospectives basically force you to think about what you've been doing and whether it was effective. In many people's experience, the only time software developers really stop and think about how they produce software, about what works and what doesn't work, is during a retrospective,
19:21
because the rest of the time they're so busy trying to produce software, they just don't have time to think about it. If you don't stop from time to time and think about what you're doing and try to figure out a better way to do it, you're unlikely to change things. As they say, if you always do what you've always done, you'll always get what you've always gotten.
19:43
Retrospectives also give hidden issues a chance to surface, and they help build an institutional memory. You want to remember, okay, what worked on these projects and why? And what did not work and why not? And the justifications are particularly important because as time goes on, it may turn out that the reasons for things working
20:02
or the reasons for things not working change. And you want to have made note of why they worked or why they didn't work. Retrospectives can also help in terms of building a software development team by facilitating behavioral change.
20:22
It turns out that unresolved issues actually hinder behavioral change and people tend to embrace the practices that they helped establish. If you can explain to people why they need to do something, and especially if they feel like they had a role in making those changes, that encourages them to adopt
20:41
different kinds of behavior in the future. One manager who I talked to says retrospectives are the best tool for getting team buy-in on change. So if you can get people talking about what's working, what's not working, let's do things a little bit differently, that can help you change the way you develop software.
21:03
The time to hold retrospectives depends on what you're trying to accomplish, but logically, it's at the end of any logical work period. For example, the end of a project is a reasonable time to hold a big retrospective. You could also have a lesser retrospective at the end of a milestone and at the end of an iteration
21:21
or the end of a sprint which some of the agile methodologies advocate. The important thing is they need to be included in the schedule. If they're not included in the schedule, they're not going to take place. Nobody has a bunch of extra time. The longer the work period, the longer the retrospective that you should have.
21:41
So for example, if you had a month-long iteration, maybe a retrospective for a couple of hours would be fine. If it was a 12-month project, you might need something as long as two or three days, assuming there were a lot of people involved to really be able to figure out what worked and what did not work and how you can do better in the future.
22:01
You're not likely to get much meaty retrospection in a stand-up meeting; that's not really the setting for it. The duration is also going to depend on the size of the team, the project complexity, whether the team is distributed, a lot of factors enter into it. And the sooner after the work period, the better, because people forget things,
22:20
they focus on new tasks, that kind of stuff. So you really want to be able to get people to talk about how the project went or how the iteration went or how the milestone went while it's still fresh in their minds before you move on to do additional things. Now, I'm going to be talking about
22:41
an approach to retrospectives, but I'm going to tell you right now that it's kind of a heavyweight approach to retrospectives. There are two basic schools of thought for retrospectives these days. The heavier-weight approach, which I'll be talking about, which was described originally by Norm Kerth, is based on the assumption
23:01
that there's been a fairly large amount of work by a fairly large number of people. So it might have been a 12-month project or maybe a 15-month project, and there might have been a couple of dozen people involved, where the people involved are the programmers and the testers and the people who were doing the requirements and just everybody involved in the software. At the same time,
23:20
I recognize that a lot of retrospectives are now done in the form of agile methodologies. So you have a sprint, which maybe is going to be a couple of weeks, or maybe it's going to be a month. And as a result, you can have retrospectives much more frequently. Having said that, it is increasingly becoming recognized that even if you have a project,
23:41
let's say that lasts 12 months, and it's using some kind of an agile methodology, so every two weeks or every month, you're doing a little bit of a retrospective, it doesn't change the fact that at the end of that really long period, you need to have a meatier retrospective. So even the agile methodologies are beginning to recognize that occasionally you do need a heavier-weight retrospective,
24:01
especially at the end of a longer period of time. So like I said, I'm talking about retrospectives in general, but the focus I'm taking is a little bit more on the heavyweight versions, because I think that they don't get the attention that they deserve. The people who need to participate in the retrospective, fundamentally it's a representative of all the relevant parties involved in the project.
24:22
Norm Kerth talks a lot about what he calls the full story. The full story is essentially everything that happened that went into the process of making the software. It would include specification, development, testing, deployment, delivery, satisfaction of customers, everybody involved there. And you want to get the full story,
24:41
which means you need to get representatives from all these important parties. The more information you get, the more you can learn. And remember, the whole purpose is learning. So for example, party A can learn why party B behaved the way that they did. So you might find out that the people writing the specifications were really insistent on something
25:00
and the people doing the coding couldn't understand why they were behaving that way. At a retrospective, they should be able to find out. So possible parties would include customers, requirements analysts, developers, testers, managers. And then in the more formal retrospectives, you want to have a facilitator. And a facilitator ideally is somebody
25:21
who is skilled at working with groups of people and who is neutral and who is a trusted party by everybody who's going to be there. Because sometimes issues come out that need to be discussed and there can be some tension. Ideally, there should also be somebody to take notes, formally called a scribe.
25:41
It shouldn't be the facilitator. Their hands are busy doing something else. You don't want to lose the information. One of the most important things about a retrospective is this notion of safety. Now, the purpose of a retrospective is to figure out what worked and what didn't.
26:04
Sometimes describing what didn't work especially can potentially hurt people's feelings or can potentially lead to bad feelings of one form or another. But it's really important to find out that something did not work effectively.
26:20
As a result, it is very important that people can express themselves without fear of repercussions, because if people suppress information, you're losing part of the story. It is therefore up to the facilitator and the participants to work to maintain safety, and this leads to what Norm Kerth calls his prime directive. Everybody who is participating in a retrospective
26:42
has to agree to the following, not necessarily explicitly, but this is the philosophy you need to have, that regardless of what we discover, we understand and we truly believe that everyone did the best job they could given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.
27:00
In other words, you go into a retrospective with everybody saying, listen, I take it for granted, everybody did the best that they could. Nobody was trying to sabotage the project. And this is true even if the project was not a success, because the goal of the game is to learn, it's not to blame, and the ultimate goal is to improve things, to be constructive.
27:21
If you don't have safety, if people cannot speak freely or express their opinions without fear of repercussions, you're not going to get anything out of a retrospective. So, the phases of a retrospective, there are three of them.
27:40
First, there's some preparation prior to the meeting. You have to determine what are the objectives in our retrospective, what are we really going to be focusing on here, and then you gather some data and you gather artifacts which may be relevant. At the meeting itself, you create and you discuss the project record, so basically you talk about what happened on this project,
28:00
what did we do. You then prioritize the topics that you want to talk about during the retrospective and you analyze the ones that are most important. Because the goal is learning and you want to figure out what worked, what didn't, what needs to be changed, you need to figure out, okay, if something needs to be changed,
28:21
how is it going to be changed? So you develop some kind of an action plan and then you perform a retrospective on the retrospective. So you ask yourself what worked, what didn't work, how can we improve what we did? And then there has to be some follow through. If you have some action items, you need to make sure that they have follow through to make sure that those things are really pursued.
28:41
If you have a retrospective and come up with some ideas on action plans and nobody follows up on those, people are going to lose faith in the process of retrospectives. They're going to be less inclined to fully participate in the future. So I want to talk a little bit more about these different phases.
29:00
There's the meeting itself. It is interactive, it is participatory. It's very important that this is a meeting to try to get information to come out. This is not a presentation. So Ellen Gottesdiener calls them workshops as opposed to meetings. It has been commented that if you see somebody show up at a retrospective and the first thing they want to do is show PowerPoint slides,
29:20
that's a really bad sign. That's not what it's about; it's not supposed to be a presentation. The people who do retrospectives professionally will often incorporate what they call exercises. The exercises are designed to first establish and maintain safety,
29:40
to bring out important lessons, and to allow difficult topics to be discussed. Basically, the facilitator is setting up an environment that is most likely to lead to useful information coming out of the retrospective. Now I want to talk a little bit about this notion of exercises that can improve the retrospective.
30:02
I'm going to give you one or two sample exercises. The first one is called Timeline. Timeline is a way to combine everybody's view of the work period under review.
30:22
This is especially useful for big retrospectives. The idea here is that you have a lot of people who worked on the project, where a lot might be 12 to 15, and nobody knows everything that went on in the project. So what you do is, the first thing you do is you build the timeline. What often happens is
30:41
they'll put up a big piece of paper on a wall, or there can be a whiteboard, and then everybody gets to go up, and at particular points in time that are marked on the board or marked on the paper, you get to say when important things occurred. So it might be, well, this is when the build first succeeded. Or it might be, well, this is when we found out that the unit tests were not being run, and we thought that we had
31:01
better coverage than we did. So people write down whatever events they consider to have been meaningful during the course of the project. The meaning of a significant event is determined by the participants. They can write down whatever they want to. So if somebody wanted to write down this is when we all went to dinner at the Chinese restaurant, they get to write that down if that was significant to them.
31:23
Each event can go on an index card or a sticky note, or you just write it on the whiteboard if you want to. The events themselves get to be anonymous in many cases, because that way, if somebody comes along later and goes, oh, I see that somebody wrote that this is when somebody broke the build, the author doesn't have to say who wrote it,
31:41
but the issue was still important enough to that person to put on the timeline, and it probably needs to be discussed. After everybody has gone up and written on the timeline all the significant events that they consider to be relevant, then there is time for viewing and reflecting on all of the events.
32:01
Norm Kerth doesn't actually name this step; I call it considering. That's the time when you get to look at what everybody else said. So this is the first chance for everyone involved to look at all of the comments that have been made by everybody and potentially discover some things which they had not realized. After that comes the discussion,
32:21
which is what Norm Kerth calls mining for gold. What you're trying to identify is, okay, what worked well that we don't want to forget? Let's write that down. What should we do differently in the future? What didn't work as well? What do we need to change? And at that point, great, what should we continue discussing further in this meeting? How are we going to spend the rest of the time?
32:42
So this is Norm Kerth's timeline exercise. Again, it's most useful for large retrospectives. When the retrospective is over, you should have some concrete results.
33:01
They should include the things that worked that we don't want to forget. Maybe document those as patterns for the institutional memory. Things we want to do differently in the future. Maybe you want to add some things. Maybe you want to modify some things. Maybe you want to abandon things. Interestingly, it may turn out that some of the things that you abandon are things that you had noted in the past
33:21
were really being helpful. Something which worked in the past may not work as well in the future, either because the people are different or because circumstances have changed in one way or another. You may want to have a list of things that require additional research. We want to know why certain things occurred during the software development process.
33:41
We didn't understand. You also want to have some specific action plans. The action plans need to be specific, so now we know exactly what it is we're trying to accomplish. You only want to have as many as can reasonably be accomplished soon. A laundry list of here's 22 things we'd like to be able to change,
34:01
that's not really actionable. You want to have a comparatively small list of things that can reasonably be accomplished soon. Somebody has to accept responsibility for every one of those action items because if you have an action plan and nobody follows up on it, as I said, that's just going to demoralize people and sour them on the idea of retrospectives.
34:27
Unsurprisingly, retrospectives end with a retrospective, so the meeting ends by figuring out what worked well, what should be done differently in the future, and how we can run better retrospectives.
34:40
Now, I've already mentioned that with follow-through, it's very important that somebody follows up on those action items. I've already mentioned, too, that if the retrospective results are ignored, then participants lose faith in the retrospectives, but more importantly, things that worked may be forgotten and not done again, and things that didn't work may be forgotten and therefore done again, so you definitely want to have some follow-through here.
35:08
Recently, Norm Kerth has talked a little bit about what he calls a kickoff retrospective. Now, a retrospective is something you do at the end of a project to figure out what worked, what didn't.
35:23
A kickoff retrospective is completely different. It is used before you start the work, and it is based on the fact that what you do is you get a bunch of people together who are going to be working on a new project, so there's no history yet, and what you say is, in the future, when we look back on this project,
35:43
what was so good that we want to repeat it on future projects? So you're sort of saying, if in the future you look back on this, what do you want to be able to say, we did so well that we're going to want to repeat it? Because most people involved in software projects have worked on other software projects in the past,
36:01
they already have some experience. They know what worked. They know what didn't work, so you're essentially saying, how shall we plan this project based on your experience on what worked on previous projects? It's also kind of nice because it's an initial meeting where the people working on the project get to say, this is how we want to run this particular project.
36:21
So the guideline is to perform retrospectives. Any questions about retrospectives? Okay, so the question is,
36:42
okay, let's suppose it was a 12-month project and you put together this timeline. People aren't going to remember what happened over the course of 12 months. When you're doing the timeline, especially for a long project like that, what you would do is you would tell people in advance, look, we're going to be holding a retrospective. It's going to cover the full 12-month period. So we would expect you to review your notes,
37:02
review maybe some email records, maybe look at when things were checked in, so people actually have a chance to refresh their memory and gather some data and gather some information before the retrospective. You don't simply say, why don't you show up and let's talk about what happened 12 months ago?
37:21
I didn't so much skip over it, but I didn't mention it very much, but it had to do with, in the preparation prior to the meeting, it's the gathering of project data and artifacts. So you actually give people some warning, this is what we're going to be doing, and like I said, then they can refresh their memory about things that are relevant during that time period.
37:41
Does that help a little bit? Okay. Other questions on retrospectives?
38:18
Okay, so the question is, all right, so let's suppose we've had a fairly long project,
38:22
and maybe there were some things done at the beginning of the project that didn't go as well as it should have, but by the time the retrospective rolls around, we've forgotten about that, and so it doesn't get brought up. I mean, presumably, you're dealing with little issues as they arise during the course of developing the software anyway.
38:41
The retrospective at the end is ideally a situation where people get to say, listen, these are the things that I think are relevant in taking the entire project experience into account. So for a kind of a problem that occurred early in the project, what I would say is, either that issue had ramifications
39:01
so that at least one person at the retrospective still wants to talk about it, in which case they're going to bring it up, or it turns out that that mistake that occurred early on in the project in the long run of the whole project was not significant enough for anybody to want to bring it up at the end. So the idea,
39:21
I agree there's going to be a natural tendency to talk about little things that occurred recently, just because they're recent. But a good facilitator will try to say, listen, we're trying to focus on the project as a whole, we're trying to identify broader lessons that we can use on big projects like that. Does that help a little bit?
39:40
Okay. Yeah. Okay, so the question has to do with,
40:02
can you sort of treat retrospectives like performance evaluations? So the question is, could you treat performance reviews for individuals using sort of the same methodology as this?
40:22
You know, I'm actually going to punt on that question and say I don't know, because I haven't had to deal with performance reviews before. I focus more on software than on that kind of stuff, so I'm just going to have to answer with I don't know. I'd hate to give some misleading advice only to turn out that it was a horrible piece of advice.
40:40
Sorry about that. Okay, what I want to do is talk about some things we would talk about if there were more time. We've only got one day. One day's not a lot of time to talk about how to write better software. If I had more time,
41:00
I'd talk about things like minimizing coupling, I'd talk about things like ensuring that inheritance corresponds to substitutability, I would talk about things like how defect cause analysis can fuel defect prevention, I'd talk about sweating the small stuff. I mean, there's a lot more that we could talk about.
41:24
I would talk about performing usability tests. Great book by Steve Krug called Rocket Surgery Made Easy, which I'll give you a reference to, which talks about that. But we've only got a limited amount of time, so instead what I want to do is I want to talk about you.
41:41
I want to talk about the people in this room, and I want to tell you, you're all special, everybody's special, but you're not that special. I mentioned this morning at the beginning that by the end of the day, my suspicion was that most of you are going to say, okay, well, I saw some things that were new, but there was also some stuff I had seen before. My experience is that most developers recognize
42:02
that the guidelines are usually valid, but at the same time, they say, you know, you're right, we should do those things, but we can't. And the reason we can't is our schedule is too aggressive, our performance requirements, they're too great, or our memory constraints are demanding, our platform's weird. Everybody's got an excuse.
42:23
I've been doing this for a while, a couple of decades, and experience has taught me and taught my clients that the guidelines apply even if 32 bits of address space is too small, even if the technology they're working on is changing really, really quickly. I have worked with many companies
42:41
who have told me, well, this code will never have to port. That's because they have their own custom hardware, they have their own custom operating system. Aside from the fact that the code always has to port, it turns out that the guidelines that we're talking about here apply. A significant new version of the software has to be released every year.
43:02
Think about video game manufacturers who either are trying to release a video game in time for Christmas or writing a sports franchise and it needs to come out at the time the season begins, because Christmas is not going to slip and neither is the beginning of soccer season. So people have to deal with those kinds of things. I have worked with people where program runs
43:21
extend for months on the fastest available hardware. It was an eye-opening experience for me to work at one of the research labs for a while where they routinely talk about how many CPU months their programs take. The thing is, following the guidelines is not that hard. It's just kind of inconvenient.
43:41
Let's face it, specification-free hacking, that's really convenient. The freedom, I can do whatever I want. I have no specification. Quick and dirty interfaces are really convenient. Copy and paste, there's a reason why it's a single key. It's so convenient. Skipping configuration of lint-like tools, very convenient. Avoiding retrospection, really convenient.
44:02
These things are all convenient. Bug reports, inconvenient. Whack-a-mole debugging, inconvenient. Working with incomprehensible code, that's inconvenient and unpleasant. Slip schedules are inconvenient.
44:21
Unhappy customers are inconvenient, although in fairness, let's be honest, customers are inconvenient. Making the same mistakes on every project is inconvenient; it's really frustrating to have to keep making the same mistakes over and over. The inability to add simple new features is just plain embarrassing.
44:41
I mean, really, somebody asks you for something simple and you just can't do it. Convenience is not a good excuse for poor software development practices. Some things aren't as convenient as we would like, but there's good reasons why they're less convenient.
45:01
Now, the principles behind what I'm talking about are fundamentally universal. They apply almost all the time. So here's a fundamental principle. Think first, do second. That's what specifications and TDD are all about. Figuring out what you want to do before you do it. Prevent errors instead of making them
45:21
and then fixing them later, which is what motivates good interface design, the aggressive use of static analysis, avoiding invisible keyholes, that kind of thing. Retain flexibility. This is why internal quality and unit tests are really important. They preserve flexibility to change things.
45:41
And improve what you do based on your experience rather than just doing the same thing all the time. That's what retrospectives are about. I mean, these are pretty fundamental principles. So the guideline is that you should remember, I mean, you're special, but you're not so special that the guidelines don't apply to you.
46:03
And if we summarize the day, this is what we come up with. Software quality is a global optimization problem that's based on both external and internal characteristics. So we have to care about internal and external quality
46:20
and its global optimization. And management of programmer discretion is critical to software quality, because programmers have a lot of decision-making ability that they are going to exercise. The guidelines I talked about were, number one, to insist on a useful specification. We talked about how that can be formalized into things like unit tests or design by contract.
46:42
I spent a lot of time talking about making interfaces easy to use correctly and hard to use incorrectly. We talked about the importance of static analysis, both by machine and by humans. I talked about avoiding the introduction of keyholes, unjustifiable constraints. We talked about minimizing duplication
47:01
of both source and object code. We talked about embracing automated unit testing and finally performing retrospectives. And then I tried to convince you that really, I'm talking to you, not talking to anybody else, just you guys. We talked about a lot of different topics here.
47:22
So general information about quality code: Steve McConnell's book I referred to a couple of times, Karl Wiegers's book. I'm not going to read this to you; it goes on for many pages. Design by contract and assertions. Interface and API design, we talked about that.
47:42
User interface design. Template metaprogramming, that's for the C++ people in the crowd. Everybody else, stay away. A lot of references for static analysis. Static analysis by machines, more static analysis by machines, still more static analysis by machines,
48:02
by machines for dynamic languages, dealing with lint output, static analysis of object code, static analysis by humans. I like static analysis. More static analysis by humans, there's a lot that you can read. Keyholes, there's not much on that;
48:22
what there is, is by me. At this location here you're going to find the draft chapters of the book that I was going to work on. It's an abandoned project now, but I still feel, as you can probably tell, fairly strongly about the topic. This is about duplication, both source code duplication
48:41
and object code duplication. The Pragmatic Programmer, this book here, is what popularized the DRY principle. This is some information on data-oriented programming, unit testing and test-driven development, some information there,
49:01
some more information there, some information on testing concurrent programs, some stuff on refactoring, and some information on retro, I told you there were a lot of topics, some information on retrospectives, some more information on retrospectives, and some information on usability testing.
49:23
Just out of curiosity, how many people have been here the entire day? So I feel badly for you. I mean, you wasted the whole day. But I have a little something for you. If you would like a PDF copy of the handout, so all the slides that I showed here,
49:42
then send me some email. That's my email address. I hope I spelled it correctly. Basically, all you have to do is say, you said that if I sent you email asking for the handouts, then you would send me the handouts, and if you send me that mail, I will send you the handouts. So I felt badly that,
50:02
I actually thought the conference was going to make them available to you anyway, so there was just kind of a misunderstanding there. Any questions about anything to do with any of the topics that we talked about today? Yes.
50:28
Okay, so the question is, do I have any data or any measures on how much better your software is going to be if you do all of the things that we talked about here? The short answer is no. The short answer is no,
50:41
but there is data on certain aspects. For example, you can find empirical data about the percentage of defects that can be identified by static analysis, both by machines and by human beings. If you want to read about useful specifications, there's a lot of literature about the importance of improving specifications and stuff like that,
51:00
so each of the individual topics, if you look up, except for keyholes, you're going to almost certainly find some empirical data, so that's the best I can offer you there. Alrighty, so that is the presentation. Thank you very much for spending the day with me. Given that there were seven competing tracks and so many people spent the entire day here, I am truly honored, so thank you very much.
51:22
I'd like to see your evaluations, so please choose your colored cards and drop them in the bin in the back. Thank you very much. Thank you.