The art of giving and receiving code reviews
Formal Metadata

Title: The art of giving and receiving code reviews
Number of Parts: 60
License: CC Attribution 3.0 Germany: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifiers: 10.5446/42476 (DOI)
Transcript: English (auto-generated)
00:01
Hi, yeah, my name is Alex. I'm at Imperial, in the Global Infectious Disease Analysis centre, and I work in a small team of RSEs called RESIDE; that's Research Software for Infectious Disease Epidemiology. Just before I get started, could I see by a show of hands who here is regularly reviewing code and having their code reviewed? Okay, awesome, lots of people, so I'm hoping for lots of feedback from you guys,
00:24
whether my experiences map onto yours, and any sort of questions or thoughts that you have about the ideas that I've got to present. So first of all, some motivation for doing code review, although it looks like I don't need to sell it too hard to this audience. Code review is obviously great for defect finding. These are just a few illustrative examples taken from Code Complete by Steve McConnell,
00:44
just to give an idea of the magnitude of the effect of code reviewing on defect finding. One example saw a change from 55% of one-line changes being an error down to a 2% rate of changes being an error after code reviews were introduced.
01:00
Another study saw errors reduced from 4.5 errors per hundred lines of code to 0.82 errors per hundred lines of code, and a final example, another study, showed a 90% decrease in defects. All of these serve to illustrate the idea that code reviews are very good at finding defects. And of course automated tests are also good at finding defects, and code reviews are in no way a replacement for them,
01:23
but they are a complement to them. And especially for the RSE community: I know that there were several talks about the challenges of testing scientific software in the previous session, so just to make an additional note that code review is especially useful when automated tests are
01:40
of limited value, or you're still struggling to implement meaningful automated tests across all of your code, because all code can be reviewed by a human regardless of how amenable it is to automated testing. But moving on to some less quantifiable benefits of code review: one thing, of course, is learning opportunities. This can happen at any stage of your career,
02:02
whether you're having your code reviewed and getting useful comments back on it, or you're reviewing someone else's code and picking up on patterns that you didn't know about, or perhaps you're reviewing code in a language that you haven't been using for a long time and you discover new functions and new libraries. The learning opportunities of code reviews are really important. Then there's this third point, sustainability, which we've heard a bit about today already,
02:23
and I think it's really important to the RSE community. Code reviews, I would say, are absolutely essential for making sure that code bases are sustainable, for several reasons. Obviously, they increase the bus factor, in other words, how many people are responsible for this project? Is it a sort of code base that's owned by a single person?
02:44
Well, having someone else reviewing that code ensures that at least two people, hopefully many more, have eyes on that code and have some familiarity with that code base. Code reviews ensure that code is readable: someone else has to be able to read and understand that code, so again this speaks to the transparency point that was talked about in Alice's keynote this morning. And performance and scalability,
03:04
thinking about the future of the code. Well, humans are very good at picking up on these things, in perhaps a way that automated tests are not: thinking about what the future of this code base is, whether this is code that could be extended, whether this is code that will perform well under different conditions. And for just general maintenance of code standards,
03:21
enforcement of best practice and code architecture patterns within the team. So, some practical advice. One golden rule of pull requests, I think, is to keep change sets small: small enough that they can be meaningfully reviewed. This is a graph taken from a study done by IBM, and this one is looking specifically at defect density,
03:42
so the number of defects that were found per line of code under review, and it found that they drop off quite significantly after, well, certainly after 400 lines of code, but there are sort of declining returns earlier than that. So the rule that they drew from this is: keep change sets under 400 lines of code. I'd say this is true not only for defects, but also for those less quantifiable things we were talking about:
04:04
sustainability, readability, things like that. And the lines-of-code thing is perhaps just a proxy for change set size. So as well as having a small number of changes, really the pull request should be conceptually distinct: you want reviewers to be looking at something that is a small unit of work that can easily be understood, digested and meaningfully
04:26
reviewed. Relatedly, this is a graph about pace: don't review too much too quickly. IBM also found that 500 lines of code per hour was a sort of upper bound for how fast you should be reviewing things.
04:40
And relatedly, this really comes back to the previous point about change set size, so keeping pull requests small, but also review fatigue: you can't spend too long reviewing code in a single session, so maybe an hour is probably a reasonable upper bound for that as well. We do in my group allocate a significant portion of our time to reviewing code.
05:03
I'd say maybe as much as 15% of my time in a day goes on reviewing code, but certainly not for more than an hour at a time. Okay, so a little bit about a culture which enables productive code review. I think in an ideal environment, and we're talking, you know, the kind of environment
05:21
you'd maybe see if you're working for a big software organization: you would have no siloed code bases, in other words no code base that's isolated with one person who's primarily responsible for it. You'd have developers moving between code bases without too much specialization and too much ownership of a single code base. You would have a culture of review in which
05:41
review is enforced for every line of code: every line of code is under review, and every person has to engage in the code review process as well. And maybe you'd have many other collaborative practices within the team too, perhaps doing a lot of pair programming, or at the very least collaborative planning of the issues that are going to be implemented.
06:01
All of these things make reviewing much easier. So this is an environment in which reviewers can conduct really meaningful reviews of their peers' work. But I know that in the RSE community there might be some practical barriers to achieving this kind of collaborative workflow, not least if you are working embedded within a research team rather than within an RSE team.
06:25
So I have a few ideas for sort of practical moves towards this type of culture. One thing is just making sure that channels of communication are open. Even if you're working by yourself, perhaps you have access to a wider RSE community, or to other people working at least in the same language as you within your department
06:43
or within your group. Keeping open those lines of communication at least establishes opportunities for people to review each other's code, maybe in some kind of quid pro quo arrangement where you'll be prepared to review someone else's code if they'll be prepared to review yours, even if you have quite high levels of ownership and some degree of siloing of those code bases.
07:05
So this is my second point as well: everyone is a potential reviewer, and everyone should be prepared to review unfamiliar code. Ideally a reviewer is well versed in the code base that they're reviewing, or at the very least well versed in the language of the code.
07:20
But minimally, I think any review is better than no review. And certainly within our group, in the absence of someone who meets those standards to review code, actually having a review from someone else in the group who has some level of familiarity with the language you're coding in has still been really, really useful: in terms of finding defects, in terms of improving architecture, and actually improving knowledge as well,
07:45
sort of knowledge sharing within the group, so that people are skilled up to then be able to contribute really meaningfully to other people's code and offer helpful suggestions. Another thing we've actually done recently is to start inviting anyone in the department, anyone who's writing code, to come to a weekly stand-up with us. That again is just another channel of communication where someone can say, oh,
08:05
I'm writing this library in R, but no one else has looked at it except me, and perhaps someone can volunteer to at least do some kind of cursory review of that code. This, I think, comes in particularly when you're talking about these types of environments where reviewers might not be
08:23
extremely familiar with the code bases that they're reviewing: you can help reviewers know what to look for. So at the very least, an author should always annotate their pull request: write a description of exactly what the code is trying to achieve. And also review your own code first: go through your pull request, and you can annotate where you would particularly like attention. This is going to be really helpful if the reviewer
08:45
maybe isn't a specialist in this particular area. You can direct the reviewer's attention to things that you're not sure of, or that you'd like a second opinion on. And we've also had some success with checklists: in GitHub you can actually create pull request templates, which makes this easier.
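To make the template idea concrete, here is a minimal sketch of what such a file could contain. GitHub pre-fills new pull request descriptions from a file named `.github/pull_request_template.md`; the checklist items below are illustrative examples only, not ones prescribed in the talk.

```markdown
<!-- .github/pull_request_template.md -->
## What does this change do?

<!-- One or two sentences, plus a link to the issue it addresses. -->

## Author checklist

- [ ] Change set is small and conceptually distinct (roughly under 400 lines)
- [ ] I have read through and annotated my own diff first
- [ ] Tests added or updated where practical
- [ ] Linter and spell checker pass locally

## Where I would particularly like a second opinion

<!-- Direct the reviewer's attention here. -->
```

If different projects in one repository need different checklists, GitHub also supports multiple templates under a `.github/PULL_REQUEST_TEMPLATE/` directory.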
09:01
So, predefined checklists for certain projects, just to make sure that reviewers are quite literally ticking the boxes and looking at some of the key things that you want to be picked up during code review. These can obviously be tailored to the individual projects. Okay, so those were some practicalities, and
09:22
I'd like to now move on to the subject of the psychology of code reviews. To some extent everything I've been talking about is the psychology of code reviews, but what I mean is this: a code review is essentially a critique of someone's work, and if someone is invested in their work, which I assume most of us are, that's necessarily going to feel personal to them.
09:41
We had a quote actually in the previous session; someone said, software is easy, but people are hard, and this really speaks to the people side of code reviews, the actual dialogue that takes place when reviewing code. In a highly scientific and representative survey of my Twitter followers, I was just curious to ask people how often they feel defensive when they're having their own code reviewed.
10:07
Yeah, I suspect that everyone is familiar with some feelings of defensiveness when their work is under critique. I think that's just normal; that's just human. So, bearing that in mind, I would say that not all code review comments are created equal.
10:25
These are two axes: a low-reward to high-reward axis, and a low-conflict to high-conflict potential axis. And I think it can be helpful, when you're making a comment on a review, to think about where the comment fits on this broad graph.
10:41
This top-left quadrant, the high-conflict, low-reward one: these are things that I think could be characterized as pedantry, or at least perceived as pedantic by the author. So typos, whitespace,
11:01
arbitrary preferences where there are two equally good ways of doing something. I don't want to spend too much time on this quadrant, other than to flag up the fact that there are tools that can automate away this type of conflict, right? You have style guidelines and you have linters: you can run all of your code through a linter and through a spell checker as part of your continuous integration process.
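As an illustrative sketch of wiring that into continuous integration, here is a minimal GitHub Actions job; the choice of GitHub Actions, flake8 and codespell is my own assumption for the example, not something prescribed in the talk.

```yaml
# .github/workflows/lint.yml: run style and spelling checks on every pull request
name: lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Whitespace and style nits get caught mechanically, not by the reviewer
      - run: pip install flake8 codespell
      - run: flake8 .
      # Spell-check comments and docs too, so typos never reach the review
      - run: codespell
```

With a job like this failing the build, most of the pedantic quadrant disappears from human review.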
11:22
That way, by the time the review lands on the reviewer's desk, all of these types of things that can be perceived as pedantic by the author will have been caught and corrected by the authors themselves. In the bottom two quadrants, the ones I've characterized as low conflict,
11:40
these are things that could be broadly described as factual, so this somewhat comes back to the idea of defects or bugs in the code. These are obviously on a spectrum of low reward to high reward; some things are very minor and some things are very critical. But I would say broadly these are going to be fairly low conflict, precisely because the author doesn't feel a high degree of ownership over these things, right? These were accidents,
12:03
these were oversights; these weren't things that the author identifies with strongly. And that leaves us with the remaining top quadrant. This is the quadrant where I think code reviews really come into their own, right? This is things like: is this code over-engineered, or is it just the perfect level of abstraction?
12:22
What design pattern choices have been made here? Is this code really readable, or what trade-offs have been made, readability versus performance, say? And these, I think, speak back to that point I made at the beginning about code reviews being great for code sustainability,
12:40
because these are the types of things that really play into the transparency, the performance, the scalability, the future-proofing of the code. And I think these are things about which two qualified developers can reasonably disagree, and it could be really productive to collaborate and talk about them. But how do we induce people to collaborate? I
13:02
find it useful to know a little bit about conflict resolution archetypes, because I think these map quite neatly onto ways in which I've seen code reviews be dysfunctional. So one way in which code reviews can fail to be highly useful and highly functional is if they turn into hotbeds of competition and aggressive conflict, and if you're seeing something like that,
13:25
it could be that one or some of the participants involved have a competing style of conflict resolution. On the other end of the spectrum, there are pull request processes, code review processes, that really seem just like a formality, where code is technically being reviewed but no comments are coming back on it, or you have people who just opt out of the process:
13:45
they don't really want to put their code up for review, or perhaps they don't want to be a reviewer. I think this is where we're seeing people who are perhaps avoidant, or yielding, when faced with a conflict, and what we really want to do is encourage people to collaborate. Common wisdom on this is that there are two modes for moving people towards that collaboration quadrant: one is to lower defensiveness, and
14:08
one is to raise confidence. So with those two things in mind, some tips, mainly about language and the approach you can take if you are the reviewer on someone's code. One idea is that you can raise the grade of the code:
14:25
if you think of code like a school assignment, graded A to F, you can raise code by a grade or two, but don't go overboard. So you can make a C into a B-plus, or maybe a B into an A, but if you're trying to drag code that's like a D or an E up to an A,
14:40
that's going to be a very painful process for everyone involved, especially the author. There might be other ways to address low-quality code that are not code reviews, right? Extra training, or mentoring, or pair programming, or what have you; but just a deluge of code review comments is not going to be helpful. Language choices: using "we" instead of "you" is an incredibly simple trick, but it really works.
15:07
It fosters the idea that this is a collaborative effort, which it is, right? These code bases are collaborative efforts, or they should be, and it shouldn't be accusative. So a couple of examples: "you should have reused this function" can easily become "we could reuse this function",
15:21
which also feeds into the third point, which is that there's no need to phrase things too directly either: you can also ask things as questions. There are a couple of reasons for this. One is just that people often respond better to questions than to demands, but the other is that you actually might be wrong as the reviewer, right? You might have overlooked something, so it's good to be circumspect.
15:40
So ask, you know, "could we reuse this other function here?", or "how has this changed from the previous implementation? What are the advantages of this approach?" And of course, give some positive feedback. It's so simple, but everyone responds well to positive feedback, and it's really easy to find something good to say about code. At the very least,
16:02
you can just say thank you to someone for their hard work, but often there's something much more constructive and substantive to point out that's positive about the code. On to being an author. Obviously you don't have control over a reviewer's comments, but you still have control over how you respond as an author. One technique that I sometimes practice,
16:22
which is a little bit challenging, it requires some self-control, but it's quite useful, is the as-if technique. If you find yourself responding negatively or defensively to a comment, you ask yourself: well, how would I respond if something were different about this situation? How would I respond if I was in a better mood,
16:41
or I was having a better day? Or how would I respond if this comment had actually been made on someone else's code and I wasn't the author? How would I respond if this had been phrased more sensitively, if perhaps it hasn't been? Another really simple thing: say thank you to the reviewer. This builds a good relationship between author and reviewer, and it's actually an implementation of the first technique, because by practicing gratitude, or just behaving in a way that shows gratitude,
17:05
you actually start to feel more gratitude. And make sure you annotate your pull request first. It's a really simple practical point, but if you go through and explain your choices, you're much more likely to have a constructive dialogue with the reviewer. You can also solicit feedback from a fairly unresponsive reviewer: if you're having
17:23
trouble getting out of the review what you want, you can solicit feedback with specific questions, directing the reviewer's attention to what you want. Okay, I'm going to wrap up and ask for questions and feedback from the audience. You can reach me on Twitter, and I have a blog post which goes into some of these ideas in more depth, which is linked from my Twitter.
17:43
We also have a blog at RESIDE on which we've blogged about our pull request protocol, so if anyone's interested in that, I'll be sure to tweet it out later. And with that, I'll open to the floor. Thanks.
18:03
Are there any questions at the back?
18:25
So the question, for those listening, was: what do you do if you actually get code that could reasonably be graded a D? Is that acceptable, and what do you do, do you just throw it back and say, sorry, we can't merge this? I think it depends on the context, right? If this is something that's going into production
18:40
and it's just subpar, then yeah, you're not going to be able to merge that code. I guess it depends what you mean by D as well, right? So I presume you mean: what do you do if you have really subpar code that can't be merged into production? Yeah, unfortunately, I really think the code review is not the place to address that. I think
19:01
there may be parts of the code that could be factored out and could be acceptable. So that's a great thing to suggest: if you think that some part of it was acceptable, you could suggest that that be factored out, reviewed and merged, and that's a small win for the author that will make them feel better about the rest of it. Aside from that, I think you'll just have to establish whether it was because of a
19:23
misunderstanding of the requirements, in which case maybe the author could just go away and have another go, or whether there's really a deficit in capacity there, in which case, yeah, you have to start again, and probably pairing with that person is going to be the right thing at that point. Thanks. There's another question here.
19:41
So in the beginning you showed those plots with the effect on the defects in the code. Do you know how those defects are measured? Actually, I haven't dug really deep into that IBM study, sorry, I can't tell you, although I would be interested to. There are a lot more studies like that, and I know different institutions
20:02
mean something slightly different by defect, so the sort of comparison between those studies would, I think, be really interesting to do, and I haven't had time to really dig into that too much. Yeah, a chance for another question?
20:29
So the question was: when software is undergoing a really substantial refactor, how do you keep to this limit of 400 lines of code per change? I mean, that's a huge subject; I think there's room for a whole other talk just on how to factor out small change sets.
20:44
But I think it's almost always possible, is my really short answer. There are many ways to slice up a code base, you know, horizontally, vertically. You can use feature switches, so that you're making a small change but it's not getting released into production: you're starting from scratch with a new class while keeping the old one side by side,
21:06
and using some kind of switch to switch them out. So there are a lot of techniques out there for refactoring, and I'll admit that there's some more upfront time and work in making those change sets that small, but I'm a big proponent of that being absolutely the way to do it, and of it being a false economy to think that making a large change at once is the way to go.
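The feature-switch idea from that last answer can be sketched in a few lines. Here the new implementation is merged alongside the old one, and a hypothetical flag, an environment variable called USE_NEW_TOTAL, decides which one runs, so each step of the refactor stays a small, reviewable change set:

```python
import os

def total_old(xs):
    """Existing implementation, left in place while the refactor lands."""
    t = 0
    for x in xs:
        t += x
    return t

def total_new(xs):
    """New implementation, merged early but switched off by default."""
    return sum(xs)

def total(xs):
    # USE_NEW_TOTAL is a hypothetical flag: flip it on only once the new
    # path has been reviewed and exercised in production.
    if os.environ.get("USE_NEW_TOTAL") == "1":
        return total_new(xs)
    return total_old(xs)
```

Each of the three steps, adding the new code behind the switch, flipping the switch, and finally deleting the old path along with the switch, can then be its own small pull request.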