Implementing a Strong Code-Review Culture
Formal Metadata

Title: Implementing a Strong Code-Review Culture
Title of Series: RailsConf 2015
Part Number: 18
Number of Parts: 94
Author: Derek Prior
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.
Identifiers: 10.5446/30672 (DOI)
Language: English
RailsConf 2015, talk 18 of 94
Transcript: English (auto-generated)
00:12
Good afternoon, RailsConf. I hope you're all feeling great after lunch. Thanks so much for coming; I'm very excited to be here. My name's Derek Prior, I'm a developer with Thoughtbot,
00:21
and I'm here today to talk to you about code reviews, doing them well, and what it means for the culture of your team when you're the type of place that does them well. So, let's start with a show of hands, just so I can get an idea of where everybody's at. Great. How many of you are doing code reviews as a regular part of your day, every day already?
00:42
And how many of you really enjoy doing them? Okay, a few less. And how many of you do them because you feel like you have to? It's about equal, okay? All the people who said they really do enjoy them also said they do them because they have to. Okay.
01:00
So, why is it that we do reviews in the first place? This is pretty easy, right? It's to catch bugs. All right, we're gonna have somebody look at every individual line of code, and we're gonna find what's wrong with it. Not really. That's not why it's interesting, right?
01:20
I've been doing code reviews for over 10 years now. I used to hate them; they were the worst part of my day, right? We did them just for compliance documentation at one of my former jobs. But now I think that code reviews are one of the primary ways that I get better at my job every single day. So,
01:42
yes. We're gonna have fewer bugs in code that's been peer reviewed than in code that has not been peer reviewed. But studies on this show that the level of QA that we get out of code review doesn't meet the level of expectation that we have as developers and managers. So, that is, we think by doing code review
02:02
we're getting this much QA, when in reality we're getting somewhere down here. So, why is that? Well, the reason is when we do code reviews, we're looking at a slice of a change. We're looking at the git diff, essentially. And we can catch some actual issues or problems where you might be calling a method on nil,
02:21
but we can't catch the really heinous stuff, which happens when your whole system interacts and corrupts your data. So, code review is good for some level of bug catching, but it's not the end-all, be-all, right? So, what are they good for? I already told you that I think that code reviews make me better every day, and I want you all to feel the same way.
02:42
Well, in 2013, Microsoft and the University of Lugano in Switzerland came out with a study, "Expectations, Outcomes, and Challenges of Modern Code Review." In it, they looked at various teams across Microsoft, which is a huge organization. They have many teams with
03:01
senior developers, junior developers, managers, everybody, all working on different products. And they surveyed all these people to ask them: what is it you get out of code review? What do you like about it? What don't you like about it? When they were done surveying them, they watched them do code reviews and asked them questions afterwards. And finally, they looked at all of the feedback
03:20
that was given, every comment that was logged in their code review system. And they manually classified it. They said, this one has to do with a bug that they found. This one has to do with a style comment. This one has to do with a possible alternative solution. So, after doing all this work, what they found was that people consistently ranked finding bugs very high as a reason for doing code review.
03:45
But in the end, it was actually a lot less about finding bugs than anyone thought. The chief benefits they saw from code review were knowledge transfer, increased team awareness, and finding alternative solutions to problems. Right, that sounds a lot more interesting to me
04:01
than hunting through code looking for bugs. Through this process, we can improve our team. One person involved in the study said this: code review is the discipline of explaining your code to your peers that drives a higher standard of coding. I think the process is even more important than the result. I really do like that last part, even though it's not on the slide, right? The process is more important than the result.
04:22
We're gonna talk about that today: just by going through the process of code review in the right way, we're gonna be improving our team, regardless of the actual results that we're seeing on any individual change. But I also like the definition here, right? The discipline of explaining your code to your peers. I tweak it just a little bit to say that code review
04:41
is the discipline of discussing your code with your peers, rather than trying to explain it. Code review is one of the few chances we get to have a black and white conversation about a particular change, right? We often talk in abstractions, like when we come to conferences like this, we talk in large abstractions about things, and those are really comfortable conversations. In code review, we have to get down
05:00
to the implementation details and talk about how we're gonna apply these things. So like so much else that we do, it's a communication issue. If we get better at improving code reviews, then what we're really doing is improving the technical communication on our team. So the study also found that those benefits I cited earlier were particularly true
05:23
for teams that had a strong code review culture. So what does it mean to have a strong code review culture? To me, it means that your entire team has to embrace the process, right? Everybody has to be involved. It can't be something that the senior developers do for the junior developers. It's a discussion.
05:41
That's what we're after here. So as I mentioned earlier, I've been doing code reviews for over 10 years now, right? But only in the last two to three have I started to see a real improvement in what I'm getting out of them and in myself because of them. And so why is that? I think it's because I'm part of a strong code review culture at Thoughtbot.
06:02
And at Thoughtbot, we often go into clients of various sizes to help them with issues that they're having, right? And nobody ever comes to us and says, I really need help improving the code review in my team. That never happens. But when we get on the ground with those teams, we often find that there isn't a strong code review culture. So one of the challenges we have
06:21
is how do we get people to have this culture around code reviews? And there are a lot of little rules that we suggest following in our guides. But if you look at them, you can really boil them down to two key rules of engagement, two things that you can do today to start getting better at code reviews with your team.
06:40
The first of them is something that you can do as an author. The second is something that you can do as a reviewer when you're providing feedback. So first, as an author, what are we gonna do to make sure that our code reviews get off on the right foot?
07:06
So this quote here is from Gary Vaynerchuk and he was talking about social media marketing or something, not interesting. But it's also applicable to code reviews, believe it or not. So in a code review, your content is the code,
07:21
it's the diff, it's what you've changed. Right? The context is why that's changed. Your code doesn't exist for its own benefit. It solves a problem. So the context is what problem is it solving? That study I cited earlier found that insufficient context is one of the chief impediments to a quality review.
07:41
So as an author, we need to know this and get out in front of it. We're gonna provide excellent context around our changes. So let's have a look at a real-world example. This is a commit or a pull request title that you might see come across your email, right? "Type column first in multi-column indexes." The specifics here aren't particularly important
08:02
if you don't understand what this means. The problem here is that there's no why. So I can look at this change and I can say, well, yeah, this changes the order of the indexes or the order of columns on multi-column indexes. But I can't tell you if that was the best solution to the problem you're solving. I can't really learn anything from it.
08:22
So this is a loss. It's not interesting for me to review and it's just gonna get a thumbs up and move on, right? Actually, what I would do with a change like this is comment and say, can you provide some more context here, right? And a lot of times what we find happens is somebody updates the pull request or adds a comment to say something like this.
08:42
Right? So, okay, so I guess you guys think that's not better, which is true. That's not better. So this is probably a link to GitHub or maybe it's a link to JIRA or whatever your issue tracker is, right? And a lot of people do this. Or they'll even, they'll provide like a short explanation and say, for more detail, see this ticket.
09:01
But what you're doing there is you're making the reviewer hunt for this context. If they click this issue, are they gonna see a good description of what you're doing? Probably not, right? They're gonna see like a report of a problem, maybe some discussion back and forth about how to solve it, and then maybe a link to this pull request or something. But they've gotta go through all this and they've gotta hunt for that context
09:21
and put it together until they're in the right frame of mind to review the change and to tell you whether or not it's the right solution. So here's an improvement, right? It's not important to read this again. So what we're doing is identifying the problem with index ordering. That's what we do first. Why is it a problem? We're gonna back it up with some Postgres docs,
09:42
and those link off to more information if you need it. And because this particular change was a change to Rails, and we need to be concerned with multiple adapters, we're also going to back it up with some MySQL documentation. And finally, we're gonna talk about why this is the best solution. So that's a lot of context, right?
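To make the index-ordering example concrete, here is a minimal sketch of what a change like this might look like. It is not the actual change under review; the table name, index names, and the stubbed `execute` are all assumptions made so the sketch runs standalone:

```ruby
# A sketch of the kind of change under review here: reordering the columns
# in a multi-column index so the column most often queried on its own comes
# first. `execute` is stubbed to collect SQL so the sketch runs standalone;
# in a real Rails migration it would run against the database.
class ReorderTaggingsIndex
  attr_reader :statements

  def initialize
    @statements = []
  end

  # Stand-in for ActiveRecord's `execute`: records the SQL instead of running it.
  def execute(sql)
    statements << sql
  end

  def up
    execute "DROP INDEX index_taggings_on_taggable_id_and_type"
    # Postgres and MySQL can use a multi-column index for queries that filter
    # on a leftmost prefix of its columns, so `taggable_type` goes first.
    execute "CREATE INDEX index_taggings_on_taggable_type_and_id " \
            "ON taggings (taggable_type, taggable_id)"
  end
end
```

The point of the context in the pull request is exactly the comment above: both databases can serve queries from a leftmost prefix of a multi-column index, so whichever column gets queried alone belongs first.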
10:00
But it's important to note that now, as somebody who's coming along to review this, I know why this change is being made, and maybe I've even learned something about how multi-column indexes work by reading through some of this documentation, right? So there's value for me to review this. So as an author, we're gonna provide sufficient context. What we're trying to do here,
10:21
you've been working on this change for four hours, two days, however long it took you to fix this, right? What you need to do is bring the reviewer up to speed. Let them know what you learned in this process, and put them on equal footing with you. So I'd challenge you to provide two paragraphs of context with every change you make. At first it's gonna be really painful, and there are some changes that it's torturous to describe with two paragraphs.
10:43
And yes, it's going to take you more time. But how much more time? I don't know, like five minutes maybe, right? And it avoids that whole round of questioning I described before, like why are you making this change, right? So we've headed that off. And the extra bonus is that all of that context
11:01
we saw earlier gets to live on in the commit. We're gonna squash that, and we're gonna save that. So rather than that one we saw before that had an issue, a link to JIRA or a link to GitHub, right, that's gonna go away as soon as we stop paying that bill. But our Git history is gonna stay with us, and we're gonna see that there. Okay, so that's what we can do as an author.
11:21
What about as a reviewer? What can we do to make it so our feedback is well received? I like to call this ask, don't tell. So to start off with, it's important to note
11:41
that research has shown there's a negativity bias in written communication. So that is to say if I have a conversation with you face to face, and I give you some feedback, or maybe even over the phone, just in the place where you can hear the tone of my voice, hear the inflection, you're gonna perceive that one way. If I give you that same feedback written, like in the form of a pull request review,
12:02
it's gonna be perceived much more negatively. So it's important that we're cognizant of this negativity bias in our written communication. Right, we have to overcome this if we want our feedback to be taken in the right way. One way to do that is to offer compliments. And I would suggest that you do that. So if you find something in a pull request
12:21
that is something you learned, or something you think has been done particularly well, those are great to call out, right? It lets everybody know that, like, yes, you've taught me something, that's great, thank you very much. Rather than just always nitpicking a change. But there's gonna come a time when you need to provide critical feedback. And the best thing to do here
12:40
is to ask questions rather than make demands. So I've noticed that when I remember to do this, like even on the same day, if I'm looking at two different pull requests, and in one change I remember that I should be asking questions, and in the next change I just make commands, I can guarantee which one's gonna have better technical discussion in it, which one's gonna be more satisfying
13:01
for the entire team, right? It's gonna be the one where I try to engage in conversation rather than dictating what somebody should do. So let's take a look at another example. So here's an example. Extract a service to reduce some of this duplication. So this is a command. There's no discussion here. I haven't opened anything up.
13:21
So if I'm the original author of this change, my option here is to either do it, or enter into what seems like an argument, or I can just ignore you, right? And the ignoring is probably the worst thing you can do. I'd rather see you argue. And another important thing to note is
13:41
this comment here from the reviewer gives the author no credit for maybe having thought of already extracting the service. Maybe they ruled it out for some reason. They're waiting for more information so they can make the proper abstraction here. So how can we improve this? What do you think about extracting a service to reduce some of this duplication?
14:02
All we've done is formulate it as a question. But there's all sorts of different ways this can go now, right? We've opened up a conversation. So one of the more likely ways is, yeah, you're right, I can eliminate some duplication by extracting the service, thanks a lot. And I type that back to you. I say, great, thanks, that's great feedback.
14:22
Fixed in this commit that I added. And now you feel good as the reviewer because you provided something of value that was well received. If you disagree with the change, well, you were just asked your opinion. So feel free, right? This is significantly less negative than the command, right? What we're really doing is fostering technical discussion.
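As an illustration of the kind of refactor being proposed in that exchange, a service-object extraction might look like this sketch; the class, its API, and the message strings are all hypothetical:

```ruby
# Hypothetical "extract a service" refactor: the loop below was previously
# copy-pasted wherever a post or comment needed to fan out notifications,
# so it now lives in one small service object. All names are invented.
class NotifyFollowers
  def initialize(author:, followers:)
    @author = author
    @followers = followers
  end

  # Returns one message per follower; callers no longer duplicate this logic.
  def call
    @followers.map { |follower| "#{follower}: new activity from #{@author}" }
  end
end
```

Whether this extraction is worth it depends on the surrounding code, which is exactly why phrasing it as a question opens a discussion instead of closing one.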
14:42
Code reviews are just a way for us to have excellent technical discussions with each other. So "What do you think about...?" is a great conversation opener in pull requests. "Did you consider...?" is another, like if you want to throw out an alternative, right? Or if you're getting lost somewhere, "Can you clarify...?" is a lot better than
15:21
"What the hell's going on here?", right? So these are all ways to soften suggestions and avoid negativity, which is gonna lead to better discussion. These are also excellent ways to provide feedback. Like if you are new to a team or you're a junior member of a team and somebody more senior than you submits a change, I think it's natural to feel
15:21
a little tentative about providing feedback. But doing it in the form of a question is a great way, right? You're just looking to learn and you're also kind of nudging the discussion in the way you want to go. And once you open with these conversation openers, yes, then you can break into some practical suggestions. But now you've got everybody on the right page.
15:40
So similarly, it works very well giving feedback from somebody senior to somebody more junior, maybe an apprentice or something like that, right? If I give that command to an apprentice, they're gonna do it. And maybe they won't feel great about it and maybe they won't even be entirely sure why. Whereas if I ask a question, what I'm really hoping that they'll do is engage, right, not just do it.
16:01
So this is pretty simple, right? All we're gonna do now is ask a bunch of questions. We're just gonna tack question marks onto everything. Well, we need to be really careful about this. It's pretty easy for a question to be a not-so-silent judgment. So here's one I see a lot.
16:22
"Why didn't you just...?" There are a couple of things wrong with this, actually, but one of my pet peeves is this word "just," right? Amen. So every time I see this, I think to myself, or sometimes say out loud, "why didn't you just?", right?
16:43
So words like "just," "simply," and "easily" pass judgment about the difficulty of a proposed solution. And when I read those, they make me feel like I wasted somebody's time or I missed something obvious. It's not a good way to feel, right? So let's look at how you can improve this.
17:01
We can get rid of that word just. Is this better? I mean, it's less judgy, right? That word just is gone, that's great. But we're still putting people on the defensive. It's still framed kind of negatively, right? Why didn't you do something? This is kind of a perfect example of the negativity bias in written communication. If I had this conversation with you and I said,
17:22
oh, why didn't you call map here? That's not so bad, right? But if I write this down where you lose any sort of sense of my tone or inflection, it's gonna come off more negatively. So what we're gonna do is be positive, right? We're gonna use those tools we talked about earlier. We're gonna ask, what do you think about? Can you clarify? Did you consider?
17:40
Those types of things. What we're really talking about here is asking the right questions the right way. And what we're after is better technical discussion. What we're talking about here is the Socratic method. Right? Socratic method, according to Wikipedia, is based on asking and answering questions to stimulate critical thinking and to illuminate ideas.
18:02
That sounds like exactly what we're trying to do, right? We're trying to have critical thinking around the code changes we're making and to illuminate some potentially alternative solutions. We're stimulating valuable discussion in our pull request now versus just throwing in a thumbs up. The Socratic method works pretty well for Socrates and Plato. It'll probably be okay for us
18:21
in our pull request discussions. So those are the two things that we're gonna do today, right? These are our tools for better technical discussion. We're gonna be well on our way if we start doing these two things. There's gonna be a couple issues that come up in practice. The first is, how are we gonna handle disagreements?
18:42
And the second, what is it I should be reviewing anyway? What's the high value thing for me to look at? Let's handle conflict first. Conflict is good. Your team needs conflict in order to drive a higher standard of coding.
19:01
A good, healthy debate around a change drives quality, and it leads to learning. But there are two types of conflict, one of them healthy, one of them not so much. So the easy type, the healthy type of conflict, is that we don't agree on an issue. Perfectly fine; we're not always gonna see eye to eye.
19:21
What's critical to note is that everybody's gonna have, like, a minimum bar for quality that you need to pass. Once you reach that minimum bar for quality, we're talking about trade-offs. We need to be really sensitive to the fact that we're just talking about trade-offs, right? So if you find yourself disagreeing with something and you're having this conversation back and forth, ask yourself: is it because I don't agree,
19:41
I don't think the quality is up to snuff here or is it because it's not the way I would have done it? If it's not the way you would have done it, that's fine, right? Like there's multiple solutions. We're talking about trade-offs here, right? We can go back and forth all day if we want. Like Socrates and Plato and all them, they can go back and forth all day but they're not shipping software. We need to ship this software to the user.
20:00
So we need to have some sort of agreement to disagree at some point. Reasonable people disagree all the time. So just make sure you're not arguing in circles because it's not quite the way you would have done it, right? So what about the second type, right? We don't agree on the process. So this can happen if you have somebody maybe just committing code directly to master
20:21
or you have somebody opening pull requests and ignoring feedback, right? So they're opening the pull request because they were told to but they don't value the feedback. They don't value the process. They don't value your time ultimately. My advice on this is to get in front of it, right? As a team, sit down and decide what it is you want out of code review. What do you expect, right?
20:41
Maybe it's that all changes, regardless of size, are gonna be put through a pull request review. And then all feedback will be addressed in some manner or another, you know? It doesn't mean you have to accept the feedback but you have to say like, oh, I see your point there but I really think this is better. I'm gonna go with this. We're gonna revisit it. I know we're gonna revisit it when we get to this other feature and maybe we'll have more information
21:01
to make that change or something like that, right? So once you've done that, you're still gonna have those problems, right? That doesn't solve all those problems. You're still gonna have people committing code directly to master. You're still occasionally gonna have people who are ignoring feedback. So what do you do in those situations, right? If you were in a situation where somebody's committing code directly to master, my advice to you is to review it anyway, right?
21:22
Go in there, add comments on the commit and when you're done, follow up with them afterwards and say, hey, I noticed you committed this to master. I had a couple questions. Can you take a look at it? When you get a second, let me know what you're thinking, how we're gonna address these. Oh, and by the way, can you, in the future, submit a pull request for this?
21:42
If this continues to happen, then it's time to break out the revert hammer, right? Go ahead and pull that change out and open a pull request for it and start adding feedback there. It's also crucially important that you enlist the help of your team in this. You can't be the one always swinging that revert hammer or always being the one cracking down.
22:02
If you are, that means you're the only one valuing it or the only one willing to actually speak up for it. Okay, so that's how we're gonna handle conflict, hopefully. What about what to review, right? People ask us a lot, like, what is it I should be focusing on in review? If I'm not catching bugs, then what am I doing, right?
22:22
And the key here is that everybody kinda brings their own list of things to look at, and that's how we get better from each other's expertise, each other's areas of focus. I can tell you what works for me, to kinda give you an idea of the type of stuff that I look for. First, a note on timing. When I'm doing reviews,
22:40
I'm really trying to stress small changes, right? Ten minutes is a long time for me to spend on a review. So what I'm really pushing for is small changes that are easier to provide context on, right? Easier to review, and I can go through them quicker. So once we have these small changes, one of the first things personally that I'm looking at
23:01
is single responsibility principle. This is the S from the solid design principles, right? Does every object in the system have just one job? And if you're not familiar with solid, that's not too particularly important. You can kinda focus on the single responsibility principle and the rest of those, the O, the L, the I and the D,
23:20
if you squint, can kinda follow from that single responsibility principle. So that's why I really like to focus on that one. Naming is a big one that I focus on a lot, right? There's two hard problems in computer science, naming things and cache invalidation, and time zones.
23:40
So good names make things easier to discuss, right? And that's what we're after, after good technical discussion. It means I can have a better discussion face-to-face. It means I can write these things more naturally. So I will definitely focus on names to the point where some people are like, you're missing the large picture here, but like I said, I'm trying to make it easier for discussion.
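A minimal sketch of what that single-responsibility check looks for, with invented classes: one object computes, another formats, and each can now change (and be reviewed) independently:

```ruby
# Hypothetical single-responsibility sketch: computing an order total and
# formatting a receipt are two jobs, so they live in two small objects
# instead of one class that does both. All names are invented.
class OrderTotal
  def initialize(prices)
    @prices = prices
  end

  def amount
    @prices.sum
  end
end

class ReceiptFormatter
  # Formatting knows nothing about how the total was computed.
  def format(total)
    "Total: $%.2f" % total
  end
end
```

In a review, a class doing both of these jobs is the cue to ask the "what do you think about extracting...?" question.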
24:01
Complexity is another thing I focus on, right? Are there areas of the code, I'll just look at a change and see the shape of a change and say, are there areas of the code where the shape looks complex? And I'll dive into those a little bit. And that's where I'll break out the can you clarify, right? And sometimes it turns out that the complexity exists for some future feature we thought we might need,
24:20
or we can kinda ship that off to later. Test coverage, as I'm going through a change, I'm kind of assembling in my head what it is I expect to see for test coverage. And if you're using RSpec, that comes with the S, so it's almost always at the bottom. It's great. So I'm looking to see like, okay, there's probably a feature spec that covers this.
24:41
Maybe there's a controller spec that covers this edge case here, and there's probably a bunch of unit tests around these methods that got added to the model or whatever, right? And I'm not specifically, like I said, looking for bugs. The test coverage is vital. If I see a bug, I'm going to comment on it, but I'm not doing your QA, right?
25:01
I've seen a lot of places where people are happy to say, well, so-and-so approved it, so this must be good, even though they had kind of a shaky feeling about it from the beginning. We're not doing QA. I keep saying that. And like I said, everyone has their own areas of expertise. Everybody has their own checklist, right? Personally, I'm interested in web security,
25:21
so maybe I'll look at some things from that perspective and make sure things are on the up and up there. Maybe you have somebody on your team who's really great at giving practical performance advice, right? Advice that's not premature optimization. It's great for them to jump in, right? And there are all sorts of things I didn't list here, like duplication. Whatever it is that you're comfortable giving feedback on,
25:41
bring that to the table. We're going to get the best from all of our teammates that way, and we're going to learn from the best parts of them. So one thing I didn't mention was style.
26:01
Is style important? Yes, style is important, right? If you look at a code base and it looks clean and consistent, it gives the impression of a team working together towards something, right? Everybody's on the same page. A neat and tidy code base is like a clean kitchen.
26:21
A place for everything, and everything in its place. The problem is that the study I cited earlier consistently found that people who received a lot of style comments on their code viewed those reviewers kind of negatively, right? They thought the reviewers were harping on things that weren't valuable and missing big-picture things.
26:43
Whether this is perception or reality doesn't matter, right? We're talking about improving technical discussion, so if somebody feels that way, if they feel the discussion is not valuable, then is there something we can do about it? Yes, there is. So my advice first is to adopt a style guide, right? There's plenty of community style guides
27:01
you can just adopt outright, or you can look at what you've been doing and just write it down somewhere real quick. Make sure that everybody knows that any arguments about style, whether you're using double quotes or single quotes or whatever it is, are gonna happen in that style guide, not in an individual pull request. After you've written it down, you're gonna outsource it. We're gonna use RuboCop, JSHint, SCSS-Lint, things like that to handle the style checking for us, right?
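For a Ruby team, "write it down and outsource it" might boil down to a few lines of RuboCop configuration. The specific rule choices below are just illustrative; the point is that the argument happens here, once, instead of in every pull request:

```yaml
# .rubocop.yml: a minimal starting point for a written-down style guide.
Style/StringLiterals:
  EnforcedStyle: double_quotes   # or single_quotes; pick one and move on
Metrics/MethodLength:
  Max: 15
```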
27:26
At Thoughtbot, we have Hound CI, which is a service that runs these linters on your code and adds comments as if it were a person. The difference being, it's a bot that's providing this feedback, so getting into a bike-shedding discussion with a bot
27:41
is not very satisfying. Right, so yes, full disclosure, that is a Thoughtbot service. It is, however, free for open source, and it's open source itself. I think it's a great product. I don't stand to gain anything personally from it. I think you guys should check it out for that reason. So these are the tools that we're gonna use
28:01
for more meaningful technical discussion. We start doing these things, right? We start providing context. We start making sure the author is gonna receive our feedback in the right way, we know what we're looking for in a review, and we know how to handle conflict that comes up. What we're well on our way to here is having a strong code review culture.
28:21
So a strong code review culture goes beyond the quality of a single change. It gets at the roots of the type of team that you have. So if your team has a strong code review culture, what is it that you're going to see? Well, first, you're gonna see better code.
28:40
Yeah, I already told you, this isn't about catching bugs. But the code's gonna be better because the discussion improves the solutions. I can't tell you how many times I've submitted a pull request and been like, that's some of my best work. That's going right through, right? And then five minutes later, there's three emails from GitHub with feedback. And at first, I'm like, ah.
29:01
And then I read the feedback, and it's all totally reasonable and makes the solution even better because it's somebody else's viewpoint about what they're really good at. And it took what I thought was already a good solution and made it better. Or maybe it turned the solution totally on its head and I was wrong, right? But the important part is that we're, through group effort, we're getting better. We're gonna have better developers.
29:21
You'll be reading code and writing about code every single day in black and white concrete examples. And like I said, we're gonna be taking the best from each other in these conversations. So maybe you're not that into web security, but because I keep commenting that you can't pass params that way, you start to learn that.
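That params comment is about mass assignment. Here's a minimal plain-Ruby sketch of the idea behind Rails' strong parameters; the `permit` method below is a simplified stand-in for `ActionController::Parameters#permit`, not the real implementation:

```ruby
# Simplified stand-in for Rails' strong parameters: whitelist user-supplied
# keys before they ever reach a model update.
def permit(params, *allowed)
  params.select { |key, _| allowed.include?(key) }
end

# An attacker smuggles an extra field into the request:
raw_params  = { "name" => "Alice", "admin" => "true" }
safe_params = permit(raw_params, "name")
safe_params  # => { "name" => "Alice" }  (the admin flag never reaches the model)
```

One review comment like this, repeated a few times, is how that knowledge spreads through the team.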
29:41
We have team ownership of code. What this means is my code, your code, their code, that's all dead, that's gone. At one point, we all decided that this was a reasonable change. Right? Maybe it wasn't the best solution, but we thought, given what we know at this time, this is good.
30:01
So the whole team is gonna own the code now. And what we're gonna get out of that is versatility. There's no more, oh, this person handles the issues with the sales dashboard, and this person handles the orders page, and when they're out, we better hope there's no bugs. Everybody knows what's going on in the system. And finally, we're gonna have healthy debate on our team.
30:22
There are a lot of teams that we go into that do not have healthy debates. They have silent seething, right? They have somebody upset about somebody else always committing what they think is crappy code. But now we have the tools for specific technical discussion, and we can be better at all of these discussions.
30:42
So if you look at all these benefits, right? Better code, better developers, team ownership, and a healthy debate. This sounds like a fantastic place to work. I want to work there, right? It's gonna make you better every day, and those are the benefits that I'm seeing in the strong code review cultures I'm a part of. Any questions?
31:03
Yeah, yeah, it's true. How do you handle somebody who's just not engaged in the process, right? I mean, I think helping them see the value of it and helping them feel like when they do give any review comments, they're engaged, right? So any comments that they do give, be excited about it, right?
31:21
And engage with them on those comments, and even if you don't necessarily agree, maybe make a couple concessions, right? Early on, to kind of get them back on board. And like I said, you write this stuff down, what you expect from people. You expect that you're gonna be doing these reviews. And a lot of people say, oh, I don't have time for those reviews, but I think that's bull. Like, we have plenty of time for reviews.
31:40
Like I said, 10 minutes is a long time for me to spend, so I finish up a feature, I look through some reviews, that type of thing. Okay, so the question is, how can you encourage more junior developers to review code of perhaps more senior developers? Is that right? I think a lot of the same rules apply, right? What you want them to do is say,
32:01
like, we have an apprentice, and I'll say, review this code for me, right? I wanna see what your feedback on the code is. And when they give that feedback, don't just dismiss it out of hand, right? Have a conversation about it. Try and give them some easy wins, things like that. Yeah, so the point there was, reviewing is another form of pairing, kind of, right?
32:22
You're not doing it live; you're doing kind of asynchronous pairing, I guess. And a lot of what I talked about today, if you notice, doesn't really have a lot to do with pull requests; it has to do with providing feedback to one another. Right, so it works equally well in pairing settings as well. Yeah.
32:40
Okay, so the question is, how do you deal with a company where there's a gatekeeper, right? There's a sole gatekeeper that does the reviews and you have to get through that person to get your code merged. Right, yep, so here's what I would say to that is, with your coworkers, change that culture or find a new job.
33:01
And I'm serious, like, the benefits I listed there, like, those are real. I see those benefits every day and it makes where I work a great place to work. So, yeah. Right, so how do you provide context on refactors, basically, right? There's no new features to be reviewed.
33:21
What I would say is, you explain why you're doing the refactor. And, like I said, we're not doing QA in our reviews. Maybe I know there's some weird place where, you know, you changed this method name and searched for everywhere it was called, but we're calling it through send here, and I know that off the top of my head and can give you that feedback. But in reality, we're relying on our tests, right?
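That send scenario deserves a tiny illustration. In this invented example, a text search for the old method name finds nothing to rename at the call site, so only a test exercising this path would catch a botched rename:

```ruby
class Report
  def total_price
    100
  end
end

# The method name is assembled at runtime, so grepping for "total_price"
# will not find this call site; renaming the method without updating it
# would only be caught by a test that exercises this path.
field = "price"
Report.new.send("total_#{field}")  # => 100
```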
33:42
That's what refactoring's all about, is having good tests. So if your tests pass, then what I'm interested in as a reviewer is what you think we're getting, technically, out of this refactoring. Like, tell me why you felt the pain and what this solution solved. Yeah. So the question is,
34:00
how do you work with, like, a QA team in this, basically? Like, when do you do the review versus doing the QA sign-off on something? My advice there is to do the code review before the QA, in case anything significant changes, and then, hopefully, you make QA's job really boring. And if they find something, because they do, and, you know, they're going to,
34:21
then those changes get reviewed as well. Yeah. The question is, how important is it that it be asynchronous? I don't know. I would say that, like, on larger changes, I'll often pull somebody over and just try and walk them through it. But I do think the asynchrony kind of
34:41
makes the change have to stand on its own, which is also really interesting. So I don't know. The answer is I haven't played with synchronous versus asynchronous enough to know. All the way in the back. What's my opinion on authors merging their own pull requests? I would say what I typically do is,
35:01
once I have a good code review workflow going with the team, is if there's just a couple of small comments that are easy to address, and I already am a trusted member of this team and I trust that, like, we have a good relationship already, then what I'll do is I'll just address that feedback. If it's a straightforward thing to address that doesn't necessarily need additional review, like I renamed something to something they suggested,
35:20
then I merge that right in. Where we work, authors do merge their own requests. Sometimes, with some teams, you're waiting on, like, specifically having a thumbs up or two thumbs up or something like that. That's for your team to work out. So the question is, with refactoring, it's occasionally hard to, like, I'm sorry, try again. Yeah, okay, so it's hard to commit small changes
35:41
with refactorings. So how do you balance the need of maybe needing to run this by somebody first versus, like, presenting a gigantic pull request? I find that, like, most of those large refactorings come out of conversations that you're already having anyway. So, like, I might have a conversation with Caleb about, like, ah, this area of the system is really bugging me, and then finally we get around to this refactoring.
36:02
For a larger change, like, when you actually do the review, leaving it as several commits before you squash is probably a good way. Like, in those cases, I'll say, this review's gonna be a lot easier for you if you step through the commits, right, and you'll see the process I went through and you can kind of follow it. Yeah. So the question is, how do you handle different time zones, basically, right?
36:22
And possibly language barriers or cultural barriers. I haven't had to deal with a language barrier too much yet, like, most of the people speak good enough English to conduct a code review with. But I have dealt, and am dealing with, time zone differences, which are really difficult when you're trying to say that, like, there has to be consensus around a change. Right, if somebody's 12 hours ahead of you,
36:41
it's really difficult for them to have to wait a whole other day for feedback. Unfortunately, that's kind of the price of having a widely dispersed team like that. Hopefully there are people in that time zone as well who can provide a quality review, and maybe you can kind of demarcate the work. It's tough when you have a distributed team that's spread so wide. Like, three hours is reasonable to handle,
37:01
there'll be some overlap; 12 hours is tough. I don't know. If you have some good suggestions, let's talk afterwards. Yeah, over here. Okay, so that one wasn't a question; it was: get rid of the word "just." I'm out of time, so come up and talk to me out in the hallway. My Bike Shed co-host, Sean Griffin, is here.
37:23
We're gonna be doing some podcasts out somewhere on this floor down here maybe, eventually, so you can follow us on Twitter. You can email me questions, tweet me questions, find me later, all right? Thanks, guys.