
Code Review Skills for Pythonistas


Formal Metadata

Title
Code Review Skills for Pythonistas
License
CC Attribution - ShareAlike 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor and the work or content is shared also in adapted form only under the conditions of this license.

Content Metadata

Abstract
As teams and projects grow, code review becomes increasingly important to support the maintainability of complex codebases. In this talk, I'll cover guidelines for writing consistent Python code beyond PEP 8, how to look out for common Python gotchas, and what Python tools are available to automate various parts of the review process. Most importantly, I'll cover the human aspect of code reviews - how we can be better at approaching reviews with empathy and understanding from the perspective of both a reviewer and a submitter. Following these successful code review practices will lead to happier teams and healthier code bases.
Transcript: English (auto-generated)
I hope you're having a lovely DjangoCon.
My name is Nina. I've been writing code professionally for over a decade now. I've worked at some companies you might have heard of, like HBO, Meetup, Reddit. These days, I work at Microsoft as a Cloud Developer Advocate. That means that my goal is to make the Python experience on Azure a pleasure for Python developers everywhere.
That right there is our mascot Bit. If you think he's cute, please find me after the talk. I have lots of stickers to give away. And today, we're going to talk about code review skills for Pythonistas. The slides are available online at the link up above. Grab a copy to follow along if you'd like to. Share with your coworkers.
There are lots of links and helpful resources in the slides. Quick show of hands, how many people here have been part of a code review? OK, almost everyone. Now, how many of you have only had a positive experience with code reviews?
Three people. I want to talk to you after this. OK, so not many. And my goal today is to change that. So after this talk, you're going to know how to make the code review process one that's positive and productive. Let's try something fun during this talk.
If you're on Twitter, if you're learning something new, if you're excited about something you've heard today, please go ahead and share a tweet. The hashtag for this conference is DjangoCon. You can @-mention me; my username is nnja. Now, to briefly cover what we're going to talk about today, I want to tell you what the proven benefits of code review are.
And those are going to be based on research and on case studies. We're going to talk about setting standards. I'm going to go over some tools with you that are going to make the review process a lot better and easier for Python developers with automation, because everybody likes it when their job is easier. I'm going to give you specific examples
of how to be an effective reviewer and a submitter, how to submit your PRs for maximum impact, and lastly, I'm going to share how to use those tools to build a much stronger team. What will you take away from this talk? If you're a complete beginner, you're going to have a comprehensive overview of code review best practices.
If you're intermediate, you're going to learn about tooling and automation. If you're a total pro, you're going to learn about the hardest part, and that's the people factor. Remember that this talk is not one size fits all. I'm offering you suggestions here. You need to adjust them based on your own work situation.
There are some factors here, like team size: taking the time to review will affect a team of two much more than a team of 10. Your product type makes a difference. If you're in an agency, you might have tighter deadlines. You might have more of an incentive to just try to push code out the door. And if you're working on open source,
that's also a big factor. When you're coworkers, you have a lot more motivation to work together. If you're working on open source, the rules are just a little bit different. A paycheck really offers incentives that are hard to compete with. And you might have some friction there because of things like language barriers, different time zones.
It's also not one size fits all because of defect tolerance. What is defect tolerance? It's the rate of failure that's acceptable in your software. If you're dealing with something like a cell phone game, you have a high defect tolerance. If the phone game crashes, whatever, who cares, right? But you'll have low tolerance if you're writing software
for something like medical equipment or an airplane, because you might get zapped in a medical machine if your code's not thoroughly reviewed and tested. And someone laughed, but this is true. This has happened in the course of software development.
And code reviews can seem really frustrating on the surface. Just, I mean, look at this developer. She's so angry that she's eating her laptop. Code reviews shouldn't be this frustrating. We don't want that to be us.
Some of those apparent frustrations: people think it adds a time demand. That can be especially noticeable on smaller teams. It adds process, and everyone hates process, right? It can bring up some team tensions. I mean, reviews can bring up egos, personality incompatibilities. They can flare up some of those hidden tensions
between team members. And lastly, every once in a while, we'll just run into a really smart dev who thinks that their code is just too good to be reviewed. How do you change these attitudes? You need to start by selling the benefits. In the short term, it's true. Frustrations and slowdowns are inevitable.
But like with all things, the more you practice, the more velocity you'll have over time, the better you'll get. The biggest benefit, though, is that you're gonna find bugs and you're gonna find design flaws. They can be found and fixed before the code is done. There have been multiple case studies from IBM, AT&T, Microsoft
that have shown that code review can lower the bug rate by up to 80%, and it can increase productivity by up to 15%. Because at the end of the day, the goal is to find those bugs before your customers do. Reviews also help us feel a sense of shared ownership
and shared knowledge throughout the team. We're in this together. No developer becomes the only expert because everyone gains familiarity with different parts of the code base by participating in reviews. Why is it important that no single developer is the only expert? We call this the lottery factor.
It's a measurement of how much concentrated, specialized knowledge belongs to individual team members. This tweet is about Miguel. He works for the New York City transit system, and when the subway vending machines go down, he's the only person that knows how to reboot the system.
There's a problem here because his commute home takes three hours and he turns his cell phone off during that commute. That's bad enough, but what happens if Miguel wins the lottery tomorrow and decides that he never wants to work another day in his life? That's a huge problem.
That's the lottery factor. Would one person leaving tomorrow completely sabotage this project? So, the benefits of code review. We find bugs. We do so before our customers do. We share ownership in the product. We share knowledge about our code. We reduce that dreaded lottery factor.
We need to encourage consistency here because at the end of the day, the code isn't yours. It actually belongs to your company or your project. That means that your code needs to fit your company's or project's style guides and standards, not your own. So, what reviews do is encourage consistency,
and that makes the code more robust because, let's be real, nobody stays at a company forever. Code longevity is really an important factor. Code reviews need to be universal and they need to follow guidelines. Doesn't matter how junior or senior you are, everybody on the team should be submitting
and reviewing code. If only senior developers are doing code review, that's actually a huge bottleneck on your team. I worked with a developer in a Java shop (let's call him Bob) who insisted on formatting his code C++ style instead of Java style. So, there's a little bit of a difference in terms of where the opening brace of a function goes.
In Java, it's on the same line. In C++ style, it goes on a new line. And this caused some nightmare diffs, as you can imagine, and a lot of frustration amongst the team who was maintaining his code. When I protested, I was told, well, you know what, that's okay. He's allowed to do that. He's the most senior engineer, and that's fine.
And some of you are kinda shaking your heads, but it's a true story, and special cases like this, this kind of inequality, really breed dissatisfaction. Nobody should be special in this process. So, how do we do all of this? Well, let's jump right in. A style guide. A style guide is what distinguishes
agreed-upon convention from personal taste and opinion. And many of you might be thinking, well, we have PEP8. Isn't that good enough? Not really. PEP8 only scratches the surface. It only offers some suggestions. Whatever style guide you choose,
it should be agreed upon beforehand. Google has a lengthy one, for example. It goes through the pros and cons of every decision made. At the end of the day, it doesn't matter which one you choose. Just pick one, stick with it, because great code bases look like they were written by an individual when actually they were worked on by a team, and a style guide helps enforce that.
Python has some other useful tools, called formatters, that help with this process. AutoPEP8 is the least strict. It just formats your code to adhere to PEP8. Black is brand new, and it's my personal favorite. It's the uncompromising Python formatter. It has almost no configuration options,
so there's no room for disagreement. Basically, you pick a line length, and that's it. YAPF, which stands for Yet Another Python Formatter, if you're seeing a theme, it's very configurable. It even allows you to specify a style guide to follow. There's a really cool demo of Black written by Jose Padilla.
Check it out after my talk, of course, for those of you who have your laptops open, and you can see what formatting changes Black would make to your current code. There's also really amazing support for Black in my favorite editor, VS Code. If you don't know what VS Code is, it's a free, open-source editor
with amazing Python support. It's cross-platform, runs on Linux, Mac, Windows. In order to set up Black in VS Code, you need to install the Python extension, pip install Black, update a handful of settings, and that's it. You can then use VS Code to format your code with Black on save.
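To give you a feel for what that looks like, here's a rough before-and-after sketch (the function is made up, and the exact output can vary slightly between Black versions):

```python
# Before: hand-formatted, inconsistent spacing and quoting
def create_user(name,email,  is_admin = False):
    return {'name':name,"email": email,'admin':is_admin}

# After Black: one consistent style, nothing left to argue about
def create_user(name, email, is_admin=False):
    return {"name": name, "email": email, "admin": is_admin}
```

Because there's essentially only one way Black will lay your code out, diffs stop being about formatting.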
Maybe Bob wasn't in the wrong. Maybe, in fact, C++ style formatting is better, but the problem was he wasn't following convention. Consistent code will be a lot easier to maintain by a team. Code review also needs to be done by your peers and not by management because the end goal is not to get in trouble.
Bugs found during code review should never come up in the performance review process. That's why we're doing code reviews in the first place. And to add on that, don't point fingers. Maintain a no-blame culture. When the team reviews code, the team becomes responsible for code quality, not just an individual. This really needs support from management
if you work in an organization because if you get in trouble for sharing your mistakes, you're just gonna brush them under the rug next time. You won't be able to learn from them. Support your teammates here. It's not a competition. When code reviews become this really positive process, developers aren't gonna dread it. They're gonna want their code reviewed.
They're gonna be excited about the process. It's not just gonna be something that they have to do every day. Code review fundamentals, they need to be done by your peers. Style guides and formatters help encourage consistency. Maintain a healthy no-blame culture.
How do we do code reviews? There's two sides of this coin. For effective reviews, we need to learn not just how to be a great submitter, but also how to be a great reviewer. I love this comic here. On the left, the person is saying, there's no need to double-check these changes. If some problems remain, the reviewer will catch them.
Person on the right. No need to look at these changes too closely. I'm sure the author knows what he's doing. You need to be really careful not to get rubber-stamped. What is rubber-stamping? It's when the submitted code is so complex,
the reviewer thinks that it's obvious that the author knows what they're doing. They just rubber-stamp it, approving the code without fully understanding it. Don't be too clever. Readability really counts. Submitting over-complicated code is a surefire way of getting that rubber stamp.
I have a developer friend who used to love showing off how smart he was by checking in all these complex, over-engineered solutions. He stopped doing it when he realized that he was punishing himself. It meant that he always ended up being the maintainer of that code. Once he came to that realization, he stopped flexing and started writing maintainable code.
Remember, readability counts. I think Russ Olson said it best here. Good code is like a good joke. It needs no explanation. If you feel like a piece of code is confusing, it is.
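For example (the names and data here are purely illustrative), compare a clever one-liner with the same logic spelled out:

```python
items = ["alpha", "", "beta"]   # example input
transform = str.upper           # example transformation

# Clever: technically correct, but the reviewer has to unpack it in their head
results = {item: value for item, value in ((i, transform(i)) for i in items) if value}

# Readable: the same behavior, obvious at a glance
results = {}
for item in items:
    value = transform(item)
    if value:
        results[item] = value
```

If the second version is the one you'd rather maintain six months from now, that's the one to submit.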
Leave a comment either in code or in your review tool. Better yet, refactor it so that it's more readable and understandable. I find the process of submitting code reviews a little bit easier to think about when I think about it in stages, ranging from before I even submit the pull request
to after the review has been completed. At stage zero, before submission, what kinds of things do we need to think about before we even start the review? The most important thing: provide the context, the why. This helps the reviewer a lot. Why did you write this line of code? Link to any underlying issues or tickets,
like a bug report that would give more context. If there's not enough context in that ticket, provide more. Document why the change was needed. For larger PRs, you can even provide a change log.
And remember to point out any unintended side effects your code might have. Remind yourself that you are the primary reviewer. Before submitting, you wanna review your code as if you were giving someone else a code review. Before I submit my PR, sometimes I'll look at it again
in the context of a GitHub diff, and sometimes I'll see new issues crop up. There's just something about that context switch that gives me a new perspective. This lets you anticipate any problem areas before the reviewer does. As the primary reviewer, it's your responsibility to make sure that your code works,
that it's thoroughly tested. You always QA your own changes because you don't wanna rely on someone else to catch your mistakes. Before submitting, you can also try a checklist. What makes a good checklist? Check off the small stuff.
Did you check for reusable code, utility methods, or libraries? Did you remove any debugger statements? If you don't have pre-commit hooks set up, that might be something that you check for. Are your commit messages clear and understandable? You can also check for the big stuff.
Is your code secure? Is it going to scale? Is it maintainable? Is it resilient against outages? For this process, I highly recommend a book called The Checklist Manifesto. It's pretty short. It's a book about how checklists help doctors make surgeries safer and help pilots fly planes safely,
and guess what? Checklists can help developers too. We made it to stage one. We've submitted the review. At this point, you're starting a conversation. Don't get too attached to your code before the review even starts. Anticipate comments, anticipate feedback,
and acknowledge that you're probably gonna make a few mistakes. Remember, the entire point of a review is to find problems, and problems will be found, so don't be caught by surprise here. Stage two is optional: submitting a work-in-progress pull request.
When might you wanna do that? As a rule of thumb, if you're working on something big and complex, when your code is about 30 to 50% done. This drives perfectionists crazy, right? But you can't be afraid of showing incomplete, incremental work here. It's hard to let go of the desire to make everything
beautiful and perfect, but that's not what a work-in-progress pull request is about. When code is a work in progress, good types of feedback to get here are architectural issues, overall design problems, suggestions for design patterns. You really want this type of feedback
before you approach being done. It'll save you a lot of time. You don't need to rewrite a finished product. Feedback at this stage early and often helps the process a lot. Stage three, we're almost done. We can see the finish line, it's so close.
The type of feedback that we prefer to get at this point: nitpicks, variable names, maybe some requests for more documentation or comments, any other small optimizations. As the code evolves, it should become more firm. No one wants to hear "change it all" at this stage.
If you did, it means you've had some sort of breakdown in communication. This kind of process prevents wasting time and effort for bigger, more complex pull requests. At this point, you also wanna ask yourself, did I solve one issue with my pull request? If you solved multiple problems, you wanna break up your code into multiple pull requests.
This will really help the reviewer. If you solved an unrelated problem, make a new PR with a separate diff. You can even make a branch of your code from a feature branch, and this will help keep that other diff small. We want to keep reviews small because it helps prevent reviewer burnout.
Some case studies have shown that reviewers become less effective when they look at more than 500 lines of code in a session. Keep it small, keep it relevant. If a big review is unavoidable, consider giving the reviewer some extra time. You can also use automated tools
and static analysis to help streamline the review process. First and most importantly, a linter. Hopefully you all have linting set up for your Python projects. If you don't, it's the first thing that you should do when you leave this talk. What is code linting? It's an automated way of checking syntax, or if you want to get fancy,
you can set it up to check style guide violations too. Here's an example of the kind of thing a linter catches. A linter can integrate with your code editor; just about any editor that supports Python can do this. And this way, the reviewer doesn't have to waste any of their own time pointing out syntax errors.
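For instance, here's a small made-up file with the sort of mistakes a linter will flag before a human ever has to:

```python
# lint_me.py -- contrived example; flake8/pylint flag both problems below
import os                        # unused import

def greet(name):
    print(f"Hello, {nmae}!")     # typo: 'nmae' is an undefined name
```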
Pylint is a great linting option for Python. There's lots and lots of configuration options, integrations, coding standards, error detection, refactoring help, editor integration. Make sure you take the time to learn your linter and its configuration options. I'll show you one of my favorite Pylint rules.
I don't know if any of you have done this, but it's kind of a common gotcha for me. I'm refactoring out parameter arguments, copying and pasting them, and I end up with a trailing comma. It's really hard to notice, but there's a trailing comma after bar, after the assignment to bar. And what that does is make bar a tuple
instead of an integer. And this causes vague errors, test failures. It's a little bit non-obvious. Tracking down this sort of bug, if you don't have any type hinting enabled in your project, has messed up my day multiple times.
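A sketch of the gotcha being described (the names are just illustrative):

```python
def process(foo):
    # A refactor removed an argument but left the comma behind...
    bar = foo,        # bar is now the tuple (foo,), not an integer
    return bar + 1    # ...so this blows up later with a confusing TypeError
```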
Pylint comes to the rescue here. There's a rule, trailing-comma-tuple. You can set it up and catch this sort of stuff. You can use Vulture to find unused and unreachable code in Python using static code analysis. This doesn't work so well when the code is called via introspection.
And because Python is dynamic, Vulture can make a mistake, so it's really good practice to double check the results. When used correctly, it can really help keep a code base clean. Here's an example of Vulture.
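Something along these lines (a minimal reconstruction of the example described next; the real slide may differ):

```python
# dead_code.py
def foo():
    return 1

def bar():
    return foo() + 1

def baz():      # defined but never called anywhere
    return 42

bar()
```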
We have some code here. We have three methods, foo, bar, baz. We are only calling foo and bar. When we run Vulture on this file, it will tell us with 60% confidence that the function baz is unused. That's pretty cool. Git pre-commit hooks allow you to short circuit a commit
and make checks before the code even reaches your repository. You can do things here like run a linter, check your syntax, check for to-dos, debugger statements, unused imports, any other stuff that litters up your code. You can enforce styling here with autopep8 or the Black formatter. There's even an option to completely reject the commit
if conditions aren't met. Sounds great, right? It also sounds like a lot of work, and we developers are pretty lazy. Thankfully, someone else has set up a great tool for us. It's called pre-commit.com.
If you don't wanna write your own pre-commit hooks, it's an amazing library with lots of resources available. It's written in Python, and it gives really nice, well-formatted command line output. Extra nice here, you can test your hooks without actually trying to commit, which becomes really tedious when you're writing pre-commit hooks from scratch.
It supports a few nice hooks for Python. We've got autopep8 to run PEP8 formatting on your source, flake8, pyflakes, check-ast to check if the file contains valid Python, and debug-statements to check for debugger statements. I don't believe there's support for the Black formatter just yet,
so if anybody has the time and wants to contribute that back, please do. And there are a lot more hooks that aren't Python-specific, like trimming trailing whitespace, checking for files that have merge conflict strings in them, verifying your JSON. Lots of time-saving features here.
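In practice, you list the hooks you want in a .pre-commit-config.yaml file at the root of your repository. A minimal sketch using a few of the hooks mentioned above (the rev pin is illustrative; use whatever current release you've vetted):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0                   # illustrative version pin
    hooks:
      - id: check-ast             # file parses as valid Python
      - id: debug-statements      # no leftover debugger statements
      - id: check-merge-conflict  # no merge conflict markers
      - id: trailing-whitespace
      - id: check-json
```

Then `pre-commit install` registers the git hook, and `pre-commit run --all-files` lets you try everything without making a commit.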
Tests, there are tons of talks about tests, so I'm gonna touch on them very briefly. Write them. Please. Tests need to be passing for somebody new to be able to meaningfully contribute to your project, and tests help you identify problems immediately.
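Even a tiny test is better than nothing. A minimal pytest-style sketch (the function under test is invented for illustration):

```python
# test_slugify.py -- run with `pytest`
def slugify(title):
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_slugify_lowercases_and_joins_words():
    assert slugify("Code Review Skills") == "code-review-skills"
```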
Nobody wants to work with the scumbag programmer who commits untested code. He's a bad guy. Continuous integration, what is it? It's an automated build with every push. You can run your linter, run your tests.
Lots of available tools here. CPython uses Azure DevOps pipelines. There's also Travis, CircleCI. The cool thing about Azure DevOps pipelines is that they support multiple platforms: Mac, Linux, and Windows. Lots of these tools are free
for small teams or open-source projects. All of them integrate with GitHub pull requests, so you'll see some nice output like that right in your PR. Coverage helps too. What is coverage? It's the percent of code that's executed when a test suite runs. It gauges the effectiveness of your tests. Coverage.py is a great tool.
It can generate nice HTML reports. Remember we talked about fault tolerance earlier in the talk? If you have a low fault tolerance, for example, if your software runs in a nuclear power plant, your coverage should be close to 100%. Coverage tools also integrate into GitHub.
There's coverage.py, there's coveralls.io. Automation in this space saves everyone time. Now we're at the last stage. Our review is complete. The reviewer has finished looking at your code. We're still not done though. At this stage, we need to remember to be responsive.
Don't just ignore any comments that were left. Respond to them, even if you don't agree with them. And especially if you decide not to implement that suggestion. Make sure you come to some mutual agreement with the reviewer if you decide not to go ahead with their suggestion.
If there are any comments, let the reviewer know that you've pushed changes when your code is ready to be reviewed. It's still a conversation. But don't bike shed. What is bike shedding? It's when you're arguing over really minor issues while more serious ones are being overlooked. For example, people arguing about what color
to paint the bike shed when the house isn't even done. If you're going back and forth about something more than three times, step away from the keyboard, use your words. Don't forget to record the result of that conversation in the pull request to maintain context for anyone else who might be looking at it later.
Bikeshed.com if you wanna learn more about bike shedding. If you're co-located, that's great. You can just walk over to the other person's desk for a conversation. If not, a tool that I really like is VS Code Live Share. It's an extension for VS Code. It allows real-time sharing between two VS Code instances.
You keep your own editor, your own fonts, your own themes. Most importantly, your own keyboard shortcuts because I'm not looking to start VI versus Emacs editor wars. We just wanna collaborate. You can edit collaboratively but navigate independently
and you don't need to learn anything new. Super cool for remote teams. Takes a lot of pain out of the process. There's a link to download the extension in my slides. If you disagree with the reviewer's comments, no radio silence here. Try to carefully explain what the reviewer might have missed.
Open a friendly discussion until you understand why the reviewer left the comment. It's possible the reviewer missed some of your thought process but maybe you're just wrong. It happens to the best of us. When you're wrong, that's okay. Learn to gracefully accept defeat.
Don't take any of this feedback personally. Think about it as an opportunity for growth. Admitting that you don't know something is really hard. It can be a great way to trigger imposter syndrome that's present in many of us. Don't take it personally. You are not your code. Use it as an opportunity to grow.
Remember, we're all working towards the same goal: to ship code. Be grateful that someone took the time to review your code. Offer thanks if you can. How do we be a great submitter? We provide context. We review our own code first. We expect a conversation and we submit work in progress PRs when necessary.
We use automated tools to help us. We're responsive and when necessary, we learn to gracefully accept defeat. We spend a lot of time talking about how to be a great submitter. Let's cover the other side of this story.
How do we be a good reviewer? I love this comic too. The therapist asks, why do you think you're so hostile in code reviews? And the dev laments, if only I had been more popular in high school. Code reviews should not look like
an appointment with your therapist. You need to approach them objectively and without ego. Leave some of those emotions behind. Have empathy towards others. There should be no room for hostility here. Have empathy towards yourself too. Check in before you start. Are you hungry? Are you thirsty?
Are you angry? Maybe you're dehydrated or tired. Check in with yourself if you need it. Have some water, have some coffee, take a walk, come back to it. Because remember that all of these feelings can affect the review process. Take care of yourself. Be objective during the review.
You can say something like, I noticed that this method is missing a doc string, instead of, you forgot to write the doc string. Reviews are a learning opportunity, not a chance to catch someone else being wrong. This type of phrasing really helps keep it from getting personal.
Try to ask some questions instead of giving answers. Would it make more sense if we did it this way? Did you think about trying this other approach? Offer suggestions. You can say things like, it might be easier to, or we tend to do it this way instead.
Because suggestions are better than ultimatums. Avoid these terms, always: simply, easily, just, obviously. If it was so obvious, the submitter wouldn't have done it in the first place. They might be missing context,
they might be unaware of a concept. Especially avoid this one: "well, actually." Don't say it. It's something that you might say when something or someone is mostly right, but you feel like you need to interrupt them with a very minor correction. Usually it's not worth it.
I recommend reading the Recurse Center social rules for this gem and many more. I'm not sure if any of you practice yoga. I do occasionally, and this happens in my class all the time. The teacher will somehow twist herself into a pretzel, and then she'll just tell the class
to now simply put your feet behind your head. Few of you would consider this simple. I mean, I certainly don't, but that looks hard. So remember this uncomfortable little guy when you start saying one of those words. To be effective, you really need to have clear feedback.
Your opinions need to be strongly supported. You need to share the how and why: why you think this change is necessary, how you might go about the implementation. Link to blogs, documentation, other resources that back up your opinions. And don't feign surprise if someone doesn't know something
even if you consider it a basic concept like, gasp, I can't believe that Dave doesn't know about the singleton design pattern. Cut down on the snark and innuendos. This isn't the time or the place. Reviews can bring up a lot of emotions for both sides, so be cognizant of that.
Also, make sure you stay away from critical emoji-only feedback. If you leave a thumbs down emoji and no comment, not a great call. Something like this is not clear feedback.
Remember to compliment good work and great ideas. I like to leave a thumbs up when I see something like a great refactoring, a particularly clean solution, something really elegant, because reviews shouldn't be all about the bad. For large reviews with a lot of comments, consider leaving at least one compliment.
And you don't want to be a perfectionist. This tweet says it a lot better than I can. The goal is better code, not exactly the code that I would have written. For big issues, don't let perfect get in the way of perfectly acceptable.
Prioritize what's important to you. Usually 90, 95% of the way there is good enough. When you press for complete perfectionism, you end up taking ownership away from the person who wrote the code. It takes their feelings of accomplishment and creativity away. But it's also okay to nitpick things like syntax issues,
spelling errors, poor variable names, missing corner cases. Save those nitpicks for last. You might ask, what's the harm in letting some of these pass by? Well, it's the broken window theory. If I see sloppy code, I assume it's okay to check in sloppy code too. So at this point, you need to specify if your nitpicks are blocking merge or not.
As a PR submitter, I think of nitpicks as a compliment. If the rest of your code is so well written that the small stuff sticks out, you did a great job. Don't burn out as a reviewer. Those case studies have shown that you should look at about 200 to 400 lines of code
at a time for maximum impact. In practice, reviewing between 200 and 400 lines of code within a 60 to 90 minute window will let you find about 70 to 90% of the bugs. If 10 bugs exist in your code, a properly conducted review would find between seven and nine of them.
After 500 lines of code, those studies have shown that the ability to find bugs drops dramatically. If the code stops making sense, you're probably tired. You might need to take a break. You might miss something. A good rule of thumb if you work in a company,
try to do those reviews within 24 to 48 hours after they're submitted. This is especially easy when those reviews are small. This lets you look at those reviews incrementally. It prevents buildup. It also means that the code is fresh in the submitter's mind for questions. Obviously, this rule is, oh, I said obviously. Shame on me.
It's not obvious. But a good rule of thumb is to be flexible for open source projects. Try to at least respond within 48 to 72 hours if you can, because it helps keep excitement and momentum up. You don't wanna end up like this guy, who's just still waiting and waiting
and waiting for code review. So how can we be a great reviewer? We have empathy. We watch our language. We have clear feedback. We give compliments for good work. We're not perfectionists about the project, about the process. We avoid reviewer burnout.
We finish those reviews as soon as possible and we don't leave people hanging. These are not just anecdotes. I've become a much stronger engineer by giving and receiving code reviews, especially at a company that had a great review culture. I used it as an opportunity to learn from others. I started anticipating what a reviewer might point out
before even getting to the review process. Code reviews sound like hard work and they are. I'm not gonna sugarcoat it, but now you know what to do to reap the rewards. You can use this as a huge advantage when somebody new joins.
Sasha said it best during her talk on giving and getting technical help. She said, hiring senior engineers is hard. We all know that. Instead, you can hire junior engineers and you can grow them into a functional and productive part of your team. Code reviews are an absolutely fantastic way of doing that.
If you're not doing code reviews, you're missing a huge opportunity. They've been proven to improve code quality across all kinds of organizations, all kinds of code bases. Think about what's blocking your team from trying out some of the techniques I told you about today and figure out ways
of eliminating those blockers. Remember: allocate the time; develop the process, don't force it. Code reviews are not a one-stop fix. You need to use them in addition to tests, QA, and other kinds of methods for maximum impact.
What did we learn? My coworker, Sarah Drasner, she said that, coworkers who are good at code review are worth their weight in gold. I 100% agree with her. And the most important thing, we learned that code reviews decrease WTFs per minute, which is the only valid measurement of code quality.
That's a fact. We all want to work with the team on the left and not with the team on the right. Fewer WTFs lead to much happier developers. If you want to learn more about what I work on at Microsoft, check out the link, bit.ly slash Python Azure.
My slides are available for download at bit.ly slash Django code reviews. There are tons of links to additional resources here, including a link to my blog post about the top three gotchas to look out for when you're code reviewing Django applications. And some links to examples.
Code styles, lots of great things in there. If you have comments, questions, feedback, if you agree with me or if you disagree with me, please send me a message on Twitter. I'd love to hear from you. Thank you so much.