Most teams do some sort of Code Review, by which I mean the process where team members look at and give feedback on code, regardless of the mechanism (pull request, pair programming, ad-hoc feedback).
While the format and timing may vary, the stated goals of a code review often include:
Share knowledge
Detect errors
Improve quality (including understandability of the code)
While the jury is still out on how well Code Review detects errors, there is some consensus that it can play a role in sharing knowledge and improving quality. Whether your Code Review process actually achieves those goals depends a lot on how your reviews work.
I’ve written earlier about some of the attributes of the process and the feedback that makes reviews more generally useful, including that reviews involve people with the context and availability to give timely, relevant, and actionable feedback. Those mechanisms got me thinking that we don’t talk enough about the values that underlie good code reviews. I realized that there are some similarities between code reviews and writers’ workshops and other creative review mechanisms.
The Software Patterns Community uses writers’ workshops as a mechanism for improving patterns, in particular to:
Share knowledge (both from and to the author)
Identify errors and issues
Improve the quality of the work (and the patterns universe in general)
These sound a lot like the goals of a Code Review. (For the curious: Richard Gabriel has written a Pattern Language for Writers Workshops that explains the process and its rationale in more detail.)
There are differences, to be sure, especially around the size of the work, the time scale, and the like, but in the end we are still analyzing something a person created and seeing whether it is understandable and accomplishes what the author set out to do.
The key mechanisms that make workshops work well, and that carry over to Code Reviews, are:
The goal is to improve the work. Yes, you want to identify problems, but you also want to suggest solutions.
The participants are all creators; if you are giving feedback now, you may be getting feedback at another time.
The author owns the work and decides how to process the input.
The book about Pixar, Creativity, Inc., calls out these four points when describing the BrainTrust format used to review incremental versions of films:
The people in the room must view one another as peers.
You must remove power from the room.
You must recognize the vulnerability of the filmmakers.
You must give and receive honest notes.
Taken together, the writers’ workshop and BrainTrust guidelines also point to what not to do in a code review:
Have only senior people do reviews. You want the reviewers to know the code and the problem, and initially, that might be the more experienced people, but the choice is about skill and knowledge, not status.
Treat code reviews as “evaluations.” The goal is to improve the code. If a pattern emerges that leads to the thought that someone needs coaching, address that in another context, ideally with an “elevate the person’s skills” frame.
Have reviews be gates. A team dynamic where someone will blithely ignore show-stopper feedback points to larger issues than any gate can fix. If you must have gates, make them part of your automated tests.
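As one sketch of moving a gate into automation: a CI check that runs the test suite on every pull request, so the pipeline blocks a broken merge and the human review stays focused on improving the work. This example assumes GitHub Actions and a Node project; the workflow name and the npm commands are placeholders for whatever your project actually uses.

```yaml
# Hypothetical CI gate: failing tests block the merge,
# so no reviewer has to play gatekeeper.
name: tests
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci     # install dependencies (placeholder build step)
      - run: npm test   # the automated gate: a failure fails the check
```

With this check marked as required on the target branch, the gate is enforced mechanically, and review feedback can remain suggestions the author owns.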
We can contribute and learn a lot from reading each other’s code, and a lightweight, intentional code review process can be valuable, not just to the organization but also to team members. But as with any process, be wary of following the steps without attending to its purpose and values.