Code Reviews are Dead. Here's Why.

The code review is a long-established "first line of defense" that keeps engineering teams from unleashing hell upon the world. Years ago, the code review literally was a developer printing out their code, handing it to a senior engineer, and having it marked up with red pen like a college essay. The code review has come a long way - it has become an established part of development workflows everywhere, largely thanks to GitHub's pull request feature. Code reviews were part of the job description for senior developers and development managers everywhere - a pedagogical function that promoted stewardship over a code base. But that was then. It's time to pour one out for the code review.

Code reviews are incredibly human-intensive

Look folks - it's 2019! Humans had their day, but we have robots now! Any Renaissance CTO has to look at their organization and ask "where is the inefficiency?" All human-intensive tasks are inefficient - they are the debt of an organization that has decided not to invest in technology to automate them. Not all of these tasks are created equal, but among the tasks that are easy to eliminate, code reviews should be near the top of the Renaissance CTO's list. A software development organization can easily spend several hours per lead developer every day reviewing code deliverables from colleagues. Why do we do this?

In many organizations - this is just DNA. Code reviews are something that has always been done, and most engineering managers don't need product managers or executives knocking on the door asking "did anyone even look at this before it went out?" The truth is that most software managers don't spend significant time looking at code to understand the intent of the developer who produced it.

Most of the time with a code review, managers first need to understand the objectives of the change - be it a requirement, a user story, or a bug report. For many managers, this is the antithesis of why they became managers. Managers manage so that they can facilitate the work of others. But auditing the work of others requires a significant investment in understanding the low-level problem at hand - roughly the same amount of time the developer who wrote the code spent understanding it. This duplicative effort is inefficient - managers might as well be doing the work themselves at that point.

So most managers take shortcuts. They look for trivial things, or they just rubber-stamp code that hasn't been sufficiently reviewed. This makes the process of code review either ineffective in its analysis or downright fraudulent. The question then becomes: what was reviewed, and what was corrected? That implies a list of standards and guidelines. But if those are documented, shouldn't they be automated?
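As one illustration of what "documented and automated" can look like, many documented standards map directly onto linter configuration. Here is a sketch of a pylint `.pylintrc` fragment - the option names are real pylint settings, but the specific thresholds are assumptions, not recommendations:

```ini
# .pylintrc - a sketch of documented standards made machine-enforceable

[MASTER]
# Fail the run (and hence the build) below this pylint score.
fail-under=9.0

[FORMAT]
max-line-length=100

[DESIGN]
# Complexity limits a reviewer would otherwise eyeball.
max-args=5
max-branches=10

[BASIC]
# Naming conventions, straight out of the style guide.
function-naming-style=snake_case
```

Once standards live in a file like this, the "review" happens on every run of the tool, not when a lead finds the time.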

Code reviews delay delivery

Because code reviews are human-intensive and sit inside the software development lifecycle, they are subject to the availability, time, and speed of human beings - which means they extend the cycle time of software delivery. In one organization I've worked in, every pull request required two approvals before it could be merged to the mainline. While senior engineers on the team freely admitted to rubber-stamping all pull requests, that didn't mean the rubber-stamping happened quickly. I made it part of the daily meeting to review open pull requests and ensure that any more than a day old were approved prior to close of business. It was a sick software development culture, but it's easy to see how even in a healthy one, busy or distracted managers and leads can let the backlog of pull requests pile up.

One of the things I focus on with my software teams is the urgency of cycle time. Once an issue or task is understood, the path to delivery needs to be the quickest possible. This requires an understanding of tradeoffs, including the ability to recognize technical debt. But if the software development process installs phase-gates like code reviews that require a handoff between two people, that is the death knell for cycle time optimization. Every moment one person spends waiting for another's work to complete before progress can continue, more slack is introduced into the delivery.

So if engineering leads are rubber stamping work, and the standards that are supported by the organization are not being enforced or automated - and the end result is still delayed delivery, what value are code reviews adding?

Most everything important in code reviews can be automated

Most organizations draft standards for code quality. We see this in open source projects, enterprise projects, and everything in between. These may cover how code is to be formatted, how documentation must be provided, and in some cases how the code should be tested - and, in really exceptional organizations, how the approach to a solution can evolve over time without accumulating significant complexity. But it's nonsense in this day and age to say that humans should be auditing for these things:

  • Linters can guard against much of the nonsense that lead developers are checking for in code reviews today. Whether it's pylint, JSHint, PHPCS, or any number of other linting tools, these toolkits can be configured to check for formatting violations, naming convention issues, and most importantly: cyclomatic complexity. Developers can run them locally prior to check-in/merge, and CI/CD systems can run them automatically before integrating code - failing builds when code organization falls short, or when a developer hasn't thought through ways of simplifying their implementation - all without a human involved.
  • Unit tests and test-driven development, as abhorrent as they are to many skeptical managers, reap huge returns in reduced management oversight and improved code quality. By maintaining a baseline level of code coverage - the percentage of code executed by unit tests - developers self-audit their work and document what they were attempting to achieve, through assertions and test cases that demonstrate the business rules and other logic required to deliver a feature. Another side effect? They reduce code complexity.
  • Automated functional testing. The most expensive tradeoff of ditching code reviews is the need for higher-fidelity testing, whether functional or integration testing. While these tests take the most time to develop, they ultimately serve as the regression suite as future functionality gets bolted on, and they ensure that the business analysis of the task is baked into an automated assessment on every build.

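To make the unit-testing point concrete, here is a minimal Python sketch. The discount function and its business rule are invented for illustration - the point is how the test names and assertions document intent in place of a reviewer's questions:

```python
# Hypothetical business rule, invented for illustration: orders of $100
# or more receive a 10% volume discount.

def apply_discount(total: float) -> float:
    """Return the order total after any volume discount."""
    if total >= 100:
        return round(total * 0.9, 2)
    return total


# Each test case names the rule it enforces - documentation a reviewer
# would otherwise have to reconstruct from the diff.
def test_orders_under_threshold_pay_full_price():
    assert apply_discount(99.99) == 99.99

def test_orders_at_threshold_get_ten_percent_off():
    assert apply_discount(100.00) == 90.00
```

Run under pytest with a coverage threshold (e.g. `pytest --cov --cov-fail-under=80`, assuming pytest-cov is installed) and the build itself rejects undocumented, untested changes - no rubber stamp required.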
For the Renaissance CTO, every manual audit or quality check is a prompt to challenge the team: why do we need a human to do something that can be standardized and hence automated?

The case to keep doing code reviews: teaching software design

If there is one thing that code reviews should be able to do for an organization, it's providing a forum for senior leaders to mentor younger colleagues.

Back in the day, it was thought we could design systems - particularly object-oriented ones - using visual toolkits like UML. The tools would allow architects to build out the structure of the software, and engineers would implement the details - much like a building architect draws up blueprints while builders hash out the details of electrical systems, load-bearing walls, and so on. This never really materialized: UML became something that was sent to the plotter, and engineers still had to take that design and implement it.

But this doesn't mean that software architecture is dead. It means the code review can serve as a channel for architects to continuously communicate feedback to the development team and, perhaps as importantly, to manage technical debt - accepting flawed implementations for the sake of expedited delivery while ensuring work is queued up to pay down that debt. Those "debtor" tasks doled out by senior technologists are a great opportunity for the code review to shine as a forum for growth rather than a gatekeeper to production.

I'm sure that most organizations will never do away with code reviews - but I think it's time for us to see them for what they are: a relic of a time before CI/CD. When humans were doing all of the work of getting software to production - including putting it on a disk - code reviews were the only tool for development quality assurance. Now in 2019, code reviews are largely replaced by software packages that analyze our code for us, testing methods that give us metrics we can manage to, and build systems that give us continuous feedback. It's time for the traditional code review to die - even if its absence is filled by mentorship.

Ryan Norris

Ryan is the former Chief Product Officer at Medullan, CTO at Be the Partner and Vitals, and now is a CTO consultant at Osmosis Knowledge Diffusion and has projects in alternative education, digital therapeutics, and patient engagement.
