Just as a matter of curiosity, I will record here a couple of thoughts that came to my mind as I was analysing the results. This post will be a bit of a rant; please skip it if you don’t have the patience for grumbles.
It’s amazing how much reviewers differ from one another. I don’t even mean differences in expertise, knowledge, or intelligence, but just plain differences in taste.
Some felt really passionate about using short variable names; others hated the idea. Some accepted getter functions; others called them a performance drain. But the most interesting differences of opinion were in the comments I classified under “design” – something I am now tempted to call “rationalisation”.
The more experienced people made comments that were obviously heavily shaped by their experience. Don’t do it that way, do it this other way. Why? Don’t ask why, I know what I’m talking about. Is their way really better? It’s hard to criticise the more senior people. These are the most realistic code reviews: this is what happens in real companies, and it’s what I get at work these days.
Such comments are not about code errors, and while one may argue that uniform use of patterns and other major code structure is good for maintainability, I’m not convinced that a review is the right time to encourage it. Pre-commit reviews may force the author to make potentially significant changes, but I’m not sure the time spent on doing this actually pays off in the end.
Who does the best reviews? That’s another very interesting thing I found after tallying the reviews’ contents: it wasn’t the most experienced people who found the most errors; it seemed completely random. I think I’ve heard this somewhere before, probably in connection with Mike’s marking research.
What factors did I not account for that keep me from answering that question? There are a few that come to mind. Talent is certainly a strong one. And I’m putting my bet on personality too: if someone is fundamentally interested only in errors, they will put real effort into finding errors and not waste brain cycles on anything else. In my opinion, the count of real errors found (errors that would cause the software to not function as intended) is the ultimate measure of the quality of a review.
This brought up so many interesting questions! It makes me want to continue research in this field to find answers, but I have a deadline for now and can’t be sidetracked. Sadly, I don’t have time to learn what a great review contains under which circumstances, or what kinds of people are most likely to make great reviewers.