1. According to research, there is no more cost-effective method of bug detection known than careful code review. (Steve McConnell's Rapid Development cites internal IBM statistics, but I completely believe it.) This is before you get to any other benefits, such as cross training, code consistency, etc.
2. According to widely reported experience for over 50 years, there is no faster way to get a lot of programmers upset than to review their code.
Therefore a primary goal in code review is to avoid conflict. The following principles have been found to help.
1. Do not involve management. When people have reason to believe that their manager will see the result of code review, the odds of defensive behavior skyrocket.
2. Review only small pieces. If you find a problem in a small piece, it can be easily fixed and you move on. Reviewing a big piece is a lot more work, and fixing even a minor design flaw can mean a lot of rework.
3. Make reviews fast. If you don't want people to write a lot of code without getting any of it reviewed, then the reviews themselves cannot be allowed to take a long time.
4. Make reviews required. If people don't have to do them, they won't. And the ones who are most likely to skip are the ones who, 6 months later, you'll wish had.
In this spirit you might consider http://code.google.com/p/rietveld/ as a way to make sure that your team reviews early, reviews often, and reviews in small chunks. (Some will love it, some will hate it. Your source code quality will improve though.)
We've found Gerrit (a git equivalent of Rietveld code review) to be significantly less usable. We were probably using it wrong, but it seemed like there wasn't a week that went by where we didn't have to re-initialize a local repo because the code review had gone off the rails.
I think code reviews are fine as long as people are learning and building rapport with each other. When either of those are missing the reviews can be a waste of time and even detrimental to working relationships.
YMMV to say the least. You can find yourself working with the friendliest person who teaches you something useful every time you look over code. Or you can end up working with someone whose emotional maturity level can only go up, to put it nicely. Combine the latter case with someone who isn't quite old enough to "know what they know and know what they don't know" and you have a recipe for disaster.
> It worked exceptionally well. Most errors were caught at this phase, or immediately before, when we were reading our own code just before printing it.
> We had a friendly environment.
> We had a clear pre-established set of rules. We just discussed what category an error fell into.
> Yes! It was also a way to learn to think about code and avoid similar errors next time, but some errors occurred again anyway.
Yeah, I loved them. Everyone on the team has some little thing they know that no one else does, and everyone on the team has some little gap in their knowledge. You can spread the former and diminish the latter.
We didn't have any kind of rules for the reviews, but it ended up working great. That was a really nice team.
There was a document, not too long, about ten, maybe twenty pages, that covered a lot, from code formatting to cohesion, complexity, interface size... it was a very condensed summary of Code Complete with some extra material from other books.
Errors were categorized as major (show-stoppers), minor (annoyances), graphical (user visible typos), design (i.e. bad layered code, fat interfaces) and style (formatting, lack or excess of comments), IIRC.
There was also an authority (the boss) that was flexible enough, but had the ultimate power of solving disagreements, so we didn't get caught in endless discussions.
My main observation from managing a small group was that you could easily correlate the developer's skill to the feedback they gave in code review. The ones that illuminated possible bugs or ambiguities clearly stood apart from those that merely pointed out formatting inconsistencies. And by far the worst devs in the group were the guys who said "looks good".
I love the group therapy metaphor. Good regular communication about what we do, properly curated, really improves self-confidence.
When I was sixteen, I went to work for a newspaper in Hong Kong. It was a rag, but the editor taught me one important lesson. The key to a great story is not who, or what, or when, but why.--Bond Villain Elliot Carver
I like "Why'd you do that" even more than "How." How I can get from the code. Why is going to be enlightening, entertaining, terrorizing, uplifting, or some combination of all four.
More and more I realise that great software comes out of teams that work well together, talk about the code daily and most importantly want each other to succeed.
Code reviews might be a way to encourage that. But there is an honesty needed. It's often called a "collegiate" atmosphere - usually it's based on the idea that everyone around the table has earned the right to be there, they all know and respect that, and importantly no-one can simply fire them or ban them arbitrarily - then you start to see honesty in code and discussions.
Are you sure the word you're looking for isn't "collegial"?
With respect to the term, I think "collegiate" is actually very appropriate, because the best group work I've ever done was in school. Everyone was okay if someone disagreed with their work, or didn't like it, and everyone was also okay with disagreeing with someone else.
That's exactly what code reviews are used for at our company. You learn things you wouldn't have learned otherwise and a stylistic consensus grows bottom up as people discuss the whys of their personal style.
Occasionally bugs are caught. We also timebox reviews to 30 minutes and they are asynchronous (review the last week's commits on github).
Reviews are a great way for both reviewer and code author to learn, not just to check for correctness. If it were only that, the reviewer could probably go through the codebase and fix it faster (if it needed fixing).
Critically, both should understand that it's the code that is under review, not the author of the code; this avoids defense mode and frames the discussion as "here's a problem, here's one solution, let's see if we can do better together".
To me, one of the best things about pair programming is the non-confrontational, discussion-spawning nature of it.
Code reviews are by definition a bit judge-y, and they catch a lot of issues after the code is already written, which yields a certain reluctance to change.
When I'm pairing on something, it's rare to get in the my code vs your code thing. It's almost always our code, our ideas.
Our formal policy is that at least one other team member must review a change before it is checked in. My impressions:
1. Many problems get fixed before the review even goes out. When you know others are going to be looking at the code, you tend to take a much closer look before sending it out. Also, you tend to clean up those quick hacks you put in to "just make it work".
2. It keeps the team informed. There's always at least one other person out there that knows what you have been working on. This gives you someone to bounce ideas off, but it also keeps a team together, and forces people out of getting silo'd into one particular area.
3. It is often more valuable to send out a review before it's done. If you are working on a new framework / library / class, it's better to design and prototype it, then send out a review to some people for feedback instead of waiting until you are done. They might point you in a different direction, or offer criticism, but it's much better to hear that before you have spent weeks on implementation than after.
4. Make the review process universal. If it isn't required, people won't do it. And it should be required for everyone, not just the new guys. Senior people can still learn things too, and if they are really writing great code then it should be shared with the team as an example of how to do things right.
5. Make the review process painless. Make it easy for people to generate diffs that can be shared and easy to comment on; there are good tools out there for this. If the barrier to checking in changes is too high, a lot of easy fixes will get ignored because people just can't be bothered.
Code reviews are an antipattern and should die in a fire. They slow down the development process without bringing any benefits.
Some say code reviews prevent bugs. They are wrong. Testing code and static analysis prevent bugs. Require all code to have unit tests and run them automatically. Run static analysis automatically. If you have developers who don't write tests, fire them. Code reviews only give you a false sense of security. You figure: since two people looked at it, it therefore must be good; forgetting that the reviewer just skimmed it over to get it out of their queue.
Some say code reviews enforce style. Assuming you care about style: why not just have your IDE or static analysis tool enforce it for you?
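To make that concrete, here's a minimal sketch of mechanical style enforcement (assuming ESLint; the tool choice and the specific rules are just illustrative, not anyone's actual standard from this thread):

    // .eslintrc.js - a lint configuration that settles style questions
    // mechanically, so reviewers never have to argue about them.
    module.exports = {
      env: { browser: true, node: true },
      rules: {
        indent: ['error', 2],          // two-space indentation
        quotes: ['error', 'single'],   // single-quoted strings
        semi: ['error', 'always'],     // require semicolons
        camelcase: 'error',            // camelCase identifiers
        eqeqeq: 'error',               // require === over ==
        'no-unused-vars': 'error'      // flag dead variables
      }
    };

Wire that into the build or a pre-commit hook and style drift gets flagged before a human ever reads the diff.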
Some say code reviews help share knowledge. That may be true, but it should be unnecessary. Standups share knowledge. Wikis share knowledge. Comments share knowledge. Code should share knowledge. If you have code hitting trunk that every developer on your team can't understand easily and quickly, then you need to start finding new developers.
I can write you code that will pass all static analysis tests with full code coverage and complete dependency injection that still makes you taste your lunch in the back of your throat. I think any good programmer could.
> forgetting that the reviewer just skimmed it over to get it out of their queue.
There's a difference between cargo cult code reviews and an actual, formal code review.
There is a ~9 year old buffer overflow vulnerability in sendmail (crackaddr) that no static analyzer can identify even today (and they are specifically targeting it as a test case).
And what exactly is wrong with the code that passes all the tests and works? Its aesthetics offend you? Shouldn't the first priority of code be to just work?
Beauty is only a nice-to-have feature. If the pretty way is about as fast to produce as the "lunch in the back of your throat" model, sure, let's go pretty. But if it's going to cost me 50% of my developer's time, I'll take the quick and dirty every time.
It's not so much about "beauty", but long-term maintainability of code makes development faster in the long run and reduces the murder rate within your dev team.
The wrong part here is in the "passes all the tests and works" assumption. In my experience, static analysis, unit tests and reviews all help minimize errors (amongst their other benefits); ideally you'd make use of all three.
> Code reviews are an antipattern and should die in a fire. They slow down the development process without bringing any benefits.
Woah slow down there soldier. Code reviews are great for many reasons:
1) Introducing new technologies or techniques you used to solve a problem.
2) Letting your styles mesh with your team.
3) Focusing on whether the concern/problem has truly been addressed.
Also, unit testing isn't as widespread as you may think. Many of the projects I've worked on didn't have the infrastructure in place to even start unit testing. Also, ever try to unit test Javascript in the browser? It's possible, but it sucks pretty hard.
If you are on a team of 2-3 hardcore devs, then you're probably more correct than not. That's not usually the case.
1) Lunch and learns or brown bags or whatever your org prefers are better. Code reviews are only 1-to-1. Actual classes are more efficient.
2) Use tools to enforce style if you care. Personally, I think it's a waste of time.
3) If you have devs merging code to trunk that isn't fixing the bug and creating the feature requested, then those devs need replacing.
RE unit testing: I realize it's not that widespread. But anyone developing software and not unit testing is doing it wrong. They should take all the time they waste on code reviews and instead spend it on building out their test suite.
And unit testing Javascript is not only possible, but easy. We do it with Jasmine. It works great. The key is to write good, testable JS instead of the mess of spaghetti that passes for Javascript code in most places.
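For a flavor of what that looks like (a made-up function and spec, just to sketch the shape of a Jasmine test; nothing here is from an actual codebase):

    // A small, dependency-free function like this is trivially testable.
    function formatPrice(cents) {
      if (cents < 0 || Math.floor(cents) !== cents) {
        throw new Error('cents must be a non-negative integer');
      }
      return '$' + (cents / 100).toFixed(2);
    }

    // The matching spec: describe/it/expect are Jasmine globals.
    describe('formatPrice', function () {
      it('formats whole-dollar amounts', function () {
        expect(formatPrice(500)).toBe('$5.00');
      });

      it('keeps the cents', function () {
        expect(formatPrice(1999)).toBe('$19.99');
      });

      it('rejects negative input', function () {
        expect(function () { formatPrice(-1); }).toThrow();
      });
    });

The hard part isn't the spec; it's factoring the code so that small functions like this exist at all instead of being buried in DOM-manipulation spaghetti.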
>RE unit testing: I realize it's not that widespread. But anyone developing software and not unit testing is doing it wrong. They should take all the time they waste on code reviews and instead spend it on building out their test suite.
This is a very good point. If there are very basic things not being done (or done well), focusing on those things is more important than ceremonial code-reviews - and in the beginning it seems code-reviews are often only ceremonial, and some shops never get past this stage. I've never seen them work really well aside from cases where they happen organically between two co-workers who like and respect each other.
I think that approach would work for a typical developer of a typical web app. But not all places are the same. You can't easily test many things. You can have correct tests that miss some cases. You can have incorrect tests matching incorrect code. You can have code fixing the result rather than the cause.
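A contrived example of "incorrect tests matching incorrect code" (both the function and the spec below are hypothetical): the test passes, coverage looks fine, and the bug ships anyway.

    // Splits a full name into first and last name.
    // Bug: for "Anna Maria Silva" it returns "Maria" as the last name
    // and silently drops "Silva".
    function splitName(fullName) {
      var parts = fullName.split(' ');
      return { first: parts[0], last: parts[1] };
    }

    // The test encodes the same blind spot as the code, so it passes.
    describe('splitName', function () {
      it('splits first and last name', function () {
        expect(splitName('Ada Lovelace')).toEqual({ first: 'Ada', last: 'Lovelace' });
      });
    });

A reviewer skimming the diff is far more likely to ask "what about middle names, or single-word names?" than the green test suite is.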
Basically unless you trust developers to never be wrong, I'd do reviews. And then if you do trust them, I'd still do reviews to make sure everyone learns what's happening outside of their area and where common utilities can prevent code duplication.
In short - there are things that cannot be realistically automated or simplified into compilation checks. If they don't occur in your environment - great. When they do, look again at code reviews.
For style, there are points of style that no automated tool that currently exists can enforce, such as 'good naming'. At the moment that looks like an AI-hard problem.
For your point about sharing knowledge, your argument works against all forms of sharing knowledge. You may as well say that wikis are unnecessary, because standups, code, comments, and code reviews share knowledge. What you have missed is that these different modes of knowledge sharing work better at sharing different kinds of knowledge, which is why you (should) want all of them.
Also, good developers are not really fungible. Developer X may be really good at making code flow, so that what it is trying to do is not obscured by how it is doing it, and developer Y might have a really good knowledge of algorithms and an eye for optimisation. In that case, having them review each other's code will always result both in better code and in (eventually) better developers.
As far as it goes, I have seen plenty of badly done, useless code reviews. I have also had really well done code reviews, where someone would email me an annotated diff of one of my commits, or where I would pair for a few hours with someone going over some thorny code I or they had committed and making it less thorny, or where we would go back and forth refactoring some piece of code while talking over the whys and wherefores. If I had only had bad code reviews I would be much more sympathetic to the point you are making, but as it is I just think that you have never been involved with a competently done code review.
That's all true, but code reviews certainly share a lot of information and feedback specifically related to the context of day-to-day development. That certainly will increase your productivity by teaching you about mistakes and, at the same time, letting you evangelize someone else in your team/organization. You won't find the same amount of information for your project with as much frequency as you do in code reviews.
Yes. At Tingo.com, my team of 11 does not do code reviews. Our code base is big but not huge: 151k lines of Java and 125k lines of JS (not a great measure, but it gives some idea of magnitude). We do extensive testing and static analysis. We ship code every day, and have been doing that for the last 10 months. We end up having to patch production to fix bugs pretty infrequently -- less than once a month.
There's a big, big gap between "Code reviews are an antipattern and should die in a fire. They slow down the development process without bringing any benefits." and "My startup does extensive testing and static analysis but no code reviews; we don't have very many bugs in production". I think you would have come across better saying something like this from the outset, and with a lot less attitude.
You've provided one uncontrolled anecdote that contradicts all the objective research I've seen [1]. I don't find that very convincing. Not only does code review identify bugs, it does so much more effectively than any other method tested.
No, but neither is there quantitative evidence that code reviews help improve software quality. Code reviews are just always assumed to be good-- they're a basic part of the 'good software engineering' cargo cult.
I got rid of them here because I've personally never seen a beneficial effect at any of the places I've worked. Easy to spot bugs still made it to production. Crappy, inelegant code still made it to trunk.
Getting rid of code reviews and making developers personally responsible for what they commit has improved both of those aspects for us.
> No, but neither is there quantitative evidence that code reviews help improve software quality.
You might want to check out the book "Code Complete", which says in Chapter 24:
"...the average defect detection rate is only 25 percent for unit testing, 35 percent for function testing, and 45 percent for integration testing. In contrast, the average effectiveness of design and code inspections are 55 and 60 percent."
There is indeed quantitative evidence that code reviews help improve software quality. Steve McConnell's "Code Complete" refers in detail to such evidence.
Mandatory code reviews are about politics and efficacy. There's many people in the programming profession who maintain the appearance of producing a lot of software. However, when you view their large number of commits, you see they do very little or nothing at all. These people don't want their competitors creating a lot of software so they will try to manipulate the politics towards a belief that in all cases:
1. A high volume of very small commits is better than a low volume of large commits, regardless of the functionality they provide.
2. Number of commits is an effective metric for productivity and contribution.
3. Very small amounts of code are much better than large amounts of code, regardless of the functionality it provides. Code is evil, so the person that produces less of it is more saintly.
4. Mandatory code reviews will find lots of bugs. Naturally, they turn into complaints about spaces and formatting. EVERY TIME.
These are only true in a very limited set of circumstances. One example is when working in maintenance on an existing project. Small changes are appropriate because the product is finished.
The reality is they are trying to prevent productivity and efficacy of other developers, just as a business would try to increase the costs of their competitors down the street. PHBs aren't able to understand the impact of commits and new code, and will judge productivity by number of commits, large or small, and code review complaints rather than large amounts of working, tested software. Less scrupulous people definitely try to take advantage of this in a political corporate environment.
I have done code reviews in a scrum team setting. Often there are tasks that only one team member will implement; it would simply be too many chefs to have two developers on it, and the others are needed to implement other things.
What we found very useful is to do a pre-review of the existing code to discuss how it works, but only after everyone has done their own one-hour self-study of the code (actually, self-study time tends to last until one guy interrupts saying he groks it and can describe it). Then we discuss the approach for implementation, but only one guy goes ahead and implements it. The other developers have similar reviews on their areas.
Afterwards, when it has been implemented, we all come together and look at the implemented solution, again first with self-study of the commits, then a review sitting in front of the developer's desk.
By only being involved as a reviewer, you get a very good understanding of the implementation. You understand not only how it works, but why it was implemented that way. It also leads to more "talk about the code" in the team. Everybody gets a common understanding of some area, you can talk about code improvements.
Is there really even a "right" or "wrong" in this process? Isn't it more of a matter of being able to support your choice of design trade-offs with cogent arguments? And being able to persuade others?
There are no doubt some "Why" questions that a developer can ignore: the ones that come from people who have not done their homework and thus have failed to locate the (obvious) answers. But any developer who thinks he can ignore each and every "Why" question, even if he's highly competent, is not someone I would trust with really important stuff. Is it that he is afraid of being "wrong"? That he does not have a good argument to make? Is he hesitant to say, "Honestly, I don't know. That is just the solution I chose."? I don't see anything "wrong" with that. No single developer is going to be able to think of everything, of every possible solution or scenario. I thought that is an important reason behind something like code review.
I've had good success doing public code review sessions at conferences where the goal was simply to generate discussion points rather than to catch errors.
The discussion centers around how to use the language more effectively and sometimes ventures into architectural issues.
In my experience, code reviews get contentious when there is subjective feedback involved, e.g. for naming or class design. In that case, it helps to discuss and resolve the issues in person. Plain correctness errors or formatting issues get resolved quickly over email.
Some other things from personal experience:
1. A thorough code review is time consuming but the team will appreciate it despite the disagreements.
2. Be polite and put forth your feedback with clarity and based on facts.
3. Practice what you preach - follow the same quality standards that you expect of others.
Code reviews can definitely suck, but sometimes a short-term cost has to be accepted for long-term gain, and this is definitely one of those cases in my opinion.
Code reviews help enable a 'hivemind' within a project. They help other engineers learn how to do what you do, and help you write readable code that enables other engineers to learn what you do, which helps to accelerate code velocity in the end. The resultant sense of coherency in the code is immeasurable.
When I participated in code reviews (of C code) when I worked at an aerospace company, I was the ONLY one who actually reviewed the code and I always found a few serious errors.