As for Peer Review, Code Review?

Interesting little tidbit crossed my Inbox today...

Only 8% of members of the Scientific Research Society agreed that "peer review works well as it is" (Chubin and Hackett, 1990, p. 192).

"A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research." (Horrobin, 2001)

Horrobin concludes that peer review "is a non-validated charade whose processes generate results little better than does chance." (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.

But, "Peer Review is one of the sacred pillars of the scientific edifice" (Goodstein, 2000), it is a necessary condition in quality assurance for Scientific/Engineering publications, and "Peer Review is central to the organization of modern science…why not apply scientific [and engineering] methods to the peer review process" (Horrobin, 2001).


Chubin, D. R. and Hackett, E. J., 1990, Peerless Science: Peer Review and U.S. Science Policy; New York, State University of New York Press.

Horrobin, D., 2001, "Something Rotten at the Core of Science?", Trends in Pharmacological Sciences, Vol. 22, No. 2, February 2001. (The online copies were accessed on February 1, 2009.)

Goodstein, D., 2000, "How Science Works", U.S. Federal Judiciary Reference Manual on Evidence, pp. 66-72 (referenced in Horrobin, 2001).

I know that we don't generally cite the scientific process as part of the rationale for justifying code reviews, but there seems to be a distinct relationship. If the peer review process is similar in concept to the code review process, and the scientific types are starting to doubt the efficacy of peer review, what does that say about code review?

(Note: I'm not a scientist, so my familiarity with peer review is third-hand at best; I'm wide open to education here. How are the code review and peer review processes different, if, in fact, they are different?)

The Horrobin quote ("why not apply scientific [and engineering] methods to the peer review process"), in particular, makes me curious: don't we already apply "scientific [and engineering] methods" to the peer review process? And can we honestly say that we in the software industry apply "scientific [and engineering]" methods to the code review process? Can we enumerate those methods? Or do we just trust that intuition and "more eyeballs" will help spot any obvious defects?

The implications here, when set alongside the fundamental open source principle that "more eyeballs is better", are interesting to consider. If review is not a scientifically proven or "engineeringly sound" principle, then the open source folks are kidding themselves in thinking they're more secure or better engineered. And if we conduct a scientific measurement of code-reviewed code and find that it is "a non-validated charade whose processes generate results little better than does chance", then at least we've conducted the study, and we can start thinking about ways to make it better. (I do wish the email author had cited the sources behind the statement "This has been statistically proven", though.)
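For what it's worth, the core of such a measurement isn't hard to sketch. Here's a minimal, purely illustrative Python example comparing defect densities between reviewed and unreviewed code with a two-proportion z-test; every number in it is made up, and a real study would need actual defect counts, comparable codebases, and controls for confounding factors like developer skill and module complexity:

```python
# Hypothetical sketch: does reviewed code have a lower defect rate than
# unreviewed code? All figures below are invented for illustration only.
import math

def two_proportion_z(defects_a, loc_a, defects_b, loc_b):
    """Z statistic for the difference between two defect rates (defects per LOC)."""
    p_a = defects_a / loc_a
    p_b = defects_b / loc_b
    # Pooled rate under the null hypothesis that both groups are the same
    p_pool = (defects_a + defects_b) / (loc_a + loc_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / loc_a + 1 / loc_b))
    return (p_a - p_b) / se

# Made-up counts: 40 defects in 10,000 reviewed lines vs. 90 in 10,000 unreviewed.
z = two_proportion_z(40, 10_000, 90, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a real difference at the 95% level
```

The statistics are standard; the hard part, as the peer-review critics point out, is gathering honest data rather than running the arithmetic.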

I know this is going to seem like a trolling post, but I'm genuinely curious--do we, in the software industry, have any scientifically-conducted studies with quantifiable metrics that imply that code-reviewed code is better than non-reviewed code? Or are we just taking it as another article of faith?

(For those who are curious, the email that triggered all this was an invitation to a conference on peer review.

This is the purpose of the International Symposium on Peer Reviewing (ISPR), being organized in the context of The 3rd International Conference on Knowledge Generation, Communication and Management (KGCM 2009), which will be held on July 10-13, 2009, in Orlando, Florida, USA.

I doubt it has any direct relevance to software, but I could be wrong. If you go, let me know of your adventures and conclusions. ;-) )