Wednesday, June 19, 2013

The magical results of reviewer experiments

So, for those of you who aren't as familiar with it, the way the peer review process theoretically works is that you submit your paper, the editors (hopefully) send it out for review by other scientists, and those scientists give their comments.  Then you have to address those comments, and if you do so to the editor's satisfaction, the paper will probably be accepted.  Often the reviewer comments go something like "In order to make the claim that XYZ, the authors must show ZYX via experiment XZY."  And, amazingly, the authors seemingly ALWAYS get the results they want (i.e., the ones that will get the paper accepted).  Now doesn't this seem a little bit unlikely?  Of course it does!  But at that point, the pressure to close the deal (for all parties involved) is very high, and so, well... things just seem to work out. :)  Does this seem shady to you?  Is that really how it goes down?  Well, let me ask you: have you ever reviewed a paper, suggested an important experiment, and had the authors actually pull the claim in response to a negative result?  Certainly hasn't happened to me.

I think the idea with peer review is that it raises the quality of the paper.  Overblown, in my estimation.  Because of this pernicious "reviewer-experiment-positive-result" effect, I don't think the additional experiments really make a bad paper better.  Good papers tend to get reviews asking for peripheral experiments that are usually a waste of time, and bad papers tend to generate a list of crucial controls that somehow always seem to work out in the authors' favor.  Note that I'm not (necessarily) saying that authors are being underhanded.  It's just that reviewer experiments tend to get much less care and attention, and since there is now a very strong vested interest in a particular result, the authors are much more likely to analyze the hell out of their data until it gives them what they want.  And once they get something vaguely positive, you can't really say much as a reviewer.  Here's a hypothetical example, very loosely based on experience...

Reviewer: "If the authors' model is correct, then the expression of gene A should increase upon induction."

Authors: "We thank the reviewer for their very insightful comment [gritting teeth behind fake smile].  We have now measured expression of gene A upon induction and found that it increased 1.13-fold with a p-value of 0.02 (see supplementary figure 79).  We believe that this result has greatly strengthened the conclusions of our manuscript."

What the reviewer says: "The authors have satisfied my concerns and I now recommend publication."
What the reviewer thinks: "Whatever..."

What is the reviewer to do?  The authors have made a measurement consistent with the hypothesis and shown it to be statistically significant.  Now, if the authors had encountered this untoward result in their initial experiments and the point were indeed critical, would they have continued to follow up on their claims?  Probably not.  But now there's a vested interest in the publication of a project that may have taken years, and so, well, time to bust out the z-score.
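To make that last point concrete, here's a rough simulation sketch of what I mean by "busting out the z-score."  Everything in it is hypothetical (the expression values, sample sizes, and the little menu of analysis tweaks are made up for illustration), but the pattern is general: with zero true effect, a motivated analyst who tries a handful of defensible-looking analysis choices and keeps the best one will hit p < 0.05 far more often than the nominal 5%, without anyone consciously cheating.

```python
# Illustrative sketch only: how analysis flexibility under a vested interest
# can manufacture "significant" reviewer-experiment results.  The setup and
# the analysis variants below are hypothetical, not from any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def p_values_for_one_experiment(n=8):
    """Simulate control vs. induced expression with NO true effect,
    then return p-values from a few 'reasonable' analysis variants."""
    control = rng.lognormal(mean=0.0, sigma=0.4, size=n)
    induced = rng.lognormal(mean=0.0, sigma=0.4, size=n)  # same distribution

    variants = []
    # 1. Plain two-sided t-test on raw values
    variants.append(stats.ttest_ind(induced, control).pvalue)
    # 2. t-test on log-transformed values
    variants.append(stats.ttest_ind(np.log(induced), np.log(control)).pvalue)
    # 3. One-sided test (we "expect" an increase, after all)
    variants.append(stats.ttest_ind(induced, control,
                                    alternative="greater").pvalue)
    # 4. Drop the most inconvenient control point as an "outlier"
    trimmed = np.delete(control, np.argmax(control))
    variants.append(stats.ttest_ind(induced, trimmed,
                                    alternative="greater").pvalue)
    return variants

n_sims = 5000
hits = sum(min(p_values_for_one_experiment()) < 0.05 for _ in range(n_sims))
print(f"'Significant' in {hits / n_sims:.0%} of experiments with no true effect")
```

Each individual variant is something a reasonable person might defend in a rebuttal letter; it's the freedom to pick among them after seeing the data that does the damage.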


What to do about it?  No clue...
