
Peer review, what's the fuss?

Carl Heneghan
Last edited 14th September 2012

For a researcher, the arrival of an article for peer review may set off feelings of angst, duty, resentment and sheer frustration. On a good day, the article fits with the research agenda, is of sufficient relevance to have an impact on healthcare, and it feels a worthy duty to undertake the review.

Recently, there has been considerable interest in the limitations of peer review as a credible system for improving the quality of submitted research.

Henderson’s article in the BMJ this year highlights the leaked emails at the Climatic Research Unit, the Lancet’s retraction of the Wakefield paper ‘the most controversial medical paper of the past 15 years’, and allegations from stem cell researchers that peer review was failing their field.

With regard to the latter, Professor Lovell-Badge states: "It's turning things into a clique where only papers that satisfy this select group of a few reviewers who think of themselves as very important people in the field is published."

One major problem is that studies which are scientifically flawed, or which offer only modest increments to the evidence base, often attract undue attention, particularly from the media, while truly original findings may be delayed or rejected. If you are not convinced, consider the case of H. pylori. The Gastroenterological Society of Australia rejected Barry Marshall’s abstract to present his research at their annual conference, deeming it in the bottom 10% of papers submitted. In 2005, Barry Marshall and Robin Warren were awarded the Nobel Prize for the discovery that peptic ulcers were caused by Helicobacter pylori.

Some notable changes have occurred in the journals, principally that reviewers are now asked to sign their reviews. The BMJ undertook a randomised controlled trial of signed versus unsigned reviews and found "it was acceptable to authors and reviewers, and that it made no significant difference to the reviews."

This is a double-edged sword, in that the quality of the review is probably unchanged. Yet, given there is a slightly greater tendency to recommend acceptance, reviewers are probably less likely to offend fellow researchers (the ones they know) who may be required to return the favour in the future. The BMJ is about to take this one step further by publishing its signed reviews alongside published papers, after a second randomised trial found this feasible and acceptable to authors and reviewers.

The amiable doctorblogs (see, I’m already in peer review mode) asked on Twitter: ‘Peer review is broken. But we know that. What wld you do instead? see @richard56j #bmj’

My one suggestion to improve the review process is not to see it as ending at journal publication but as starting there, thus allowing analysis of trial results over and above traditional peer review.

Publication of all relevant documents, which is now possible with the internet, including protocol reviews, ethics reviews, amendments to the protocol and the raw data, would allow interested parties to review further and discover the truth.

The question is: which journal is going to step up to the plate first?
