publication bias

A recent article published in the BMJ raises questions about the extent and type of publication bias that exists in the literature. Publication bias is the selective publication of studies based on their results, such as publishing only studies that demonstrate a drug works while withholding studies that demonstrate harms.

The study authors, including Ben Goldacre, author of the best-seller Bad Science, explored the potential implications of study funding and high reprint orders. They contacted the editors of the top general medical journals (i.e. JAMA, Lancet, NEJM, Ann Intern Med, and BMJ) and requested information on the 20 articles with the highest number of reprint orders. After matching the articles with controls, the authors evaluated whether study funding (i.e. industry, mixed, other or none) was associated with higher numbers of reprints.

The results are telling. The Lancet led the way with a median of 126,350 reprints for its top articles (range 24,000 to 835,100). The BMJ was a distant second with a median of 13,248 (range 1,000 to 526,650). Unfortunately, JAMA, NEJM and Ann Intern Med did not provide information.

Overall, compared with controls, papers with high reprint orders were considerably more likely to be funded by the pharmaceutical industry (odds ratio 8.64, 95% CI 5.09 to 14.68). In addition, the cost of reprint orders ranged from £4,002 to £1,551,794: reprints are evidently a lucrative source of supplementary income for journals.
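For readers less familiar with odds ratios, here is a minimal sketch of how such a figure is derived from a two-by-two table. The counts below are invented purely for illustration and are not the BMJ data; the paper itself used a matched analysis, whereas this is the crude calculation with a Woolf confidence interval.

```python
import math

# Hypothetical 2x2 table (illustrative counts only, NOT the BMJ data):
# rows = high-reprint papers (cases) vs matched controls
# columns = industry funded vs not industry funded
cases_exposed, cases_unexposed = 80, 20        # high-reprint papers
controls_exposed, controls_unexposed = 30, 70  # matched controls

# Odds ratio: odds of industry funding among cases / odds among controls
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Woolf 95% confidence interval, computed on the log-odds scale
se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed
                      + 1 / controls_exposed + 1 / controls_unexposed)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

An odds ratio of 8.64 therefore means the odds of industry funding among the high-reprint papers were more than eight times those among their matched controls.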

While not designed to detect publication bias, the study highlights the importance of thinking outside the box. Evidence-based medicine is filled with cutting-edge issues that are continually evolving and emerging. Do you think that a paper with potentially high reprint orders may affect an editor’s decision to publish? Should journals disclose the number of reprints for each article?

If you are keen to learn more, consider attending Evidence Live, a conference unlike any other event in healthcare, bringing together the leading speakers in evidence-based medicine from all over the world. The conference will include a session dedicated to Publication Bias at Evidence Live 2013 with an international line-up of speakers including Doug Altman, An-Wen Chan, Tom Jefferson and many more.

What do you think are undiscovered sources of publication bias? Here's your chance to share your thoughts with the experts at the University of Oxford, 25-26 March 2013.

*Note: this blog has also been posted on Evidence Live Blog.

Publication bias: big problem for children

Peter Gill
Last edited 6th May 2012

A recent study in the journal Pediatrics reported that only 29% of clinical studies in children have been published. This finding reinforces previous work showing significant publication bias in paediatric research. It is a cause for serious concern.

What is publication bias? Essentially, it is the selective publication of studies based on their results, such as publishing only studies that demonstrate a drug works while withholding studies that demonstrate harms.

Publication bias is a serious problem in healthcare and can have a large influence on treatment decisions by limiting the information available. Researchers have demonstrated substantial publication bias in certain areas, such as with the antidepressant medication reboxetine.

Several initiatives have been spearheaded to help reduce publication bias. The creation of open-access journals has shifted the focus from the importance of the results (as judged by a journal editorial committee) to the methodological rigour with which the study was completed.

More important, however, has been the creation of online trial registries, such as ClinicalTrials.gov, launched in 2000. These registries serve as central databases of all current and ongoing clinical studies. Registration is optional; however, in 2005 the ICMJE made registration of clinical trials a prerequisite of publication. Although the ICMJE does not represent all journals, this sent a strong message about the importance of registration.
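As an aside, ClinicalTrials.gov can be queried programmatically. The sketch below assumes the registry's current v2 REST API; the endpoint, parameter names and response fields are my assumptions and should be checked against the official documentation.

```python
import requests  # third-party: pip install requests

# Sketch: search ClinicalTrials.gov for registered paediatric studies.
# Endpoint and field names assume the current v2 API; verify before use.
BASE = "https://clinicaltrials.gov/api/v2/studies"

params = {
    "query.term": "children",  # free-text search term
    "pageSize": 10,            # number of records to return
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()

for study in resp.json().get("studies", []):
    ident = study["protocolSection"]["identificationModule"]
    print(ident["nctId"], "-", ident.get("briefTitle", "(no title)"))
```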

However, despite the creation of trial registries, fewer than half of US National Institutes of Health (i.e. government) funded trials in children were registered on ClinicalTrials.gov. Another important finding was the lack of information included in the registries. One-third of all clinical studies terminated early did not provide any information about why they were stopped. The situation was similar for suspended studies, with one quarter providing no information.

Were these studies stopped because of harms? Were the investigators no longer able to recruit children to enroll? Whatever the reason the studies were stopped, this information must be made public.

Registration of all clinical studies involving children must be made mandatory. This is the only way to minimise publication bias and increase the reporting of research. It would create massive industry uproar, but is it ethical to enroll children in a clinical study without registering it publicly? At a minimum, any trial that receives government funding must be registered.

However, registration of studies is only one element of the formula. What about the dissemination of results? Fewer than 10% of completed studies in children had results posted and publicly available. With the low publication rates of registered studies, and the even lower rate of posting results, how much information is still missing?
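To get a rough sense of the answer, here is a back-of-the-envelope sketch combining the figures quoted above. The overlap between studies that are published and studies that post results is not reported, so the bounds below are an assumption on my part, not a result from the Pediatrics paper.

```python
# Rough bounds on how many paediatric studies have ANY publicly
# available findings, using the figures quoted above. Note the
# denominators differ slightly in the source (all studies vs
# completed studies), so treat this as an order-of-magnitude sketch.
published = 0.29        # share of studies published in a journal
results_posted = 0.10   # upper bound: "fewer than 10%" posted results

# If every study that posted results was also published (full overlap),
# coverage is just the published share; if the groups are disjoint,
# coverage is their sum. The truth lies somewhere in between.
lower_bound = published
upper_bound = published + results_posted

print(f"Studies with any public findings: {lower_bound:.0%} to {upper_bound:.0%}")
print(f"Studies with nothing public: {1 - upper_bound:.0%} to {1 - lower_bound:.0%}")
```

On these figures, somewhere around six or seven in every ten paediatric studies have nothing publicly available at all.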

Indeed, progress has been made to increase the quality and transparency of clinical studies in children, but more is needed. We cannot assume that because trial registries exist they are being used. Complacency must be replaced with compliance. It seems that, more often than not, the little ones have the biggest problems.

Comparative effectiveness research or lack thereof

Peter Gill
Last edited 15th January 2012

An earlier TrustTheEvidence.net blog post on the geometry of evidence described the importance of network meta-analyses. When comparative effectiveness (CE) research is lacking, these indirect methods of analysis compare the results from two or more studies that have one treatment in common.
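To make the indirect approach concrete, here is a minimal sketch of the standard (Bucher) indirect comparison, which estimates the relative effect of two drugs never compared head-to-head from their separate placebo-controlled trials. The numbers are invented for illustration.

```python
import math

# Hypothetical trial results (invented for illustration):
# drug A vs placebo and drug B vs placebo, each summarised as a
# log odds ratio with its standard error.
log_or_A_vs_placebo, se_A = math.log(0.60), 0.15
log_or_B_vs_placebo, se_B = math.log(0.80), 0.12

# Bucher indirect comparison: the common comparator (placebo) cancels,
# and the variances of the two estimates add.
log_or_A_vs_B = log_or_A_vs_placebo - log_or_B_vs_placebo
se_indirect = math.sqrt(se_A**2 + se_B**2)

or_A_vs_B = math.exp(log_or_A_vs_B)
lower = math.exp(log_or_A_vs_B - 1.96 * se_indirect)
upper = math.exp(log_or_A_vs_B + 1.96 * se_indirect)

print(f"Indirect OR (A vs B) = {or_A_vs_B:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

Note how the indirect estimate is always less precise than either trial alone, which is one reason head-to-head CE studies matter.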

What is comparative effectiveness research? To quote the 2009 report of the US Federal Coordinating Council for Comparative Effectiveness Research to President Obama, it is defined as the:

“generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings”

Additional studies that compare one drug to placebo are not particularly useful when we already know the drug works. The real challenge of evidence-based practice is determining which treatment to use when all ten available drugs are better than placebo. How do clinicians decide which one to prescribe? All too often, decisions are built on studies lacking active comparators. This is not high-quality care for patients.

A recent study published in PLoS ONE evaluated trials registered in ClinicalTrials.gov that focused on the top 25 topics identified as priority areas by the US Institute of Medicine (e.g. treatment of atrial fibrillation). The authors looked at studies conducted in the US between 2007 and 2010 and determined the prevalence of CE research.

Despite the importance of this research methodology, only 22% of studies were CE studies, and their characteristics varied substantially by funding source. Studies funded primarily by industry had the shortest duration of follow-up and were more likely to report positive findings than studies with any government funding.

As usual, children get left out. Industry-funded studies were less likely to enroll children than government or nonprofit funded trials. The lack of controlled trials in children is already a problem, and there may be a perception among drug manufacturers that testing drugs in children brings the risk of increased liability.

The authors hypothesise that the increase in CE research will lead to an increase in the number of studies that fail to support new interventions. Not good for big pharma, but why?

First, trials with inactive comparators (i.e. placebo) are more likely to achieve favourable findings. By contrast, CE studies tend to produce conservative results regarding the superiority of a therapy compared with other active treatments.

Second, industry funded the majority of drug and device CE studies, meaning that most were designed and conducted by the company marketing the product. There is substantial evidence that such studies are more likely to report positive findings supporting the use of a product. The PLoS ONE study provides further evidence that, even in CE research, industry-funded studies were more likely to report an outcome favouring the intervention.

But it’s not all doom and gloom. The US has allocated $1.1 billion to CE research. This added investment of noncommercial funding will be critical to providing unbiased answers and evaluating under-studied populations (e.g. children). It’s about time we were provided with stronger evidence.
