

Comparative effectiveness research or lack thereof

Peter Gill
Last edited 15th January 2012

An earlier TrustTheEvidence.net blog post on the geometry of evidence described the importance of network meta-analyses. These methods indirectly compare treatments by combining results from two or more studies that share a common comparator, filling the gap when comparative effectiveness (CE) research is lacking.
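To make the idea concrete, the simplest form of indirect comparison is the Bucher method: if trial 1 compares drug A with placebo and trial 2 compares drug C with the same placebo, the A-versus-C effect is estimated as the difference of the two trial effects, with the variances added. A minimal sketch in Python, using hypothetical log odds ratios (the numbers are illustrative, not from any real trial):

```python
import math

def indirect_comparison(d_ab, se_ab, d_cb, se_cb):
    """Bucher adjusted indirect comparison of A vs C via a common comparator B.

    d_ab, d_cb  -- treatment effects (e.g. log odds ratios) of A vs B and C vs B
    se_ab, se_cb -- their standard errors
    Returns the indirect A vs C effect, its standard error, and a 95% CI.
    """
    d_ac = d_ab - d_cb                          # effects subtract...
    se_ac = math.sqrt(se_ab**2 + se_cb**2)      # ...but variances add
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)
    return d_ac, se_ac, ci

# Hypothetical example: drug A vs placebo gives log OR -0.40 (SE 0.15),
# drug C vs placebo gives log OR -0.25 (SE 0.20).
d_ac, se_ac, (lo, hi) = indirect_comparison(-0.40, 0.15, -0.25, 0.20)
print(f"A vs C log OR: {d_ac:.2f} (SE {se_ac:.2f}), 95% CI {lo:.2f} to {hi:.2f}")
```

Note how the indirect estimate is always less precise than either direct comparison, because the uncertainties of both trials are carried along; this is one reason head-to-head CE trials remain preferable when they exist.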

What is comparative effectiveness research? To quote the US Federal Coordinating Council for Comparative Effectiveness Research Report to President Obama in 2009, it is defined as the:

“generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings”

Additional studies that compare one drug to placebo are of little use once we already know the drug works. The real challenge of evidence-based practice is determining which treatment to use when all ten available drugs are better than placebo. How do clinicians decide which one to prescribe? All too often decisions are built on studies lacking active comparators. This is not high-quality care for patients.

A recent study published in PLoS ONE evaluated trials registered in ClinicalTrials.gov that focused on the top 25 topics identified as priority areas by the US Institute of Medicine (e.g. treatment of atrial fibrillation). The authors looked at studies conducted in the US between 2007 and 2010 and determined the prevalence of CE research.

Despite the importance of this research methodology, only 22% of studies were CE studies, and their characteristics varied substantially by funding source. Studies funded primarily by industry had the shortest duration of follow-up and were more likely to report positive findings than studies with any government funding.

As usual, children get left out. Industry-funded studies were less likely to enroll children than government- or nonprofit-funded trials. The lack of controlled trials in children is already a problem, and there may be a perception among drug manufacturers that testing drugs in children brings the risk of increased liability.

The authors hypothesise that the increase in CE research will lead to an increase in the number of studies that fail to support new interventions. Not good for big pharma, but why?

First, trials with inactive comparators (i.e. placebo) are more likely to achieve favourable findings. By contrast, CE studies tend to produce conservative results regarding the superiority of a therapy compared with other active treatments.

Second, industry funded the majority of drug and device CE studies, meaning that most were designed and conducted by the company marketing the product. There is substantial evidence that such studies are more likely to report positive findings supporting the use of a product. The PLoS ONE study provides further evidence that, even in CE research, industry-funded studies were more likely to report an outcome favouring the use of the intervention.

But it’s not all doom and gloom. The US has allocated $1.1 billion to CE research. This added investment of noncommercial funding will be critical to providing unbiased answers and evaluating under-studied populations (e.g. children). It’s about time patients and clinicians were provided with stronger evidence.
