All Trials Registered | All Results Reported
“Thousands of clinical trials have not reported their results; some have not even been registered.”
This is a problem.
A petition was launched today that calls on governments, regulators, and research bodies to put measures in place to register and report the methods and results of clinical trials. This initiative, led by Bad Science, Sense About Science, the BMJ, the James Lind Initiative and CEBM, is important. This issue affects all of us: patients, researchers, clinicians, politicians, scientists, and industry.
The petition was followed by a rip-roaring editorial by Iain Chalmers, Paul Glasziou and Fiona Godlee in the BMJ that calls for all trials to be registered and their results published. This excellent piece details the consequences of our collective inaction and provides advice to patients who are invited to participate in clinical trials, namely:
“Agree to participate in a clinical trial only if: (1) the study protocol has been registered and made publicly available; (2) the protocol refers to systematic reviews of existing evidence showing that the trial is justified; and (3) you receive a written assurance that the full study results will be published and sent to all participants who indicate that they wish to receive them.”
Don’t wait, sign the petition now.
After signing you can automatically share the message “I've just signed the #AllTrials petition for all trials registered and all results reported” on Twitter or Facebook.
Be proud you are taking a step for transparency and improving patient care. I know I am.
Comparative effectiveness research or lack thereof
An earlier TrustTheEvidence.net blog post on the geometry of evidence described the importance of network meta-analyses. These indirect methods of analysis compare the results from two or more studies that have one treatment in common when comparative effectiveness (CE) research is lacking.
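To make the idea concrete, here is a minimal sketch of the kind of adjusted indirect comparison such analyses rest on (the Bucher method): if drug A and drug B have each been compared with the same comparator C (often placebo), the A-versus-B effect can be estimated on the log odds ratio scale by subtraction, with the variances of the two trials adding. All numbers below are hypothetical, chosen only for illustration.

```python
import math

def indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via a common comparator C.

    log_or_ac, se_ac: log odds ratio and its standard error for A vs C
    log_or_bc, se_bc: log odds ratio and its standard error for B vs C
    Returns the indirect log odds ratio for A vs B and its standard error.
    """
    log_or_ab = log_or_ac - log_or_bc       # the common comparator C cancels out
    se_ab = math.sqrt(se_ac**2 + se_bc**2)  # independent trials: variances add
    return log_or_ab, se_ab

# Hypothetical trial results: drug A vs placebo OR = 0.60 (SE 0.15 on log scale),
# drug B vs placebo OR = 0.80 (SE 0.20 on log scale)
log_or, se = indirect_comparison(math.log(0.60), 0.15, math.log(0.80), 0.20)
lo, hi = log_or - 1.96 * se, log_or + 1.96 * se
print(f"A vs B OR = {math.exp(log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Note how wide the resulting confidence interval is: the indirect estimate inherits the uncertainty of both trials, which is exactly why it is a fallback when head-to-head CE studies are missing, not a substitute for them.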
What is comparative effectiveness research? To quote the US Federal Coordinating Council for Comparative Effectiveness Research Report to President Obama in 2009, it is defined as the:
“generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings”
Additional studies comparing a drug to placebo add little when we already know the drug works. The real challenge of evidence-based practice is determining which treatment to use when all ten available drugs are better than placebo. How do clinicians decide which one to prescribe? All too often, decisions are built on studies lacking active comparators. This is not high-quality care for patients.
A recent study published in PLoS ONE evaluated trials registered in ClinicalTrials.gov that focused on the top 25 topics identified as priority areas by the US Institute of Medicine (e.g. treatment of atrial fibrillation). The authors looked at studies conducted in the US between 2007 and 2010 and determined the prevalence of CE research.
Despite the importance of this research methodology, only 22% of studies were CE studies, and their characteristics varied substantially by funding source. Studies funded primarily by industry had the shortest duration of follow-up and were more likely to report positive findings than studies with any government funding.
As usual, children get left out. Industry-funded studies were less likely to enrol children than government- or nonprofit-funded trials. The lack of controlled trials in children is already a problem, and there may be a perception among drug manufacturers that testing drugs in children brings the risk of increased liability.
The authors hypothesise that the increase in CE research will lead to an increase in the number of studies that fail to support new interventions. Not good for big pharma, but why?
First, trials with inactive comparators (i.e. placebo) are more likely to achieve favourable findings. By contrast, CE studies tend to produce conservative results regarding the superiority of a therapy compared to other active treatments.
Second, industry funded the majority of drug and device CE studies, meaning that most were designed and conducted by the company marketing the product. There is substantial evidence that such studies are more likely to report positive findings supporting the use of a product. The PLoS ONE study provides further evidence that, even in CE research, industry-funded studies were more likely to report an outcome favouring the use of the intervention.
But it’s not all doom and gloom. The US has allocated $1.1 billion to CE research. This added investment of noncommercial funding will be critical to provide unbiased answers and evaluate under-studied populations (e.g. children). It’s about time we were provided with stronger evidence.
Astronomy and Evidence: StaR Gazing for Children's Trials
On the eve of the 20th anniversary of the United Nations Convention on the Rights of the Child, which recognised the right of all children to "the enjoyment of the highest attainable standard of health", the editor of the Lancet Richard Horton was delivering a plenary address at the first summit of StaR Child Health in Amsterdam in 2009. In his address, he stated the:
“Lack of research, poor research, and poorly reported research are violations of children’s human rights.”
Individuals from various disciplines, including the World Health Organisation, the US Food and Drug Administration, and the European Medicines Agency, gathered together to discuss a topic of shared interest: the paucity and shortcomings of paediatric clinical trials.
The quality, quantity and relevance of data involving children are substantially lower than those involving adults. This problem persists despite knowledge that inadequate testing of medication in children may result in harmful or ineffective drugs being offered or beneficial drugs being withheld.
Indeed a systematic review sponsored by the World Health Organisation found that there were few guidelines relevant to the design, conduct and reporting of research in children. Most guidelines only seem to focus on what should be done, failing to address the important issue of how it should be completed.
The mission of StaR Child Health is to improve the design, conduct and reporting of "research with children through the development and dissemination of evidence-based standards."
How best to achieve this monumental task? The StaR Child Health group is using a "knowledge to action" process: systematically reviewing the current knowledge base, identifying gaps, and developing guidance and implementation strategies. It is an ambitious agenda that is gaining tremendous momentum.
Based on the results of a systematic review and a survey of key stakeholders, they have identified 10 priority issues. Each issue will be systematically addressed by a standard development group that will produce evidence summaries, identify gaps and develop a dissemination strategy. The priority issues include recruitment and informed consent, risk of bias, sample size, age-specific dosage and administration, safety and global health.
But more guidelines and standards will not change the conduct of trials unless they are implemented. StaR Child Health is leading in knowledge translation by involving multiple stakeholders from the beginning and is working with international partners, such as the GRIP Project, a global research network in paediatrics.
For the quality of health care for children across the world to improve, trials must be conducted that address the complexity of child health and provide reliable evidence-based answers. Now we can be confident that we have a bright StaR illuminating the path forward.