Networks, geometry and evidence
Last week at the 19th Cochrane Colloquium in Madrid, Professor John Ioannidis from Stanford University gave a riveting talk about the geometry of the evidence. Among the hundreds of articles he has published, his 2005 paper “Why Most Published Research Findings Are False” is the most downloaded technical paper from the journal PLoS Medicine, with over 400,000 views. Without question, he is a leader in addressing controversial issues in biomedical research.
In his lecture, he advocated agenda-wide views of research using network meta-analysis. Traditional meta-analyses are useful for comparing two interventions, or an intervention with placebo. But what happens when there are dozens of randomised controlled trials of many different medications for the same medical condition? With 68 antidepressant drugs to choose from, for example, how do healthcare professionals determine which one is the most effective? In fact, Ioannidis conveyed that it would probably be stupid to depend on a single meta-analysis.
Enter the network. Ioannidis has been leading the development of multiple-treatment meta-analysis, or network meta-analysis. In its simplest form, a network maps out all the interventions for a given condition in a lattice or network design. The network displays the number of trials relevant to a certain intervention and illustrates how they connect (or do not connect) to each other. The pattern of the comparisons is called the geometry of the treatment network.
For example, there have been 69 trials for smoking cessation comparing nicotine replacement with no active treatment, but zero trials comparing it with the drug varenicline (Champix). To make an informed decision, clinicians need head-to-head information comparing interventions.
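The network geometry described above can be sketched as a simple graph whose edges carry trial counts. Here is a minimal, illustrative Python sketch: the 69 and 0 figures come from the smoking-cessation example, and any comparison not recorded is treated as having zero trials for the purposes of this toy example.

```python
# Illustrative sketch of a treatment network: nodes are interventions,
# edges carry the number of head-to-head trials between them.
from itertools import combinations

treatments = ["no active treatment", "nicotine replacement", "varenicline"]

# Trial counts from the smoking-cessation example in the post; pairs not
# listed here are assumed (for illustration) to have no trials at all.
trial_counts = {
    ("no active treatment", "nicotine replacement"): 69,
    ("nicotine replacement", "varenicline"): 0,
}

def edge(a, b):
    """Look up a comparison regardless of the order the pair is given in."""
    return trial_counts.get((a, b), trial_counts.get((b, a), 0))

# Missing links are the pairs of treatments with no head-to-head trials --
# exactly the gaps that mapping the geometry makes visible.
missing = [pair for pair in combinations(treatments, 2) if edge(*pair) == 0]
for a, b in missing:
    print(f"no trials comparing {a} with {b}")
```

The point of the sketch is only that, once comparisons are stored as edges, the gaps in the evidence base fall out of a one-line query rather than a hunt through individual meta-analyses.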
Also, when you look at the funding of clinical trials, you find that head-to-head comparisons of interventions owned by different companies are uncommon. Instead, you find many “auto-loops” showing that the majority of industry-sponsored trials examine a single intervention owned by the sponsoring company. Worse still, when two companies sponsor the same trial, it is usually due not to altruistic cooperation but to co-ownership of the same agents.
Although network meta-analysis offers a wider picture than a traditional meta-analysis, it combines large numbers of trials and comparisons into one academic paper, which Ioannidis pointed out is not good for researchers’ CVs. The current framework encourages narrowly defined systematic reviews and clinical trials that demonstrate effectiveness, rather than zooming out to look at the big picture. Networks provide a cross-section of a clinical field at one point in time, giving insight into the current evidence base and identifying where connections are missing.
Going back to the smoking cessation example above, we need trials comparing nicotine replacement with varenicline (Champix), not another study showing that nicotine replacement is effective compared with placebo. The problem is that the latter is easy to publish with a large effect size, while the former will probably show no difference and not make BBC headlines.
Is there an evidence base for children in primary care?
A recent article in PLoS ONE on Cochrane reviews relevant to children in primary care states that there is a mismatch between the focus of published research and the clinical activity for children in general practice. This holds not only in the UK but in a number of countries, including Australia, the Netherlands and the US.
What’s odd is that asthma, despite representing only 3–5% of consultations, is the focus of nearly one-quarter of all reviews. On the other hand, despite the increasing burden of skin conditions, which lead to one-quarter of all visits, only 7% of reviews were relevant to them.
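The size of this mismatch can be made concrete with a quick back-of-the-envelope calculation using the figures above (the “mismatch ratio” label is my own shorthand, not a term from the article, and the asthma consultation share is taken at the 3–5% range’s lower-middle value of 4%):

```python
# Share of reviews vs share of consultations, from the figures in the post.
conditions = {
    # condition: (share of reviews, share of consultations/visits)
    "asthma": (0.24, 0.04),           # ~one-quarter of reviews, 3-5% of consultations
    "skin conditions": (0.07, 0.25),  # 7% of reviews, ~one-quarter of visits
}

# A ratio above 1 means the condition attracts more review attention than
# its clinical workload would suggest; below 1 means the reverse.
ratios = {name: reviews / consults
          for name, (reviews, consults) in conditions.items()}

for name, ratio in ratios.items():
    status = "over-reviewed" if ratio > 1 else "under-reviewed"
    print(f"{name}: mismatch ratio {ratio:.1f} ({status})")
```

On these numbers asthma gets roughly six times the review attention its consultation share would predict, while skin conditions get barely a quarter of theirs.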
Non-drug interventions (such as counseling) are an important part of general practice, yet they are virtually non-existent in evidence syntheses and research funding. Over half of the reviews studied drug interventions in children, while 69% of all controlled trials in children assessed drug products.
Whilst the number of systematic reviews published is skyrocketing (over 2,500 in 2007 alone), there continues to be avoidable waste in the production and reporting of research evidence (read more on cebmblog’s recent post). Yet there has not been a similar increase in reviews of children in primary care: since 2000, the percentage of reviews on children has nearly tripled, compared with a much smaller increase in primary care reviews.
Why the mismatch? Likely multiple factors: an absence of primary trials, lack of author interest, public funding that correlates poorly with disease burden, more interventions for certain conditions (such as asthma) than others, lack of additional academic training in child health, and the lack of an overall map of the evidence.
Whatever the reasons, the mismatch is clear and needs to be addressed. Further work needs to be done to look at how the reviews inform clinical practice. Improving the evidence base for children in primary care is a no-brainer. So how, and by whom, should this be sorted out? Initial steps should include encouraging Cochrane Review Groups, funders, and other relevant organizations to prioritize topics.
The recently created PROSPERO international register of systematic reviews is a step forward to help minimize waste.
Is there an evidence base for children in primary care? Not yet.
Avoidable waste in research
It’s always a pleasure to listen to Sir Iain Chalmers, but his topic of choice at the SAPC primary care conference is too irresistible not to blog about.
Most of you reading this blog will be involved in doing or reporting research: it seems you may be wasting a lot of resources. If you aren’t involved in research, you may still want to know why so much effort goes to waste.
If you are an epidemiologist, the four questions you would want to ask were published some time ago by Austin Bradford Hill:
1. Why did you start?
2. What did you do?
3. What answer did you get?
4. What does it mean?
If you can’t answer these questions about the research you are doing, it seems you should go back to the drawing board.
Part of the solution is to create better questions, relevant to patients and developed by patients. You may be surprised that a resource to make uncertainties explicit and to help prioritise new research is actually available. It is called DUETs, and it has been established to ‘publish uncertainties about the effects of treatment which cannot currently be answered by referring to reliable up-to-date systematic reviews of existing research evidence’.
I am continually frustrated by the amount of guff published in the media about the latest ‘dramatic health cure’, yet an initiative like DUETs never gets a jot of news space. However, this initiative is unlikely to go away, and at some point in the future it is likely to pervade all aspects of research.
The key take-home messages are: there is substantial avoidable waste; research should address known uncertainties; and engagement of patients and the public is essential.