Networks, geometry and evidence
Last week at the 19th Cochrane Colloquium in Madrid, Professor John Ioannidis of Stanford University gave a riveting talk about the geometry of the evidence. Among the hundreds of articles he has published, his 2005 paper "Why Most Published Research Findings Are False" is the most downloaded technical paper from the journal PLoS Medicine, with over 400,000 views. Without question, he is a leader in addressing controversial issues in biomedical research.
In his lecture, he advocated agenda-wide views of research using network meta-analysis. Traditional meta-analyses are useful for comparing two interventions, or an intervention with placebo. But what happens when there are dozens of randomised controlled trials of many different medications for the same medical condition? For example, with 68 antidepressant drugs to choose from, how do healthcare professionals determine which one is the most effective? Indeed, Ioannidis suggested it would probably be stupid to depend on a single meta-analysis.
Enter the network. Ioannidis has been leading the development of multiple-treatment meta-analysis, or network meta-analysis. In its simplest form, the network maps out all the interventions for a given condition in a lattice or network design. It displays the number of trials relevant to a certain intervention and illustrates how they connect (or do not connect) to each other. The pattern of the comparisons is called the geometry of the treatment network.
For example, there have been 69 smoking-cessation trials comparing nicotine replacement with no active treatment, but zero trials comparing it with the drug varenicline (Champix). To make an informed decision, clinicians need head-to-head information comparing the interventions themselves.
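The geometry idea can be sketched as a simple edge count over trial comparisons: each intervention is a node, and each trial contributes one edge between the pair it compared. This is an illustrative toy, not Ioannidis's actual method, and the trial list below is hypothetical apart from the smoking-cessation figures quoted above.

```python
from collections import Counter

def build_network(trials):
    """Count direct head-to-head comparisons between interventions.

    trials: list of (intervention_a, intervention_b) pairs, one per trial.
    Returns a Counter mapping each unordered comparison to its trial count.
    """
    return Counter(frozenset(pair) for pair in trials)

# Hypothetical trial list echoing the figures above: 69 trials of
# nicotine replacement vs no active treatment, none vs varenicline.
trials = [("nicotine replacement", "no treatment")] * 69

network = build_network(trials)
nrt_vs_placebo = network[frozenset({"nicotine replacement", "no treatment"})]
nrt_vs_varenicline = network[frozenset({"nicotine replacement", "varenicline"})]
print(nrt_vs_placebo)      # 69 direct comparisons
print(nrt_vs_varenicline)  # 0 -- a missing connection in the network's geometry
```

A zero-count edge like the one above is exactly the kind of gap a network view makes visible and a single pairwise meta-analysis cannot.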
Also, when you look at the funding of clinical trials, you find that head-to-head comparisons of interventions owned by different companies are uncommon. Instead, you find many "auto-loops": the majority of industry-sponsored trials examine a single intervention owned by the sponsoring company. Worse still, when two companies sponsor the same trial, it is usually not altruistic cooperation but co-ownership of the same agents.
Although network meta-analysis offers a wider picture than a traditional meta-analysis, it combines large numbers of trials and comparisons into one academic paper, which Ioannidis pointed out is not good for researchers' CVs. The current framework encourages narrowly defined systematic reviews and clinical trials that demonstrate effectiveness, rather than zooming out to look at the big picture. Although a network is only a cross-section of a clinical field at one point in time, it provides insight into the current evidence base and can identify where connections are missing.
Going back to the smoking-cessation example above, we need trials comparing nicotine replacement with varenicline (Champix), not another study showing that nicotine replacement is more effective than placebo. The problem is that the latter is easy to publish with a large effect size, while the former will probably show no difference and will not make BBC headlines.
Reporting of subgroup analyses: look for the test of interaction
Today’s BMJ looks at the ‘influence of study characteristics on reporting of subgroup analyses in randomised controlled trials.’
The objective was to investigate what industry funding does to the reporting of subgroup analyses. The 'so what' factor (why this is important in the first place) probably comes to mind.
Subgroup analyses are common and appear in almost every modern trial publication. They are used to try to determine whether certain baseline characteristics affect the treatment outcome. For example, women with heart disease have very different outcomes compared with men, and this often results in less aggressive treatment.
Yet subgroups are open to all sorts of misuse: if they are not predefined they should be treated with extreme caution. A number of subgroup findings have subsequently been shown to be false. For example, in 1988 the Early Breast Cancer Trialists' Collaborative Group reported that in tamoxifen trials there was a clear reduction in mortality only among women aged 50 or older; yet later reviews showed the benefit was irrespective of age.
Xin Sun and colleagues selected RCTs published in 118 core clinical journals in 2007, identifying 469 RCTs. They found that publication in high-impact journals, non-surgical trials and larger sample sizes were associated with more reporting of subgroup analyses. Of interest, and why this study is worth reading: when the primary outcome was not significant, industry-funded trials were more likely to report subgroup analyses than non-industry trials.
The key learning point for readers of trials, and why this matters, is that industry-funded trials used a test of interaction for subgroups less often than non-industry trials. To compare treatment effects between subgroups in an RCT, such as by sex, a test of interaction should be used. Even if the two treatment effects look very different to the reader, and their P values differ markedly, the test of interaction may not be significant.
So, if the interaction test is not significant, there is no demonstrable subgroup effect. And if it is not reported at all, you should ignore the subgroup result.
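A standard way to carry out the test of interaction for two subgroups is a z-test on the difference between the subgroup estimates, on an additive scale such as log odds ratios. A minimal sketch, with hypothetical numbers chosen to show effects that "look different" but do not interact significantly:

```python
import math

def interaction_test(effect1, se1, effect2, se2):
    """Z-test of interaction between two subgroup treatment effects.

    effect1, effect2: subgroup estimates on an additive scale
        (e.g. log odds ratios); se1, se2: their standard errors.
    Returns (z, two_sided_p), using the normal approximation.
    """
    z = (effect1 - effect2) / math.sqrt(se1**2 + se2**2)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical example: OR 0.6 in men ('significant') vs OR 0.85 in women
# ('not significant'), yet the interaction test finds no reliable
# difference between the subgroups.
z, p = interaction_test(math.log(0.6), 0.20, math.log(0.85), 0.25)
print(round(z, 2), round(p, 2))  # z ≈ -1.09, p ≈ 0.28: no significant interaction
```

This is exactly the trap described above: each subgroup's own P value can look very different while the interaction P value stays well above 0.05.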
Medicine and media - do they have to be awkward bedfellows?
My Monday evening was spent at an event organised by the London Business School's Healthcare Club, called "Challenging the Status Quo". Andrew Witty, CEO of GlaxoSmithKline, spoke passionately about why drug development and profits do not have to come at the expense of access to medicines in poor countries. His company was the first global pharmaceutical company to pledge to pool its patents to allow generic manufacture of its drugs in poor countries, and to apply differential pricing between rich and poor countries. The take-home message was that it is possible to change prevailing practice and norms, even in an industry like pharma.
Sanjay Gupta is a neurosurgeon with a difference, and he is changing the norms in a totally different arena. He is best known as CNN's chief medical correspondent, and his TV programmes and writings are hugely popular in the US for their fresh angles on healthcare problems around the world. For example, he has followed medics in war zones, and was filmed meeting the Mexican boy thought to be the index case of swine flu, amid global hysteria about the disease. He recently turned down the post of Surgeon General in President Obama's administration.
Gupta made two important points. Firstly, peer review, the process used by journals to vet and then publish scientific articles, is too slow at delivering up-to-date information for mass consumption. Moreover, one study showed that "... although recommendations made by reviewers have considerable influence on the fate of both papers submitted to journals and abstracts submitted to conferences, agreement between reviewers ... was little greater than would be expected by chance alone". In other words, peer review is far from perfect. As a result, many people (including health professionals) increasingly gain their knowledge from alternative sources such as the internet, blogs and Twitter.

Secondly, the pressure for news headlines from mass media corporations does not have to conflict with the need for good-quality science and health information. There is a plethora of health-related news and advice, so there is plenty of room for health professionals to work innovatively with new media to ensure the quality of that information. We wholeheartedly agree at trusttheevidence.net. Are you thinking enough about where you get your up-to-date health information from?