Networks, geometry and evidence
Last week at the 19th Cochrane Colloquium in Madrid, Professor John Ioannidis from Stanford University gave a riveting talk about the geometry of evidence. Among the hundreds of articles he has published, his 2005 paper, “Why Most Published Research Findings Are False”, is the most downloaded technical paper from the journal PLoS Medicine, with over 400,000 views. Without question, he is a leader in addressing controversial issues in biomedical research.
In his lecture, he advocated agenda-wide views of research using network meta-analysis. Traditional meta-analyses are useful for comparing two interventions, or an intervention with placebo. But what happens when there are dozens of randomised controlled trials of many different medications for the same medical condition? For example, with 68 antidepressant drugs to choose from, how do healthcare professionals determine which one is the most effective? In fact, Ioannidis suggested it would probably be stupid to depend on a single meta-analysis.
Enter the network. Ioannidis has been leading the development of multiple-treatment meta-analysis, or network meta-analysis. In its simplest form, the network maps out all the interventions for a given condition in a lattice design. It displays the number of trials relevant to each intervention and illustrates how the interventions connect (or fail to connect) to one another. The pattern of comparisons is called the geometry of the treatment network.
For example, there have been 69 trials of smoking cessation comparing nicotine replacement with no active treatment, but zero trials comparing it with the drug varenicline (Champix). To make an informed decision, clinicians need head-to-head comparisons between interventions.
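The idea of network geometry can be made concrete with a small sketch. Here the 69-trial figure comes from the text; the other treatments and counts are invented purely for illustration. Each edge of the network is a comparison that at least one trial has made; the "holes" in the geometry are the comparisons nobody has run.

```python
from itertools import combinations

# Trial counts per pairwise comparison. The 69 comes from the text;
# the remaining entries are hypothetical, for illustration only.
trials = {
    ("nicotine replacement", "no active treatment"): 69,
    ("varenicline", "no active treatment"): 4,
    ("bupropion", "no active treatment"): 7,
}

treatments = {t for pair in trials for t in pair}

def missing_comparisons(trials, treatments):
    """Return treatment pairs that no trial has directly compared."""
    studied = {frozenset(pair) for pair in trials}
    return [pair for pair in combinations(sorted(treatments), 2)
            if frozenset(pair) not in studied]

print(missing_comparisons(trials, treatments))
```

Even this toy network shows the pattern Ioannidis describes: every treatment is compared with "no active treatment", but the clinically useful head-to-head edges are absent.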
Also, when you look at the funding of clinical trials, you find that head-to-head comparisons of interventions owned by different companies are uncommon. Instead, you find many “auto-loops”, showing that the majority of industry-sponsored trials examine a single intervention owned by the sponsoring company. Worse still, when two companies sponsor the same trial, it is usually not altruistic cooperation but co-ownership of the same agent.
Although a network meta-analysis offers a wider picture than a traditional meta-analysis, it combines large numbers of trials and comparisons into one academic paper, which, Ioannidis pointed out, is not good for a researcher's CV. The current framework encourages narrowly defined systematic reviews and clinical trials that demonstrate effectiveness, rather than zooming out to look at the big picture. Networks, by providing a cross-section of a clinical field at one point in time, give insight into the current evidence base and can identify where connections are missing.
Going back to the smoking cessation example above, we need trials comparing nicotine replacement with varenicline (Champix), not another study showing that nicotine replacement is more effective than placebo. The problem is that the latter is easy to publish with a large effect size, whereas the former will probably show no difference and will not make BBC headlines.
Stenting versus surgery: lessons from the heart to the brain
Atherosclerosis, the clogging up of arteries, causes more deaths and more suffering worldwide than any other cause, most commonly in the form of heart attacks and strokes. Blockage of the coronary arteries in the heart causes a spectrum of disease from angina to heart attacks, while blockages of the cerebral arteries in the brain cause mini-strokes (transient ischaemic attacks, or TIAs) and strokes. How best to prevent further strokes and heart attacks (secondary prevention) has occupied medical research for 40 years. There are similarities in the disease process and treatment strategies, and lessons from the heart are proving useful in the brain.
Thrombolysis uses clot-busting drugs very soon after a heart attack or stroke to reduce the risk of further events. In both conditions, this treatment is now well established as long as it is delivered within a narrow time window (12 hours for heart attacks and 4.5 hours for strokes). Evidence from randomised trials came 7 years later for stroke than for heart attacks, and the data from meta-analysis have been even slower to arrive.
In both heart and brain, surgery is possible to remove or bypass the area of the blood vessel that is worst affected by atherosclerosis.
Coronary artery bypass surgery (CABG) uses a strip of vein or artery to bypass the section of narrowed vessel. An alternative strategy is to insert “stents” to keep the narrowed section patent and maintain blood flow. Coronary stents have been adopted across the world over the last 20 years, at the expense of CABG, for several reasons, including patient preference, shorter hospital stays, physician preference and stent-company lobbying. Meta-analysis has shown that in multi-vessel disease, CABG is at least as good as stenting, and perhaps even better. Yet stents were widely adopted despite inadequate trial data and inadequate long-term follow-up.
It was not long before stents started to be used in the arteries to the brain as well. However, the same caution is needed with stents in the brain circulation as in the heart. A recent randomised controlled trial of carotid endarterectomy (stripping away the atherosclerotic plaque from the wall of the artery) versus carotid stenting in 1700 patients concluded that carotid endarterectomy should remain the treatment of choice for patients suitable for surgery. Another analysis from the same trial showed that new lesions on MRI scans (suggesting stroke) were 3 times more likely after carotid stenting than after carotid surgery. Data presented at the American Stroke Association conference last week, from a similar North American trial, suggest that the two treatments are nearly equivalent. Until proper long-term trial data and consensus are reached, let us hope that carotid stents are not rolled out with the same zeal as coronary stents.
Understanding evidence-based medicine in 4 days. Lesson 4: The big picture and asking the right question
There are several historical lessons showing why the results of studies and trials should always be viewed in the broader context of all the knowledge in that area. The most commonly used cautionary tale is that of babies' sleeping position and the risk of sudden infant death. The unfortunately named Dr Benjamin Spock first published his famous book, “Baby and Child Care”, in 1946; it went on to sell 19 million copies, and in it he advocated laying babies face down. Studies as early as the mid-1980s clearly showed that there were more deaths among babies sleeping face down than among babies lying on their backs. However, scientists continued to conduct over 20 more studies, which all showed the same result. Had they conducted a proper systematic review and combined the results of the previous studies (meta-analysis), they would have found that further trials were totally unnecessary, because the data already showed that the advocated sleeping position was harmful. Instead, tens of thousands of infant deaths might have been avoided if practice had changed before 2003. Setting the results of new studies in the context of a systematic review of all other relevant studies would become straightforward if systematic reviews were always done before embarking on new research. In new areas of research, such reviews should be updated as data accumulate, in order to look at overall “pooled” trends.
James Lind, a Scottish physician, is credited with performing the first systematic review in 1753, titled “Treatise of the Scurvy”. In this work, he noted,
“As it is no easy matter to root out prejudices, …. it became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy, and that in a chronological order, by which the sources of these mistakes may be detected. Indeed, before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish.”
His observations have stood the test of time. A systematic review involves 4 steps: (1) a clearly formulated question; (2) finding the relevant studies; (3) appraising the quality of those studies; and (4) summarising the evidence. The first step is crucial, not just in systematic reviews, but in any area of evidence-based medicine. Four aspects of any study question must be clearly defined for the results to be meaningful: (1) the population being studied; (2) the intervention or exposure being studied; (3) the comparison group used in the study; and (4) the outcome that was measured.
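The four-part question is essentially a structured record, and writing it down as one makes it obvious when an element is missing. A minimal sketch, with entirely hypothetical example values drawn from the smoking cessation discussion earlier:

```python
from dataclasses import dataclass

@dataclass
class StudyQuestion:
    """The four elements any study question must define (often called PICO)."""
    population: str    # who is being studied
    intervention: str  # the intervention or exposure
    comparison: str    # the comparison group
    outcome: str       # the outcome measured

# Hypothetical example for illustration only.
q = StudyQuestion(
    population="adult smokers wanting to quit",
    intervention="nicotine replacement therapy",
    comparison="varenicline",
    outcome="abstinence at 6 months",
)
print(q)
```

If any field cannot be filled in, the question is not yet well formed, and neither a trial nor a review built on it will give a meaningful answer.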
Meta-analysis simply means combining the numbers from individual studies or trials to estimate the overall effect from all available data. A meta-analysis can only be done if the included studies are comparable, and the process gives greater weight to studies with larger numbers of patients and more precise data. For an example, see my previous blog regarding aspirin in primary prevention.
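The weighting described above is usually done by inverse variance: each study's weight is one over the square of its standard error, so bigger, more precise studies dominate the pooled estimate. A minimal sketch, with invented numbers purely for illustration:

```python
# Minimal fixed-effect inverse-variance pooling sketch. Assumes each study
# reports an effect estimate (here, a log odds ratio) and its standard error.
def pool_fixed_effect(effects, std_errors):
    weights = [1 / se ** 2 for se in std_errors]  # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5         # pooled estimate is more precise
    return pooled, pooled_se

# Three hypothetical trials of the same intervention (invented data).
effects = [-0.30, -0.10, -0.25]
std_errors = [0.10, 0.25, 0.15]
pooled, se = pool_fixed_effect(effects, std_errors)
print(round(pooled, 3), round(se, 3))
```

Note how the pooled estimate sits closest to the first study, which has the smallest standard error, and how the pooled standard error is smaller than any single study's: that is the whole point of combining the data.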
The BBC this week reported that patients do not need to fast before having their cholesterol tested, and that this could save greatly on the cost and inconvenience of cholesterol testing. This conclusion was only possible because of a systematic review by Cambridge researchers, published in the Journal of the American Medical Association. They looked at the available evidence on blood cholesterol and lipid measurements and cardiovascular risk, which involved going through the individual records of over 300,000 patients involved in 68 long-term studies. Nobody said that doing systematic reviews was always easy, but if we don't do them, we will miss the big picture.