Is swine flu the most over-published and over-hyped disease ever?
A quick search of PubMed using the terms ‘swine flu’ and ‘H1N1’ reveals that, as of 27 November 2009, there are 4,475 articles on the subject.
The first MEDLINE-indexed peer reviewed article appeared in 1935 in the Journal of Experimental Medicine: ‘The Infection of Mice with Swine Influenza Virus’ by Shope RE (PMID: 19870434). In the intervening years, up to March 2009, 3,032 articles on the subject were published in PubMed. Yet in the last seven months alone we have seen an explosion, with 1,437 peer reviewed articles: roughly six per day.
Among the big four journals, the BMJ leads the way with 107 articles (largely due to the number of news articles it publishes on the issue), followed by the Lancet and the New England Journal of Medicine at 35 each, with JAMA bringing up the rear on 21.
By the time you realize that articles are starting to appear on ‘The Emotional Epidemiology of H1N1 Influenza Vaccination’, you may want to consider drowning your sorrows; I certainly do. Given the impact on my emotional intelligence quotient of yet another article on what must be the most over-hyped disease ever, I keep asking the question: are there any other infections out there?
A shift of focus to the news reveals even more dire statistics. The top swine flu story in April, with 3,675 new articles, was the outbreak of a new form of ‘swine flu’ that prompted the United States and the World Health Organization to declare a public health emergency. By May, 1,881 related articles were focusing on the Southern Hemisphere being mostly spared in the swine flu epidemic, alongside confirmation of all of eight Influenza A/H1N1 cases in the Asia-Pacific region. In August the major peak in news stories was for Tamiflu, and unfortunately, along with Dr T, I have to lay claim to part of the blame: the publication in the BMJ of the effects of Tamiflu in children saw over 2,000 articles published on the subject in that month alone. Which brings us to the current month, where 2,741 news articles were published on what has to be a must-read story: the availability of the H1N1 vaccine and of clinics offering seasonal flu shots.
Do you know what the killer fact is in all of this? There isn’t one randomized trial out there on swine flu or H1N1 - outrageous.
How to communicate risk, part 1: understanding the five dimensions
In 1980 Richard Peto explained to ordinary people the quantitative dangers of smoking:
“Among an average 1000 young men who smoke cigarettes regularly – about one will be murdered, about six will be killed on the roads, and about 250 will be killed before their time by tobacco.”
There are many good attributes to this explanation that can be followed when trying to communicate risk. The concept of risk acknowledges that every course of action or inaction in clinical care may be associated with risks and/or benefits. Risk can be thought of either as an unwanted outcome or as the uncertainty about whether that outcome will occur. In defining risk, we can therefore think of it as an unwanted outcome and/or the probability of that unwanted outcome occurring.
Bogardus’ work highlights that risk has five fundamental dimensions, and understanding these dimensions may help when trying to communicate risk more effectively:
- Identity: Some risks may not even be known about and sometimes it may be hard to quantify whether the exposure is a risk or a benefit.
- Permanence: Requires an understanding of whether the risk is temporary or permanent, and how long it will last. For example, if you have some numbness after a hip operation, the question you want answered is: how long will it last? Or, if I do get numbness, is it permanent?
- Timing: When will the risk occur? Does it occur early or late after a procedure? For example, does an infection after an operation occur soon after surgery, or later, after I have left hospital?
- Probability: Will it occur in all patients? How likely is it? If I get it once, will I get it every time?
- Value: How important is the risk to the patient given his current ideals and lifestyle?
Ultimately what we want to know is: "What is the best way to communicate to patients the chances of a ‘bad’ event occurring?" This is not exclusive to patients, though: we all face risks every day about which we want better and more informed communication. For instance, as a parent you make daily decisions about what your children may or may not do based on risks about which you often want better information. In pondering why uptake of the current swine flu vaccine is proving so controversial, ask whether the current debate provides adequate information on the benefits and risks.
When deciding about the pros and cons of a given risk, the fifth and perhaps most important dimension is therefore its value: its subjective “badness”. Some people may perceive the subjective badness of a risk (flying, for example) to be so compelling that, no matter how well you communicate its quantitative aspects, which in the case of flying are incredibly small, they still won’t fly. Although most clinicians make admirable attempts to quantify the amount of risk, the ultimate determination of its importance is subjective. In effect, the first four dimensions of risk (identity, permanence, timing, and probability) are there to help us determine the personal value we attach to the risk. These dimensions should be considered at the outset when communicating risk, whether verbally or in writing.
The next time you see an article outlining risk, see to what extent these dimensions have been incorporated into the communication. In the next articles in this series we will consider the expression of risk, both qualitatively and quantitatively.
Ain't no sunshine when she's gone: How America is tackling the doctor/drug industry relationship.
I spent this week in a conference centre the size of an airport for the Annual American Heart Association conference in Orlando, Florida. With over 25 000 delegates, it represents the world’s premier meeting for doctors and researchers interested in vascular disease. For 5 days, there were presentations, posters and seminars about every conceivable aspect of diseases that block up your arteries. There is far too much to mention individually, but some of the highlights for me were (1) research showing that coronary heart disease (CHD) and its underlying cause, atherosclerosis, are lifelong processes that start in childhood and might be programmed by the intrauterine environment; (2) evidence that multivitamins, particularly vitamins A, C, and E, do nothing to protect us from CHD; and (3) a trial of transcendental meditation showing its protective effect on CHD.
However, the most fascinating session I attended was about the so-called “Sunshine Act”, which is currently going through the United States legislature and would completely overhaul the interactions between physicians and the pharmaceutical industry. In the 15 years that I have been visiting the US, I was always struck by the scale of the pharma industry and its lobbying power. However, in the massive exhibition hall at a conference that used to be brimming with drug companies and freebies, the restrictions were clearly visible. No free pens, no free food, and nowhere near as much hard-sell. The new bill will make it mandatory for any interactions, particularly financial ones, to be reported, leading to more openness from health professionals and industry, whether in relation to education, research or corporate hospitality. At the conference, both journal editors and scientists discussed what should constitute “disclosures” and “conflicts of interest”. If this bill becomes law, America will be leading by example in an effort to rebuild trust in medical practice and to preserve the independence of clinicians.
The attendees of the session found out that the speakers were not the first-choice speakers for this debate. The conference organisers had asked the two US Senators who were in charge of the Sunshine Act and government advisors from Harvard to lead the session, but they had all declined because they would not be receiving an honorarium from the American Heart Association. It is interesting that policymakers and other public servants expect doctors to be impartial and independent in their practice, and yet they are happy to accept speaker fees and all manner of expenses in the name of their jobs. This is not an American problem, as our hugely embarrassing MPs’ expenses scandal in the UK illustrates. We cannot have one rule for one sector and a different rule for another. Wherever taxpayers’ money is at stake and people are employed in a public service, there should be certain standards across the board.
As the US Senate prepares to vote on President Obama’s proposed health reform bill, which would greatly broaden access to healthcare, the previous Chief Executive of the NHS, Lord Crisp, suggested that the NHS and the Department of Health in the UK need to separate in order for greater accountability to the taxpayer, and greater independence of the NHS. There is clearly no one solution to better healthcare and better accountability, but any move that leads to greater transparency will surely be a good thing.
Understanding evidence-based medicine in 4 days. Lesson 4: The big picture and asking the right question
There are several historical lessons showing why the results of studies and trials should always be viewed in the broader context of all the knowledge in that area. The most commonly used cautionary tale is that of babies lying on their side and the risk of sudden infant death. The unfortunately named Dr Benjamin Spock first published his famous book, “Baby and Child Care”, in 1946; it went on to sell 19 million copies. In it, he advocated laying babies on their side. Trials as early as the mid-1980s clearly showed that there were more deaths among babies lying on their side than among babies lying on their backs. Yet scientists went on to conduct over 20 more trials, all showing the same result. If these scientists had conducted a proper systematic review, combining the results of previous studies (a meta-analysis), they would have found that further trials were totally unnecessary because the data already showed that laying a baby on its side was harmful. Instead, the delay in changing practice contributed to tens of thousands of infant deaths that might have been avoided before 2003. Setting the results of new studies in the context of a systematic review of all other relevant studies would become straightforward if systematic reviews were always done before embarking on new research. In new areas of research, such reviews should be repeated as data accumulate, to look at the overall “pooled” trends.
James Lind, a Scottish physician, is credited with performing the first systematic review in 1753, titled “Treatise of the Scurvy”. In this work, he noted,
“As it is no easy matter to root out prejudices, …. it became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy, and that in a chronological order, by which the sources of these mistakes may be detected. Indeed, before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish.”
His observations have stood the test of time. A systematic review must involve 4 steps: (1) a clearly formulated question; (2) finding the relevant studies; (3) appraisal of the quality of those studies; and (4) a summary of the evidence. The first step is crucial, not just in systematic reviews but in any area of evidence-based medicine. Four aspects of any study question must be clearly defined to make the results meaningful: (1) the population being studied; (2) the intervention or exposure being studied; (3) the comparison group used in the study; and (4) the outcome that was measured.
Meta-analysis just means that we are combining the numbers from individual studies or trials to give the overall effect from all available data. A meta-analysis of data can only be done if the included studies are comparable and this process will give weighting to studies with larger numbers of patients and more precise data. For an example, see my previous blog regarding aspirin in primary prevention.
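To make the weighting concrete, here is a minimal sketch of fixed-effect (inverse-variance) pooling in Python. The effect sizes and standard errors are invented for illustration, not taken from any real meta-analysis:

```python
import math

def pooled_effect(effects, standard_errors):
    """Fixed-effect (inverse-variance) meta-analysis.

    Each study's effect (e.g. a log odds ratio) is weighted by
    1/SE^2, so larger and more precise studies count for more.
    Returns the pooled effect and its standard error.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies reporting log odds ratios with their
# standard errors; the most precise study (SE 0.10) dominates.
effect, se = pooled_effect([-0.40, -0.35, -0.50], [0.10, 0.20, 0.30])
```

The pooled estimate lands closest to the effect from the most precise study, which is exactly the behaviour the weighting is designed to produce.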
The BBC this week reported that patients do not need to fast before having their cholesterol tested, and that this could greatly reduce the cost and inconvenience of cholesterol testing. This conclusion was only possible because of a systematic review by Cambridge researchers, published in the Journal of the American Medical Association. They looked at the available evidence on blood cholesterol and lipid measurements and cardiovascular risk, which involved going through the individual records of over 300 000 patients from 68 long-term studies. Nobody said that doing systematic reviews was easy, but if we don’t do them, we will miss the big picture.
Understanding evidence-based medicine in 4 days. Lesson 3: Putting tests to the test
So much of modern medicine is about tests, and about making diagnoses on the basis of their results, that old-school doctors often lament the death of the stethoscope and the traditional clinical skills of the physician. Not only are patients entering hospitals and general practices immediately hit by a battery of X-rays, blood tests, scans and other specialised tests; many tests are available for home use by patients themselves, e.g. home glucose monitoring, home ultrasound probes for antenatal scans, and electronic blood pressure meters. Both patients and doctors often make the mistake of assuming that a test is 100% trustworthy and accurate, but we should always ask how good a test is at picking up, or ruling out, what it is meant to. The result from a test is only as good as the test itself and the person using it. Indeed, there have been warnings this week against the use of home foetal heart monitors, because parents’ inexperience makes the test less reliable and unsafe.
A positive test result can label somebody with diabetes, cancer, or any number of other illnesses, with many implications for that person’s life. We therefore need to know how good a test is at picking up the people with the disease. The “sensitivity” of a test is the proportion of diseased individuals who will have a positive test; that is to say, some people with the disease will get a “false negative”. A negative test result can reassure somebody that they do not have a disease, but if the test is unreliable this may be false reassurance, and may lead to the psychological trauma and adverse health effects of a later diagnosis. The “specificity” of a test is the proportion of individuals without the disease who will have a negative test. In a screening test (for example, for colorectal cancer), a positive result leads to invasive follow-up, so we want few false alarms; i.e. the test must be very specific. In other settings, picking up as many true positives as possible matters more, as with the simple urine dipstick test, which is 90% sensitive for urinary tract infection but only 60% specific.
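Both definitions are simple enough to compute directly from the four cells of a results table. A minimal sketch in Python, applying dipstick-style figures to a hypothetical sample of 100 infected and 100 uninfected people:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): the proportion of diseased
    people with a positive test. Specificity = TN / (TN + FP): the
    proportion of disease-free people with a negative test."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: of 100 people with a urinary tract infection,
# 90 test positive; of 100 without, 60 test negative.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=60, fp=40)
# sens = 0.9 (90% sensitive), spec = 0.6 (60% specific)
```

With these counts the test misses 10 infected people (false negatives) and falsely alarms 40 healthy ones (false positives).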
Once we have a positive result, how likely is the patient to have the disease in question? This is called the “positive predictive value” or PPV, and it tells us what proportion of people with a positive test actually have the disease (not to be confused with sensitivity). Unfortunately, the PPV is affected by how common the disease is in the population: if the prevalence is high, the predictive value will be high, but if the disease is uncommon, the PPV will be low.
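That dependence on prevalence follows directly from Bayes’ theorem and is easy to demonstrate. A sketch in Python, reusing the hypothetical dipstick figures (90% sensitive, 60% specific) in a high-prevalence and a low-prevalence population:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test),
    computed via Bayes' theorem from the test characteristics
    and the disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same test, two hypothetical populations:
common = ppv(0.9, 0.6, 0.30)   # disease prevalence 30%
rare = ppv(0.9, 0.6, 0.01)     # disease prevalence 1%
```

With 30% prevalence the PPV is about 49%, but at 1% prevalence it collapses to about 2%: the same positive result means something very different in the two populations.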
The monofilament is a special tool used to test whether diabetics have lost sensation in their feet. If you were paying attention during lesson 2, you will realise that neither its sensitivity nor its specificity is an exact value; each lies within a range. A review of all relevant studies showed that sensitivity ranged from 41% to 93% and specificity ranged from 68% to 100%. So next time you ask what the diagnosis is, ask how good the test is first.
Understanding evidence-based medicine in 4 days. Lesson 2: 5% is the magic number for confidence and certainty.
Some things in life are certain. We are 100% sure that every human being will die at some point in their lifetime. Other things are close to 100% certain, but there is a small chance of an alternative outcome. When a person has a coronary angiogram to look at the arteries in their heart, we are 99.9% certain that they will have an uneventful procedure, but 0.1% of the time there will be a major complication such as bleeding, stroke, or a heart attack. If the person has had previous heart attacks or other illnesses, this chance increases and might reach as much as 2%. In still other situations, the chance of an outcome is much less certain. For example, the chance of surviving for 5 years after a diagnosis of bowel cancer may vary from less than 10% to near 100%, depending on the severity of the cancer.
There is a lot of uncertainty in medicine and in medical research, and yet media reports of health and science often give the impression of “black-and-white”, exact figures. The yoghurt in my fridge says “Best before 15/11/09”. Does that mean all the yoghurts go off at the same time on the 15th of November? Of course not. The date is an estimate, and the date when my yoghurt actually goes off lies within a range: some yoghurts will go off before the best-before date, and others after it. This range is called a “confidence interval”. Confidence intervals can be set so that most of the possible results fall within the range. It might be that 99.9% of yoghurts are fine if eaten before 15/11/09, but that means 0.1% of yoghurts will have gone off before that date.
In yesterday’s example, immobilisation for 15 minutes immediately after artificial insemination increased the relative risk of a successful pregnancy by 50%. However, the increase in relative risk actually lies between 10% and 120%, and we can only be 95% confident that the true value lies in this range; 5% of the time it will fall outside it. A recent study of the UK’s GP database looked at the effect of statin therapy on future risk of gallstones and gallbladder surgery. The researchers showed that the odds of getting gallstones on long-term statin therapy were 66% of the odds without statin therapy. The 95% confidence interval for the odds ratio was 59% to 70%. Therefore, long-term statin therapy seems to convincingly reduce the risk of gallstones and gallbladder surgery by a third. By convention, we accept 95% confidence intervals, and the narrower the range, the more certain we are of the finding.
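For the curious, a confidence interval for a relative risk is usually computed on the log scale. Here is a sketch in Python using the standard log-transformation (Katz) approximation with the insemination trial’s raw counts; because published figures are rounded, the interval it yields will be close to, but not exactly, the 10%–120% quoted above:

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk with an approximate 95% confidence interval,
    using the standard log-transformation method: the standard
    error of log(RR) is sqrt(1/a - 1/n_a + 1/b - 1/n_b)."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# The insemination trial: 54/199 pregnancies with immobilisation
# versus 34/192 in the control group.
rr, lo, hi = relative_risk_ci(54, 199, 34, 192)
# rr ≈ 1.53, with a 95% CI of roughly 1.05 to 2.24
```

Because the lower bound stays above 1.0, the whole interval points in the same direction, which is why the result counts as statistically convincing despite its width.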
In the New England Journal this week, American researchers were interested in whether the use of the heart-lung machine (“cardiopulmonary bypass”) during coronary-artery bypass graft (CABG) surgery affected death rates. Scientists test their results using hypotheses. The “null hypothesis” in this case was that use of the heart-lung machine during CABG surgery would make no difference to the death rate after 1 year; the “alternative hypothesis” was that it would make a difference. The results favoured the alternative hypothesis: (a) use of the heart-lung machine led to a lower death rate, and (b) to less blockage in the grafted arteries at 1 year. By testing these results against the null hypothesis, we get a “p-value”: the chance of seeing a result at least this extreme if there were really no difference, i.e. by chance alone. Again, the cut-off is a p-value of 0.05 or 5%. For the death rate the p-value was 0.04 or 4%, whereas for the difference in graft blockage at 1 year it was 0.01 or 1%. So the combination of confidence intervals and p-values tells us how reliable a result is, and with the rule of 5% anybody can spot a chance finding and assess statistical significance.
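A p-value for a difference between two event rates can be computed with nothing more than the normal approximation. A minimal sketch in Python; the death counts here are invented for illustration and are not the trial’s actual figures:

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for the null hypothesis that two groups
    share the same underlying event rate (pooled two-proportion
    z-test, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability of the standard normal:
    # 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical trial arms: 30/500 deaths in one group, 45/500
# in the other.
p = two_proportion_p_value(30, 500, 45, 500)
```

For these made-up counts the p-value comes out just above 0.05, so by the rule of 5% the difference would not reach conventional statistical significance, even though 45 versus 30 deaths looks like a large gap.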
Understanding evidence-based medicine in 4 days. Lesson 1: Clinical significance is all about risk
It is often hard to figure out the findings of health research because of the jargon and the numbers. However, I reckon most of that research can be understood by anybody armed with 4 simple concepts. I am going to cover one of these concepts each day, using stories from this week’s health press to show how often these numbers appear. Hopefully these 4 keys will allow more people to open the door and to question the numbers we read about in health research.
LESSON 1: CLINICAL SIGNIFICANCE IS ALL ABOUT RISK
For over 2000 years, two principles have formed the basis of medical practice: “primum non nocere” (first do no harm) and “succurrere” (do good). If we want to measure “the good” or “the harm” associated with a treatment or an exposure, we have to know how it changes the chance or risk of a disease compared to another treatment or exposure. Chance or risk is usually expressed as a percentage, and tells us about the number of people who develop a disease out of a population.
In absolute terms, this change is simply the difference between the risk associated with the new treatment and the risk associated with the first, or control, treatment. This difference is sometimes called the absolute risk difference. In relative terms, the same change can be expressed as the risk associated with the new treatment divided by the risk associated with the control treatment, known as the relative risk.
In this week’s British Medical Journal, Dutch researchers looked at whether 15 minutes of immobilisation increased the chance of successful pregnancy after artificial insemination. In the trial, 199 couples received the new treatment (15 minutes of immobilisation) and 192 couples had standard treatment (the control group, who were allowed to mobilise immediately after insemination). In the immobilisation group, 54 couples had pregnancies, so the chance, or risk, of pregnancy was 54/199 = 27% in this group. In the control group, 34 out of 192 couples had pregnancies, so the risk of pregnancy was 34/192 = 18%.
The absolute risk difference is 27% - 18% = 9%. In other words, immobilisation increased the chance of pregnancy by 9 percentage points compared with controls. Put another way, the relative risk was 27/18 = 1.5, meaning that compared with standard practice, immobilisation leads to a 50% increased chance of pregnancy after insemination. You might have spotted that a 50% increase sounds a lot more impressive than a 9% increase! Scientists should therefore report both absolute and relative risks, so that we can understand the true size of an effect; of the two, the absolute risk is the more useful. As readers, we should also look for these numbers before drawing any conclusions about harm and good. The terms “hazard” and “odds” are sometimes used in research, but they are just slightly different measures of chance. The message is the same: absolute changes caused by a treatment are often smaller than the relative changes.
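The arithmetic above is simple enough to capture in a few lines. A minimal sketch in Python using the trial’s raw counts (the exact values differ slightly from the rounded percentages quoted above):

```python
def risk_summary(events_new, n_new, events_control, n_control):
    """Absolute risk difference and relative risk for a two-arm
    trial, from raw event counts and group sizes."""
    risk_new = events_new / n_new
    risk_control = events_control / n_control
    return risk_new - risk_control, risk_new / risk_control

# The insemination trial: 54/199 pregnancies with immobilisation
# versus 34/192 in the control group.
arr, rr = risk_summary(54, 199, 34, 192)
# arr ≈ 0.094 (about 9 percentage points)
# rr ≈ 1.53 (about a 53% relative increase)
```

The same pair of numbers tells the absolute story (9 points) and the relative story (roughly 50%), which is exactly why both should be reported together.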
Homelessness and health: four parties, two countries, zero policies
In New York, a 45 percent increase in shelter use over the last 8 years has been reported, with over 39,000 homeless people, including 10,000 homeless families, checking in to city shelters every evening. The phenomenon is not restricted to one major city: it also affects cities such as London, where rough sleeping has risen by 15 percent in the last year, while middle-class homelessness is escalating almost as quickly as the recession. In addition, once-affluent areas such as California are seeing steep rises in the number of homeless.
There are some disquieting facts about the health consequences of being homeless. For a start, homeless people have a greatly increased risk of death. For instance, in Montreal mortality among street youths is nine times higher than expected for men and 31 times higher for women. Many chronic diseases are prevalent, including epilepsy, chronic airways disease, hypertension, and diabetes, and these are often poorly controlled. Both respiratory infections and poor dental hygiene are common, not to mention the trauma of having no control over one’s housing situation. In this week’s BMJ, a Canadian study of mortality among residents of shelters, rooming houses, and hotels reveals that the probability of survival to age 75 years was 32 percent in men and 60 percent in women. For both men and women, the largest differences in mortality rates were for smoking-related diseases, ischaemic heart disease, and respiratory diseases.
So, based on these facts you’d expect the four main parties in the US and UK to have comprehensive strategies for tackling the homeless problem.
UK Conservative party: Apparently Boris Johnson is going to end homelessness by 2012, and his housing minister is already saying it isn’t as big a problem as it was. The Tory shadow housing minister, Grant Shapps, says the payment of housing benefit is the major problem (sounds like cuts to me) and that the homeless are too chaotic to handle their own money. They will probably have to house all the homeless in the Olympic village if they want to achieve their target. If you look at their blueprint, the solution for a future Conservative government is to work across Whitehall to ensure that policy is designed to help rather than hinder homeless people. Having completely ignored the issue in its 2001 and 2005 manifestos, at least the party has it back on the agenda.
UK Labour party: The number of homeless families in Britain has reached a record 100,000, more than double the total when Labour took office in 1997. That is hardly something the Tories can shout about, though, as they doubled homelessness between 1979 and 1997. The National Rough Sleeping Count for 2009 shows 464 people sleeping rough on English streets on any single night, representing a 75% reduction since 1998. Although the Government says it is committed to reducing rough sleeping to as near zero as possible, I think the figures are dubious at best. Overall, Labour has set a target to halve the number of households living in temporary accommodation by 2010. This commitment includes ending the use of bed and breakfast accommodation by local housing authorities, securing suitable accommodation for 16 and 17 year olds, improving access to homelessness mediation across the country, and creating a new national supported lodgings scheme for young people.
US Democratic party: Disappointingly, the Democrats seem to be overwhelmed by healthcare reform (trying to provide healthcare for all), the environment, and security. In February 2009 Obama stated that he would use $10 billion of housing development money "to create green jobs, to revive housing markets with high rates of foreclosure, and curb homelessness." However, when you break the pledge down, there has been some criticism of how much real impact is actually being delivered on the ground.
US Republican party: Surprisingly, in 2000 the George Bush administration began a radical and successful national campaign against chronic homelessness. “Housing first”, they called it, offering rent-free apartments up front. This strategy got much of the credit for a 30% decline in chronic homelessness in the US from 2005 to 2007. Giving housing up front gave homeless people an opportunity to seek work and give something back to society. The problem was that 9/11 took over policy initiatives, and homelessness dropped off the radar.
Homelessness is a problem that greatly affects health and society, and the worrying thing is that it can affect anyone. So the next time a politician turns up at my door, I am going to ask them: what is your policy on such a preventable health problem?