
September 2010

Can a virus ever make you fat?

Ami Banerjee
Last edited 24th September 2010

On both sides of the Atlantic, obesity, particularly in childhood, is a growing problem (no pun intended). So earlier this week, when new research claimed to associate a virus that causes the “common cold” with the development of obesity, the media took interest in both the UK and the US.

Jeffrey Schwimmer, the lead researcher, was quoted on the BBC:
"It is time that we move away from assigning blame in favour of developing a level of understanding that will better support efforts at both prevention and treatment. These data add credence to the concept that an infection can be a cause or contributor to obesity.”

That is big chat. As with all papers published in major journals, the abstract (or summary) of the paper is available on PubMed for free. I used the abstract to examine these claims a bit further.

The authors set out to compare blood levels of antibodies to an adenovirus (AD36) in children who were obese versus those who were not. The first problem is that they did a “cross-sectional study”, which means they took a snapshot of their patients at a single point in time rather than following them up over a length of time. That means we can deduce nothing about the virus (the “exposure”) causing obesity (the “outcome”), since we are not following the children over time from the onset of infection with the virus. At best, we can talk about an association or a link. Secondly, they studied children from 8 to 18 years of age. Children at eight are very different to children at eighteen, so you might expect the effect of infection at different stages of childhood to differ. So why are they lumping children of all ages together?

In the results, only 124 children were studied, and we have no idea how many patients were excluded from the original recruitment. Half of the 124 children were obese. Before we go any further: antibodies to AD36 were present in 15% of the children. In other words, any comments made about the relationship between the antibody and obesity are based on 19 children. That does not seem a big enough number to be making any claims.

The paper’s main findings are: “The majority of children found to be AD36-positive were obese (15 [78%] of 19 children). AD36 positivity was significantly (P = .05) more frequent in obese children (15 [22%] of 67 children) than nonobese children (4 [7%] of 57 children)”. Again, we are looking at only 19 children who had viral antibodies. In addition, the p-value is only just statistically significant (p = 0.05). You do not have to read the whole paper to see the limitations of this research. Bottom line: regular Big Macs and lack of exercise are still much more likely to cause obesity in childhood than the common cold.
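To see how thin the numbers are, the counts quoted in the abstract can be laid out as a simple 2×2 table. This sketch uses only the figures above and just reproduces the reported proportions; it does not attempt the authors’ significance test:

```python
# 2x2 table from the abstract: AD36 antibody status vs obesity
obese_pos, obese_n = 15, 67   # obese children: AD36-positive / total
lean_pos, lean_n = 4, 57      # non-obese children: AD36-positive / total

total_pos = obese_pos + lean_pos  # antibody-positive children overall
print(f"AD36-positive: {total_pos} of {obese_n + lean_n} children studied")
print(f"obese: {obese_pos / obese_n:.0%}, non-obese: {lean_pos / lean_n:.0%}")
# -> AD36-positive: 19 of 124 children studied
# -> obese: 22%, non-obese: 7%
```

Note that the entire “exposed” group, on which every claim about the virus rests, is those 19 antibody-positive children.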

Do You Know Who Frances Kelsey Is? I didn't

Carl Heneghan
Last edited 18th September 2012


I came across this article on Frances Kelsey by Ed Silverman on the excellent Pharmalot site.

He’s right, ‘The odds are that you don’t.’

Fifty years ago, Frances Kelsey transformed the way prescription drugs are regulated. As a new FDA employee she was assigned to review Kevadon, better known by its generic name, thalidomide.

Kelsey at the time questioned the drug’s safety: “It just came with so many extravagant claims that I didn’t believe.” Sounds familiar to me.

Kelsey’s work led to the amendment of the Food, Drug & Cosmetic Act, requiring safety and effectiveness testing and informed consent in clinical trials. At the time, the drug company Merrell was giving thalidomide to more than 1,000 US doctors to distribute to 20,000 patients as part of a so-called investigational trial. Some trial: many patients were not informed they were actually participating in a study.

They say the good die young. I think it’s more like the good don’t retire young: Kelsey retired from the FDA in 2005 at age 90.

There is more on the story at the New York Times.

Special educational needs – a problem with the diagnostic test?

Ami Banerjee
Last edited 16th September 2010

Just over one in five pupils – 1.7 million school-age children in England – are identified as having special educational needs. An Ofsted report this week claims that half of the children labelled as having “special educational needs” (SEN) in UK schools have been labelled incorrectly. There has been much debate in the media about whether this was to raise extra revenue for schools or to save money on teachers’ wages.

Given the implications for the child, the parents and the educational system, I was amazed at how difficult it was to find out how SEN is diagnosed. In practice, children with SEN encompass a wide variety of conditions, from autism and dyslexia to ADHD. SEN is defined by one website as applying to children who:
• have significantly greater difficulty in learning than the majority of children of their age;
• have a disability which prevents or hinders them from making use of educational facilities of a kind generally provided for children of the same age in schools within the area;
• are under compulsory school age and fall within the definitions above, or would do so if special educational provision were not made for them.

Even though this is in the field of education and not directly health-related, I was reminded of the problems of inaccurate diagnosis in evidence-based medicine. The issues of money and stakeholder interests are no different from the conflicts of interest often found in health research. However, the major issue is that children who have been falsely diagnosed with SEN are “false positives”: the way we are diagnosing SEN is not specific enough. Surely there needs to be a debate about the way we diagnose SEN?
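As a rough illustration of what “not specific enough” means, the Ofsted claim can be translated into the specificity of the SEN “test”, under the simplifying assumption (mine, not the report’s) that every unlabelled pupil truly lacks SEN:

```python
# Illustrative sketch with assumed numbers: the implied specificity of
# SEN labelling if half of all labels are wrong.
labelled = 0.21            # just over one in five pupils labelled SEN
false_pos = labelled / 2   # Ofsted claim: half of the labels are incorrect
true_neg = 1 - labelled    # assumption: all unlabelled pupils truly lack SEN

# Specificity = true negatives / (true negatives + false positives)
specificity = true_neg / (true_neg + false_pos)
print(f"implied specificity: {specificity:.1%}")  # -> about 88%
```

Even a specificity that sounds respectable on paper produces enormous numbers of false positives when applied to an entire school-age population.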

Alzheimer's and vitamin B - Intention to treat, often not understood

Carl Heneghan
Last edited 14th September 2012

I have been questioned about the use of intention to treat in the study in PLoS ONE that we reported on yesterday.

The methods in the study state:
‘Centralised telephone randomization by independent statisticians was used with full allocation concealment and minimization for age, gender, baseline TICS-M score and consent for MRI.’


The flow chart in figure 1 shows that 271 participants started the trial: 5 withdrew before treatment and 43 were lost to follow-up. The flow chart then reports that 223 completed the study, of which 187 had a first MRI scan; of these, 7 declined the second scan, leaving 168 for the final analysis.

As you can see, a high proportion of participants were not taken into account in the calculation of effectiveness at the end of the trial.

The trial methods state:

‘we conducted an intention-to-treat analysis for the main outcome in the subgroup that completed both MRI scans (n = 168). Plasma vitamin response was reported from the same group. Serious adverse events were evaluated in the total intention-to-treat group (n = 266/271).’

Performing an intention-to-treat analysis on a continuous measure is not the same as for a dichotomous outcome (dead/alive). For the latter, the denominator would remain the original 271, which tends to deflate the overall effectiveness.
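A hypothetical sketch of why the denominator matters for a dichotomous outcome. The event count below is invented for illustration; only the 168 completers and 271 randomized come from the trial:

```python
# Per-protocol vs strict ITT for a made-up dichotomous outcome.
events = 50        # participants with the outcome among completers (invented)
completers = 168   # completed both MRI scans
randomized = 271   # originally randomized

per_protocol = events / completers   # denominator: completers only
# Strict ITT keeps everyone randomized in the denominator
# (here conservatively counting missing participants as event-free):
itt = events / randomized

print(f"per-protocol: {per_protocol:.1%}, ITT: {itt:.1%}")
# -> per-protocol: 29.8%, ITT: 18.5%
```

The same 50 events look like a much larger effect when the 103 missing participants are simply dropped from the denominator.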

This is what Cochrane tell us about intention to treat:

‘The basic intention-to-treat principle is that participants in trials should be analysed in the groups to which they were randomized, regardless of whether they received or adhered to the allocated intervention.’

Their material also tells us:

‘If your outcome is a continuous measure, imputation of missing data for ITT purposes is more difficult, as there are more than two different possibilities for each participant.

If participants are lost to follow-up then the outcome may not be measured on them. But the strict ITT principle suggests that they should still be included in the analysis. There is an obvious problem - we often do not have the data that we need for these participants. In order to include such participants in an analysis, we must either find out whether outcome data are available for them by contacting the trialists, or we must 'impute' (i.e. make up) their outcomes. This involves making assumptions about outcomes in the 'lost' participants.

Because imputation of missing data in order to perform a full ITT analysis is controversial, it may be best to present only the results for available participants. If you do this, you should also consider the possible effects of the missing participants, either through sensitivity analyses as described here or by discussing the implications in the Discussion of your review.

An alternative approach may be to only analyse the data available, but to consider drop out rate as a marker of trial quality. Whichever approach you use, ensure that it is described in the methods section of the review and that the numbers of participants with missing data are described in the results section and the characteristics of included studies table.’

Thanks to Cochrane and their open learning materials for sorting this issue out.

Is this study intention to treat? I think not. It is a difficult issue, but important to get right when describing results. A sufficient condition for an unbiased comparison is complete data on all randomized subjects.

If you want more info, see Montori’s take in the CMAJ.

‘A new study suggests high doses of B vitamins may halve the rate of brain shrinkage in older people experiencing some of the warning signs of Alzheimer's disease,’ reports the BBC.

A total of 168 participants (85 in the active treatment group; 83 receiving placebo) completed the MRI section of the trial, out of 271 randomized. The study therefore lost a lot of participants – where did they go?

The researchers state the efficacy analyses were performed on the basis of the intention-to-treat principle. But I have serious concerns that they were not.

Intention-to-treat aims to avoid bias arising from drop-outs. For example, if people with worse cognitive decline tend to drop out at a higher rate, even a completely ineffective treatment may appear to provide benefits.

Thus, if you just compare the primary outcome measure before and after treatment only in those who finished the study (forgetting to count those enrolled originally but subsequently excluded or not followed up), the results are likely to be misleading.

So everyone who begins the treatment, in this case vitamin B, should be considered part of the trial and have the outcome assessed.

So what did they do in this study?
The main outcome in the study published in PLoS ONE was the rate of brain atrophy. The requirement was that subjects had both a baseline and a follow-up MRI; the researchers state that they conducted an intention-to-treat analysis for the main outcome in the subgroup that completed both MRI scans (n = 168).

What do you think? Is this intention-to-treat?

Well, if you are like me, you will be thinking this violates the intention-to-treat principle, which seriously affects our ability to believe the results. This is a per-protocol analysis, and much more likely to lead to spurious findings.

If you look at the results for serious adverse events, these were evaluated in the total intention-to-treat group (n = 266/271).

Although treatment with vitamin B significantly slowed the rate of brain atrophy by 30% after 24 months, this is a relative measure; the absolute reduction was 0.32% over the length of the trial. The question is: is this clinically significant? Adherence was about 75%: 17/83 (21%) of the placebo group had taken supplementary folic acid or vitamin B12, and 14/84 (17%) of the active treatment group did not take, or did not absorb, the vitamins, so only 136 participants were defined as biologically compliant.
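A quick check of how a 30% relative reduction and a 0.32% absolute reduction fit together. The atrophy rates below are assumed values chosen to be consistent with those two figures, not numbers quoted directly from the paper:

```python
# Relative vs absolute reduction in brain atrophy (assumed rates).
placebo_rate = 1.08   # % brain volume lost (assumed)
treated_rate = 0.76   # % brain volume lost (assumed)

absolute_reduction = placebo_rate - treated_rate          # 0.32 percentage points
relative_reduction = absolute_reduction / placebo_rate    # ~30%

print(f"absolute: {absolute_reduction:.2f}%, relative: {relative_reduction:.0%}")
# -> absolute: 0.32%, relative: 30%
```

The headline 30% figure only looks dramatic because it is expressed relative to a small baseline rate; the absolute difference is a fraction of a percentage point of brain volume.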

We seem to have lost half of the original group. I am feeling less convinced about the results the more the numbers reduce – the conclusions just don’t add up.

I'd call this an interesting result. But what is needed is a much larger trial with a well-defined, clinically significant outcome. Ideally this would be progression to Alzheimer’s disease.

Can we teach critical appraisal in 30 minutes?

Carl Heneghan
Last edited 7th September 2010

At the centre, we are currently running our teaching workshop. Participants have come from about 20 different countries, and in today’s session we are fortunate that Rod Jackson, Professor of Epidemiology at Auckland, has made the short journey from New Zealand to be with us and present.

Simply put, critical appraisal in 30 minutes includes one picture, two formulas and three acronyms. The picture is the GATE frame, which Rod describes as his whole career.

Making evidence more accessible using pictures © Rod Jackson 2009

The PowerPoint for this talk is in our resource centre under the September 16th Oxford Workshop on Evidence-Based Practice, and the paper is published in the EBM journal.

Oops we’re nearly at 30 minutes.
