January 2012

EBM at the bedside: bicuspid aortic valves and familial screening

Ami Banerjee
Last edited 20th January 2012

The original proponents of EBM have always argued for “evidence at the bedside” so that we can make the best decisions for patients nearest to the point “where the rubber hits the road”. How often do we clinicians actually look up the evidence in real time during or soon after a consultation to change the management or the advice we give to a patient?

I saw a lady in her 40s in our cardiology clinic this week. She has been followed up every 1-2 years in clinic for bicuspid aortic valve (BAV). Basically, the aortic valve sits at the outflow of the left ventricle (the major pump of the heart) and usually has three cusps which open and close to ensure flow of blood in the right direction through and out of the heart. People with bicuspid valves are born with only two cusps and are more prone to developing narrowing of the valve (“aortic stenosis”) over their lifetime, with a significant probability of needing aortic valve replacement. The idea of screening and surveillance is that any narrowing or malfunction of the aortic valve can be picked up early, and the person can be referred for surgery more promptly and effectively than if their disease had progressed unchecked.

BAV is the most common abnormality of the heart valves, occurring in 1-2% of the general population, and is twice as common in males as in females. Reassuringly, a recent cohort study of patients with BAV found that they have similar survival rates to the normal population. However, “given that serious complications will develop in over a third of patients with BAV, the bicuspid valve may be responsible for more deaths and morbidity than the combined effects of all the other congenital heart defects”. The potential problems are narrowing or leaking of the aortic valve, infective endocarditis and enlargement or “dilatation” of the aorta. In other words, BAV is common, has serious complications and there is a treatment which improves survival (aortic valve replacement). Therefore, BAV is a condition which meets Wilson’s criteria for screening.

I was asked by the lady whether her children were at risk of BAV and whether they should be screened. I did not know the exact answer, so I looked it up online with the patient. There is a 30% risk of aortic dilatation or BAV in first-degree relatives (parents, children or siblings) of people with BAV. A more recent study showed that 20% of first-degree relatives of people with BAV may have undetected BAV themselves. It turns out there are no NICE guidelines or formal UK/European guidelines on whether we should be screening relatives or how we should be doing it.

Interestingly, across the pond, the Americans do have guidelines for familial screening, and the literature seems to support this approach. Adult children of patients with BAV should therefore have an echocardiogram to check that they do not have a BAV, which would mean that they too should be followed up. Valvular heart disease is a bigger health issue than we imagine.

There are four take-home messages for me. First, EBM can be done at the bedside; it is meant to be the most practical of clinical sciences. Second, there is no harm as a clinician in saying “I don’t know” and looking it up. Third, sometimes it is the obvious clinical questions which are still unanswered or debatable. Finally, practice can be changed.

Comparative effectiveness research or lack thereof

Peter Gill
Last edited 15th January 2012

An earlier TrustTheEvidence.net blog post on the geometry of evidence described the importance of network meta-analyses. These indirect methods of analysis compare the results from two or more studies that have one treatment in common when comparative effectiveness (CE) research is lacking.
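To give a rough sense of how such an indirect comparison works (this is a toy illustration, not a summary of any published analysis), a Bucher-style adjusted indirect comparison subtracts the two placebo-controlled effect estimates and adds their variances. In this Python sketch the drugs, effect sizes and standard errors are all invented:

```python
import math

# Hypothetical log odds ratios (and standard errors) for two drugs, each
# estimated against the same placebo comparator in separate trials.
# All values are invented purely to illustrate the arithmetic.
log_or_a_vs_placebo, se_a = -0.40, 0.15   # "drug A" trial
log_or_b_vs_placebo, se_b = -0.25, 0.18   # "drug B" trial

# Bucher-style adjusted indirect comparison: subtract the effect estimates
# and add the variances (the two trials are independent).
log_or_a_vs_b = log_or_a_vs_placebo - log_or_b_vs_placebo
se_a_vs_b = math.sqrt(se_a**2 + se_b**2)

lower = math.exp(log_or_a_vs_b - 1.96 * se_a_vs_b)
upper = math.exp(log_or_a_vs_b + 1.96 * se_a_vs_b)
print(f"Indirect OR, A vs B: {math.exp(log_or_a_vs_b):.2f} "
      f"(95% CI {lower:.2f} to {upper:.2f})")
```

Because the variances add, the indirect estimate is always less precise than a genuine head-to-head trial would be, which is one more reason why comparative effectiveness research matters.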

What is comparative effectiveness research? To quote the US Federal Coordinating Council for Comparative Effectiveness Research Report to President Obama in 2009, it is defined as the:

“generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings”

Additional studies that compare one drug to placebo are not particularly useful when we already know the drug works. The real challenge of evidence-based practice is determining which treatment to use when all ten available drugs are better than placebo. How do clinicians decide which one to prescribe? All too often decisions are built on studies lacking active comparators. This is not high-quality care for patients.

A recent study published in PLoS ONE evaluated trials registered in ClinicalTrials.gov that focused on the top 25 topics identified as priority areas by the US Institute of Medicine (e.g. treatment of atrial fibrillation). The authors looked at studies conducted in the US between 2007 and 2010 and determined the prevalence of CE research.

Despite the importance of this research methodology, only 22% of the studies were CE studies, and their characteristics varied substantially by funding source. Studies funded primarily by industry had the shortest duration of follow-up and were more likely to report positive findings than studies with any government funding.

As usual, children get left out. Industry-funded studies were less likely to enrol children than government- or nonprofit-funded trials. The lack of controlled trials in children is already a problem, and there may be a perception among drug manufacturers that testing drugs in children brings the risk of increased liability.

The authors hypothesise that the increase in CE research will lead to an increase in the number of studies that fail to support new interventions. Not good for big pharma, but why?

First, trials with inactive comparators (i.e. placebo) are more likely to achieve favourable findings. In contrast, CE studies tend to produce more conservative results regarding the superiority of a therapy compared to other active treatments.

Second, industry funded the majority of drug and device CE studies, meaning that most were designed and conducted by the company marketing the product. There is substantial evidence that such studies are more likely to report positive findings supporting the use of the product. The PLoS ONE study provides further evidence that, even in CE research, industry-funded studies were more likely to report an outcome favouring the use of the intervention.

But it’s not all doom and gloom. The US has allocated $1.1 billion to CE research. This added investment of noncommercial funding will be critical for providing unbiased answers and evaluating under-studied populations (e.g. children). It’s about time we were provided with stronger evidence.

Research misconduct: 'alive and well'

Carl Heneghan
Last edited 12th January 2012

The results of the BMJ research misconduct survey were released online today and discussed at the BMJ meeting on research misconduct.

Sara Schroter, a senior researcher at the BMJ, sent an email to 9,036 authors and reviewers on the BMJ database, of whom 2,782 (31%) replied.

The results show that 13% have witnessed or have first-hand knowledge of UK-based scientists or doctors inappropriately adjusting, excluding, altering or fabricating data during their research or for the purpose of publication. Six percent were aware of cases of possible research misconduct at their institution that, in their view, had not been properly investigated.

Rewards and incentives to conduct research operate at the individual, institutional, national and company level, and misconduct occurs at all of these levels. In a previous survey of 3,247 US researchers, 16% admitted to altering the design, methodology or results of their studies due to pressure from an external funding source. In addition, researchers involved with industry were more likely to report one or more of ten serious misbehaviours and to have engaged in misconduct, and were less likely to report financial conflicts.

As the BMJ survey shows research misconduct is 'alive and well'.

At the BMJ meeting today, research misconduct in the UK was discussed by academics, journal editors, policy makers and others.

Why does scientific fraud occur? Among the incidents of scientific fraud that David Goodstein has reviewed, three motives are more or less always present: the individuals were under career pressure, thought they knew what the result would be if they went to all the trouble of doing the work properly, and were working in a field in which studies are not expected to be precisely reproducible.

A case of prolonged research fraud by Diederik Stapel in the Netherlands highlights the closed culture that aids such deception: simply put, misconduct is more likely when there is less scrutiny.

In the morning, Peter Wilmshurst talked about the case of Eastell, who was suspended from Sheffield University, whilst Professor Clara Gumpert of the Karolinska Institute talked about the case of Suchitra Holgersson, a Karolinska scientist who tried to mislead with false documents.

Iain Chalmers talked about the extensive problem of research that remains unpublished: “50% of results remain unpublished.” As far back as 1990, in JAMA, Chalmers published on this exact topic:

“Substantial numbers of clinical trials are never reported in print, and among those that are, many are not reported in sufficient detail to enable judgments to be made about the validity of their results. Failure to publish an adequate account of a well-designed clinical trial is a form of scientific misconduct that can lead those caring for patients to make inappropriate treatment decisions.”

Fiona Godlee, editor of the BMJ, has been instrumental in the BMJ's ongoing commitment to identifying and reporting on research misconduct. She spoke recently on the BBC about the importance and relevance of this exact issue: should all medical research be published?

Who can sort the problem out? Journals and their editors are not in a position to be the custodians of integrity. “Editors are not the individuals to investigate cases of research misconduct and the responsibility lies with the institution,” said Elizabeth Wager, chair of the Committee on Publication Ethics. COPE, as it is known, is a forum for editors and publishers of peer-reviewed journals to address aspects of publication ethics. It also advises editors on what to do in cases of research and publication misconduct.

The morning meeting also covered policies in the US, Sweden and Germany and their different approaches to research misconduct. It seems there is a lot of it going on at the professorial level, but also among PhD students. Watch out for the BMJ survey, coming this afternoon, on research misconduct in the UK amongst clinical researchers. I bet it shows there is substantial misconduct going on. It seems to me the incentives for academics to publish, or in some cases not to, are so great that this will be a hard problem to solve.

Thomas Kuhn wrote in 1962 that scientific advancement is not evolutionary but a "series of peaceful interludes punctuated by intellectually violent revolutions", in which "one conceptual world view is replaced by another": what he referred to as a 'paradigm shift'. Such a shift is needed to force action and find solutions to research misconduct.

The Hippocratic oath originally acknowledged both the harm and the good that doctors and their prescribed treatments can cause. The biggest challenge in today’s clinical practice is not much different. With increasing numbers of trials of different drugs in different patient groups with different comparison groups, how are patients and doctors ever going to see the wood for the trees? How do we make judgments about which drug to use in which situation?

NICE was set up in 1999 to help with these difficult decisions. Broadly speaking, it looks at current trial evidence and uses the metric of “cost-effectiveness” to decide whether to fund drugs and treatments in the NHS. It uses “quality-adjusted life years” (QALYs) to measure effectiveness and then calculates the cost per QALY gained for a given drug. A drug must be effective in treating disease, and the cost of that benefit must fall below a certain threshold, usually £20,000-£30,000 per QALY gained.
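As a back-of-the-envelope illustration of that arithmetic (the costs and QALY gains below are entirely invented and are not drawn from any NICE appraisal), the cost per QALY is simply the extra cost of the new treatment divided by the extra QALYs it buys:

```python
# Hypothetical figures for a new drug versus standard care, invented
# purely to show the calculation (not from any NICE appraisal).
cost_new, cost_standard = 12_000.0, 4_000.0     # lifetime cost per patient (£)
qalys_new, qalys_standard = 6.8, 6.5            # expected QALYs per patient

incremental_cost = cost_new - cost_standard     # extra spend: £8,000
incremental_qalys = qalys_new - qalys_standard  # extra benefit: 0.3 QALYs

cost_per_qaly = incremental_cost / incremental_qalys
print(f"Cost per QALY gained: £{cost_per_qaly:,.0f}")  # about £26,667
```

On these made-up numbers the drug just scrapes inside the usual threshold; a small shift in either estimate would push it outside, which is why the quality of the underlying trial data matters so much.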

One problem is that, in trials, we tend to focus on benefits and not harms. Another is that the performance of drugs in different patients, even by simple characteristics like age and sex, is poorly defined in many trials. Even more importantly, trials often do not report their outcomes based on the baseline disease risk of the patients involved. Therefore we end up “painting all patients with one brush”, which has obvious problems. Cost-effectiveness analysis is only as good as the trials which are studied, and if those trials do not report outcomes (good and bad) properly, then the analysis is difficult.

Atrial fibrillation (AF) is a heart rhythm problem which increases the risk of stroke. Warfarin has been established as a safe treatment for over 50 years and reduces the risk of stroke. However, it does lead to an increased risk of bleeding, including intracerebral bleeds. One way of quantifying the overall benefit of warfarin is therefore to weigh the reduction in stroke risk directly against the risk of intracerebral bleeds as a “net clinical benefit”, as proposed by Singer and his colleagues in 2009. They reported that “Expected net clinical benefit of warfarin therapy is highest among patients with the highest untreated risk for stroke, which includes the oldest age category.” In other words, we should use the drug in the patients with the highest chance of benefiting from it relative to their chance of the adverse outcome (intracerebral bleeds).
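As a simplified sketch of this kind of weighing-up (the event rates below are invented, and the weighting of 1.5 for intracerebral bleeds is an assumed value in the spirit of Singer and colleagues' approach, not a figure taken from their paper):

```python
# Hypothetical annual event rates per 100 patient-years, invented for illustration.
stroke_rate_off_warfarin = 4.0   # ischaemic strokes without treatment
stroke_rate_on_warfarin = 1.5    # ischaemic strokes on warfarin
ich_rate_off_warfarin = 0.3      # intracerebral bleeds without treatment
ich_rate_on_warfarin = 0.6       # intracerebral bleeds on warfarin

# Assumed weighting: how much worse an intracerebral bleed is judged to be
# than an ischaemic stroke (1.5 is an assumption for this sketch).
ICH_WEIGHT = 1.5

strokes_prevented = stroke_rate_off_warfarin - stroke_rate_on_warfarin
bleeds_caused = ich_rate_on_warfarin - ich_rate_off_warfarin

net_clinical_benefit = strokes_prevented - ICH_WEIGHT * bleeds_caused
print(f"Net clinical benefit: {net_clinical_benefit:.2f} "
      "events prevented per 100 patient-years")  # 2.05 on these numbers
```

On these made-up rates the net benefit is positive; for a patient at low untreated stroke risk the same bleeding rates could tip it negative, which is exactly why the benefit is greatest in those at highest risk.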

Currently, three new drugs (dabigatran, apixaban and rivaroxaban) have been evaluated in trials as alternatives to warfarin in AF. Each of these trials looks at different patients and uses different comparisons. In a recent analysis, we used data from the Danish National Patient Registry to work out the net clinical benefit of these drugs, compared with warfarin, at different levels of risk of stroke (potential benefit) and bleeding (potential harm). We also calculated the number of patients needed to treat and the number needed to harm for each drug at each level of risk. Although this is a modelling exercise, this type of analysis is needed in order to look at all the drugs side by side using the best evidence we currently have. The idea of “net clinical benefit” could also be used in other disease areas to quantify to both health professionals and patients how good or bad a treatment is.
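The numbers needed to treat and to harm fall out of the same sort of event rates: each is the reciprocal of the absolute difference in risk between the two treatments. The figures below are again invented purely to show the calculation:

```python
# Hypothetical annual risks on warfarin versus a newer anticoagulant,
# invented purely to show the calculation.
stroke_risk_warfarin = 0.030   # 3.0% per year
stroke_risk_new_drug = 0.022   # 2.2% per year
bleed_risk_warfarin = 0.010    # 1.0% per year (major bleeding)
bleed_risk_new_drug = 0.013    # 1.3% per year (major bleeding)

# Number needed to treat: reciprocal of the absolute risk reduction.
nnt = 1 / (stroke_risk_warfarin - stroke_risk_new_drug)
# Number needed to harm: reciprocal of the absolute risk increase.
nnh = 1 / (bleed_risk_new_drug - bleed_risk_warfarin)

print(f"NNT to prevent one stroke per year: {nnt:.0f}")     # 125
print(f"NNH to cause one extra bleed per year: {nnh:.0f}")  # about 333
```

Expressing results this way makes the trade-off concrete for patients: on these hypothetical numbers, treating 125 patients for a year prevents one stroke, at the cost of one extra major bleed for roughly every 333 patients treated.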
