How to Read a Paper: The Basics of Evidence-Based Medicine

Chapter 15

Getting evidence into practice

Why are health professionals slow to adopt evidence-based practice?

Health professionals' failure to practise in accordance with the best available evidence cannot be attributed entirely to ignorance or stubbornness. Consultant paediatrician Dr Van Someren [1] has described a (now historical) example that illustrates many of the additional barriers to getting research evidence into practice: the prevention of neonatal respiratory distress syndrome in premature babies.

It was discovered back in 1957 that babies born more than 6 weeks early may get into severe breathing difficulties because of lack of a substance called surfactant, which lowers the surface tension within the lung alveoli and reduces resistance to expansion. Pharmaceutical companies began research in the 1960s to develop an artificial surfactant that could be given to the infant to prevent the life-threatening syndrome developing, but it was not until the mid-1980s that an effective product was developed.

By the late 1980s, a number of randomised trials had taken place, and a meta-analysis published in 1990 suggested that the benefits of artificial surfactant greatly outweighed its risks. In 1990, a 6000-patient trial (OSIRIS) was begun, involving almost all the major neonatal intensive care units in the UK. The manufacturer was awarded a product licence in 1990, and by 1993, practically every eligible premature infant in the UK was receiving artificial surfactant.

A generation earlier, another treatment had also been shown to prevent neonatal respiratory distress syndrome: administration of the steroid drug dexamethasone to mothers in premature labour. Dexamethasone worked by accelerating the rate at which the foetal lung reached maturity. Its efficacy had been demonstrated in experimental animals in 1969, and in clinical trials on humans, published in the prestigious journal Pediatrics, as early as 1972. Yet, despite a significant beneficial effect being confirmed in a number of further trials, and a meta-analysis published in 1990, the take-up of this technology was astonishingly slow. It was estimated in 1995 that only 12–18% of eligible mothers were receiving this treatment in the USA [2].

The quality of the evidence and the magnitude of the effect were similar for both these interventions [3, 4]. Why were the paediatricians so much quicker than the obstetricians at implementing an intervention that prevented avoidable deaths? Dr Van Someren [1] considered a number of factors, listed in Table 15.1. The effect of artificial surfactant is virtually immediate, and the doctor administering it witnesses directly the ‘cure’ of a terminally sick baby. Pharmaceutical industry support for a large (and, arguably, scientifically unnecessary) trial ensured that few consultant paediatricians appointed in the early 1990s would have escaped being introduced to the new technology.

Table 15.1 Factors influencing implementation of evidence to prevent neonatal respiratory distress syndrome (Dr V Van Someren, personal communication)

Factor | Surfactant treatment | Prenatal steroid treatment
Perception of mechanism | Corrects a surfactant deficiency disease | Ill-defined effect on developing lung tissue
Timing of effect | Minutes | Days
Impact on prescriber | Views effect directly (has to stand by ventilator) | Sees effect as statistic in annual report
Perception of side effects | Perceived as minimal | Clinicians' and patients' anxiety disproportionate to actual risk
Conflict between two patients | No (paediatrician's patient will benefit directly) | Yes (obstetrician's patient will not benefit directly)
Pharmaceutical industry interest | High (patented product; huge potential revenue) | Low (product out of patent; small potential revenue)
Trial technology | ‘New’ (developed in late 1980s) | ‘Old’ (developed in early 1970s)
Widespread involvement of clinicians in trials | Yes | No

In contrast, steroids, particularly for pregnant women, were unfashionable and perceived by patients to be ‘bad for you’. In doctors' eyes, dexamethasone was an old-hat treatment for a host of unglamorous diseases, notably end-stage cancer, and the scientific mechanism for its effect on foetal lungs was not readily understood. Most poignantly of all, an obstetrician would rarely get a chance to witness directly the life-saving effect on an individual patient.

The above example is far from isolated. Effective health care strategies frequently (although, thankfully, not always) take years to catch on, even amongst the experts who should be at the cutting edge of practice [5–8]. The remaining sections of this chapter consider how we can reduce the time between research evidence appearing and that evidence making a real difference to health outcomes. Be warned—there are no quick fixes.

How much avoidable suffering is caused by failing to implement evidence?

The short answer to this question is ‘a lot’. I recently discovered a paper by Woolf and Johnson [9] in the Annals of Family Medicine, entitled ‘The break-even point: when medical advances are less important than improving the fidelity with which they are delivered’. Their argument is this. Imagine a disease that kills 100 000 people a year. If we demonstrate through research that drug X is effective for this disease, reducing mortality by 20%, it will potentially save 20 000 lives per year. But if only 50% of eligible patients actually receive the drug, the number of lives saved falls to 10 000. They argue that in many cases, we would add more value by increasing our efforts to implement this evidence than by doing more research to develop a different drug whose efficacy exceeds that of drug X.

If you think these figures are speculative, here's a real example quoted from Woolf and Johnson's paper, in which they cite evidence from a meta-analysis of the impact of aspirin after stroke or transient ischaemic attack and a survey of prescribing practice in the USA:

A systematic review by the Antithrombotic Trialists Collaboration reported that the use of aspirin by patients who had previously experienced a stroke or transient ischemic attack reduces the incidence of recurrent nonfatal strokes by 23%. That is, in a population in which 100,000 people were destined to have strokes, 23,000 events could be prevented if all eligible patients took aspirin. McGlynn et al. reported, however, that antiplatelet therapy is given to only 58% of eligible patients. At that rate, only 13,340 strokes would be prevented in the hypothetical population, whereas achieving 100% fidelity in offering aspirin would prevent 23,000 strokes (i.e., 9,660 additional strokes) [9].
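
Woolf and Johnson's arithmetic reduces to a single multiplication: events prevented = baseline events × relative risk reduction × fidelity (the proportion of eligible patients who actually receive the treatment). The short Python sketch below reproduces the figures quoted above; the function and variable names are my own illustration, not anything defined in their paper.

```python
# A minimal sketch of Woolf and Johnson's 'break-even' arithmetic.
# Names are illustrative; the paper itself defines no code.

def events_prevented(baseline_events: float, risk_reduction: float, fidelity: float) -> float:
    """Events prevented = baseline events x relative risk reduction x proportion treated."""
    return baseline_events * risk_reduction * fidelity

# Hypothetical drug X from the text: 100 000 deaths/year, 20% mortality reduction.
print(events_prevented(100_000, 0.20, 1.0))  # 20000.0 lives saved at full uptake
print(events_prevented(100_000, 0.20, 0.5))  # 10000.0 lives saved at 50% uptake

# Aspirin after stroke/TIA: 23% reduction in recurrent non-fatal strokes.
actual = events_prevented(100_000, 0.23, 0.58)  # 13340.0 strokes prevented at 58% uptake
ideal = events_prevented(100_000, 0.23, 1.00)   # 23000.0 strokes prevented at 100% fidelity
print(ideal - actual)                           # 9660.0 additional strokes preventable
```

The same formula makes the break-even comparison concrete: at 58% fidelity, even a hypothetical drug achieving a 30% risk reduction would prevent fewer strokes (17 400) than aspirin offered to every eligible patient (23 000).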

In summary, the amount of avoidable suffering caused by failure to implement evidence is unknown—but it could be calculated using the method set out in Woolf and Johnson's paper. It is encouraging that a growing (although still small) proportion of research funding is now allocated to increasing the proportion of patients who benefit from things we know work.

How can we influence health professionals' behaviour to promote evidence-based practice?

The Cochrane Effective Practice and Organisation of Care Group (EPOC, described in Chapter 10 and online at http://epoc.cochrane.org) have done a thorough job of summarising the literature accumulated from research trials on what is and is not effective in changing professional practice—both in promoting effective innovations and in encouraging professionals to resist ‘innovations’ that are ineffective or harmful. EPOC have been mainly interested in reviewing trials of interventions aimed at redressing potential gaps in the evidence-into-practice sequence.

One of the few unequivocal messages from EPOC's work is that simply telling people about evidence-based medicine (EBM) is consistently ineffective at changing practice. Until relatively recently, education (at least in relation to the training of doctors) was more or less synonymous with the didactic talk-and-chalk sessions that most of us remember from school and college. The ‘bums on seats’ approach to postgraduate education (filling lecture theatres up with doctors or nurses and wheeling on an ‘expert’ to impart pearls of wisdom) is relatively cheap and convenient for the educators but does not generally lead to sustained behaviour change in practice. Indeed, one study demonstrated that the number of reported CME (continuing medical education) hours attended was inversely correlated with doctors' competence [10]!

If, like me, you are interested in the theory underpinning EBM teaching, you will have spotted that the ‘instructional’ approach to promoting professional behaviour change in relation to EBM is built on the flawed assumption that people behave in a particular way because (and only because) they lack knowledge, and that imparting knowledge will therefore change behaviour. Marteau and colleagues' [10] short and authoritative critique shows that this model has neither theoretical coherence nor empirical support. Information, they conclude, may be necessary for professional behaviour change, but it is rarely, if ever, sufficient. Listed below are the psychological theories that Marteau and her team felt might inform the design of more effective educational strategies.

 
• Behavioural learning: the notion that behaviour is more likely to be repeated if it is associated with rewards, and less likely if it is punished.
• Social cognition: when planning an action, individuals ask themselves ‘Is it worth the cost?’, ‘What do other people think about this?’ and ‘Am I capable of achieving it?’.
• Stages of change models: all individuals are considered to lie somewhere on a continuum of readiness to change, from no awareness that there is a need to change through to sustained implementation of the desired behaviour.

More recently, Michie's [11] team extended this simple taxonomy with a smorgasbord of other behaviour change theories taken from cognitive psychology, and Eccles's [12] team (which includes guideline guru Jeremy Grimshaw) applied a similar set of psychological theories specifically to uptake of evidence-based practice by doctors.

What sort of educational approaches have actually been shown to be effective for promoting evidence-based practice? Here's a summary of the empirical literature, based mainly on four systematic reviews of intervention trials [13–16].

a. EBM teaching as conventionally delivered in undergraduate medical education curricula improves students' EBM knowledge and attitudes, but an impact on their performance in dealing with real cases has not been convincingly demonstrated.
b. In relation to qualified doctors, most classroom-based EBM training has little or no impact on their knowledge or critical appraisal skills. This may be because both the training and the tests are non-compulsory, or because the training itself is too little, too superficial, too formulaic, too passive and too removed from practice.
c. More educationally sound approaches, such as ‘integrated’ EBM teaching (e.g. during ward rounds or in the emergency room) or intensive short courses using highly interactive learning methods, can produce significant changes in knowledge, skills and behaviour.
d. However, no direct impact of such courses on any patient-relevant outcome has yet been demonstrated.

Green [17, 18], who has conducted one of the most rigorous primary studies of EBM training ever organised, as well as a national survey of programmes and a critical overview, holds the view that EBM teaching should occur ‘where the rubber meets the road’—that is, in the clinic and at the bedside. He cites adult learning theory to support the argument that EBM teaching must surely be more effective if the learner can relate it to practical problems in the here-and-now and use it for real (as opposed to hypothetical) decision making—and he has also undertaken qualitative work confirming that real-world practical barriers (lack of time, evidence inaccessible when it is needed, an unforgiving organisational culture, etc.) account for much of the theory–practice gap in EBM implementation [19]. The way forward, he suggests, is more work to ensure that evidence is available and readily accessible at the point of care, enabling clinical questions to be raised and answered in a context that optimises active learning.
