How to Read a Paper: The Basics of Evidence-Based Medicine

Perhaps the most powerful criticism of EBM is that, if misapplied, it dismisses the patient's own perspective on the illness in favour of an average effect on a population sample or a column of quality-adjusted life-years (QALYs) (see Chapter 11) calculated by a medical statistician. Some writers on EBM are enthusiastic about using a decision-tree approach to incorporate the patient's perspective into an evidence-based treatment choice. In practice, this often proves impossible, because as I pointed out in Section ‘The patient perspective’, patients' experiences are complex stories that refuse to be reduced to a tree of yes/no (or ‘therapy on, therapy off’) decisions.
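
To see what such a decision-tree approach actually asks of the patient, here is a minimal sketch in Python. The option names, probabilities and QALY values are purely illustrative assumptions of mine, not figures from any study cited in this book; the point is only to show the shape of the calculation.

from dataclasses import dataclass

@dataclass
class Branch:
    """One chance outcome on a treatment branch."""
    probability: float  # chance of this outcome, taken from trial evidence
    qalys: float        # patient-weighted value of the outcome (QALYs)

def expected_qalys(branches: list[Branch]) -> float:
    """Expected utility of one option: the probability-weighted sum of values."""
    return sum(b.probability * b.qalys for b in branches)

# 'Therapy on' versus 'therapy off': each option collapses into a few
# yes/no outcomes, each with an (illustrative) probability and utility.
options = {
    "start therapy": [Branch(0.70, 8.0), Branch(0.30, 5.0)],
    "no therapy":    [Branch(0.50, 7.5), Branch(0.50, 6.0)],
}

for name, branches in options.items():
    print(f"{name}: expected QALYs = {expected_qalys(branches):.2f}")

Every branch of the tree reduces the patient's situation to a probability and a single utility number, and it is precisely this compression of a complex illness story into 'therapy on, therapy off' arithmetic that the criticism above is aimed at.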

The (effective) imposition of standardised care reduces the clinician's ability to respond to the idiosyncratic, here-and-now issues emerging in a particular consultation. The very core of the EBM approach is to use a population average (or more accurately, an average from a representative sample) to inform decision-making for that patient. But as many others before me have pointed out, a patient is not a mean or a median but an individual, whose illness inevitably has unique and unclassifiable features. Not only does over-standardisation make the care offered less aligned to individual needs, it also de-skills the practitioner so that he or she loses the ability to customise and personalise care (or, in the case of recently trained clinicians, fails to gain that ability in the first place).

As Spence [1] put it, ‘Evidence engenders a sense of absolutism, but absolutism is to be feared absolutely. “I can’t go against the evidence” has produced our reductionist flowchart medicine, with thoughtless polypharmacy, especially in populations with comorbidity. Many thousands of people die directly from adverse drug reactions as a result'.

Let me give you another example. I recently undertook some research that required me to spend a long period of time watching junior doctors in an A&E department. I discovered that whenever a child was seen with an injury, the junior doctor completed a set of questions on the electronic patient record. These questions were based on an evidence-based guideline to rule out non-accidental injury. But because the young doctors filled in these boxes for every child, it seemed to me that the ‘hunch’ they might have had in the case of any particular child was absent. This standardised approach contrasted with my own junior doctor days 30 years ago, when we had no guidelines but spent quite a bit of our time playing and honing our hunches.

Another concern about ‘EBM done well’ is the sheer volume of evidence-based guidance and advice that now exists. As I pointed out in Section ‘The great guidelines debate’, the guidelines needed to manage the handful of patients seen on a typical 24-h acute take run to over 3000 pages and would require over a week of reading by a clinician [15]! And that does not include point-of-care prompting for other evidence-based interventions (e.g. risk factor management) in patients seen in a non-acute setting. For example, whenever I see a patient between 16 and 25 in general practice, a pop-up prompt tells me to ‘offer chlamydia screening’. Some of my own qualitative work with Swinglehurst [16] has shown how disruptive such prompts are to the dynamic of the clinician–patient consultation.
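
For readers who have never met such a pop-up, the short Python sketch below shows the general shape of a point-of-care prompt rule of this kind. The age band, wording and function name are my own illustrative assumptions; this is not the logic of any particular electronic record system.

from datetime import date

def chlamydia_screening_prompt(date_of_birth: date, today: date | None = None) -> str | None:
    """Return the pop-up text if the patient falls within the screening age band."""
    today = today or date.today()
    # Age in completed years, allowing for whether this year's birthday has passed.
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    if 16 <= age <= 25:
        return "Offer chlamydia screening"
    return None  # no prompt fires

# A 19-year-old triggers the prompt whatever the reason for the consultation,
# which is exactly the interruption to the clinician-patient dynamic discussed above.
print(chlamydia_screening_prompt(date(2006, 5, 1), today=date(2025, 6, 1)))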

A more philosophical criticism of EBM is that it is predicated on a simplistic and naïve version of what knowledge is. The assumption is that knowledge can be equated with ‘facts’ derived from research studies that can be formalised into guidelines and ‘translated’ (i.e. implemented by practitioners and policymakers). But as I have argued elsewhere, knowledge is a complex and uncertain beast [17]. For one thing, only some knowledge can be thought of as something an individual can know as a ‘fact’; there is another level of knowledge that is collective – that is, socially shared and organisationally embedded [18]. As Tsoukas and Vladimirou [19] put it:

Knowledge is a fluid mix of framed experiences, values, contextual information and expert insight that provides a framework for evaluating and incorporating new experiences and information. It originates and is applied in the minds of knowers. In organizations, it often becomes embedded not only in documents or repositories but also in organizational routines, processes, practices, and norms.

Gabbay and le May [20] illustrated this collective element of knowledge in the study I mentioned briefly in Section ‘How can we help ensure that evidence-based guidelines are followed?’ on page 138. Whilst these researchers, who watched GPs in action for several months, never observed the doctors consulting guidelines directly, they did observe them discussing and negotiating these guidelines among themselves and also acting in a way that showed they had somehow absorbed and come to embody the key components of many evidence-based guidelines ‘by osmosis’. These collectively embodied, socially shared elements of guidelines are what Gabbay and le May called mindlines.

Facts held by individuals (e.g. a research finding that one person has discovered on a thorough literature search) may become collectivised through a variety of mechanisms, including efforts to make them relevant to colleagues (timely, salient, actionable), legitimate (credible, authoritative, reasonable) and accessible (available, understandable, assimilable), and to take account of the points of departure (assumptions, world views, priorities) of a particular audience.

These mechanisms are elements of the science of knowledge translation – a major topic that is beyond the scope of this book [17] [20–22]. The key point here is that to present EBM purely as the sequence of individual tasks set out in earlier chapters of this book is an over-simplistic depiction. If you are comfortable with the basics of EBM, I strongly encourage you to go on to pursue the literature on these wider dimensions of knowledge.

Why is ‘evidence-based policymaking’ so hard to achieve?

For some people, the main criticism of EBM is that it fails to get evidence simply and logically into policy. And the reason why policies don't flow simply and logically from research evidence is that there are so many other factors involved.

Take the question of publicly funded treatments for infertility, for example. You can produce a stack of evidence as high as a house to demonstrate that intervention X leads to a take-home baby rate of Y% in women with characteristics (such as age or comorbidity) Z, but that won't take the heat out of the decision to sanction infertility treatment from a limited health care budget. This was the question addressed by a Primary Care Trust policymaking forum I attended recently, which had to balance this decision against competing options (outreach support for first-episode psychosis and a community-based specialist nurse service for epilepsy). It wasn't that the members of the forum ignored the evidence – there was so much evidence in the background papers that the courier couldn't get it to fit through my letterbox – it was that values, rather than evidence, were what the final decision hung on. And as many have pointed out, policymaking is as much about the struggle to resolve conflicts of values in particular local or national contexts as it is about getting evidence into practice [23].

In other words, the policymaking process cannot be considered as a ‘macro’ version of the sequence depicted in Section 1.1 (‘convert our information needs into answerable questions…’ etc). Like other processes that fall under the heading ‘politics’ (with a small ‘p’), policymaking is fundamentally about persuading one's fellow decision-makers of the superiority of one course of action over another. This model of the policymaking process is strongly supported by research studies, which suggest that at its heart lies unpredictability, ambiguity, and the possibility of alternative interpretations of the ‘evidence’ [23] [24].

The quest to make policymaking ‘fully evidence based’ may actually not be a desirable goal, as this benchmark arguably devalues democratic debate about the ethical and moral issues faced in policy choices. The 2005 UK Labour Party manifesto claimed that ‘what matters is what works’. But what matters, surely, is not just what ‘works’, but what is appropriate in the circumstances, and what is agreed by society to be the overall desirable goal. Deborah Stone, in her book Policy Paradox, argues that much of the policy process involves debates about values masquerading as debates about facts and data. In her words: ‘The essence of policymaking in political communities [is] the struggle over ideas. Ideas are at the centre of all political conflict… Each idea is an argument, or more accurately, a collection of arguments in favour of different ways of seeing the world’ [25].

One of the most useful theoretical papers on the use of evidence in health care policymaking is by Dobrow and colleagues [26]. They distinguish the philosophical-normative orientation (that there is an objective reality to be discovered and that a piece of ‘evidence’ can be deemed ‘valid’ and ‘reliable’ independent of the context in which it is to be used) from the practical-operational orientation, in which evidence is defined in relation to a specific decision-making context, is never static, and is characterised by emergence, ambiguity and incompleteness. From a practical-operational standpoint, research evidence is based on designs (such as randomised trials) that explicitly strip the study of contextual ‘contaminants’ and therefore ignore the multiple, complex and interacting determinants of health. It follows that a complex intervention that ‘works’ in one setting at one time will not necessarily ‘work’ in a different setting at a different time, and one that proves ‘cost-effective’ in one setting will not necessarily provide value for money in a different setting. Many of the arguments raised about EBM in recent years have addressed precisely this controversy about the nature of knowledge.

Questioning the nature of evidence – and indeed, questioning evidential knowledge itself – is a somewhat scary place to end a basic introductory textbook on EBM, because most of the previous chapters in this book assume what Dobrow would call a philosophical-normative orientation. My own advice is this: if you are a humble student or clinician trying to pass your exams or do a better job at the bedside of individual patients, and if you feel thrown by the uncertainties I've raised in this final section, you can probably safely ignore them until you're actively involved in policymaking yourself. But if your career is at the stage when you're already sitting on decision-making bodies and trying to work out the answer to the question posed in the title of this section, I'd suggest you explore some of the papers and books referenced in this section. Do watch for the next generation of EBM research, which increasingly addresses the fuzzier and more contestable aspects of this important topic.

References

1. Spence D. Why evidence is bad for your health. BMJ: British Medical Journal 2010;341:c6368.
2. Timmermans S, Berg M. The gold standard: the challenge of evidence-based medicine and standardization in health care. Philadelphia: Temple University Press, 2003.
3. Timmermans S, Mauck A. The promises and pitfalls of evidence-based medicine. Health Affairs 2005;24(1):18–28.
4. Agoritsas T, Guyatt GH. Evidence-based medicine 20 years on: a view from the inside. The Canadian Journal of Neurological Sciences 2013;40(4):448–9.
5. Goldacre B. Bad pharma: how drug companies mislead doctors and harm patients. London: Fourth Estate, 2013.
6. Saukko PM, Farrimond H, Evans PH, et al. Beyond beliefs: risk assessment technologies shaping patients' experiences of heart disease prevention. Sociology of Health & Illness 2012;34(4):560–75.
7. Davis C, Abraham J. The socio-political roots of pharmaceutical uncertainty in the evaluation of ‘innovative’ diabetes drugs in the European Union and the US. Social Science & Medicine 2011;72(9):1574–81.
8. Jutel A. Framing disease: the example of female hypoactive sexual desire disorder. Social Science & Medicine 2010;70(7):1084–90.
9. Lugtenberg M, Burgers JS, Clancy C, et al. Current guidelines have limited applicability to patients with comorbid conditions: a systematic analysis of evidence-based guidelines. PLoS One 2011;6(10):e25987.
10. Bull FC, Bauman AE. Physical inactivity: the "Cinderella" risk factor for noncommunicable disease prevention. Journal of Health Communication 2011;16(Suppl. 2):13–26.
11. Garcia-Moreno C, Watts C. Violence against women: an urgent public health priority. Bulletin of the World Health Organization 2011;89(1):2.
12. Clegg A, Young J, Iliffe S, et al. Frailty in elderly people. The Lancet 2013;381:752–62.
13. Boussageon R, Bejan-Angoulvant T, Saadatian-Elahi M, et al. Effect of intensive glucose lowering treatment on all cause mortality, cardiovascular death, and microvascular events in type 2 diabetes: meta-analysis of randomised controlled trials. BMJ: British Medical Journal 2011;343:d4169.
