
There are many forms of medical research – synthesising new drugs, inventing new technologies, experimenting on animal models of disease, and so on – but the distinguishing feature of clinical science is that it is practised by doctors with a unique access to the ‘experimental subjects' – patients with illnesses. Most clinical science involves observing or measuring in some way the phenomena of disease in a living person, rather than a dead one in the autopsy room, usually with some special technique. Thus, in the post-war years at the Postgraduate Medical School, John McMichael used the cardiac catheter to measure pressures within the heart while Sheila Sherlock used the liver biopsy needle to take specimens of the liver from jaundiced patients to make a more accurate diagnosis. Much of the dynamic of clinical science can be accounted for by new methods of measuring some aspects of human physiology, ranging all the way from the imaging techniques of the CT and MRI scanner for delineating the internal organs, to the ability to measure minuscule levels of hormones and chemicals in the blood in different disease states. The therapeutic revolution added a further major dimension to clinical science, because every new drug or new technology was ‘experimental', so there were considerable opportunities for the clinical scientist to evaluate its effects.

There was certainly much to do. These were uncharted waters: there was little competition from others, since only doctors could do clinical science, while the wards and outpatients were packed with ‘clinical material', the dreadful euphemism for patients whose interesting diseases merited investigation. The bright young doctor only had to collect twenty or thirty patients with one illness or another and encourage them to come to the laboratory, where he could measure something or try out some new treatment whose effects he could assess. The results could then be written up and published in a medical journal.

This is not to belittle clinical science, which certainly expanded knowledge and understanding of the physiological processes of disease, but such a ‘phenomenological approach', as it is called because it involves the observation of the ‘phenomena' of disease, obviously has intellectual limits. There comes a time when there is no more useful knowledge to be gained from doing yet more catheter studies on children with congenital heart disease, or from performing yet more biopsies on patients with jaundice. This saturation of clinical science's potential for further observational studies just happened to coincide with the decline in therapeutic innovation. By the late 1970s clinical science was in serious trouble.

This decline in its fortunes, which would explain its lack of appeal to young doctors, can be illustrated in two ways. The first is to compare the contents of medical journals before and after the End of the Age of Optimism. The January 1970 issue of the British Medical Journal is clinical science ‘writ large', with articles on the value of steroids in treating meningitis, the treatment of blood poisoning (septicaemia) and Royal Free disease (an epidemic of ‘fatigue' at that hospital). There are original contributions describing the value of giving folic acid to pregnant women to prevent miscarriages and a study of patients with an obstruction to the main vein draining to the heart, the inferior vena cava. There is an evaluation of the anti-diabetic drug phenformin, both in controlling diabetes and in helping patients to lose weight; an investigation of the value of injecting hydrocortisone directly into arthritic joints; and an article reporting the effects of the antibiotic tetracycline in exacerbating chronic renal failure. The correspondence section is similarly concerned with clinical matters, with doctors offering their views, based on their own clinical experience, on diverse matters such as facial pain, the management of deep vein thrombosis, the relative merits of different types of treatment for constipation and a new surgical treatment for those afflicted with sweaty armpits. Thus virtually everyone, both specialist physicians and family doctors, reading this issue of the British Medical Journal would have found much of general interest directly touching their everyday practice.2

From the mid-1970s onwards the proportion of space in the BMJ devoted to clinical science fell rapidly. By the 1990s its contents are so different as to be virtually unrecognisable.3 The January 1995 issue, for example, features a massive statistical analysis of the effects of quinine for nocturnal cramp, an epidemiological study linking weight in infancy with the subsequent probability of developing heart disease and a survey of young people's views on drug misuse. There is a ‘controversy piece' on whether obstetricians should see women with ‘normal' pregnancies and an article on ‘informed consent' (informing patients about the pros and cons of a research project prior to obtaining their consent to participate in it). Sandwiched between these discursive pieces there is only one original article that directly relates to clinical practice, an assessment of the value of blood-thinning drugs in the treatment of elderly patients with strokes.4

The second illustration of this marginalisation of clinical science from its previously pre-eminent position within medicine's intellectual life is the changing fortunes of its major research institutions, and in particular, in Britain, the brief and troubled life of what was intended to be its flagship, the Clinical Research Centre, founded in 1970. The CRC was attached to a brand-new district hospital – Northwick Park – in Harrow, North London, for the specific purpose of studying common clinical-science-style medical problems such as bronchitis, heart disease and strokes. ‘The opening of a lavishly equipped hospital and research centre is inevitably an occasion for congratulations,' commented the British Medical Journal a month before its official opening by Her Majesty the Queen. And lavish it certainly was. The capital cost was three times greater than that of a standard hospital. Besides the usual complement of consultant staff there were 134 research posts spread across fourteen research divisions. ‘This was a substantial and essential investment in medical care which it is hoped should enable the Medical Research Council to retain its place as a leader in international medical research.'5

But this ‘lavish' palace of disease, with facilities for research that would have been inconceivable to preceding generations, did not prosper. Perhaps it was ill-conceived to try and create a research institution de novo in this way. The Centre rapidly achieved white elephant status with its high running costs, and its abysmal research record became a major embarrassment. Just over a decade later, in 1986, the decision was made to close it down following the report of a committee which found ‘little prospect of creating the unity of purpose essential for the future development of high-quality clinical research'.6

Comparisons with the achievements of an earlier epoch are inevitable. The important developments in medical research at the Postgraduate Medical School and elsewhere from the 1940s through the 1960s were carried out on a shoestring budget with a fraction of the funds and other resources available to those working at the CRC. There can thus be only two explanations for the disparity in its ‘research productivity'. Either those involved were less intelligent and committed than the preceding generation, which seems unlikely, or the intellectual context within which they were working must have changed so that clinical science had lost its capacity to make substantial contributions to the major problems posed by disease.

Almost a Dead End

It would be absurd to suggest that medical progress had completely ground to a halt by the end of the 1970s. Several of the ‘definitive' moments were still to come, including the discovery of helicobacter as the cause of peptic ulcer, and the role of clot-busting drugs in saving lives following heart attacks. The 1980s would also see the flowering of the new methods of minimally invasive surgery, as well as modest improvements in survival from cancers of the breast and colon.7 And, most importantly of all, the 1980s were a very necessary period of ‘fine-tuning' of the innovations of earlier decades, defining much more precisely the value and indications for their use.

And yet the verdict of the End of the Age of Optimism is inescapable. The therapeutic revolution was faltering. Medicine, like any field of endeavour, is bounded by its concerns – the treatment of disease – and so success necessarily places a limit on further progress. From the 1950s onwards it had advanced exponentially through a positive feedback mechanism, where knowledge gained from one area was applied to another, which in turn was applied to another, culminating in an event, like transplantation of the heart, that depended on half a dozen or more ‘definitive moments'. Once that had been accomplished, cardiac surgery had reached its limits, and there was little further for it to go.

But at least one last major ‘soluble' challenge remains, already illustrated by the discovery of the role of helicobacter in peptic ulcers. There is still a vast ocean of ignorance at the centre of medicine: the causes of virtually all the diseases of early and middle life – multiple sclerosis, rheumatoid arthritis, Parkinson's and myriad others – remain completely obscure. It was precisely this search for ‘causes' that would become the dominant medical paradigm from the early 1980s onwards, and it is to this we now turn.

PART III

The Fall

The value of a historical perspective is that it allows for the ‘wisdom of hindsight', illuminating matters that were not at all obvious at the time. In retrospect it now seems quite clear that, concealed behind the glory days of medicine in the 1970s when the innovations of the previous decades began to be widely applied, significant trends indicated that the continuous onward march of medical progress was coming to an end.

But that is not all, for, again with the wisdom of hindsight, it is possible to see that, simultaneously during the 1970s, the foundations were being laid for an entirely new paradigm to fill the intellectual vacuum left by this decline in therapeutic innovation. This new paradigm emerged quite dramatically in the 1980s, driven by two very different specialties that up till now had only played a marginal role in post-war medicine: epidemiology and genetics. They promised to move beyond the empiricism that had driven the therapeutic revolution to identify the underlying causes of disease. The epidemiologists, with their ‘Social Theory', insisted that most common diseases such as cancer, heart disease and strokes were caused by the social factors of an unhealthy ‘lifestyle' and were thus readily preventable by switching to a healthy diet and reducing exposure to environmental pollutants. As for genetics, or rather ‘The New Genetics' as it became known, a few truly astonishing developments in the 1970s had opened up the possibility of identifying the abnormal genes in several diseases. There is a beguiling complementarity between these two very different types of explanation as they represent, in a different guise, the specific contributions of nature (the gene) and nurture (social and environmental factors) in human development.

The rapidity with which this new paradigm filled medicine's intellectual vacuum is striking testimony to the declining power of empirical therapeutic innovation. In the process, however, the claims of the epidemiologists and geneticists were never properly scrutinised at the outset, even though there were sound theoretical reasons for doubting their validity. Thus The Social Theory might seem plausible enough, but man, as the culmination of millions of years of evolution, is capable of surviving in the most diverse of circumstances. It would thus seem highly improbable that suddenly, in the middle of the twentieth century, he should have become vulnerable to lethal diseases caused by his ‘lifestyle'. Similarly, genetics is unlikely to be an important or modifiable cause of disease, since evolution, operating by the laws of natural selection, ensures that those unfortunate enough to be born with deleterious genes are unlikely to survive long enough to procreate. As it turned out, both The Social Theory and The New Genetics have proved in their different ways to be blind alleys, quite unable to deliver on their promises. Their failure is ‘The Fall' of modern medicine.

1

The Brave New World of The New Genetics

(i) The Beginning

While most medical researchers might concede that progress has slowed in recent years, they will add, optimistically, almost in the same breath, that another golden age is ‘just around the corner'. The source of this optimism is molecular biology, the science of the molecules within our cells. And what are these molecules? Look down a microscope at a cell and you will see in the centre a dark circle – the nucleus – packed with the molecules of DNA that make up our genes, the code of life. Surrounding the nucleus is the cytoplasm of the cell, filled with other specialised molecules, the ‘factories' that transform the messages from the DNA of the gene into the tens of thousands of different proteins, hormones and enzymes of which the human body is made. These molecules are biology's bottom line. There is nowhere further that science can take us. Almost by definition, once we understand the workings of these essential elements of life, all will become clear.

This application of molecular biology to medicine is now commonly known as The New Genetics. Its potential is exemplified by the Human Genome Project, spelling out each of the 3 billion molecules of DNA that make up the genes within each nucleus. Genes code for proteins, so it is then only a matter of working out how these proteins are malfunctioning in diseases like cancer or multiple sclerosis to find ways of putting them right. ‘Genetics research will have the most significant effect on our health since the microbiology revolution of the nineteenth century,' observes John Bell, Nuffield Professor of Medicine at Oxford. It will, ‘like a mechanical army, systematically destroy ignorance,' argues Professor John Savill of Nottingham's University Hospital, and ‘promises unprecedented opportunities for science and medicine'.1
