“It's hard to put the large view into day-to-day medicine. And it's a real tragedy. And you can't sue a doctor for violating an ecosphere, but you can sue for failure to give an antibiotic that you think would have enhanced the possibility of patient survival. It's a real dilemma,” Lappé had said.
A decade before the resistance crisis was acknowledged by mainstream science, he said that medicine and public health were locked in a conflict over drug-induced emergence of new microbes—a conflict that couldn't easily be resolved. It was the physicians' job, Lappé said, to individuate decisions on a patient-by-patient basis. The mission of the doctor was to cure individual cases of disease.[125]
In contrast, public health's mission required an ecological perspective on disease: individuals got lost in the tally of microbial versus human populations.
When Lappé looked at American hospitals in 1980 he didn't see the miracles of modern medicine—heart transplants, artificial knees, CT scans. Lappé saw disease, and microbes, and mutations.
“It's incredible,” Lappé said. “You can go into a hospital and you will have a four in a hundred chance of getting an infection you've never had before, while in that hospital. In some hospitals the odds are one in ten. What you will get in that hospital will be much worse than what you would have been contaminated with at home. They are the most tenacious organisms you can imagine. They can survive in the detergent. They can actually live on a bar of soap. These are organisms that are part of our endgame.”
Decrying improper use of antibiotics as “experiments going on all the time in people, creating genuinely pathogenically new organisms,” Lappé occasionally lapsed into a grim global ecological description of the crisis—a perspective that critics charged in 1981 grossly exaggerated the scope of the problem:
 
Unfortunately, we played a trick on the natural world by seizing control of these [natural] chemicals, making them more perfect in a way that has changed the whole microbial constitution of the developing countries. We have organisms now proliferating that never existed before in nature. We have selected them. We have organisms that probably caused a tenth of a percent of human disease in the past that now cause twenty, thirty percent of the disease that we're seeing. We have changed the whole face of the earth by the use of antibiotics.
 
By the 1990s, when public health authorities and physicians were nervously watching their antimicrobial tools become obsolete, Lappé's book was out of print. But everything he had predicted in 1981 had, by 1991, transpired.
For developing countries, access to still-reliable antibiotics for treatment of everything from routine staph infections to tuberculosis and cholera had reached crisis proportions by the 1990s. In 1993 the World Bank estimated that the barest minimum health care package for poor countries required an annual per capita expenditure of $8.00. Yet most of the least developed countries couldn't afford to spend more than $2.00 to $3.00 per person each year on total health care.[126]
With over 100,000 medicinal drugs marketed in the world (5,000 active ingredients), it was possible for government planners to lose sight of their highest-priority needs, purchasing nonessential agents rather than those necessary for their populations' survival. And the scale of global disparity in drug access was staggering: the average Japanese citizen spent $412 in 1990 on pharmaceutical drugs; the typical American spent $191; in Mexico just $28 per year was spent; Kenyans spent less than $4.00 per year; and Bangladeshis and Mozambicans just $2.00 per year, on average.
It was in the wealthy and medium-income countries where billions of dollars' worth of antibiotics and antivirals were used and misused. And it was in the wealthy nations that resistant strains most commonly emerged. But it was the poor nations, unable to afford alternative drugs, that paid the highest price.
“The development of new antibiotics is very costly,” wrote Burroughs-Wellcome researcher A. J. Slater, “and their provision to Third World countries alone can never be financially rewarding; furthermore, only about 20% of world-wide pharmaceutical sales are to Third World countries. The industry's interest in developing drugs for exclusive or major use in such countries is declining.”[127]
Some poor countries sought to offset rising drug costs and microbial resistance by developing their own pharmaceutical manufacturing and distribution capabilities. In the best-planned situations, the respective governments drew up a list of the hundred or so most essential drugs, decided which could (by virtue of unpatented status and ease of manufacture) be made in their countries, and then set out to make the products. Local manufacture might be carried out by a government-owned parastatal company, a private firm, or—most commonly—a local establishment that was in partnership with or a subsidiary of a major pharmaceutical multinational.
Though such drug policies were strongly supported by all the relevant UN organizations and, eventually, the World Bank, they were considered direct threats to the stranglehold a relative handful of corporations had on the world's drug market. The U.S.-based Pharmaceutical Manufacturers Association, which represented some sixty-five U.S.-headquartered drug and biotechnology companies and about thirty foreign-based multinationals, strongly opposed such policies. In general, the companies—all of which were North American, European, or Japanese—felt that local regulation, manufacturing, marketing restrictions, or advertising limitations infringed on their free market rights.[128]
Given that these companies controlled the bulk of the raw materials required for drug manufacture, and purchase of such materials required hard currency (foreign exchange), most of the world's poor nations were unable to actuate policies of local antibiotic production.[129]
At a time when all forms of bacteremia were on the rise in the poorest nations on earth—notably in sub-Saharan Africa[130]—the governments were least equipped to purchase already manufactured drugs or make their own.
Not all the blame for the lack of effective, affordable antibiotics could be justifiably leveled at the multinational drug manufacturers: domestic problems in many poor nations were also at fault. Distribution of drugs inside many countries was nothing short of abominable. In developing countries, most of the essential pharmaceuticals never made their way out of the capital and the largest urban centers to the communities in need. On average, 60 to 70 percent of a poor country's population made do with less than a third of the nation's medicinal drug supply, according to the World Bank.
Perhaps the classic case of the distribution crisis involved not an antibiotic but an antiparasite drug. During the early 1980s the U.S.-based multinational Merck & Company invented a drug called ivermectin that could cure the river blindness disease caused by a waterborne parasite, Onchocerca volvulus.
About 120 million people lived in onchocerciasis-plagued areas, most of them in West Africa. And WHO estimated that at least 350,000 people were blind in 1988 as a result of the parasite's damage to their eyes.
It was, therefore, an extraordinary boon to the governments of the afflicted region and WHO when Merck issued its unprecedented announcement in 1987 that it would donate—free—ivermectin to WHO for distribution in the needy countries. No drug company had ever exhibited such generosity, and WHO immediately hailed Merck's actions as a model for the entire pharmaceutical industry.
But five years after the free ivermectin program began, fewer than 3 million of the estimated 120 million at risk for the disease had received the drug. Cost was not the issue. Infrastructural problems in transportation and distribution, military coups, local corruption,[131] lack of primary health care infrastructures in rural areas, and other organizational obstacles forced WHO and Merck to privately admit in 1992 that the program to cure the world of river blindness might fail.[132]
The World Bank and many independent economists argued that such problems would persist until developing countries instituted national health care financing policies[133]—a daunting vision given that the wealthiest nation in the world, the United States, only embarked on a course toward implementation of such a policy in 1994. The pharmaceutical industry argued that developing countries had proven woefully unable to produce quality medicinal drugs on an affordable, high-volume basis. Lack of skilled local personnel, overregulation and bureaucratization, corruption, and lack of hard currency for bulk purchase of supplies and raw materials were all given as reasons for developing country inadequacies. Restrictions on multinational access to local markets were doomed, the industry asserted, to exacerbate the situation by denying the populace needed drugs.[134]
From the perspective of developing countries, the pharmaceutical industry and Western governments that acted in its support were solely concerned with the pursuit of profits, and would conduct any practice they saw fit to maintain their monopoly on the global medicinal drug market. Such practices, it was charged, included bribing local doctors and health officials, manipulating pricing structures to undermine local competitors, advertising nonessential drugs aggressively in urban areas, dumping poor-quality or banned drugs onto Third World markets, withholding raw materials and drugs during local epidemics, and declining foreign aid to countries whose drug policies were considered overly restrictive.[135]
While charges and countercharges flew, the crisis in many parts of the world deepened. According to the World Bank, the world spent $330 billion in 1990 on pharmaceuticals, $44 billion of which went to developing countries. The majority of the world's population in 1990 lacked access to effective, affordable antibiotics.
In 1991, with the world facing a tuberculosis crisis, it was suddenly noted that the global supply of streptomycin was tapped out. The second-oldest antibiotic in commercial use was no longer manufactured by any company. Unpatented, cheap, and needed solely in developing countries, it offered no significant profit margin to potential manufacturers. When drug-resistant TB surfaced in major U.S. cities that year, the Food and Drug Administration would find itself in a mad scramble to entice drug companies back into the streptomycin-manufacturing business.
It wasn't just the bacteria and viruses that gained newfound powers of resistance during the last decades of the twentieth century.
“It seems we have a much greater enemy in malaria now than we did just a few years ago,” Dr. Wen Kilama said. The director-general of Tanzania's National Institute for Medical Research was frustrated and angry in 1986. He, and his predecessors, had meticulously followed all the malaria control advice meted out by experts who lived in wealthy, cold countries. But after decades of spending upward of 70 percent of its entire health budget annually on malaria control, Kilama had a worse problem on his hands in 1986 than had his predecessors in 1956.
“More than ten percent of all hospital admissions are malaria,” Kilama said. “As are ten percent of all our outpatient visits. In terms of death, it is quite high, and it is apparent that malaria is much more severe now than before.”
Ten years earlier the first cases of chloroquine-resistant Plasmodium falciparum parasites had emerged in Tanzania; by 1986 most of the nation's malaria was resistant to the world's most effective treatment. Like nearly every other adult in the nation, Kilama had suffered a childhood bout with malaria, fortunately in the days before chloroquine resistance surfaced. Natural immunity to malaria among survivors like Kilama was weak, and whenever he was under stress he would be laid up with malarial fevers.
“It is a very unusual individual in this country who doesn't have chronic malaria,” Kilama said.
Though he was speaking of Tanzania, Kilama might as well have said the same of most of the nations of Africa, Indochina, the Indian subcontinent, the Amazon region of Latin America, much of Oceania, and southern China. Most of the world's population in 1986 lived in or near areas of endemic malaria.
Since the days when optimists had set out to defeat malaria, hoping to drive the parasites off the face of the earth, the global situation had worsened significantly. Indeed, far more people would die of malaria-associated ailments in 1990 than did in 1960.
For example, the Pan American Health Organization and the Brazilian government had succeeded in bringing malaria cases in that country down to near-zero levels by 1960. In 1983 the country suffered 297,000 malaria hospitalizations; that figure had doubled by 1988. Despite widespread use of DDT and other pesticides, the Anopheles darlingi mosquitoes thrived in the Amazon, feeding on the hundreds of thousands of nonimmune city dwellers who were flooding the region in search of gold and precious gems. The situation was completely out of control.[136]
By 1989 Brazil accounted for 11 percent of the world's non-African malaria cases.[137]
A 1987 survey of malaria parasites extracted from the blood of nearly 200 Brazilian patients revealed that 84 percent of the Amazon isolates were chloroquine-resistant; 73 percent were resistant to amodiaquine; nearly all the isolates showed some level of resistance to Fansidar (sulfadoxine/pyrimethamine). Only one then-available drug remained effective against malaria in Brazil: mefloquine.[138]
By 1990 more than 80 percent of the world's malaria cases were African; 95 percent of all malarial deaths occurred on the African continent. Up to half a billion Africans suffered at least one serious malarial episode each year, and typically an individual received some 200–300 infective mosquito bites annually. Up to one million African children died each year of the disease.[139]
And all over the continent the key drugs were failing.
The first reported cases of chloroquine-resistant malaria in Africa were among Caucasian tourists on safari in Tanzania and Kenya during 1978–79.[140] As early as 1981 chloroquine's efficacy was waning among Kenyan children living in highly malaria-endemic areas, and higher doses of the drug were necessary to reverse disease symptoms.[141]
Within two years, truly resistant parasites had emerged in Kenya, and laboratory tests showed that 65 percent of the P. falciparum parasites had some degree of chloroquine resistance.[142]
By 1984 reports of people dying of malaria while on chloroquine, or failing to improve when taking the drug, were cropping up all over the African continent: from Malawi,[143] Namibia,[144] Zambia,[145] Angola,[146] South Africa,[147] Mozambique,[148] and locations scattered in between. Public health planners watched nervously, wondering how long chloroquine—the best and most affordable of the antimalarials—would remain a useful drug.
Kilama and his counterparts in other African nations tried mosquito control measures, but the insects quickly acquired their own resistance to the pesticides. They tried eliminating watery breeding sites for the mosquitoes, but, as Kilama put it, “what can you do when these creatures can breed thousands of offspring in a puddle the size of a hippo's foot? During the rainy season there is absolutely nothing.”
Kilama's staff regularly tested children living in northern equatorial districts of Tanzania for chloroquine resistance, and watched in horror as the parasites' sensitivity to the drug declined logarithmically between 1980 and 1986. Furthermore, isolated cases of mefloquine and pyrimethamine resistance were reported in the country.[149]
The CDC developed a simple field test kit for drug resistance that was widely distributed in Africa in 1985. Immediately a picture of the resistance emergence patterns developed. The problem began along coastal areas of East Africa, particularly Zanzibar, Mombasa, and Dar es Salaam. In these areas two factors may have played a role: a highly mobile Asian population that traveled frequently to India and other regions of resistant malaria, and relatively high availability of chloroquine through both legal and black-market venues. From there, resistance spread along the equatorial travel routes connecting traders from Kenya, Tanzania, Malawi, Zambia, Zaire, Burundi, Rwanda, and Uganda—the same trade routes implicated in the spread of the region's AIDS epidemic. The problem eventually spread outward, from Addis Ababa to Cape Town, from Senegal to Madagascar.[150]
Studies of the newly emerging P. falciparum strains showed that the mutations involved in resistance, once present, were permanent features in the parasitic line. The resistance mechanisms involved several different genes: partial insensitivity could result from a single mutation, total resistance from two or more. Wherever the single-mutation somewhat insensitive strains emerged, fully resistant mutants soon followed.
The mutants seemed to grow faster in laboratory cultures than did normal P. falciparum, indicating that they might have acquired some type of virulence advantage.
And finally, resistance was cropping up not only in regions where chloroquine was heavily used but also among people who rarely took the drug. That implied that the mutation and emergence didn't require heavy selection pressure. And it also posed serious questions about what policies governments should pursue to preserve the utility of the precious drug.[151]
By 1990 chloroquine resistance was the rule rather than the exception in most malarial regions of Africa. In addition, physicians noticed that chloroquine-resistant strains of P. falciparum seemed somewhat insensitive to treatment with quinine or quinidine, probably because of the chemical similarities of the three drugs.[152]
Between 1988 and 1990 a seemingly new disease emerged—cases of lethal adult cerebral malaria in East and Central Africa. Individuals who had acquired some degree of immunity during childhood would suddenly as young adults be overtaken with fever and the demented behavior produced by parasitic infection of the brain. The suddenness of both the onset and death in such cases was startling.
In most cases the cerebral malaria victims had lived their adult lives in urban areas, far from their childhood villages and daily exposure to bloodsucking mosquitoes. Once a year, perhaps, they would return to their old village home to visit relatives and would be reexposed to the parasites. As far as the parasites were concerned, these city dwellers, though African, were no less vulnerable to malaria than a Caucasian tourist. The immunity to P. falciparum disappeared within twelve months—or less—in the absence of regular reexposure to the parasites. There was no absolute protective immunity in anyone exposed to malaria—nothing akin to the lifelong immunity that resulted from a smallpox vaccine.
As the death toll among young adults increased, so did the economic costs. In 1993 the World Bank estimated that the loss of productive adult workers to malaria could within two years cost African economies $1.8 billion—a staggering figure for such impoverished societies.[153]
Mortality due to malaria in 1993 was at an all-time high in Africa.
“Cerebral malaria is now estimated to be responsible for a fatality rate of more than twenty percent of malaria cases, even in urban areas … . Mortality and morbidity rates due to malaria, as monitored in specific countries, appear to be increasing. For example, reported deaths due to malaria increased from 2.1 percent in 1984, to 4.8 percent in 1986, to 5.8 percent in 1988 in Zaire. Malaria deaths as a percent of mortality in Zaire increased from 29.5 percent in 1983, to 45.6 percent in 1985, and to 56.4 percent of all mortality in 1986,” reported the American Association for the Advancement of Science.[154]
Resistance to chloroquine, mefloquine, Fansidar, quinine, trimethoprim, and quinidine was rising rapidly, with some areas (particularly Zaire) reporting that virtually all malaria cases were caused by chloroquine-resistant strains by 1990.
Uwe Brinkmann, then at the Harvard School of Public Health, was watching the steady rise in malaria cases, resistance, and deaths, and in 1991 set out to calculate the toll the newly emergent P. falciparum were taking in Africa, in both direct medical and indirect societal costs. He predicted that by 1995 malaria would be costing most sub-Saharan African countries 1 percent of their annual GDPs.[155]
By 1995 in Rwanda, Brinkmann's group predicted, “the direct cost of malaria per capita will exceed [Ministry of Health] expenditure per capita.” This posed an obvious question: What will societies do when their malaria burden exceeds all available hospital beds, drugs, health providers, and finances?
At the CDC, where Kent Campbell, Joel Breman, and Joe McCormick were devoting their attention by the close of the 1980s entirely to the malaria problem, a new issue cropped up.
“What is malaria?” Campbell asked. “If a population is universally infected, and periodically ill, what exactly is the disease we call malaria?”[156]
It wasn't an academic question. By the late 1980s the CDC scientists and their African counterparts were witnessing a dangerous new malaria disease paradigm on the continent. Small children who suffered fevers were immediately given chloroquine by their parents—easily obtained either from government clinics or on the black market. The kids would recover from their fevers, but the partially resistant parasites would remain in their bodies. Unable to mount a strong immune response, the children would suffer more bouts of severe malaria, receive additional doses of chloroquine or quinine, and continue to harbor the parasites. Over time, the parasite load would build to critical levels in their blood, causing significant damage to their red blood cells.
Just a decade earlier all African children had either died or survived severe malaria in their first weeks of life, and from then on been at no greater malarial risk than adults. However, thanks to chloroquine treatments, by the late 1980s tens of thousands—perhaps millions—of African children were surviving those early malarial fever bouts, but lapsing into fatal anemia episodes at later ages ranging from six months to nine years. The only way to save such a child's life would be to transfuse huge amounts of nonmalarial blood as quickly as possible into his or her body.
In 1985, Kinshasa's Mama Yemo Hospital performed about 1,500 such pediatric transfusions; a year later, that figure leapt to 6,000.
By the end of 1986, one out of every three children admitted to Mama Yemo suffered from a chloroquine-resistant strain.
That was also when pediatric AIDS started to soar in Kinshasa.
In an anemia crisis, seconds matter in the race to save a child's life. Even if an impoverished African clinic had the tools to test donated blood, they didn't have the time. Typically doctors would simply grab a relative whose blood type matched appropriately, and pump blood straight from the donor to the child. A review of 200 children who were transfused in this way at Mama Yemo in 1986 showed that 13 percent got infected with HIV as a result.
“The doctors knew they were transmitting AIDS,” Campbell explained. “But they were trying to ensure the survival of these children. It's a crapshoot, it really is. They saw them die day in and day out then, so they were making a clinical decision that was the best that they could do.”[157]