Keiko Yamaguchi’s troubles began with diarrhea. After a few weeks, her toes went numb. The numbness and weakness crept up her legs, to her hips, and her vision began to fail. That was in early 1967. By the end of 1968, Yamaguchi, just 22 years old, was blind and paralyzed from the waist down.
She was one of more than 11,000 people, most of them in Japan (with reported cases also occurring in Great Britain, Sweden, Mexico, India, Australia, and several other nations), who were struck by a mysterious epidemic between 1955 and 1970. An estimated 900 died of the disease, which doctors eventually named SMON, for subacute myelo-optic neuropathy — myelo from the Greek word referring to the spinal cord; optic referring to vision; and neuropathy indicating a disease of the nerves.
The illness usually started with bouts of diarrhea and vomiting. Some patients, like Yamaguchi, became paralyzed and blind. (My efforts to track her down have been unsuccessful.) An uncertain number developed “green hairy tongue”: their tongues sprouted what looked like tiny green hairs. Some of the afflicted also passed green urine. Family members came down with the disease, as did the doctors and nurses who treated it. Approximately 5 to 10 percent of SMON patients died.
What was causing the outbreak? During the 1960s, Japan — where SMON was concentrated — launched vigorous research efforts to find out. Doctors thought an answer was at hand when a researcher studying SMON patients announced that he’d isolated an echovirus, a type of virus known to cause intestinal problems. But soon other viruses were found in patients, including Coxsackie viruses and a herpes virus. The herpes finding was compelling, since those viruses are known to affect the nervous system. But one by one, each claim was disproved when independent researchers were unable to replicate the earlier laboratory findings.
Other possible causes were considered and shot down. A pathogen in the drinking water? None was detected. Pesticides? That hypothesis was discarded when a study found that farmers, who would have had the greatest exposure, had lower rates of SMON than non-farmers. There was some excitement when researchers found that many victims had taken two types of antibiotics, but it seemed unlikely that two different antibiotics would both suddenly cause the same highly unusual disease. Besides, experts noted, some patients took the antibiotics only after developing symptoms of SMON.
Then, in late 1970, three years after the drug theory was dismissed, a pharmacologist made a forehead-slapping discovery. The two presumably different antibiotics, it turned out, were simply different brand names for clioquinol, a drug used to treat amoebic dysentery. The green hairy tongue and green urine had been caused by the breakdown of clioquinol in the patients’ systems. One month after the discovery, Japan banned clioquinol, and the SMON epidemic — one of the largest drug disasters in history — came to an abrupt end.
It appeared that the epidemic was concentrated in Japan in part because the drug was routinely used not just for dysentery, but to prevent traveler’s diarrhea and various forms of abdominal upset; and in part because Japanese doctors prescribed the drug at far higher doses and for longer periods than was customary in other countries.
The illusion that SMON was an infectious disease was compelling: When patients with abdominal upset or diarrhea were treated with clioquinol and developed SMON, family members, doctors, and nurses often took the drug thinking it would protect them — inadvertently creating the very disease they feared. The resulting cluster outbreaks made SMON look like an infectious disease. In short, what people thought was a cure for SMON was in fact its cause.
Few doctors know the story of SMON, and perhaps even fewer use the catchphrase “cure as cause.” Yet the phenomenon is more relevant today than ever. A study published last year suggests that medical interventions, including problems with prescribed drugs and implanted medical devices — from cardiac stents to artificial hips and birth control devices — are now the third leading cause of death in the U.S.
Examples abound in virtually every specialty, from cardiology to psychiatry to cancer care. Jerome Hoffman, an emeritus professor of medicine at UCLA, says it isn’t surprising: Because drugs and medical devices target disordered body systems, it’s all too easy to overshoot and make the disorder worse.
In the 1980s and 1990s, for instance, doctors widely prescribed heart rhythm drugs to prevent the abnormal heartbeats called premature ventricular contractions (PVCs) from triggering deadly ventricular fibrillation. The drugs were quite good at suppressing the abnormal beats, and doctors believed they were saving lives. But in 1989, the Cardiac Arrhythmia Suppression Trial, or CAST, sponsored by the National Institutes of Health, demonstrated that although the drugs effectively suppressed PVCs, the PVCs that did occur in treated patients were much more likely to trigger deadly rhythms. Treated patients were 3.6 times as likely to die as patients given a placebo. The drugs could fix the PVCs but kill the patient; as the old joke goes, the operation was a success but the patient died. The problem remained invisible for more than a decade because doctors assumed that when a patient died suddenly, it was from the underlying heart condition — not the treatment they prescribed.
In another case of cure as cause, a landmark study of Prozac to treat adolescent depression found that it increased overall suicidality — the very outcome it is intended to prevent. In the study, 15 percent of depressed adolescents treated with Prozac became suicidal, versus 6 percent of those treated with psychotherapy and 11 percent of those given a placebo. Neither Eli Lilly, the manufacturer, nor the lead researcher, who declared Prozac “the big winner” in the treatment of depressed teens, made these numbers obvious. Doctors, unaware that the drug could increase suicidality, often raised the dosage when teens grew more depressed in treatment, assuming the underlying depression — not the drug — was at fault. Studies of other drugs in the same class as Prozac, the selective serotonin reuptake inhibitors, or SSRIs, have shown similar problems.
There are many other instances of cure as cause: cardiac stents that caused clots in the coronary arteries; implanted pacemaker-defibrillators that misfired or failed to fire, causing deadly heart rhythms; and vagus nerve stimulators to treat seizures that instead have led to increased seizures.
One of SMON’s lessons is the danger of perverse financial incentives. Japanese doctors were paid for each prescription they wrote, a practice considered unethical in most peer nations, and doctors in some prefectures can still sell drugs directly to their patients. No wonder they prescribed clioquinol at such high doses for such prolonged periods.
More than half of doctors in the U.S. receive money or other blandishments from Big Pharma and device manufacturers. The amounts can be stupendous: Some doctors have received tens of millions of dollars to implant certain devices or to promote certain drugs. Such influence takes a toll on the patients exposed to harmful treatments. The nonprofit group Institute for Safe Medication Practices conducted a study to quantify drug harms and concluded that prescribed medicines are “one of the most significant perils to human health resulting from human activity.” With the rise of the medical-industrial complex and its extraordinary profits, industry has a vested interest in blaming bad outcomes on a patient’s underlying disease rather than on its own products.
Industry claims often mislead doctors and patients alike. Ciba-Geigy, the main manufacturer of clioquinol, said the drug was safe because it couldn’t be absorbed into the bloodstream from the intestines. Yet legal filings from a lawsuit against the company show that Ciba-Geigy was aware of the drug’s harmful effects for years. As early as 1944, clioquinol’s inventors said the drug should be strictly controlled and limited to 10 to 14 days’ use. In 1965, after a Swiss veterinarian published reports that dogs given clioquinol developed seizures and died, Ciba was content to issue a warning that the drug shouldn’t be given to animals.
In the U.S., pharma’s influence over what doctors and the public believe about drugs and devices has grown enormously, as virtually all research is now conducted by industry and genuinely independent research has all but vanished. In 1977, industry sponsorship provided 29 percent of funding for clinical and nonclinical research; estimates today put that figure at around 60 percent. Even most “independent” research, such as that conducted by the National Institutes of Health, is now “partnered” with industry, making our reliance on industry claims nearly complete.
Stemming the tide of medical interventions that do more harm than good will require a deep examination of cure as cause — and a willingness to stop depending on the industry that perversely promotes it.
Jeanne Lenzer is an award-winning medical investigative journalist, a former Knight Science Journalism fellow, and a frequent contributor to the international medical journal The BMJ.