
The most critical factor in medicine? Human bias


Oncologist and writer Siddhartha Mukherjee suggests that what doctors fight against isn’t so much disease — it’s their own biases.

In the summer of 2003, I finished my three-year residency in internal medicine and began a fellowship in oncology. It was an exhilarating time. The Human Genome Project had laid the foundation for the new science of genomics, the study of the entire genome. And it was nothing short of a windfall for cancer biology.

Cancer is a genetic disease, an illness caused by mutations in genes. Until the early 2000s, most scientists had examined cancer cells one gene at a time. But once we could examine thousands of genes in parallel, the true complexity of cancers became evident. The human genome has about twenty-four thousand genes in total. In some cancers, up to a hundred and twenty genes were altered — one in every two hundred genes — while in others, only two or three genes were mutated. (Why do some cancers carry such complexity, while others are genetically simpler? Even the questions were unexpected — much less the answers.)

The capacity to examine thousands of genes in parallel, without making any presuppositions about the mutant genes, allowed researchers to find novel, previously unknown genetic associations with cancer. Some of the newly discovered mutations in cancer were truly unexpected: it turned out the genes did not control growth directly, but affected the metabolism of nutrients or chemical modifications of DNA. If cancer cells were dependent on mutant genes for their survival or growth — “addicted” to the mutations, as biologists liked to describe it — then targeting these addictions with specific molecules might force cancer cells to die. The battle-ax chemical poisons of cellular growth would become obsolete at last.


The most spectacular example of a new drug, Gleevec, for a variant of leukemia, had galvanized the entire field. I still recall the first patient whom I treated with Gleevec: Mr. K, a 56-year-old man whose bone marrow had been so eaten by leukemia that he had virtually no platelets left. He would bleed profusely from every biopsy that we performed; an oncology fellow had to meet him in the exam room with a brick-size pack of sterile gauze pads, and press on his biopsy site for half an hour to prevent bleeding.

About four weeks after he started treatment with Gleevec, it was my turn to perform his biopsy. I came prepared with the requisite armfuls of gauze, dreading the half-hour to come — but when I withdrew the needle, the wound stopped bleeding by itself. Through that nick of the skin, its edges furling with a normal-looking clot, I could see the birth of a revolution in cancer treatment.

Around the first week of my fellowship, I learned that another such drug, a molecular cousin of Gleevec’s, was being tested in our hospital for a different form of cancer. The drug had shown promising effects in animal models and in early human experiments — and an early trial was forging ahead with human patients.

I had inherited a group of patients on the trial from another oncology fellow who had graduated from the program. Even a cursory examination of the trial patients on my roster indicated a spectacular response rate. One woman, with a massive tumor in her belly, found the masses melting away in a few weeks. Another patient had a dramatic reduction in pain from his metastasis. The other fellows, too, were witnessing similarly dramatic responses in their patients. We spoke reverentially about the drug, its striking response rate, and how it might change the landscape for the treatment of cancer.


Yet six months later, the overall results of the study revealed a surprising disappointment. Far from the 70 or 80 percent response rates that we had been expecting from our data, the overall rate was an abysmal 15 percent. The mysterious discrepancy made no sense … until we looked more deeply at the data. The oncology fellowship runs for three years, and every graduating batch of fellows passes on some patients from his or her roster to the new batch and assigns the rest to the more experienced attending physicians in the hospital. Whether a patient gets passed on to a fellow or an attending doctor is a personal decision. The only injunction is that a patient who gets reassigned to a new fellow must be a case of “educational value.”

In fact, every patient moved to the new fellows was a drug responder, someone whose treatment was proceeding successfully … while all patients shunted to the attending physicians were nonresponders, the patients with the most treatment-resistant, recalcitrant variants of the disease. Concerned that the new fellows would be unable to handle the more complex medical needs of nonresponders, the graduating fellows had moved them all to more experienced doctors. The assignment had no premeditated bias, yet the simple desire to help patients had sharply distorted the experiment.
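The arithmetic of that distortion is easy to sketch. In the toy simulation below, the 15 percent overall response rate is taken from the trial; everything else (the cohort size, the hand-off probabilities) is invented for illustration. A nonrandom hand-off — responders mostly passed to new fellows, nonresponders mostly shunted to attendings — makes the rate seen on the fellows' roster far exceed the true rate:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

TRUE_RESPONSE_RATE = 0.15   # the overall rate the trial eventually reported
N_PATIENTS = 400            # hypothetical cohort size

# True = the patient responds to the drug.
patients = [random.random() < TRUE_RESPONSE_RATE for _ in range(N_PATIENTS)]

# The nonrandom hand-off: graduating fellows usually pass responders to
# the new fellows and shunt nonresponders to attending physicians.
# (The 0.95 / 0.05 split is a made-up illustration, not trial data.)
fellows_roster = [p for p in patients
                  if random.random() < (0.95 if p else 0.05)]

overall = sum(patients) / len(patients)
on_fellows_roster = sum(fellows_roster) / len(fellows_roster)

print(f"true overall response rate:  {overall:.0%}")
print(f"rate on new fellows' roster: {on_fellows_roster:.0%}")
```

No one in the simulation intends to bias anything; the inflation falls out of the selection rule alone, which is exactly the trap the graduating fellows fell into.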

Every science suffers from human biases. Even as we train machines to collect, store and manipulate data for us, humans are the final interpreters of that data. In medicine, the biases are particularly acute, not least because of hope: we want our medicines to work. Hope is a beautiful thing in medicine — its most tender center — but it is also the most dangerous.


But don’t prospective, controlled, randomized, double-blind studies eliminate all these biases? The very existence of such a study — in which both control and experimental groups are randomly assigned, patients are treated, and both doctors and patients are ignorant of the treatment — is a testament to how seriously medicine takes its own biases, and what contortions we must perform to guard against them (in few other scientific disciplines are such drastic measures used to eliminate systematic biases). The importance of such studies cannot be overemphasized. Several medical treatments thought to be deeply beneficial to patients based on strong anecdotal evidence, or decades of nonrandomized studies, were ultimately proved to be harmful based on randomized studies. These include, among other examples, the use of high-dose oxygen therapy for newborns, antiarrhythmic drugs after heart attacks, and hormone-replacement therapy in women.

Yet the reverential status of randomized, controlled trials in medicine is its own source of bias. The BCG vaccine against tuberculosis was shown to have a potent protective effect in a randomized trial, but the effectiveness of the vaccine seems to decrease almost linearly as we move in latitude from the North to the South — where, incidentally, TB is the most prevalent (we still don’t understand the basis for this effect, although genetic variation is the most obvious culprit). Virtually every day I’m asked to decide whether a particular drug will work for a patient — an African-American man, say — when the trial was run on a population of predominantly white men in Kansas. Women are notoriously underrepresented in randomized studies. In fact, female mice are notoriously underrepresented in laboratory studies. [See Paula Johnson’s TED Talk, His and hers … healthcare.] Extracting medical wisdom from a “randomized” study thus involves much more than blithely reading the last line of the study published in a medical journal. It involves human perception, arbitration and interpretation — and hence involves bias.

New medical technologies will not diminish bias. They will amplify it. More human arbitration and interpretation will be needed to make sense of studies — and thus more biases will be introduced. Big data is not the solution to the bias problem; it is merely a source of more subtle (or even bigger) biases.

Perhaps the simplest way to tackle the bias problem is to confront it head-on and incorporate it into the very definition of medicine. The romantic view of medicine, particularly popular in the nineteenth century, is of the doctor as a “disease hunter” (in 1926, Paul de Kruif’s book Microbe Hunters ignited the imagination of an entire generation). But most doctors don’t really hunt diseases these days. The greatest clinicians I know seem to have a sixth sense for biases. They understand, almost instinctively, when prior bits of scattered knowledge apply to their patients — but, more important, when they don’t apply to their patients. They understand the importance of data and trials and randomized studies, but are thoughtful enough to resist their seductions. What doctors really hunt is bias.


