Confirmation Bias is Necessary to Any Meaningful Research, Yet Can Blind Researchers to Conflicting Results
Confirmation bias is a bit of a dirty word these days. Typically it is heard amidst accusations of cherry-picking the data or ignoring conflicting research. However, it is hard to conceive of anything meaningful getting done without it, so it is a double-edged sword.
If you want to deliberately support one position by picking confirming research, you have only to go to the Internet. We all know this. Is a vegan diet best? Or is a low carb diet best? There is little overlap. They can’t both be best. We all know that either side is easily supported by choosing the right search words. If we type Ornish Good, Google will oblige us with a page of confirming links, and will just as cheerfully confirm a search for Ornish Bad with links to pages that support the opposite view. Same with Atkins. Google does not equivocate. It wants to give you what you are looking for. Confirmation bias.
But there is another side to this. Without confirmation bias, science would grind to a halt. If some genuinely new knowledge is to be plucked from our mysterious universe, a huge dose of confirmation bias is mandatory.
Introductory psychology texts tend to begin with a definition of the Scientific Method that goes something like this:
- Scientist Formulates Hypothesis
- Scientist Cooks Up Experiment
- Scientist Tests Hypothesis
- Scientist Publishes Results
The book then goes on with all sorts of psychology examples so that you will be assured that psychology is a science, run by scientists, and that you are taking a science class.
Does anyone really believe science operates in this dry dispassionate way?
Do you suppose that when Einstein hypothesized that light had a speed limit, it was just some random notion, and that he could equally have hypothesized that in the southern hemisphere, water flows uphill? Of course not. There had been experiments a few years earlier indicating that light wasn’t behaving as it ought to, and Einstein was looking for a reason why. And looking and looking. To explain it all, he came up with a theory so kooky and counter-intuitive that the most amazing thing is that anyone listened to it. So do you suppose Einstein just stopped thinking about it, saying to himself: “Ok, they think I’m nuts, but there’s my hypothesis. I’ve done my part. On to the next problem.”
Einstein’s whole career, his life work, was going to depend on confirmation of that kooky theory. How do you think he looked at any reported research that came in confirming or denying his theory? Of course he had confirmation bias. He could never have come up with these theories otherwise. He could never have gone forward to the many other breakthroughs he fathered. And neither can anyone else.
However, Einstein didn’t abuse confirmation bias. He couldn’t. Experimental results in physics have to be accepted, pleasant or not, embarrassing or not. Physics is what’s called a hard science. Things are or aren’t true and they are verifiable. In fact, Einstein was wrong about a few things, and had to recant and revise in the face of conflicting evidence. And it was painful for him.
Now Einstein was the greatest scientist of the 20th century. And if the greatest scientist of the 20th century had confirmation bias on steroids, should we expect lesser luminaries to be neutral?
No. Scientists are people and they are all looking for their day in the sun. But things have gotten so bad, at least in medicine, that unbiased or truly objective research is the exception, especially when the results are, let us say, fuzzy, or there is a financial or political angle.
Medical research is considered to be a hard science, but it’s obviously not as ‘hard’ as physics. What’s true medically for one person isn’t necessarily true for another, and this opens the gateway for confirmation bias. We aren’t exaggerating. Let’s look at how confirmation bias is used with a new drug.
Any new drug undergoes clinical trials. Because the trials are double-blind, they are supposed to prevent confirmation bias, but the bias is built in from the very start. All clinical trials have what is called ‘pre-selection’. Pre-selection allows various categories of people to be excluded from the trial.
Now this is nothing other than confirmation bias applied in advance. The drug company will pick the group it thinks will benefit the most. In some fields, this practice would be known as “salting the mine.”
Suppose things go well and the drug gets on the market. That pre-selection confirmation bias is quickly forgotten, and the drug is available to all, including all those who were excluded from the trial, the very ones the drug company thought would do poorly on the new drug.
That drug will be promoted using data from the trial. Suppose it’s a statin. Then you might read of a 33% reduction in heart attacks (out of 1,000 people, 6 heart attacks instead of 9), and then there will be this soft-shoe routine about how the drug was well tolerated, no side effects, no cancers, no problems. Confirmation bias.
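The arithmetic behind that headline number is worth spelling out, because the same trial result sounds very different framed as relative risk versus absolute risk. Here is a minimal sketch, in Python, using only the 9-versus-6 per 1,000 figures quoted above, not data from any actual trial:

```python
# Illustrative arithmetic only: the 9-versus-6 per 1,000 figures are the ones
# quoted in the paragraph above, not results from any specific trial.

group_size = 1000
control_events = 9   # heart attacks per 1,000 people without the drug
treated_events = 6   # heart attacks per 1,000 people on the drug

control_rate = control_events / group_size    # 0.9%
treated_rate = treated_events / group_size    # 0.6%

relative_risk_reduction = (control_rate - treated_rate) / control_rate
absolute_risk_reduction = control_rate - treated_rate
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")  # 33%
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 0.3%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")   # ~333
```

A 33% relative reduction and a 0.3% absolute reduction describe the same three avoided heart attacks per thousand people; the promotion will naturally quote the former.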
Were those researchers actually looking very hard for side effects? Were long-term problems considered? No drug company researcher is going to go very far poking a hole in the company’s pretty new balloon. So the drug goes into production. Millions take it, and only when there is a large enough stack of death certificates do we learn that no one taking the drug lived any longer because of it. And we could be talking decades. In the meantime, any evidence that the drug wasn’t quite living up to expectations is ignored. Confirmation bias.
So confirmation bias launched the drug by excluding those it likely wouldn’t help, and then carried the drug along by ignoring possible flies in the ointment. Only when medical science truly becomes a hard science, when the diagnosis is death, does any evidence emerge that the drug didn’t do quite what was expected.
But even death isn’t hard enough evidence. Doctors keep prescribing the drug. Why? Confirmation bias. They have always done so, genuinely believe it helps, and are not looking for contrary voices. This is not done deliberately. Most doctors don’t spend much time trying to find out which drugs they prescribe are useless.
So confirmation bias was used to launch the drug, and confirmation bias was used to keep it afloat until enough mortality evidence accumulated. And confirmation bias kept the drug alive long after it should have been pulled from the market.
How are we to understand medical research if confirmation bias is so endemic?
This is a tough one. Here’s what we have:
Assume guilty until proven innocent. Confirmation bias is probably a factor if:
- The research was funded by a drug company.
- The research was funded by a group that might have a political agenda.
- The lead researcher has built a career around a particular meme. Any research from Ornish, Atkins, or other dietmongers should be suspect.
Signs of innocence:
- Unexpected results. Normally, strange results are tossed. If the researchers publish them anyway, it immediately shows some scientific integrity and courage.
- A test clearly performed by researchers with no seeming agenda (but look carefully).
Now we want to be clear. We are not implying that researchers are cooking the books, deliberately misleading us. These are serious, honest people. The problem is that confirmation bias is very human. If a group of people is trying to cure some disease with some new drug, they don’t go home every night and surf the Internet looking around for reasons their new drug won’t work. Would you? And there’s no dishonesty here. No one expects them to try to torpedo their own research.
Although the Ornish (vegan) versus Atkins (low carb) diets are a very highly charged political issue, here is one study we trust:
Comparison of the Atkins, Zone, Ornish, and LEARN Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women
Christopher D. Gardner, PhD, et al.
The abstract goes like this…
Context Popular diets, particularly those low in carbohydrates, have challenged current recommendations advising a low-fat, high-carbohydrate diet for weight loss. Potential benefits and risks have not been tested adequately.
Objective To compare 4 weight-loss diets representing a spectrum of low to high carbohydrate intake for effects on weight loss and related metabolic variables.
Design, Setting, and Participants Twelve-month randomized trial conducted in the United States from February 2003 to October 2005 among 311 free-living, overweight/obese (body mass index, 27-40) nondiabetic, premenopausal women.
Intervention Participants were randomly assigned to follow the Atkins (n = 77), Zone (n = 79), LEARN (n = 79), or Ornish (n = 76) diets and received weekly instruction for 2 months, then an additional 10-month follow-up.
So why do we like this one?
- It was published in JAMA, a blue chip journal.
- The researchers were university types with no obvious special interest.
- There doesn’t seem to be any fishy pre-selection going on, just pre-menopausal, non-diabetic, overweight women.
- There doesn’t seem to be much wiggle room. It is simple and direct. Several hundred women were assigned diets and followed up.
Quantitative Medicine (QM) is in the very enviable position of being almost immune to confirmation bias. Why is this?
QM is looking for anything that produces ideal blood test levels, because ideal levels almost always translate to ideal health. Anything goes – if it works. And, with the exception of LDL cholesterol, there is little debate about what the ideal levels should be: for key markers like glucose, triglycerides, and HDL, there is broad agreement on what constitutes ideal. LDL cholesterol is controversial. Much of the medical profession says lower is better. QM says get the low-risk pattern “A” cholesterol, and don’t worry about the LDL cholesterol level itself.
So how to get there? Ornish says a vegan diet will do it. The low carb crowd says the opposite. QM doesn’t care. If vegan works, fine. QM just looks at the numbers. So far, none of the vegans who have come through have had good numbers, and most vegetarians had marginal numbers as well. The low carb people tended to have the best numbers. We go with that, but we verify. If it works, fine; if not, try something else. We know what usually works, but if 100 vegans paraded into the clinic with great numbers, we’d change our minds. QM isn’t married to any particular technique. QM is about an individual finding out how he or she can get the ideal results.
Here’s an example. You will see ‘eat less starch, eat less sugar’ all over the QM site. Why? Most patients have high glucose and insulin, and less sugar/starch always fixes that. But some patients have low insulin. This isn’t good either. Too low and there is no cell renewal, no muscle or bone building. For these people, the prescription is: eat more starch. Eat a potato every day.
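To make that individualized logic concrete, here is a minimal sketch in Python. The cutoff values are hypothetical placeholders, not actual QM targets; the point is that the advice branches on the individual’s own numbers rather than on a population average:

```python
# A rough sketch of the individualized logic described above. The cutoff
# values are hypothetical placeholders for illustration, not actual QM
# targets; what matters is the branching on the individual's own numbers.

def starch_advice(fasting_glucose: float, fasting_insulin: float) -> str:
    HIGH_GLUCOSE = 100.0  # mg/dL, placeholder cutoff
    HIGH_INSULIN = 10.0   # uIU/mL, placeholder cutoff
    LOW_INSULIN = 2.0     # uIU/mL, placeholder cutoff

    if fasting_glucose > HIGH_GLUCOSE or fasting_insulin > HIGH_INSULIN:
        return "Eat less starch and sugar."
    if fasting_insulin < LOW_INSULIN:
        return "Insulin too low for cell renewal: eat more starch, e.g. a daily potato."
    return "Numbers look fine: keep the current regimen and re-test."

# Example: a patient with normal glucose but very low insulin
print(starch_advice(fasting_glucose=90, fasting_insulin=1.5))
```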
We think all medicine will eventually work this way. A person will be diagnosed as an individual, not a statistic, and testing will determine which diets, exercises, medicines, and so on would be best.
So how do you deal with confirmation bias in medical research? Assume it’s there. It is the researcher’s job to convince you it isn’t. Use your common sense, and pay attention to the research setup, how the researchers earn a living, and any political or environmental issues that may be overlaying the work.
So what is the next step if your numbers are good and you still can’t lose the fat?