Understanding Medical Research – Part 1 – Spotting Doubtful Research

Four Warning Signs of Bogus or Doubtful Medical Research

Some medical research is deliberately misleading. Some supports a political agenda; some supports a commercial agenda. Here we explain how to separate the wheat from the chaff, or, if you are a low-carb type, how to find that diamond in the rough. Much of the popular medical press, including major media outlets, quotes medical research results rather indiscriminately, so it helps a lot to have a sharp focus on what the actual results mean.

Statistics frequently turn up in medical research. Don’t worry if you do not have a lot of knowledge about statistics. Most medical researchers don’t either. (You probably should worry about that.) Many just plug results into a standard stats package and hope for the best. This is a dangerous practice and leads to a lot of fiddling around to get a good result (also known as ‘data mining’). Some basic statistics will be covered in a future post. Beyond the ‘tricks’ of misreporting actual research results, keep in mind that publishing is a human enterprise and thus contaminated by human foibles like pride and greed; there is no glory in reporting that a drug or intervention had no benefit, so torturing the data until the researcher looks good is all too common.

Here are four common medical research tricks to watch for.

1) Slice and Dice. Suppose a new drug, we’ll call it Liftyurdo, has no overall benefit for the 10,000 people that took it. To use the slice and dice technique, see if it helped women, or men, or people over 50, or under 50, or people with some previous condition. It is well known that any data, if sufficiently tortured, will yield the result you are asking for. Eventually, if the group is winnowed down enough, something will turn up like this: “New Liftyurdo reduces colon cancer by 15% for women over fifty that previously smoked and had more than two children.” To detect “Slicing and Dicing”, look at the detail of the result or end point. Was the narrowing of the result group explained, or does it seem arbitrary? Was the result they were looking for stated clearly up front? If the narrowing is unexplained, or the end point was not clear in advance, slicing and dicing may have occurred.
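For the statistically inclined, the subgroup hunt can be sketched as a toy simulation (every patient, trait, and number here is invented for illustration): give a drug no real effect at all, then check enough subgroups and a striking “result” will usually turn up by chance alone.

```python
import random

random.seed(1)

# Hypothetical trial of a drug with NO real effect: every patient
# recovers with probability 0.5 whether treated or not.
n = 1000
patients = [
    {
        "treated": random.random() < 0.5,
        "recovered": random.random() < 0.5,
        "sex": random.choice(["M", "F"]),
        "over_50": random.random() < 0.5,
        "smoker": random.random() < 0.5,
    }
    for _ in range(n)
]

def recovery_gap(group):
    """Recovery rate of treated minus untreated within a group."""
    treated = [p for p in group if p["treated"]]
    control = [p for p in group if not p["treated"]]
    if len(treated) < 10 or len(control) < 10:
        return 0.0
    rate = lambda g: sum(p["recovered"] for p in g) / len(g)
    return rate(treated) - rate(control)

# Overall: essentially no effect, as designed.
print(f"overall gap: {recovery_gap(patients):+.3f}")

# Slice and dice: test every combination of the three traits.
best = 0.0
for sex in ["M", "F"]:
    for over in [True, False]:
        for smoke in [True, False]:
            sub = [p for p in patients
                   if p["sex"] == sex and p["over_50"] == over
                   and p["smoker"] == smoke]
            gap = recovery_gap(sub)
            if abs(gap) > abs(best):
                best = gap
# The winning subgroup's gap is typically far larger than the
# overall gap, purely because we went looking for it.
print(f"best subgroup gap: {best:+.3f}")
```

The smaller the slices, the noisier each slice's estimate, so the “best” slice is almost guaranteed to look impressive even when nothing is going on.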

2) Inappropriate Sample Size. Small improvements need large sample sizes (and vice versa). If some activity reduces a heart risk by 5%, for instance, you will likely need a cohort of several thousand to ‘prove’ the benefit. On the other hand, if some new pill caused 9 people out of 10 to get over a cold in a day, that is a strong result, even though there weren’t many participants. To detect “Inappropriate Sample Size”, look at the result. Were a lot of people cured (a small sample size is meaningful), or was there just a small percentage improvement (a large number is needed)?
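The trade-off can be put in rough numbers with the standard two-proportion sample-size approximation (a back-of-the-envelope sketch, not a substitute for a proper power calculation; the 10% baseline rates below are assumed figures, not from any real trial):

```python
import math

def n_per_arm(p1, p2, alpha_z=1.96, power_z=0.84):
    """Rough per-arm sample size to distinguish two event
    proportions p1 and p2 (normal approximation, two-sided
    alpha = 0.05, 80% power)."""
    p_bar = (p1 + p2) / 2
    d = abs(p1 - p2)
    return math.ceil(
        2 * (alpha_z + power_z) ** 2 * p_bar * (1 - p_bar) / d ** 2
    )

# Small effect: a 10% heart-event rate cut by 5% (relative) to 9.5%
# needs tens of thousands of participants per arm.
print(n_per_arm(0.10, 0.095))

# Big effect: 10% of colds clear in a day untreated vs. 90% on the
# pill needs only a handful of participants per arm.
print(n_per_arm(0.10, 0.90))
```

Same formula, wildly different answers: the required sample size grows with the square of one over the effect size, which is why tiny claimed benefits backed by modest cohorts deserve suspicion.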

3) Deliberately Asking the Wrong Question. The food industry has managed to con most of America and Europe into believing whole-grain is good for you. They do this by funding research comparing whole grain to refined grain. You never see any research about the health benefits of whole grain versus NO grain. (Just try to find some.) Though whole-grain is slightly better than refined grain, it is still equivalent to eating sugar directly, and if you have a sugar issue of any sort, whole-grain will cause just as much damage. To detect “Deliberately Asking the Wrong Question”, you have to ask: “What possible means could be tried to attain the benefit?” For instance, if it’s statins and heart disease, ask: “Regarding heart disease, what would I like to see statins compared to?” Could be another pill, but could also be diet, exercise, or eating peanuts. If they’re instead comparing statins to some other drug, they have already led you to the presumption that you have to take something.

4) Salting the Mine. In drug trials, researchers will typically exclude certain categories of people, for instance, those taking some other drug. The details won’t be in the research paper abstract, but will be found in the main article under ‘selection criteria’ or ‘exclusion criteria’. The usual justification for this is to make the results ‘clearer’. Unfortunately, it’s a little hard to separate making results ‘clearer’ from making results ‘better’. But in the real world, the drug is going to be prescribed to the excluded people along with everyone else. To detect “Salting the Mine”, you will have to read more of the paper and see whether the exclusion criteria make any real-world sense. If it seems more likely that they are just trying to get better results, there is probably an attempt to salt the mine going on.
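A toy sketch of how exclusions flatter a result (all effects and numbers here are invented for illustration): build a trial population where patients on another medication happen to respond worse, then “exclude” them and watch the headline average improve.

```python
import random

random.seed(7)

# Hypothetical trial: response to the drug, in arbitrary points.
# Assume 30% of patients take some other medication and respond
# worse on average (-2 points vs. +1 point, noise sd = 5).
patients = [{"other_med": random.random() < 0.3} for _ in range(2000)]
for p in patients:
    base = -2.0 if p["other_med"] else 1.0
    p["response"] = random.gauss(base, 5.0)

mean = lambda xs: sum(xs) / len(xs)

everyone = [p["response"] for p in patients]          # real-world patients
salted = [p["response"] for p in patients
          if not p["other_med"]]                      # after 'exclusions'

print(f"all comers:       {mean(everyone):+.2f}")
print(f"after exclusions: {mean(salted):+.2f}")  # looks notably better
```

The excluded patients still exist in the real world and will still be prescribed the drug; the exclusion only changed what the paper reports.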

This is by no means a comprehensive list of what can go wrong with medical research.

Here is the list of UNDERSTANDING MEDICAL RESEARCH posts (past, present, and future):

PART 1 – Warning Signs Of Fishy Medical Research

PART 2 – Types Of Medical Research Studies

PART 3 – Interpreting Statistical Results

PART 4 – Medical Research Problems – Confirmation Bias

PART 5 – Medical Research Problems – Commercial Subversion

PART 6 – Medical Research Problems – Political Subversion

PART 7 – The Cart Drives The Horse – What Causes What?

PART 8 – Playing Games With The Included Items, e.g. Smoking Plus Red Meat

PART 9 – Using Fear, Uncertainty, and Doubt
