The Problem With Statistics

By John Hanks, DC

"There are three kinds of lies: lies, damn lies, and statistics" — Mark Twain

I'm not sure when I began to mistrust statistics; I think it was in college in the late '60s. I remember reading in the Peoria Journal Star something to the effect that 50 percent of high-school students in Peoria either smoked marijuana or knew somebody who did. A confusing conclusion, I agree, since it could have meant that there was only one student smoking dope, but 50 percent of the students knew who that person was!

I have learned to examine statistics more closely in the medical literature. I often question conclusions, especially since I have witnessed studies that unfairly beat up on chiropractic. Recently, I have come across some health care entities purveying statistical wisdom that seems flaky to me.

For instance, I found an article, "The Worst Study Ever," by Scott W. Atlas in the March 26, 2011 issue of Commentary. The article references a report from the World Health Organization titled "The World Health Report, 2000," which ranked 191 countries in "overall performance." There is obvious bias in many categories in the report, but the most egregious is the way national life expectancy was measured. The U.S. ranked 19th out of 29 countries, but that ranking included immediate deaths from auto accidents and murder. Not that we should be proud of the U.S. murder rate, but the report seemed to pretend that a great health system could reach back in time and erase these deaths. That is why University of Iowa researchers recalculated the data without the fatal-injury variables, and voila! The U.S. came out number one, with the greatest life expectancy.

A second example of statistics gone wild comes from an article in Slate magazine, Sept. 26, 2006 issue, by Darshak Sanghavi. The article, "The Crucial Health Stat You've Never Heard Of," is an example of several I discovered that discuss a measurement invented by epidemiologists in 1988. It is called "number needed to treat" (NNT).

Drug companies like to use a statistical measure called "relative risk." For example, a 31 percent reduction in heart attacks with a statin called Pravachol (according to a 1995 study) sounds good. It means that if one takes this drug every day for five years, the incidence of heart attack drops from 7.5 percent to 5.3 percent, which is close to the advertised relative drop. But another stat, the "absolute risk" reduction, would be only 2.2 percentage points (if one subtracts the two numbers). That is not a particularly exciting finding.

If the NNT is used instead, it means that 50 people would need to take the drug to prevent one person from having a heart attack. To expand on this stat, 208 people would have to take an aspirin daily to prevent one heart attack. That's a lot of aspirin. And none of this includes discussion of the side effects or other risks, not to mention the cost of the designer drugs. So, as you can see, choosing the "type" of statistical analysis means everything.
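For readers who want to check the arithmetic behind these three statistics, here is a minimal sketch using the Pravachol figures quoted above (7.5 percent vs. 5.3 percent five-year heart-attack incidence). The function name and the choice to round the NNT up are mine; the percentages come from the article, and the slight gaps between the computed values and the study's headline numbers (31 percent, 50 people) are just rounding in the original reports.

```python
import math

def risk_stats(control_rate: float, treated_rate: float):
    """Return (relative risk reduction, absolute risk reduction, NNT)."""
    arr = control_rate - treated_rate   # absolute risk reduction (difference)
    rrr = arr / control_rate            # relative risk reduction (ratio)
    nnt = math.ceil(1 / arr)            # number needed to treat, rounded up
    return rrr, arr, nnt

# The article's figures: 7.5% incidence without the drug, 5.3% with it.
rrr, arr, nnt = risk_stats(0.075, 0.053)
print(f"Relative risk reduction: {rrr:.1%}")   # ~29%, near the study's headline 31%
print(f"Absolute risk reduction: {arr:.1%}")   # 2.2 percentage points
print(f"Number needed to treat:  {nnt}")       # ~46, in the ballpark of the quoted 50
```

The same 2.2-point difference yields a "31 percent reduction" or a "one in fifty" benefit depending on which statistic you lead with, which is exactly the column's point.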

A third example comes from the journal Health Affairs, April 9, 2011, in a study by Classen, et al., titled "'Global Trigger Tool' Shows That Adverse Events in Hospitals May Be Ten Times Greater Than Previously Measured." The long title almost sums it up. Apparently, the way that U.S. hospitals measure mistakes ending in injury and death is old-fashioned. Detection methods such as voluntary reporting and an automated indicator from the Agency for Healthcare Research and Quality are inferior to a new method called the Global Trigger Tool. This approach searches through a patient's file more carefully – 10 times more carefully, since the old methods missed 90 percent of the adverse events! The warning from this article, in my opinion, is that a hospital might prefer to continue using the old statistical methods, because they make the institution appear less dangerous or lethal.

Which brings me to a June 9, 2009 article by Harriet Hall in an online magazine called Science-Based Medicine. She reported on studies concerning complications of spinal manipulation, including a systematic review from the journal Spine and a few others. Her conclusion was that "[a]dverse reactions are common after spinal manipulation, but they are usually benign and transitory." She went on to estimate that these mild or more serious cases of increased soreness, radiation of pain and the like occur "33 to 60%" of the time.

My response is, "So what?" That's more "benign and transitory" reactions than I see in my practice, but these things happen. How many times do we hear, "I was pretty sore after that last adjustment, but the numbness and tingling in my hand went away!" However, I agree with the author concerning new patients. They deserve informed consent and carefully applied, appropriate technique.

Ms. Hall is biased, since she ends her article by saying of "chiropractic manipulations, especially neck manipulations ... there is little or no evidence that they are effective." Whoa! Wait a minute. Her entire article up to that point was about "adverse effects," not clinical effectiveness. The author thus makes a different type of faux pas, one not statistical in nature: she wanders away from her core thesis and consequently loses credibility.

There were 269 online comments regarding this article. I made the mistake of trying to print them, and ran out of ink before I shut the printer down.
