
Dynamic Chiropractic – July 15, 2002, Vol. 20, Issue 15

Growing Pains

By Arlan Fuhr, DC
The concept was relatively straightforward, the execution was meticulous, and the authors' cautions about the limitations in interpreting the findings were clear and appropriate. So, what went wrong?

Meridel Gatterman, DC, and her team of faculty and private-practice clinicians set out to evaluate 10 chiropractic procedures in relation to 15 specific health problems, including acute and chronic disorders of the low back, legs and sacrum.2 It was the logical follow-up to the Mercy Conference guidelines,4 in which chiropractic methods were rated in isolation, that is, without regard to the particular clinical conditions to which they might be applied.

The Gatterman team recruited eight chiropractors (see Table) with a "broad knowledge of chiropractic technique procedures,"2 who were not affiliated with any proprietary technique, to conduct two sets of ratings. (The authors of the study did not participate as raters.) This evaluation panel first judged the quality of the available scientific literature bearing on each of the 150 "treatment-by-conditions" (10 treatments x 15 disorders), and second, rated each treatment-by-condition for effectiveness. Each rating was made on a 0-to-10 scale. In all, a maximum of 2,400 ratings were possible: (10 treatments x 15 disorders) x (two kinds of ratings) x (eight raters).

Table: Chiropractor-raters of quality of literature and effectiveness.

• Thomas Bergmann
• Margaret Karg
• Jackie Buettner
• George McClelland
• Peter Gale
• Leslie Wise
• Mitchell Haas
• Ron Williams

Unfortunately, something happened on the way to consensus. After reviewing some 172 research articles culled from a search of several databases (CRAC, MANTIS, MEDLINE, CINAHL), Gatterman and co-authors reduced the list to 139 papers; these included randomized controlled clinical trials, cohort studies and clinical case series. Confronted by this limited base of evidence, the evaluation panel balked. Although there were only five abstentions in the ratings of the literature, the rating for effectiveness produced 327 abstentions. In other words, in more than 27 percent (327/1,200) of the possible judgments of effectiveness, raters felt they had too little information to make a sound judgment about the usefulness of the procedure for particular clinical conditions.

The panel of raters had good reason to be reluctant to draw conclusions in many cases. With only 139 papers to fill 150 cells (10 treatments x 15 disorders), most cells had "no literature in them, and all cells have inadequate literature in them."2 The resulting matrix resembled a block of Swiss cheese with more holes than cheese.

Nonetheless, the authors computed the means and standard deviations for the ratings within each cell, adjusting the sample size per cell based upon the several hundred abstentions. What resulted were two sets of ratings: one for the quality of the literature; the other for the presumed effectiveness of the 10 chiropractic techniques. The average ratings for each technique by each condition were offered in tables. However, Gatterman, et al., cautioned that comparisons among the rankings for treatment procedures and extrapolation to the real world of clinical practice were problematic, owing to several factors: the paucity of evidence in the literature; the potential nonrepresentativeness of the patients studied relative to those seen in practice; and the fact that in actual practice, specific methods may be used in various combinations, thereby influencing their effectiveness.

The Gatterman team also identified a strong correlation between the ratings for the quality of the literature and the ratings for effectiveness. In other words, a particular treatment method for a particular clinical condition was rated higher when more and better-quality evidence supported it. This is not too surprising, but the authors felt compelled to remind readers of the JMPT that "Lack of evidence in the literature is not evidence of lack of effectiveness." In other words, just because something hasn't been studied doesn't mean that it doesn't work. To draw such a conclusion would be to commit what philosophers refer to as an "appeal to ignorance": a logical fallacy in which the absence of evidence is offered as evidence.

The weaknesses and limitations of this project are not a reflection on the quality of work conducted by the authors or the panelists. What "went wrong," if you will, is a reflection of the still meager state of our science: too little hard data to draw firm, evidence-based conclusions about the many musculoskeletal problems that fill our offices. And perhaps this should not be too surprising, given that the history of hard-core research in chiropractic is barely two decades old.3,5,6,7 Gatterman and her team suggested that the greatest value of their project may have been to identify the specific areas in which we have yet to conduct research. Their project also serves as a prelude to the next clinical guidelines consensus conference, which will surely have to concern itself with the greater specificity that Gatterman and her coworkers attempted.

Unfortunately, this story doesn't end with the publication of this important and sobering paper. Within days of its publication, the internet was buzzing with various interpretations of the Gatterman project. Apparently, many readers misinterpreted this consensus project as equivalent to research per se, and thought the last word on chiropractic technique had been written. As though to confirm the lack of sophistication in reading scientific literature, one wag, ignoring the group's admonition that lack of evidence is not equivalent to lack of effectiveness, went so far as to suggest that the continued use of those chiropractic procedures that received lower ratings amounted to "malpractice." Oh, Lordy!

I was reminded of what transpired at the Consensus Conference on Validation of Chiropractic Methods in Seattle, Washington in March 1990,1 one of the earliest profession-wide attempts to make sense of the wide variety of clinical procedures. People were frightened; some technique developers arrived with their attorneys! A few imbeciles suggested that all brand-name techniques should be thrown out, since none had adequate experimental support. Fortunately, cooler heads prevailed, and sentiment coalesced around the notion that all techniques should be investigated, and that we should retain whatever can be demonstrated to work. Then, as now, we seemed to be our own worst enemies.

This time around, after taking some time to calm myself, I finally realized that what we're witnessing are growing pains. Chiropractors have been drawn kicking and screaming into this new era of research and accountability; many of us are ill-prepared by our formal education to deal with the details and nuances of the scientific process. A mere generation of chiropractors has had access to the profession's premier scholarly periodical, JMPT, founded in 1978. Most DCs have had little formal training in the interpretation of scholarly works. In my case, learning about research has been a sometimes painful and stumbling process, gained during years of collaboration with trained scientists and clinical investigators. Ironically, these non-DC mentors have often been kinder and more patient than some of my chiropractic peers. The PhDs have enjoyed the luxury of making their inevitable errors in the relative privacy of graduate school classrooms, whereas we in chiropractic seem prone to learn our lessons the hard way - in public.

Let me close here by thanking and congratulating Meridel and her group for taking this next significant step on our path toward a better, more accountable and more effective chiropractic. Our DC scholars receive little appreciation for doing the hard work that will point the way to a better day for chiropractors and the patients we serve.


  1. Chiropractic Technique 1990(Aug);2(3):entire issue.
  2. Gatterman MI, Cooperstein R, Lantz C, Perle SM, Schneider MJ. Rating specific chiropractic technique procedures for common low back conditions. Journal of Manipulative & Physiological Therapeutics 2001(Sept);24(7):449-56.
  3. Gitelman R. The history of chiropractic research and the challenge of today. Journal of the Australian Chiropractors' Association 1984(Dec);14(4):142-6.
  4. Haldeman S, Chapman-Smith D, Petersen DM (eds.). Guidelines for Chiropractic Quality Assurance and Practice Parameters: Proceedings of the Mercy Center Consensus Conference. Gaithersburg MD, Aspen, 1993.
  5. Keating JC. Toward a Philosophy of the Science of Chiropractic: a Primer for Clinicians. Stockton, CA: Stockton Foundation for Chiropractic Research, 1992; Chapter 4: A brief history of developments in the science of chiropractic.
  6. Keating JC, Green BN, Johnson CD. "Research" and "science" in the first half of the chiropractic century. Journal of Manipulative & Physiological Therapeutics 1995 (July/Aug);18(6):357-78.
  7. Waagen GN, Haldeman S, Cook G, Lopez D, DeBoer KF. Short-term trial of chiropractic adjustments for the relief of chronic low back pain. Manual Medicine 1986;2(3):63-7.

Arlan Fuhr, DC
Phoenix, Arizona

