"If I play Tchaikovsky, I play his melodies and skip his spiritual struggles. If there's any time left over, I fill in with a lot of runs up and down the keyboard." So said the late pianist Liberace, who certainly knew how to put on a good show. But was it music? More like hubris, most would argue - a verdict his own quotation supports.1 The same must be said for research, for which there fortunately are numerous excellent examples pertaining to both chiropractic and medical research, and unfortunately some conspicuously flawed poster children as well.
We can begin with a quotation at the end of a study, now eight years old, which appeared in no less a venue than The New England Journal of Medicine. It sought to compare the outcomes of low-back pain patients who were given a side-posture manipulation, administered physical therapy via the McKenzie method, or handed an educational booklet; the outcomes were judged equivalent in all three instances.2 Methodological flaws aside (which I and several others have addressed at great length elsewhere3-6), we need to focus upon a conclusion that runs far beyond the confines of the data it was attempting to interpret. In a statement that smacks of political opportunism, the authors opined:
"Given the limited benefits and high costs, it seems unwise to refer all patients with low back pain for chiropractic or McKenzie therapy."2
In so many words, this recommendation seems eerily reminiscent of Liberace taking liberties with Tchaikovsky's renowned score by way of runs up and down the keyboard. Cherkin's statement was essentially a jazz riff harking back to, but not substantially based upon, the formal body of the paper. It has no place as a conclusion drawn from primary data in a scientific journal. Fortunately, the entire study was essentially written off as a piece of fluff, deemed to neither contribute toward nor detract from the evidence base of chiropractic.6
Nor is chiropractic research free of misuse and misinterpretation. Grod, Sikorski and Keating were quite right a few years ago in pointing out unsubstantiated claims in the chiropractic literature that did not always accurately reflect the conclusions drawn in the primary research. The key here is to facilitate the replication of observations by remaining within a framework that allows it - a framework generally considered to include a testable hypothesis and adequate peer review.7 We are fortunate that one of our primary journals - JMPT - is included in the Index Medicus, which is generally regarded as requiring the highest standard of refereeing of submitted research results.
The point here is to understand the difference between abuses and arrogance in research, on the one hand, and earnest efforts to appreciate the tentativeness of research results on the other. It's been said, for instance - more often than you find those audacious unsolicited bank-account transfer offers from African republics in your e-mail - that research never "proves" an issue, but rather adds to the weight of evidence supporting or contradicting a concept, crafted as a hypothesis. Results are based upon probabilities rather than absolutes, such that a multiplicity of them pointing in a uniform direction lends ever-increasing confidence to the notion that the guiding hypotheses are valid. But science is humbling in that even what appear to be the strongest hypotheses are sometimes ultimately overthrown, otherwise known as the principle of "My Karma Running Over My Dogma."
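The idea that uniform results compound confidence rather than "prove" a hypothesis can be sketched as a simple Bayesian update. This is a minimal illustration only; the likelihood values below are invented for the example and do not come from any of the studies discussed here.

```python
def update_posterior(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' rule: how much a single supportive
    result raises confidence in a hypothesis."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Start undecided; assume each independent study's result is three
# times as likely if the hypothesis is true as if it is false
# (illustrative numbers only).
belief = 0.5
for _ in range(5):
    belief = update_posterior(belief, 0.6, 0.2)

# Five consistent results push confidence above 99 percent - yet it
# never reaches 1.0, which is the sense in which research "adds weight"
# without ever delivering proof.
```

Note that a single strong contradictory result would pull the posterior sharply back down, which is the humbling part of the process the column describes.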
Even if one goes the RCT route in clinical research, one always has to remain aware that the RCT by definition imposes a fastidious set of conditions which may have little or no bearing upon the outside world or its populations. As if bringing the most rigorous RCT to the table weren't enough of an effort, we now face a daunting recent study indicating that even the most orthodox RCTs can deliver contradictory results upon subsequent investigation. In fact, highly cited - and presumably the most scrutinized - articles actually showed a trend toward more contradictions or diminished effects upon replication than less-cited studies.8
On the other side of the coin is the fact that observational studies, in which patients are simply watched as they carry on real-world activities, are improving in quality. Since they are not controlled, they would seem to have less internal validity, but they make up much of that ground in superior external validity. In a study comparing various treatments for common conditions from 1985 to 1998, Benson and Hartz searched both the abridged Index Medicus and Cochrane databases and found that the estimates of treatment effects in observational studies and RCTs were similar. In only two of 19 analyses did the magnitude of effect in the observational studies lie outside the 95-percent confidence interval for the combined magnitude in the RCTs.9
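The kind of check Benson and Hartz describe - does an observational estimate fall inside the confidence interval around the pooled RCT estimate? - can be sketched with a standard fixed-effect (inverse-variance) pooling. All numbers below are hypothetical stand-ins for illustration, not figures from their paper.

```python
import math

def pooled_ci(estimates, std_errors, z=1.96):
    """Fixed-effect (inverse-variance) pooling of treatment effects
    from several trials: returns the combined estimate and its
    approximate 95-percent confidence interval."""
    weights = [1.0 / se**2 for se in std_errors]
    combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se_combined = math.sqrt(1.0 / sum(weights))
    return combined, (combined - z * se_combined, combined + z * se_combined)

# Hypothetical effect sizes and standard errors from three RCTs
rct_effects = [0.12, 0.08, 0.15]
rct_std_errors = [0.04, 0.05, 0.06]

combined, (lo, hi) = pooled_ci(rct_effects, rct_std_errors)

# Does a hypothetical observational estimate agree with the RCTs?
observational_estimate = 0.10
agrees = lo <= observational_estimate <= hi
```

In Benson and Hartz's comparison, agreement of this kind held in 17 of 19 analyses - the basis for the claim that well-done observational research can track RCT results closely.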
Another study reviewed five clinical topics across 99 original articles in five major medical journals from 1991 to 1995 and concluded that the average results of the observational studies were "remarkably similar" to those of the randomized controlled trials; less heterogeneity of results was actually seen in the observational research.10 The point here is that major gains in useful clinical evidence can be obtained with careful attention to design, such that a well-crafted prospective case series or case study conceivably could be more useful than a deeply flawed randomized controlled trial, which is traditionally represented as occupying a more commanding position in the hierarchy of clinical evidence.
It is really within the capacity of every clinician, therefore, to be able to conduct one or more good case studies or series. In today's environment of evidence-based medicine, it is imperative that every clinician at least be capable of maintaining detailed, systematic and accessible records. It is only in this respect that we believe practitioners will maintain better communication among themselves, with their patients, and particularly with the media. In laying the groundwork for any future, not only within the chiropractic profession but for all of health care, the curious, patient and methodical practitioner truly will have followed "the Tao of conscientious research."
- Jarski R. The Funniest Thing You Never Said. London, United Kingdom: Ebury Press, p. 186.
- Cherkin DC, Deyo RA, Battie M, Street J, Barlow W. Comparison of physical therapy, chiropractic manipulation, and provision of an educational booklet for the treatment of patients with low back pain. The New England Journal of Medicine 1998;339(4):1021-1029.
- Rosner A. Fables of foibles: inherent problems with RCTs. Journal of Manipulative and Physiological Therapeutics 2003;26(7):460-467.
- Chapman-Smith D. Back pain, science, politics and money. The Chiropractic Report, November 1998;12(6).
- Freeman MD, Rossignol AM. A critical evaluation of the methodology of a low-back pain clinical trial. Journal of Manipulative and Physiological Therapeutics 2000;23(5):363-364.
- Royal College of General Practitioners. Unpublished update of CSAG Guidelines [reference 2], 1999.
- Grod JP, Sikorski D, Keating JC. Unsubstantiated patient brochures from the largest state, provincial, and national chiropractic associations and research agencies. Journal of Manipulative and Physiological Therapeutics 2001;24(8):514-519.
- Ioannidis JPA. Contradicted and initially stronger effects in highly cited research. Journal of the American Medical Association 2005;294(2):218-228.
- Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. The New England Journal of Medicine 2000;342(25):1878-1886.
- Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. The New England Journal of Medicine 2000;342(25):1887-1892.
Anthony Rosner, PhD