More than 80 years ago, Calvin Coolidge made quotation history by telling his secretary, Everett Sanders, "I do not choose to run for president in 1928." Now imagine: How would events have gone down if Mr. Coolidge had been ordered to run or not run? The betting on the street is that Coolidge's storied laconic demeanor would have become just a tad more dour.
The situation is just as absurd if we take a hard look at unblinded, randomized clinical trials (RCTs), which in most cases are what we are stuck with when it comes to comparing chiropractic intervention with either a placebo or an alternative treatment. Think for a moment how patients would react knowing which arm of a clinical trial they had been assigned to. Absent a crossover design, being ordered into the conventional treatment might strike some participants as drawing the short straw.
In such cases, a participant might do one of three things: drop out of the trial, become noncompliant with therapy, or be less than over the moon about the prospects of treatment and recovery. Any of these alternatives is bound to diminish the magnitude of recovery from symptoms, which may well explain, at least in part, why patient outcomes in clinical trials are so often markedly attenuated compared with what is seen in live clinical observation. As Niels Nilsson once told the chiropractic research community, the attenuated responses seen in clinical trials are significant enough to send one searching for better experimental outcome measures.1
The situation could hardly be more dramatically illustrated than by reviewing the outcomes of lumbar disk herniation patients in the Spine Patient Outcomes Research Trial (SPORT). There were no fewer than eight outcomes measured in these trials: bodily pain, physical function, the Oswestry Disability Index, sciatica bothersomeness, work status, satisfaction with symptoms, satisfaction with care, and self-rated improvement. Treatment options were either surgery (a standard open diskectomy) or nonoperative treatment (education and counseling, NSAIDs, injections, physical therapy or chiropractic).
When patients were randomized to these two choices, most outcomes were virtually indistinguishable at six, 12 and 24 months, with only modest advantages for the surgical cohort in satisfaction with symptoms at three- and six-month follow-up, sciatica bothersomeness at three-, 12- and 24-month follow-up, and self-rated improvement at six- and 12-month follow-up.2 But when patients chose between the same two treatment options in a parallel observational study, the surgical cohort showed major superiority at virtually all time points and in virtually all outcome measures.3 Confounding is, of course, a possibility in nonrandomized comparisons of self-reported outcomes, so these results should be interpreted with caution.
But the overall trend is undeniable. In the same setting with the same individuals performing the same interventions, a major difference emerges when one moves from randomization to patient choice of provider. This would seem to have everything to do with patient values and expectations, a major part of the typical office visit that is completely suppressed in randomized clinical trials. In the surgical cohort, one can almost imagine those patients thinking: "I've invested $28,000 in this surgery and, by gum, I'm gonna make sure that there's something to show for it!"
We see this effect elsewhere. In my experience as research director at FCER, I witnessed firsthand how patients behaved in the conventional arms of clinical trials. In one instance, nine of 25 patients with colic who were assigned to medical treatment instead of manipulation dropped out of the RCT, whereas no such dropouts were seen in the chiropractic cohort. Not surprisingly, the outcomes in those patients who remained in the medical treatment arm were significantly inferior to those receiving chiropractic interventions.4 In the robust tension-type headache clinical trial conducted by Boline and Nelson, cited as among the highest-quality clinical trials of its kind, just five patients out of the initial cohort of 75 dropped out of the spinal manipulation group; in the medical arm, also initially comprising 75 patients, 19 dropped out.5-8
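To put the headache-trial dropout figures above on a rough quantitative footing, the following is a minimal sketch of a two-proportion z-test (normal approximation) applied to the cited counts of 5 dropouts of 75 in the manipulation arm versus 19 of 75 in the medical arm. This is my own illustrative calculation, not an analysis reported by the trial authors, but it suggests the imbalance in dropout rates is unlikely to be chance alone:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the normal approximation.

    x1/n1 and x2/n2 are the event counts and group sizes; returns the
    two observed proportions, the z statistic, and the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

# Dropout counts cited above: 5/75 (manipulation arm) vs. 19/75 (medical arm)
p1, p2, z, p_value = two_prop_z(5, 75, 19, 75)
print(f"dropout rates: {p1:.1%} vs {p2:.1%}; z = {z:.2f}; p = {p_value:.4f}")
```

The test yields a z statistic of roughly -3.1 and a p-value well under 0.01, consistent with the article's point that differential dropout between arms is a real, systematic phenomenon rather than noise.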
Yet a further demonstration of this principle occurred in the early stages of clinical-trial design planning with David Eisenberg at Harvard Medical School. Pilot data, gathered in advance of what was ultimately a published clinical trial, revealed that most back pain patients, if given a choice, opted for treatment by an alternative medical provider rather than their usual medical doctor.9 The implication is that patients compelled by outright randomization to use their medical provider would be less than enthusiastic, with diminished outcomes to be expected, as outlined above.
This is precisely the point at which the patient's expectations and values become such an important part of the outcome equation: indeed, a missing link in outcomes research, as I have discussed previously.10 Clinical decisions are increasingly recognized as a shared effort between patient and clinician, since what has been deemed "the most compelling and growing" component of evidence-based medicine happens to be the empowerment of the patient in the decision-making process.11-12
Rather than becoming fixated on randomization schemes through the "clinical gaze" described by Foucault, we need to realize that patient choice sets in motion many of the wheels that determine the extent of clinical response.13 In other words, there is ample evidence that patients dragged kicking and screaming into a treatment arm by randomization may express their displeasure in any number of ways, which can dampen their hoped-for clinical outcomes. Put in more colloquial terms, when a patient chooses a provider, in many ways the "fix" may already be in.
1. Nilsson N. Disparity between clinical observation and clinical trials. International Conference on Spinal Manipulation, Toronto, Oct. 4, 2002.
2. Weinstein JN, Tosteson TD, Lurie JD, et al. Surgical vs. nonoperative treatment for lumbar disk herniation: the Spine Patient Outcomes Research Trial (SPORT): a randomized trial. JAMA, 2006;296(20):2441-50.
3. Weinstein JN, Lurie JD, Tosteson TD, et al. Surgical vs. nonoperative treatment for lumbar disk herniation: the Spine Patient Outcomes Research Trial (SPORT) observational cohort. JAMA, 2006;296(20):2451-9.
4. Wiberg JMM, Nordsteen J, Nilsson N. The short-term effect of spinal manipulation in the treatment of infantile colic: a randomized controlled clinical trial with a blinded observer. JMPT, 1999;22(8):517-22.
5. Boline P, Kassak K, Bronfort G, et al. Spinal manipulation vs. amitriptyline for the treatment of chronic tension-type headaches: a randomized clinical trial. JMPT, 1995;18(3):148-54.
6. Hurwitz EL, Aker PD, Adams AH, et al. Manipulation and mobilization of the cervical spine: a systematic review of the literature. Spine, 1996;21(15):1746-60.
7. Kjellman GV, Skagren EI, Oberg BE. A critical analysis of randomised clinical trials on neck pain and treatment efficacy: a review of the literature. Scand J Rehabil Med, 1999;31:139-52.
8. Bronfort G, Assendelft WJJ, Evans R, et al. Efficacy of spinal manipulation for chronic headaches: a systematic review. JMPT, 2001;24(7):457-66.
9. Eisenberg DM, Post DE, Davis RB, et al. Addition of choice of complementary therapies to usual care for acute low back pain. Spine, 2007;32(2):151-8.
10. Rosner A. "The Shifting Sands of EBM." Dynamic Chiropractic, Feb. 26, 2008;26(5).
11. O'Connor A. Using patient decision aids to promote evidence-based decision making. EBM Notebook, 2001;6:100-2.
12. Fisher CG, Wood KB. Introduction to and techniques of evidence-based medicine. Spine, 2007;32(19S):S66-72.
13. Foucault M. The Birth of the Clinic: An Archaeology of Medical Perception. New York: Random House, 1973.
Previous articles by Anthony Rosner, PhD, LLD (Hon.), LLC, are available.