
Sacred Cows: An Endangered Species

Anthony Rosner, PhD, LLD [Hon.], LLC

Eddie: What am I watching? It just started, and I don't know what's happening (reacting to a screening of Ingmar Bergman's The Seventh Seal).

Billy: It's symbolic.

Eddie: Yeah? (He responds to an image of the Grim Reaper.) Who's that guy?

Billy: That's Death walking on the beach.

Eddie: I've been to Atlantic City a hundred times, and I've never seen Death walk on the beach.1

And so, with this line from Barry Levinson's extraordinary 1982 film Diner (from his "Baltimore" trilogy) comes a classic example of the anecdote questioning a system - in this case, the film's obvious symbolism - with a personal view from the trenches. So it goes in health care, in which traditionally held medical beliefs may plausibly be questioned by new observations, whether systematically derived or taken from the individual patient's own experience. The challenge for everyone is to analyze and admit both types of evidence to our ever-expanding base of knowledge.

It turns out that what is considered the most rigorous form of clinical experimentation - the clinical trial - did not arise with the advent of antibiotics and other oral medications in the 1930s, as is commonly believed; it actually dates back some 250 years. In 1753, James Lind published a report concerning the effect of citrus fruits on a debilitating disease of the day (scurvy), with dramatic results. It appears to describe the world's first clinical trial:

"On the 10th of May, 1747, I took twelve patients in the scurvy aboard the Salisbury at sea. Their cases were as similar as I could have them..."Two of these were ordered a quart of cider a-day. Two others took twenty-five gutts of elixir vitriol ...Two others took two spoonfuls of vinegar ...Two were put under a course of sea water (So much for ethics and informed consent!). Two others had each two oranges and one lemon given them each day...The two remaining took the bigness of a nutmeg ...The consequence was the most sudden and visible good were perceived from the use of the oranges and lemons."2

Ironically, this practice-based clinical trial seems to display certain design characteristics superior to those found in selected investigations published just within the past decade. Although the statistical power of this particular study is obviously nothing to write home about, the author's attempt, two-and-a-half centuries ago, to seek baseline uniformity within a defined environment, with matching and regular spacing of interventions, is to be commended.
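
To see just how little power two patients per arm can buy, consider the following back-of-the-envelope simulation. It is my own sketch, not anything drawn from Lind's report: the recovery probabilities (70 percent with citrus, 30 percent without) and the "perfect separation" criterion are assumed purely for illustration. With only two patients per arm and a binary outcome, even perfect separation could not reach conventional statistical significance (a one-sided Fisher's exact test would give p = 1/6), so the criterion below is already the most favorable reading possible.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not Lind's data)
# of how little statistical power two patients per arm provides.
import random

random.seed(1747)

def perfect_separation(p_treat=0.7, p_control=0.3, n_per_arm=2):
    """Simulate one tiny trial and report whether every treated patient
    recovers while no control patient does - the clearest result such a
    small trial could possibly produce."""
    treated = sum(random.random() < p_treat for _ in range(n_per_arm))
    controls = sum(random.random() < p_control for _ in range(n_per_arm))
    return treated == n_per_arm and controls == 0

n_trials = 100_000
hits = sum(perfect_separation() for _ in range(n_trials))
print(f"Unambiguous separation in only {hits / n_trials:.0%} of simulated trials.")
```

Under these assumed probabilities, only about one simulated trial in four shows an unambiguous result, even though the true effect is dramatic.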

Roll the tape forward some 250 years, and you have the following checklist of key items for reporting randomized controlled trials (RCTs): (i) title and abstract; (ii) introduction with background; (iii) methods, including participants, interventions, objectives, outcomes and sample size; (iv) randomization, including descriptions of sequence generation, allocation concealment and implementation; (v) blinding procedures; (vi) statistical methods; (vii) results (including the flow of participants; recruitment; baseline data; numbers analyzed; outcomes and estimation; ancillary analyses; and adverse events); and (viii) comments (including interpretation, generalizability and overall evidence).3

However, problems begin to arise with RCTs when you consider a number of realities. Limited resources dictate that there can never be a deployment of RCTs to document every health care intervention. Additionally, patients studied in RCTs generally are not those seen in everyday practice; patients with comorbidities are often excluded from RCTs to obtain homogeneous samples, and one wonders what type of patient would voluntarily submit to participating in an RCT in the first place. Finally, blinding can be broken by outright cheating or by guessing the treatment group from the side effects observed. All of these problems have been elegantly discussed in the past few months by Walach, Jonas and Lewith.4

Further problems become apparent when assembling a checklist of items that are routinely omitted from clinical trials: (i) symptoms to identify some patient subgroups; (ii) responses to previous therapeutic agents; (iii) short-term (24-hour) responses to remedial therapy; (iv) difficulties in compliance with therapy and reasons for noncompliance; (v) psychic or nonclinical reasons for impaired functional status; and (vi) the social support system available at home or elsewhere. Patients' expectations and desires for therapeutic accomplishment may or may not have been accounted for in more recent RCTs.5

In meta-analyses (systematic, statistical poolings of the results of RCTs), considered in many circles the most definitive of experimental demonstrations, additional problems abound. In meta-analyses, one may mix disparate groups of patients of varying homogeneity across different studies into one "salad." It is also possible to overlook radical departures in subgroups and in the quality of the data sets thus assembled. Once again, real-world features of patient presentation and treatment, such as illness severity, comorbidities and pertinent co-therapies, have to be accounted for, but often are not.6 The vicissitudes, misuses and abuses of RCTs and meta-analyses have been summarized by your loyal scribe at length elsewhere.7,8
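
To make the "salad" concern concrete, here is a minimal sketch of the fixed-effect, inverse-variance pooling that underlies many meta-analyses. The study names, effect sizes and standard errors are invented for illustration; the point is simply that a single tidy pooled number can mask a subgroup that behaves very differently.

```python
# Minimal sketch (invented numbers, not from the article) of fixed-effect,
# inverse-variance pooling - and of how it can average away a subgroup
# that points in the opposite direction.
from math import sqrt

# Hypothetical per-study treatment effects and their standard errors;
# the third "study" comes from a very different patient population.
studies = [
    ("Trial A", 0.45, 0.10),
    ("Trial B", 0.50, 0.12),
    ("Trial C (different population)", -0.20, 0.15),
]

weights = [1 / se ** 2 for _, _, se in studies]  # inverse-variance weights
pooled = sum(w * eff for (_, eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
# The pooled number looks tidy, but it hides the fact that one study ran
# the other way - exactly the "salad" problem described above.
```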

What does all this mean, and specifically, what are the implications for chiropractic health services? It suggests that the sacred cows of experimental clinical medicine, often posed as obstacles to the provision of chiropractic health care,9 are not infallible and can be questioned if there are reasonable and well-crafted case studies to question them. Indeed, estimates of treatment effects from the more recent and sophisticated observational studies closely match those of RCTs: in only two of 19 analyses did the combined estimate from the observational studies fall outside the 95-percent confidence interval for the combined estimate from the RCTs. In other words, there appeared to be little evidence that the estimates of treatment effects from observational studies reported after 1984 were either consistently larger or qualitatively different from those obtained in RCTs.10
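
For readers who want to see the mechanics of that comparison, the following toy sketch checks whether an observational estimate falls inside the 95-percent confidence interval of the pooled RCT estimate for each condition. The numbers are made up for illustration; they are not Benson and Hartz's data.

```python
# Toy recreation (assumed numbers, not the data of reference 10) of checking
# whether an observational estimate lies inside the pooled RCT 95% CI.
comparisons = {
    # condition: (observational effect, RCT pooled effect, RCT pooled SE)
    "condition 1": (0.62, 0.58, 0.07),
    "condition 2": (0.15, 0.40, 0.06),   # this one falls outside
    "condition 3": (0.33, 0.30, 0.05),
}

for name, (obs, rct, se) in comparisons.items():
    lo, hi = rct - 1.96 * se, rct + 1.96 * se
    inside = lo <= obs <= hi
    print(f"{name}: observational {obs:.2f} vs RCT CI ({lo:.2f}, {hi:.2f}) "
          f"-> {'inside' if inside else 'OUTSIDE'}")
```

Benson and Hartz's finding was that, across 19 such comparisons, the observational estimate fell outside the RCT interval only twice.10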

In terms of what the clinician can provide, there has obviously been an explosion of clinical information that must be incorporated into clinical decisions. The days of limited, politically motivated and heroic decision-making, which place the patient at a distinct disadvantage, appear to be numbered, as argued in considerable detail in this space just two months ago.11 Instead, what appears to have happened is that the limits of the physician's capacity to review all appropriate clinical options have been reached, requiring the assistance of a database. Indeed, it has been argued that a physician lacking software to cope with the sophisticated information that originates from clinical experience, case studies, RCTs and meta-analyses is "like a scientist working without a microscope to augment the eye."12

In the spirit of true scientific inquiry and the advancement of knowledge, the sacred cows of clinical decision-making are not only fallible; they are truly an endangered species. For those who might suspect that I am seeking to discredit the RCT in this space, I am simply attempting to argue that the RCT, as with any endangered species, needs to be handled with care and expertise to maintain its viability in today's ecosystem of medical evidence.

References

  1. Levinson B. Diner. Metro-Goldwyn Mayer Film Co., 1982.
  2. Lind J. A Treatise of the Scurvy, in Three Parts. Quoted in: Hurst JW. Who organized the first clinical trial? Medscape Cardiology 2002;6(2). www.medscape.com/viewarticle/441510, accessed September 25, 2002.
  3. Moher D, Schulz KF, Altman DG, for the CONSORT Group. The CONSORT Statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association 2001;285(15):1987-1991.
  4. Walach H, Jonas WB, Lewith GT. The role of outcomes research in evaluating complementary and alternative medicine. Alternative Therapies in Health and Medicine 2002;8(3):88-95.
  5. Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine." American Journal of Medicine 1997;103:529-535.
  6. Feinstein AR. Meta-analysis: Statistical alchemy for the 21st century. Journal of Clinical Epidemiology 1995;48(1):71-79.
  7. Rosner A. Fables or foibles: Inherent problems with RCTs. Journal of Manipulative and Physiological Therapeutics, 2002 (accepted for publication).
  8. Rosner A. Tales from the crypt: Fables of foibles, or RCTs that go bump in the night. Dynamic Chiropractic, January 25, 2000;18(3).
  9. Gaumer G, Koren A, Gemmen E. Barriers to expanding primary care roles for chiropractors: The role of chiropractic as primary care gatekeeper. Journal of Manipulative and Physiological Therapeutics 2002;25(7):427-449.
  10. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. New England Journal of Medicine 2000;342(25):1878-1886.
  11. Rosner A. Evidence or eminence-based medicine? Leveling the playing field instead of the patient. Dynamic Chiropractic 2002;20(25).
  12. Gaither C. How much do doctors really know? One man's crusade to help physicians think before they act. Boston Globe Magazine July 14, 2002.

Anthony Rosner, PhD
Brookline, Massachusetts

rosnerfcer@aol.com

January 2003