At the outbreak of the Iraqi invasion in early 2003, Americans were reassured that they would be kept apprised of events in the conflict by a cadre of reporters who would remain "embedded" with the American troops.
In classical experimental sciences, we have often been warned that the paramount danger is to lose one's objectivity in pursuit of data that others are supposed to be able to reproduce. Unfortunately or amusingly (depending upon your world outlook), the history of research is fraught with hijinks in which this kind of objectivity has been mercilessly pummeled, warped, pulverized and otherwise hammered into a form which at worst is a laughingstock and at best is late for dinner. However, don't confuse this discourse with other discussions dealing with qualitative research or whether the sciences overall are truly value-free. Those two particular topics need to be saved for another day.
Let us start with Yin's warning in his excellent description of the case study.2 One of its sources of evidence (along with interviews, direct observation and physical artifacts) is what the author calls "participant observation," in which the observer occupies an insider's role in the investigation. This phenomenon occurs most frequently in anthropological studies, but this vignette has broader meaning because it traces a scenario in which the embedded observer may have to assume the role of advocacy. This is inimical to good science.
J.B.S. Haldane, a Marxist writer, son of an Oxford professor and faculty member himself at University College in London, had a penchant for having experimenters employ their own bodies in their investigations. With an obsession for saving divers and submariners from compression effects, for instance, Haldane became preoccupied with nitrogen intoxication, which becomes a major factor at depths exceeding 100 feet. In these murky environs, divers under the influence were known to pass their air hoses to fish. Or they would take a smoking break. Scientists who accompanied these divers, on the other hand, were sufficiently potted to be unable to conduct simple mathematical tests, take notes or even remember to hit the spindles of their stopwatches.3
Haldane also seemed able to rationalize his way out of being put in harm's way in hazardous experiments. In his diving studies, for instance, he was aware that perforated eardrums were a routine occurrence. Not to worry, said he, because: "The drum generally heals up; and if a hole remains in it, although one is somewhat deaf, one can blow tobacco smoke out of the ear in question, which is a social accomplishment."3
In a more serious vein, consider the fact that a standard scoring criterion for quality in any clinical trial is to make sure that the assessor is blinded, in turn assuring that the outcome assessment is unbiased. Other important criteria which are designed to prevent our personal involvement from skewing the results obtained include: blinding of the health care provider to prevent attention bias; blinding of patients (if at all possible) to eliminate the effects of bias due to expectations; and concealing the allocation of treatments.4
In a sense, you could argue that we are trying to have it both ways; i.e., to have the provider be responsive and sensitive to the patient's needs and values, while at the same time, in an idealized situation, not knowing who the heck they are treating, or how or why. The same would hold true for the patient. This is where we have to call upon our common sense and realize that we're looking at two different lines of investigation: qualitative research, which attempts to take into account the real world of the patient in everyday life; and quantitative research, which goes strictly by the numbers in a defined and therefore artificial setting, with techniques of intervention that are most commonly restricted and/or standardized for a given clinical trial.
The point is that we need both. One should not be confused with the other, although practice-based clinical trials may come close.
One of the more outstanding examples in the scientific literature in which the experimenter has become too intimate with the subject matter and allowed personal bias free rein lies in a recent so-called "systematic" review of systematic reviews.5 Conventional wisdom would have it that the more "systematic" you become, the more objective your product. Therefore, a "systematic review of systematic reviews," in being all the more rarefied, is that much more free of bias, right? Wrong!
In summarizing 16 systematic reviews of the effectiveness of spinal manipulation published between 2000 and May 2005, Ernst and Canter conclude that the data fail to demonstrate that spinal manipulation is effective for treating back pain, neck pain, dysmenorrhea, infantile colic, asthma, allergy, cervicogenic dizziness or any medical problem. The exception is back pain, for which spinal manipulation may be superior to sham manipulation, but not to conventional interventions. Considering the possibility of adverse events, the authors conclude that their review "does not suggest that spinal manipulation is a recommendable treatment."5
Among its many problems is the simple fact that the article is anything but systematic. Authorship bias is as conspicuous as a dandy dressed in an ascot and spats at a rodeo. This is amply demonstrated by Ernst and Canter's attempt to expunge what appears to be an outlier in their data, which turns out to be the consistently positive findings of Bronfort in three of his reviews covering back pain, neck pain and headache.4,6 The message from Ernst and Canter is that these findings have to be taken with a grain of salt since they "originate from the same chiropractor."5
By that rather reductive reasoning (which appears to ignore the fact that Bronfort's papers are actually the product of multiple authors from different disciplines), one needs simply to go back to the 16 reviews cited and immediately note that an even greater number of the papers listed (25 percent of the total) were headed by Ernst himself, all of them uniformly negative. These papers were hardly systematic in themselves, such that their biases were simply carried forward like an unfounded rumor into the next "systematic" review. There is little doubt that a double standard has been created in this argument. Overall, the methodology of this "review of reviews" has been shown elsewhere to be far inferior to that customarily employed in systematic reviews.7
This tale is simply meant to point out how insidious and sometimes absurd personal bias can be, such that every scientific paper has to be read with scrutiny in the best case and with skepticism in the worst. Personalities being what they are, Alexander von Humboldt was once led to summarize the three stages of scientific discovery, far from the idealized version we may have grown up with:
1. People deny that it is true.
2. People deny that it is important.
3. They credit it to the wrong person.8
1. "Days of Heaven." Paramount Pictures, 1978.
2. Yin RK. Case Study Research: Design and Methods. Beverly Hills, Calif.: Sage, 1989, pp. 85-95.
3. Haldane JBS. What Is Life, p. 197; quoted in Bryson B. A Short History of Nearly Everything. New York: Broadway Books, pp. 244-245.
4. Bronfort G, Haas M, Evans RL, Bouter LM. Efficacy of spinal manipulation and mobilization for low back pain and neck pain: a systematic review and best evidence synthesis. Spine J 2004;4:335-56.
5. Ernst E, Canter PH. A systematic review of systematic reviews of spinal manipulation. J R Soc Med 2006;99:189-93.
6. Bronfort G, Assendelft WJJ, Evans R, et al. Efficacy of spinal manipulation for chronic headache: a systematic review. JMPT 2001;24:457-66.
7. Bronfort G, Haas M, Moher D, et al. Review conclusions by Ernst and Canter regarding spinal manipulation refuted. Chiropr Osteopat 2006;14.
8. Ferris T. The Whole Shebang: A State-of-the-Universe(s) Report. New York: Simon & Schuster, 1997, p. 73.