Since space is so limited in this column, attempting to describe even the basics of biostatistics in just a few paragraphs is about the equivalent of writing an essay on the history of chiropractic in the same space. I heartily recommend to the readers of this column two references mentioned in an earlier part of this series (1,2). Most will find that, rather than the dull, vapid science it sounds like, biostatistics is fascinating and easy to understand. Moreover, it lies at the heart of our need to understand our literature and our science, and without it we are powerless to judge the difference between reality and junk science.
Much of the interpretative process centers on the issue of cause and effect. For example, in a study of the effectiveness of spinal manipulation for low back pain, we ask, "Was the spinal manipulation solely responsible for the observed differences in outcome or were other factors involved?" This is where confounding variables may become a problem. Suppose, for example, that a larger portion of the patients in the manipulation group are employed in managerial positions, whereas the majority in the group given only ibuprofen were employed as minimum wage clerks. Suppose also that one of the measures of outcome was total time off work. The greater incentive to return to work expected in the former group might be partially responsible for observed differences in outcome. Or suppose that several patients had received not only spinal manipulation but deep tissue massage as well. Now we must ask whether the massage itself might have affected treatment. We might be more critical and question whether the mere contact with the physician had any effect.
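The confounding scenario above can be illustrated with a toy simulation. Everything here is invented for illustration (the group sizes, the `days_off` function, and the assumption that job type alone drives time off work); the point is only to show how a confounder can manufacture an apparent treatment effect where none exists.

```python
import random

random.seed(1)

# Hypothetical model: treatment has NO real effect on days off work,
# but job type (a confounder) does, and it is unevenly distributed
# between the two treatment groups.
def days_off(job):
    base = 5 if job == "manager" else 15  # managers return to work sooner
    return base + random.gauss(0, 2)      # plus some individual variation

# Manipulation group: mostly managers; ibuprofen group: mostly clerks
manip = [days_off("manager") for _ in range(70)] + \
        [days_off("clerk") for _ in range(30)]
ibu   = [days_off("manager") for _ in range(30)] + \
        [days_off("clerk") for _ in range(70)]

mean = lambda xs: sum(xs) / len(xs)
print(f"manipulation group, mean days off: {mean(manip):.1f}")
print(f"ibuprofen group, mean days off:    {mean(ibu):.1f}")
# The group means differ even though treatment did nothing --
# the difference is driven entirely by the confounding variable.
```

Randomly assigning patients to groups, rather than letting job type cluster in one arm, is precisely what breaks this kind of spurious association.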
One way around this problem would be to change the study design slightly and employ a single blind method where control and study groups are both seen by the doctor. The doctor, who is aware of their group status (and therefore not blinded), would administer a true manipulation to the patients in the study group and a sham manipulation to the control group.
Once we have shown a significant effect in a well-designed study, we have satisfied the criteria for contributory cause. It is not necessary that we know why something happened or precisely how it works. The mechanism by which penicillin works was discovered long after it had saved many thousands of lives. The discovery of vitamin B12 similarly saved many lives before its actual chemical structure and mechanism were known. Research builds upon itself like a coral reef: from lessons learned in one trial, new projects, testing new hypotheses, are spawned.
Another common error made by researchers is in the extrapolation of their data. It is not wise, and usually not valid, to extrapolate beyond the range of the data. If, to continue with our example of spinal manipulation, the researchers measured a 50 percent clinical improvement in range of motion after 3 weeks of treatment, it would be inappropriate to conclude that with 6 weeks of care the likely improvement would be close to 100 percent (even though it might be true).
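To see the danger numerically, consider a minimal sketch in which a straight line is fitted to three weeks of (invented) improvement data and then naively extended to week 6. The specific numbers are assumptions chosen only to mirror the example in the text.

```python
# Invented data: percent improvement in range of motion over 3 weeks
weeks = [1, 2, 3]
improvement = [20.0, 36.0, 50.0]

# Ordinary least-squares slope and intercept, computed in closed form
n = len(weeks)
mx = sum(weeks) / n
my = sum(improvement) / n
slope = sum((x - mx) * (y - my) for x, y in zip(weeks, improvement)) / \
        sum((x - mx) ** 2 for x in weeks)
intercept = my - slope * mx

predict = lambda w: intercept + slope * w
print(f"predicted improvement at week 6: {predict(6):.0f}%")
# The line predicts roughly 95% at week 6, but recovery curves
# typically flatten; outside weeks 1-3 the model is simply untested.
```

The fit is perfectly reasonable inside the observed range (weeks 1 to 3); it is the projection beyond that range that has no empirical support.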
To extrapolate from animal models to humans or from one body part to another is tenuous at best and must be done with an appropriate disclaimer. In a recent article in one of our journals, the author frequently relied on data obtained from patients with industrial low back injuries to support his theories about neck injuries from whiplash.
Another type of extrapolation error arises when epidemiological data are applied to individuals. We might study a large population of construction workers and find a much greater incidence of lumbar disc herniation in this group than seen in sedentary workers. We might also find a much higher bending/lifting index in the construction group compared to sedentary workers. It would seem logical to conclude from this that higher bending/lifting indexes are associated with greater risk of disc herniation. It is possible, though, that many of the construction workers who have disc herniations actually did very little bending or lifting. This type of mistake is referred to as an ecological fallacy and can be very misleading. In fact, these erroneous conclusions are often perpetuated in the literature and reinforced by others who subsequently make reference to the original research.
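The ecological fallacy can also be demonstrated with a toy simulation. In this invented model, herniation risk depends only on group membership (imagine some other occupational exposure), not on an individual's lifting index; yet at the group level, lifting and herniation still appear to travel together.

```python
import random

random.seed(7)

# Hypothetical model: the GROUP with more lifting also has more
# herniations, but an INDIVIDUAL's lifting index has no effect.
def worker(group):
    if group == "construction":
        lifting, risk = random.gauss(8, 2), 0.20
    else:
        lifting, risk = random.gauss(2, 1), 0.05
    return lifting, random.random() < risk  # (lifting index, herniated?)

construction = [worker("construction") for _ in range(500)]
sedentary = [worker("sedentary") for _ in range(500)]

rate = lambda ws: sum(h for _, h in ws) / len(ws)
print(f"construction herniation rate: {rate(construction):.2f}")
print(f"sedentary herniation rate:    {rate(sedentary):.2f}")

# Within the construction group, compare lifting indexes of
# herniated vs non-herniated workers:
hern = [l for l, h in construction if h]
no_hern = [l for l, h in construction if not h]
mean = lambda xs: sum(xs) / len(xs)
print(f"lifting index, herniated:     {mean(hern):.1f}")
print(f"lifting index, not herniated: {mean(no_hern):.1f}")
# The two lifting means are about equal: the group-level association
# tells us nothing about individual risk in this model.
```

Only individual-level data, not group averages, can distinguish these two situations.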
Finally, researchers should be careful when extrapolating from a study population to the general population. The example of the 400 female chiropractic students in Part I of this series illustrates the potential problems with this. Since these women probably did not represent a typical cross section of American women, we cannot realistically draw general conclusions from the findings of this study which looked at only a narrow segment of a population.
In Part V of this series we'll delve into the important, but often confusing, measurements of sensitivity, specificity and predictive value.
- Hassard TH: Understanding Biostatistics. St. Louis, Mosby Year Book, 1991, p 187.
- Riegelman RK: Studying a Study and Testing a Test: How to Read the Medical Literature. Boston, Little, Brown and Co., 1981.
Arthur C. Croft, DC, MS, DABCO