Just as Odysseus was forced to pussyfoot his way between Scylla and Charybdis while passing through the Strait of Messina, clinical researchers have had to face their own horns of the storied dilemma by either capturing the Full Monty of health care interventions or reducing one or more aspects of their clinical experience to a form of measurement that is entirely objective and impartial. The problem is that these forms of measurement are but a tiny fraction of what is experienced by the patient, whereas capturing the full health care encounter is largely a descriptive undertaking subject to all manner of human error and bias.
I remember in particular a conversation with the renowned Danish chiropractic researcher Niels Grunnet-Nilsson in 2000, during which he bemoaned the lack of suitable outcome measures in chiropractic research. He had observed time and time again how large and meaningful responses to clinical treatment became sharply diminished once the patient was moved from the doctor's office to a research setting for measurements of one kind or another.
And so it is with clinical research in general. In this space about a year ago,1 I described all the patches and fixes that research authorities have applied to clinical research in their efforts to make it more clinically relevant. The research gurus rolled out a new paradigm – comparative effectiveness research – as an attempt to address the fragmented, inefficient, and glacial state of the U.S. health care system.2 Now come two new studies that appear to shake the very foundations of clinical investigation in the ever-evolving attempt to bring more research into live clinical practice.
The first of these revolutionary reports originates with Peter Bacchetti from the Department of Epidemiology and Biostatistics at the University of California, San Francisco. It challenges the widely held notion that a study is scientifically unsound – even unethical – unless its sample size provides at least 80 percent statistical power; that is, an 80 percent chance of avoiding a type II error (failing to reject the null hypothesis when it is in fact false). In reality, the problems with the 80 percent power rule turn out to be threefold:
- The rule assumes that a meaningful boundary exists between adequate and inadequate sample sizes, when in fact none does.
- The rule relies heavily upon inputs – such as the standard deviation of a continuous primary outcome measure – that cannot be accurately specified in advance. Pinning these inputs down would require so much preliminary data that the new study itself might no longer be necessary.
- The study's P-value, assumed to be <0.05 in the 80 percent power calculation originally proposed by Cohen in 1965,3 is rarely the sole criterion by which a clinical decision is made.4
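To see how heavily the convention leans on these unknowable inputs, here is a minimal sketch of the textbook two-sample calculation behind the 80 percent figure (a standard normal approximation, not code from Bacchetti's paper). Note that the answer cannot be computed at all without first assuming a standard deviation – and that doubling the assumed standard deviation quadruples the required sample size:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect, sd, alpha=0.05, power=0.80):
    """Conventional per-group sample size for detecting a mean difference
    `effect` between two groups, via the usual normal approximation.
    (Illustrative only; the article critiques, not endorses, this rule.)"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# A 5-point improvement on a scale whose standard deviation we must *guess*:
n_sd10 = n_per_group(effect=5, sd=10)  # guess sd = 10
n_sd20 = n_per_group(effect=5, sd=20)  # guess sd = 20 -> 4x the subjects
```

The sensitivity to the guessed standard deviation is the point: the "scientific" 80 percent threshold rests entirely on an input the researcher typically does not have.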
What happens in reality is that the projected value of a study rises continuously with increasing sample size, but with diminishing marginal returns: each additional subject adds less value than the previous one. As a result, sample sizes substantially smaller than those yielded by the 80 percent power calculations often remain worthwhile. This effect has been verified elsewhere, and the savings in the cost and time needed to perform a clinical research investigation are both self-evident and enormous.5-6
It gets worse. Slavish application of the 80 percent rule can lead to outright abuse, as when investigators work backward and fabricate an effect size or rationale simply to reach 80 percent power and appease reviewers (of grants or otherwise), when in truth the effect size or rationale may be more imaginary than real. Indeed, we cannot ignore the fact that breakthrough studies dealing with insulin therapy,7 the smallpox vaccine8 or the first case of an HIV cure9 were conducted with single individuals or families. These investigations ignored the conventions of sample size, yet we know they were game-changers in medical history.
How, then, should an adequate sample size be estimated? Bacchetti proposes that prospective researchers focus upon the practical issues of cost and diminishing returns to obtain the best value per dollar spent. For early studies, he suggests projecting the total cost of the study at different possible sample sizes and then selecting the sample size that minimizes the ratio of total cost to the square root of the sample size.6 Yet this calculation should not be set in stone: if the projected value added by a larger study exceeds the additional expense, then larger sample sizes remain admissible.10
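Bacchetti's cost-to-√n criterion is straightforward to sketch. The toy minimization below assumes a simple linear cost model – a fixed cost plus a per-subject cost, which is my illustrative assumption rather than a detail from his papers. Under that particular model, the ratio happens to bottom out where cumulative per-subject spending equals the fixed cost:

```python
import math

def cost_efficient_n(fixed_cost, per_subject_cost, candidates):
    """Return the candidate sample size minimizing total cost / sqrt(n),
    following Bacchetti et al.'s cost-efficiency criterion. The linear
    cost model (fixed + per-subject) is an illustrative assumption."""
    def ratio(n):
        total_cost = fixed_cost + per_subject_cost * n
        return total_cost / math.sqrt(n)
    return min(candidates, key=ratio)

# $50,000 in fixed costs, $500 per enrolled subject, considering n = 10..500.
# With a linear cost model the optimum lands at n = fixed / per-subject = 100.
best_n = cost_efficient_n(50_000, 500, range(10, 501))
```

In practice the projected cost at each candidate size could come from a budget spreadsheet rather than a formula; the criterion only needs the cost-versus-n pairs.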
The second of the ground-breaking events in research development is a new report by economists from the Massachusetts Institute of Technology and the University of California, San Diego, that adds a new twist to traditional concepts as to how research is best accomplished. It is due to appear in the RAND Journal of Economics.
Basically, Pierre Azoulay (of the MIT Sloan School of Management) and his colleagues compared two presumably similar groups of researchers in biology. One was supported mainly by the National Institutes of Health, which has traditionally funded tightly circumscribed projects with exquisitely structured grant proposals, often felt to be incremental in nature. The other group enjoyed support from the Howard Hughes Medical Institute, which provides open-ended, long-term research funding, allowing more flexible – perhaps circuitous – lines of inquiry to be pursued (which, incidentally, include dead ends and failures along the way).
The result was that the Hughes-supported group turned out about twice as many papers ranked in the top 1 percent in citation frequency. By the same token, the Hughes group was also more likely to publish papers that tanked. In other words, open-ended support was associated with more seminal research that could be considered more innovative and breakthrough in nature. While the researchers were careful not to criticize the NIH for its emphasis upon detail and follow-through, the fact remained that a freer, more spacious form of research allowing for spontaneity produced highly beneficial results that might not have been obtained otherwise.11
In the final analysis, how does this all relate to chiropractic? I see two major implications:
1. The profession needs to be far more accommodating to innovative, nontraditional and open-ended queries and approaches if it is to embrace the true spirit of D.D. Palmer's emphasis upon neural tone,12 let alone capturing the essence of patient well-being. This means that it must avoid falling into the knee-jerk pattern of following what it perceives to be the medical paradigm of research. Instead, it needs to embrace more of the elements of the patient as a whole in order to be most effective. This includes reaching out to such related interventions as acupuncture, homeopathy, naturopathy, and related professions in manual therapy that have been shown in varying degrees to be effective in practice.
For example, applied kinesiologists have already embarked upon this type of outreach to attempt to capture more inclusive patient experiences in treatment, much to their credit. For this initiative, they have had to endure the slings and arrows that clearly have accompanied most forms of medical advances through history. Indeed, it was Arthur Schopenhauer who once declared that "every truth passes through three stages before it is recognized. In the first, it is ridiculed, in the second, it is violently opposed, in the third it is regarded as self-evident."13 Therefore, conscience tells us that further research is needed to more clearly identify and amplify the elements upon which AK – like any discipline – is based.
2. Individuals should feel more emboldened to pose research questions and pursue them to the best of their abilities, seeking support wherever possible. Indeed, the program director in manual medicine at the National Center for Complementary and Alternative Medicine raised the point earlier this year that his department wished to see more innovative research proposals in the near future.14
The two studies just reviewed serve as vivid reminders that revolutions and upheavals in human thought are simply an integral part of our heritage. Without question, a portion of these presumably outlandish challenges will eventually become part of our daily lives and be taken for granted, as Schopenhauer once proclaimed. For this reason, we should all take heart and feel emboldened to propose new ideas, following them through whenever possible. To step back from this invitation, on the other hand, is to invite stagnation and decline.
- Rosner A. "Comparative Effectiveness Research: No Longer Stuck in Neutral." Dynamic Chiropractic, June 3, 2010;28(12).
- Committee on Health Care Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C.: National Academy Press, 2001.
- Cohen J. Handbook of Clinical Psychology. Wolman BB [Ed.]. New York, NY: McGraw-Hill, 1965, pp. 95-121.
- Bacchetti P. Current sample size conventions: flaws, harms, and alternatives. BMC Medicine, 2010;8:17.
- Bacchetti P, McCulloch CE, Segal MR. Simple, defensible sample sizes based on cost efficiency. Biometrics, 2008;64:577-585.
- Bacchetti P, Deeks SG, McCune JM. Breaking free of sample size dogma to perform innovative translational research. Science Translational Medicine, 2011;3(87):87ps24.
- Banting FG, Best CH, Collip JB, Campbell WR, Fletcher AA. Pancreatic extracts in the treatment of diabetes mellitus. Canadian Medical Association Journal, 1922;12:141-146.
- Gross CP, Sepkowitz KA. The myth of the medical breakthrough: smallpox, vaccination, and Jenner reconsidered. International Journal of Infectious Diseases, 1998;3:54-60.
- Allers K, Hütter G, Hofmann J, Loddenkemper C, Rieger K, Thiel E, Schneider T. Evidence for the cure of HIV infection by CCR5Δ32/Δ32 stem cell transplantation. Blood, 2011;117:2791-2799.
- Bacchetti P, McCulloch CE, Segal MR. Simple, defensible sample sizes based on cost efficiency. Rejoinder. Biometrics, 2008;64:592-594.
- Johnson CY. "When Breakthroughs Follow Failure." Boston Globe, June 20, 2011:B5.
- Palmer DD. The Chiropractor's Adjuster (The Text-Book of the Science, Art, and Philosophy of Chiropractic). Portland, OR: Portland Printing House, 1910.
- Marr M. Truth: Resuming the Age of Reason. Toboso Publishing, 2006.
- Khalsa P. Presentation at the Research Directors Meeting, Association of Chiropractic Colleges Research Agenda Conference, Las Vegas, March 17, 2011.
Click here for previous articles by Anthony Rosner, PhD, LLD [Hon.], LLC.