What does evidence-based practice really mean? Horror stories abound of interventions and tests that were promising in the laboratory but failed to deliver, or proved harmful, in widespread application. Often this stems from inappropriate application (i.e., indiscriminate patient selection in practice) or from use in situations or conditions for which the intervention has not been tested ("off-label" use).
Few would argue that patients want health care that will predictably improve their condition or health status, but how do we know that a given procedure or intervention will do that? The "classic" naïve answer is that high-quality research "proving" a procedure's effectiveness must be available before the service is provided or paid for. In fact, this is rarely the case. Indisputable evidence of effectiveness is required before advertising or making claims. However, the holy grail of perfectly designed clinical trials and treatment comparisons spanning a broad range of patients and varying degrees of clinical confounders is the exception in making treatment decisions.
Applying research findings in a specific situation may be limited by the population that was studied (e.g., excluding people over 60), the expertise of the clinicians used in the study (skills may differ among clinicians in general practice), the design of comparison groups, or any number of other confounders. When a study has effectively addressed such issues, it is said to have good external validity, meaning that its results may be applicable to the real world. As you might imagine, these things are often hard to control and make for more complex, more expensive designs.
For example, if the difference expected between two treatment groups receiving only subtly different interventions (say, manipulation by an osteopathic muscle energy technique compared to a chiropractic high-velocity adjusting technique) is likely to be small, many more subjects will have to be included to see a statistically significant difference. Careful accounting of such methodological issues intrinsic to the design of a study is called internal validity. Because studies with poor internal validity would not even have applicability to the settings in which they were conducted, an emphasis on internal validity is the hallmark of grant writing, reviewing, funding, and scientific publishing. Even so, poor studies make it to press for any number of reasons, even in some of the most prestigious publications.
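The relationship between effect size and required sample size can be made concrete with the standard normal-approximation formula for a two-group comparison. The numbers below are illustrative assumptions, not figures from any actual manipulation study:

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-sample comparison.

    effect_size: standardized mean difference (Cohen's d).
    Uses the normal-approximation formula
        n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # desired statistical power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A subtle between-technique difference (hypothetical d = 0.2) demands
# far more subjects per group than a large one (hypothetical d = 0.8):
subtle = n_per_group(0.2)   # on the order of ~400 per group
large = n_per_group(0.8)    # on the order of ~25 per group
```

A sixteen-fold drop in sample size when the effect is four times larger illustrates why trials comparing two similar active treatments are so much costlier than trials comparing a treatment to no treatment.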
Achieving both high internal and high external validity is more crucial still. As you might imagine, this has become one of the most fundamental issues in health services research and policy development. Basing real-world decisions on better evidence requires assessment of both internal and external validity.
I can't think of a better argument for why research matters to the "average" practitioner than the use of published literature by your peers, health care administrators, adjudicators, peer reviewers, and policymakers to make decisions. It becomes ever more important for the practitioners, leaders, and professional representatives in chiropractic to understand how "evidence" is assessed and how "best practices" are determined. The more DCs who can actually engage and contribute to the process in a high-quality manner, the better.
Just as some scientists and their methods may have a bias that favors internal validity, practitioners may have just as much bias in favor of external validity. Yet, for research to have usefulness for making real-world decisions, it must have large measures of both.
Evidence-based medicine, best practices, clinical care pathways, guidelines, technology assessment, outcomes/performance measurement, and clinical accountability are all evolving to better reflect the balance of internal and external validity. In the long term, this will be of value to quality of care. Unfortunately, how it is applied during the learning curve phase (especially by those with inadequate understanding of the complexity of the issues involved) could have unforeseen consequences.
Unaccountable medicine has contributed to unnecessary surgeries and treatment and skyrocketing health care costs. When these costs got high enough, the marketplace and regulators reacted with managed care that has severely reduced the autonomy and discretion of providers. This has contributed to underutilization of appropriate care, short-term savings with long-term negative ramifications, and poor patient outcomes in many situations.
Obviously, either extreme is undesirable. With patient legislation and court cases setting precedents on how far payers can go to control costs and clinical autonomy, the pendulum is beginning to swing back again. Wherever it goes next, however, it will likely embrace research and evidence more strongly than ever. Consumers have already voiced the importance of affordable health care. Policymakers are more explicitly basing decisions on evidence. As a result, the demand on scientists to address external validity in clinical studies is increasing, and it is more relevant than ever for the chiropractic community to understand how research is designed and applied in the world around us.
More than ever, we need our professional community leaders and ordinary practitioners to have a working knowledge of research issues if they are to engage effectively in policymaking and contribute constructively to deciding where health care goes in the future. The research sophistication of payers, patients, providers, and policymakers has tangibly increased in recent years. Becoming a research "consumer" through support of the chiropractic research enterprise, understanding of research issues, and familiarity with our own professional clinical and scientific literature is becoming more important every year. Without it, our profession's ability to compete within the shifting health care marketplace is diminished.
Click here for previous articles by Robert Mootz, DC.