On “Evidence-based” Practice / Medicine

These 2 tweets resonated with me:

I’ve tweeted my thoughts on Evidence-Based Practice, but it might be worthwhile to organize my thoughts on this topic in a blog post – one that will likely evolve over time.

The way I see it, we need research (evidence) for a few basic reasons, but one in particular stands out to me: to improve our clinical practice in terms of efficiency, application, and outcomes. Admittedly, there are different ways to get on this path of improvement; research-based evidence is one of them. Many passionate individuals have taken this path and espoused the triumphs of an Evidence-Based approach. The most common question (implied or blatant) in online discussions is “Where is the evidence?”, or some version of it: “Does the research support that?”, “Can you back that with data?”, etc.

All this reliance on data and research makes me wonder how much of a panacea research data really is. How pure is the science in research? How unbiased are these published “peer-reviewed” articles? What about the unpublished ones? Why were they turned down?

What about when the “facts” change? Just consider the many fluctuations in nutritional research over the last 30-40 years. Dietary recommendations seemed to change haphazardly. Here’s one on when you should start introducing your children to peanuts:

Is it any wonder that the public is losing trust in scientists and scientific journals? Can such uncertainty, wavering, and poor research infiltrate medicine?

Of course it can; just look here and here. I started to wonder when this would bleed into the world of physical therapy. The more I thought about it, the more I realized that it was already on the minds of some clinicians. Other than tweeting their opinions, some of them have even written up their thoughts in blog posts. Here are some interesting reads:

Metacognition, Critical Thinking, and Science Based Practice by @Dr_Ridge_PT. This is a solid read with links to relevant articles by others.

Where’s the Evidence? by @The_OMPT is a succinct read that echoes some of my sentiments.

And finally, The Tyranny of Evidence-Based Medicine by @DrGregBrown is another very worthwhile read.

All this research data and journal evidence can get very overwhelming. The Cochrane Collaboration does a nice job of filtering through the flood of journal publications. Their work is important and very necessary, but there might be something even more important and necessary. I’m not sure if I should call it critical thinking or clinical reasoning, but it boils down to the skill of assessing the outcomes of your interventions. Can it be done? Some say “no”. I say “yes”.

Let me clarify. I’m not talking about an understanding of the biopsychosocial process behind the outcomes, but the Clinical Process that yields the Outcome. (Look here for more of my thoughts on Process & Outcomes.) This approach is blatantly Practical: it applies the concept of phenomenology, leaving the “how” to researchers and academics. According to Taleb in his fantastic book, Antifragile,

Phenomenology is the observation of an empirical regularity without a visible theory for it.

This rhymes with a very practical concept of mine: If treatment X consistently precipitates outcome X for condition X, then you might be onto something. This side-steps the need for a biological, psychological, or any other explanation and leaves you with a straightforward practitioner’s approach to a particular presentation.

Dr. Greg Brown echoes some of my sentiments in the closing paragraph of his fantastic post:

The bottom line is that there are two sorts of Medicine — the sort which works and that which doesn’t. Outcomes are the final arbiter. The promise of all this is that the practice of Medicine can once again become what it has always been — a person seeking help, and the practitioner providing a perspective. Without the tyranny of EBM, there is no longer a third entity in the room. It is no longer good enough for “the evidence says” to be the end of the conversation. Its premise is false, its promises illusory, its autocratic arrogance no longer tenable.

EBM is dead, only most of us don’t know it yet. Long live outcome-based medicine.

By no means am I discarding the value and worth of research. (Although, I must admit the EBP folk are starting to remind me of fundamentalists. Their certainty is more fragile than they may realize. Then again, I could be wrong.) It is a valuable machine that drives much-needed efficiency in the profession. There is also a link between research and reimbursement that you should consider. Dr. Sandy Hilton, DPT, MS did a fantastic job of explaining some of this for us:

Physical therapy research is an interest of yours. Tell us about the connection between research and insurance reimbursement. I think this is something most PTs (including myself) should know more about.

That’s a great question. Specifically for pelvic health (incontinence, post-prostatectomy, pregnancy, sexual dysfunction, transgender work) there is a perceived push from the insurance companies to deny treatment as not medically necessary. We need more quality research regarding the benefit of PT intervention to decrease the number of visits needed to reach functional independence. We also need research showing that the quality-of-life and functional outcomes lead to fewer payments by the insurance company. (Of course, if studies are conducted and my theory is false – and PT doesn’t really make a difference – we must accept that and change!)

Right now there are insurance companies denying PT visits for bowel dysfunctions, bladder dysfunctions, and sexual dysfunctions due to their interpretation that Physical Therapy is not needed, or that there is insufficient proof that intervention will alter the outcome. I think more participation in outcome studies, such as those available through FOTO, will help shape the future of insurance reimbursement. I care about this for the profession as a whole, even though my clinic is out of network for all but Medicare and Tricare.

Going back to my original thesis, evidence should enhance NOT tyrannize your practice. I believe we have reached (or are nearing) an apex of data deluge that necessitates the ability to filter through numbers, graphs, conclusions, and implications to distill real-world, applicable practicality. Many Randomized Controlled Trials do not reflect the world we live in simply due to the fact that our world isn’t a Randomized Controlled Trial, and definitely isn’t Double Blind. Context and psychology play a major role in everyday situations, and (in my opinion) can override any RCT conclusion. Subjective Perception is Powerful.

The ultimate filter (clinically) is our skill in assessing the outcomes of our interventions. Here are 3 links to help you along the way: one, two, and three. The third link is my favorite by a long shot!

Also, I encourage you to read the entire Twitter thread sparked by Marc Andreessen. Equally worthwhile (and more relevant) are these 3 threads sparked by @NicoleStoutPT: one, two, and three. (Again, the third link is the best!)

Could I be a bit off? Without a doubt. Mostly wrong? Absolutely. Completely wrong? Definitely. Either way, I want to hear from you. This way (maybe) I’ll be less wrong. Also, I plan on evolving this article if anyone convinces me of anything worth changing. I’ll leave old material in strikethrough font so that any change in my thinking can be followed.

I am @Cinema_Air.


9 thoughts on “On “Evidence-based” Practice / Medicine”

  1. Hey Cinema,

    I enjoyed your post, thanks for sharing. You make several good points, particularly when you say “evidence should enhance NOT tyrannize your practice”. Evidence is but one aspect that can inform practice, and it comes with unique advantages and disadvantages that need to be considered carefully. I would hope that anyone who claims to support evidence/science based practice does not solely base their practice on published trials. To be honest, I’m not sure that there is anyone who actually does this. Science, outcomes, observations, experience, and patient values are not isolated silos. We need all of these things to deliver the best care.

    I think you will find this editorial from Lorimer Moseley worth a read — http://www.ingentaconnect.com/content/ppa/pr/2013/00002013/00000035/art00002

    A few things you wrote stood out, my specific comments are below

    “All this reliance on data and research makes me wonder how strong of a panacea research data might represent. How pure is the science in research? How unbiased are these published “peer-reviewed” articles? What about the unpublished ones? Why were they turned down?”

    There are no doubt limitations to research – those you listed and many more. I do not think anyone regards research data as a panacea, though. Look no further than the work done by Ben Goldacre highlighting the shortcomings and misdeeds of research in his books Bad Science and Bad Pharma. Does proclaiming EBP dead in favor of a move to outcomes based medicine fix these problems? Probably not. Would identifying these problems and working towards solutions fix them? Possibly.

    Consider these very same questions when appraising observed and measured outcomes — How pure is the information they provide? How unbiased are our observations and outcomes? What about the outcomes after we stop following patients?

    —–

    “What about when the “facts” change? Just consider the many fluctuations in nutritional research over the last 30-40 years. Dietary recommendations seemed to change every 3 years! Coffee is good for you. Coffee is bad for you?”


    Of course! Science never pretends to be anything but provisional. Our understanding of the world should always be evolving. We once believed the earth to be flat; now we know it to be round. We used to believe Descartes’ model of pain was accurate; now we believe pain to be a top-down experience, rather than a bottom-up sensation.

    This changing of course is called medical reversal, and it SHOULD happen when there is adequate information to drive the change (see here: http://onlinelibrary.wiley.com/doi/10.1111/1742-6723.12044/full). This isn’t a bad thing, but a marker of progress.

    ——

    “If treatment X consistently precipitates outcome X for condition X, then you might be onto something.”

    You absolutely might be onto something, but that’s all you can reasonably say. Otherwise you risk making a hasty generalization. This equation can be useful for generating a hypothesis (which is important and necessary), but until it is tested we cannot be certain that it is specifically “treatment X” that consistently precipitates outcome X. There are just too many confounding variables. To quote Moseley from the above editorial: “If this [treatment X] is as good as you think it is, then we should all know about it” – so let’s test it and learn more about it. We owe as much to our patients.

    I think the questions posed in the first paragraph are relevant here: How pure is the information gleaned from this equation? Is it not MORE prone to bias and false positives than research?

    We can and should be assessing our outcomes, but we also need to understand just what claims can be made with our assessments. Claims of efficacy and effectiveness without evidence are limited. Outcomes measure outcomes, not treatment effectiveness (http://ajp.physiotherapy.asn.au/ajp/vol_51/1/AustJPhysiotherv51i1Herbert.pdf).

    ——-

    “Many Randomized Controlled Trials do not reflect the world we live in simply due to the fact that our world isn’t a Randomized Controlled Trial, and definitely isn’t Double Blind. Context and psychology play a major role in everyday situations, and (in my opinion) can override any RCT conclusion. Subjective Perception is Powerful.”

    People who understand RCTs and their applicability are frank about stating that the information is representative of a sample of a population and its generalizability is limited to how the trial was set up. Some are more internally valid, some are more externally valid. Some look at the efficacy of a treatment, some look at the effectiveness.

    Prescriptive trials are going to have a high internal validity at the sacrifice of external validity. This is not a pitfall, but a goal of the trial’s design. These prescriptive trials are solely designed to determine if a treatment has efficacy. In other words, does the intervention work in controlled circumstances? These trials are very selective and controlled and certainly do not reflect the world we live in. That’s the point, though.

    Pragmatic trials, on the other hand, are designed with the goal of having high external validity and look to test effectiveness. That is, does the intervention demonstrate an effect in a setting that reflects typical clinical practice? These types of trials have “looser” exclusion criteria and account for things such as personal equipoise and the heterogeneity of the humans we treat. These do in fact reflect (though not perfectly) the world we live in.

    To say context and psychology can override any trial conclusion (especially a pragmatic trial) is a very bold statement that would need to be further substantiated.

    We need both types of trials (and many other forms of evidence) to inform, not dictate, our decision making. We also need to be able to appraise our outcomes, incorporate patient values, and utilize our clinical expertise to integrate the information from all of the above. To subscribe solely to any particular aspect (outcomes, RCTs, patient values, expertise) would be foolhardy.

    Thanks again for your post, these types of discussions are incredibly helpful.

  2. Kenny, thanks for your thought-out response!

    While I do not view research data as a panacea, I’m not so sure the EBP-folk on Social Media would say the same. In fact, in one conversation an individual on twitter stated s/he believes their practice is 100% supported by research. To this individual, research data is a panacea. If there’s one, then there are more…

    When appraising outcomes you (the thoughtful clinician) have an advantage over published research: the direct feedback over each visit and a stretch of visits. The turnaround period available for analysis is much shorter. And you, a much smaller boat to steer than the Costa Concordia of research data, can adapt to this feedback much quicker. The ability to learn from and adapt your clinical practice based on your outcomes is a skill worth sharpening. I believe this is one major advantage of our profession.

    Biases? There is no escape from biases. Another issue I have with “evidence” is the veneer of objectivity that lures many readers. Research bias can’t be ignored. Combine this with the reductionist nature of RCTs and Double-Blind studies, and you have a more imperfect foundation than you first imagined. Again, research data has a place, but its flaws cannot be ignored. Do not underestimate your abilities as a thoughtful clinician; many research proposals are confirmations of concepts envisioned by individuals like you.

    It might be worthwhile to check out the vast amount of research that is retracted every year. http://retractionwatch.com/ will give you an idea of how erroneous (and biased, and prone to monetary sway) these endeavors are – much more than you might realize at first.

    When I say that Context and Psychology can override any RCT conclusion, I stand by it 100%. It is substantiated by my clinical experience every day. Just because an RCT concluded something doesn’t mean it translates into clinical practice, which involves the patient’s disposition, intention, environment (internal & external), etc. No need to waste research money on this!

  3. Hi Cinema & Kenny,

    Thought-provoking post, Cinema. My post on metacognition sums up much of my analysis on thinking and a practice based on the principles of science (which includes psychology, expectation, etc.). But I also think this longer post on data and how to interpret it attempts to illustrate the complexity of integrating basic science research, clinical research/RCTs, clinical experience, and clinical observation.

    http://ptthinktank.com/2014/05/15/data-quality-garbage-in-garbage-out/

    I think Kenny presents some interesting questions. Of course there is bias and error in science, as well as publication bias towards positive trials or studies, which falsely inflates what we think works.

    Treating people with complex complaints such as pain is definitely messy, and research is not perfect by any means. But science based principles are the foundation for proper understanding.

  4. I promised Cinema my thoughts on biological plausibility as the basis for Science Based Medicine and as a favorite self-check for considering my own treatment approach, clinical reasoning and con-ed choices. The topic is covered with skill here: http://www.sciencebasedmedicine.org/plausibility-in-science-based-medicine/

    A short quote: “Plausibility is essentially an application of existing basic and clinical science to a new hypothesis, to give us an idea of how likely it is to be true. We are not starting from scratch with each new question – which would foolishly ignore over a century of hard-won biological and medical knowledge. Considering plausibility helps us to interpret the clinical literature, and also to establish research priorities. But plausibility is not the ultimate arbiter of clinical truth – it must be put into context with clinical evidence, just as clinical evidence must be put into the context of scientific plausibility.”

    There are 3 broad categories of answers to “is this approach I want to use in the clinic or technique that I want to learn plausible?”:
    1) There is support for the approach from the current understanding of science and biology.
    2) Neutral or unknown: The idea is new, untested or not well tested, but the theory underpinning the idea can hold up to the current understanding of science and biology.
    3) There is no support for the theory or idea within current understanding of science and biology.

    My favorite part of this is “current” because it celebrates that there will be more known in the future than we know now. I have great respect for clinicians who learn and keep up with current knowledge. I have even more respect for those who can let go of their cherished techniques as the evidence shows there is no benefit. I used to be exceptional at ultrasound and used TNS over acupuncture points; they taught us in school that both of these were helpful. Trouble is, despite the fact that the machines are still marketed by vendors at APTA conferences, there is a large body of evidence that says ultrasound is less effective than almost anything else we could do, that TNS is largely placebo, and that the electrode placement matters little. Giving up both of these is an easy example of applying science based medicine in the clinic.

    More challenging? Take a hard look at how you practice and what is known. Keep anything that falls into #1 and #2 and boldly abandon cherished theories/explanations for techniques that are in #3.

    I think there is resistance to rejecting implausible techniques or theories because of the bucket-loads of money spent on continuing education. If a PT takes a series of courses to become “certified” in something, and that thing turns out through research to be not so great… and the results were due to the interaction with the patient, not the expensive technique itself, that therapist will experience some cognitive dissonance and may decide that they are right anyway, and the science just hasn’t caught up yet. And so that PT continues to use it, to teach it, to promote and defend it. What if there is questioning or criticism and requests for quality evidence that the intervention is the thing causing the effect? Then those using the questioned technique rally up and claim persecution or bullying or closed-minded opposition. Studies showing minimal to no effect are said to be poorly done or too narrow… We could plug in any number of techniques into that scenario, but it wouldn’t matter.

    The question is really “Does the theory underpinning your preferred technique(s) hold up to what is currently known about biology and science?” If yes, carry on. If no, find a new theory. (and test it by trying to DISPROVE it)

    1. Sandy, thanks for your comments. They’re always some of the most stimulating comments I read, whether here, on Twitter, etc.

      My view: if you were to come upon a technique (learned or discovered) that consistently yields positive results (even if you don’t know the mechanism behind it), then it is worth applying, regardless of the fact that you cannot explain why it works. Certain exercises, for example, have improved outcomes regardless of the theories behind them. Could it have been placebo? Sure. Did it work? Looks like it did. Would I prescribe these exact exercises if the exact presentation presented itself again? I probably would… regardless of my lack of understanding of the theory(ies) behind the curtain.

      Is it worth it to develop a theory to explain the mechanisms behind the scenes? Absolutely. And that’s when I look to those more interested in the mechanics/theories than I am: the academics.

      Regardless of explanatory theories (which come & go in fad-like fashion), if I can obtain consistent outcomes based on patient presentation, then I’m good with it. Assessing outcomes isn’t mathematics, but it can be approximated for practical application… I think.

      “Plausibility” is a term that I think will dilute with popularity. It seems very vulnerable to subjective interpretation of “objective” data or conclusions. Also, (I might be completely wrong here) I think Plausibility has a strong bias toward past interpretations and frameworks. Completely new models that do not fit currently popular frameworks will be derided and ignored regardless of outcomes. Again, nutrition is a perfect example of this. Opposition is a necessary component of progress, and everything (as it should) has bounds. To me, “plausibility” is a one-eyed mask at the masquerade – singular in vision, and very prone to blindsides.

      As far as explanatory theories, I completely agree with you in terms of their importance; I just place outcomes ahead of the theory du jour. Maybe I’m turning a bit Machiavellian??
