I’m thrilled to be on the ground in Minneapolis this week speaking, attending, and geeking out at the AcademyHealth Annual Research Meeting. You can join the fun at #ARM15. Warning: we all really love a great regression analysis and a good r².
Yesterday, I spoke on a panel on engagement, focused on opportunities to include patients (or rather, people) in health services research. Health services research (HSR) may be a term you haven’t stumbled across to date, but I promise you have benefited from its brave cohort of explorers. HSR studies the application of discoveries like the original finding that washing your hands leaves fewer bacteria on them. A health services researcher took that paper, walked into a hospital staff meeting, and said, “Hey, we need to try washing our hands more and see whether fewer patients get infected with diseases while in the hospital, and, you know, whether more patients leave our facility alive.” (It was the 19th century.)
A significant amount of HSR work tracks the trends, changes, and adjustments to clinical care that result from policy changes, and, as you may suspect, this is a great year for capturing data: from CHIP changes to Medicaid expansion to long-term care changes under the Affordable Care Act. So far, sessions have shared findings demonstrating significant improvements in care access and cost per patient in each state. More awesome? The states sharing their data with researchers are seeing incredible new opportunities to make small investments in critical populations, creating a superhighway to services already active in the community and reducing overall costs for what I’m calling “well-being” coverage for their residents.
Which brings me to a great debate that popped up yesterday afternoon, one I find I’m still noodling on this morning.
Sometimes, as a researcher, the phone rings: a government official’s office (local, state, or federal, from any party) asks for a data point related to your field of study, or to a project you are working on, that is not yet available via journal publication. Perhaps the working paper still awaits confirmed analysis of the full data set, or perhaps the inquiry calls for a new analysis of a subgroup from an already published paper.
Several folks shared stories about a time this happened to them; the responses seemed to follow these paths:
- The researcher emailed the preliminary findings and explained the caveats to the data point’s interpretation. The data point was then published in a Congressional report without those caveats, stripping it of its context.
- The researcher opted not to email the preliminary findings. Without that evidence to guide it, the resulting public dialogue on the topic proved very damaging to the specific patient population.
What’s a researcher to do? A debate quickly broke out.
“Never release evidence and findings until you are 100% certain of the results and have submitted them for publication and peer review.”
“Connect with your department or leverage your own social media presence to document the research evidence in question through a blog post and offer that as the source for the inquiry.”
“Engage in a dialogue around what you know, what you don’t know, and explore the discussion of the topic together.”
I thought of my colleagues at the Association of Health Care Journalists, a group working to collectively support one another as they cover health topics, tricky data and reports, and fairly debate findings and discoveries, from diet hoaxes all the way to complex analyses from the teams at ProPublica and others. When in doubt? We explore it together.
We know that, as a researcher and as a steward of one’s civic duty to community, you want to jump through the open window when given an opportunity to share your findings. How do you protect your credibility in the process?
What is your solution?
This post originally appeared on Medium.com/@MsWZ