What happens when journalists report the findings of a scientific study to the general public? Often, the findings are stated out of context, broadly interpreted, and stripped of the nuance and uncertainty that characterize much of scientific research. Should this scare us away from publicizing findings to a wider audience than a scientific journal typically reaches? Or is publicity critical to uptake?
What is our responsibility as scientists to communicate our findings, not only through dedicated dissemination and implementation planning, but also through the popular press?
Here’s a recent example. JAMA Network Open published the findings of a study by Mandsager et al. on the association of cardiorespiratory fitness (CRF) with long-term mortality. CRF was measured by exercise treadmill testing in a sample of over 120,000 patients who were having the test done anyway as part of their care (meaning most were being evaluated for symptoms potentially related to cardiovascular disease). The investigators quantified CRF as peak estimated METs, calculated percentiles separately by sex and age, and then stratified CRF into groups based on those percentiles. They used public and hospital records to determine mortality; median follow-up was 8.4 years. The investigators concluded that CRF was significantly inversely associated with all-cause mortality (i.e., the fitter you are, the less likely you are to die). They went on to state that low CRF was as risky as or riskier than diabetes, coronary artery disease, or smoking. They also noted, importantly, that “there does not appear to be an upper limit of aerobic fitness above which a survival benefit is no longer observed”, but “there continues to be uncertainty regarding the relative benefit or potential risk of extreme levels of exercise and fitness”. They offer several other sensible caveats as well, including that the study population may not be representative of the general population and that this retrospective study is subject to potentially significant unmeasured factors. All things considered, though, this seems to represent very good news: a modifiable factor is strongly associated with increased longevity in a large sample with long follow-up. Bravo!
So how did this get reported in the popular press? Gizmodo’s headline reads “No Such Thing As Too Much Exercise, Study Finds”.
What? Did the author and I read the same study? Even though the article goes on to describe the study fairly well, that headline is way off target. The investigators didn’t collect any data about how, or how much, the patients exercised, so the headline is misleading and could easily be misread by someone who didn’t read the whole story closely. This is a classic click-bait tactic, and it’s unfortunate.

CNN took a different approach: “Not Exercising Worse for Your Health than Smoking, Diabetes, and Heart Disease, Study Reveals”. At least this one, though still dramatic, is recognizable as a reference to the paper. The CNN article, unlike the Gizmodo piece, includes interview quotes from one of the investigators as well as from other experts not affiliated with the study. These experts suggest treating exercise as a prescription, reminiscent of the “Exercise is Medicine” program. Might programs like this benefit from the science discussed here, both by bolstering their evidence base (important for clinician and insurance buy-in) and by enhancing their public profile? Can press coverage help the evidence reach people effectively in various clinical and non-clinical settings? How can we maintain enthusiasm for this study’s findings after the initial buzz wears off? And how can we prevent irresponsible reporting from damaging the study’s credibility with the public?
As a scientist, clinician, or other expert, do you talk to the media? Do you promote your work in traditional or social media outlets? Do you give quotes for stories not related specifically to your research?