Blame the research-industrial complex.
Science likely has a "spin" problem. According to a new study in PLOS Biology, more than a quarter of biomedical research papers may contain hype or misleading interpretations designed to make their findings sound more impressive.
To reach that conclusion, a team of researchers at the University of Sydney reviewed 35 previously published academic studies about "science hype." (The new study is itself a meta-analysis, since it reviews existing research. Does that make it a meta-meta-analysis? We digress.) The results were disturbing: among the 31 meta-analyses they looked at (which combine data from different studies), some 26 percent of papers contained hype.
Even more disturbing, in papers that described non-randomized trials—where study participants are not assigned to different treatment groups by chance—that number rose to 84 percent. If participants choose which group to join, or researchers assign them to groups based on what they know about them, bias can creep into the results.
Published studies generally get combed through by peer reviewers (aka other scientists) as well as journal editors—if it's a non-bullshit journal—so this finding is upsetting to say the least. The paper was released to coincide with its presentation at the Eighth International Congress on Peer Review and Scientific Publication, a meeting where scientists discuss research practices; it happens once every four years.
What kind of spin are we talking about? The 35 studies reviewed used different definitions for hype, but put together, they showed a wide range of shady strategies. Some used results that weren't statistically significant to make inappropriate claims, or emphasized a particular subset of data to support a conclusion while ignoring the rest. Another problem lay in presenting data in a misleadingly favorable way, like underreporting negative outcomes or writing abstracts that were overly positive. Others went beyond the study results to make recommendations for doctors that weren't supported by the evidence. And finally, some of the studies committed that cardinal sin of science: claiming their study proved that A caused B rather than that A is merely associated with B.
Why does this happen? Of the 35 studies reviewed, 19 tried to identify factors associated with hype. People typically think to look for funding and conflict of interest statements when reading new studies, and so did the researchers. Unfortunately, it wasn't that easy to spot spin-producing factors; the authors found them too wide-ranging and unrelated to draw definitive conclusions.
They did notice, however, that research about what produces spin tended to focus on individual scientists, journals, or studies—not on the influences affecting entire sectors. That means the bigger incentives driving hyped science haven't been examined. "Conclusive" science gets more media coverage, which can burnish a researcher's reputation, which can lead to greater opportunities. That's an obvious problem, but one that hasn't been tackled so far.
"The contribution of research incentives and reward structures—for example financial and reputational—that rely on 'positive' conclusions in order to publish and garner media attention is yet to be addressed," Lisa Bero, a study co-author, said in a statement. So peer reviewers and journal editors: consider yourselves warned. (Here's hoping the people looking at this one did a bang-up job.)
Bero and her co-authors call for, you guessed it, more study to determine what cultural and institutional factors create pressure on researchers to hype their work. They also want to see more examination of the effects of spin, and research into what could be done to combat it, such as publishing raw data alongside multiple interpretations of the data. Basically, the science field needs to develop better ways to police itself. In the meantime, unfortunately, we'll have to fall back on a long-standing axiom: caveat lector—reader beware.