Peer-Reviewed Scientific Journals Don’t Really Do Their Job

Less than a month later, this got me into trouble. Apparently I had upset some Very Important People by “desk-rejecting” their papers (which means I turned them down on the basis of serious methodological flaws before sending the work out to other reviewers). This practice historically accounted for about 30 percent of the rejections at this journal. My bosses—the committee that hires the Editor in Chief and sets journal policy—sent me a warning via email. After expressing concern about “toes being stepped on,” especially the toes of “visible […] scholars whose disdain will have a greater impact on the journal’s reputation,” they forwarded a message from someone whom they called “a senior, highly respected, award-winning social psychologist.” That psychologist had written them to say that my decision to reject a certain manuscript was “distasteful.” I asked for a discussion of the scientific merits of that editorial decision and others, but got nowhere.

In the end, no one backed down. I kept doing what I was doing, and they stood by their concerns about how I was damaging the journal’s reputation. It’s not hard to imagine how things might have gone differently, though. Without the persistent support of the associate editors and the colleagues I called on for advice during this episode, I very likely would have caved and just agreed to keep the famous people happy.

This is the seedy underbelly of peer-reviewed journals. Award-winning scientists are so used to getting their way that they can email the editor’s boss and complain that they find rejection “distasteful.” Then the editor is pressured to be nicer to the award-winning scientists.

I heard later that the person who had hired me as Editor in Chief described the decision as “an experiment gone terribly, terribly wrong.” Fair enough: That’s basically what I think about the whole system of peer-reviewed science journals. It was once a good idea—even a necessary one—but it isn’t working anymore.

It’s not that peer review can’t work; indeed, as the old saying goes, it’s the worst form of quality control, except for all the other ones that have been tried. But there are new ways of doing peer review that we haven’t yet tried, and that’s where preprints come into play.

Many of the problems with peer-reviewed journals are problems with the journals, rather than problems with peer review per se. Preprints allow peer review to be taken out of the journals’ hands, which opens up dramatic new opportunities to improve it. There’s no guarantee that the freewheeling, open-ended peer review of preprints will be rigorous and just, but everyone can see the process: Was it thorough? Do the reviews seem detailed and fair? We get to judge the judges. Journals don’t let us do that. We just have to take their word that their peer review process is rigorous and just.

For now, most preprints will get very few, if any, reviews. That needs to change, but even just knowing that a paper has not been thoroughly reviewed is a huge improvement over the black box of journal-based peer review. As these public reviews become more commonplace, there is reason to hope that preprints will elicit more piercing criticism than typically happens at journals, particularly for sensationalistic papers by famous people. Journal editors and reviewers may be blinded by the flashiness of a paper’s claims or the prominence of its authors; or else they may notice a study’s flaws but choose to publish it anyway for the “impact.” Either way, they can be secure in the knowledge that they will not be held accountable for the stringency of the peer review process. In a preprint, though, a famous scientist’s exaggerated or unwarranted claims may be more likely to be called out, not less.

Preprints also introduce new challenges, such as how to ensure that unknown authors can get attention, or how to prevent friends from writing glowing reviews of each other’s work. But the most frequent concern I’ve heard—that preprints allow bad science to get into the hands of policymakers and practitioners—rings hollow. Peer-reviewed journals have been disastrously ineffective at preventing that very outcome. Indeed, some of the papers we published under my editorship at Social Psychological and Personality Science have been convincingly, and quite devastatingly, criticized. Editors and reviewers are fallible, and the journal peer review process is far too flimsy to live up to its reputation. It’s time we stopped putting so much faith in journals and started looking for more transparent and effective ways to peer-review scientific claims.
