October 21, 2015
DOI: 10.1056/NEJMp1512330
from the New England Journal of Medicine

In August 2015, the publisher Springer retracted 64 articles from 10 different subscription journals “after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports,” according to a statement on their website.1 The retractions came only months after BioMed Central, an open-access publisher also owned by Springer, retracted 43 articles for the same reason.
“This is officially becoming a trend,” Alison McCook wrote on the blog Retraction Watch, referring to the increasing number of retractions due to fabricated peer reviews.2 Since the practice was first reported 3 years ago, when South Korean researcher Hyung-In Moon admitted to having invented e-mail addresses so that he could provide “peer reviews” of his own manuscripts, more than 250 articles have been retracted because of fake reviews, about 15% of the total number of retractions.
How is it possible to fake peer review? Moon, who studies medicinal plants, had set up a simple procedure. He gave journals recommendations for peer reviewers for his manuscripts, providing them with names and e-mail addresses. But these addresses were ones he created, so the requests to review went directly to him or his colleagues. Not surprisingly, the editor would be sent favorable reviews — sometimes within hours after the reviewing requests had been sent out. The fallout from Moon's confession: 28 articles in various journals published by Informa were retracted, and one editor resigned.3
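A simple automated screen could have raised flags in a case like Moon's. The Python sketch below is purely illustrative; the heuristics, function name, and addresses are assumptions of mine, not any journal's actual safeguards. It flags author-suggested reviewers whose e-mail domain matches the author's own, or who use a free webmail provider rather than a verifiable institutional address.

# Hypothetical screen for author-suggested reviewer addresses.
# All names and heuristics here are illustrative assumptions.
FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def reviewer_red_flags(author_email, reviewer_email):
    """Return reasons a suggested reviewer warrants manual vetting."""
    flags = []
    author_domain = author_email.rsplit("@", 1)[-1].lower()
    reviewer_domain = reviewer_email.rsplit("@", 1)[-1].lower()
    if reviewer_domain == author_domain:
        flags.append("reviewer shares the author's e-mail domain")
    if reviewer_domain in FREE_MAIL_DOMAINS:
        flags.append("free webmail address rather than an institutional one")
    return flags

# Example: a Moon-style suggestion in which the "reviewer" address
# is a webmail account that in fact leads back to the author.
print(reviewer_red_flags("author@univ.ac.kr", "reviewer01@gmail.com"))
# -> ['free webmail address rather than an institutional one']

Of course, heuristics like these can only prompt human vetting; they cannot by themselves confirm that a suggested reviewer identity is genuine.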
Peter Chen, who was an engineer at Taiwan's National Pingtung University of Education at the time, developed a more sophisticated scheme: he constructed a “peer review and citation ring” in which he used 130 bogus e-mail addresses and fabricated identities to generate fake reviews. An editor at one of the journals published by Sage Publications became suspicious, sparking a lengthy and comprehensive investigation, which resulted in the retraction of 60 articles in July 2014.
At the end of 2014, BioMed Central and other publishers alerted the international Committee on Publication Ethics (COPE) to new forms of systematic attempts to manipulate journals' peer-review processes. According to a statement published on COPE's website in January 2015, these efforts to hijack the scholarly review system were apparently orchestrated by agencies that first helped authors write or improve their scientific articles and then sold them favorable peer reviews.4 BioMed Central conducted a comprehensive investigation of all their recently published articles and identified 43 that were published on the basis of reviews from fabricated reviewers. All these articles were retracted in March 2015.
The type of peer-review fraud committed by Moon, Chen, and third-party agencies can work when journals allow or encourage authors to suggest reviewers for their own submissions. Even though many editors dislike this practice, it is frequently used, for a number of reasons. One is that in specialized fields, authors may be best qualified to suggest suitable reviewers for the topic and manuscript in question. Another is that it makes life easier for editors: finding appropriate peer reviewers who are willing to review in a timely manner can be both difficult and time consuming. A third reason may be that journals and publishers are increasingly multinational. In the past, the editor and editorial board of a journal knew both the scientific field it covered and the people working in it, but it's almost impossible to be sufficiently well connected when both editors and submissions come from all over the world. Having authors suggest the best reviewers may therefore seem like a good idea.
In the aftermath of the recent scandals involving fake peer reviewers, many journals have decided to turn off the reviewer-recommendation option on their manuscript-submission systems. But that move may not be enough, as the publisher Hindawi discovered this past spring. Although Hindawi doesn't let authors recommend reviewers for their manuscripts, it decided to examine the peer-review records for manuscripts submitted in 2013 and 2014 for possible fraud.
The peer-review procedure used in Hindawi's journals depends mainly on the expertise of their editorial board members and the guest editors of special issues, who are responsible for supervising the review of submitted manuscripts.5 Since the peer reviewers selected by the guest editors were not subject to any sort of independent verification, editors themselves could undermine the process in much the same way that authors or third-party agencies have done elsewhere: by creating fake reviewer identities and addresses from which they submitted positive reviews endorsing publication.
And that's exactly what happened: Hindawi's investigation revealed that three editors had engaged in such fraud. When all manuscripts handled by these editors were examined, a total of 32 articles were identified that had been accepted thanks to the comments of fake reviewers. It remains unclear what motivated the guest editors to engage in such fraud, and it has not been determined whether the authors of the manuscripts involved participated in the deception in any way.
There are several lessons to be learned from these instances of peer-review and peer-reviewer fraud. One is that the electronic manuscript-handling systems that most journals use are as vulnerable to exploitation and hacking as other data systems. Moon and Chen, for example, both abused a feature of ScholarOne: the e-mail messages sent to scholars (at whatever address has been provided) inviting them to review a manuscript include log-in information, and whoever receives those messages can sign into the system. Most other electronic manuscript submission systems have similar loopholes that can easily be hacked.
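One common mitigation for this particular loophole is to make invitation links single-use and short-lived, so that a credential sent by e-mail cannot be reused or shared indefinitely. The Python sketch below shows that pattern with a toy in-memory store; the function names and design are assumptions for illustration, not ScholarOne's actual implementation, and the approach does not by itself stop fraud when the invited address is fake to begin with.

# Sketch of single-use, expiring review-invitation tokens, as an
# alternative to e-mailing reusable log-in credentials. The in-memory
# store and function names are illustrative assumptions only.
import secrets
import time

TOKEN_TTL_SECONDS = 14 * 24 * 3600  # invitations expire after two weeks
_tokens = {}  # token -> {"reviewer", "manuscript", "expires", "used"}

def issue_invitation(reviewer_email, manuscript_id):
    """Create a one-time token tied to one reviewer and one manuscript."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = {
        "reviewer": reviewer_email,
        "manuscript": manuscript_id,
        "expires": time.time() + TOKEN_TTL_SECONDS,
        "used": False,
    }
    return token  # e-mailed as part of a link; dead after first use or expiry

def redeem_invitation(token):
    """Accept a token once; reject unknown, expired, or already-used tokens."""
    record = _tokens.get(token)
    if record is None or record["used"] or time.time() > record["expires"]:
        return None
    record["used"] = True
    return {"reviewer": record["reviewer"], "manuscript": record["manuscript"]}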
The most important lesson is that incentives work. The enormous pressure to publish and publish fast — preferably in the very best journals — influences both authors and editors. This pressure exists almost everywhere but is particularly intense in China. It is therefore no surprise that the most inventive ways to game the peer-review system to get manuscripts published have come from China. The companies mentioned above that provide fake peer reviews all come from China and countries in Southeast Asia, and most of the authors involved in these cases come from the same areas. But it would be a mistake to look at this as a Chinese or Asian problem. The problem is the perverse incentive systems in scientific publishing. As long as authors are (mostly) rewarded for publishing many articles and editors are (mostly) rewarded for publishing them rapidly, new ways of gaming the traditional publication models will be invented more quickly than new control measures can be put in place.
Disclosure forms provided by the author are available with the full text of this article at NEJM.org.
This article was published on October 21, 2015, at NEJM.org.
Source: www.nejm.org/doi/full/10.1056/NEJMp1512330?query=TOC