In a Letter to Science, Donald Siegel and Philippe Baveye discuss what they call the “paper glut” we face and offer suggestions to improve the reviewing system.
“Publish or perish” has become the prevailing way of doing research and has somehow displaced the desire to share knowledge. The authors recall that the number of scholarly papers published increased by 200 to 300% between the early 1980s and the late 1990s. Bibliometric indicators have been devised, such as the h- (for Hirsch) and g-indexes, which aim to quantify a researcher’s impact in a given domain. Despite all the criticisms addressed to these indexes, university administrations and funding agencies use the number of papers published by a faculty member per year as a measure of his or her productivity.
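For readers unfamiliar with these metrics, here is a minimal sketch of how the h-index is computed (my own illustration, not taken from the letter): a researcher has index h if h of his or her papers have each been cited at least h times.

def h_index(citations):
    # h-index: the largest h such that the researcher has
    # h papers with at least h citations each.
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3,
# since exactly three of them have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3

The g-index works along the same lines but gives more weight to a few very highly cited papers: it is the largest g such that the g most-cited papers have together received at least g² citations.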
The authors address an essential question: what about the reviewers? Granted, a great number of papers are published, but someone had to review them beforehand. The problem is that peer review is not recognized as scientific work, even though it is, and it is also a time-consuming activity.
In a 2007 survey of three thousand academics conducted by Mark Ware for the Publishing Research Consortium, the authors report that the most active reviewers were indeed overloaded. About 90% of authors are also reviewers. They declared having reviewed an average of 8 papers during the preceding year (compared with a maximum of 9 they said they were prepared to review). A subset of reviewers, called active, completed an average of 14 reviews per year, nearly twice the overall average. According to the survey, these active reviewers make up 44% of all reviewers and are responsible for some 80% of all reviews. There is clearly an overload here. The survey also underlines that long review times were a source of dissatisfaction, and it reports that 20% of invitations to review are declined.
Furthermore, the Sense About Science study, conducted in 2009, reported:
Detecting plagiarism and fraud might be a noble aim but is not practical: A majority think peer review should detect plagiarism (81%) or fraud (79%) but fewer (38% and 33%) think it is capable of this.
What is encouraging in both surveys is that most peer reviewers say they do it because they want to play an active role in the scientific community and to improve the knowledge shared through published results. Nevertheless, Sense About Science reported that 16% of some 4,000 researchers said they review papers because it could increase the chances of their own future papers being accepted.
In both surveys, respondents discuss ways of recognizing peer reviewers. Reviewers were divided over incentives: the Ware survey reports that 35% are in favor of and 40% against payment for reviewers. In Sense About Science, “just over half of reviewers think receiving a payment in kind (e.g. subscription) would make them more likely to review; 41% wanted payment for reviewing, but this drops to just 2.5% if the author had to cover the cost”. Acknowledgement in the journal is the most popular option reported by Sense About Science.
The conclusion Siegel and Baveye reach in their Letter to Science is, to me, an excellent one; we have to think very seriously about ways to contain this paper glut:
The number of articles published per year should never be used, under any circumstance, as a criterion in tenure or promotion decisions, or to rank academic institutions. As the medical community proposed 25 years ago, researchers should never be allowed to include more than three publications per year in activity reports; in research proposals, principal investigators should cite no more than 10 papers. University administrators should consider peer-reviewing as not only legitimate, but a vitally important way for researchers to contribute to scholarship, and should reward it as such. One way to accomplish this would be a new generation of review impact indexes, based on information provided by publishers. Effectiveness in peer-reviewing should be viewed as an essential skill to acquire for Ph.D. students, worldwide. Journals should demand that for every paper submitted, an author provide three reviews of other manuscripts. Perhaps if authors knew that their reviewing workload would increase dramatically with the number of papers they submit, they would craft fewer and better papers, ultimately benefiting all involved.
Siegel, D., & Baveye, P. (2010). Battling the paper glut. Science, 329(5998). PMID: 20847251
This is a very nice write-up. I would also suggest that tenure and promotion committees should only be allowed to consider a professor’s top three or five papers. I don’t know how this rule could be enforced in practice, though.
“Journals should demand that for every paper submitted, an author provide three reviews of other manuscripts. Perhaps if authors knew that their reviewing workload would increase dramatically with the number of papers they submit, they would craft fewer and better papers, ultimately benefiting all involved.”
I quite like that idea, but I worry that it would just lead to lots of really quick, haphazard reviews. People should review a paper because they want to review it. Even a small financial incentive would probably make people much more motivated, if only psychologically (it feels more like “proper work” if you’re getting paid for it; at the moment it often feels like a chore that you do as a favor).