
Peer review is on the verge of collapse. A review report costs $450? Scientists are no longer willing to "work for free out of love."

新智元 | 2025-09-01 15:50
A daily fee of nearly 1,000 euros still fails to attract enough experts to review grant proposals. This is not an exaggeration but the real dilemma facing top funding institutions. When monetary incentives start to lose their effect, is the peer-review system really beyond saving?

An instrument named MUSE on the Very Large Telescope in Chile lets researchers detect some of the most distant galaxies.

It is so popular that for the observing season running from October to April, scientists around the world applied for more than 3,000 hours of telescope time in total.

The problem: that is equivalent to 379 full nights of observing, while the season lasts only seven months.

Even if MUSE were a cosmic time machine, there wouldn't be enough time.

In the past, the European Southern Observatory (ESO), which operates the telescope, convened an expert panel to select the most worthwhile projects from the flood of applications.

But as applications grew explosively, the experts were gradually overwhelmed.

So in 2022, ESO tried a new approach: delegating the review work to the applicants themselves.

That is, any team that wants telescope time must also help review the proposals of its competitors.

This model of applicant mutual review, known as distributed peer review (DPR), is becoming a popular remedy for the labor shortage in peer review.

Academic papers keep multiplying, and journal editors complain that it is getting ever harder to find people willing to review manuscripts.

Funding institutions like ESO are also struggling to find enough review experts.

What are the consequences of this high-pressure system?

Declining research quality: many point out that some journals now publish studies that are shoddy or even riddled with errors, a sign that peer review is failing to ensure quality.

Innovative ideas get buried: others complain that the existing review process is so cumbersome and rigid that some genuinely exciting ideas never get funded.

To be honest, these problems have existed for a long time.

Since its inception, peer review has been criticized for being inefficient, forming cliques, and being full of biases.

Data shows that dissatisfaction is growing more intense. Since the COVID-19 pandemic in particular, the explosion in the number of papers has put even more pressure on the system.

To solve the problem, people are starting to try various new methods, such as paying reviewers or providing clearer review guidelines.

But more radical voices argue that the peer-review system is terminally ill and no longer trustworthy.

They call for a complete overhaul; the most extreme proposal is to abolish it altogether.

Its history is shorter than expected

Although peer review is regarded as a cornerstone of science, today's review model did not become standard at major journals and funding institutions until the 1960s and 1970s.

Before that, the review methods for manuscripts were far less standardized.

Melinda Baldwin, a historian of science at the University of Maryland who has studied how academic peer review developed, points out that at the time some journals did use external reviewers, but most editors relied entirely on their own expertise, or on a small circle of core academic contacts, to decide what to publish.

But as government investment in scientific research grew substantially, the number of papers soared.

This pushed journal editors toward external review, to keep a small core of experts from being buried under manuscripts.

Even today, external review is far from a unified standard; it is more a collection of varied inspection and screening practices that differ across journals, disciplines, and funding institutions.

The system formed in the late 20th century is now facing a similar crisis: too many manuscripts and too few reviewers.

The research system is producing more and more papers, but the pool of reviewers does not seem to be growing fast enough.

In a 2024 survey, about half of the approximately 3,000 respondents said that they had received more review invitations in the past three years.

Motivating experts to "work out of love"?

Both funding institutions and academic journals are experimenting with ways to get researchers to take on more review work and return their reports faster.

Attempt 1: Non-monetary incentives

Some journals have tried publicly displaying their review turnaround times, which has shortened review times somewhat, though the effect shows up mainly among senior researchers.

Others have created awards for prolific reviewers, but there is evidence that such rewards may actually lead winners to review less the following year.

Another idea is to reform the scientific research evaluation system.

In April this year, Springer Nature conducted a survey of more than 6,000 scientists, and the results showed that:

70% hoped that their peer-review work would count toward their performance evaluations.

But only 50% said that their institutions actually do so.

Attempt 2: Paying reviewers (a long-running debate)

The ultimate incentive may be money. Whether reviewers should be paid has been debated for years, and both sides hold firm positions.

Supporters argue that payment fairly reflects the labor and value reviewers contribute. Psychologist Balazs Aczel and his colleagues estimated in 2021 that in 2020 alone, reviewers worldwide put in more than 100 million hours of unpaid work; valued at average scholarly salaries, that contribution is worth billions of dollars.
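To see how that arithmetic reaches "billions", here is a quick back-of-envelope sketch; the hourly rate is an illustrative assumption of ours, not a figure from Aczel's study.

```python
# Back-of-envelope check of the "billions of dollars" claim.
hours_unpaid = 100_000_000  # >100 million reviewer-hours in 2020 (Aczel et al., 2021)
hourly_rate = 25            # USD per hour; an assumed stand-in for an average scholar's wage
value = hours_unpaid * hourly_rate
print(f"~${value / 1e9:.1f} billion")  # prints: ~$2.5 billion
```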

Opponents warn that payment could create conflicts of interest and perverse incentives (such as rushing through reviews for the money), and they note that most scholars say reviewing is already part of their salaried work.

More and more researchers resent providing free labor to commercial publishers, who profit from it.

James Heathers, a scientific-integrity consultant, has made this point forcefully.

In 2020, he wrote in a blog post that he would accept unpaid review invitations only from societies, community outlets, or other non-profit journals.

For the large commercial publishers, he would issue an invoice for $450.

He now jokes that while invitations from commercial publishers have indeed dried up, invitations from non-profit institutions have increased markedly.

The real results of the payment experiments

This year, two journals reported the results of paid-review experiments, with very different outcomes.

Experimental case 1: Critical Care Medicine

This journal offered $250 for each review report.

The experiment, funded by the Canadian government, showed that payment does make reviewers more willing to accept invitations (the acceptance rate rose modestly from 48% to 53%), that the review turnaround shortened slightly from 12 days to 11, and that review quality showed no obvious change.

David Maslove, an intensive-care physician at Queen's University in Canada and a deputy editor of the journal, said the journal does not have the funds to sustain paid review over the long term.

Experimental case 2: Biology Open

By contrast, The Company of Biologists, a non-profit in Cambridge, UK, ran a paid-review experiment at its journal Biology Open, judged it a clear success, and decided to keep the model.

Payment amount: £220 (about $295) per review report.

Response time: the journal requires reviewers to give a preliminary response within 4 days, so that the editor can decide whether to accept or reject a manuscript within a week of submission.

The result: in the experiment, every manuscript received a preliminary decision within 7 working days, and the average turnaround was just 4.6 working days, versus 38 days under the previous standard process.

Alejandra Clark, the journal's editor-in-chief, said the editorial team unanimously believes review quality was maintained.

Research funders, too, are struggling to find review experts.

"It's becoming increasingly difficult to find people who have the time, ability, and willingness to evaluate our project applications," said Hanna Denecke, the head of the German foundation Volkswagen Foundation.

That is the case even though reviewers are offered a daily fee of nearly 1,000 euros (about $1,160).

On June 30 this year, at a meeting in London, several UK funding institutions announced the results of a successful trial of distributed peer review.

The results show that the model can assess funding applications twice as fast as the traditional process.

To address the concern that reviewers might score their competitors harshly, the experiment divided applications into separate groups.

Reviewers assess only applications outside their own group, so they cannot influence the approval odds of their own proposals.
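To make the scheme concrete, here is a minimal sketch of how such a conflict-avoiding assignment might work; the group count, reviews per application, and round-robin split are illustrative assumptions, not the funders' actual algorithm.

```python
import random
from collections import defaultdict

def assign_reviews(applications, n_groups=4, reviews_per_app=2, seed=0):
    """Partition applications into groups, then have each application
    reviewed only by applicants from *other* groups, so nobody scores
    a direct competitor in their own pool. (Illustrative sketch.)"""
    rng = random.Random(seed)
    apps = list(applications)
    rng.shuffle(apps)
    # Round-robin partition into roughly equal groups.
    group_of = {app: i % n_groups for i, app in enumerate(apps)}

    assignments = defaultdict(list)  # applicant -> applications they review
    for app in apps:
        eligible = [a for a in apps if group_of[a] != group_of[app]]
        for reviewer in rng.sample(eligible, reviews_per_app):
            assignments[reviewer].append(app)
    return assignments

# Example: 8 applications labelled A..H, split into 4 groups of 2.
for reviewer, to_review in sorted(assign_reviews("ABCDEFGH").items()):
    print(f"applicant {reviewer} reviews {to_review}")
```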

Another reason the Volkswagen Foundation favors the DPR model is that it disperses decision-making power away from senior, well-established scientists.

These senior experts can act as gatekeepers, shutting others out.

The real way out: expand the reviewer pool

Under the pressure of surging application numbers, some funders have begun taking so-called demand-management measures, such as accepting only one application per university for a given scheme.

But Stephen Pinfield, an information expert at the University of Sheffield, points out that this merely shifts the review burden elsewhere.

Ultimately, the most popular solution to the labor problem facing funders and journals is to expand the pool of review experts.

The growth in papers needing review comes mainly from emerging research nations, yet reviewers are still drawn largely from a small circle of senior Western academics.

Journal editors naturally tend to invite reviewers they already trust to do a good job and submit reports on time.

If they suddenly widen the invitation pool, they may end up with reviews of lower quality or accuracy.

Another popular idea is co-review, in which a senior scholar is paired with a junior researcher.

This both adds new reviewing capacity and trains newcomers, killing two birds with one stone.

One way to make review more efficient while also improving its quality is to give reviewers a series of specific questions to answer.

This format is called Structured Peer Review.

Method 1: Structured review

In August 2022, Elsevier launched a pilot requiring reviewers at 220 of its journals to answer nine specific questions when evaluating papers. It found that:

Reviewer agreement improved: compared with before the pilot, structured review made it more likely that two reviewers would agree in their final recommendations (for example, on whether the data analysis was correct or the experiments appropriate).

But agreement remained low: even so, the rate of agreement rose only from 31% before the pilot to 41%.

Elsevier has since adopted structured review at more than 300 of its journals.

The question format also seems to help expose reviewers' knowledge gaps.

Under this model, reviewers are more willing to admit that certain technical aspects (such as statistics or modeling) are beyond their expertise and to suggest that another specialist take over.
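As a rough illustration of what a structured review form might look like inside an editorial system, here is a small sketch; the prompts below are generic examples of structured-review questions, not Elsevier's actual nine.

```python
from dataclasses import dataclass, field

# Illustrative prompts only; NOT Elsevier's actual nine questions.
QUESTIONS = (
    "Is the research question clearly stated?",
    "Are the methods described in enough detail to reproduce the work?",
    "Is the statistical analysis appropriate for the data presented?",
    "Do the conclusions follow from the results?",
    "Are any technical aspects (e.g. statistics, modeling) outside your "
    "expertise and better checked by a specialist?",
)

@dataclass
class StructuredReview:
    """One reviewer's answers, keyed by question, plus a final recommendation."""
    manuscript_id: str
    recommendation: str  # e.g. "accept", "revise", "reject"
    answers: dict = field(default_factory=dict)

review = StructuredReview("MS-2025-0042", recommendation="revise")
for question in QUESTIONS:
    review.answers[question] = "..."  # the reviewer answers each prompt in turn
```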

Method 2: Increase transparency

Advocates of a more transparent peer-review system also believe that transparency can improve review quality.

They encourage journals to take two measures:

Publish review reports alongside the final published papers.

Encourage reviewers to sign their names on their review reports.