Journal article

Reviewers are blinkered by bibliometrics

26 Apr 2017
Description

There is a disconnect between the research that reviewers purport to admire and the research that they actually support. As participants on multiple review panels and scientific councils, we have heard many lament researchers’ reluctance to take risks. Yet we’ve seen the same panels eschew risk and rely on bibliometric indicators for assessments, despite widespread agreement that these are imperfect measures.

Although journal impact factors (JIFs) were developed to assess journals and say little about any individual paper, reviewers routinely justify their evaluations on the basis of where candidates have published. Panel members judge applicants by Google Scholar results and use citation counts to score proposals for new research. This practice prevails even at agencies such as the European Research Council (ERC), which instructs reviewers not to look up bibliometric measures.

As economists who study science and innovation, we see ingrained processes working against cherished goals. Scientists we interview routinely say that they dare not propose bold projects for funding, in part because of expectations that they will produce a steady stream of papers in journals with high impact factors. The situation may be worse than assumed. Our analysis of 15 years’ worth of citation data suggests that common bibliometric measures relying on short-term citation windows undervalue risky research.

Publication Details
DOI: 10.1038/544411a
Volume: 544
Pagination: 411–412
Language: English
License Type: All Rights Reserved
Peer Reviewed: Yes