The UK has a unique Dual Support System 21 for research funding: competitive research grants are one component; the other is top-sliced funding, awarded to each UK university, department by department, based on how each department is ranked by discipline-based panels of reviewers who assess its research output. In the past, this costly and time-consuming Research Assessment Exercise (RAE) 22 has been based on submitting each researcher’s four best papers every six years to be ‘peer-reviewed’ by the appointed panel, alongside other data such as student counts and grant income (but not citation counts, whether for journals or for individuals, which departments were forbidden to submit and panels were forbidden to consider).
To make the RAE simpler, faster, and cheaper, the UK has decided to phase out the panel-based RAE and replace it with ‘metrics.’23 The only problem for such a conversion is determining which metrics to use. A surprising retrospective finding (based on post-RAE analyses in every discipline tested) was that the departmental RAE rankings turned out to be highly correlated with the citation counts for the total research output of each department (Figure 3; 24,25).
Why would citation counts correlate so highly with the panels’ subjective evaluation of researchers’ four submitted publications? Each panel was trying to assess quality and importance. But that is also what fellow researchers assess in deciding what to risk building their own research upon. When researchers take up a piece of research, applying and building upon it, they also cite it. They may sometimes cite work for other reasons, or fail to cite work they have used; but for the most part a citation reflects research usage, and hence research impact. If we take the panel rankings to have face validity, then their high correlation with citation counts validates the citation metric as a faster, cheaper proxy estimator.
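The kind of validation argued for above can be made concrete with a rank correlation. The sketch below computes a Spearman correlation (Pearson correlation of the ranks) between a hypothetical panel ranking and hypothetical departmental citation totals; all numbers are invented for illustration and do not come from any actual RAE analysis.

```python
# Illustration only: validating citation counts as a proxy for panel
# rankings via Spearman rank correlation. All data below are invented.

def rank(values):
    """Return 1-based ranks, assigning average ranks to ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented example: six departments, panel rank (1 = best) vs. total citations.
panel_rank = [1, 2, 3, 4, 5, 6]
citations = [980, 720, 750, 400, 310, 150]

# Negate panel_rank so that "better" points the same way in both series.
rho = spearman([-r for r in panel_rank], citations)
print(round(rho, 3))  # prints 0.943: a high rank correlation
```

A correlation near 1 on real departmental data is what licenses treating the citation metric as a cheap stand-in for the panel verdict; a low correlation would undercut the proxy argument.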