Using indicators in the evaluation of research

Indicators can support the evaluation of research performance quickly, efficiently and with a high degree of objectivity. Their simplicity makes them particularly attractive. However, they should be used with caution, because scientific practice is too complex to be captured in a single indicator.

This guideline for the use of indicators in the evaluation of research at Ghent University can be applied for many purposes: the recruitment of professors, career models, the selection of BOF applications, or an allocation model.

General tips when using indicators in research evaluation

  • Check in the overview list below whether the intended indicator is sufficiently reliable.
  • Only use the indicators that are in line with the objective of the evaluation (if you want to recruit researchers who do quality work, do not prioritize indicators that measure quantity).
  • Always use a combination of multiple indicators.
  • Exercise sufficient restraint in individual evaluations: at the individual level, the undesirable side effects of indicators have greater implications than at higher aggregation levels.
  • Indicators that do not incorporate peer review are only relevant at a very broad aggregation level (for example, in a broad allocation model).
  • Indicators can be used at the group and individual level as a starting point for self-reflection or as a basis for peer-review assessment by experts.
  • Use the above tips to minimize the risk that the product (an indicator) becomes more important than the goal itself (high-quality research).

Indicators frequently used in research assessment

Ghent University keeps an up-to-date list of frequently used and easily retrievable bibliometric indicators (in Dutch).
This list of the most frequently used indicators indicates what each indicator does and does not measure, in which contexts its use is recommended or inadvisable, and in which information source you can find or calculate it.

Be sure to check each indicator's score in terms of reliability (acceptable, excellent, inappropriate) and usability (usable, recommended, to be treated with caution). Keep in mind that 'reliability' depends on the objective of the evaluation. We distinguish four aspects in the analysis. Although they are often interlinked, the distinction remains important:

  1. Quality refers to the intrinsic quality of the contribution or the researcher, as assessed by experts in the field. When a proper peer-review assessment is part of the construction of the indicator, the indicator can say something about "quality".
  2. Impact refers to the effect of scientific output on the work of other scientists or in society. Normally, (scientific) quality is a precondition for generating (scientific) impact.
  3. Productivity refers to the activity level of a scientist.
  4. Visibility refers to the reputation and visibility of a scientist or of the scientific output.

The entry under Information Source indicates where the indicator can be found.

  • For UGent researchers, more sources are available (e.g. Biblio, Oasis, ...) than for non-UGent researchers.
  • Some indicators (e.g. normalized citation impact, Altmetrics score) depend on paid access to databases. Non-UGent researchers often have no access to this information (relevant for ZAP application files, for example). If UGent's paid agreement ends, access to these indicators may also disappear.
  • All indicators that are compiled on the basis of citations are only relevant for the disciplines covered by SCIE and SSCI (biomedical, exact, applied and part of the social sciences).
  • Some indicators need to be compiled entirely manually.
  • Some indicators are particularly labor-intensive.

Further reading

Want these tips straight from the horse's mouth? (Source: Wolfgang Glänzel and Paul Wouters, “Some criteria for building reliable bibliometric indicators for measuring research performance.” Clarivate Workshop, KU Leuven, 2017.)

Ten things one must not do at the individual level

1. Don’t reduce individual performance to a single number
2. Don’t use IFs as measures of quality
3. Don’t apply hidden “bibliometric filters” for selection
4. Don’t apply arbitrary weights to co-authorship
5. Don’t rank scientists according to one indicator
6. Don’t merge incommensurable measures
7. Don’t use flawed statistics
8. Don’t blindly trust one-hit wonders
9. Don’t compare apples and oranges
10. Don’t allow deadlines and workload to compel you to drop good practices

Ten things one might do at the individual level

1. Also individual-level bibliometrics is statistics
2. Analyse collaboration profiles of researchers
3. Always combine quantitative and qualitative methods
4. Use citation context analysis
5. Analyse subject profiles
6. Make an explicit choice for oeuvre or time-window analysis
7. Combine bibliometrics with career analysis
8. Clean bibliographic data carefully and use external sources
9. Even some “don’ts” are not taboo if properly applied
10. Help users to interpret and apply your results