Researcher Support Library Services: Responsible Metrics

Bibliometrics refers to quantitative methods of measuring influence or impact in research literature – in other words, the analysis of publication and citation data. The extent to which bibliometrics are used, and their importance, will vary across subject areas, but best practice for wider research assessment – the Responsible Metrics approach – involves using a range of qualitative and quantitative indicators drawn from different data points.

Plan S, funder policies and responsible metrics

cOAlition S are committed to fostering responsible research assessment and evaluation, and one of the ten key principles of Plan S centres on valuing the intrinsic merit of the work, rather than considering the publication channel, its impact factor (or other journal metrics), or the publisher. To find out more about Plan S, visit our guidance on Plan S and funder policies.

So, what's going to change? The Wellcome Trust – the first of the cOAlition S funding bodies to update its open research requirements in line with Plan S – now expects Wellcome-funded organisations to publicly commit to responsible research evaluation. This can involve signing or endorsing DORA, the Leiden Manifesto, or an equivalent; read below to find out more about what declarations of this kind entail.

The University of Plymouth is working towards an official policy on responsible metrics.

What is Responsible Metrics?

Research metrics are used to 'measure' the impact of research outputs, researchers, research groups, and even journals. Authors often use journal metrics to decide where they want to publish; citation and other such metrics are often used to assess the 'quality' of a research output, or group of outputs; and many institutions use author metrics to inform the recruitment, probation, or promotion of researchers.
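
To make this concrete, the sketch below shows how one widely used journal metric, the two-year journal impact factor, is calculated. This is an illustration only: the function name and figures here are invented, and real impact factors are computed from curated citation databases.

```python
# Minimal sketch of the two-year journal impact factor (JIF).
# The JIF for year Y is the number of citations received in year Y to
# items the journal published in years Y-1 and Y-2, divided by the
# number of citable items it published in Y-1 and Y-2.

def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Citations in year Y to the previous two years' items, divided
    by the count of citable items published in those two years."""
    return citations / citable_items

# Invented example: 210 citations in 2015 to a journal's 2013-14
# papers, spread over 140 citable items, gives a 2015 JIF of 1.5.
print(journal_impact_factor(210, 140))  # 1.5
```

Note that a journal-level average like this says nothing about the citation performance of any individual article, which is one reason the documents discussed below caution against using it as a proxy for output quality.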

Bibliometrics have their place in research evaluation, but it is essential to exercise caution when using them, and to view them through a critical lens. Quantitative measurements cannot be used in isolation to assess research fairly; for example, citation metrics – though all too commonly used as a direct proxy for research quality – do not tell the whole story of a research output. Not all research impact can be quantified, and many quantifiable indicators can be misleading, biased, or manipulated. Best practice for wider research assessment is therefore to use a range of qualitative and quantitative indicators drawn from different data points.
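
As a further illustration of how a single indicator can mislead, the sketch below computes the h-index, a common author-level metric: an author has index h if h of their papers have each been cited at least h times. The citation counts are invented for demonstration.

```python
# Minimal sketch of the h-index, an author-level citation metric.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    # With counts ranked high-to-low, h is the number of ranks whose
    # citation count is at least the rank itself.
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

# Two very different citation records collapse to the same h-index of 5,
# showing how a single number can flatten a research profile.
print(h_index([10, 9, 8, 7, 6, 5]))   # 5
print(h_index([1000, 5, 5, 5, 5]))    # 5
```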

Responsible metrics is an approach which advocates the accountable use of metrics. The idea of 'responsible metrics' is not to do away with numerical metrics entirely, but rather to ensure that they are used appropriately, and in conjunction with other methods of assessment, to ensure that a more rounded picture of impact is produced.

"Responsible metrics [is] a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research." 

'The Metric Tide', 2015


The Responsible Metrics movement & key documents

There are four key documents that anyone interested in responsible metrics should know about. Each has its own set of principles on responsible metrics, but some common themes run through them all, such as:

  • Research should be assessed on its own merit, and the journal it was published in should not be used as a proxy for its quality
  • Assessment of a researcher's outputs should take the broader context into account, including but not limited to their career stage and discipline
  • A mixture of both qualitative and quantitative types of assessment should be used to give a more rounded picture of impact
  • Relying on any one measure of assessment is misleading
  • The methods used to calculate metrics should be transparent, open, and reproducible, and should also be regularly reviewed for biases and inaccuracies


1. DORA - the San Francisco Declaration on Research Assessment (2012)

DORA is a worldwide initiative that recognises the need to improve the ways in which research outputs and researchers' contributions are evaluated.

Institutions and organisations can sign DORA, thereby declaring their support for the initiative, and committing to developing and promoting best practice in the areas it highlights.

Some of its focuses include:

  • Halting the practice of using the journal impact factor as a proxy for the quality of research outputs or researchers' contributions
  • Assessing research on its own merit
  • For institutions, being explicit about the criteria used for hiring and promotion decisions, and focusing on the content of research rather than publication metrics or venue
  • Considering the value of research outputs beyond publications, such as datasets and software


2. The Leiden Manifesto (2015)

The Leiden Manifesto for research metrics is a list of ten principles to guide research evaluation. As with DORA, organisations can endorse the Leiden Manifesto.

Some of its focuses include:

  • Using quantitative evaluation to support, rather than substitute for, qualitative expert assessment
  • Keeping databases used for evaluation open, transparent and simple
  • Accounting for variation in field and citation practices, and avoiding false precision
  • Recognising the systemic effects of assessment and indicators – too much focus on one metric encourages gaming and goal displacement


3. The Metric Tide Report (2015)

The Metric Tide report outlines five key areas to inform the responsible use of metrics:

  1. Robustness: basing metrics on the best possible data in terms of accuracy and scope;
  2. Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
  3. Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
  4. Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
  5. Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

The report made a number of recommendations to UK HEIs on how to select appropriate indicators, communicate the rationale for their selection, and pay due attention to the equality and diversity implications of their choices.


4. The Hong Kong Principles (2019)

The Hong Kong Principles were formulated and endorsed at the 6th World Conference on Research Integrity in June 2019, with the aim of helping institutions to recognise and reward behaviours that strengthen research integrity. The five principles are:

  1. Assess responsible research practices
  2. Value complete reporting
  3. Reward the practice of open science
  4. Acknowledge a broad range of research activities
  5. Recognise other essential tasks like peer review and mentoring