Bibliometrics refers to quantitative methods of measuring influence or impact in research literature – in other words, the analysis of publication and citation data. The extent to which bibliometrics are used, and their importance, varies across subject areas, but best practice for wider research assessment – the responsible metrics approach – involves using a range of qualitative and quantitative indicators drawn from different data points.
So, what's going to change? The Wellcome Trust – the first of the cOAlition S funding bodies to update its open research requirements in line with Plan S – now expects Wellcome-funded organisations to publicly commit to responsible research evaluation. This can involve signing or endorsing DORA, the Leiden Manifesto, or an equivalent; read below to find out more about what these kinds of manifestos entail.
The University of Plymouth is working towards an official policy on responsible metrics.
Research metrics are used to 'measure' the impact of research outputs, researchers, research groups, and even journals. Authors often use journal metrics to decide where they want to publish; citation and other such metrics are often used to assess the 'quality' of a research output, or group of outputs; and many institutions use author metrics to inform the recruitment, probation, or promotion of researchers.
Bibliometrics have their place in research evaluation, but it is essential to exercise caution when using them, and to view them through a critical lens. Quantitative measurements cannot be used in isolation to assess research fairly; citation metrics, for example – though all too commonly used as an absolute proxy for research quality – do not tell the whole story of a research output. Not all research impact can be quantified, and many quantifiable indicators can be misleading, biased, or manipulated. Best practice for wider research assessment is therefore to use a range of qualitative and quantitative indicators drawn from different data points.
Responsible metrics is an approach which advocates the accountable use of metrics. The idea of 'responsible metrics' is not to do away with numerical metrics entirely, but rather to ensure that they are used appropriately, and in conjunction with other methods of assessment, to ensure that a more rounded picture of impact is produced.
'The Metric Tide', 2015
There are four key documents which anyone interested in responsible metrics should know about. Each has its own set of principles on responsible metrics but there are some common themes, such as:
DORA is a worldwide initiative that recognises the need to improve the ways in which research outputs and researchers' contributions are evaluated.
Institutions and organisations can sign DORA, thereby declaring their support for the initiative, and committing to developing and promoting best practice in the areas it highlights.
Some of its focuses include:
The Leiden Manifesto for research metrics is a list of ten principles to guide research evaluation. As with DORA, organisations can endorse the Leiden Manifesto.
Some of its focuses include:
The Metric Tide report outlines five key areas to inform the responsible use of metrics.
The report made a number of recommendations to UK HEIs around how to select appropriate indicators, communicate the rationale for their selection, and pay due attention to the equality and diversity implications of their choices.
The Hong Kong Principles were formulated and endorsed at the 6th World Conference on Research Integrity in June 2019, with the aim of helping institutions to recognise and reward behaviours that strengthen research integrity. The five principles are:
1. Assess responsible research practices
2. Value complete reporting
3. Reward the practice of open science
4. Acknowledge a broad range of research activities
5. Recognise other essential tasks like peer review and mentoring