Bibliometrics refers to quantitative methods of measuring influence or impact in research literature – in other words, publication and citation data analysis. As individuals, researchers can use these tools to identify the citation performance of published outputs and to find highly cited authors and papers.
The extent of use and importance of bibliometrics will vary across different subject areas. Contact your Information Specialist for further advice.
The H-index is the most widely used author-level metric. It is a quantitative measure derived from an author's publication and citation data.
The H-index is defined as follows: “A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have ≤ h citations each”. For example, if you have 8 papers that have each been cited at least 8 times (and the rest of your papers have been cited fewer than 8 times), your H-index is 8.
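The definition above can be sketched as a short calculation. This is an illustrative example using made-up citation counts, not the exact algorithm any database uses:

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    Sort counts from highest to lowest; h is the largest rank at which
    the paper in that position still has at least that many citations.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical author: 8 papers cited 8+ times, two cited fewer.
print(h_index([10, 9, 8, 8, 8, 8, 8, 8, 3, 2]))  # prints 8
```

Note that different databases index different sources, so the same author will usually have a different H-index in Web of Science, Scopus, and Google Scholar.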
The H-index was created by Jorge Hirsch, who spoke of its limitations:
But if you do want to know your H-index...
Web of Science and Scopus automatically generate profiles for authors where work is indexed in their databases. See our separate guidance on managing author profiles if you need to make amendments or claim these system generated profiles:
Web of Science, Scopus, and Google Scholar will all show, in their search results, the number of times a paper has been cited by other papers. Each tool will generate a different figure for the same paper because each indexes a different set of sources.
Databases will attempt to normalise and contextualise these numbers:
Both tools enable ranking of results by citation count, from high to low, and further in-depth analysis of citations:
Field-weighting or field-normalisation of citation metrics aims to account for differences in citation potential between disciplinary fields and between outputs of different ages, since outputs in some fields are more likely to attract high citation counts, as are older outputs.
Field-normalisation therefore makes citation metrics more comparable across years or disciplines.
An example of a field-weighted citation metric is SciVal's FWCI (Field Weighted Citation Impact), which is the ratio of the total citations an output has actually received to the total citations it would be expected to receive based on the average for its subject field.
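As a worked illustration of that ratio, using invented numbers (the expected-citations figure would in practice come from the database's subject-field averages):

```python
# Hypothetical output: 12 actual citations, against an expected
# 8.0 citations for outputs of the same field, age, and type.
actual_citations = 12
expected_citations = 8.0

fwci = actual_citations / expected_citations
print(fwci)  # prints 1.5
```

A value of 1.0 means the output is cited exactly as often as expected for comparable outputs; the hypothetical 1.5 here would mean 50% more citations than expected.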
Journal metrics can help you establish whether you are publishing in the most appropriate journal for your research, or whether you could have greater impact by publishing elsewhere.
See our separate guidance on 'identifying where to publish' which covers journal metrics such as Impact Factor (Web of Science's proprietary ranking of journals) and Scopus's equivalent, CiteScore, plus other measures for evaluating journal quality.
Caveats
Journal ranking metrics are for journals, and are not a score by which to measure an individual or a research output. A journal's Impact Factor should not be treated as a proxy for the quality of an article it publishes, nor used for research evaluation and benchmarking, since citation practices vary across disciplines.
Altmetrics can highlight the attention papers are receiving on social media, in newspaper articles and policy documents, and on television and radio.
These can be especially useful for recently published works that have yet to generate traditional citations.
The University of Plymouth has subscriber access to Altmetric Explorer. Full guidance on using Altmetric Explorer is available from the Library.
Caveats of Altmetrics
Altmetrics have similar limitations to citation metrics in that they can only measure how much of a certain kind of attention an output is receiving. Having a higher or lower attention score does not necessarily mean a research output is of an accordingly higher or lower quality.
As with citation metrics, it is important to consider why an output might be receiving this kind of attention. Negative media attention will lead to a 'higher' altmetric score, for example!
It is also important to compare like with like – altmetrics for outputs of different ages, in different research areas, or of different types are likely not to be comparable.
When using citation-based metrics or other measures of impact, it is important to be aware of the issues surrounding their improper use. There is an increasing movement towards the responsible evaluation of research to ensure that metrics are recognised as indicators and not the absolute worth of a person's research endeavours.
To find out more about how to use measures of impact in a responsible way, visit our guidance on responsible metrics.