
Researcher Support Library Services: Bibliometrics for Tracking & Measuring Your Impact

Bibliometrics refers to quantitative methods of measuring influence or impact in the research literature – in other words, the analysis of publication and citation data. Individual researchers can use these tools to assess the citation performance of their published outputs and to find highly cited authors and papers.

The extent of use and importance of bibliometrics varies across subject areas. Contact your Information Specialist for further advice.

Metrics and where to find them

Author metrics

The H-index is the main author metric: a quantitative measure based on an author's publication and citation data.

The H-index is defined as follows: "A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np – h) papers have ≤ h citations each." For example, if you have 8 papers that have each been cited at least 8 times (and the rest of your papers have been cited fewer than 8 times), your H-index is 8.
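This definition translates directly into a short computation: sort the citation counts in descending order and find the largest rank h at which the paper in position h still has at least h citations. A minimal sketch in Python (the citation counts are invented for illustration):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    # Sort descending; once a paper's citations fall below its rank,
    # no later paper can qualify, so the last passing rank is h.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

# Matches the example in the text: 8 papers cited at least 8 times each,
# the remaining papers cited fewer than 8 times.
print(h_index([33, 30, 20, 15, 12, 10, 9, 8, 6, 4, 2]))  # -> 8
```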

The H-index was created by Jorge Hirsch, who spoke of its limitations:

"Obviously a single number can never give more than a rough approximation to an individual's multifaceted profile, and many other factors should be considered in combination in evaluating an individual [...] especially in life-changing decisions such as the granting or denying of tenure." (Hirsch, http://arxiv.org/PS_cache/physics/pdf/0508/0508025v5.pdf)

 

H-Index caveats

  • Databases such as Scopus, Web of Science and Google Scholar can each calculate an H-index. Each database draws on its own list of indexed titles when counting citations, so the H-index generated for the same researcher will differ from tool to tool.
  • Citation practices differ between disciplines, so an H-index needs normalisation and context before any judgement of 'value'. The lifespan of articles, the coverage of the research topic within the source, and the rate of publication in a subject area all mean that an H-index is only really meaningful when compared with those of others in the same subject.
  • Early Career Researchers and researchers who have taken career breaks can be disadvantaged when compared against other researchers using the H-index.
  • Self-citations will increase the H-index (some information sources, such as Web of Science, will indicate self-citations); the sketch after this list illustrates the effect.
  • Negative citations: citation counts can be inflated by critical review and evaluation of the research – scrutiny that could ultimately lead to a paper being withdrawn or retracted.
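To illustrate the self-citation caveat, here is a small hypothetical sketch (the per-paper counts are invented; sources such as Web of Science can identify self-citations for you): removing self-citations from each paper's count can lower the resulting H-index.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

# Hypothetical (total citations, self-citations) for each paper.
papers = [(12, 3), (9, 2), (8, 3), (7, 2), (6, 1), (6, 2), (4, 1)]

with_self = [total for total, _ in papers]
without_self = [total - self_cites for total, self_cites in papers]

print(h_index(with_self))     # -> 6
print(h_index(without_self))  # -> 5, once self-citations are removed
```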

 

But if you do want to know your H-index... 


Can you improve the accuracy of your H-index?

Web of Science and Scopus automatically generate profiles for authors whose work is indexed in their databases. See our separate guidance on managing author profiles if you need to amend or claim these system-generated profiles.

Citation metrics

Web of Science, Scopus, and Google Scholar all show, in their search results, the number of times a paper has been cited by other papers. Each tool will give a different figure for the same paper because each indexes a different set of sources.

Databases will attempt to normalise and contextualise these numbers:

  • Web of Science will identify both 'hot papers in field' (new papers) and 'highly cited in field' papers in your search hits
  • Web of Science also shows a 'usage count' of the number of times the record has been viewed in Web of Science
  • Scopus gives a Field-Weighted Citation Impact alongside each citation count

Both tools let you rank search hits by citation count from high to low, and support further, in-depth analysis of citations:

 

How to analyse citations in Web of Science:

 

How to analyse citations in Scopus:

 

Field-weighting and normalisation

Field-weighting, or field-normalisation, of citation metrics aims to account for differences in citation potential between disciplinary fields and between outputs of different ages, since outputs in certain fields are more likely to attract citations, as are older outputs.

Field-normalisation therefore makes citation metrics more comparable across years or disciplines.

An example of a field-weighted citation metric is SciVal's FWCI (Field-Weighted Citation Impact), which is the ratio of the total citations actually received by an output to the total citations that would be expected based on the average for its subject field.
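In symbols, a simplified statement of the ratio (SciVal derives the expected value from outputs of the same subject field, publication year and output type):

\[
\mathrm{FWCI} = \frac{c_{\text{actual}}}{\bar{c}_{\text{expected}}}
\]

An FWCI of 1.00 means an output is cited exactly at the average for similar outputs; for example, an output with 15 citations in a field where similar outputs average 10 citations would have an FWCI of 15 / 10 = 1.50.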

 

Journal rankings

Journal metrics can help you establish whether you are publishing in the most appropriate journal for your research, or whether you could have greater impact by publishing elsewhere.

See our separate guidance on 'identifying where to publish', which covers journal metrics such as the Impact Factor (Web of Science's proprietary ranking of journals) and Scopus's equivalent, CiteScore, plus other measures for evaluating journal quality.

 

Caveats

Journal ranking metrics are for journals; they are not a score by which to measure an individual or a single research output. A journal's Impact Factor should not be treated as a proxy for the quality of an article it publishes, nor used for research evaluation and benchmarking, since citation practices vary across disciplines.


Alternative metrics

Altmetrics can highlight the attention papers are receiving on social media, in newspaper articles and policy documents, and on television and radio.

These can be especially useful for recently published works that have yet to generate traditional citations.

The University of Plymouth has subscriber access to Altmetric Explorer. Full guidance on using Altmetric Explorer is available from the Library.

Caveats of Altmetrics

Altmetrics have similar limitations to citation metrics in that they can only measure how much of a certain kind of attention an output is receiving. Having a higher or lower attention score does not necessarily mean a research output is of an accordingly higher or lower quality.

As with citation metrics, it is important to consider why an output might be receiving this kind of attention. Negative media attention will lead to a 'higher' altmetric score, for example!

It is also important to compare like with like – altmetrics for outputs of different ages, in different research areas, or of different types are likely not to be comparable.

Responsible Metrics

When using citation-based metrics or other measures of impact, it is important to be aware of the issues surrounding their improper use. There is a growing movement towards the responsible evaluation of research, to ensure that metrics are recognised as indicators rather than as the absolute worth of a person's research endeavours.

To find out more about how to use measures of impact in a responsible way, visit our guidance on responsible metrics.

See also the Metrics Toolkit, an online resource explaining what individual research metrics can and cannot tell you.