DEFINING IMPACT
SCHOLARLY IMPACT
Section Contents:
- What is Impact?
- Developments in research metrics
- Grey Metrics
- Critically Examining Metrics
- Activity 1
Developments in research metrics
In this OS, the terms indicator, measure, and metric are used interchangeably, encompassing both the criteria and the units that ‘count’ as impact (e.g. an assessment tool that is peer-reviewed and shared publicly, or a peer-reviewed publication, is an indicator/metric). The content in this section draws on Friesen et al.’s (2019) article “Approaching impact meaningfully in medical education research.”
Most literature examining the definition(s) of impact has been concerned with research impact, which is only one type of ES activity. But the questions applied to research impact can also be applied more broadly to ES. In this section, we begin by focusing on developments in research metrics, then turn to ES more broadly.
We are all familiar with traditional research metrics. These are often the criteria used to evaluate individuals for promotion and tenure decisions:
- Dollars in grant funding
- Number of peer-reviewed publications
- Number of citations
- Journal impact factor (JIF)
- Number of peer-reviewed presentations
In 2010, Priem et al. launched the Altmetrics Manifesto in response to three problems: the academic cycle is slow (a journal article might garner its first citation only 2-4 years after publication); indicators like the Journal Impact Factor (JIF) focus on the journal and tell us nothing about individual, article-level quality (JIF was originally established to help librarians make subscription decisions); and new online tools had emerged that could track research impact in “real time.”
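To make the journal-versus-article distinction concrete, recall how the standard two-year JIF is calculated; it averages over everything a journal publishes (the formula reflects Clarivate’s published two-year definition; the notation here is ours):

$$
\mathrm{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y-1 \text{ and } Y-2}{\text{number of citable items the journal published in years } Y-1 \text{ and } Y-2}
$$

Because the numerator and denominator pool the whole journal, a single highly cited article can inflate the JIF for every other article it appears alongside, which is precisely the article-level information the metric cannot provide.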
Altmetrics was initially defined by Priem et al. as article-level metrics, meant to examine the quality of each article (not only a metric like JIF reflecting where the article is published). A decade after the term was coined, it is now often conflated with alternative metrics. Regardless of the exact definition, altmetrics denotes the use of online indicators to capture the immediate impacts of one’s research. Examples include online mentions (blog posts, social media, and news), bookmarks in reference management software that suggest future citation of the article (e.g. Mendeley, Zotero, CiteULike), and the sharing of ‘raw’ science such as datasets, code, and slides (GitHub, FigShare, SlideShare). There are now many aggregators that collect data from multiple online scholarly communication sources to generate a score or count (e.g. Altmetric*, PlumX, Scopus, Clarivate Analytics [Web of Science], ImpactStory, ResearchGate).
*Altmetrics (plural, not capitalized) refers to Priem et al.’s concept of article-level metrics, while Altmetric (singular) refers to a company later developed that aggregates data on “online activity around scholarly research outputs.”
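As an illustration of what such aggregators expose programmatically, below is a minimal sketch of retrieving article-level attention counts for a single DOI. It assumes Altmetric’s free, public Details Page API at api.altmetric.com; the endpoint path and the JSON field names are assumptions drawn from that API’s public documentation and may differ in practice, so treat this as a sketch rather than a production client.

```python
# Minimal sketch: query an altmetrics aggregator for one DOI.
# Assumes Altmetric's public Details Page API; field names are assumptions.
import requests

def fetch_altmetric_summary(doi: str) -> dict:
    """Return a few article-level attention indicators for one DOI."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        # Altmetric has tracked no online activity for this DOI
        return {}
    resp.raise_for_status()
    data = resp.json()
    # Assumed field names; the real payload may differ.
    return {
        "score": data.get("score"),                 # weighted attention score
        "posts": data.get("cited_by_posts_count"),  # blogs, news, social media
        "readers": data.get("readers_count"),       # e.g. Mendeley bookmarks
    }

if __name__ == "__main__":
    # Hypothetical usage with an arbitrary DOI
    print(fetch_altmetric_summary("10.1038/nature12373"))
```

Note that whatever the exact fields, everything returned is a count of attention, which is the limitation discussed next.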
Priem et al.’s contributions to our thinking about research impact cannot be minimized. Most journals now work with companies that provide an aggregated altmetrics score for each article, and Priem et al. raised the very important question of the appropriate unit of analysis (i.e. article versus journal) for determining research impact. Yet altmetrics suffer many of the same pitfalls as traditional metrics. Altmetrics mostly measure academic impact (e.g. Twitter mentions by other academics, not the general public), and despite the aim of focusing on article quality, altmetrics remain a quantitative measure of attention that does not fully reflect quality. For example, the retracted Wakefield article has both a high citation count (traditional metric: 1,353 citations in Web of Science) and high altmetrics (on PlumX: 1,799 captures, 291 mentions, 11,008 social media shares).
One of the issues with both traditional metrics and altmetrics is the conflation of research productivity with research impact. An individual who is prolific might be ‘successful’ by the indicators above, but there is no assurance that a high number of peer-reviewed publications, citations, or Twitter mentions means anything beyond academic impact and recognition. Has their work been taken up and used (e.g. to inform curricula or policies), changed perspectives or behaviours, or even been accurately understood? Citations that do not represent the actual cited text continue to proliferate (CITE). Citation rates might indicate uptake of ideas that inform other researchers, but are there beneficiaries of one’s research beyond the academic context?
Since traditional metrics and altmetrics focus on impact within academia, further developments in research metrics look toward evaluating impact beyond academia. When one now reads about research impact, the word impact is often shorthand for “provable effects of research in the real world. Impact is the changes we can see (demonstrate, measure, capture), beyond academia (in society, economy, environment) which happen because of our research (caused by, contributed to, attributable to). Impact may look and operate differently across disciplines, and can happen quickly or take a long time, but always reflects the mobilisation of research into the non-academic world” (Bayley 2017). While this might seem like just another form of the impact agenda, one might frame it as a benevolent form that “requires academics to address not only the intrinsic value of their research in advancing knowledge – its academic merit – but also the value their research has to society – its broader impacts” (Holbrook 2017).