Vol. 20, Issue 1, Pages 1-2 (January-February 2014)
Editorial
Traditional and alternative metrics: The full story of impact
H. Donato
Documentation Department, Centro Hospitalar e Universitário de Coimbra, Coimbra, Portugal

For many years, the Journal Impact Factor (JIF) was the best tool available for gauging the prestige of a journal. The JIF was originally developed in the 1960s by the Institute for Scientific Information (ISI) as a proxy for journal quality.1
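
For readers who want the arithmetic behind the metric, the standard two-year JIF is computed as follows (this is the standard published definition, added here for illustration):

$$\mathrm{JIF}_{Y} = \frac{C_{Y}}{N_{Y-1} + N_{Y-2}}$$

where $C_{Y}$ is the number of citations received in year $Y$ to items the journal published in years $Y-1$ and $Y-2$, and $N_{Y-1}$ and $N_{Y-2}$ are the numbers of citable items published in those two years. For example, a journal that published 100 citable items in 2011-2012 and whose articles from those years were cited 250 times in 2013 has a 2013 JIF of 250/100 = 2.5.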

A group of editors from a number of scholarly journals met in December 2012 to discuss the Impact Factor, and from that meeting a declaration was born: the San Francisco Declaration on Research Assessment (DORA). DORA is a worldwide initiative to improve the way in which the output of scientific research is evaluated by funding agencies, academic institutions and other parties.2 The declaration includes the following recommendations3:

1. Avoid using journal metrics to judge individual papers or individuals for hiring, promotion and funding decisions.

2. Judge the content of individual papers and take into account other research outputs, such as data sets, software and patents, as well as a researcher's influence on policy and practice.

3. Balance the Impact Factor with other metrics and reduce emphasis on the JIF in journal promotion. Article-level metrics are more specific than journal-based metrics.

4. Declare detailed authorship contributions.

5. Avoid limits on reference lists and remove reuse and access limitations. Wherever appropriate, cite the primary literature.

6. Use open data to calculate metrics.

7. Account for article types in reporting metrics; define what constitutes inappropriate manipulation of metrics.

8. Promote and teach best practice focusing on the value and influence of specific research outputs.

Over the past 20 years, a great number of measures have been produced, ranging from publication counts and citations to sophisticated impact indicators. Much has been said and written about the limitations of the JIF, and a number of other metrics to evaluate journals have emerged, such as the 5-Year Impact Factor, the Immediacy Index, the Eigenfactor, the Article Influence score, and the SCImago Journal Rank.1,4 But all of these metrics still depend on citations, using them as a proxy for quality.
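
Two of these variants make clear how citation-bound they remain; their standard definitions, in the notation of the JIF formula above, are:

$$\mathrm{JIF5}_{Y} = \frac{C^{(5)}_{Y}}{\sum_{i=1}^{5} N_{Y-i}}, \qquad \mathrm{II}_{Y} = \frac{C^{(0)}_{Y}}{N_{Y}}$$

where $C^{(5)}_{Y}$ counts citations in year $Y$ to items published in the five preceding years (the 5-Year Impact Factor) and $C^{(0)}_{Y}$ counts citations in year $Y$ to items published in year $Y$ itself (the Immediacy Index). Both simply widen or narrow the JIF's citation window.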

Citations to articles in a journal appeared to provide a quantitative means to assess the quality of a journal. This has become highly debated over the years, because articles can receive citations for a number of wrong reasons, including vanity (self-citations), politics (honorary citations) and refutation (citation counts draw no distinction between positive and negative citations). Another huge disadvantage of citation counts is their speed of accumulation: it can take as long as two years from submission for an article to receive its first citations. Some argue that this is not fast enough given the speed of communication allowed by the Internet.

The JIF was born when there was only one delivery route for scientific articles: print publication. The migration from print to online publishing has enabled a better understanding and analysis of citation-based impact measurements and has created a new supply of user activity measurements: downloads and visits.4 Usage statistics, unlike JIFs and citations, can measure an article's actual use.

In the last few years, the rising importance of social networking has resulted in new ways of measuring scholarly activity. Physicians have begun migrating to an online environment, using platforms such as Mendeley, Zotero, CiteULike, blogs, Twitter, Facebook, and more. Today, if something is not available on these platforms, it does not seem to exist.1 In these new spaces, interactions such as reading, saving, discussing and recommending become visible. Observing these traces can inform a new metric of influence, attention and impact.

The search for alternative metrics is a symptom that research evaluation is not functioning well.5 A new movement called “Altmetrics” has emerged, well described in a manifesto6 published in 2010.

The aim of Altmetrics is to provide better tools to monitor, track and measure aspects of the scientific and scholarly literature beyond what is possible under the currently dominant paradigm of citation analysis. Altmetrics monitor the online activity around a scientific publication in real time, tracking metrics such as downloads, number of readers, and discussions and comments on social networks.
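
To make this concrete, the following is a minimal sketch of how such tracking can be queried programmatically. It assumes the public Altmetric.com v1 REST endpoint and its JSON field names (for example "score", "cited_by_tweeters_count", "readers_count"); consult the current API documentation before relying on them, and note that the DOI below is a placeholder.

# Minimal sketch: fetch altmetric indicators for one article via the
# public Altmetric.com v1 API (endpoint and field names assumed; check
# the current documentation before relying on them).
import json
import urllib.error
import urllib.request

def fetch_altmetrics(doi: str) -> dict:
    """Return the Altmetric record for a DOI, or {} if none is tracked."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # the article has no tracked activity yet
            return {}
        raise

record = fetch_altmetrics("10.1000/example-doi")  # hypothetical DOI
if record:
    print("Attention score:", record.get("score"))
    print("Tweets:", record.get("cited_by_tweeters_count"))
    print("Readers:", record.get("readers_count"))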

In conclusion, we can say that traditional measures of scientific relevance (citation metrics, publication in high-impact-factor journals) still carry great importance. But new alternative metrics, such as article downloads, views, tweets and bookmarks, are now being added. Altmetrics measure the number of times a scientific article is cited, tweeted about, liked, shared, bookmarked, viewed, downloaded, mentioned, reviewed or discussed, in almost real time. Altmetrics provide a new way of detecting the use of scientific publications beyond formal citation.

It is a mistake to consider a paper important just because it is published in a high-impact-factor journal. It is much better to focus on the citations, views, downloads, comments and tweets an individual paper receives; what matters is showing the various ways in which a paper attracts attention.7 Popularity can indicate future citations: many studies point to a correlation between Altmetrics measures and citations.8,9
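
As an illustration of how such correlation studies work, the sketch below computes a rank correlation between early tweet counts and later citation counts; the numbers are invented for illustration and are not data from the cited studies.

# Hypothetical sketch: rank correlation between early tweets and later
# citations, in the spirit of the studies cited above. The numbers are
# invented for illustration only.
from scipy.stats import spearmanr  # requires SciPy

tweets = [3, 0, 12, 45, 1, 7, 22, 0, 5, 30]      # tweets in the first week
citations = [4, 1, 10, 38, 2, 6, 15, 1, 3, 25]   # citations after two years

rho, p_value = spearmanr(tweets, citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

A positive and significant rho in real data sets is what studies such as Eysenbach's8 report; the point here is only to show the shape of the analysis.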

Participating in social media networks allows Revista Portuguesa de Pneumologia/Portuguese Journal of Pulmonology to disseminate research findings quickly and effectively, amplify its articles and raise the journal's visibility. Sharing articles with a wider audience gives them more visibility, and with greater visibility an article is more likely to be cited.4

Follow Revista Portuguesa de Pneumologia/Portuguese Journal of Pulmonology on Twitter (@RevPortPneumol), LinkedIn and Facebook.

References
[1] M.K. Heinemann. On metrics. Thorac Cardiovasc Surg, 61 (2013), pp. 377-378.
[2] B. Pulverer. Impact fact-or fiction? EMBO J, 32 (2013), pp. 1651-1652.
[3] San Francisco Declaration on Research Assessment (2013).
[4] A. Ward, R. Guest. Making the most of social media (2013).
[5] Alternative metrics. Nat Mater, 11 (2012), pp. 907.
[6] J. Priem, D. Taraborelli, P. Groth, C. Neylon. Altmetrics manifesto (2013).
[7] The maze of impact metrics. Nature, 502 (2013), pp. 271.
[8] G. Eysenbach. Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. J Med Internet Res, 13 (2011), pp. e123.
[9] P. Wouters, R. Costas. Users, narcissism and control – tracking the impact of scholarly publications in the 21st century (2013).
Copyright © 2013. Sociedade Portuguesa de Pneumologia