Vol. 22. Issue 2.
Pages 67-69 (March - April 2016)
Editorial
Open Access
Understanding Journal Evaluation and Strategies to Increase Impact
Compreender a Avaliação das Revistas e Estratégias para Aumentar o Impacto
Helena Donato
Documentation Service, Centro Hospitalar e Universitário de Coimbra, Coimbra, Portugal
Under a Creative Commons license

Journal metrics mania started over 50 years ago with the impact factor.1

In 1955, Eugene Garfield, an American bibliometrician, suggested that citation counts could be used to measure the impact of a journal, but the term “Impact Factor” (IF) was only introduced in 1963.2 This led to the publication of the Journal Citation Reports (JCR). The JCR produces yearly impact factor lists that are grouped by speciality and cover the world's most frequently cited peer-reviewed journals.2

As defined by its owner, Thomson Reuters, the JCR offers a systematic, objective means to critically evaluate the world's leading journals, with quantifiable, statistical information based on citation data.

How is the impact factor defined and calculated?

The formula used is: IF=A/B, where

A=the number of citations received in the JCR year by articles the journal published during the two preceding years;

B=the total number of citable items the journal published during those same two years.
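
For illustration only, with hypothetical figures: if the articles a journal published in 2013 and 2014 received 500 citations in 2015 (A=500), and the journal published 250 citable items over 2013-2014 (B=250), its 2015 IF would be 500/250=2.0.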

The primary utility of the IF was to improve the management and selection of library journal collections. In market research, the IF provides quantitative evidence that editors and publishers can use to position their journals relative to competitors in the same subject category.

When Garfield first launched his idea of a citation index for scientific publications, he probably could not have dreamt of the immense impact the instrument would have. In spite of some limitations, the JCR has become a legitimate authority for ranking scientific journals.

One of the main limitations of the IF is its short time frame of two years. Citations of articles older than two years do not contribute to the IF. This is a disadvantage for subject categories in which it usually takes more than a year for articles to start attracting citations.

The IF is directly related to the area of research: the larger the scope of the journal, the higher the journal IF. For example, in the latest edition of the JCR the highest journal IF in Oncology is 144.80 and in General Internal Medicine it is 55.873, whereas in the Respiratory System category it is 12.996 (American Journal of Respiratory and Critical Care Medicine), in Allergy 11.476 and in Otorhinolaryngology 3.761. This means that the best journals in specialized areas such as these never achieve the journal IF of general medicine journals.

Beyond that, journals dedicated to basic science have higher citation rates than journals devoted to clinical subjects.

Since its introduction, the IF has gained increasing popularity as a measure of the quality of scientific journals. The IF is an accepted proxy measure of journal quality all over the world, and it remains the most popular metric by which to judge the performance of a scientific journal.1

The IF is not a perfect metric and, as previously mentioned, has its own limitations. Nonetheless, the journal IF has become increasingly popular as a substitute for scientific quality, and it is often recognized as a symbol of scientific prestige and relevance.1 But, as Garfield said, the impact factor is not an absolute measure of the quality of a journal but of its influence, and it was never intended to serve as an indicator of the influence of individual papers.3

Currently, it is used not only to evaluate and compare journals but also to assess the scientific performance of authors, institutions and countries. It is a mistake to use the IF as a proxy measure of article quality or author prestige.

The IF is still used by funding agencies, universities and policymakers to:

  • Select candidates for a particular position

  • Select recipients of grants

  • Make promotion decisions

  • Grant awards

  • Establish scientific collaborations

  • Select editors for journals

  • Rank scientific output

  • And many other purposes.

Can the IF be ignored when assessing the work of others? The Declaration on Research Assessment (DORA) is one attempt to do so.4 DORA recommends that the IF should not be used to evaluate an author and reminds us that the IF is a bibliometric measure intended to assess the influence of a journal. DORA was published to spark discussion of alternatives to the use of the IF in the evaluation of individual authors for hiring, promotion and funding decisions.4 The basic message of DORA is that the scientific content of an article is much more important than the IF of the journal in which it appears.4 We must remember that practitioners in several medical areas read but do not write, so they may benefit from articles without citing them. A journal with a low impact factor can contain highly valuable content that will have an impact on readers' practice.1,4

There are ongoing efforts to develop better scores, in an attempt to find a method that better reflects a journal's contribution to science, and several alternative bibliometric measures have been proposed.5

In terms of extent of use, however, the balance still seems to weigh in the IF's favour.

The SCImago Journal Rank (SJR) was created by a Spanish research group in 2007 and is based on data from journals indexed in Scopus (Elsevier's database) over a three-year period. SJR values are freely available at the SCImago Journal & Country Rank website.

Another indicator is the Journal h-Index. Like the SJR, it is an open access metric, calculated using Scopus data, and freely available at the SCImago Journal & Country Rank website.

The Journal h-Index is calculated in the same way as originally proposed by Jorge Hirsch for individual authors: it is the largest number of publications (h), each of which has been cited at least h times.
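
As an illustrative sketch only (the function name and the citation counts below are hypothetical examples, not taken from any real journal or from the original editorial), the calculation can be expressed in a few lines of Python:

```python
def journal_h_index(citations_per_article):
    """Return the largest h such that h articles have at least h citations each."""
    ranked = sorted(citations_per_article, reverse=True)  # most-cited articles first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th most cited article still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for a journal's articles:
print(journal_h_index([25, 20, 15, 12, 9, 7, 7, 5, 2, 1]))  # prints 7
```

Here the h-Index is 7 because seven of the articles have been cited at least 7 times each, while there are not eight articles with at least 8 citations each.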

A strong association exists between the h-Index and journal IF.

Other metrics are the Eigenfactor Score and the Article Influence Score, which are now incorporated in the JCR. Both metrics are based on a 5-year time frame, and self-citations are excluded.6

The Eigenfactor Score, as a proxy of scientific prestige, takes into account both the quantity and the “quality” of citations, giving greater weight to citations from highly cited journals.6

The Article Influence Score is calculated by dividing the journal's Eigenfactor Score by the number of articles it publishes, normalized so that the average article in the JCR has a score of 1.00.

Other additions to the JCR metrics are the Immediacy Index and the Cited Half-Life. The Immediacy Index reflects how often, on average, a journal's articles are cited in their year of publication: it is the number of citations that articles published in a given year receive during that same year, divided by the number of articles published in that year. Journals publishing with open access and covering rapidly growing fields tend to have greater values of this metric.
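
For illustration only, with hypothetical figures: if a journal published 100 articles in 2015 and those articles were cited 40 times during 2015, its 2015 Immediacy Index would be 40/100=0.4.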

The Cited Half-Life reflects the period over which articles in a journal continue to attract citations: it is the number of years, counting back from the JCR year, needed to account for half of the citations the journal received in that year.

Recently, with the explosion of new journals of questionable scientific quality, which have been called “predatory journals”, the scientific publishing arena has become infected with questionable websites that claim to measure and index scientific journals and provide fake and misleading impact factors. Some examples are the Universal Impact Factor (UIF), the Global Impact Factor (GIF) and CiteFactor.7

Authors must be able to recognize such illegitimate impact factor websites and avoid publishing in the journals they index.

Every author's goal is to publish in impact journals, preferably high-impact journals. However, many medical journals reject more than 80% of the manuscripts they receive, making rejection the biggest barrier to publication in high-impact-factor journals.

It seems obvious that the more citations a journal accumulates, the higher its IF. There are various ethical strategies an editor can use to increase citations and improve the impact of their journal, which can help to increase the IF and other citation metrics.

  • The best way to improve the IF is to publish high-quality articles, but attracting high-quality manuscripts is not an easy task, particularly for journals with an already low impact factor.5

Other ways are:

  • Journals can publish invited content from leading figures in the field, guidelines, methodologies, special issues on topical subjects, and debates on currently relevant themes.

  • Identify highly cited papers in the journal and in other journals: these are indicative of hot topics on which articles should be commissioned, reflecting the hottest and latest results in the field.

  • Identify zero-cited papers: analyze which topics do not attract citations and use this information to inform editorial strategy.

  • Publish the most relevant articles in the first issues of the year (giving them a larger citation window).

  • Give preference to English as the language of publication.

  • Publish primarily original research and review articles: these articles are more likely to be cited than others.

  • Some journals have ceased to publish case reports, which tend to be infrequently cited, but others have opted to keep publishing them, given the potential importance a single case report can have in the field.

  • Publish articles under an open access policy and encourage authors to self-archive their articles in institutional or subject repositories. Freely available articles have greater impact and are cited sooner than subscription-access articles.8

  • Raise awareness of the journal through media promotion: promote the best articles using social media such as Twitter, Facebook, blogs, academic networking sites, etc.

  • Increase the journal's visibility: make sure it is covered by as many abstracting and indexing services as possible, and write titles and abstracts so that articles are highly visible in bibliographic databases and search engines.

  • Improve the quality of peer review: find referees who have already published in journals with an international scope, prepare guidelines for reviewers, and find ways to ensure they are used.

  • Publish good editorials, news items and letters: these are excluded from the denominator of the journal IF formula but can attract numerous citations.6

  • Speed up publication: introduce fast-track publication and improve turnaround times for potentially high-quality articles.

  • Ahead of print/online first: publishing accepted articles online prior to print means that they can be read and cited earlier.

Journal self-citations can occur for several admissible reasons, but they should not be encouraged by the journal. “Coercive citation” is an unethical strategy used by some journals and must be condemned. Since 2007, Thomson Reuters has temporarily suppressed journals from the JCR when excessive self-citation has been detected. Vigilance against untoward levels of self-citation is now part of the routine of the JCR.

In conclusion, the journal IF is usually recognized as a symbol of scientific prestige and relevance, but its true value lies in evaluating journals, and only similar journals (those dedicated to the same speciality) should be compared, because the impact factor varies greatly by subject category. The IF of a given journal does not necessarily reflect the quality of the papers it publishes, and using the journal IF as a proxy for an individual author's or a specific article's scientific merit is strongly discouraged.

The impact factor, however imperfect, continues to be an important benchmark of the success of a journal, but in my opinion what really makes a high-quality journal is, above all, the expertise and dedication of its editors and reviewers, as well as the excellence of its editorial process.

References
[1]
P. Smart.
Is the impact factor the only game in town?.
Ann R Coll Surg Engl., 97 (2015), pp. 405-408
[2]
A. Grzybowski.
The journal impact factor: how to interpret its true value and importance.
Med Sci Monit., 15 (2009), pp. SR1-SR4
[3]
E. Garfield.
Citation indexes for science; a new dimension in documentation through association of ideas.
Science., 122 (1955), pp. 108-111
[4]
H. Donato.
Traditional and alternative metrics: the full story of impact.
Rev Port Pneumol., 20 (2014), pp. 1-2
[5]
U. Aydıngöz.
Ways to improve a journal's impact factor in the online publication era.
Diagn Interv Radiol., 16 (2010), pp. 255-256
[6]
L. Bornmann, W. Marx, A.Y. Gasparyan, G.D. Kitas.
Diversity, value and limitations of the journal impact factor and alternative metrics.
Rheumatol Int., 32 (2012), pp. 1861-1867
[7]
M. Jalalian.
The story of fake impact factor companies and how we detected them.
Electron Physician., 7 (2015), pp. 1069-1072
[8]
G. Eysenbach.
Citation advantage of open access articles.
PLoS Biol., 4 (2006), pp. e157
Copyright © 2016. Sociedade Portuguesa de Pneumologia