Scientific Journal Rankings and Citations Know-How
Suppose you are the topper in your class. Now compare yourself with the other toppers from different classes in the same school, then with other schools in the same district, and finally with those across the whole country. Where do you stand among all the toppers? Most probably you do not know, because there is no established metric, nor can anyone unequivocally stamp you as the topper among toppers. The world of scientific publication faces the same dilemma. Thousands of journals, millions of authors! How do we rank them on merit? Is it even possible? Efforts have been made to categorise scientific output at different levels: individual article metrics, author metrics, journal metrics and so on. Often we hear terms such as Impact Factor (IF), h-index and i10-index.
Impact Factor (IF) is a measure of journal impact, reported in the Journal Citation Reports (JCR). The scale was developed in the 1960s by Eugene Garfield and Irving Sher. Example:

A = the number of times articles published in journal X in 2008 and 2009 were cited by indexed journals during 2010
B = the total number of “citable items” published in journal X in 2008 and 2009
A/B = the 2010 impact factor for journal X

More concretely: if the articles journal X published in 2008 and 2009 received a total of 100 citations in 2010 (A = 100), and the journal published 50 articles in those two years (B = 50), then A/B = 100/50 = 2, the journal's IF for 2010. IF therefore indicates the average number of times a single document is cited within the two years after its publication: an IF of 2.5 means that an article published in that journal received, on average, 2.5 citations over the previous two-year window. Some flaws in the JCR Impact Factor (IF):
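The two-year calculation above can be sketched as a tiny function (illustrative names only; JCR's actual computation also applies document-type rules when counting "citable items"):

```python
def impact_factor(citations_in_year, citable_items):
    """Journal Impact Factor for year Y: citations received in Y to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_in_year / citable_items

# The example from the text: 100 citations in 2010 to the 50 articles
# journal X published in 2008-2009 gives an IF of 2.0 for 2010.
print(impact_factor(100, 50))  # 2.0
```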
- It is based on the journal as a whole, not on any individual author's performance
- IF is limited to journals listed in the Web of Science database, which held only about 15% of all active journal titles (as of December 2013)
- IF is calculated from the previous two years of data, but in practice the full impact of an individual publication builds over decades
- IF can be manipulated by self-citation within the same journal, although journals whose self-citation rate exceeds 30-40% are often barred from receiving an Impact Factor
- The scale is not static and favours journals that are already higher up the pyramid
CiteScore, introduced by Elsevier in December 2016, is similar to the Impact Factor (IF), but the calculation covers a 4-year period instead of 2 years.
Citations are one of the most used and readily available metrics for author-level performance. The more citations per published document, the more impact the publication is assumed to have, and thus the more prestige it confers on the author. This too has drawbacks, since self-citation or reciprocal citation by colleagues may inflate the score of subpar publications. Moreover, a high citation count does not always mean a higher impact on policy making or on improving the research field in general.
h-index is a comparatively new measure, developed by Hirsch, intended to provide “an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions”. An individual researcher's h-index can be found easily in Google Scholar or Scopus. In a nutshell, this metric provides an individual-level performance measure. For example, if someone has 10 manuscripts that have each been cited at least 10 times, that individual's h-index is 10. The h-index only becomes 11 when at least 11 manuscripts have been cited at least 11 times, no matter whether the total number of manuscripts has reached 100 or even 1000. Being a single, easy-to-find metric, the h-index seems to be the solution to all problems, but unfortunately it is not. Different databases may report different h-indices for the same author, mainly because they differ in which articles they index. Authors also usually need to publish more to achieve a higher h-index.
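The definition lends itself to a short computation: sort a researcher's citation counts in descending order and find the largest h such that at least h papers have at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Ten papers cited 10 times each give an h-index of 10, as in the text.
print(h_index([10] * 10))             # 10
# Adding 90 papers with a single citation each does not raise it.
print(h_index([10] * 10 + [1] * 90))  # 10
```

This also illustrates the drawback mentioned above: many lightly cited papers do nothing for the h-index, so only sustained, well-cited output moves it.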
m-value is used as “an indicator of the successfulness of a scientist”; it is comparable to the h-index, but corrected for the scientist's academic age. Similarly, if you want to jam your mind with other less-used indices such as the e-index, hc-index, p-index, Ab-index, Pr-index, Bh-index or v-index, you are welcome to look up Carpenter et al. (2014) for more details.
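Hirsch's age correction is usually defined as m = h divided by the number of years since the researcher's first publication. A minimal sketch under that assumption (the parameter names are illustrative):

```python
def m_value(h, years_since_first_publication):
    """Hirsch's m quotient: the h-index divided by the number of years
    since the researcher's first publication, i.e. an age-corrected
    rate of h-index growth."""
    return h / years_since_first_publication

# A researcher with h = 20 after 10 active years has m = 2.0,
# the same rate as a researcher with h = 40 after 20 years.
print(m_value(20, 10))  # 2.0
print(m_value(40, 20))  # 2.0
```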
Emerging document-level measures
Academic journals keep records in an academic manner, but what about popular media outreach (e.g., Facebook, Twitter, YouTube)? If people are talking about an article on social media, there must be some impact, and that should be captured in some metric. Altmetric and Plum Analytics are among the widely used measures of such outreach (e.g., the number of times an article is cited, downloaded, exported, shared and so on). In addition, sites like ResearchGate, SlideShare and Academia.edu provide individual researchers the opportunity to share their work and track their own impact within the community.
A few journal quality lists can be found below:
- SCImago Journal Rank (open)
- Journal Citation Reports by Clarivate (requires access)
- Journal Quality List, compiled and edited by Prof. Anne-Wil Harzing (open)
A few benchmarking tools for analysis (to analyse papers by research field, author, institution, journal, country, region, etc.):
- InCites by Clarivate based on Web of Science data (requires access)
- SciVal by Elsevier, based on Scopus data (requires access)
- Carpenter, C. R., Cone, D. C., & Sarli, C. C. (2014). Using publication metrics to highlight academic productivity and research impact. Academic Emergency Medicine, 21(10), 1160-1172.
This is a Research HUB original post shared by Hasan Mahbub Tusher, Research Assistant at the University of South-Eastern Norway. To publish your article, write it up and send it to firstname.lastname@example.org with a photo of yourself.