What you always wanted to know about the Impact Factor
Posted 3/8/2012 12:03 AM by Frank Krell
Scientists are often evaluated by metrics based on citations of
scientific papers, because of a common belief that more citations
equate to higher quality. Is this so? A commonly used metric, the
Journal Impact Factor, mainly reflects citations of other
scientists' papers. Does this make sense?
In a recently published invited essay in European Science Editing, Frank Krell discusses
a few crucial aspects and misunderstandings of the Journal Impact
Factor as a performance indicator.
Abstract. The Journal Impact Factor is the most
commonly applied metric for evaluating scientific output. It is
a journal-focused indicator that shows how much attention a journal
attracts. It does not necessarily indicate quality, but a high impact
factor indicates a probability of high quality. Because it is an
arithmetic mean, with high variance, of citation data originating
from all authors of a journal, it is unsuitable for evaluating
individual scientists. To quantify the performance of authors,
author-focused citation metrics such as the h-index should be used,
but self-citations should be excluded ("honest h-index", hh). All
citation metrics suffer from the incompleteness of the databases
from which they draw their data. This incompleteness is unequally
distributed between disciplines, countries and language groups. The
Journal Impact Factor has its limitations, but if these are taken
into consideration, it is still an appropriate indicator of journal
performance.
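
The abstract contrasts the Impact Factor, an arithmetic mean of journal-level citation data, with author-focused metrics such as the h-index and the "honest" hh-index that excludes self-citations. As a rough illustration (not taken from the essay), here is a minimal Python sketch of how these quantities are typically computed from per-paper citation data; the function names and sample data are invented for the example.

```python
def journal_impact_factor(citations_to_recent_items, citable_items):
    """Arithmetic mean: citations in year Y to items published in the two
    preceding years, divided by the number of citable items in those years."""
    return citations_to_recent_items / citable_items

def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def hh_index(citing_authors_per_paper, author):
    """'Honest' h-index: the h-index computed after discarding citations in
    which the evaluated author appears among the citing authors."""
    counts = [sum(1 for citers in citing_papers if author not in citers)
              for citing_papers in citing_authors_per_paper]
    return h_index(counts)

# Hypothetical example: four papers by "Krell"; each inner list holds the
# author sets of the papers citing that paper.
papers = [
    [{"Krell"}, {"Smith"}, {"Lee"}],   # 3 citations, one a self-citation
    [{"Krell"}, {"Jones"}, {"Kim"}],   # 3 citations, one a self-citation
    [{"Krell"}, {"Patel"}, {"Ng"}],    # 3 citations, one a self-citation
    [{"Krell"}],                       # 1 citation, a self-citation
]
print(h_index([len(c) for c in papers]))   # 3, self-citations included
print(hh_index(papers, "Krell"))           # 2, self-citations excluded
```

The toy data show why excluding self-citations can matter: the same publication record yields h = 3 but hh = 2.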