Tuesday, September 4, 2012

Impact Factors Skewed by a Few Articles and not the Entire Journal

Sick of Impact Factors

Posted on August 13, 2012 by Stephen Curry (Reciprocal Space)
I am sick of impact factors and so is science.

The impact factor might have started out as a good idea, but its time has come and gone. Conceived by Eugene Garfield in the 1970s as a useful tool for research libraries to judge the relative merits of journals when allocating their subscription budgets, the impact factor is calculated annually as the mean number of citations to articles published in any given journal in the two preceding years.
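The calculation described above is simple enough to sketch in a few lines. This is a minimal illustration, not any official methodology, and all the numbers in it are invented:

```python
def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Mean citations per article: citations received this year to
    articles the journal published in the two preceding years, divided
    by the number of articles it published in those years."""
    return citations_to_prior_two_years / articles_in_prior_two_years

# Hypothetical journal: 200 articles published in 2010-2011,
# cited 700 times during 2012.
print(impact_factor(700, 200))  # 3.5
```

Note that this is an arithmetic mean over every article in the journal, which is exactly where the trouble discussed below begins.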

By the early 1990s it was clear that the use of the arithmetic mean in this calculation is problematic because the pattern of citation distribution is so skewed. Analysis by Per Seglen in 1992 showed that typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers.