There are numerous critiques, both online and in the
literature (pdf), of the overused h-index and journal impact factor (IF) metrics,
particularly when it comes to assessing the quality of recent research. However, many of these critiques do not
include suggestions for how to improve the situation, aside from pointing out that if h-index equals half the square root of total citations, then it is a redundant number. Over in Economics, they have gone all out to
make a fantasy economics league, but we dirt people have no such
construction. Here, then, are a few
easily calculated stats that would be an improvement on the status quo. They can be calculated using Google Scholar,
if necessary, assuming anyone knows how to yank their numbers out of it.
COIF: Citations over Impact factor.
This is the number of citations per year a given paper has
relative to the impact factor of the journal it was published in. Impact factor divided by two approximates the average
citations per year a journal's papers receive in their first two years after publication;
subtracting that from a given paper's citations per year gives that
paper a score, and averaging those scores across a researcher's papers gives the researcher's overall COIF.
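In other words, a single paper's score is just its citations per year minus half the journal's impact factor. A minimal sketch of the arithmetic in Python, using the Parsons et al. (2008) numbers from the table further down:

```python
def coif(citations_per_year, impact_factor):
    # COIF for one paper: citations per year minus IF/2, where IF/2
    # stands in for the journal's average citations per paper per year
    # over the first two years.
    return citations_per_year - impact_factor / 2

# A paper with 4.3 citations per year in a journal with an IF of 2.0:
print(coif(4.3, 2.0))  # 3.3
```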
This metric puts the particular work of a scientist into
perspective relative to others who publish in similar journals. Of course, the COIF from
someone who publishes in journals with IF of 20 is not comparable to that of
those who publish in journals with an IF of two, but if IF is going to be tied to
individual researchers despite all admonitions against this practice, then COIF
gives a way to interpret it.
I suspect that most young to mid-career scientists will
have a positive COIF; citations, at least in geology, tend to accumulate more
in later years than in the first two.
However, a declining COIF might mean that one's work is becoming less
relevant as time goes by.
Whether an institution wants a person with a low COIF in
flashy journals, or a high COIF in esoteric publications, probably depends on
the particular institution and its priorities. So the COIF might even be useful for
determining how well suited people are to particular institutions.
As an industry person who publishes occasionally, I have few
enough papers to be able to calculate this for myself manually and easily (using Google Scholar, which probably inflates the numbers by 20%).
Anyone with a basic knowledge of programming could probably automate the
process, though; a rough sketch of how follows the table.
| Paper             | Year | Journal      | IF  | CPY | COIF |
|-------------------|------|--------------|-----|-----|------|
| Birch et al.      | 2007 | AJES         | 1.6 | 1.8 | 1.0  |
| Parsons et al.    | 2008 | Am Min       | 2.0 | 4.3 | 3.3  |
| Klemme et al.     | 2008 | Geostandards | 3.2 | 2.9 | 1.3  |
| Parsons et al.    | 2009 | CMP          | 3.5 | 3.3 | 1.6  |
| Aleinikoff et al. | 2012 | Chem Geol    | 3.5 | 7.7 | 5.9  |
| Magee et al.      | 2014 | SIA          | 1.2 | 1.0 | 0.4  |
| Mean              |      |              |     | 3.5 | 2.2  |

(CPY = citations per year; the mean row averages the CPY and COIF columns.)
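If you did want to automate it, something along these lines would do the arithmetic; actually pulling citation counts out of Google Scholar is the hard part and is left as an exercise. The numbers below are just the rows of the table above, and the paper ages assume the counts were looked up in 2015.

```python
from statistics import mean

# (paper, year, journal IF, total citations) -- taken from the table above.
papers = [
    ("Birch et al.",      2007, 1.6, 14),
    ("Parsons et al.",    2008, 2.0, 30),
    ("Klemme et al.",     2008, 3.2, 20),
    ("Parsons et al.",    2009, 3.5, 20),
    ("Aleinikoff et al.", 2012, 3.5, 23),
    ("Magee et al.",      2014, 1.2, 1),
]

CURRENT_YEAR = 2015  # assumption: the year the citation counts were collected

def coif(year, impact_factor, citations):
    cpy = citations / (CURRENT_YEAR - year)  # citations per year since publication
    return cpy - impact_factor / 2

scores = [coif(year, jif, cites) for _, year, jif, cites in papers]
for (name, *_), score in zip(papers, scores):
    print(f"{name}: COIF = {score:.1f}")
print(f"Mean COIF = {mean(scores):.1f}")  # about 2.2, matching the table
```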
SCP: Self-citation percentage
What percentage of a paper's citations come from authors of
that paper? This is simply the number of times a paper is cited by one or more
of its own authors, divided by its total number of citations. This has been looked
into by a number of people in the never-ending struggle to interpret citation
numbers. At least some suggest that the
number is generally in the twenties, and doesn't have enough variance to be useful,
but I find that surprising, as the papers I've published vary quite a bit.
Demonstrating on myself again, it ranges from 4% to 100%:
| Paper             | Year | Cites | Self-cites | SCP  |
|-------------------|------|-------|------------|------|
| Parsons et al.    | 2008 | 30    | 6          | 20%  |
| Aleinikoff et al. | 2012 | 23    | 1          | 4%   |
| Parsons et al.    | 2009 | 20    | 7          | 35%  |
| Klemme et al.     | 2008 | 20    | 6          | 30%  |
| Birch et al.      | 2007 | 14    | 2          | 14%  |
| Magee et al.      | 2014 | 1     | 1          | 100% |
| Total             |      | 108   | 23         | 21%  |
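The arithmetic for SCP is even simpler; here is a sketch along the same lines, using the counts from the table above (deciding which citing papers actually share an author still has to be done by hand):

```python
# (total citations, self-citations) per paper, from the table above.
papers = {
    "Parsons et al. 2008":    (30, 6),
    "Aleinikoff et al. 2012": (23, 1),
    "Parsons et al. 2009":    (20, 7),
    "Klemme et al. 2008":     (20, 6),
    "Birch et al. 2007":      (14, 2),
    "Magee et al. 2014":      (1, 1),
}

for name, (cites, self_cites) in papers.items():
    print(f"{name}: SCP = {100 * self_cites / cites:.0f}%")

total_cites = sum(c for c, _ in papers.values())
total_self = sum(s for _, s in papers.values())
print(f"Overall SCP = {100 * total_self / total_cites:.0f}%")  # about 21%
```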