
Sunday, June 19, 2005

Citation patterns

I've been meaning to make an idle comment on this for a while; i.e., this is unprompted by any particular event in my personal life, though Sean's blog post on inter-disciplinary citation patterns (physicists don't cite sociologists, or some such) prompted further thought on the matter...

Anyway, most of you have had these moments: someone buttonholes you at a meeting, or sends an e-mail nastygram, claiming you overlooked their important, if not critical, paper which was somehow vaguely related to your recent paper.
What I really hate about such moments is that they work: the easy thing to do is to look slightly embarrassed and add a cite, even if it has nothing to do with your work. The other thing I hate is when I find myself going "but they didn't cite me..."
An insidious issue is that the people who complain the most about others not citing them, and who thereby drive up their own citation rates, are also generally the ones who don't cite others, sometimes quite deliberately. It is a strategy instantly familiar from classical iterated game theory.
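(For the game-theoretically inclined: here is a minimal toy sketch in Python of why that strategy pays, written as a bog-standard iterated prisoner's dilemma. The strategy names, payoff numbers, and framing of "citing" as cooperation are all invented for illustration and have nothing to do with real citation statistics.)

# Toy iterated "citation game", in the spirit of the iterated prisoner's
# dilemma. Purely illustrative: "C" = cite the other group, "D" = ignore them.
# Payoffs are the standard textbook values, not measured citation counts.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,   # mutual citing: both gain visibility
    ("C", "D"): 0,   # I cite them, they ignore me
    ("D", "C"): 5,   # they cite me, I ignore them (the nastygram strategy)
    ("D", "D"): 1,   # mutual silence
}

def always_defect(_my_hist, _their_hist):
    return "D"

def always_cooperate(_my_hist, _their_hist):
    return "C"

def tit_for_tat(_my_hist, their_hist):
    # Cite back whoever cited you last time; start out generous.
    return their_hist[-1] if their_hist else "C"

def play(strat_a, strat_b, rounds=50):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("defector vs generous citer:", play(always_defect, always_cooperate))
    print("defector vs tit-for-tat:   ", play(always_defect, tit_for_tat))
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))

Run it and the defector cleans up against an unconditionally generous citer, but barely breaks even against someone who only cites back; mutual reciprocity does far better than mutual silence in the long run.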

So, why does this matter?

First, citations count. They are part of the "reputation index" of scientists, and they are objective, quantitative measures of something, so university administrators have a tendency to obsess over them.
Citing properly is hard, even with modern tools like ADS and the citation index.
Large fields produce papers with large citation counts; it is difficult to get 1000 cites if only 20 papers are published in your sub-field each year! Even if every one of those papers cited yours, that would be only 20 cites per year, or 50 years to reach 1000. (It is possible, if your work becomes important to some other, larger field, like cosmology.)
There are infamous, and impenetrable, "schools of citation": research groups which meticulously cross-cite each other's work and ruthlessly ignore work by competitors.
It is possible to miss, in good faith, whole groups of cites, even with ADS-like tools, if the papers don't cross-cite. Way back when I did my thesis, I was mortified to discover a series of Australian papers related to my thesis topic which I had been unaware of. The student who did them evidently left science, and no one cited them, so I didn't come across them until after the thesis was written. (I did cite them in later papers.)

So, when should people cite, other than the obvious classic papers, recent discoveries, and the classic and recent reviews?
True citation classics are actually never cited: no one cites Einstein (1905) when using E = mc^2, or Newton (Principia, CUP) when using F = GMm/r^2... If you read it and it contributed to your work, you should cite it, if you can; some journals have fairly ruthless limits on citation counts.
If you see a paper you think is flawed, and you decide to do the problem "correctly", should you cite the paper, even if you don't rely on its results or methods? I'd say "yes", because it is the paper that got you thinking about the problem!

Is citation analysis overused? The heavy citation of great papers is usually earned: they are great, or at least very useful (in astronomy, catalogs and methods papers tend to get very heavy citations, even though they are sometimes not particularly profound and don't necessarily solve difficult problems). Papers that get no citations are mostly uncited because they are bad or irrelevant. Still, the mean citation rate per paper in science tends to be less than 1 across disciplines...

Ah well, in the end, getting a cite is still a thrill, particularly for a very new paper (especially one with a student first author) or a very old one, and especially when it comes from someone "new". But citation practice is too easily gamed, and the whole process is too flawed, to be taken as seriously as it sometimes is.
