Citation analysis? It is an old story!

I started my research on citation systems with a very positive attitude, and I have read many scientometrics papers, but now I am completely disappointed! In the report by the International Mathematical Union (IMU) (attached) about citation statistics, I found an interesting statement: citations should not be used to compare papers from two different disciplines. But what about papers from two different sub-disciplines? Or even two different sub-sub-disciplines? And so on!

In fact, citations cannot be used to compare anything without complete information about what is being compared, because in practice every scientist has his own discipline. For example, we have different methods for imaging: MRI, PET, CT, ultrasound, SPECT, FRI, optical, BLI, ... Among these methods MRI is the most famous. The methods are more complementary than competitive, but if you work on MRI your work will get more citations, because MRI has more applications, even though your work is still useless for many other applications. Now imagine you work on the optical side: you can use different probes there, such as quantum dots, GFP, and new phosphorescent nanoparticles. If you work on quantum dots your work will get more citations, because quantum dots have more applications, but again your work is useless in many other applications. If you then specialize within quantum dots, the same thing happens again. Now, which imaging method won the Nobel Prize? MRI. Which optical imaging probe has the most Science papers in imaging? Quantum dots. In fact,

Citations = Quality of work × Popularity of topic.

The second factor is even more influential (it varies over a wider range). We want to know the quality of the work, but we have no way to separate out the popularity of the topic. So the question arises: which metric should be used instead? Citations are currently used for ranking authors, universities and nations, for getting feedback on investment in science, and so on.
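
To make this concrete, here is a minimal toy simulation (my own illustrative sketch, not real data): under the multiplicative model above, if popularity varies over a much wider range than quality, then ranking papers by citations mostly ranks them by topic popularity.

```python
import random

# Toy version of the formula above: citations = quality * popularity.
# Quality varies over a narrow range, popularity over a wide one.
random.seed(0)
papers = [(random.uniform(1, 3), random.uniform(1, 100)) for _ in range(1000)]

# Rank by simulated citation count and inspect the top of the ranking.
top = sorted(papers, key=lambda qp: qp[0] * qp[1], reverse=True)[:100]

print("overall mean quality:   ", sum(q for q, _ in papers) / len(papers))
print("top-100 mean quality:   ", sum(q for q, _ in top) / len(top))
print("overall mean popularity:", sum(p for _, p in papers) / len(papers))
print("top-100 mean popularity:", sum(p for _, p in top) / len(top))
# Expected outcome: relative to the overall means, the top 100 papers are
# selected far more strongly on popularity than on quality.
```

The exact numbers do not matter; the point is that the most-cited set tells you much more about the topic than about the work.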

All of the above assumes that a cited paper is actually used in the citing paper. In reality, most references come from the introduction (and often not even to give positive credit!). J. H. Schön's papers, now known to be fraudulent, collected many citations very soon after publication. Were they actually used in any paper?

1. Even if we assume all authors review the literature ideally, every paper is based on a few key papers plus a number of other papers. In the current citation-based system all referenced papers get the same credit, and that is not right.

2. Citation counting says papers have different values, but every citing paper gives the same credit! This leads to a paradox: if we call highly cited papers high quality and rarely cited papers low quality, then most papers are low quality, and since most references come from these papers, the quality of the "high-quality" papers is determined by the low-quality ones.

3. Because of the previous point, doing something a little useful for many papers is rewarded far better than doing something great for a few papers.

4. There is no limit on the number of references, and many low-quality papers have very long reference lists, which distorts the citation database significantly.

5. There is no metric for assessing the final influence of a body of papers on industry and society, so a network of low-quality papers can simply cite each other. Despite great effort, no substantial progress can be observed in the past 50 years in the fight against cancer!

6. Reviewers at many ISI journals do not even check the content of papers carefully; the references come after that.

7. A large share of references comes from the introduction, where many authors just want to justify their work, and many will cite any paper that helps with that justification.

8. The h-factor may be good for selecting Nobel Prize winners, who have a number of high-quality papers, but it is not good for ordinary researchers; in practice the citation diagrams of many of them look very different (I checked!). (Attention: even Einstein was an ordinary researcher!)

9. Some books and review papers summarize other papers, and then authors cite the books and review papers rather than the originals. Many papers are thus lost because they have been summarized in books or review papers.

10. When an author or institution becomes famous, people follow their work and it gets more citations (the halo effect).

 

Comments

It is a sobering fact that some 90% of papers that have been published in academic journals are never cited.

Indeed, as many as 50% of papers are never read by anyone other than their authors, referees and journal editors. We know this thanks to citation analysis, a branch of information science in which researchers study the way articles in a scholarly field are accessed and referenced by others (Lokman I. Meho, "The rise and rise of citation analysis", Physics World). If we call these papers low-quality papers, then in citation terms they have changed the citation numbers more than any others, so the low-quality papers determine which papers count as "high quality"!

A higher rate of downloads in the first year after an article appears predicts a higher number of eventual citations later. But what makes people download a paper in the first place? ("Earlier Web Usage Statistics as Predictors of Later Citation Impact", T. Brody and S. Harnad, arXiv preprint)

In the third world, people use the number of papers as the metric, which is ridiculous: authors rework their papers and send them to many journals to collect acceptances. Citation counting is much better in that situation.

Even if we want to use bibliometric data to evaluate quality, there are many approaches more rational than raw citation counts; graph models have been developed to describe exactly such systems.

For a constant total number of citations, the h-factor is maximal when the citation diagram (citations versus paper rank) lies near a -45 degree line.
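
A minimal sketch related to that claim (my own illustration; the comment gives no code): compute the h-factor of a citation list and compare two profiles with the same citation total. A compact, evenly spread profile, the spirit of the -45 degree diagram, with the square block as its extreme case, attains an h near the square root of the total; a profile dominated by one blockbuster paper scores far lower.

```python
def h_index(citations):
    """h-index: the largest h such that at least h papers
    have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Two profiles with the same total of 100 citations:
flat = [10] * 10          # "square" profile: 10 papers x 10 citations
skewed = [91] + [1] * 9   # one blockbuster plus nine minor papers

print(h_index(flat))      # 10, the maximum possible for a total of 100
print(h_index(skewed))    # 1
```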

In many disciplines and research systems it is like that, and in many others it is not. It may be suitable for you and unsuitable for many others (roughly 50%-50%).

It also depends on taste: do you prefer 3 papers with 120 citations or 8 papers with 30 citations? Which is better? It depends on the policy of the institute.

That is, if we want to rely on the citation metrics available at scholar.google.com!

Then normalizing the citation diagram by a function of time is better.
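
The comment does not specify the function of time; as one hedged possibility, a simple age normalization divides the citation count by the years since publication:

```python
from datetime import date

def citations_per_year(citations, year_published, today=None):
    """Age-normalized citation count: total citations divided by
    the number of years since publication (at least one year)."""
    current_year = (today or date.today()).year
    age = max(1, current_year - year_published)
    return citations / age

# A 2005 paper with 60 citations vs. a 2020 paper with 30 citations:
print(citations_per_year(60, 2005, date(2025, 1, 1)))  # 60 / 20 = 3.0
print(citations_per_year(30, 2020, date(2025, 1, 1)))  # 30 / 5  = 6.0
```

Under this normalization the younger paper ranks higher, even though its raw count is lower.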

We upload our photographs to iMechanica as 100 kB files; how can we expect to represent all the efforts of one person with a single number between 0 and 100000? Should we use RAR?

I don't know why ISI thinks all of science is one topic. The most familiar example may be biology: there are many animals: cats, dogs, goats, sheep, snakes, monkeys, mice, and many others. Think of the biologists who each work directly on one of them. Some of these animals may have received much more attention than others, perhaps because of geographical factors and many other things. Then the biologists who worked on mice (for example) get more citations. Does that mean they have more inventiveness or intelligence? We should first categorize science more precisely, if we can, and then try to find the best researchers.

Zhigang Suo's picture

Dear Roozbeh: You bring up a fact: the number of citations depends on the field. But your suggestion to refine the definition of the fields may be very difficult to execute. When search becomes much easier than before, the value of putting things into different folders decreases. If you have too few folders, each folder contains too many items. If you have too many folders, you have difficulty memorizing the name of every folder. Also, your folders might be different from mine. Does a paper on strain-induced formation of quantum dots belong in the folder of mechanics, semiconductors, quantum mechanics, optics, or biological imaging?

The history and the failure of categorization are discussed in the book Everything is Miscellaneous, which we talked about in a previous thread. You may enjoy watching the video of a lecture by the author of the book.

I think search is not so easy. For example,

  • Self-assembled-nanoscale-biosensors-based-on-quantum-dot-FRET-donors (2003, 44.2, WA)
  • Single-quantum-dot-based DNA Nanosensor (2005,25.6, JHU)
  • Nanoparticle-based-bio-bar-codes-for-the-ultrasensitive-detection-of-proteins (2003,56.6, Northwestern)

are 3 papers related to "nano-bio-sensors".

I found the first one with the "biosensor" keyword, the second with the "nanosensor" keyword, and the third with the "protein+detection" keyword. What about the papers for which I don't know the relevant keywords?

There is no need to put each paper in only one category. For your example it could be

1. Biological imaging / Molecular imaging / Imaging probes / Quantum dots / Formation of quantum dots

or

2. Quantum dots / Formation of QDs / Strain-induced formation of QDs

or

3. Mechanics / ... / Strain-induced formation / Strain-induced formation of QDs

or

...

This helps people with any point of view find the document they are after; a sketch of how such an index could work follows below.
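
As a rough sketch of such multi-path categorization (my own hypothetical illustration; only the example paths above come from the discussion), each paper can be indexed under every prefix of every path assigned to it, so readers starting from different viewpoints all reach the same document:

```python
from collections import defaultdict

# The three alternative category paths listed above for the same paper.
paths = [
    ("Biological imaging", "Molecular imaging", "Imaging probes",
     "Quantum dots", "Formation of quantum dots"),
    ("Quantum dots", "Formation of QDs", "Strain-induced formation of QDs"),
    ("Mechanics", "Strain-induced formation",
     "Strain-induced formation of QDs"),
]

index = defaultdict(set)
paper = "Paper on strain-induced formation of quantum dots"
for path in paths:
    for i in range(1, len(path) + 1):
        index[path[:i]].add(paper)  # every prefix of every path finds it

# A mechanician and a bio-imaging researcher start from different roots:
print(index[("Mechanics",)])
print(index[("Biological imaging", "Molecular imaging")])
```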

Wikipedia is in fact a categorization scheme, but one with many complementary sentences.

 

Wikipedia and iMechanica are examples of categorization. Many things we read in Wikipedia are not novel; we could also find them via Google, but buried among many commercial pages, or by reading the same sentences again and again. Wikipedia is in fact an attempt at categorizing information.

Many things we read in iMechanica are not new either, but they are useful. We could find them on many other websites and in many other forms of categorization, but for mechanicians it may be easier to find the related topics in iMechanica. iMechanica is an attempt at categorizing information relevant to mechanicians.

 

But I am thinking of a categorization without complementary sentences, one to which it is easy for contributors to add a new item.

 

 

Mike Ciavarella's picture

As you can see, there are blogs around on this!

http://www.timeshighereducation.co.uk/story.asp?storycode=400516

Researchers may play dirty to beat REF

7 February 2008

By Zoë Corbyn

Senior academics predict a boom in manipulation of citations under new system. Zoe Corbyn reports.

The kinds of manipulation and gamesmanship that researchers could employ to boost their research ratings under the system set to replace the research assessment exercise have been outlined in frank detail to Times Higher Education by a number of senior academics.

Under the forthcoming Research Excellence Framework, which is out for consultation until 14 February, research quality in the sciences will be judged largely on the basis of the number of times an academic's work is cited by his or her peers.

One potential technique to maximise citations could harm young researchers looking to climb the citation ladder. Senior scientists could manipulate publication procedures used by their research groups to prevent first and second-year PhD students being added as secondary authors to group publications, a practice seen as affording students an early career boost. They could later be used as lead authors on group work on the understanding that they cited earlier papers from the group where their names did not appear. The practice would circumvent the Higher Education Funding Council for England's plan to exclude "self citations" - where academics cite their own earlier work - and would allow senior researchers to improve their citation counts substantially.

"Any
system that undermines the aspirations and ambitions of our very best
early-career researchers would be a step backwards," said one pro
vice-chancellor.

Another senior academic said universities might introduce institutional publication policies to ensure researchers achieved maximum citations. "If Hefce chooses to harvest all papers produced from an institution, the cat is really among the pigeons," said the source. "Institutions will have to start policing every paper that leaves the institution destined for publication."

But behaviour that could hinder young researchers is just the tip of the iceberg, according to others.

"Citation
clubs" - cartels of colleagues who do deals to cite each others' work -
may become increasingly common. Observers also predict a boom in the
number of spurious or sensational papers that are more likely to grab
attention and citations.

"Lots of people quote bad science," one pessimistic senior researcher said.

"Citation clubs are already operating in the US," said one head of department.

"A
lot of academic work is already based on mutual admiration and back
scratching, and that will intensify," said another anonymous source.

Concerns that game-playing within the REF will undermine collaborative and interdisciplinary research are also being expressed.

"It
is much better to ensure that joint papers aren't published so that the
chances for citation by a group working in the same area are
increased," said one professor.

Concerns are particularly acute within the engineering community, which Hefce acknowledges is not well covered by the Web of Science database that it intends to use to count citations.

"The one thing you can absolutely guarantee is that
people will skew behaviour in order to make the best out of whatever
metrics are in place," said Sue Ion, the vice-president of the Royal
Academy of Engineering. She has been working on the society's
submission to the REF consultation.

"If citations become the
be-all and end-all - and Hefce has never said they will - then academic
groups will look at where they should be publishing. The worry is that
rather than doing collaborations with small to medium enterprises and
industry that may not result in any publications, they will try to do
detailed work to report in top-flight journals."

Dr Ion added that the society was looking at other metrics measures that might be introduced to measure interdisciplinary and collaborative research but that "light-touch peer review had a fair bit of attraction". However, she said, there was no "one-size-fits-all" solution for all of engineering's branches.

Among the more colourful suggestions offered by researchers as to how to improve citation counts under the REF is the use of more colons in the titles of papers.

Times Higher Education was referred by one academic to a paper by an American academic, Jim Dillon, "The emergence of the colon: an empirical correlate of scholarship", in the journal American Psychologist in 1981. It records a high correlation between the use of colons in the titles of published academic papers and other quality indicators, including citation counts.

"We might follow this to include at least two
colons in the titles of our papers so as to ensure the highest
recognition," said the academic, adding that the downside might be a
mass breakout of what Dr Dillon might call "colonic inflation".

"I understand that it is extremely painful," he said.

zoe.corbyn@tsleducation.com

SUGGESTIONS TO BOOST YOUR CITATION COUNT IN THE REF

Hefce says: "The main incentive for researchers will be to publish work that is recognised as high quality by peers and becomes highly cited by the international academic community. Short-term game-playing is far less likely to achieve this than responsible strategies to nurture the talent ... and for publishing work that enhances the group's international reputation." But more mischievous academics have suggested some ways to maximise one's rating:

- Do not cite anyone else's research, however relevant it is, as it boosts others' citation counts and increases their funding;

- Do not publish anything in journals with low citation rates, a group that includes lots of applications-based journals, as it will lower your citations;

- Do not do scientific research in fields not yet well covered by Thomson Scientific database, as your output won't be visible;

- Do not report negative results: they are unlikely to get cited;

- Join a citation club.

Readers' comments

  • Charles Jannuzi
    7 February, 2008

    You have a horrible situation in the sciences where people have
    their names on papers they haven't even read, let alone contributed
    anything to. Authorship should remain authorship. So when you see an
    article with one person's name followed by AND and another person's
    name, co-authorship has some sort of meaning. I have had my name on
    papers as co-author and actually written more for the paper than the
    first name (we flipped a coin, or we went by alphabetical order, or we
    took turns). And yet when evaluations roll around, articles that I have
    written as much as 80% of the content for have counted less because
    co-authorship somehow means less or nothing. The reason is all this
    indiscriminate inclusion of names of NON-AUTHORS. It's horrendously
    dishonest.

  • Onlooker
    7 February, 2008

    Try this strategy to increase your citations. Publish very critical
    articles about the work of eminent US academics. They will rush into
    print to repudiate your work, thereby increasing your citation count.
    Repeat with other well-known non-UK academics.

  • K. V. Pandya
    8 February, 2008

    Having a high number of citations is absolutely the wrong way to go
    about measuring research. How can the number of citations measure the
    importance of the research, how applicable is the research? Already
    there are researchers citing and rewarding their friends. I have
    noticed this at the conferences where one speaker praises the
    conference chair and the conference chair praises the speaker, within a
    space of half an hour. It goes on openly, needless to say it will go on
    behind closed doors. The citation clubs are an example. This form of
    research assessment will be open to wider abuse than it currently is.
    Already we have seen people talking about ways of improving citations:
    joining citation clubs, using colons (as used here), not putting
    research students names on it, etc.

    Getting more high-quality publications should be the way
    forward. This should be based on how the research contributes to the
    knowledge of peers, students, industry and the wider community. This
    should be important, not how many friends, research collaborators,
    former colleagues, yes, and foes cite one's publications. In addition to
    this how about having HEFC sponsored conferences on different fields of
    research.

    HEFC needs to look more at how the research is disseminated
    not who has read the publication. Though this could be useful, it
    should not be the only way. It should only play a minor role in the
    research assessment.

  • Barry G Blundell
    8 February, 2008

    In a Letter published in 'Nature' in 1948, Denis Gabor elucidated
    the principles of holography; however, it was not until the invention of
    the laser in the early 1960's that Gabor's pioneering work came into
    its own. In short, for approximately fifteen years Gabor's Letter
    received scant attention. Today his publication is recognised as a
    seminal work and is cited widely.

    In fact, history shows that pioneering breakthroughs in
    science, technology etc, are seldom immediately recognised. Clearly
    therefore anybody who is interested in playing the RAE game should
    avoid leading the way, but rather jump onto a research bandwagon after
    it has met with establishment approval. An alternative approach is
    again demonstrated in history. Almost universally the invention of the
    transistor is attributed to the work done in the 1940's by Bardeen,
    Brattain and Shockley (1947). Since that time their work has been widely
    recognised and it would be likely that a measure of their citation
    index would be off the scale. Of course, this fails to take into
    account the fact that the transistor was actually invented in the
    1920's. In short, once citations have reached a critical number the
    truth becomes irrelevant.

    Looking back over the last two hundred years or so, it is clear
    that scientists and engineers within the UK have the credit for paving
    the way and despite the vast funding poured into US institutions,
    Britain has managed to play a pivotal role in human advancement. How
    sad therefore to see scientists and engineers in the UK being forced
    down a distorted route which is intended to mimic 'excellence'
    associated with US institutions. Have we learnt so little that we
    really believe that the human creative spirit can be quantified?
    Research is a way of life - a quest for understanding - and is
    something to disseminate to one's peers and students. Unfortunately in
    many UK institutions, students have become third-class customers as the
    RAE game has pushed their importance to one side. The pioneers who pave
    the way are unafraid to enter a darkened room without necessarily
    knowing what they will find and without being unduly concerned as to
    how long their quest will take. The continued RAE exercise simply
    serves to ensure that darkened rooms are avoided and at all times one
    plays safe.

    Although above I have acknowledged the UK's tremendous history
    in science, engineering, and other fields, there have been good times
    and not so good times. I firmly believe that future generations will
    look back on this current period as one of the more dismal times for
    creativity and innovation - simply a very poor copy of the flawed US
    model. Dr Barry G Blundell

  • Aaron Ahuvia
    12 February, 2008

    Clearly, citation rates are a very flawed system for assessing
    research quality. But we need some system for doing this. For every
    problem with citation rates, there are far more serious problems with
    the current non-system system. For example, many universities ask a
    department to rate journals in a discipline. The resulting lists
    generally combine some true insight with a heaping dose of self
    interest on the part of the people making the lists, all mixed up with
    petty prejudices about how "our" journals (i.e. those edited by my friends,
    or in my sub-sub-sub area) are better than "their" journals.
    Furthermore, quality is assessed based largely on the journal where
    something is published, rather than on the characteristics of the paper
    itself. When evaluating any new system, we need to keep in mind just
    how lousy the current system is.

    What is really needed is a system of multiple indicators for
    the underlying construct -- in this case research quality -- we are
    trying to measure. Any single measure can always be gamed. Even
    multiple measures can be gamed to a certain extent, but it gets harder
    to fool the system with each additional indicator used. Eventually, it
    just gets easier to do good work than to spend your time figuring out
    how to cheat.

 

Roozbeh Sanaei's picture

AAAS - Survey Finds Citations Growing Narrower as Journals Move Online

Jennifer Couzin

 

Millions of scholarly articles have migrated online in recent years, making trips to library stacks mostly obsolete. How has this affected research, which depends on published work to guide and bolster academic inquiry? A sociologist at the University of Chicago in Illinois argues on page 395 that the shift has narrowed citations to more recent and less diverse articles than before--the opposite of what most people expected.
Working solo, James Evans of the University of Chicago was curious about how citation behavior has changed in the sciences and social sciences. In theory, online access should make it quicker and easier for researchers to find what they're looking for, particularly now that more than 1 million articles are available for free.
Relying on Thomson Scientific's citation indexes and Fulltext Sources Online, Evans surveyed 34 million articles with citations from 1945 to 2005. For every additional year of back issues that a particular journal posted online, Evans found on average 14% fewer distinct citations to that journal, suggesting a convergence on a smaller pool of articles. In other words, as more issues of a journal were posted online, fewer distinct articles from that journal were cited, although there were not necessarily fewer total references to that journal. It suggests herd behavior among authors: A smaller number of articles than in the past are winning the popularity contest, pulling ahead of the pack in citations, even though more articles than ever before are available. The average age of citations also dropped. Valuable papers might "end up getting lost in the archives," says Evans.
Oddly, "our studies show the opposite," says Carol Tenopir, an information scientist at the University of Tennessee, Knoxville. She and her statistician colleague Donald King of the University of North Carolina, Chapel Hill, have surveyed thousands of scientists over the years for their scholarly reading habits. They found that scientists are reading older articles and reading more broadly--at least one article a year from 23 different journals, compared with 13 journals in the late 1970s. In legal research, too, "people are going further back," says Dana Neac u, head of public services at Columbia University's Law School Library in New York City, who has studied the question.
[Figure caption: Tight focus. Citations to journals that have been online longer, according to James Evans, tend to cluster around more recent dates.]

One possible explanation for the disparate results in older citations is that Evans's findings reflect shorter publishing times. "Say I wrote a paper in 2007" that didn't come out for a year, says Luis Amaral, a physicist working on complex systems at Northwestern University in Evanston, Illinois, whose findings clash with Evans's. "This paper with a date of 2008 is citing papers from 2005, 2006." But if the journal publishes the paper the same year it was submitted, 2007, its citations will appear more recent. Evans disputes that this affected his results, noting that in many fields, such as economics, the time to publication remains sluggish.
In other ways, Evans's findings reflect the efficiency that comes with online searching. "There's always been a desire to be focused in your citations, but it was impossible to manifest that in the old world," says Michael Eisen, a computational biologist at the University of California, Berkeley, who helped found the Public Library of Science.
In the end, Evans notes that "I don't have snapshots of people in their offices searching." But, he says, his findings show that "everyone's shifting to this central set of publications"--an effect that may lead to easier consensus and less active debate in academia.

Roozbeh Sanaei's picture

Local science should challenge citation-based evaluation.

Pushes to globalize science must not threaten local innovations

 

 

mohammedlamine's picture

I have never failed to find a paper with the Google search engine. Some engines and related databases aren't easy to use. What is needed is an automatic program for processing citations that scans all databases, not a specific group of them. I have also advised on LinkedIn that Google Scholar (https://scholar.google.com) is efficient at automatically generating citation counts.
