
A new methodology for ranking scientific institutions


We extend the pioneering work of J.E. Hirsch, the inventor of the h-index, by proposing a simple and seemingly robust approach for comparing the scientific productivity and visibility of institutions. Our main findings are that: i) while the h-index is a sensible criterion for comparing scientists within a given field, it does not directly extend to ranking institutions of disparate sizes or to journals; ii) however, the h-index, which always increases with paper population, has a universal growth rate for large numbers of papers; iii) thus the h-index of a large population of papers can be decomposed into the product of an impact index and a factor depending on the population size; iv) as a complement to the h-index, this new impact index provides an interesting way to compare the scientific production of institutions (universities, laboratories or journals).

J.F. Molinari (1), A. Molinari (2)*

(1) Laboratory of Mechanics and Technology, Ecole Normale Supérieure de Cachan, Paris 6, France

(2) Laboratory of Physics and Mechanics of Materials, Université Paul Verlaine, Metz, France

Attachment: paper.pdf (326.99 KB)
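As a sketch of the quantities discussed in the abstract, the h-index can be computed from a list of citation counts, and the proposed impact index then follows by dividing out a size factor. This is a minimal Python illustration, not the authors' code; the function names and the value of the growth exponent `beta` are assumptions for illustration (the attached paper derives the actual exponent).

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h


def impact_index(citations, beta=0.4):
    """Size-normalized impact: h divided by N**beta, where N is the paper
    count. The exponent beta = 0.4 here is an assumed placeholder for the
    universal growth rate discussed in the paper."""
    n = len(citations)
    return h_index(citations) / n ** beta if n else 0.0
```

For example, a set of five papers with citation counts [10, 8, 5, 4, 3] has h = 4, since four papers have at least four citations each.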

Comments

Henry Tan

Has the Internet Era changed a scientist's attitude towards scientific publication?

jfmolinari

This is a very good question, and I believe that in many ways the answer is yes. The web site iMechanica is a very good example.

But perhaps I should comment on why my father and I wrote this paper.

First and foremost, there is an obvious scientific reason. We were very surprised to find a similar growth rate for the h-index across all the fields that we considered, even though they were disparate in nature (e.g. journals versus universities). This observation is, we think, rather puzzling.

Second, it is our opinion that no matter what, metrics are going to become more and more popular. The posted paper is our attempt to participate in the debate knowing that if we, as iMechanicians, do not get involved, others outside of our community will do it for us.

I hope that the posted paper will promote a constructive debate.

 

Yours,

 

Jean-Francois Molinari

Regarding these kinds of rankings, I think it is possible that the negative outweighs the positive. Rankings can bring objectivity to the allocation of research funds. However, rankings can create an orthodoxy and stifle innovation.

If the only thing that matters is the number of papers and citations, that is a very strong incentive to (a) work in a field that is already glutted, because there will be lots of other workers who can cite your paper, (b) write the shortest possible publishable papers in order to maximize the number of papers written, and (c) attach one's name to as many papers as possible, no matter how modest one's contribution may have been. As familiar as these strategies may be to some in academia, none of them improves the state of knowledge in mechanics.

I think we can all agree that the real merit of a piece of work, if we must judge merit, is correctness and usefulness. The number of times the work has been cited in the open literature is often a poor indicator of this, especially in mechanics, because the primary "consumers" of research literature, practicing engineers, mostly do not publish. Therefore, they produce neither papers nor citations. There are examples of books and papers that are so well known in industry that one can simply mention the name of the author, knowing that people in the field will know the work, yet many of these same papers and books have few academic citations -- for instance, Bruhn's flight vehicle structural analysis book, which isn't even in print any more but sits on thousands of cubicle bookshelves.

Pradeep Sharma

Grant, I agree with your comments, but I am afraid academic research is increasingly being "MBA-ized", and I don't see things getting better anytime soon. Thus, whether we like it or not, administrators will impose some sort of metrics. The pragmatic approach for us may be to "design or control" the ranking metrics so that true scholarship emerges, rather than a system that can be abused by, as you alluded to, tricks like working in crowded fields or unnecessarily increasing paper counts. Unfortunately, I don't have any solutions, but this is a topic worth discussing further at society meetings (and of course on iMechanica).

Henry Tan

Well, this is the rule of our game.

Please let me know where I can find the complete list of h-indices for all the journals.

Thanks in advance.

jfmolinari

I am not sure whether such a list exists yet. If it does, I have not seen it.

It could be compiled easily, as the new bibliometric tools output the h-index.

However, I believe h_m is a better indicator than h for comparing journals, as it removes the size effect (indeed, a journal publishing many papers will tend to have a higher h than a journal with a smaller output).

h_m per journal for a given field would be an interesting table to compile.

 

Best regards,

Jean-Francois Molinari

Dear Jean

Thank you. I'm new to this field, so could you please explain what h_m is and how it improves on the impact factor and the h-index?

jfmolinari

Sorry for the late reply.

h_m is not a better index than the h-index; it is a complementary indicator. This is what is explained in the attached PDF paper.

The h-index and h_m could be used hand in hand: the h-index measures raw visibility, whereas h_m is the visibility normalized by the effect of size (I would strongly recommend h_m for comparing institutions of disparate sizes).
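As a purely hypothetical numerical illustration of this point (the institutions, h values, and paper counts below are made up, and the exponent beta = 0.4 is an assumption, not a value taken from this thread): a large institution can have the higher raw h while a smaller one has the higher size-normalized h_m.

```python
BETA = 0.4  # assumed growth exponent, for illustration only

# Made-up numbers: the larger institution has the higher raw h.
institutions = {
    "Large University": {"h": 60, "papers": 5000},
    "Small Laboratory": {"h": 25, "papers": 500},
}

for name, data in institutions.items():
    h_m = data["h"] / data["papers"] ** BETA  # size-normalized impact
    print(f"{name}: h = {data['h']}, h_m = {h_m:.2f}")
```

With these made-up numbers, the large university's h_m comes out lower than the small laboratory's, even though its raw h is more than twice as large.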

 

Best,

 

JF 

  

Jean-Francois Molinari

http://lsms.epfl.ch/

Mike Ciavarella

Jean Francois

The list exists, and I have just written about it:

The full list of journals ranked by H index

michele ciavarella
www.micheleciavarella.it
