h-indices of Timoshenko medalists

Zhigang Suo's picture

In preparing cases for faculty appointments, my colleagues in other fields often ask about the citations of each candidate and of his or her comparees.  Despite obvious resistance, my colleagues give the following reasons:

  1. The numbers are easy to obtain.
  2. Although the numbers don't tell everything, they tell something.  Why should a scientist suppress any data points?
  3. Numbers don't lie, at least don't lie by themselves.  As long as people don't over-interpret the numbers, the numbers form a part of the case, along with other data.

So, I have been learning how to find such numbers.  A particularly popular metric is the h-index.  There has been much discussion of the pros and cons of this metric, and I have nothing new to add.

Here are steps that I have just learned from a colleague to find the h-index of an author:

  1. Go to the Web of Science (your institution must have paid for this service).
  2. Click the tab "GENERAL SEARCH".
  3. A form appears.  Under the box for "AUTHOR", click the link "Author Finder".
  4. Type in the name of the author, then follow several rounds of "NEXT".  A particularly important step is to select all the affiliations under which the author has published.  This step helps to differentiate people with the same name.
  5. Click FINISH.
  6. On the right side, click "CITATION REPORT".  Look for the h-index, along with other statistics of the author.

This morning, it occurred to me that it might be fun to make a list of the h-indices of recent Timoshenko medalists.  The results are listed below.

  • 2007 – Thomas J.R. Hughes  57
  • 2006 – Kenneth L. Johnson  28
  • 2005 – Grigory Isaakovich Barenblatt  15 
  • 2004 – Morton E. Gurtin  38
  • 2003 – Lambert B. Freund  42 
  • 2002 – John W. Hutchinson  67 
  • 2001 – Ted Belytschko  60
  • 2000 – Rodney J. Clifton  19
  • 1999 – Anatol Roshko  21
  • 1998 – Olgierd C. Zienkiewicz  53
  • 1997 – John R. Willis  37
  • 1996 – J. Tinsley Oden  35
  • 1995 – Daniel D. Joseph  36
  • 1994 – James R. Rice  60
  • 1993 – John L. Lumley  29
  • 1992 – Jan D. Achenbach  31
  • 1991 – Yuan-Cheng B. Fung  37
  • 1990 – Stephen H. Crandall  13
  • 1989 – Bernard Budiansky 29

Unfortunately, the results are path-dependent.  That is why I have given my steps.  Also, the results are inexact.  For example, several of Jim Rice's most cited papers were missing from the search results.  The same problem might also significantly affect other medalists.  If you know a more effective procedure to determine the h-index, please leave a comment below.

I'm not sure what I have learned from this list of numbers.  But if you have seen any pattern or have had any related experience, please share it with us.

Update on 23 November 2007.  The above steps often underestimate the h-index.  See a comment below.  I'll update the list as better approximations become available.

Update on 27 November 2007.  Finding the exact h-index of an author is indeed not easy.  Following this post, a number of people have pointed out pitfalls to me:

  1. ISI "General Search" does not include all journals.  If an author published a highly cited paper in a journal not included in the "General Search", the h-index of the author will lose 1 point.
  2. ISI sometimes includes a paper without the address of the author.  If you use both the name and the address of an author to locate his papers, ISI often underestimates.
  3. ISI does not differentiate authors with the same name.  If you use the name of an author to locate her papers, ISI often overestimates.
  4. ISI sometimes lists the same author under different names.  For example, I am listed as Suo Z and Suo ZG.  A way to filter papers is to search Suo Z*.  But then you run into Problem 3, and add papers by Suo ZK to my list.
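The wildcard filtering of Problem 4, together with the manual cleanup of Problem 3, can be sketched offline.  A hypothetical example in Python -- the records, names, and citation counts below are invented for illustration:

```python
import fnmatch

# Invented records: (author name as indexed, times cited).
records = [
    ("Suo Z", 120),
    ("Suo ZG", 95),
    ("Suo ZK", 40),   # a different author caught by the wildcard
]

def select(records, pattern, exclude=()):
    """Keep records whose indexed name matches the wildcard pattern,
    minus names known to belong to other authors (Problem 3)."""
    return [(name, cites) for name, cites in records
            if fnmatch.fnmatch(name, pattern) and name not in exclude]

# The wildcard gathers all name variants; the exclude set removes
# the known false positive by hand.
print(select(records, "Suo Z*", exclude={"Suo ZK"}))
```

The wildcard solves the name-variant problem but reintroduces the same-name problem, so a manual exclusion list is still needed.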

I have not found a procedure that avoids all the above problems.  Given that the utility-to-effort ratio of the h-index seems uncertain, I have settled for an approximation and modified my procedure as follows.

  • Click General Search.
  • Enter the name of an author, taking care of Problem 4 by using Belytschko t*, for example.
  • Leave all other boxes of the form open.  Click Search.
  • Click Citation Report.
  • Go through the list of papers.  Eliminate papers not written by the author.  Include the papers just below the line of the h-index until the definition of the h-index is met.  That is, the author has h papers each cited at least h times.
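The counting rule in the last step is easy to automate once the citation counts of the filtered papers are in hand.  A minimal sketch in Python (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers each cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative (made-up) citation counts for one author:
# the 5 most cited papers each have at least 5 citations.
print(h_index([2175, 310, 48, 12, 9, 3, 1]))  # -> 5
```

Sorting in descending order makes the definition direct: walk down the list and keep going while the paper at rank n has at least n citations.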

Following this new procedure, I have updated the h-indices listed above.

Of course, I have not addressed the more basic issue:  why should we use the h-index in the first place?  Should we use some other metric?  See Molinari's comment.  I have not studied the various metrics myself, and will leave the task to others.


Pradeep Sharma's picture


Well said; as long as one does not over-interpret, such numbers may be informative. Of course, (most) administrators do not pay heed to the cautionary statements (e.g. consider the misuse of the impact factor to assess the "impact" of the publications of individual scientists).

Some of my materials science colleagues use the h-index extensively. It is generally accepted among them that anyone with an index greater than 10 is making an impact on his/her field. While I cannot comment on such a specific assertion, I would say that it is quite difficult for ANY researcher to reach the double-digit h-index values that your list exhibits. Of course, given the impact these individuals have had on their fields, this is hardly surprising. It is worth pointing out that the h-index is biased against early-career researchers.

I became interested in such metrics (and on occasion dismayed by their misuse) when I was preparing my tenure package. During that time I came across the original article by Hirsch on this topic (I believe the symbol h stands for Hirsch). In this article, apart from presenting the h-index, he reports its value for several Nobel laureates in physics. He finds a median index between 35 and 39. Rather interestingly, the distribution he shows is not too different from your own list.

Zhigang Suo's picture

Pradeep:  Thank you for your comments.  I jotted down some quick notes as I thought about your comments.

  1. It makes no sense to use the h-index in hiring fresh PhD graduates, and probably makes no sense to use it in tenure decisions, either.  The time is just too short for papers to get cited.  I think you are right.
  2. The h-index might become useful in considering senior appointments and in considering awards.   While a high h-index may not translate to high impact, it seems that a particularly low h-index should be justified with some other evidence if one wishes to make a case of high impact.
  3. Some practices might help to alleviate concerns about using the h-index.  For example, we identify a field, and list 5 people as the comparees of the candidate.  We then ask the referees to comment on the comparees and suggest new ones if they wish.  The h-indices are listed for all comparees, along with the years in which the candidates gained their PhDs.
  4. Of course, the list of h-indices is just a very small part of the case report.  It is impossible to reduce the accomplishments of a person to a single number.
  5. The h-index is perhaps still too new for people to have a good intuition about it.  Can it be easily gamed?  Will it affect the publishing practice of researchers?  
Ting Zhu's picture


It is a good strategy to use affiliations to filter out the papers that are not written by the person you are checking. But some early papers could be missing. For example, on the Web of Science there is no affiliation associated with Jim's most influential paper, on the J-integral. I don't have a solution to this problem yet.

Author(s): RICE JR
Source: JOURNAL OF APPLIED MECHANICS 35 (2): 379-& 1968
Document Type: Article
Language: English
Cited References: 25      Times Cited: 2175     

BTW, I found an interesting study that correlates people with a high h-index and Nobel prize winners. Below is a list of the top 20 living chemists by h-index; # indicates a Nobel laureate. Five of the top 10 have already won Nobel prizes. Your Harvard colleagues, Whitesides and Karplus, could be future winners if the h-index is a good predictor.


Rank Name h-index Field
1 Whitesides, G. M. 135 Organic
2 Corey, E. J.# 132 Organic
3 Karplus, M. 129 Theoretical
4 Heeger, A. J.# 114 Organic
5 Wüthrich, K.# 113 Bio
6 Bax, A. 112 Bio
6 Hoffmann, R.# 112 Theoretical
8 Lehn, J. M. # 107 Organic
8 Schleyer, P. R. 107 Organic
9 Scheraga, H. A. 105 Bio
10 Bard, A. J. 104 Analytical
11 Schreiber, S. L. 102 Bio
12 Khorana, H. G.# 98 Bio
13 Fersht, A. R. 97 Bio
13 Gratzel, M. 97 Physical
15 Zare, R. N. 96 Physical
15 Lippard, S. J. 96 Inorganic
15 Trost, B. M. 96 Organic
18 Clore, G. M. 95 Bio
18 Gray, H. B. 95 Inorganic
18 Marks, T. J. 95 Inorganic

Zhigang Suo's picture

Dear Ting:  Thank you very much for the tip.  I searched Jim Rice again, now without the affiliations.  I got the familiar top papers, but I also got papers by other authors also named J.R. Rice.  After eliminating them, I obtained Jim's h-index.  Of course, this number may still be inexact.  It seems that finding an exact h-index is not easy.

I have also searched Budiansky without affiliations.  In his case, there is no paper from other authors of the same name. 

I have modified Rice's and Budiansky's numbers in the post.  I'm sure that the numbers for other medalists are also inexact.  Please leave a comment below if anyone finds a better approximation.  I'll update the post accordingly. 

MichelleLOyen's picture

I almost wonder sometimes if we need a unique identifier for each scientist if these types of calculations are to become critical for assessing progress.   Many of us have changed our physical location a number of times; people have common names.  In addition, many female scientists have changed their names.  If someone wanted to calculate my h-index they would be incorrect if they did not know that my early publications were made under a hyphenated married name.  While I could go into a database and highlight my own publications under two different names, someone would not be able to make this assessment independently and calculating someone else's h-index is thus potentially problematic!

Zhigang Suo's picture

Dear Michelle:

I totally agree with you.  After going through the trouble of finding the h-indices for faculty appointments, I wonder whether anyone can find a close approximation for another individual in a hurry.  While many people have mixed feelings about the h-index, it has been gaining popularity.  To protect yourself against gross errors made by your potential employers, you might as well take the trouble to find the h-index and other citation statistics yourself and simply list them in your resume.  After all, we have been listing all kinds of miscellaneous data in our CVs.  What's wrong with listing some more?  Everything Is Miscellaneous.

Also, I like your idea about a unique identifier for each author.  Every edition of a book has an ISBN.  The ISBNs for all editions of the same book can be obtained by the web service xISBN.  Every journal paper now has a DOI.  In the US, each individual has a social security number.  So something like this is clearly doable.  Who will do it:  ISI, Elsevier, Google?  Will OpenID serve the purpose?
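Incidentally, identifier schemes like the ISBN are designed so that typos are caught mechanically by a check digit.  A small sketch of the ISBN-13 rule: the digits are weighted alternately by 1 and 3, and the weighted sum including the check digit must be a multiple of 10.

```python
def isbn13_check_digit(first12):
    """Check digit for the first 12 digits of an ISBN-13.
    Weights alternate 1, 3, 1, 3, ...; the check digit brings
    the weighted sum up to a multiple of 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# A real example: the full ISBN is 9780306406157.
print(isbn13_check_digit("978030640615"))  # -> 7
```

Because the weights 1 and 3 are both coprime to 10, any single mistyped digit changes the weighted sum, so the stored check digit no longer matches.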

At least each iMechanica user has a unique user number!  You are number 19.  The newest user today is number 4563. 

MichelleLOyen's picture

Great points about the ISBN and DOI numbers.  It's interesting: I was in Scopus yesterday on a fact-finding mission (I caved in and did exactly what you suggested about a month ago, and added total citations and h-index to my CV!) and not only do I have to deal with the former married name, but I also find my more recent papers with my name spelled incorrectly (like "Michalle").  I have no fewer than six different names in the Scopus system even though I am a relatively young scientist--it can only get worse!  I have not thus far figured out a mechanism for making corrections to the databases when there is a silly problem such as a misspelling.  Scientists have a vested interest in identifying their own work, so it would be very easy to "tag" your own papers in a database and request corrections.  This is definitely a feature that should be incorporated in the databases, and a unique number (perhaps including encoded information about the field in which you work) would be a welcome solution to this problem.

Zhigang Suo's picture

Dear Michelle:  Engineers at the Web of Science might well have been reading your comments.  They have recently developed ResearcherID.  Here is my ID:  B-1067-2008.  Here is John Hutchinson's:  B-1221-2008.

At this moment, I only know how to include papers indexed by the Web of Science.  On this service you can get all the basic statistics, such as the number of citations of each paper, the total number of citations, and the h-index.  You can also rank order papers in terms of the number of citations.  Best of all, all the statistics can be made open.

Update 15 April 2006.  Here is how to obtain your ResearcherID

Zhigang,  Getting good numbers is next to impossible.  But you can improve your results a bit by doing a cited reference search in ISI.  This will pull up all sorts of versions of citations that ISI could not figure out how to assign, but which you can easily assign yourself (if you are patient) -- this includes authors mis-citing volume numbers, citing preprints, etc.  Using ISI's automatic h-index calculation is the easy way, but it gives very distorted numbers for many authors, both for the typo reason and because it does not include papers not indexed by ISI.  The cited reference search still suffers from this last problem too, and thus misses some very important papers (which I know of by others).

Another problem is that citation counts can falsely make some papers look important.  One nice example is a paper written by H. Einstein (Albert's son) on the Navier-Stokes equations.  He made an error in the paper, and many people took delight in being able to point out an error made by an Einstein -- the rumor I heard is that the paper was cited hundred(s) of times.  (N.B. I've never checked whether this anecdote is true, but it does make a good story.) -sanjay


Prof. Dr. Sanjay Govindjee
University of California, Berkeley

jfmolinari's picture

In case it is difficult to obtain/recover all papers of an individual or an institution, we have proposed a new indicator.

We call this indicator h_m, and it will be published in Scientometrics shortly in the form of two papers (an experimental paper followed by a mathematical one). The first has been posted on iMechanica:

where you can find a link to "paper.pdf"

Essentially, we have proposed a normalization of the h-index by the number of papers N.

This normalization seems to be universal...


A practical implication is that even if one cannot recover the exact N, then as long as a significant number of papers is available, h_m can still be computed.

(Important note: h_m can be computed for established scientists who have a large number of papers.)


The h-index, which expresses raw visibility, and h_m, which expresses normalized visibility (say, average visibility), are indices that could work hand in hand.

It could be interesting to compute h and h_m for a set of scientists.
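A reader who wants to experiment before the Scientometrics papers appear could play with a power-law normalization of this kind.  The sketch below is hypothetical: the functional form h / N**beta and the exponent 0.4 are placeholder assumptions for illustration, not the published definition of h_m.

```python
def h_m(h, n_papers, beta=0.4):
    """Hypothetical normalized index: h divided by N**beta.
    The exponent beta is a placeholder assumption; the published
    definition of h_m is in the Scientometrics papers."""
    return h / n_papers ** beta

# Two authors with the same h but different output sizes.
print(round(h_m(40, 100), 2))
print(round(h_m(40, 400), 2))
```

With the same h, the author with the smaller output gets the larger normalized score, which is the intent of normalizing visibility by the number of papers.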



Jean-Francois Molinari

Zhigang Suo's picture

Dear Jean-Francois:

Thank you very much for this comment.  You call attention to a basic question:  why should we use the h-index?  See the update to my initial post.  A number of people have expressed interest in seeing your metric for this list of medalists.  Would you be willing to compile and post the list?

jfmolinari's picture

Thanks for your comment. I think it is a very interesting topic.

But, I'd rather not get involved in ranking individuals, as it is a complicated and sensitive issue.

However, I am interested in ranking of institutions.




Jean-Francois Molinari

Xiaodong Li's picture

Thanks for the discussions. I found this link. "The National Research Council (NRC) uses citations from the Science Citations Index as its main criterion for evaluation since citations indicate usefulness of work published in peer review publications." This might be another reason why so many universities use citations.  

The problem with any such index is that what it measures is the impact on other people---not on reality.  Philosophically, that is what I find to be the most problematic aspect... One can never tell when it will lead to that degenerate situation which is best described as "social metaphysics" (to borrow a term from the American philosopher Ayn Rand).

Of course, this does not mean that other people should not come into the picture at all in formulating a criterion of the worthiness of a theory or scientist.

Informally and personally, I do use a people-oriented criterion, but the criterion is such that people enter into it in a rather innocent way---as students. And, of course, the validity of the criterion depends on a tremendous amount of context, not all of which I could even begin to spell out. Yet, if taken in the right sense, it might be useful. So, might as well share it here....

I do judge, for my own personal reasons, the worth of theories, and whenever doing so for a given theory, the main question I ask myself is the following: Will this theory make it into an undergraduate textbook?

The secondary questions follow. If yes, when? In what kind of a subject? At what level (i.e. year in college)? And how much part of that subject will it come to occupy? When? 10 years later? 25? 50? And once it does so make it there, for how long will it stay there? Why?

That is the criterion I actually use.

Please note, we assume many things here, starting with the very assumption that the general standard of education will remain the same---that the system will not deteriorate into, for example, teaching plumbing to the undergraduates of engineering under the pretext of "enormous practical utility in the practice of their art" or "financial attractiveness," etc. I think you get my point.

But, overall, the criterion does help me a lot. More than any impact factor. That's because, indirectly, it forces one to integrate the theory with the rest of one's knowledge. And that is not easy to do.

For instance, many things we nowadays take as very "obvious" made their way into texts only recently. Both dislocation theory and boundary layer theory were absent in the 19th century. (In fact, atomic theory was still gaining acceptance when Einstein wrote the 1905 paper on Brownian movement.) These theories began life as research papers, and went on to become indispensable parts of the current mainstream theory.

Between the two, it also seems that dislocation theory will stay for a longer time than boundary layer theory will. (The availability of extra computational power will eventually turn boundary layer theory into a mere footnote.) Naturally, dislocation theory would have the greater impact.

The criterion works just fine for me. Hope it will work for you all too.
