Integrity of Scholarly Publishing under Attack

This issue of CAM Digest re-published D.N. Arnold's article "Integrity under Attack - the state of scholarly publishing". The original article is available at:

http://www.ima.umn.edu/~arnold/siam-columns/integrity-under-attack.pdf

I feel the discussion (and indeed the journals mentioned in the article) is quite relevant to the mechanics community. While impact factors carry little meaning, especially in mathematics, their role may not be easily undermined by abusive publication/citation practices in the short run...

 X. Frank Xu     

Comments

azadpoor

"One conclusion that I am ready to draw is that we need to back away from the use of bibliometrics like the impact factor in judging scientific quality."

All the problems he mentions are real. However, I do not think it is wise to abandon scientometric measures altogether. The first reason is that no practical replacement is immediately available: expert opinion is good, but it is not easy to obtain and is also open to fraud. The second reason is that citations show something, even if they do not show everything. A paper with a citation count above the average (?) of its field is not necessarily a good paper, but a paper cited well below the citation average of its field is probably a below-average paper. That much can probably be said.

Not necessarily. The number of citations is strongly influenced by the popularity of a particular model/field of study at a particular point in time. Citations also depend on the availability of the journals in which a paper is published and on the status of the author. The reason is that, given the tons of papers published each year, people just don't have the time to read every relevant paper on a subject. The shortcut is to look at the author's name and institution and decide on that basis whether a paper might be worth reading.

In general, the first few publications on a new subject of interest tend to be cited more because of the need to establish priority. That does not necessarily mean that these papers are of high quality, as anyone doing research knows.

I personally know at least three academics who have published amazing, but barely cited, papers. The reason is, partly, their lack of willingness to get out and sell their ideas and their brand.

-- Biswajit


Pradeep Sharma

Frank,

Thanks for posting the article. It is quite interesting, and I forwarded the link to several of my colleagues who don't typically frequent iMechanica. Sadly, I am hardly surprised. Notions such as the impact factor can be useful indicators if viewed and weighed along with other information. By themselves, impact factors (or, for that matter, any number purporting to represent quality) can be gamed and abused. I believe that the scientific community itself has (more or less) a fairly accurate perception of the quality of different journals (and had one long before such bibliometric concepts were in vogue). The real problem is the excessive and indiscriminate reliance on them by the current, and presumably future, generation of university and research administrators.


In my view, many such debates center on the confusing role of citations/impact factors:

The immediate role of citations/impact factors, obviously, is to measure the degree of attention that authors/journals have received. Whether people want to interpret such a degree of attention as an index of quality (exclusively important, highly important, important, somewhat important, slightly important, or not important at all) is partly a subjective issue (back to a kind of expert opinion again).

One interesting opinion is that, since citations/IFs are measurements of attention, editors/authors have every right (and, for an editor, actually a responsibility) to promote themselves to attract more attention, by all kinds of promotional means, as long as these practices are not unethical in the common sense. The problem is that the line between self-promotion and gaming/manipulation can be very subtle.

I recently received a doctorate in biomedical engineering from a major research university in the U.S., and my dissertation research was actually more biological than engineering, i.e., it was wet science in a medical research lab. I therefore necessarily became familiar with biomedical research and its literature, along with the issues of plagiarism and falsified data. Perhaps because experimental biology is still more complex and inexact than engineering, which deals with inanimate objects (not that the latter lacks complexity or imprecision), plagiarism and data manipulation seem to occur more often in biological or biomedical science than in engineering science or physics and, hence, are publicized more frequently.

So, my point is that the integrity of scholarly publishing in engineering under attack, while a grave concern, probably has much more to do with plagiarism and citation issues than with the validity of the published results, which must strictly follow the well-established laws of physics and mathematical rules in logical steps.

On the other hand, I do have personal experience of witnessing plagiarism by a well-known engineer who practically copied, verbatim, a section from an older, published book by another author and included it in his own published textbook. Regardless of field, then, those who are considered respected experts (gurus) seem to be allowed to get away with questionable conduct more than their lesser peers. A case of professional privilege, or abuse?
