Stepan Prokofievich Timoshenko

Andrew Norris

A few days ago I messed up while trying to put a nice picture of Timoshenko into the images section of iMechanica when, by mistake, I somehow turned it into a blank comment. In partial recompense for my error, I will add a real blog entry now.

Actually, this is a topic that should be of interest to mechanicians everywhere. By way of introduction, I came across the Timoshenko picture after reading an unusual non-technical article, which I'll say more about in a moment. The author, A. N. Guz, is from the S. P. Timoshenko Institute of Mechanics, part of the National Academy of Sciences of Ukraine, Kiev. Timoshenko grew up in the Ukraine, and for several years (1907-1911) was a professor at the Kiev Polytechnic Institute. The Ukrainians are rightly proud of Timoshenko, and it seems that Applied Mechanics has been and remains vibrant in Kiev and the Ukraine. With that said, now to the article:

On the evolution of the scientific information environment, A. N. Guz, International Applied Mechanics, Vol. 42, No. 11 (November 2006), pp. 1203-1222.

A. N. Guz, in addition to being a prolific author himself, is the editor of the journal International Applied Mechanics (IAM), so the paper can be viewed as a 20-page editorial on the state of affairs of science publishing. The emphasis is on how the rapidly changing world of journal publishing is affecting IAM and its community of authors, who are predominantly from the Ukraine and Eastern Europe. As such, it's a very interesting perspective.

The paper pretty much follows the outline given in the Abstract:
"Problems arising in the development of the scientific information environment are considered. Emphasis is on the building of international databases for periodical publications and monographs, evaluation of publications and journals, and objective citation. The activity of the S. P. Timoshenko Institute of Mechanics and the journal International Applied Mechanics in this field is exemplified. Suggestions on how to ensure objective citation in articles are discussed."

The author focuses on three aspects of the emerging scientific information environment:
(i) large databases that are globally accessible;
(ii) a system for the evaluation of scientific publications (meaning peer reviewed journals); and
(iii) assurance of objective citation to scientific publications.

The first two are very familiar to iMechanica people. By databases the author means not just search engines (e.g. Web of Science), but in particular the large publishing houses like Elsevier and Springer, the latter of which publishes IAM. Actually, Guz views Springer as a benefactor, since it provides IAM and other journals in the Ukraine with international visibility they would not have otherwise - a perspective that we do not hear much (but that's another discussion for another thread!). One point that Guz emphasizes is the dominance of the English language in global scientific publication - he notes that translation into English is a necessary though not sufficient condition for a journal such as IAM to achieve international prestige. That raises the second issue of how journals are ranked - and Guz gives a creditable summary of the current practice of using impact factors, etc. He provides data focused on journals from Ukraine, noting that in 2005 IAM was in the top 10 of all mechanics-related journals in terms of impact factor. How many of us knew that? Not me!
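
(For reference, the impact factor behind such rankings is just a two-year citation ratio; for 2005 it reads

    \mathrm{IF}_{2005} = \frac{C_{2005}}{N_{2003} + N_{2004}},

where C_{2005} is the number of citations received in 2005 by items the journal published in 2003 and 2004, and N_y is the number of citable items it published in year y.)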

All this is just warming up to the main point of the article - which is that the system of ranking and rating journals based on impact factor is critically flawed because of what Guz calls "non-objective citation". What is this? Basically, he means the tendency of authors not to do their homework in searching the literature. He states:
"...the non-objectivity of citation in some scientific publications has historical aspect and seems to have become stronger in recent years with increase in the number of journals.    Thus, it is the author’s believe that the problem of non-objective citation in periodical publications exists, is vital, and is a vulnerability of the scientific information environment because there are yet no methods to resolve it."

More specifically, the author is upset at the lack of citation to articles in IAM and similar journals. He makes his point with two specific examples of "non-objectivity" in a well known journal - one higher up than IAM in the ranking of mechanics journals. His two examples are papers on topics he is familiar with, and he goes into great detail on the many papers of his own in the open literature, in English, that, according to him, the authors should have cited. By not citing his papers, Guz argues, the net effect is that journals like IAM, where many of his papers appeared, get the short end of the stick. Fewer citations => lower impact factor => lower recognition, etc. - exactly what journal editors fight to avoid.

I sympathize with the issues that Guz raises. I have seen it in papers I have reviewed and in papers I've handled as an editor, and am probably guilty of non-objectivity myself. How many of us have failed to adequately research the literature? It's hard enough tracking down relevant papers in journals you know, but now there are many others easily within reach (online anyway) that were not previously accessible. IAM is an example. Others in my recent experience include the Journal of Applied Mathematics and Mechanics (not the same as ZAMM, which has the same name in English) and the Russian Journal of Mathematical Physics. These are examples of journals where I have found - thanks to the web - articles that I would not have come across ten years ago.

After making his case that a problem exists, Guz finishes by suggesting a means to help ensure objective citation in periodicals.   Here is his proposal in his own words:

"In the author’s opinion, objective citation in scientific and science-and-technology publications can most actively be promoted by introducing mandatory elements into the structure of papers. Such elements, for example, are abstract or summary and keywords, which are necessarily present in papers of the overwhelming majority of scientific and science-and-technology journals.  

"A general measure to promote objective citation may be the following element in the paper structure:

"Novelty of the Results and Objective Citation in this Paper are Confirmed by Database ( )

"This could require each author to indicate in parentheses the international database (s)he used to do a literature search, the author being fully responsible for the results of this search. This requirement would certainly complicate the preparation of papers for publication, on the one hand, and make authors more responsible to check the novelty of their results and to provide objective citation, on the other hand, which is extremely important for the world’s scientific community in the evolving scientific information environment. "

Would such a proposal take off? Authors already have to jump through many hoops when submitting papers - and we now do all the manuscript preparation that the publishers used to do. Some journals require you to tick off endless questions before finally submitting the paper. What is achieved by adding one more hurdle? But I think this type of statement would be worth including. It makes authors stop and think, and it gives the reader a bit more confidence in the paper. It also lessens the burden on the reviewer - or, I should say, decreases the responsibility of the reviewer to be a literature detective on top of everything else. So it might even ease the problem of finding competent reviewers.

Pradeep Sharma

Andrew, this is an interesting post and article (by Guz). Like John, I am sympathetic to his viewpoint but skeptical about the viability of his proposal. I strongly believe that the referee must do a comprehensive literature check on the manuscript under review. If we adopt the type of solution proposed, people who are habitually negligent will find a way out... yet the mere existence of "an apparent solution" might make the referees more complacent. In the end, I believe, this issue may be resolved by editors strongly emphasizing ethical and rigorous citation practices and consistently choosing responsible referees who take the time and effort to check for them.

Michelle L. Oyen

Very interesting comment... it always seems to come down to the bandwidth of every participant in the process. Clearly the authors spend the most time on a paper's preparation, while the reviewers and editors spend some period of time considering the authors' final product. My question to Pradeep, based on his comments, regards the suggestion of a comprehensive literature check on the manuscript under review: this seems like a remarkably time-consuming proposition, especially considering the large proportion of the literature that is still buried in hard-copy-only form in brick-and-mortar libraries. Is this literature search truly the responsibility of the reviewers for every single paper? If each manuscript in the literature is reviewed by 2-3 reviewers, a larger and larger proportion of the total scientific "output" of a reviewer would be in confidential comments for which the reviewer receives no "credit" in terms of their own actual scientific output! It is no wonder that it becomes increasingly difficult to locate an appropriate set of both technically competent and willing reviewers for any paper!

The other problem that always arises in this context is the purposeful selection of papers to cite in any work based on relevance, something that involves an aspect of opinion.  I have been frustrated at times when discussing with colleagues why they chose not to cite some work that they were clearly aware of... but we all have to make these decisions in every paper we write.  The way the literature is expanding, there are literally hundreds of papers that could be cited as relevant to any given topic, and in some cases (say, for example, bone biomechanics) there are literally tens of thousands.  We as authors still make scientific judgement calls about which papers are most important, most relevant, and most usefully cited in any work we write.  Looking at the paper from the outside, it may not always be obvious to an author why his or her work was "slighted" by the authors of a new manuscript by not being cited, but there could be a reason.  Only in an open dialog between authors and reviewers or authors and other scientists in the field would it become clear why certain papers have been discounted, skipped over, or otherwise left off the references list.  Not every paper on a given topic is a good one--technical errors creep through peer review all the time--and with the sheer breadth of the literature today, not every paper out there contains any useful contribution to the subject. 

(An amusing side note on this topic, I recently had a reviewer comment that criticized our paper for having more than 50 references, and suggested 30 was more "standard" and we should cut our list.  We argued to the editor that we stood behind the idea that these papers were indeed directly related to our own and should remain in the list... the jury's still out on that one!)


Pradeep Sharma

Michelle,

Indeed, good refereeing is time consuming, and in my opinion it should be. Not always, but most of the time, I decline to review papers on subject matters I do not consider myself an expert in - and to my surprise I have found that this is not typical practice. In the event that I do agree to review a paper, I routinely check the Science Citation Index and Inspec databases for duplication, incremental contributions, and oversights. Disappointingly, most of the reasons for the rejections I have recommended fall into these three categories. In my experience, except for some high-end journals (e.g. JMPS in the mechanics community), reviewers are not always carefully chosen. A "true" expert in the area of a given paper should know the literature like the back of her hand and accordingly will need little work to check the literature.

Regarding intentional exclusion of references, that is certainly perplexing to me as well, but I don't worry too much about it... scholarly publication is a very Darwinian process. Researchers who routinely engage in unfair or sub-standard citation practices will weed themselves out by sullying their reputations!

Pradeep,

We would all like our work to be reviewed by true experts. But how many such experts are there? I have reviewed a handful of papers - none of which were truly in my areas of expertise. The amount of knowledge needed is so huge that only very senior or extremely bright researchers can claim to have complete knowledge of any particular subfield of mechanics. And even then it will have to be a very narrow field.

Editors typically send papers to people they know or to people cited in the manuscript. And very often it's only a small group of papers that are cited by others. So some people are flooded with review requests whereas others don't get asked even if they possess the needed expertise in that subject. It's a catch-22 situation.

I think a community such as iMechanica can provide a list from which editors can pick reviewers rather than having to depend on personal contacts or lists of people who have contributed to their particular journal. 

As far as checking citations is concerned, it's next to impossible to check all of them. For instance, citations in the form of a book (with no specific page numbers) are quite difficult to check. So are citations from journals in India, China, South America, or Eastern European countries. Since 2000, the University of Utah has cut most of its mechanics journals in favor of medical/biological science related journals, so I have to order most articles published after that date via interlibrary loan. Those requests take two weeks to arrive in most cases, and for some I have to pay. There's no way I'm going to pay for an article that I am interested in only in the context of refereeing a paper.

Also, the tendency to publish early and often means that the same material may be published thrice in the same year with subtle modifications.  It's hard for a reviewer who does not directly work with the authors of that cited work to know exactly what's right and what's wrong with the current and past work.  Particularly since authors generally fail to mention why they bothered to publish similar material in multiple papers.

The sense of patience that the academic/scholarly community typically displays can be a virtue. Not jumping to conclusions, waiting to patiently consider someone's viewpoint - that is a good way for scholars to treat people.

But there is a difference between people and ideas.

Patience is no virtue when it comes to judgment of ideas--especially the very good and very bad ones. In reference to judgment of ideas, impatience *is* a virtue. ... (For that matter, even when it comes to judgment of *people*, one has to remember, in the long run, all of us are simply dead.)

It is obvious that no new ideas would spread at all if *all* people "relied" on the Darwinian mechanism.

So, the really interesting part is not the assurance that Darwin's insights are applicable to the realm of researchers and their work. The really interesting part is: what do people do, in reference to ideas, if they have enough individual initiative? What tools may ease their work? What systems may they follow? That's the idea here...

Michelle L. Oyen

I'm still not sure I can agree with Pradeep on putting the burden on the reviewer. I think there's a problem here in simple economic terms: if the bottom line for appropriate literature searching and citation rests with the reviewer instead of the authors, then there needs to be a reward system for reviewing. Right now, a paper's publication primarily benefits the authors. Spending two days on a review and doing the sort of careful literature search that Pradeep suggests does not clearly benefit the reviewer in the current system. Therefore, in our Darwinian struggle, it becomes difficult to justify on time grounds.

I would suggest that it's a bit tricky to claim "reviewers are not always carefully chosen"... if the three people best suited to review a work in the opinion of the editor all decline, the editor has no choice but to go down the "depth chart" and find someone else! There are many reasons why someone might decline to review, and I'd suspect it's more common amongst the more senior scholars who must be inundated with requests. In a scientific field with increasing specialization, it's sometimes difficult to locate an exact "match" for the subject of a paper, and that goes double if the paper is actually novel or ground-breaking, such that there are not already a dozen similar works in the literature!

Finally, I was quite carefully trying to construct a scenario in my comments above in which researchers were engaging in careful selection of works to cite on scientific grounds (which may or may not be clear to the authors of other works who feel slighted by not having been cited) and not at all addressing the scenario of "unfair or sub-standard citation practices".  I'm guessing that my point did not come across sufficiently clearly.  However, I truly believe that these sorts of decisions are made all the time and not just for sneaky or devious reasons: again, if there are literally hundreds of papers on a subject, and a reasonable list of references is needed for a new work on some aspect of a topic, there is simply not room to cite every single paper and it comes to the authors' discretion to identify which of the hundreds are the "key" papers in a field.  This sort of decision will inevitably involve some level of scientific judgement but I would rail at the thought that any case of a judgement call was automatically unfair or devious! 


Pradeep Sharma

Michelle,

I don't believe that the burden of literature search should rest solely on the reviewer... it is of course primarily the author's responsibility. However, at least by my own personal experience, I remain unconvinced that referees cannot do a credible job of checking the literature. Regarding reward for the referees and the lack of incentive for them to put in too much of their time, I acknowledge that you have a point, and in fact we come full circle to the discussion initiated recently by Eric on this topic.

Regarding "careful choice of referees": again, I can only speak from personal experience. I have found that with some journals the selection of the referees is mixed... some clearly are experts and put in constructive thought, while in some cases I suspect the reviewer has not even bothered to read beyond the abstract. However, in some selected top-notch journals I consistently find that the reviewers are very carefully chosen, and I always come away impressed with their reports (whether I agree with them or not). This is what I meant by my statement, "reviewers are not always carefully chosen". I certainly sympathise with the editors who have made attempts to find good referees but were turned down. Having not served in a major editorial capacity, I guess I cannot fully understand this situation... perhaps mechanicians who are editors can comment?

I did misunderstand your comment about "intentional selection of citations"; I assumed you were referring to selections that are ill-intentioned. Of course, as you point out, in the general case, due to finite space and considerations of relevance, we all have to make judgement calls and exclude many references. I don't consider the latter to be either sub-standard or unfair - that is the norm.

Andrew,

I'm sympathetic to the idea, but I suppose I'm also fairly skeptical of something like this getting traction. In the end, all one would have is that statement and no way to verify that the author had actually carefully looked through the manuscripts in the listed database to cite the relevant ones.

Let me give you an example from my own experience. Recently we had put together what I'd thought was a very novel piece of work, and I also felt that we'd been very diligent in identifying the background literature - all the more so as this was related to a problem I'd struggled with for several years, so that I was almost certain I'd read everything there was to read. We wrote the paper up and, fortunately, I sent it to a colleague with expertise in the area before submitting it to a journal, to get his opinion. Well, as it turns out, he had done some related work that was only published in a conference proceedings. Indeed, some of our 'novel' results were there, albeit in a slightly different form. There were enough new results for us to publish the work, but the important point remains: even due diligence can miss some important works.

In this day and age of rapid review & publish, the burden falls on the community in my opinion.  We need to train people to do their homework as best they can.  When important works are overlooked, we need to call attention to it (to some degree, iMechanica is a great place for doing so) and try to prevent it from happening in the future.  We should commend those researchers who do this well.  

It also seems to me that there should be a mechanism for electronic versions of manuscripts to be revised after acceptance to include important reference material, particularly if something like this comes to light.  Why shouldn't the life of these manuscripts be more fluid?  

Zhigang Suo's picture

I'd like to add to John's comments.  An author does not cite a prior paper for many reasons, such as

  1. The content of the prior paper is well known to his audience.
  2. While the prior paper is on the same broad subject of his own paper, the prior paper does not contribute to the new things in his own paper.
  3. He does not know the paper.
  4. He wishes to hide the fact that his paper contains nothing new.

Perhaps most of us believe that, at least in principle, the author is right about 1 and 2.  It would be a terrible waste of time of the author and his readers if every time he does a differentiation he mentions Newton and Leibniz, and tries to decide who invented differentiation.  It would also be pointless if every time he calculates the energy release rate of a crack he cites Griffith and Irwin.  In practice, however, whether a prior paper is well known to his audience or irrelevant to the new points in his paper can be subjective.

Point 3 is the focus of this thread of discussion.  How much effort should he spend on searching relevant prior papers?  Some compromise is obviously necessary.  He should not spend all his time searching prior papers, but he will lose credibility if he makes no effort to connect his work to prior papers.

Point 4 needs no further discussion, except that I wish to point out that it is the responsibility of the author to state what is new in his paper.  Once the reviewer sees this statement of novelty, she will apply her knowledge and spend however much time she wishes to address 1-3.

Whether a paper has lasting value or brings the author credit rests on a simple question: does the paper contain anything new? This question will be answered sooner or later by the community. Citation is one way to express the collective opinion; so is word of mouth. In our own time, websites like iMechanica can accelerate this process.

0. I believe Prof. Guz's suggestion will indeed go far, but only if he dresses it up appropriately--makes it presentable to the current Western audience.

Now, he may not realize it, but doing so is very, very easy these days! All that he has to do is to prepend the words "open source" to his proposal, that's all!!

Any proposal having "open source" in its title would fly high in the West today! In contrast, Prof. Guz does seem to be very old-fashioned; the adjective he uses is "objective". Now, "objective" is a concept that is no longer fashionable in the West... But the moment he chants the mantra "open source," people from NSF, Silicon Valley, Washington (DC), etc. are all likely to get impressed and go running after him... After all, these are the days of being proud (i.e. the antonym of being ashamed) of free texts, free journals, free software, free lunches...

1. OK. Recovering from that cynical streak in point 0 above...

I declare my in-principle support for Prof. Guz's suggestion.

An immediate practical difficulty I see is the absence of a well-organized list of search databases (including databases of separate journals) and of short-form names for use in articles.

But I think the short-forms and such are a relatively minor difficulty. There is no reason why we can't have a registrar of search databases and their short names, just the way we have the Internet domain-name registrar.
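
To make the analogy concrete, such a registrar need be little more than a lookup table from registered short names to full database identities. A minimal sketch in Python - the short-form names below are hypothetical, invented for illustration:

    # Hypothetical short-form names; a real registrar would assign these.
    DATABASE_REGISTRY = {
        "WOS":    "Web of Science (Thomson Scientific)",
        "INSPEC": "Inspec (Institution of Engineering and Technology)",
        "SCOPUS": "Scopus (Elsevier)",
    }

    def expand(short_name):
        # Resolve a short name, as it might appear in Guz's proposed
        # "Confirmed by Database ( )" field, to the full identity.
        return DATABASE_REGISTRY.get(short_name.upper(), "unregistered database")

    print(expand("WoS"))  # -> Web of Science (Thomson Scientific)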

More acute is the issue of easily affordable searches through the full texts of articles. Would it be possible for search databases to search through the full text and give sufficient indication of the surrounding material in the search results, even if they don't give the full text of the article? I think today's technology certainly allows this, but it needs to be implemented. Still, while the availability of full-text search is needed, its absence is not a show-stopper: the currently available searching capabilities are good enough.
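
Incidentally, what is being asked for here - the matched term plus an indication of the surrounding material, without the full text - is essentially a keyword-in-context search. A minimal sketch in Python, assuming plain-text article bodies (the article string below is invented for illustration):

    import re

    def kwic_search(text, query, window=60):
        # Return keyword-in-context snippets: each match of `query`
        # plus `window` characters of surrounding material on either
        # side, without exposing the article's full text.
        snippets = []
        for match in re.finditer(re.escape(query), text, re.IGNORECASE):
            start = max(match.start() - window, 0)
            end = min(match.end() + window, len(text))
            snippets.append("..." + text[start:end] + "...")
        return snippets

    article = ("The energy release rate of a crack is computed here by the "
               "compliance method, following the classical arguments of Irwin.")
    for snippet in kwic_search(article, "energy release rate"):
        print(snippet)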


2. Having said that, I also don't believe that the main problem that Prof. Guz indicates will disappear with his proposal.

A deftly and deliberately executed "error" of omission in citing scholarly references is hardly a new addition to the array of ways and means that humanity has already invented in pursuit of its lesser goals.


3. Further, on the brighter side of humanity, there always exists that possibility of two or more individuals reaching the same discoveries or inventions independently. And for the discovery/invention to be independent, they don't even have to be "almost contemporaries"--they could be separated even by centuries!

As a recent example, please see the history of wavelets. The concept has been invented and re-invented some 10+ times by various researchers from different walks of science and technology!

4. Then why insist on stating the database used?

An identification of the databases searched will help in establishing the quantum of intellectual effort that a given article *actually* effects.

It will also *help* referees--even though the final responsibility will still remain with the referees.

(Dr. Dolbow, if it would be possible to cheat a referee by blindly adding a search-database name, it is already possible to cheat him by blindly adding a citation. To demonstrate this, I am willing to cite one of your articles, without even reading it, in one of my forthcoming papers! :) )


5. Also, note that the issue of priority really has a far more burning relevance in the patent office - not in publication in scholarly journals. If the patent offices can find systems and procedures to deal with these things, why can't the scholarly community do the same?


6. Finally, what about some truly startling work that still could not find a place in *any* scholarly journal - whether Russian or American, coming from the formerly communist or the formerly mostly-capitalist countries?

This is neither a flame nor a populist question. The list of such works includes the first flight of a heavier-than-air machine by the Wright brothers, and the first comprehensive and consistent formulation of the first law of thermodynamics by Helmholtz (which he finally published privately in the form of a booklet, going financially broke in the process).

What about this kind of work--which doesn't get published in any journal?

Another reason to mention this point is that a lot of application-engineering work, even today, doesn't get published in any community resource at all. There are some objective reasons for this. Despite the ingenuity involved in, say, the design of a die, jig, or fixture, spelling out the particular context and the nature of the ingenuity itself would take such a long time that writing about it does not befit a scholarly journal.


7. Another, somewhat related point. If some *peer-reviewed* conference papers are freely available from an Internet site, and if that Internet site itself is also very easily located using the Internet search engines, then why shouldn't these papers be considered at par with journal papers?

Any ideas? Esp. in the context of Web 2.0 and the communications/media revolution?

Ajit,

You wrote:

"Dr. Dolbow, if it would be possible to cheat a referee by blindly adding a search database name, it is already possible to cheat him by blindly adding a citation."

I agree completely, but this just serves to illustrate the points that Pradeep and I have raised.  The responsibility ultimately falls to the reviewer to check things carefully.  The reviewer needs to be familiar with the pertinent literature.  If they are, then they don't need to see the database that the authors used.  If they aren't familiar with the pertinent literature, the database really isn't going to be much help. 

Dear John,

Thanks for your interest... (Now, please brace yourself to read through another long post by me!)

The thing is, it isn't always possible for the editors even to know who should be appointed as the most suitable referees, or for the referees so appointed to locate the pertinent literature. I speak from experience.

When I wrote the "Research Proposal" document for admission to my current PhD program, more than 12 guides turned me down because they didn't even know where to look for references in verifying the novelty of the approach I was talking about. And their difficulty did not arise because they were lazy or incompetent. The difficulty was simply due to the nature of the proposal - it took such a novel approach. These guides came from different institutions: University of Pune, three different Indian Institutes of Technology (Bombay, Kanpur, Madras), and the very highly rated - I don't know precisely why - Indian Institute of Science. I was at my wits' end as to how to convince people that every locatable or accessible database had been searched. I had to fight with this for a full 2.5 years *before* mere admission was granted. But, in India, I at least got admitted - something that would have been impossible in the USA. (After thus running from pillar to post for a full 2.5-3.0 years, once admitted, in the next 2 years I published 6 papers and 2 extended abstracts.) One of the things that helped me in my fight was clearly identifying all the search databases I had looked through. (I would challenge people to come up with something that even remotely resembled my ideas.)

Now, leaving aside my particular details, what's the abstract lesson or point here? I think it is this: if the research idea is very novel, even experts have a very hard time telling precisely how to place the "result". In such a case, an identification of the databases searched by the author will at least help the cause of a *timely* publication of the result - it at least won't hurt the author.

If you still think I am talking of a far-fetched thing that is not possible with today's explosion of journals and all, here is another example, very close to all of us here at iMechanica. A few months back, I was interacting with Vikram Gavini of Caltech here at iMechanica. I could place Vikram's research in some context up to a point, but not fully. My difficulty was not just limited to failing to understand precisely what was meant by the term "periodic boundary conditions." The real difficulty I had was the following (though I didn't talk about it then): we all know that for linear systems, every variational (weak) statement admits the derivation of a differential (strong) statement - necessarily so. Now, the DFT theory Vikram used is based on a variational viewpoint. And Vikram was claiming that his work extends the number of atoms that can be included in the simulation: the state of the art is only hundreds or thousands of atoms, whereas his work shows how to include millions of atoms in a simulation using today's computers.
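
(To pin down that weak-to-strong correspondence with the textbook example - nothing specific to DFT: the weak statement of Poisson's problem asks for u such that

    \int_\Omega \nabla u \cdot \nabla v \, dV = \int_\Omega f \, v \, dV \quad \text{for every admissible } v;

integrating by parts and invoking the arbitrariness of v, which vanishes on the boundary, recovers the strong, differential statement -\nabla^2 u = f in \Omega.)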

Now, how can one be sure that so dramatic a claim doesn't have precedence? That it is not a mere recasting of a well-known result or perspective from elsewhere? How novel is novel? Suppose that's the question to be settled. Is the fact that the research got published in a reputed journal (or one with a high impact factor) any sort of assurance? Let's touch upon this briefly.

What would *you* do as an editor? Most journals would appoint referees like Pradeep - people who themselves do research based on or using the same approach (the variational principle). Pradeep, a PI in a large research program, would confirm the novelty of Vikram's work.

But the real question for a third party like me is: did Pradeep himself take care to look into the literature concerning the strong (non-variational) viewpoint? How novel is the novelty claim?

In such circumstances, if Vikram were to declare that he had checked his claim against, say, even fundamental physics and mathematical physics journals like PRL, and also all the major computational physics journals (in addition to the FEM, multi-scale modeling, and solid-state physics journals), the aspect I was looking for would be covered. I could conclude that his work was really novel.

In the past, people have claimed novelty (and won prestigious awards - best graduate student, young investigator, young faculty, etc.) for results that could have been obtained by mere analogy, but no one thought of cross-checking across domains. As just one example, consider how novel the results in artificial intelligence using neural nets would really have been if, under certain conditions, neural nets are precisely equivalent to FDM meshes solved using Liebmann's method.
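
For readers who haven't met the name, Liebmann's method is the classical Gauss-Seidel relaxation of the five-point finite-difference stencil for Laplace's equation. A minimal sketch in Python, just to pin down what is meant (the grid size and boundary values are invented for illustration):

    import numpy as np

    def liebmann(u, tol=1e-6, max_sweeps=10000):
        # Liebmann's method: Gauss-Seidel relaxation of the five-point
        # stencil for Laplace's equation. Boundary values of `u` stay
        # fixed; interior values are updated in place until converged.
        for _ in range(max_sweeps):
            worst = 0.0
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    new = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1])
                    worst = max(worst, abs(new - u[i, j]))
                    u[i, j] = new
            if worst < tol:
                break
        return u

    grid = np.zeros((20, 20))
    grid[0, :] = 100.0   # one edge held at 100; the rest of the boundary at 0
    solution = liebmann(grid)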

In all, you have problems both ways: some innovative people don't get access to publication in reputed journals, at least not in a timely manner, while at least some people easily get awards, portraying a larger-than-actual-achievements image.

So, the concluding point is: for "normal" research whose basis is well established, things are OK. You can find referees, and the referees know what to look for and where. For instance, if I do a model of visco-elasticity coupled with thermal analysis using XFEM, it could be something new; you would be one of the referees, and you would know where to look to check the veracity of claims. But for a truly novel contribution - one that can overturn the basic perceptions in a field or truly open up new avenues for cross-disciplinary and inter-disciplinary research - the current system can easily allow referees to turn antagonistic without their ever meaning to be so. At such a time, a public statement by the author would help the referees. Knowing that a Vikram went through the literature verifying the absence of his results even in the form of a strong formulation would be helpful.

Of course, the public statement by the author won't give a 100% guarantee. But we look for the six-sigma case, not the 100.0% case... That way, even Nobel-winning theories have later been found to be inconsistent. And in looking for the six-sigma case, the issue is not *whether* the search was conducted, but *what* precisely that search was.
