
Lessons Learned from 14 years as an Editor-In-Chief (for Elsevier)

As some of you may know, I recently announced that I would be stepping down as the Editor-In-Chief (EIC) for the Elsevier journal titled Finite Elements in Analysis and Design (FINEL).  I had served in that capacity since 2010, so roughly 14 years, and the team at Elsevier indicated that it was time for someone new to take over the role.  Although I would have been more than happy to stay on, in time I did appreciate that it was probably better for the community to pass the reins on to someone else. 

In any case, I thought it might be interesting for various mechanicians to read my reflections on this role, as I can provide a peek into what it’s like “behind the curtain.” Of course, I also have advice to authors seeking to publish in the broad space of engineering science, and I hope that what I share below is found to be helpful.  Finally, I hope my views are useful to newly minted EICs or to people thinking about taking on a role such as this.  

The Role of EIC   

I inherited this role from the former EIC Tod Laursen, who was a faculty member at Duke (before leaving for greener? pastures) for many years.  The main responsibility of course is to make sure that manuscripts receive a fair, thorough, and timely review by requisite experts.  

During my tenure, FINEL received anywhere from 500-700 manuscripts each year.  During any given week, this typically translated into 2-3 manuscripts that needed to be processed each day.  That took time, to be sure, but the real time commitment lay in monitoring the progress of manuscripts that were already under review.  Over time, I came to learn that this role was impossible to do well unless I devoted about an hour a day, every day, to it.  It is not a small commitment.  

The first task is to decide whether a manuscript should be sent out for review or “desk rejected.”  In my final year as the EIC, my desk reject rate was a little above 70%.  What that means is that I was only sending out about 30% of the papers that the journal received.  In previous years my rate was actually quite a bit higher (the publisher was not pleased).  These levels may seem high to those of you who have not served as EICs, but in my conversations with colleagues, I have come to learn that they are not too far off the desk reject rates for other journals in mechanics or computational mechanics.  The reason is simple: it is a challenge to find qualified reviewers who will agree to review manuscripts in a timely fashion.  The only way to avoid overloading our colleagues (and the community) with review assignments that are not worth their (increasingly precious) time is to do the hard work as EIC and play the role of gatekeeper.  

My goal was always to process desk rejects as quickly as possible.  I tried not to let a manuscript languish with the status of “With Editor” for more than a couple of days.  I would like to point out that this speed translates into an “Average Time to Decision” that appears artificially fast.  This has long been one of my quibbles with Elsevier, not just for FINEL but for all of their journals.  Simply stated, desk rejects should not be part of the calculus for the average time to decision.  The fact that they are means that authors simply do not have useful information when considering which journal to submit their work to.  It would be much better to remove the desk rejects from this calculation, so that authors whose manuscripts were actually sent out for review have a reasonable estimate of how long it might take to obtain a first set of reviews.  The “Review time” Elsevier provides is a somewhat better estimate, but even that includes the time it takes for revisions to be evaluated.  

For manuscripts I did send out for review, with time as the EIC I arrived at a fairly simple goal: find THREE reviewers who would agree to evaluate the manuscript.  My logic was as follows.  In order to make a first decision based on the reviews, I would typically want to have at least two reviews to consider.  Sometimes those two would disagree, in which case a third is very helpful.  But more often than not, one of the three simply does not submit their review on time.  If the EIC only seeks two from the outset, in many instances they are left with only one review and have to pester the second reviewer to submit their late review.  And unfortunately, many reviewers aren’t responsive to these pleas.

I have seen many editors whose goal, apparently, is to find two, not three, reviewers for each paper in their queue.  I think this is a mistake.  But I can empathize with them, because sometimes finding even two reviewers who will agree to evaluate a given manuscript can be incredibly difficult.  

Once I had received two of the three reviews, my process was as follows.  I would read them to see if I could issue a decision on the basis of those two alone, without the third.  If they were solid, thorough reviews and basically consistent with one another, I would contact the third reviewer to let them know that I didn’t need their evaluation.  I would tell them that if they had started their review, that was fine; I would just need them to submit it soon so that I could issue the decision.  But most third reviewers are happy to be released from their assignment.  I do think it is vital to contact them and make sure they’re OK with this, just as a matter of professional courtesy.  As a reviewer I have had some of my assignments cancelled even though I had spent a good amount of time with the manuscript.  That’s not a pleasant experience, and it leaves you inclined to decline the next assignment from that same EIC.  

So, how hard is it to find three people who will agree to review a paper, you might ask?  Well, here I have some statistics to share, because I kept them myself.  On average, I found that my ratio of invitations to acceptances was 3 to 1.  So in order to find three people who would agree to review any given manuscript, on average I had to invite about nine.  As it does not make sense to invite nine from the beginning, you have to stage the invitations, identify alternates, and so on.  This can be incredibly time consuming.  
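The arithmetic behind that nine-invitation figure can be sketched with a toy simulation.  This is my own illustrative model, not something from the original post: assume each invited reviewer independently accepts with probability 1/3, and invitations continue until three reviewers have agreed.

```python
import random

def invitations_needed(target_accepts=3, p_accept=1/3, rng=None):
    """Count invitations sent until `target_accepts` reviewers agree.

    A toy model of the staged-invitation process: each invited
    reviewer independently accepts with probability `p_accept`.
    """
    rng = rng or random.Random()
    invites, accepts = 0, 0
    while accepts < target_accepts:
        invites += 1
        if rng.random() < p_accept:
            accepts += 1
    return invites

# Averaged over many simulated manuscripts, a 1-in-3 acceptance rate
# means securing three reviewers takes about nine invitations on average
# (the expectation of a negative binomial: 3 / (1/3) = 9).
rng = random.Random(42)
trials = [invitations_needed(rng=rng) for _ in range(10_000)]
print(sum(trials) / len(trials))
```

In practice the staging makes things slower still: non-responding invitees consume weeks of calendar time before the next round of invitations can even go out.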

The biggest delays for manuscripts are caused by invited reviewers who simply do not respond to the invitations.  After a few weeks you assume they won’t respond and move on to inviting other people.  But this sequence can repeat itself several times, to the increasing frustration of the EIC and the manuscript authors.  These days, Elsevier tells authors how many reviewers have accepted an invitation to evaluate their manuscript, typically through the “Track Your Manuscript” link sent to the corresponding author.  I do think this is helpful information to some degree, and I was happy to see Elsevier share it with authors.  

Overall, my experience with the reviews I did receive was actually quite positive.  My sense is that most reviewers took this responsibility seriously, and were reasonable when evaluating revisions.  I had several reviewers indicate to me (in their comments to the editor) that they had read all of the reviews and could see that theirs was much more critical than the others.  

If I have a common complaint with reviewers, it is that, increasingly, I have seen the tendency for people to tell authors they need to cite the reviewer’s own papers.  Of course, on occasion there is a perfectly good justification for this: one of the papers contains information or ideas that help inform the current manuscript, for example.  Or perhaps it contains ideas that are no longer novel, because the reviewer has published them before.  But what I have seen more recently is simply the tendency to provide a list of papers, with little to no justification, and ask the authors to include them in their list of references.  Obviously this is being driven by externalities and the increasing weight being placed on citation metrics for all sorts of things, from promotion and tenure to society awards.  I find it all rather distasteful, and I’d like to see more of an effort from our community to police this kind of behavior, although I fear it is a lost cause.  In any case, as the EIC I would often work with authors when I saw this kind of thing to ascertain which papers belonged in the references and which did not.  

To end this section on a better note, by far the best part of being an EIC was the opportunity to tell authors their paper had been accepted.  I always tried to get those notifications out as soon as possible.  By the same token, when revisions were requested, I made sure to place that information in the subject line of the email notifying them.  That unambiguous signal let authors know where the journal stood, regardless of the details in the reviews themselves.   


Advice for Authors

The primary task for a set of authors, when they have a paper they’re ready to submit, is to think carefully about where to submit it.  I always recommend that authors look first at their set of references.   Examine which journals you are citing the most, and those might form the first set you examine for your own work.  

When you finally have a journal picked out, you do yourself a tremendous favor by submitting your manuscript in the format (single column or two column) that the journal will use should the paper ultimately be accepted for publication.  In my experience, this significantly reduces the likelihood of issues at the proof stage.  

I have written elsewhere on here about recommending that authors think carefully about the set of people they suggest as reviewers.  See this post on iMechanica, for example.  I think this advice still holds, in fact it’s probably more important now than when I first posted it.  

But my main piece of advice is to encourage authors to reach out to Editors at all stages of the process.  Ask if your manuscript is suitable for the journal, for example.  If it is desk rejected and you don’t think that decision is fair, reach out to them.  Don’t be afraid to ask why it’s taking so long for reviews to come in.  And perhaps most importantly, if you receive a review that is problematic, don’t hesitate to contact the EIC before you submit your revision.  Letting the EIC know that you think one of the reviews is unfair is a perfectly reasonable thing to do.   


Advice for Reviewers

My main piece of advice to reviewers is to write reviews the way you’d want to read them if you were the author, even if you think the ideas are terrible.  A little grace and humility goes a long way.  

And this isn’t advice so much as a plea, but, when you decline an invitation to review, just about the ONLY information that is helpful to an editor concerns suggestions for alternate reviewers.  If you don’t have alternate reviewers to suggest, there is almost no reason to write anything in your explanation as to why.   If you don’t have any ideas for alternate reviewers, just decline and leave that response blank – the EIC will understand.  

Finally, on occasion colleagues have asked me what they should do when they think a manuscript they’re reviewing has failed to cite their own work.  My advice is not to provide a specific reference, but rather to describe the paper(s) in general terms, along with the time period in which they were published.  That should be sufficient for the authors to find the work, and they might also find some other papers in the same general area that they had missed.  And if they don’t find your work then, so be it!  Put your energy into developing your own ideas and work, not into building your citation count.   


Experience with Elsevier

My experience as an EIC working with Elsevier has surely been different from many others’.  Early on, I had so much trouble finding suitable reviewers for manuscripts that when the opportunity came along to help them with some new software - what is now their Find Reviewers Tool - I jumped at it.  I worked with some truly exceptional people on this endeavor, and I was very happy when the tool was first released.  I believe that it is now an essential tool for almost all Elsevier EICs.  

By the same token, I have had my share of frustrations with the company.  Over the years I have found many basic problems with their online system for managing papers (currently called Editorial Manager).  For the most part, these have been fixed in due course.  But some of the issues I have identified have persisted for years.   These are small things that a careful editor will spot and address in real time if they’re paying attention, but they shouldn’t have to.  

But perhaps my biggest frustration has been the push to publish more and more papers, regardless of whether or not submissions had increased.  Obviously, this is a for-profit company that feels the pressure to justify itself, or at least to justify the significant prices it charges universities for subscriptions.  I understand that.  But there are ways to do this that make sense and are consistent with maintaining scientific quality… and there are others that do not.  


The Bigger Picture

Pivoting on my last point, I did want to spend some time discussing Elsevier’s for profit status and its reputation in the academic community.  As an EIC, I have spent some time thinking about this.  

I don’t think I will surprise anyone to mention that Elsevier’s reputation among most academics is pretty terrible.  There are many reasons for this, the most prominent being what they charge institutions for subscriptions and authors for Open Access.  There is also the fact that Elsevier does not compensate reviewers in any direct way.  And of course, their main product, the papers in their journals, are produced not by Elsevier but rather by the authors themselves, who are also not compensated in any direct way.  

It has gotten to the point that there are some prominent departments wherein the culture is that none of the faculty submit their manuscripts to any Elsevier journals.   I have also had several reviewers tell me that, as a matter of principle, they refuse to review papers for Elsevier.

Over the years, I’ve had many conversations with colleagues over just about all of these aspects.  When it comes to being compensated for reviews, my own view is that I appreciate that I’m part of a reviewing ecosystem.  When I submit my own papers, I understand that I’ll likely receive at least two if not three reviews.  That means two to three of my colleagues have reviewed each of my papers, without direct compensation from the journal.  It only seems fair to me to return the favor, and each year review around 2-3 times the number of papers my own group submits.  

But this would be a lot more palatable were the journal not run by a for-profit company, wouldn’t it?  Of course.  The problem we face is that, in many communities, the top journals are run by Elsevier or Springer or Wiley.  Academic authors feel as though they really don’t have a choice to avoid these outlets if they want to be successful in seeking promotion and tenure.  

How is it that we have let this particular status quo come to be?   How is it that we have allowed an organization that is not staffed by academics and scientists to have such a vital role in disseminating science?  Personally, I think our community is long overdue to have a serious conversation about these basic questions.  But inertia is the obstacle.  It is much easier to accept the status quo and continue working within it than it is to do the very hard work of creating and maintaining an alternative.

I am sure that many of my colleagues in mechanics have fond memories of the late Chuck Steele, of Stanford University.  Chuck was the EIC of the International Journal of Solids and Structures (IJSS) from 1985-2005.  I had a few papers managed by Chuck when he was at the helm of IJSS, and I remember him being so thoughtful.  I also have fond memories of corresponding with his wife, Marie-Louise, who served as an Associate Editor for the journal and handled a lot of the daily processing.  Chuck and Marie-Louise decided to leave IJSS in no small part due to the dramatic increase in price Elsevier began charging institutions for subscriptions.  It was a fairly dramatic moment: they both resigned, taking 21 of the 23 members of the Editorial Board with them.  They went off to launch an entirely new journal, titled the Journal of the Mechanics of Materials and Structures (JoMMS), which was published by the non-profit organization Mathematical Sciences Publishers.  They wanted an outlet that could continue to publish top papers in mechanics, but where the cost to libraries would be low.  This was a noble endeavor, to say the least.   

Despite their best efforts, JoMMS has not reached anywhere near the same level of success and influence that IJSS once commanded under their guidance.  Moreover, IJSS survived the exodus of the vast majority of its board, reconstituted itself, and continues to this day to be a fairly well-respected outlet for mechanicians.  Perhaps there are several aspects that explain this difference beyond the obvious one of inertia; I’m not sure.  But for me this history only serves as a cautionary tale about the difficulties of making a break from the for-profit publishers.  It’s worth discussing and considering, but the community should be clear-eyed about what it takes.  

As a final point, we should also be critical of the fact that ever since journals went online, the state of academic publishing has remained largely static.  We continue to mostly publish papers that consist of extended summaries of a set of methods, results, and accompanying discussion.  Where is the innovation?  Online discussions of papers tend not to be found where the papers themselves are published but in other places that not everyone knows about or can readily access.  Plots and figures tend to be static, not interactive.  And many communities remain slow in publicly sharing their data in a way that makes it easy for others to use it and replicate their results.  Although we can surely place some of the blame for this at the feet of the publishers, scientific communities also bear some responsibility in remaining comfortable with the status quo for so long.  

If you’ve come this far, I hope some of what I’ve written has been helpful, if not interesting.  I welcome your feedback, questions, and comments, and I hope this post spurs plenty of discussion here and elsewhere.  



Vladislav Yastrebov

Thank you very much, @John E. Dolbow, for sharing your experience as EIC with Elsevier! I especially appreciate that you shared your frustration about pushing "to publish more and more papers, regardless of whether or not submissions had increased."

I also liked your comment: "How is it that we have allowed an organization that is not staffed by academics and scientists to have such a vital role in disseminating science? Personally, I think our community is long overdue to have a serious conversation about these basic questions."

I absolutely agree with you that a good discussion among institutions and academics is needed. It would be great if we could have more control over scientific publishing and the public money it involves.

France has just signed a new "transformative agreement" with Elsevier for 135 M€ for 4 years; this agreement also includes unlimited publication with Gold OA. I'm afraid that it will only deepen the problem, increase the publication pressure, increase inequalities between countries, and further postpone such a global discussion on the needed changes.

Personally, along with many colleagues, I'm involved in a young Diamond Open Access publication initiative, JTCAM (Journal of Theoretical, Computational and Applied Mechanics), which promotes Open Science principles (quality review process with Open Reviews published along with the papers) and also reproducible science (data and software are also published, and the authors are aided by a Data Editor).


Thanks so much for reading my post and for your comments.  

I was wondering if you might say more about JTCAM.  In particular, I wonder what it costs to publish OA in the journal.  If it is free, then I wonder how the journal is supporting production costs.  Any comments you have on this would be greatly appreciated.

Vladislav Yastrebov


Thank you for your interest!

Briefly, JTCAM is the first (and as far as I know, the only) Overlay Diamond Open Access journal in mechanics. Overlay means that it exploits open archives, such as arXiv, the French HAL, and Zenodo, to store papers and handle version control.

The authors, after sharing a preprint on an open archive, submit a link to it. The review process is classical and is handled by Associate Editors (there is no EIC). The review can be single-blind or fully open, depending on the reviewer's choice. Whatever the choice, the review is published alongside the paper. See, for example:

  • Audoly, B. and Lestringant, C. (2023). An energy approach to asymptotic, higher-order, linear homogenization, Journal of Theoretical, Computational and Applied Mechanics [doi]
  • Audoly, B., Lestringant, C., Seppecher, P., Pasini, D., Ganghoffer, J.F., Brassart, L. Review of "An energy approach to asymptotic, higher-order, linear homogenization" [HAL]

A few more points about the journal:

  • The journal is indexed in the DOAJ, and we do not plan to be indexed in Scopus or Web of Science.
  • The journal is free for authors and readers (no APC = Diamond Open Access); it relies on existing infrastructure (arXiv, HAL), and we are hosted by a French platform.
  • Associate Editors are elected by the journal's Board, which includes a Scientific Advisory Board, an Editorial Board, and a Technical Board.
  • We have a Data Editor who checks data consistency and helps the authors share their data; here's a [Zenodo community].

Regarding the real cost of publishing, if we do not count the [] platform and the cost of arXiv and HAL, it lies in the very high standards of copy-editing that we ensure:

  • All plots are in vector graphics.
  • All references are triple-checked, and whenever available, we provide not only the DOI but also sustainable OA links (including HAL, arXiv, etc.).
  • All tables and equations should be consistent and well-formatted.

For the moment, at this early stage, we have a dedicated Technical Board that takes care of this, but we are about to improve this point with some minor funding from French institutions.


Thank you for the very informative response; I appreciate it.

I am curious about the fact that you do not plan to be indexed in Scopus or the Web of Science.  Was that a financial decision, or are there other considerations?


Vladislav Yastrebov


Thank you for your interest!

The core motivation behind not being indexed in these citation databases is a conflict of interest and of values. These databases are owned by Elsevier and Clarivate, for-profit organizations, not by the scientific community; therefore I see a conflict of interest on their side in promoting Diamond Open Access journals, which pursue an ethical, open, and free publishing strategy.

Since we believe that the scientific community can judge on its own the relevance and quality of a journal, we do not really need Elsevier to tell us which journals are good and which are not. As for the administration, it is trickier, but in France at least, a lot of decisions on promotion and hiring are made by academics, so the administration normally follows our recommendations.

There are also several stories about these databases exerting increasing external pressure to publish more papers, which we would like to avoid.

In conclusion, it is hard to win or even survive in this publication "game" if the rules are written by Elsevier and the referee is also Elsevier. We prefer to offer different values and establish a more sustainable set of rules.

Vladislav Yastrebov

Another point that I forgot to mention is that we follow, along with many institutions and organizations around the world, the San Francisco Declaration on Research Assessment (SF DORA), which promotes quality over quantity in assessment; notably, we aim to avoid bibliometric indicators.

I understand the principled argument.  Although Clarivate is in this space, it is disconnected from the publishing process itself.  I view it as less of a problem, but perhaps I haven’t considered this aspect as fully as you have.  

Unfortunately my sense is that the indexing drives many decisions, globally.  Breaking from it completely might make it a challenge for the journal to really gain traction with the community, to the point where it can genuinely compete with some of the top journals.  Nevertheless, I wish it the best of luck and will certainly pay attention to it.  

Vladislav Yastrebov

Thank you John for your feedback!
I agree that Clarivate is supposed to provide "publisher-neutral data," as stated on their website; nevertheless, they operate a paid and rather opaque platform, which is not compatible with Open Science principles. There are alternative open projects, such as OpenAlex, which could be used in the future.

I also agree that getting a lot of traction without indexation could be a problem, but anyway, I believe that a forced alignment with commercial indices is not a good long-term solution for Diamond OA initiatives.

xiangzhang


Thank you very much for your great service to the community and for sharing your experience and thoughts as a researcher, author, reviewer, and EIC. There is a lot for us to learn from your experience. Just as importantly, your efforts and insights in pushing back against increasingly concerning practices by the publisher (i.e., unreasonably high subscription costs while the community does all the work; blindly pursuing higher publication volume) are much appreciated and inspiring, and many, including myself, will agree that it is valuable to think as a community about how to shape future publishing for equitable and sustainable access to publications.

As a side note, my home institution, UW, as part of a regional consortium, recently initiated a contract renewal negotiation with Elsevier, hoping to make the renewal more affordable so that researchers can access more journals. They pointed out that "The Elsevier contract is larger than all of our other consortial journal package contracts combined" and that "The Alliance institutions collectively have more than 180,000 students and thousands of faculty. Their Elsevier journal subscriptions represent a significant investment for members and currently represent a cost in excess of $7 million annually. Our hope is that through positive engagement and mutual understanding, an improved and sustainable model for the dissemination of scholarship can be achieved." The negotiation was considered successful, but compromises were made. For instance, we lost access to some impactful and relevant journals in our field, including FINEL. 




Duke went through a similar calculus recently.  An outcome of this is that I no longer have access to FINEL myself, even though I am still the EIC!  

Emilio Martínez Pañeda

Excellent piece, John. My editor experience is very limited, but I see many common patterns. One that I keep seeing more and more lately is the selfish reviewer who does not give a useful review and goes on to list a large number of papers (all of which have him/her as a co-author) for the authors to cite. I have decided to keep a list of these people and provide it to our journal manager to ban them as reviewers. For the moment, only from our journal; I tried banning them from all Elsevier journals, but this is apparently not possible. 

See you soon in Vancouver


Mike Ciavarella

Thank you for this interesting debate.  In my experience with other publishers, which are often called predatory, or sit right at the border of predatory, the problem is the proliferation of email spam, robots finding (weak) reviewers, and fast publication; all of this works against quality.  Many journals are run by secretaries, not scientists.

See MDPI, for example.


