LiquidPub Project: Scientific Publications meet the Web, a project from University of Trento

Posted by Mike Ciavarella

Liquid Publications: Scientific Publications meet the Web

A very interesting project from the University of Trento, aiming to change the way scientific knowledge is produced, disseminated, evaluated, and consumed.

The world of scientific publications has been largely oblivious to the advent of the Web and to advances in ICT. Even more surprisingly, this is the case even for research in the ICT area: ICT researchers have been able to exploit the Web to improve the production process in almost all areas, but not their own. We are producing scientific knowledge (and publications in particular) essentially following the very same approach we followed before the Web. Scientific knowledge dissemination is still based on the traditional notion of "paper" publication and on peer review as the quality assessment method. The current approach encourages authors to write many (possibly incremental) papers to get more "tokens of credit", often generating unnecessary dissemination overhead for themselves and for the community of reviewers. Furthermore, it does not encourage or support reuse and evolution of publications: whenever (possibly small) progress is made on a certain subject, a new paper is written, reviewed, and published, often after several months.

The Liquid Publications community proposes a paradigm shift in the way scientific knowledge is created, disseminated, evaluated and maintained. This shift is enabled by the notion of Liquid Publications, which are evolutionary, collaborative, and composable scientific contributions. Many Liquid Publication concepts are based on a parallel between scientific knowledge artifacts and software artifacts, and hence on lessons learned in (agile, collaborative, open source) software development, as well as on lessons learned from Web 2.0 in terms of collaborative evaluation of knowledge artifacts.

The main concepts are illustrated in the papers below (doc and pdf format; feel free to reuse the content, there is no copyright). A preliminary version of the work below was posted on ACM Ubiquity. This work inspired the LiquidPub project, where research institutions from various scientific disciplines, publishers, and societies come together to develop and validate concepts, algorithms, and tools that define and instantiate the Liquid Publication concepts.

In a nutshell, the approach proposes the following ideas and contributions:

1. It introduces the notion of Liquid Publications (and, analogously, Liquid Textbooks) as evolutionary, collaborative, multi-faceted knowledge objects that can be composed and consumed at different levels of detail.

2. It abstracts (and replaces) the notions of journals and conferences into collections, which are groupings of publications that can be based on topic and time but also on arbitrary rules in terms of what is included and how the quality of publications is assessed for them to be included in the collection. Collections can themselves be liquid. We believe that journals as they are conceived today (a periodic snapshot of papers on a given topic, selected by a restricted group of experts and based on submissions) will soon become obsolete both in their printed and electronic forms.

3. It proposes a radically different evaluation method for publications and for authors, based on the interest they generate in the community and on their innovative contributions and that is maintained in real time and possibly without reviewing effort (peer reviews can be used as a complement). The method also encourages early dissemination of innovative results. Around these main concepts, we advocate the need for services that benefit authors, readers, reviewers, conference organizers, editorial boards, and even evaluation committees. Examples of such services are an analysis center for helping committees to assess the scientific quality of people and publications, ways for people to bookmark papers or people of interest and to define collections, and an authoring/sharing/versioning environment for maintaining and evolving liquid publications and for the fruition of their content.
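As a sketch only, with entirely hypothetical names not taken from the LiquidPub project itself, the rule-based collections of point 2 could be pictured as a predicate over publications:

```python
from dataclasses import dataclass, field


@dataclass
class Publication:
    """A minimal stand-in for a liquid publication."""
    title: str
    topics: set
    year: int
    community_score: float  # interest generated in the community, however measured


@dataclass
class Collection:
    """A grouping of publications defined by an arbitrary inclusion rule,
    abstracting the fixed journal/conference model."""
    name: str
    rule: callable  # predicate: Publication -> bool
    members: list = field(default_factory=list)

    def consider(self, pub):
        """Admit a publication if it satisfies this collection's rule."""
        if self.rule(pub):
            self.members.append(pub)
            return True
        return False


# A collection based on topic, time, and a community-interest threshold.
fracture_2009 = Collection(
    name="Fracture mechanics, 2009",
    rule=lambda p: "fracture" in p.topics and p.year == 2009 and p.community_score >= 0.5,
)

fracture_2009.consider(Publication("Crack growth notes", {"fracture"}, 2009, 0.8))
fracture_2009.consider(Publication("Unrelated paper", {"fluids"}, 2009, 0.9))
print([p.title for p in fracture_2009.members])  # ['Crack growth notes']
```

Because the rule is an arbitrary predicate rather than a fixed editorial process, a collection can select by topic and time as journals do today, or by any other criterion, including evaluation scores computed in real time.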

Although the change advocated here is dramatic, the transition is not. The current state of affairs in knowledge dissemination sits at one extreme of the Liquid Publication spectrum, where papers are "solid" and static, collections are periodic snapshots of submissions, and evaluation is based on peer review by a team of "experts". The liquefaction and adoption of the concepts proposed here can be gradual, to facilitate acceptance by the community at large.

Besides describing the liquid publication concepts, the papers below are themselves an instance of a liquid publication. We wish we already had a nice collaborative environment for evolving this liquid publication, for blogging it, reviewing it, evaluating it, and so on, but we don't, at least for now. For now, you are welcome to send us your comments by emailing us at liquidpub -at- We look forward to receiving your feedback.

NOTE: the PDF below corresponds to the latest version of the MS Word documents.




Comment by ericmock:

Very interesting.  The ideas are very similar to those I wrote about in a proposal to NSF about a year ago.  Unfortunately, it was not funded.  Anyone can see the proposal at  The biggest issue I see with starting something like this is getting people to use it.  That is why I think the Living Review article concept in a more recent (and again rejected) proposal to NSF is the way to go at first.  Anyone have contacts at Applied Mechanics Reviews?  Up-to-date review articles would be useful to the community and draw people to the system.  'Journals' could be started around the review articles.

Comment by Mike Ciavarella:

I would rather start from Wikipedia.

Somebody told me that Elsevier is trying to set up its own Wikipedia.  Is that true?  In that case, is it too late?

Elsevier is really trying hard; who knows whether they also have any chance to enter open access in time...

michele ciavarella

Comment by ericmock:

I'm not proposing a single review paper.  I'm proposing an entire library of them that can be continually updated.  Sounds like a wiki, eh?  It's all in the NSF-CDI proposal.  See for all the details.

Comment by Mike Ciavarella:

This was a funded project, by Fabio Casati, formerly at HP, now dean of ICT in Trento, Italy.



The first few lines:

The research world, and specifically the academic world, is centered around the notion of publication as the basic means to disseminate results, foster interaction among communities, and achieve international recognition (and career advancement). Publications appear in conferences or journals, and are usually reviewed by a committee of experts, also referred to as "peers". Typically, each paper is reviewed by 3 or 4 reviewers. The "best" papers among all the submitted ones are then accepted for publication in the journal or in the conference proceedings. In the computer science area, people typically publish a dozen papers per year, and submit a little more than that (not all papers are accepted the first time around). Acceptance rates for conferences are often around 20% or lower.

There are three drivers behind this model:

1. Disseminate ideas and make them visible. Through publication and review, papers are made known to colleagues, and the review process is supposed to ensure that the best papers are more visible, so that researchers know where to go (good journals and conferences) if they want to read the literature on certain topics. Publications also have legal implications as they "timestamp" work.
2. Get credit, recognition. Having papers accepted at prestigious conferences and journals is a way to prove (in theory) that the work is valuable. This in turn is a major criterion to determine career advancement.

3. Meeting and networking. Publications and conference participation lead to the exchange of ideas with colleagues, and to networking. Conferences are also very useful for students to come and learn how the research community operates.

Highly Inefficient Publishing Process. This model is incredibly inefficient from every perspective; it results in a colossal waste of public funding, and forces researchers worldwide to waste countless hours that could be devoted to better research (or to having fun with family and friends). It is a system deeply rooted in the past, oblivious to the advent of the Web and related new forms of communication, information sharing, social networking and reputation. Here are some problems with the current state of affairs:


michele ciavarella

Comment by Mike Ciavarella:

The paper by Casati et al. is by active, young and brilliant researchers.

Contrast that with this one, commissioned and funded by the Publishing Research Consortium, publishers like Elsevier and others (the paper then went for review to the Elsevier Editors journal, to calm down potential criticism).

The report is very long and, if I may say so, a little boring; but the CLEVER TRICK to avoid serious criticism is that the publishers chose VERY SENIOR people, who are unlikely to want a revolution!

Read the summary:

Peer review is seen as an essential component of scholarly communication, the mechanism that facilitates the publication of primary research in academic journals. Although sometimes thought of as an essential part of the journal, it is only since the second world war that peer review has been institutionalised in the form we know it today. More recently it has come under criticism on a number of fronts: it has been said that it is unreliable, unfair and fails to validate or authenticate; that it is unstandardised and idiosyncratic; that its secrecy leads to irresponsibility on the part of reviewers; that it stifles innovation; that it causes delay in publication; and so on. Perhaps the strongest criticism is that there is a lack of evidence that peer review actually works, and a lack of evidence to indicate whether the documented failings are rare exceptions or the tip of an iceberg.

The survey reported here does not attempt directly to address the question of whether or not peer review works, but instead looks in detail at the experiences and perceptions of a large group of mostly senior authors, reviewers and editors (there is of course considerable overlap between these groups). Respondents were spread by region and by field of research broadly in line with the universe of authors publishing in the journals in the Thomson Scientific database, which covers the leading peer reviewed journals. The survey presents its findings in two broad areas: attitudes to peer review and current practices in peer review.


Read also, towards the end:

The Dissatisfied group

While the large majority of respondents expressed themselves satisfied with the peer review system used by scholarly journals, a minority (12%) said they were dissatisfied or very dissatisfied; they are referred to as the Dissatisfied group in this report. It is interesting to ask what we can say about this group.

In terms of demographics, there are relatively few differences from the average. There were no significant differences by age, gender, type of organisation or position (seniority). By region, they were somewhat more likely to be in the Anglophone region or USA/Canada, and less likely to be in Asia or the rest of the world. Looking at field of research, they were most likely to be in HSS, and least likely in physical sciences/engineering.

In terms of their own experience of peer review, this group reported that the peer review of their last published paper took significantly longer than average (about 110 days compared to 80), and they were more likely to be dissatisfied with the length of time involved (65% disagreed it was satisfactory compared to 38% agreeing overall). The Dissatisfied group tended to be somewhat less likely to report that peer review had improved their last published paper, and likely to give lower scores to the improvements they did report. We cannot say if there is a causal relationship; that is, is this group dissatisfied with peer review because they have experienced longer times and less personal benefit on their own papers, or does their dissatisfaction arise from other causes and then lead them to give less positive scores?

Looking at their attitudes towards peer review, they held consistently less positive views, insofar as they were more likely to agree that peer review needs a complete overhaul, that it is completely unnecessary, and that it is holding back scientific communication. Similarly, they were less likely to agree that the current system is the best we can achieve, that scientific communication is greatly helped by peer review, or that without peer review there is no control in scientific communication. They were less likely than average to agree that peer review was effective on all the objectives proposed.

In terms of alternative approaches to peer review, the Dissatisfied group were more likely to agree that open and post-publication review were effective. They were also more likely to agree that post-publication review would be a useful supplement to formal review, that it would relieve the load on reviewers, and that it could offer an equally powerful alternative; and they were less likely to agree that it would encourage instant reactions, that readers would be unwilling to offer criticism for fear of offending, or that they would be less likely to submit themselves to a journal using it.


 michele ciavarella

Comment by Mike Ciavarella:

If Elsevier reacted so brutally to my criticisms, it means that even a single scientist can do a lot against their interests.

I am flattered, but also surprised, to have been forced to resign.

A more open company would have discussed this at greater length.  In the long term, these are mistakes...

michele ciavarella

Comment by Mike Ciavarella:
Why Google Scholar will stop this oligopoly of publishers

Comment by Mike Ciavarella:

Are some ways of measuring scientific quality better than others? Sune Lehmann, Andrew D. Jackson and Benny E. Lautrup analyse the reliability of commonly used methods for comparing citation records.

Some interesting extracts:


Unfortunately, the potential benefits of careful citation analyses are overshadowed by their harmful misuse. Institutions have a misguided sense of the fairness of decisions reached by algorithm and, unable to measure what they want to maximize (quality), institutions will maximize what they can measure. Decisions will continue to be made using measures of quality that either ignore citation data entirely (such as frequency of publication) or rely on data sets of insufficient …

For their part, scientists should insist that their institutions disclose their uses of citation data, making both data and the methods used for data analysis available for scrutiny. In the meantime, we shall have to continue to do things the old-fashioned way and actually read the papers.
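For readers who want a concrete sense of what one of these "commonly used methods" looks like, here is a sketch of the h-index, a widely used (and widely criticized) one-number summary of a citation record. It is an illustration only, not a measure endorsed by the authors quoted above:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h


# Five papers with these citation counts give h = 4:
# four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The example also makes the article's point tangible: the metric compresses a whole record into one number, ignoring field, co-authorship, and why the citations occurred, which is exactly the kind of "data set of insufficient" depth the authors warn about.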


michele ciavarella

Comment by Mike Ciavarella:

I am getting more and more convinced that traditional papers, the review process, the "mafia" behind every journal, the profits of publishers: all of this NEEDS to change.

And it is changing.  Old institutions, even Harvard, are of course a good system, especially as they collect HUGE amounts of money.  But they correspond to Microsoft, Exxon, etc.; in short, traditional big companies/institutions.  They are slow.  And closed circles.

iMechanica, the ideas from Eric, LiquidPub: they are all ideas to renovate this.

We can try to create Google groups for subjects we like before embarking on a web site. Maybe the future of a paper is a WEB site.  Each paper a web site. That's it. The end of the profits of Elsevier and the other publishing companies in this 9 billion business.

Good papers/web sites will make some money. 

Funding will be allocated to people able to create VIRTUAL organizations, not Harvard nor Caltech nor other places that need to relocate people.   What about good people in Kazakhstan?  Why make all the effort to move for a quick idea?

The future is more one of social networks, which make revolutions by running light but with sheer numbers of people: Wikipedia, Linux, Google, etc.

Read:

What is "Wikinomics"?

In the last few years, traditional collaboration (in a meeting room, a conference call, even a convention center) has been superseded by collaborations on an astronomical scale.

Today, encyclopedias, jetliners, operating systems, mutual funds, and many other items are being created by teams numbering in the thousands or even millions. While some leaders fear the heaving growth of these massive online communities, Wikinomics explains how to prosper in a world where new communications technologies are democratizing the creation of value. Anyone who wants to understand the major forces revolutionizing business today should consider Wikinomics their survival kit.



michele ciavarella

Comment by Mike Ciavarella:

Check my updates on

My letter of resignation from the board of International Journal of Solids and Structures



michele ciavarella

Wiki-style review papers could speed up scientific progress considerably.

I think one important reason that many classical papers and methods attract so many citations is their popularity. When new researchers start a scientific work, they should review many papers and compare many factors to see which methods, materials and models are best. But the number of available methods, materials and models exceeds what an ordinary person can read, and many of them are not well developed or are simply not good. Many researchers have to finish their entire work in a limited time, so they cannot pay much attention to this selection and start as soon as possible by relying on the most cited papers. On the other hand, most researchers tend to base their work on the most popular things.

Most researchers start their work by reading review papers that compare the advantages and disadvantages of different methods. But even good reviewers lack the information and the time to categorize and compare all of the available methods, so they settle for the most popular ones. Many new methods and possibilities may die in this process, even if they are published in good journals.

Wiki-style review papers could speed up research communication considerably and allow new ideas to burgeon. Every author could add his own paper to the review, and on the other side he knows his paper better than anybody else; this is easier and less time-consuming than what a single person can do in a review process. Authors are also motivated to introduce their work to the public, so they will do it.

Attaching a FORUM to each review paper could be a good idea too. I have read many comments on iMechanica that I could not find in any book, at least not in such a short time. Authors can get feedback from their readers through these forums and make better work in the future (just like Amazon book reviews). Many of the comments and nice views could then be added to the main body of the review by others who find them beneficial.

Wiki-style articles may be less attractive at the moment because of the rewards researchers gain by publishing ISI papers in well-indexed journals. But wiki-style review papers would not demand much time from each person, and would finally lead to a brilliant document for the community.
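The workflow described above (authors appending their own papers, a forum per review, beneficial comments promoted into the main body) could be sketched as follows, with purely hypothetical names:

```python
class WikiReview:
    """A continually updated, wiki-style review article:
    authors append entries, readers comment in an attached forum,
    and useful comments can be promoted into the main body."""

    def __init__(self, topic):
        self.topic = topic
        self.entries = []  # (author, text) pairs forming the review body
        self.forum = []    # (reader, text) comments attached to the review

    def add_entry(self, author, summary):
        """An author adds a summary of his own paper to the review."""
        self.entries.append((author, summary))

    def comment(self, reader, text):
        """Readers leave feedback, as in a forum thread."""
        self.forum.append((reader, text))

    def promote(self, index):
        """Move a beneficial forum comment into the main body of the review."""
        reader, text = self.forum.pop(index)
        self.entries.append((reader, "From discussion: " + text))


review = WikiReview("contact mechanics")
review.add_entry("author_a", "Method A works well for rough surfaces.")
review.comment("reader_b", "Method A fails for very soft materials.")
review.promote(0)  # the caveat becomes part of the review body
print(len(review.entries), len(review.forum))  # 2 0
```

The point of the sketch is the division of labour the commenter proposes: each author maintains only the entry for work he knows best, while the community, not a single overloaded reviewer, curates the whole.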

Comment by Mike Ciavarella:



Hello iMechanica users: let me launch a few ideas. Can we improve iMechanica by stealing ideas from successful web systems like Google, Amazon, Wikipedia, MySpace, YouTube? Taking the best of the various worlds to improve our …

michele ciavarella

There is currently a discussion about whether academic journals are obsolete on the popular technology news and information site Slashdot. I thought it might interest the people following this thread.
