
Knowledge processing and the Internet


(Originally posted on Applied Mechanics News on 10 May 2006)

By knowledge processing I mean all modes of interaction between humans and knowledge, including the discovery, synthesis, dissemination, acquisition, and application of knowledge. The technology of knowledge processing has been refined since the dawn of civilization. A list of milestones might include the inventions of language, printing, the library, the computer, and the Internet. On this long time scale, the Internet has been with us only very recently. Considering the impact of the earlier innovations, it is safe to say that what we see today is just the beginning of a revolution in the technology of knowledge processing, and that it is presumptuous to predict the future. Nonetheless, it is useful to briefly reflect on the past and speculate on the immediate future.

To describe the established best practice of knowledge processing, I can do no better than quoting Ziman (1964).

"The Frontiers of Knowledge (to coin a phrase) are always on the move. Today's discovery will tomorrow be part of the mental furniture of every research worker. By the end of next week it will be in every course of graduate lectures. Within the month there will be a clamor to have it in the undergraduate curriculum. Next year, I do believe, it will seem so commonplace that it may be assumed to be known by every schoolboy.

"The process of advancing the line of settlements, and cultivating and civilizing the new territory, takes place in stages. The original papers are published, to the delight of their authors, and to the critical eyes of their readers. Review articles then provide crude sketch plans, elementary guides through the forests of the literature. Then come the monographs, exact surveys, mapping out the ground that has been won, adjusting claims for priority, putting each fact or theory into its place.

"Finally we need textbooks. There is a profound distinction between a treaties and a textbooks. A treatise expounds; a textbook explains. It has never been supposed that a student could get into his head the whole of physics, nor even the whole of any branch of physics. He does not need to remember what he can easily discover by reference to monographs, review articles and original papers. But he must learn to read those references: he must learn the language in which they are written: he must know the basic experimental facts, and general theoretical principles, upon which his science is founded"

To note what has changed in the forty-some years since, we might observe the following. Ziman's timelines were figures of speech. Even today we cannot process knowledge that fast, but the Internet has greatly accelerated the pace. We email a preprint to colleagues the moment it is written, and soon we will be able to download anything that exists in any medium. We all Google, and some of us wiki.

The first wave of the Internet solved one problem in knowledge processing: it made knowledge rapidly available (nearly) worldwide. The solution, however, has made other problems in knowledge processing more evident. The bottleneck is no longer access to knowledge, but our own time: the speed at which our brains process information has not accelerated.

Then came the second wave of the Internet, known in the popular media as Web 2.0 or the Read/Write Web. Riding the second wave are millions of bloggers, wikians, social bookmarkers, and podcasters. They create, edit, annotate, and vote on the content of the Internet. They organize knowledge by doing what scientists and engineers have done for centuries: large-scale asynchronous collaboration, irrespective of national borders and personal idiosyncrasies.

But the second-wave riders collaborate using new tools, tools created for the Internet, not merely clones of old tools. The new tools have fundamentally changed who can collaborate, as well as how and why they collaborate. The second wave has also led to different products of knowledge. For example, Slashdot, an aggregator of news for nerds, feeds on blogs of all kinds, as well as on the mainstream media. Anybody can submit news from any source, and each submission is reviewed by editors before inclusion. Once an item appears on Slashdot, hundreds of readers visit the original source, and many return to Slashdot to leave comments, which are often more informative than the original article. It is not uncommon for a piece of news published in a venerable source to be found false by the users of Slashdot within hours. Slashdot may serve as a model for a new breed of scientific journals.

A second-wave rider can be a student, teacher, practitioner, researcher, and scholar, all at the same time. A high school student becomes a published researcher when she writes an entry in Wikipedia on the history of the Chinese monetary system. A Microsoft engineer becomes a teacher to thousands of fellow users of Slashdot when he posts a critique of a Google service.

If you’d like to learn how to use tools like blogs, wikis, RSS feeds, social bookmarks, and podcasts to enhance teaching, read the excellent book by Will Richardson (2006).

Ending added on 11 May 2006: In hindsight, perhaps the approaches of Wikipedia and Slashdot are not so radical after all. Humans since the dawn of civilization have participated in knowledge processing by large-scale asynchronous collaboration. In particular, one scientist can freely comment on the work of another, or completely rewrite it by publishing another paper. Large-scale asynchronous collaboration is the first law of knowledge processing, if such a thing exists. No previous innovation violated this law; all have reaffirmed it by greatly easing collaboration. The Internet will further ease collaboration, in its own particular ways, now unfolding in front of our screens.
