
a point and a particle

Rui Huang's picture

A few of us have been discussing/debating the existence or non-existence of the Cauchy stress, which is questioned in a theory proposed by Mr. Falk H. Koenemann. While such discussions may appear funny or irrelevant to many mechanicians, I take it as a challenge, from an educational point of view, to clearly understand what continuum mechanics is about and what, if any, its limitations are. Unfortunately, the discussions have not been fruitful and have probably become annoying to many who read iMechanica. For that, I apologize for my part. However, I remain hopeful that some consensus may be reached if we can clear up or admit the misconceptions that have been brought up in the discussions from both sides. To begin with, I summarize below a list of possible misconceptions about a point in a continuum and a particle in a discrete system. It is my understanding that the two are fundamentally different but have been mixed up in Mr. Koenemann's theory as well as in the discussions. As many mechanicians are doing research in both continuum mechanics and discrete modeling (e.g., atomistic, molecular dynamics), such a list may not be totally irrelevant. Of course, I would welcome comments and discussions to make the list more accurate and more complete.

(0) First of all, a point in a continuum is not equivalent to, nor does it represent, a particle in a discrete system. The state at a point of a continuum, such as temperature or pressure, is a statistical average over many particles in a representative volume.
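Point (0) can be made concrete with a small numerical sketch (entirely my own illustration; the gas species, temperature, and sampling scheme below are assumptions, not from the post): the "temperature" computed from a single particle fluctuates wildly, while an average over a representative volume of many particles converges to the field value.

```python
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
MASS = 6.63e-26      # mass of an argon atom, kg (illustrative choice)
T_TRUE = 300.0       # temperature of the underlying ensemble, K

random.seed(0)
SIGMA = math.sqrt(K_B * T_TRUE / MASS)  # Maxwell-Boltzmann per-component std

def sampled_temperature(n_particles):
    """Temperature from the average kinetic energy of n particles,
    via <KE> = (3/2) k_B T."""
    ke = 0.0
    for _ in range(n_particles):
        vx, vy, vz = (random.gauss(0.0, SIGMA) for _ in range(3))
        ke += 0.5 * MASS * (vx * vx + vy * vy + vz * vz)
    return 2.0 * ke / (3.0 * n_particles * K_B)

print(sampled_temperature(1))        # a wildly fluctuating "temperature"
print(sampled_temperature(100_000))  # converges to the field value, ~300 K
```

The point-value of the continuum field only makes sense once the representative volume holds enough particles for the average to be stable.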

(1) A continuum has infinite number of points; a discrete system has a finite number of particles.

(2) A point in a continuum has zero volume and zero mass; a particle (e.g., atom) in a discrete system has a finite volume and a finite mass.

(3) A point on a surface of a continuum has zero area, and thus under an external pressure (p) the force on any single point is zero (f = pA, with A = 0). For a discrete system under external pressure, the external force acting on a particle at its surface is not zero (it is balanced by internal forces among the particles), but the sum of all the external forces (as vectors) on the surface is zero at equilibrium. The same equilibrium condition for a continuum leads to a surface integral, ∫ p n_i dA = 0, where n_i is the unit normal vector on the surface and p dA is the magnitude of the force acting on a differential area dA (not on a single point). This condition is simply force balance; it does not imply zero work done on the system by the surroundings.
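The surface integral in point (3) can be checked numerically. The sketch below (my own construction; the sphere, pressure value, and quadrature resolution are assumptions) discretizes a sphere under uniform pressure and verifies that the net vector force vanishes even though the force on each patch dA does not.

```python
import math

p = 101325.0  # uniform external pressure, Pa (an arbitrary choice)
R = 1.0       # sphere radius, m
N_T, N_P = 200, 400  # quadrature resolution in theta and phi

fx = fy = fz = 0.0
total = 0.0  # scalar sum of the patch-force magnitudes p dA, for contrast
for i in range(N_T):
    theta = (i + 0.5) * math.pi / N_T
    for j in range(N_P):
        phi = (j + 0.5) * 2.0 * math.pi / N_P
        dA = R * R * math.sin(theta) * (math.pi / N_T) * (2.0 * math.pi / N_P)
        # components of the outward unit normal n_i:
        nx = math.sin(theta) * math.cos(phi)
        ny = math.sin(theta) * math.sin(phi)
        nz = math.cos(theta)
        fx += p * nx * dA
        fy += p * ny * dA
        fz += p * nz * dA
        total += p * dA

print(fx, fy, fz)  # each component vanishes (up to round-off)
print(total)       # but the scalar magnitude p * 4 pi R^2 does not
```

This is exactly the distinction in the post: force balance (the vector sum is zero) does not mean the surface forces, or the work they can do, are zero.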

(4) Inside a continuum, force is undefined at each point. By selecting a differential area (dA) passing through a point, a differential force vector (often called traction, df_i = T_i dA) at the point exists. Considering the vector nature of both the area (through its normal direction) and the force, a second-order tensor is needed so that the force over an arbitrarily selected differential area can be evaluated. Inside a discrete system, interactions among the particles lead directly to forces acting on each particle. At equilibrium, the sum of all forces (internal and external) acting on each particle is zero.
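A minimal sketch of point (4), with a made-up stress state: Cauchy's relation t_i = σ_ij n_j gives the traction on any differential area through the point once the second-order tensor is known.

```python
import math

# A symmetric stress tensor at the point (values are made up), in MPa:
sigma = [[10.0,  2.0,  0.0],
         [ 2.0,  5.0, -1.0],
         [ 0.0, -1.0,  3.0]]

def traction(sigma, n):
    """Cauchy's relation: traction t_i = sigma_ij n_j on the plane
    with unit normal n."""
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

n = [1.0 / math.sqrt(3.0)] * 3  # unit normal along (1, 1, 1)
t = traction(sigma, n)
print(t)
# The differential force on an area dA through the point is df_i = t_i dA:
# as dA -> 0 the force goes to zero while the traction does not.
```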

(5) Body forces (e.g., gravity, inertia) in a continuum are defined as force per unit volume (mass density times acceleration vector, in units of [N][m^(-3)]). In a discrete system, gravity and inertia are concentrated in each particle.

(6) Since no individual particle is considered in a continuum, atomic bonds do not appear explicitly when the system is treated as a continuum. Atomic bonds in a discrete atomistic system are defined through atomistic interactions (not necessarily pair interactions), with an intrinsic length scale for each specific interaction (e.g., short-range/long-range, strong/weak interactions). The effect of atomistic interactions can be accounted for in a continuum model by approaches like the Cauchy-Born rule or its modified forms.
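As a loose illustration of the Cauchy-Born idea in point (6) (a toy model of my own, with an assumed Lennard-Jones pair interaction on a 2D square lattice; real applications are far more careful): the continuum strain-energy density is estimated by deforming the lattice vectors homogeneously, x → F x, and summing the pair energy per unit cell.

```python
import math

def lj(r, eps=1.0, r0=1.0):
    """Lennard-Jones pair potential with minimum value -eps at r = r0."""
    s = r0 / r
    return eps * (s**12 - 2.0 * s**6)

def energy_density(F, cutoff=4, r0=1.0):
    """Energy per (reference) cell of a square lattice under the
    homogeneous deformation gradient F (2x2), via the Cauchy-Born map."""
    e = 0.0
    for i in range(-cutoff, cutoff + 1):
        for j in range(-cutoff, cutoff + 1):
            if i == 0 and j == 0:
                continue
            # reference lattice vector (i, j) * r0 mapped by F:
            x = F[0][0] * i * r0 + F[0][1] * j * r0
            y = F[1][0] * i * r0 + F[1][1] * j * r0
            e += 0.5 * lj(math.hypot(x, y))  # 1/2: each bond shared by two sites
    return e

identity = [[1.0, 0.0], [0.0, 1.0]]
stretched = [[1.01, 0.0], [0.0, 1.0]]
print(energy_density(identity), energy_density(stretched))
# Stretching the lattice raises the energy density in this toy model.
```

The atomistic length scale (here r0 and the interaction range) enters the continuum energy only through this averaging over the unit cell; no individual particle survives in the resulting field description.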


Dear Rui:

In "(1) A continuum has infinite number of points; a discrete system has a finite number of particles. ", perhaps you mean: A continuum has an uncountable "number" of points while a discrete system has a countable "number" of points. Note that, for example, a Bravais lattice has infinitely many points but the set is countable (i.e. has a one-to-one correspondence with integers).

Note also that what we may call the "continuum hypothesis" in mechanics is very different from what it means in set theory. There, the "continuum hypothesis" (proposed by Cantor) says that there is no cardinality that is strictly larger than the cardinality of the integers and strictly smaller than the cardinality of the real numbers. (For a finite set, the cardinality is the number of elements of the set. A countable set, by definition, has the cardinality of the integers.)


Rui Huang's picture

Dear Arash,

Thank you for the clarifications. Apparently I am not familiar with set theory. For the Bravais lattice, would the number of particles be finite if we consider a system of finite volume? I understand that in atomistic modeling a periodic boundary condition is often used to model an infinite system.


Dear Rui:

A Bravais lattice has an infinite number of points that can be generated by a finite number of vectors (lattice vectors; let us assume we are dealing with a simple lattice) that define a unit cell. A unit cell in a Bravais lattice is a special case of what is called a "fundamental domain" for an object with a symmetry group. In the absence of defects, you can look at a unit cell and calculate, for example, the dynamical matrix that represents the behavior of the infinite system. In other words, using symmetry in three directions you reduce your problem to a system of finite size. But still the underlying lattice is infinite.
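The generation of an (in principle infinite) lattice from finitely many vectors can be sketched as follows (my own toy code; any finite computation simply truncates the index range):

```python
def lattice_points(a1, a2, a3, n):
    """All points i*a1 + j*a2 + k*a3 with |i|, |j|, |k| <= n
    (a simple Bravais lattice, truncated to a finite index range)."""
    pts = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                pts.append(tuple(i * a1[m] + j * a2[m] + k * a3[m]
                                 for m in range(3)))
    return pts

# FCC primitive vectors (lattice constant 1):
a1, a2, a3 = (0.0, 0.5, 0.5), (0.5, 0.0, 0.5), (0.5, 0.5, 0.0)
pts = lattice_points(a1, a2, a3, 2)
print(len(pts))  # (2*2 + 1)**3 = 125 points from just three generators
```

The full lattice is the countably infinite set obtained as n grows without bound; three generators encode it completely.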

When there are defects, for example a point defect, one usually assumes that, given a large enough domain, the interaction of the single defect (or a collection of defects) with the outside world (i.e., other defects) can be neglected. This way, analysis of an infinite domain with a single defect is reduced to that of a periodic system (one defect in each unit cell). When there are long-range interactions, e.g., electrostatic interactions between point charges, one has to be careful with this approximation.

If you have an extended defect like a domain wall or a free surface, you can still use some partial symmetries; e.g., particles in a plane parallel to the free surface have the same distortions, so you reduce the problem to a line or half-line problem, but not a finite one. Again you can make approximations and, in the case of free surfaces, use a slab (of large enough width) approximation.


Rich Lehoucq's picture


Do you have a source, or more, where the relationships between lattices and symmetry groups are introduced/derived? This is something I'd like to learn more about but lack a good source (let alone the time to do a careful search).

Thanks in advance,


Dear Rich:

There are many books that discuss applications of group theory in physics (my favorite is "Group Theory in Physics" by Wu-Ki Tung) but for applications in crystals the best book I've seen is:

Continuum Models for Phase Transitions and Twinning in Crystals by M. Pitteri and G. Zanzotto


Jason Mayeur's picture


Another text that I have found beneficial in this regard is Robert Newnham's "Properties of Crystals: Anisotropy|Symmetry|Structure".


Pradeep Sharma's picture


I particularly enjoyed the following book as it is fairly systematic (e.g. uses group theory) and the treatment is elementary (not usually the case): Crystal Properties Via Group Theory.

Rui Huang's picture

Here is the link to Mr. Koenemann's comment: (you have to scroll down the web page to see the comment).


Rich Lehoucq's picture


Your distinctions are reasonable to me. Some minor comments are

The force at a (continuum) point is zero, not undefined. If we suppose the force density can be represented as the divergence of a tensor, then integrating over a volume of zero measure gives zero force.

A useful analogy (for me anyway) is that particle mechanics can be associated with a discrete probability space, where the events are all possible subsystems of a particle system. There can be a countably infinite number of particles. Particle balances are realized as random variables. You get a restriction to a finite number of particles if the probability space is interpreted as a product probability space (positions and momenta, e.g. phase space). 

One way to relate discrete and continuum mechanics is by way of expectation in phase space. This was first done by Irving & Kirkwood (1950), doi:10.1063/1.1747782. It's a fascinating result that continuum balances can be realized via statistical considerations. The classical continuum theory results from a further approximation (that associated with simple materials).
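A heavily simplified, deterministic caricature of the Irving-Kirkwood construction (my own sketch, in 1D with a Gaussian weighting kernel in place of a phase-space expectation): discrete particle masses are smeared into a smooth continuum density field.

```python
import math

def gaussian_kernel(x, h):
    """Normalized Gaussian weighting function of width h."""
    return math.exp(-x * x / (2.0 * h * h)) / (h * math.sqrt(2.0 * math.pi))

def density(x, particles, h=0.7):
    """rho(x) = sum_i m_i w(x - x_i): a smooth mass-density field
    constructed from discrete particle masses."""
    return sum(m * gaussian_kernel(x - xi, h) for m, xi in particles)

# Unit masses on a regular 1D chain (spacing 1):
particles = [(1.0, float(i)) for i in range(-20, 21)]
print(density(0.0, particles))   # close to 1.0: one unit mass per unit length
print(density(0.37, particles))  # nearly the same -- the field is smooth
```

In the actual Irving-Kirkwood theory the kernel sits inside an ensemble expectation over phase space, and analogous constructions give momentum density and stress; this toy version only conveys the smearing idea.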


I've always thought of concentrated forces as Dirac delta functions. In reality we have very localized pressure distributions, and it has always made sense to me that this is simplified and modeled mathematically as a Dirac function: infinite at the point of contact, x, and undefined at the limits x+ and x-.

Is there a reason why such an interpretation would not fit nicely into continuum mechanics? It seems to fit very nicely into the mathematics around continuum mechanics as well (granted that I haven't looked deeply at theoretical problems).

Rich Lehoucq's picture


Dirac functions can be useful but you need to be careful from a mathematical perspective.

The density represented as a Dirac function is infinite but the integrated Dirac function over a volume is finite (assuming that the integrand not including the Dirac is continuous at the point where the Dirac is undefined).

Integrating a Dirac over a set of zero volume is zero. You see this by approximating the integral of a Dirac as the limit of really nice functions peaked about the point where the Dirac is undefined.

Also keep in mind that in the function spaces in which classical 3D elasticity is typically posed, point loads are undefined.


Dear Rich,

(1) I did not quite understand what you mean by this:

"... (assuming that the integrand not including the Dirac is continuous at the point where the Dirac is undefined)..."

What other integrand did you have in mind? One of the series of those (finite) test functions whose limit the delta represents? Or something else? 

(2) Also, in your next paragraph, you say that:

"...Dirac as the limit of really nice functions peaked about the point where the Dirac is undefined."

I get a feeling that, if we were to refer to the first diagram shown on the Wiki page for Dirac's delta "function", you would take Dirac's delta to be undefined at x = 0.

Well, if it is not defined at x = 0, and if it is anyway known to vanish everywhere else in the domain (i.e. for all other values of x), then how do we describe "it" at all? ...

... I think it's OK to consider Dirac's delta as defined at x= 0. It's OK to use the word "defined" here, because the delta is not a function anyway. (And, just in case I am wrong, I would very much like to know what word I should use in the place of "defined.")

Rich Lehoucq's picture


Let me clear up my rather loose statements.


The well-known identity \int ( f(x) \delta(x) ) dx = f(0) assumes that f is continuous at 0.


We have that \int ( f(x) \phi_n(x) ) dx has limit f(0) when the \phi_n(x) are peaked about zero, \int \phi_n(x) dx = 1 for all n, and the converging sequence of functions \phi_n(x) is continuous at zero. This is a helpful way to understand what is going on. Think of normalized Gaussians.
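This limit can be checked numerically (my own demo; the test function and the quadrature are arbitrary choices): with normalized Gaussians peaked at zero, \int f(x) \phi_n(x) dx tends to f(0) for f continuous at zero.

```python
import math

def phi(x, n):
    """Normalized Gaussian of width 1/n: integrates to 1 for every n."""
    return n * math.exp(-0.5 * (n * x) ** 2) / math.sqrt(2.0 * math.pi)

def f(x):
    """Any function continuous at 0; here f(0) = 1."""
    return math.cos(x) + x

def integral(n, a=-10.0, b=10.0, m=200001):
    """Riemann-sum approximation of the integral of f(x) * phi_n(x)."""
    h = (b - a) / (m - 1)
    return sum(f(a + i * h) * phi(a + i * h, n) * h for i in range(m))

for n in (1, 10, 100):
    print(n, integral(n))  # tends to f(0) = 1 as n grows
```

For n = 1 the value is the Gaussian average of f (about 0.607 here), and as the peaks narrow the integrals approach f(0): exactly the distributional limit described above.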

Arash provided a concise but precise explanation of delta functions in the sense of distributions. \delta(x) is undefined at zero, but its integral over any volume containing zero in its interior is defined. The term "delta function" is strictly formal and only makes mathematical sense as a linear functional. And so it doesn't matter whether \delta(0) is undefined.



(1) That's a nice clarification... (Actually, about the first part, I got what you must have meant soon after my posting... Happens...)

(2) Once again, I would like a mathematician to clear up this matter for me:

Is Dirac's delta really undefined at the only point where it has support?

After all, the requirement that the range value must be unique applies only to a function, not to a distribution---am I right? If I am wrong, then why be comfortable calling Dirac's delta a distribution either? On the other hand, if I am right, then what's the harm in taking Dirac's delta as being defined at the support point? Isn't the whole point in having an idea like Dirac's delta only about distinguishing that one point---the point of support? That's my point.

(Someone who is strong in mathematics even if not a professional mathematician himself, is also welcome to clarify.)


The domain in which the \delta function is defined is a function space, not the domain where f(x) (the test function the \delta function operates on) is defined. The "point" associated with the \delta function is f(x), not x.

In this sense, we can say the value of the \delta function at the "point" f(x) is f(0). There is no need (it is even meaningless) to talk about the value of a functional at x, especially for the \delta function, which is a functional that cannot be generated by any ordinary function.

As for the "harm" of defining the value of the \delta function at x, I think it mainly comes from the mathematical considerations mentioned above.

Hope this helps.


Gopinath Venkatesan's picture


I thought the δ(x) function is defined on the domain of x. For example, δ(x) is used in the definition of the moving-loads problem, where a moving load P acts on a bridge structure. To capture the different points of contact due to the moving load, the delta function is written as δ(x - ct), and the force function accordingly as δ(x - ct) times P, where c is the speed of the moving load and t is the time, so ct defines the point of action of the load P.


Graduate Student

University of Oklahoma
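The moving-load usage described above can be sketched in code (my own illustration; the load, speed, span, and virtual displacement below are made-up values): in the weak form, P δ(x - ct) acting on a virtual displacement v(x) contributes the virtual work P v(ct), so the delta simply "reads off" v at the contact point.

```python
import math

P = 10.0   # load magnitude
c = 2.0    # load speed
L = 10.0   # span length

def v(x):
    """An assumed virtual displacement satisfying v(0) = v(L) = 0."""
    return math.sin(math.pi * x / L)

def virtual_work(t):
    """Action of the moving load P * delta(x - c t) on v: P * v(c t)."""
    x_contact = c * t
    if 0.0 <= x_contact <= L:
        return P * v(x_contact)
    return 0.0  # the load has left the span

print(virtual_work(2.5))  # contact at midspan x = 5: work = P * v(5) = 10.0
```

Nothing here requires δ to have a pointwise value; only its action on the (test) displacement field ever appears.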


In mechanics, δ(x) is usually used to REPRESENT a unit concentrated load applied at x = 0. It is not the mathematical EXPRESSION of a "force function" as you mentioned. It is a functional defined on the test function space, not a "function" in the ordinary sense.

When the external load is not smooth or regular enough (for example, a concentrated load), the classical solution of the corresponding PDE does not exist; we can only talk about a generalized solution of the PDE. In this case, the weak form of the considered PDE is often used. In terms of mechanics, this is just the "virtual work" form of the PDE.

The right-hand side of this "virtual work" form is the virtual work done by the applied force on the virtual displacement (test function). Then, for a given external load and every virtual displacement (test function), there is a real number (the work) associated with it. In this sense, we can take the "external load" as a functional. δ(x) is just such a functional, which gives 1·u(0) for a test function u(x) defined on some suitable function space.

δ(x) is often described as a "function" such that δ(x) = infinity at x = 0 and δ(x) = 0 elsewhere. From my understanding, this is just to give us some "feeling" for this mathematical object, not its "expression". Some of the properties of δ(x) cannot be understood if we view it as an ordinary function.

Of course, δ(x) also has some relationship with ordinary functions, since it can be viewed as a functional that is the weak* limit of functionals generated by ordinary functions. These functions converge pointwise to a "function" that is infinity at x = 0 and 0 otherwise.






Gopinath Venkatesan's picture

Dear Xu

Thanks for the comments.

I wanted to mention only that δ(x) is defined on the domain of x but the range may be dependent on the function it is associated with.

you said: <<it is not the mathematical EXPRESSION of a "force function" as you mentioned.>>

But I never said that. I only indicated where we associate a delta function to activate the effect of load, so delta function acts like a "on/off" switch. The force function (that I mentioned) is only relevant to that problem, where the product δ(x-ct)P is considered as a force function.

The functionals you are talking about come in when delta acts on a function, like δ(y(x)), I believe, and that's when the domain changes to the function space (from the usual x). While I understand that the domain here becomes the function space, I don't know what its range would be (a function space?).


Graduate Student

University of Oklahoma

Dear Venkatesan,

From my understanding, the domain of δ(x) is the test function space. The range of δ(x) is always the real line; it is not "dependent on the function it is associated with".

Engineers invented δ(x), and mathematicians established its solid mathematical foundation. As I said before, we can have our own understanding of δ(x), but the key point is to use it properly to get right results.

Thank you very much for your comments.

Best regards







Thanks for your reply.

I am not sure about the right mathematical jargon/language. But yes, I can see that as a functional, Dirac's delta would have to take as its input a set of functions and produce as its outputs a set of corresponding numbers. Now, referring to the MathWorld web page, I gather that the input set consists of those test functions, and so, the output numbers are going to be nothing but the values of those test functions at x = 0. This, essentially, is what you say, and there is no question of disagreement about it.

But all of it still does not address the basic issue that I had raised.

Another way to state that basic issue is to scroll down the MathWorld page a little bit and make a reference to eqs. (2), (3) etc. in that page. The eq. (2), in particular, gives the property of Dirac's delta involving a certain "a".

Now my question is: In reference to eq. (2), how do we capture the fact that the vertical line for Dirac's delta is going to be erected at x = a but not at x = 0 or at any other point? How do we communicate this basic fact?

The short and sweet way to communicate it would be to say that Dirac's delta is defined at x = a but not at x = 0. Now, what harm is there with that? That is the basic question I have. As I indicated above, personally, I can see no harm at all because the fact that Dirac's delta is not an ordinary function is, already, a part of the context, and so, it need not come in the way of saying that the delta is defined at x = a.

And, if mathematicians, when they talk to each other, do not say that Dirac's delta is defined at x = a, then how do they tell each other the distinction of "a" from all other points? Or is it that they always remain flying high up in the abstraction of linear functionals/function spaces and never come down to that domain in which x = a is defined? That too is a secondary question I have as an engineer.

Thanks in advance for clarifying these specific matters. 


- - - - -
Even as you read this, I remain jobless (as I have, for years)

Dear Ajit,

I suppose that when mathematicians talk about "Dirac's delta defined at x = a", they use the notation δa(x) or δ(x - a) to communicate that it is a linear continuous functional which returns the value of the test function at x = a. Here a is a parameter used to identify a specific δ function. Smile

Anyway, putting the mathematics aside, as mechanicians we can have our own language. The key point is to use the δ function properly to get right results.

Best regards




Dear Ajit:

I don't see any ambiguity in the definition of Dirac's delta. One first defines it when the support is the origin and then the support can be shifted to any other point in the real line (or R^n for higher dimensional problems). Shifting can be defined for any distribution. Dirac's delta would not be completely specified unless you are given its support, which is a single point. So, one would say "Dirac's delta supported at x = 0" or "Dirac's delta supported at x = a" (these are two different distributions). Support of a distribution is the closure of the set of points (in the standard topology of R^n) for which the distribution is not the "zero distribution", i.e. for those points there is always a nonzero test function on which the distribution is nonzero.
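The view of a shifted delta as a functional can be mimicked loosely in code (my own analogy, not a rigorous construction): "Dirac's delta supported at a" is the map f ↦ f(a); shifting the support produces a different distribution, not a different pointwise value.

```python
def delta(a):
    """Return "Dirac's delta supported at a" as a linear functional:
    it maps a test function to its value at a."""
    def functional(test_function):
        return test_function(a)
    return functional

delta_0 = delta(0.0)  # supported at x = 0
delta_2 = delta(2.0)  # a different distribution, supported at x = 2

f = lambda x: x * x + 1.0
print(delta_0(f))  # 1.0 -- f evaluated at the support point 0
print(delta_2(f))  # 5.0 -- f evaluated at the support point 2
# Note there is no meaningful delta_0(x) for a number x: the "domain" of
# the functional is a space of functions, not the real line.
```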


Dear Mikael:

The rigorous way of looking at the Dirac "function" is what S. Sobolev and L. Schwartz (among others) built in the thirties and forties. In the rigorous theory, one does not think of point values of "generalized functions". What you would do is the following:

First, a set of test functions is defined. These are infinitely smooth functions that have compact supports, i.e., they are zero outside compact sets (closed and bounded sets in the case of R^n). Now a distribution (generalized function) is an element of the dual of this space. In other words, a distribution associates a real number to any given test function; distributions are linear, continuous functionals on the space of test functions.

In the case of Dirac delta, when it acts on any test function, it gives you the value of the test function at the origin (or the support point).   One can define derivatives of distributions, and many other operators.  

I'm not answering your question regarding discrete systems but if there is a rigorous theory it should be based on these ideas.


Dear Rich,

Inside a continuum, you can only define a density of a force---whether it's a body force (the volume density) or a surface/line traction force (the area/lineal density).

Defining a density is the only way theoretically available to bring the particle-mechanical concept of force into correspondence with the concepts defined within the continuum mechanical framework (and vice versa: to take those continuum densities outside of CM and into the realm of particle mechanics).

As such, the notion of a "force inside a continuum" is in principle undefined. (Actually, it would be an example of an arbitrary concept.)

Rich Lehoucq's picture


I agree with you. 

My understanding is that when we talk about force in continuum mechanics, we've integrated a force density over some volume. If the volume is of zero measure, then the force at the point is zero, but the force density certainly may not be.



No, you didn't agree with me, really speaking.

I still maintain, force is fundamentally an undefined term within continuum mechanics; only (some form of a) force-density is. In contrast, you think that it would be possible to speak meaningfully of a force when the volume is zero. I maintain that you can't. That's the difference in our positions.

Note, the limit of a function at some point is not the same as the value of that function at that point. The two concepts differ. Sometimes, only the limit may exist at a given point; the function itself may not even be defined at that point. Such, precisely, happens to be the case with the idea of a "point-force" within a continuum---it is not defined.

Rich Lehoucq's picture


OK, you make an interesting point. When I speak of a zero volume and then integrate, this is allowable mathematically, although maybe not meaningful from a mechanical perspective.



Good that you find it all interesting.

But I am not sure if even mathematics allows you what you say it does.

Do you have any evidence or any support issuing forth from the science of mathematics (let alone of mechanics) for saying that you can begin with a volume of zero measure and still be able to integrate around its faces (each, obviously, of a zero measure, too), rather than begin with a finite volume and then approach the zero size for it via an appropriate limiting process?

I think not. Newton himself was at pains to emphasize precisely this point (even though calculus was too new to be communicated very effectively, even by himself). And no functional analyst or measure theorist has been able to override the basic considerations that were known to Newton himself.

Simply put, it's best to remember it all this way, with twin-points:

(1) All continuum theoretical definitions ultimately make reference to a differential element. [BTW, it was precisely this point which I had pointed out right in my very first comment w.r.t. Falk Koenemann's theory---right last year. It's been satisfying to note that many / all of those points have later been noted by other iMechanicians, this year.] This includes definitions of "scalars" like pressure or temperature too. (Not even a temperature is a point phenomenon---it, too, requires a differential element for its definition.)

(2) As Newton himself had emphasized (and all mathematicians since then have), a differential element has infinitesimal size, i.e. a size that is, emphatically, not zero even though it can be made as small as you would like it to be (simply because it's being used in a limiting process). Both epsilon and delta in the famous epsilon-delta way of putting it are non-zero in size.

Conclusion: Mathematically as well as mechanically what you say cannot be done. If you have other evidence, I would like to know of it.


If the reader would permit me a bit of a relevant aside: The Objectivist philosopher Harry Binswanger has argued, perhaps based on certain ancient Greeks' ideas, that even something as basic and simple as motion itself cannot be described as something that happens at a point; it can only be described in reference to the ever smaller measurements of length (dl) and duration (dt) neither of which can at all be made zero, in principle. It was the later rationalist tradition which popularized the idea of something happening at a point of space even if the point itself was, as everyone knows, left undefined in Euclid's original texts.

In other words, zero cannot be the basis of even a mathematical definition of motion. (I keep it in mind with a very meaningful pun (self-made) about it: "Zero is at the base of nothing" or, stronger: "Nothing can have zero as its basis.") Zero is strictly a derivative concept, meaningful only for closure in the context of certain higher-level operations.

The idea is straightforward to understand if you consider space as a continuum (a concept wherein materiality is not retained, only spatial attributes are).

[BTW, as to Binswanger's position, I could only glean some bits about it by going over the free course pamphlets etc. available at the Ayn Rand Book Store and other similar sources. I have been having no money for years (including the time I was in the USA) to be able to buy his products. So, for clarifications on his positions, contact him directly.] 

- - - - -
Even as you read this, I remain jobless (as I have, for years)

Pradeep Sharma's picture

Dear Rui and Rich,

Irving and Kirkwood's is the classic paper which, as pointed out by Rich, was one of the first few to establish links between discrete and continuum mechanics. There is a more recent paper by Professor Ian Murdoch which is quite clear and rigorous in its exposition. My student and I have been working on related topics for a while, and I found this paper to be the most useful as far as linking atomistics with continuum mechanics is concerned---particularly the brand and style of continuum mechanics most mechanicians learn.

Rich Lehoucq's picture


Professor Murdoch's papers are excellent, especially for those trained in continuum mechanics. It's always bothered me, though, that Ian's weighting functions are not assumed to be nonnegative. Hence, negative mass cannot be ruled out.

An excellent, largely unknown paper is "Die Herleitung der Grundgleichungen der Thermomechanik der Kontinua aus der Statistischen Mechanik" by Walter Noll. (See "Derivation of the fundamental equations of continuum thermodynamics from statistical mechanics" for an English translation.) This is a remarkable paper where the continuum balances are derived (in contrast to the approximate linear momentum and energy balances derived by Irving & Kirkwood). Noll is able to do this because of two lemmas introduced at the end of the paper (Murdoch's work has made much use of these lemmas).

The English translation, and a commentary I'm revising, will be published by the J. of Elasticity. You might also read The Statistical Mechanical Foundations of Peridynamics I. Mass and Momentum Conservation Laws where we demonstrate that the continuum theory derived by Irving-Kirkwood is not the classical continuum theory. The classical theory arises from a subsequent approximation.


ps congrats on your recent award.

Pradeep Sharma's picture

I agree....Noll's paper is great....I did not realize that this paper has been translated (by you!) and put on arxiv--thanks for pointing this out. I will re-read it. We have used Noll's results as well. When is the translated paper expected to appear in J. El? I will take a look at the peridynamics paper....

Rich Lehoucq's picture


I think the paper will appear sometime this summer. I'll post to imechanica.

Due credit should be given to Eliot Fried and Roger Fosdick, who heard that I had translated for my own research, and then made the effort to get the translation published.


Dear Rui (and others),

In your point no. (0) above, you say:

"... The state at a point of a continuum such as temperature and pressure is a statistical average of many particles in a representative volume."

I think that the inclusion of the word "statistical" here makes it all a bit too narrow, theoretically speaking.

The word "statistical" suggests randomness... Now, do we have to assume that state properties such as temperatures and pressures must be suffering random fluctuations inside every continuum? Why can't these fluctuations be just nonrandom or systematic fluctuations? After all, continuum is just an abstraction, right? (Here, concerning the basic nature of continuum, I was reminded of Zhigang's excellent post on the topic of why we would have to invent the continuum if it were not to exist already...) Since the continuum is basically an abstraction, you can always hypothesize a nonrandom fluctuation for it, right? An easy example: Electronic wavefunction is, by QM, random; but its representation in MD simulations is completely deterministic.

Thinking further, I am not even sure if the state at a point within a continuum has to be an average (whether statistical or deterministic). In fact, doesn't this suggest circularity in a sense? Consider this: The state at a point P is determined by the state at other points (an average of whose values is to be taken at the point P), and the state at those other points is determined by the state here at point P... That, clearly, is a circularity, with the concept "value of a state" being basically undefined all along...

This circularity is easily broken by recognizing that the continuum description basically (even axiomatically) includes a state definition at each point.

Averaging would be useful for having a steadier or lower-differential order description of more complex phenomena such as thermal fluctuations. Yet, the continuum model itself need not be bound by the process of taking an average---that's the basic point.


Apart from it all, Rui, your effort is very much laudable and appreciated. I mean, it's a very good list of points.

Amit Acharya's picture


A not so well-known paper by a not well-known author using space-time averaging which I have found useful is:

Babic, M. (1997) Average balance equations for granular materials, International Journal of Engineering Science, v. 35, p. 523-548.

This actually deals with, if i might say, the MD problem with collisions.

As a somewhat relevant, I hope, aside: ultimately, the main question, in my opinion, is that of closure - how do you write the terms appearing in these average balances, defined in terms of microscopic entities, in terms of the averaged quantities or a small augmentation to the set of these averaged quantities? This *is* the non-trivial question and at the heart of the prediction of memory effects, macroscopic dissipation arising from microscopically conservative physics, macroscopic stick-slip as a limit of mixing up microscopically fast motions with dead stops, etc.

A thought-provoking book addressing small parts of this very big question of closure is:

Tartar, Luc, From Hyperbolic Systems to Kinetic Theory: A Personalized Quest, Lecture Notes of the Unione Matematica Italiana, Vol. 6, 2008, XXVIII, 282 p., Softcover, ISBN: 978-3-540-77561-4.


The technical parts (the ones I understand! - I figure it will take me two lifetimes of dedicated learning to really understand Tartar) are pure luminous gold. You will also get an interesting and personal view on science and scientists from one of the most serious thinkers of our time and the last century, in my opinion.


- Amit





Pradeep Sharma's picture

Amit, thanks....I was not aware of Babic's paper. I will take a look

Closure? With respect to what operation(s), precisely?


[An aside: Since I have used precise mathematical terms, this should be enough to convey my ideas well about this topic, right? Or, is it that I am actually wrong? Can I be---if I said what I did in this aside?]

mohammedlamine's picture

The density of a particle is one of its properties; it defines its mass per unit volume. If we need to integrate over the continuum domain, we can use a differential volume dV. As in fluid mechanics, these properties can be applied to solid mechanics. The kinetic energy per unit volume can then be defined as (1/2)rho v^2, where v is the velocity magnitude.


mohammedlamine's picture

Dear Mr Rui Huang

In point (2) of your blog you have defined a discrete system with a finite volume and a finite mass. I would prefer to call these a differential volume dV (or elementary volume) and a differential mass dm (or elementary mass). The continuous domain can then be recovered from them by exact integration, or numerically by discretization of the continuous domain.

Mohammed lamine MOUSSAOUI
