Journal Club Theme of September 2015: Kinematics of Crystal Elasto-Plasticity: F=FeFp

celiareina's picture

This month’s blog focuses on one of the most common kinematic assumptions in the continuum mechanics of finite elasto-plasticity: the multiplicative decomposition F = FeFp. This expression decomposes multiplicatively the deformation gradient of the body, F, into the elastic (Fe) and plastic (Fp) deformation tensors; it will be discussed here in the context of dislocation-mediated plastic deformation.

Origins

The idea behind the multiplicative decomposition F = FeFp (Lee and Liu 1967, Lee 1969) lies in the chain rule for the deformation mapping when the material can fully relax its elastic deformation, see Figure 1(a). In that case, one can decompose the deformation mapping φ, relating the reference and deformed configurations in a Lagrangian description, as φ = φe ∘ φp, and F = FeFp would naturally follow.
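In symbols, the chain-rule argument is simply the following (a minimal sketch in LaTeX notation; φ denotes the total deformation mapping and φe, φp the elastic and plastic mappings of Figure 1(a), which only make sense when the relaxed configuration exists):

    \varphi = \varphi^e \circ \varphi^p
    \quad\Longrightarrow\quad
    F = D\varphi = \big(D\varphi^e \circ \varphi^p\big)\, D\varphi^p = F^e F^p .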

 Figure 1: Elasto-plastic deformations with compatible (a) and incompatible (b) intermediate “configurations”.

However, this elastic relaxation is, in general, not possible due to the presence of dislocations. When these defects are contained in the interior of the material, they naturally induce an elastic deformation around them, which prevents the separation between the elastic and the plastic contributions, cf. Figure 1(b). Then, the intermediate configuration commonly associated with Fp is physically unrealizable, or, in other words, incompatible (Curl Fp ≠ 0), and the chain-rule argument for the multiplicative decomposition is no longer valid.

Questions and sources of controversy

The absence of a physically realizable intermediate configuration has raised many issues, such as the precise physical meaning of the individual tensors Fe and Fp, the existence of the multiplicative decomposition (Green and Naghdi 1971, Deseri and Owen 2002), its uniqueness (Lee 1969, Rice 1971, Mandel 1973, Nemat-Nasser 1979, Dafalias 1998), or the appropriate measure for the dislocation content in the body (Steinmann 1996, Acharya and Bassani 2000, Cermelli and Gurtin 2001).

These controversies have led to various studies that try to provide a micromechanical understanding of F = FeFp (Davison 1995, Deseri and Owen 2002, Reina and Conti 2014, Reina et al. 2015). Alternative decompositions of the total deformation gradient F have also been proposed: some of them are additive (Green and Naghdi 1971, Nemat-Nasser 1979, Zbib 1993, Pantelides 1994, Davison 1995), whereas others consist of the product of two (Clifton 1972, Lubarda 1999) or three tensors (Lion 2000, Clayton and McDowell 2003, Hennan and Anand 2009).

A micromechanical understanding of F=FeFp

The elastic and plastic contributions to the total deformation at the continuum scale are, as previously mentioned, not a trivial issue. However, at a mesoscopic scale where the dislocations and active slip planes are resolved (denoted with subscript ε), the elastic and plastic mechanisms of deformation are clearly differentiated and the incompatibilities are concentrated at the dislocation lines. It is then possible to obtain, from kinematic considerations, mathematically consistent definitions for Fε, Feε and Fpε that are uniquely given from the mesoscopic deformation mapping φε (Reina and Conti 2014, Reina et al. 2015). The macroscopic quantities of interest Fe and Fp may then be defined as the limits of the corresponding mesoscopic objects as ε → 0 (equivalent to a zoom-out process in a real material). This provides definitions for the different deformation tensors that: (i) are explicitly and uniquely given from the microstructure; (ii) do not make use of any physically unrealizable intermediate configuration; and (iii) do not assume any a priori relationship between them.

 

Figure 2: Sketch of a two-dimensional elasto-plastic deformation involving an edge dislocation. The jump set (slip plane) is represented in red in the reference configuration. The Burgers vector, bε, and the normal to the jump set, n, are also indicated in the figure.

More specifically, the deformation mapping at the mesoscopic scale, φε, can be described via functions that are continuous everywhere except at the planes where slip has occurred (the area swept by dislocations during their motion), see Figure 2. Their distributional gradient, Dφε (the extension of the notion of gradient to discontinuous functions), is well characterized and leads to a consistent definition for the deformation gradient Fε (see Reina and Conti 2014 for a more precise mathematical characterization):

Fε = Dφε = ∇φε + [φε] ⊗ n H|J .                                                                                                     (1)

It consists of an absolutely continuous part, ∇φε, and a singular part, [φε] ⊗ n H|J, concentrated (via the surface measure H|J) on the planes over which slip has occurred (the jump set J); see Figure 2. The quantity [φε] represents the displacement jump at a point x on the jump set, i.e. [φε](x) = φε⁺(x) − φε⁻(x), where the + side is the one indicated by the normal n to the plane. When the reference configuration is a perfect crystal, as in Figure 2, the set over which the jump occurs ought to be planar (a slip plane) and the jump must satisfy additional crystallographic constraints. These read, for a single active slip system,

φε⁺(x) = φε⁻(x + bε)  and  bε · n = 0,                                                                                                     (2)

where bε is the Burgers vector; see Figure 2. Equation (2) indicates that atoms on the slip plane that were separated by a Burgers vector in the reference configuration bond with each other after the slip has occurred.
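As a purely illustrative, one-dimensional analogue of the structure of Eq. (1) (my own toy construction, not taken from the cited papers), the following Python sketch builds a map with one jump and checks numerically that its finite-difference derivative splits into a smooth part plus a concentrated spike whose integral equals the jump:

    import numpy as np

    # 1D analogue of Eq. (1): phi(x) = smooth part + a jump of size "jump" at x0
    x0, jump = 0.5, 0.3
    x = np.linspace(0.0, 1.0, 2001)
    h = x[1] - x[0]
    phi = 0.1 * np.sin(2 * np.pi * x) + jump * (x > x0)

    # finite-difference "distributional" derivative
    Dphi = np.diff(phi) / h

    # absolutely continuous part: the derivative of the smooth piece
    grad_phi = 0.1 * 2 * np.pi * np.cos(2 * np.pi * x[:-1])

    # singular part: concentrated at x0, and its integral recovers the jump [phi]
    singular = Dphi - grad_phi
    print("integral of singular part:", np.sum(singular) * h)     # ~ 0.3 = [phi]
    print("largest |singular| away from x0:",
          np.abs(singular[np.abs(x[:-1] - x0) > 0.01]).max())     # small (finite-difference error only)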

The first term in Eq. (1), ∇φε, represents the standard gradient away from the slip planes, and it can therefore be physically identified with the elastic deformation tensor Feε (note that Curl Fε = 0 by construction, whereas Curl Feε does not vanish in general). With regard to the definition of Fpε, we remark that, at the mesoscopic scale, the incompatibilities are physically concentrated at the dislocations. It is then possible to find, in every subdomain away from the dislocations, a local decomposition of the form φε = φeε ∘ φpε and identify Fpε with Dφpε. It can be shown that these local decompositions are unique up to a translation and that Fpε is uniquely and globally defined from φε (Lemma 5.2 of Reina et al. 2015). Then, the application of Eq. (1) to a purely plastic deformation, as the one shown in Figure 3, leads to a general expression for Fpε:

Fpε = Dφpε = I + Σi [φpε]i ⊗ ni H|Ji ,                                                                                                     (3)

where ∇φpε = I due to the lack of an elastic distortion, and the sum in the second term is performed over all the slip planes (the jump set) in the reference configuration. That jump set can be mathematically identified as the locus of points where φpε is discontinuous, and physically corresponds to the pullback of the individual slips to the reference configuration, in accordance with the order in which they occurred. Although this pullback may seem artificial, it is entirely due to the Lagrangian description adopted, and it encodes the non-commutative character of sequential deformations in the finite kinematic setting. We further remark that, induced by this pullback operation, the Burgers vector (defined at each point through the crystallographic constraints of Eq. (2)) is not always normal to n; see for instance the extract in Figure 3 where the support of the Burgers vector is indicated.
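The non-commutative character mentioned above can be checked with a short numpy experiment (my own toy example in 2D, not the construction of the papers): two simple-shear slips on orthogonal slip systems give different total plastic tensors depending on the order of application, even though each factor is volume preserving.

    import numpy as np

    def slip(gamma, s, m):
        # single-slip plastic shear of amount gamma on system (s, m), with s.m = 0
        s, m = np.asarray(s, float), np.asarray(m, float)
        return np.eye(2) + gamma * np.outer(s, m)

    F1 = slip(0.2, s=[1, 0], m=[0, 1])   # slip system 1
    F2 = slip(0.3, s=[0, 1], m=[1, 0])   # orthogonal slip system 2

    print(np.linalg.det(F1), np.linalg.det(F2))   # both 1: slip preserves volume
    print(F1 @ F2)   # system 2 acting first, then system 1
    print(F2 @ F1)   # system 1 acting first, then system 2 -- a different tensor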

 

Figure 3: Plastic deformation induced by the composition (left to right) of slips along two orthogonal slip systems. The deformations, top to bottom, represent elements of a sequence as ε → 0. The last row of images corresponds to the continuum limit.

The above mesoscopic description delivers globally unique definitions for Feε and Fpε from φε. Their limits, as ε → 0 (e.g. the sequence in Figure 3 for a compatible plastic deformation), lead, respectively, to unique definitions of Fe and Fp. It can then be shown that the multiplicative decomposition F = FeFp holds at the macroscopic scale with these definitions (Theorems 6.3 and 6.4 of Reina et al. 2015), and that Fp satisfies the classical rate formula of Rice 1971, Ḟp Fp^-1 = Σα γ̇α sα ⊗ mα (formal result, Reina and Conti 2014). Furthermore, an explicit calculation of the Curl operator over Fpε and a suitable upscaling to the continuum limit lead to Curl Fp as a consistent definition of the dislocation density tensor when expressed in the reference configuration (note that this is only true if the reference configuration is a perfect crystal), cf. Theorem 6.2 of Reina et al. 2015.

The aforementioned results deliver a rigorous kinematic understanding of the multiplicative decomposition F = FeFp, where Fe and Fp measure the shape changes induced by elastic and plastic mechanisms, respectively. The derivations are currently limited to elasto-plastic deformations in two dimensions due to the mathematical complexity involved in the proofs. It is important to mention that these results do not invalidate other decompositions of the total deformation gradient, as the validity of any decomposition depends on the precise definition of the individual tensors involved; and these definitions are, in turn, essential for the development of constitutive relations and evolution equations for the internal variables.

Discussions

The topic presented here is certainly controversial and there is an extensive body of literature on the subject. Although many important studies have been cited in the introduction, the list of references and authors is far from comprehensive. The micromechanical justification for F = FeFp described above represents the author’s perspective on the subject. Other points of view and references are welcome.

Acknowledgements

I would like to thank my collaborators as well as the Lawrence Fellowship, the Hausdorff Center for Mathematics and the National Science Foundation (CMMI-1401537) for past and present financial support.

Comments

A false problem. The multiplicative and additive decompositions are, formally, equivalent: one may always define appropriate "strain" measures such that the two decompositions imply each other. Specifically, if we regard the multiplicative decomposition as just a "parametrization" of the motion, things become really simple. A sketch of the argument may be found in Appendix A of "Plasticity and non-Schmid effects", Proc. R. Soc. A, 2014.

So the true problem is not which one is better, or which one has more physical support (I would bet that all "micro-mechanics" arguments in favor of one can be easily adapted to prove that the other has the same "properties"; indeed, it would be amazing if one could a priori distinguish between the two decompositions). The problem is how we correctly write the stress-strain relationship, according to the definition of plastic strain(-rate) one is working with. At the macro-level, in general, for the description of the plasticity of metals, no matter what strain measures (or decomposition) we use, the flow rule is always associated, that is, the plastic rate is always related to the exterior unit normal of the yield surface. Now even if non-Schmid effects are ignored, this may not reduce to the classic normality rule. This simplest and most practical form (of the flow rule) is obtained only if a particular strain measure is adopted.

celiareina's picture

Dear Stefan,

Thank you for your comment. The two problems (kinematics and constitutive relation) are actually not independent. Both should be consistent and both are needed.

A very important aspect, as you mention, is to have appropriate state and internal variables with which to describe the behavior of the material. These could include, for instance, Fe and Fp, as is common in crystal plasticity. But these quantities together with F are not independent. The question then becomes how F, Fe and Fp, independently defined, relate to each other. This is where the micromechanical understanding section comes into place.

One could of course argue that other internal variables are more appropriate to characterize the material behavior (and this is indeed a very tricky and important question). In that case, their relation with Fe and Fp, or equivalent quantities, shall be derived (not assumed), and this task is far from trivial (cf. the proofs of the lemmas and theorems mentioned). For instance, the quantity Cp that appears in the additive decomposition is not well defined at the mesoscopic scale, as Fpε is a singular measure, and the product of two such singular measures (Fpε transpose and Fpε) is not well characterized (it is analogous to multiplying two Dirac deltas).
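The analogy with multiplying two Dirac deltas can be made concrete with a small, purely illustrative 1D computation (my own sketch, with an arbitrary box regularization): the regularized delta keeps unit mass for every width w, but the integral of its square grows like 1/w, so the "product of two deltas" has no sensible limit as w shrinks.

    import numpy as np

    x = np.linspace(-1.0, 1.0, 200001)
    h = x[1] - x[0]

    def delta_reg(w):
        # box-shaped regularized Dirac delta of width w and unit mass
        return np.where(np.abs(x) < w / 2, 1.0 / w, 0.0)

    for w in [1e-1, 1e-2, 1e-3]:
        d = delta_reg(w)
        # mass stays ~1, but the integral of d*d grows like 1/w
        print(w, np.sum(d) * h, np.sum(d * d) * h)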

The post is not about which decomposition is better, but about the understanding of one specific decomposition, namely the most popular one, F = FeFp. This point is actually made in the paragraph before the discussion section and is repeated here for clarity. The understanding of F = FeFp from the independent definitions of Fe and Fp does not imply anything about other decompositions. Those will depend on the precise definition of the variables used.

Best regards,

Celia

 

Thank you for your comments. Rest assured, it is not my intention to divert the "theme of September" off-topic. On the contrary. Last year I used (almost) the same approach as you described here, but in the context of grain boundary sliding (GBS), hence it must be true that ideas really "float" in the air (see also the paper of Weismuller et al., "Kinematics of polycrystal deformation by grain boundary sliding", Acta Mater., 2011). Unfortunately, due to other research priorities of a more practical and applied nature, my notes on this beautiful topic are incomplete at this moment, but somewhere in the near future I hope I'll find the time to make them available.

Nevertheless, you may agree that the context of a polycrystal whose constituents (grains) may slide against each other is not that different from that of a single crystal with internal sliding surfaces. Then I can show, under quite general conditions, and using Hill's homogenization procedure, that the overall response of the polycrystal can be described within the classical framework:

1) the kinematics, with an additive decomposition of the overall rate of deformation into "elastic" and "plastic" components, D=De+Dp;

2) the constitutive response Td=K:[D-Dp], where Td is an appropriate rate of the overall Cauchy stress.
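(A minimal numerical sketch of such a rate form, purely to fix notation: the isotropic choice of K and the numbers below are placeholder assumptions, not part of the homogenization argument.)

    import numpy as np

    lam, mu = 100.0, 80.0   # placeholder Lame constants for an isotropic K

    def stress_rate(D, Dp):
        # Td = K : (D - Dp), with K the isotropic elasticity tensor
        E = D - Dp
        return lam * np.trace(E) * np.eye(3) + 2.0 * mu * E

    D  = np.array([[1.0e-3,  2.0e-4, 0.0],
                   [2.0e-4, -5.0e-4, 0.0],
                   [0.0,     0.0,    0.0]])
    Dp = 0.4 * D   # an illustrative plastic part
    print(stress_rate(D, Dp))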

In the process, we have acquired an interpretation of Dp in terms of micro-mechanics events, pertaining to crystal plasticity and GBS.

The same, I assume, can be adapted to obtain the overall response of a single crystal with internal sliding surfaces, in terms of an additive (rate) decomposition.

Best, Stefan

celiareina's picture

Dear Stefan,

I am looking forward to seeing your paper and understanding the details of your derivation.

Best regards,

Celia

M. Jahanshahi's picture

Dear Celia,

Of course not all three deformation gradients are independent. Having F and writing the evolution for Fp as described in your post, Fe becomes a dependent variable. What I do not understand is what you mean by independent definition of F, Fe and Fp.

Mohsen

celiareina's picture

Dear Mohsen,

In general, the multiplicative decomposition is assumed, and therefore the three deformation tensors are not independent via that assumption. In our work, we take a different approach, and provide independent definitions for the individual tensors, from which we prove (not assume) that the multiplicative decomposition holds (Reina et al. 2015). These definitions are:

- F is the limit of the full deformation gradient Fε = Dφε at the mesoscale.

- Fe is the limit of the absolutely continuous part of the deformation gradient at the mesoscale, and therefore it physically represents the elastic distortion.

- Fp is the limit of Fpε, cf. Eq. (3), and represents the plastic distortion.

Interestingly, from these definitions, F = FeFp does not hold at the mesoscopic scale, but it does in the continuum limit.

Best regards,

Celia

Amit Acharya's picture

Celia,

A remark: While it is true that C_p in your setting is not well defined at the mesoscale, it is possible to define F_p at the mesoscale such that C_p is well-defined. Just take the jump part in your definition of F^p_\epsilon and divide it by l_\epsilon, where l_\epsilon is the width of a layer on which this jump part has support - i.e. spread out your discontinuity surface to a layer. Let l_\epsilon tend to zero as \epsilon goes to zero on upscaling. Then at the mesoscale everything is nonsingular, there are no non-trivial measures, and the analysis can be carried out by a Stokes-Helmholtz like orthogonal decomposition. In the limit, as \epsilon goes to zero, you get back exactly your mesoscale constructs. Sec. 3 of

Continuum mechanics of the interaction of phase boundaries and dislocations in solids

describes this. For instance, in eqn 2 there, the limit of A as layer width goes to zero would be your absolutely continuous part w.r.t. Lebesgue measure of F_\epsilon (and this has non-zero curl in the limit as well as away from it), the limit of grad B would be your F_\epsilon, and the limit of -WV there would be your F^p_\epsilon.

In essence, it seems to me that when you work with the type of measures you do in the context you are working in, you get a non-curl-free absolutely continuous part only when F_\epsilon is a nontrivial measure, i.e. when it really has singular content. In the way I describe it, even when things are smooth, you can have a non-trivial, non-curl-free elastic distortion. In what you do, right away at the mesoscale though you have a singular F_\epsilon whereas in what I am saying, this would happen only at the macroscale.

 

celiareina's picture

Dear Amit,

Yes, I indeed meant that Cp is not well defined in our setting. Of course, one can regularize the plastic deformation and have a well-defined product. We actually do that in order to prove that the determinant of the plastic deformation tensor is one, since the determinant involves products. However, the study would not be trivial. In that case one would be faced with the fact that, in general, the limit of the products and the product of the limits do not coincide.

In the semicontinuous formulation that we use, the elastic deformation tensor (absolutely continuous part) will indeed be curl-free if there is no singular part. But that would be entirely natural and physical, as in that case F = Fe.

Thank you very much for the reference. I will read it in detail and get back to you if I have any questions.

Best regards,

Celia

Amit Acharya's picture

Hi Celia: Yes, I agree, passing to the limit of products will not be trivial - in general passing to the limit is non-trivial, isn't it, or else you would not have the two papers that we are discussing here on the subject, right? :) But that the sequences of which one wants limits be well-defined is a minimum requirement.

As to what is entirely natural and physical for a fine-scale model, that is open to interpretation. As I see it, I would like everything to be as smooth as possible at the microscale so that at least there is hope of existence of solutions to a full model (not just kinematics but statics and dynamics), and then one can contemplate limits of such models at the macroscale. From this point of view, having singular measures (slip lines may still be ok, but with a dislocation it is very bad news for any kind of nonlinear pde theory) that enter nonlinearly in a microscopic theory can really mess things up. Keeping all this in mind, my point of view is that one wants to be able to pose and solve actual dynamical problems of individual and collective dislocation motion and statics at the microscale with continuous fields, as we do here

A single theory for some quasi-static, supersonic, atomic, and tectonic scale applications of dislocations

here

Traveling wave solutions for a quasilinear model of Field Dislocation Mechanics

here

Can equations of equilibrium predict all physical equilibria? A case study from Field Dislocation Mechanics

and here

On an equation from the theory of field dislocation mechanics

(and, for truth in advertising, I must say that, while I did contribute to this last paper, I can barely manage to hang on to the full mathematics here! The business is hard!!)

and then ask questions of upscaling such dynamics. Given the (natural) nonlinearities involved, I think even you would agree that having singular measures with dislocations at the microscale would not be a workable solution for the subsequent question of upscaling and even at the microscale itself.

Moreover, as you can see, we solve all kinds of natural physical problems in dislocation statics and dynamics with a non-singular theory - so there is no loss of generality and 'physicality' in doing so. I would say that, in fact, it is only more natural to have a smooth microscopic theory whose fields in the macroscopic limit may look singular, and then on upscaling even the form of the theory might change (precisely because things like limits of products not being products of limits intervene). After all, at the atomic scale there is always the interatomic scale to regularize your slip plane, so nothing really is singular! One can say one is operating at a mesoscale that is larger, but then we are back to the questions of nonlinearities with measures.

So, even with no singular part in the total deformation gradient, it can be absolutely natural and physical for F not to equal F^e - and this can actually solve many dislocation problems - both statics and dynamics.

celiareina's picture

Dear Amit,

In my opinion, both a sharp and a diffuse modeling of the slip are equally valid; and actually one would hope/want that such differences do not matter in the continuum limit. This is often the case. For instance, when we regularized the plastic deformation tensor for the proof of the determinant, we showed that both the singular measure and the smooth version of it converge to exactly the same continuous quantity, and thus they are equally valid physically.

Using one or the other approach is often a matter of taste, and in some circumstances it may depend on the problem that one is trying to solve and the techniques that one intends to use. In our case, we were trying to investigate the multiplicative decomposition, for which we wanted to have independent measures of the elastic and plastic distortions. In this case, it is far more convenient to have slip over surfaces of discontinuity, as the two mechanisms (elastic distortion and slip) can be physically identified from kinematic arguments. If a diffuse interface model is used from the start, then it would be much harder to differentiate between a strong elastic shear and a diffuse slip. One may have to introduce energetic arguments for it, which would complicate the analyses.

Regarding the evolution with singular measures, I think there is no problem with that (at least for some sets of problems). I have seen works that deal with the continuum limit of the evolution of empirical measures (which are sums of singular measures), both for particle models and for dislocations. But I have not dealt with these types of problems myself so far.

Best regards,

Celia

Amit Acharya's picture

Celia - For pde you do need some smoothness, especially for nonlinear pde. In light of this, if you say there is no problem with singular measures as 'solutions' of nonlinear pde of evolution, I accept it as your opinion. Time will tell.....

celiareina's picture

Dear Amit,

I am not at all knowledgeable on this topic, and I think it may be best to discuss these issues with a mathematician who is an expert on measures. Some people have worked out hydrodynamic limits of discrete particles, obtaining their continuum evolutions, and that is the only thing I wanted to point to as something that has been achieved via limits of singular measures regarding evolution. The difficulties that may arise or exist, I do not know, and you certainly know better.

Best,

Celia

Dear Amit,

You pointed to a subtle distinction. In elasticity (or elliptic problems), for example, singular measures usually appear in the form of boundary conditions (e.g., Eshelby's inclusion problem), or point/concentrated forces (Green's integration method). (The two cases are not that distinct, since the former problem is usually solved by reducing it to the latter.) In these "singular" cases the solutions are smooth almost everywhere. But they do have singularities at isolated points, or lines, or surfaces, while still preserving their measure-like feature (when integrated with test functions they yield finite numbers). For an evolution problem, the example of a moving point force shows the same pattern. So generalized solutions (in the sense of distributions) shouldn't be excluded (at least on mathematical grounds). True, my examples were taken from the realm of linear PDE's...

Best, Stefan

Amit Acharya's picture

Amit Acharya, Robin J. Knops, Jeyabal Sivaloganathan (2019) On the structure of linear dislocation field theory,  Journal of the Mechanics and Physics of Solids, 130, 216-244.

Sec. 3, especially.

M. Jahanshahi's picture

The ambiguities inherent in the definition of F=FeFp (such as the identification of the intermediate configuration only up to rigid body rotations, etc.) set aside, the choice between one decomposition or the other seems to be a matter of convenience. In most formulations F=FeFp is preferable, while in others F=FpFe might be more suitable. For example, when the elastic strain energy is a function of Ee or Ce=Fe^TFe it might not make any difference to use either decomposition, but when the potential is a function of Fe only, and not Ce, then the application of different decompositions might have different impacts.

Mohsen

celiareina's picture

Dear Mohsen,

Thank you very much for your comment. Actually, in the decomposition F = FeFp for crystal plasticity there is no need of an artificial intermediate configuration, as Fe and Fp are measures defined in the reference configuration, where there is no rotation ambiguity (cf. the section on the micromechanical understanding).

Regarding the decomposition F = FpFe, it is possible to have the same definition for the elastic tensor as in F = FeFp. To see this, one can pass the expression of Eq. (1) to the limit (ε → 0), obtaining an additive decomposition with Fe as the first summand (the limit of the absolutely continuous part). Factorizing Fe to the left or to the right, one would then obtain expressions of the form of either of the two multiplicative decompositions (this strategy was used by Deseri and Owen, 2002). However, the expression for the plastic tensor will be different in the two decompositions, and this needs to be taken into account when writing its evolution equation. In the decomposition F = FeFp, the plastic tensor evolves as Ḟp Fp^-1 = Σα γ̇α sα ⊗ mα, where the slip system normals are referred to the reference configuration and are therefore constant and not affected by the elastic distortion. This would not be the case for the decomposition F = FpFe, where the plastic tensor involved would follow another evolution equation. Yet, a rigorous proof of F = FpFe with microstructural definitions of the individual tensors is, to the best of my knowledge, still missing.
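For concreteness, a minimal time integration of this flow rule could look like the sketch below (single slip system, prescribed constant slip rate, exponential-map update - all of these choices are illustrative assumptions on my part, not part of the result discussed above):

    import numpy as np
    from scipy.linalg import expm

    # single slip system, referred to the (fixed) reference configuration
    s = np.array([1.0, 0.0, 0.0])    # slip direction
    m = np.array([0.0, 1.0, 0.0])    # slip-plane normal
    gamma_dot = 1.0e-3               # prescribed slip rate (illustrative)
    dt, nsteps = 1.0, 100

    Lp = gamma_dot * np.outer(s, m)  # plastic velocity gradient, Fp_dot Fp^-1
    Fp = np.eye(3)
    for _ in range(nsteps):
        Fp = expm(dt * Lp) @ Fp      # exponential-map update

    print(Fp)
    print("det Fp =", np.linalg.det(Fp))   # stays 1, since s.m = 0

Because s and m are fixed in the reference configuration, the update matrix here is the same at every step.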

Best regards,

Celia

M. Jahanshahi's picture

Dear Celia,

Thanks for your detailed description. Actually, it is not my field of expertise to talk about the mesoscale. However, the evolution equation for Fp (the equation in your post) in crystal plasticity emerges as an outcome of maximum plastic dissipation using the decomposition F=FeFp. The same principle can be used to develop an equation for the evolution of Fp for the decomposition F=FpFe. Of course in this case the slip systems are no longer constant, but are obtained from the relations s = Fe S and m = Fe^-T M. This adds to the complexity of formulating an equation for the evolution of Fp. Further to my previous posts, the intermediate configuration is completely defined in crystal plasticity, since one has an equation for the evolution of Fp, and therefore the rotation part (Rp) as well as the stretching part (Up) are known. The ambiguity of rotation poses certain problems in continuum plasticity when the plastic behavior is assumed to be isotropic and one has an evolution equation for Cp or be (symmetric tensors). Yet, the intermediate configuration is still needed in crystal plasticity, because the elastic response is required to be expressed in terms of tensors which are defined with respect to this configuration.

Regards

Mohsen


celiareina's picture

Dear Mohsen,

The evolution equation Ḟp Fp^-1 = Σα γ̇α sα ⊗ mα can be understood from kinematic arguments by looking at the shear rate of every slip system and its contribution to the total deformation tensor. You can find these kinematic arguments on page 448 of Rice 1971 using the intermediate configuration, or on page 57 of Reina and Conti 2014 without the use of the intermediate configuration.

The classical point of view for the elastic deformation does indeed make use of the intermediate configuration. However, our analyses indicate that this is not needed, as both Fe and Fp can be defined as measures in the reference configuration (Reina et al. 2015). This is, in my opinion, comforting, as the intermediate configuration does not exist in general. Other works that use expressions analogous to Eq. (1) and identify there the elastic deformation without using the artificial intermediate configuration are those of Davison 1995 and Deseri and Owen 2002.

Best regards,

Celia

 

M. Jahanshahi's picture

Dear Celia,

Actually, this is very interesting. Thanks for updating me.

Sincerely,

Mohsen

 

Dear Celia,

The matter of the "intermediate configuration" was solved elegantly by Hill and Rice in their 1973 work "Elastic potentials and the structure of inelastic constitutive laws", SIAM J. Appl. Math. There the parametric interpretation of plasticity was shown quite clearly: all measures of deformation, elastic or plastic, are Lagrangian (defined on the reference configuration). The earliest source where the parametric interpretation appears, clearly an inspiration for the Hill and Rice work, is Green and Naghdi (1965), "A general theory of an elastic-plastic continuum", Arch. Rat. Mech. Anal.

Best, Stefan

M. Jahanshahi's picture

Dear Stefan,

As far as I remember, the rate of deformation tensor, D, in the work of Green and Naghdi is considered to be additive (comprised of elastic and plastic parts), while in many other references (in the works of Simo, for example) it was shown that the deformation gradient cannot simply be assumed to be additive. He also elaborated on the importance of the intermediate configuration. Also, we know that the decomposition F=FeFp (regardless of its problems) is based on physical backgrounds. For example, in the limit of infinitesimal deformations it leads to additive measures, etc. Now, what I understand from the discussion is that we want to base the measures on the reference configuration because the intermediate configuration is fictitious and cannot physically exist.

Sincerely,

Mohsen

Dear Mohsen,

1) Green and Naghdi in their 1965 paper (A general theory of an elastic-plastic continuum) first formulated a..."general theory"; then they exemplified it with an elastic-plastic decomposition of the total (Green) strain (E=E_e+E_p); after that, they reached the rate form for infinitesimal deformations. The essence here is the natural invariance (objectivity) of the theory (due to its "total" Lagrangian basis) and the parametric representation of the elastic potential (with E_p, or C_p, as parameter).

2) Simo reconsidered precisely the model/theory of Green and Naghdi (1965), but discarded their particular proposal of a potential with C_e=C-C_p as parameter and adopted instead the general form U=U(C,C_p). Nevertheless, a model with C-C_p as parameter can still be adopted, provided the list of (plasticity) parameters is configured appropriately; see Soare (2014), Plasticity and non-Schmid effects, Proc. R. Soc. A.

3) "we know that the decomposition F=FeFp (regardless of its problems) is based on physical backgrounds". from where, whom do "we know" ?; I am not aware about those "physical backgrounds" either. What I do know is that all kinds of decompositions, multiplicative, additive, mixed, etc, are rooted in one fundamental observation: the permanent deformation left behind by applied forces (be it in a crystal or a polycrystal). The differences between the many models trying to describe this reality is just in methodology.

4) "...because the intermediate configuration is fictitious and cannot physically exist." Must it be real ? It's a model. I would distinguish between models and reality. If we consider the similar context of the motion of a rigid body: Do Euler's angles correspond to actual configurations of a rigid body along its trajectory in space ? Only in very particular cases (e.g., rotation about a fixed axis), for otherwise, in general, the  configurations they describe are fictitious. And yet, the end result is Euler's multiplicative decomposition of the actual rotation in three simpler rotations. These parameterize the motion. Returning to our intermediate configuration... Any formulation based on it can be translated into invariant (Lagrangian) terms/measures and vice versa.

Best, Stefan

M. Jahanshahi's picture

Dear Stefan,

Thanks for your detailed descriptions. Actually, I have the following comments concerning my previous post:

1) In Simo's work (the famous 1988a and 1988b articles), he considered the strain energy to be a function of C and Cp (Cp^-1 to be more exact). In the examples provided, the strain energy is a function of

Cp^-1 : C = (Fp^-1 Fp^-T) : (F^T F) = Ce : I

and the dependence on Ce is obvious, i.e. Ψ = Ψ(Ce). This approach is followed in his book with Hughes, "Computational Inelasticity". However, this Ce is not C-Cp. (A quick numerical check of the identity above is given after point 3 below.)

2) By physical background, I refer to the deformation of a prismatic bar beyond the elastic limit. Assuming the initial length, the final length and the length after unloading are l0, l and lp, we have the stretches

λ=l/l0, λe=l/lp and λp=lp/l0,

and therefore the relation λ=λeλp. Extending this relation to the more general case leads to F=FeFp. Moreover, the classical discussions in crystal plasticity, such as slipping on crystallographic planes, have led to this decomposition.

3) I personally do not have any problem with the intermediate configuration. As you said, it is a mathematical model to describe the behavior of the material. I thought that the reason for basing the measures on the reference configuration was the problems attributable to this configuration (Curl Fp ≠ 0, etc.).
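Referring back to point 1), the identity Cp^-1 : C = Ce : I can be verified numerically in a few lines (random F and Fp of my own choosing, purely as a sanity check):

    import numpy as np

    rng = np.random.default_rng(0)
    F  = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
    Fp = np.eye(3) + 0.1 * rng.standard_normal((3, 3))
    Fe = F @ np.linalg.inv(Fp)

    C  = F.T @ F
    Cp = Fp.T @ Fp
    Ce = Fe.T @ Fe

    lhs = np.tensordot(np.linalg.inv(Cp), C)   # Cp^-1 : C (double contraction)
    rhs = np.trace(Ce)                         # Ce : I
    print(lhs, rhs)                            # equal up to round-off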

Sincerely,

Mohsen

Konstantin Volokh's picture

What a painful topic. Here is the alternative rooted in Eckart's work...

Pradeep Sharma's picture

Hi Celia, I enjoyed reading your journal club blog...this is a very nice summary of a long-standing issue that presumably all of us confront in our mechanics courses. I have started reading your two papers that pertain to this topic. I find it intriguing that your homogenization procedure is able to analytically upscale to the result you present despite, from how it appears to me, a rather complex set of microstructural conditions.

Out of curiosity, do you plan to extend some of the developments you have made in this work to other problems? For example, I was recently pointed to an interesting paper by Chenchiah and Shipman where they deal with the decomposition F=FeFg, where Fg pertains to growth. In their particular context, they show this decomposition to be erroneous.

celiareina's picture

Dear Pradeep,

Thank you for your note. The proofs are actually fairly long, but the physical ideas behind them are not too complicated, I think. For simplicity, if one considers a compatible domain (away from the dislocations) with one single active slip system, then Fε, Feε and Fpε are defined, respectively, as Eq. (1), ∇φε and Eq. (3). Furthermore, due to the crystallographic constraints expressed by Eq. (2), one has, after a Taylor expansion (the Burgers vector is of the order of the lattice spacing and scales as ε), that

[φε](x) = φε(x + bε) − φε(x) = ∇φε(x) bε + h.o.t.

Combining the definitions of the three deformation tensors, one then formally obtains (disregarding the higher order terms in ε, referred to in the equation above as h.o.t.)

Fε = ∇φε + [φε] ⊗ n H|J = ∇φε (I + bε ⊗ n H|J) + h.o.t. = Feε Fpε + h.o.t.,

which is the desired expression.
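The Taylor-expansion step can be checked numerically with a toy smooth mapping (entirely my own illustrative construction, not the mappings of the papers): the jump dictated by the crystallographic constraint, φ(x + bε) − φ(x), agrees with ∇φ(x) bε up to terms of order ε².

    import numpy as np

    def phi(X):
        # a smooth 2D deformation mapping (purely illustrative)
        x, y = X
        return np.array([x + 0.05 * x * x, y + 0.03 * np.sin(x)])

    def grad_phi(X):
        x, y = X
        return np.array([[1.0 + 0.1 * x, 0.0],
                         [0.03 * np.cos(x), 1.0]])

    X = np.array([0.3, 0.7])
    for eps in [1e-1, 1e-2, 1e-3]:
        b = eps * np.array([1.0, 0.0])       # Burgers vector ~ lattice spacing
        jump = phi(X + b) - phi(X)           # jump dictated by Eq. (2)
        linear = grad_phi(X) @ b             # leading Taylor term
        print(eps, np.linalg.norm(jump - linear))   # decreases like eps**2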

The actual proof does not use a Taylor expansion. Rather, the derivative of the elastic deformation tensor is controlled through the energy, which is required to be bounded for the sequences of deformations as ε → 0. The idea of the proof of the multiplicative decomposition for the compatible regions is as above (although multiple slips are of course considered); and for the cores, one can show, given their volume, that their impact on the kinematics in the continuum limit is negligible.

I hope this short explanation helps in understanding how such a proof can work. But I would be happy to answer any questions you may have.

As you can see from this small derivation for a single slip system in a compatible domain, the proof is strongly dependent on the mechanism. It uses the spatial separation of the elastic deformation and the plastic one (concentrated over the slips), and the crystallographic constraints underlying slip, e.g. Eq. (2) for single slip. This proof therefore does not extend easily to other processes, but I am indeed interested in them. Isaac and I have had very interesting discussions regarding the multiplicative decomposition for elastoplasticity and growth.

Kind regards,

Celia

Pradeep Sharma's picture

Hi Celia, thanks for the nice clarification...

Hi Celia and others,

Let me toss in an extremely stupid idea. Stupid, it certainly is. Also, this reply is written at length, as usual. None of you has to take this reply even quasi-seriously. It's just that just the way a cat cannot help but pause and peek into any and every room he(/she/it) passes by, similarly, I cannot help but very seriously contemplate (only for a very brief while) jumping-in into any on-going discussion here at iMechanica---whether I know anything about the topic under discussion or not. (In this particular case, I don't have even a smattering of an idea about these equations!)

OK. So, here's the idea:

First decompose not the deformation field but the domain i.e. the region of space.

Referring to fig. 2, demarcate the region lying to the left of the extra (and missing) ``plane'' of the edge dislocation, as the region E, and the remaining part as the region P.

E suffers only an elastic deformation whereas P suffers both elastic and plastic deformations.

Enforce displacement continuity (to the required order) at the mathematically demarcating surface.

The E region sub-problem should be ``trivial'' to handle. The point is: the P region sub-problem, too, is now simpler; it has been reduced to the problem depicted in Figure 1(a).

If the dislocation moves to the left, reduce the size of the E region and increase the size of the P region.

Hopefully, solving each sub-problem separately while also enforcing the continuity is feasible.

Yes, it's a very stupid line of thought, but my point was: why must these two modes of deformation---viz. elastic and plastic---be subject to the same treatment over the entire domain? Why must the deformation field be described via a single overall mathematical variable $\varphi$ that is spread over the entire domain?

After all, if you implement the problem here using some computational method (at least a ``meshful'' method), then you are anyway going to end up with not just two but a large number of spatially separated finite elements/volumes. (I don't know whether closed-form analytical solutions for the original problem are desirable or even always possible.) If so, then why not separate out the different regions for different treatments? It would only become, say, an E--EP interaction problem, analogous to the well-known fluid--solid interaction problem (or the contact nonlinearity problem).

OTOH, whether you follow the additive approach or the multiplicative one, the problem-defining geometrical/microstructural features are lost during the homogenization process anyway, aren't they?

OK. The very stupid idea is over. [I will check for the brickbats sometime later. (May be tomorrow evening IST or so.)]

Bye for now,

--Ajit
[E&OE]

celiareina's picture

Dear Ajit,

Your idea is actually not stupid at all, and in the analyses we do indeed treat separately the compatible regions away from the dislocations (via multiple subregions) and the dislocation cores, and then study the full domain with these partial results. But in order to do so, one has to guarantee that the quantities defined in the different patches (e.g. the different deformation tensors) are unique and deliver global definitions. For that it is convenient to have a global field, and φε is here a natural choice.

Best regards,

Celia

 

Hello Celia,

...So, it's that same old beast ``uniqueness'' again!

... Why I call it the same old thing, and a beast: Even in the much simpler problem of diffusion (it's a vector problem, not tensor), it's the uniqueness that is key to justifying a global support for the solution---not compact. And they have a global support even for the transient case! ... I am fighting it---the uniqueness---mainly because I don't like the instantaneous-action-at-a-distance which a global support necessarily implies (at least while using the Fourier theory). [But I will fight this beast only there, in diffusion, mainly because diffusion is so much a simpler problem to handle! Arguments can be honed better with a simpler problem.]

... Anyway, thanks for all your clarifications in this post and sub-threads---I really didn't have any idea about how this kind of an analysis (the one you mentioned for this edition of the journal club) is done. ... And also, special thanks for taking my comment seriously enough to reply it; I really appreciate it.

Let me sign off now; I will come back and check this page some time over the week-end, for any new insights.

Best,

--Ajit

[E&OE]

 

Amit Acharya's picture

Hello Celia,

Nice work with Schlomerkemper and Conti. Obviously the upscaling is substantial content and I have not digested all the details, in particular the physical meaning of all your scaling assumptions (e.g. why the number of dislocations should grow without bound on zooming out - I can understand the core width going to zero, and therefore having to do something about the Burgers vector strength, in order to keep the energy finite...), but this should be useful for folks wanting to work with a multiplicative decomposition.

By the way, at the mesoscale there is a kinematically fundamental, additive decomposition of the velocity gradient that arises purely from the kinematics of Burgers vector conservation (do you know about this work?). It is here (Sec. 4.3):

From dislocation motion to an additive velocity gradient decomposition, and some simple models of dislocation dynamics

We have a very good guess as to the structure of the upscaled version of this decomposition, but the precise details involve averaging in time as well of a non-singular, dynamical, microscopic theory, even at small deformations, primarily because plasticity is about moving dislocations.

 

 

Kaushik Dayal's picture

Hi Amit,

It's an interesting point about the scaling of the number of dislocations. It seems to me that since the Burgers vector scales with the lattice spacing $\epsilon$, in the limit of $\epsilon$ going to $0$ one roughly needs an infinite number of dislocations to see a finite effect, since the charge of each dislocation is vanishing.

I guess this is in addition to your point about handling the core energies, though perhaps that is not an issue in this paper since I do not see the energy being considered, only kinematics.

Kaushik

Amit Acharya's picture

Kaushik,

Well, this is why I said I need to understand the scaling assumptions well. Celia's paper is essentially devoid of considerations of a lattice beyond motivation. On a spatial zoom-out why the strength of the displacement discontinuity for continuum dislocations should necessarily vanish is not so clear to me (I do understand that other scaling assumptions can be made). Were this not so, then you would have non-zero energy even for a finite number of dislocations.

Even in a lattice, if the Burgers vector strength goes to zero and the dislocations are well-separated, what does it say about the limiting elastic and plastic deformations - that they are compatible?.... This last bit is a casual remark, but it definitely reinforces the subtleties here.....

I thought the second arxiv paper of Celia's (with Schlomerkemper and Conti) has energetics in it.

Kaushik Dayal's picture

Hi Amit,

It's an interesting comment on using another scaling.  Scaling Burgers vector with lattice spacing seemed natural to me, but perhaps there are others which give interesting results.  Do you have any thoughts about alternate scalings?

Kaushik

Amit Acharya's picture

Kaushik,

It seems to me to be a physical problem to think about deriving a macroscopic limit (say for stress, to consider a static question) of a body containing a fixed number of microscopic dislocations with spread out cores with dislocation density in the cores scaling such that the Burgers vector of each dislocation remains finite as the core size goes to zero (which is one way of thinking about taking  a macroscopic limit). The core size will go to zero and the Burgers vector remaining finite would seem to send the total energy of the body to infinity (at least in the linear elastic quadratic setting), so this, off the top of my head, seems like a tricky question but a natural one, especially if you start with a microscopic energy density function that corresponds to finite elasticity. Of course I cannot know whether you would consider this to produce 'interesting results.'

On what scalings can do - there is an interesting result of Muller, Scardia and Zeppieri where, starting from nonlinear elasticity with dislocations, they choose certain scalings (not of the type I am talking about) - you can look up their paper - and find that the elastic part of the limit energy is exactly the *linear elastic* energy density. Now this seems rather puzzling to me from a physical point of view, since if you stick in a finite elastic rotation field whose curls can produce dislocation walls, with piecewise uniformly rotated pieces in between representative of physically observed polygonization, you will get a non-zero energy from this limit energy function, but my expectation would have been to find a 0 elastic energy in this case. But theirs is a rigorous result. I think this is very interesting!

Essentially, I do not see why a spatial zoom-out has to mean the Burgers vector of a dislocation has to go to zero in all cases of physical interest and the number of dislocations has to go to infinity. I can also see a physical situation where I think of a dense population of dislocations at the microscopic level which then might look like number going to infinity on zoom-out.

Since I think we may be derailing Celia's blog here, I'll quit here with this response, even though I am on sabbatical :) it is good to be....

Kaushik Dayal's picture

Thanks Amit.

Amit Acharya's picture

Kaushik: You asked me about alternate scalings for dislocation mechanics for going to the macroscopic limit, so here are some thoughts. They are only that at this stage, so it may very well not work out.

As we have already discussed here with some examples, my feeling is that if you keep a dilute limit (just to be safe, let's say a finite number of dislocations in the limit) as well as demand that the Burgers vector of an individual dislocation goes to zero on upscaling, then it is possible to get mathematically correct but physically irrelevant limit models, as concerns dislocation mechanics. That said, I do value mathematically rigorous work, even on problems of no direct physical relevance, for the methods and ideas that such endeavors can generate.

So my feeling is that as the size of a body goes to infinity compared to the lattice spacing, one always needs to keep the Burgers vector strength fixed. What is the physical justification? The Burgers vector, while microscopically linked to atomic separation, also has a macroscopic significance of a topological nature, completely devoid of issues of lengths and scalings. If you have one dislocation in a macroscopic-scale body, you can still identify its Burgers vector by a macroscopic contour integral. As you will certainly realize - it is like the charge of an electron - the charge remains fixed even if you go to macroscopic scales. And this topological fact interacts with statics and dynamics to produce energy, stress and dislocation interaction, even when the body is viewed macroscopically. Clearly, a macroscopic body with a single dislocation has a lot of stored energy in it.

So a *possible* scaling may be the following: in the situation where you allow the number of dislocations to grow, the Burgers vector remains fixed, the core sizes go to zero, and the interdislocation spacing goes to zero, with the net result that the dislocation density remains fixed but the effective core size of N dislocations clustered together becomes large. So the picture I am working with is the following: suppose I take 10 positive edge dislocations, each with core area 1b^2 and with a constant dislocation density of 1/b specified in each core. Each core then has a net Burgers vector of 1b. Now think of coalescing all of these cores (as would happen on upscaling - the cores look like points, the interdislocation separation shrinks, and the 10 cores look like they have been transformed into a line segment). The 'line' segment still has area 10b^2 and this effective core carries an effective charge of 10b, so the dislocation density is still 1/b.

Is it possible that the same dislocation density spread out over a larger core delocalizes the energy and stress content and reduces the total energy? Certainly, in the linear theory, if you take a Dirac for the dislocation density, you get the classical results. But it is also true that if you apply a spatially homogeneous dislocation density tensor of any magnitude on a body (with zero traction b.cs.) you will get an identically zero stress field and energy in the body. This is not so easy to see in finite deformation theory - Saurabh Puri and I played around with this with Saurabh's implementation of static finite deformation dislocation mechanics, and in all the cases we tested the result bears out, but I cannot prove it, in contrast to the linear theory. (Incidentally, Saurabh did some very nice work on mesoscale dislocation plasticity here.)

So what I am saying may also be thought of like this. Suppose you have a point dislocation followed by a slipped line (all in 2D), viewed at the macroscopic scale. Now if you have a bunch of slip planes with the same configuration stacked one on top of the other, the dislocations become a line of Diracs, and the singularity/concentration this object produces in the stress is necessarily much less than that of the individual Dirac. Extend this to an infinite wall and you have absolutely no stress from the wall. Going back to the macroscopic view of the finite stack, the region behind the partial wall looks like a shear band with finite thickness, macroscopically.

*IF* the above is correct (and it is a big IF), then one can have the singular picture for a finite number of dislocations at the macroscopic scale, as well as smooth elastic and plastic deformation fields at the macroscopic scale when the number of dislocations grows, all with fixed Burgers vector.

Just some random thoughts......hopefully not completely bogus!

Kaushik Dayal's picture

Hi Amit,

 

A couple of side issues:

 

1) Perhaps to derive the type of models that you are thinking about, asymptotic analysis may not even be a good tool. I'm reminded of Doug Arnold's talk (~2007) where he described the situation in shell models, with asymptotic analysis able to give only the Kirchhoff-Love theory and nothing else, while other methods (in that case, variational) gave a whole range of models that were suitable for different conditions, including the Reissner-Mindlin model, which that community largely considers superior to Kirchhoff-Love.

 

2) To get a point dipole, one can start with 2 separated equal-and-opposite charges and bring them together, but one has to scale the charge strength with separation to avoid getting a trivial limit.  In a continuum limit of a lattice of charges, one has to similarly scale the charge with the lattice size (e.g., text between (3.6) and (3.7) in http://imechanica.org/node/15450 and the references there by Toupin, Xiao, Puri, etc).  For a single electron, I agree that one may want to keep the charge fixed depending on the phenomenon for which we are trying to make an effective model, and I wonder then if asymptotic analysis is too constraining.

 

3) Regarding your example of a single dislocation in a body that is large compared to the atomic spacing (forgetting about the limit, but keeping realistic values), I see your point that the energy is significant, but I’m not able to see that one could detect the Burgers vector by a contour integral.  Wouldn’t this require accuracy on the order of the atomic spacing in evaluating the contour integral?

 

 

 

 

Getting to the main issue, I agree with your basic point — as I understand it — that the charge of an object can in general scale in many different ways, and in this case one wants to preserve the net charge in the limit, and do so even if it is only a single defect.

The scaling that I was originally thinking of — I’ll call it scaling (1) since I am not sure if Celia thinks of her scaling as the same — is as follows.

 

Scaling (1):

$b_\epsilon = \epsilon b_1$ for the Burgers vector.

To preserve the net charge in the limit, I want $N_\epsilon . b_\epsilon = N_1 b_1$ implying $N_\epsilon = N_1 b_1 / \epsilon$.

These together imply $\rho_\epsilon = N_\epsilon / L^2 = \rho_1 b_1 / \epsilon$ for the areal density scaling, and that the dislocation spacing scales as $\epsilon^{0.5}$.

 

Scaling (2) (My understanding of your proposed scaling):

$b_\epsilon = b_1$ for the Burgers vector

To preserve the net charge in the limit, I will continue to require the constraint $N_\epsilon . b_\epsilon = N_1 b_1$ implying $N_\epsilon = N_1$.

These together imply $\rho_\epsilon = \rho$, and the dislocation spacing is independent of $\epsilon$.

 

In scaling (2), if one now thinks of the dislocations as regularized objects with a charge density $\alpha$ distributed over the core, and the core radius scales as $\epsilon$, then $\alpha ~ b / (core)^2$ scales as $\epsilon^{-2}$ to keep the Burgers content of each dislocation finite in the limit.  In scaling (1), the argument is similar except that $b$ also scales with $\epsilon$, leading to a scaling $\alpha ~ \epsilon^{-1}$.

 

Scaling (2) does seem natural in some ways.  But what I have written down does not agree with all of your statements, so maybe you are thinking of yet another scaling.  In particular, scaling (2) keeps the dislocation spacing finite in the limit, and does not account for your picture of coalescing of cores.  One could perhaps get coalescence of cores in scaling (2) by assuming that the core scales as some negative power of $\epsilon$, though that seems to me (at my current understanding of this problem) to be unphysical.

Hi Kaushik,

1. I am not able to follow your maths for the scaling scheme (1), when it comes to this point:

``To preserve the net charge in the limit, I want $N_\epsilon . b_\epsilon = N_1 b_1$ implying $N_\epsilon = N_1 b_1 / \epsilon$.''

Shouldn't the implied part be: $N_{\epsilon} = \dfrac{N_1 b_1}{b_{\epsilon}}$, leading to $N_{\epsilon} = \dfrac{N_1}{\epsilon}$? It seems physically meaningful too. So, following your assumptions, since the dislocation spacing (i.e. $b$) cancels out, $\rho_{\epsilon}$ would become independent of $b$. [BTW, is the notation you follow here standard in the field?]

2. Re: your point (3): You mention (the difficulty/implausibility of) detecting the Burgers vector via a contour integral. Obviously you mean that the contour integral is conducted over the region in the continuum limit of the lattice. ... This is nothing but a straightforward consequence of the lattice constant going to zero in the continuum limit (which is the same as having a finite lattice constant but making the lattice infinite in size).

Consequently, any enterprise to detect a finite value for the Burgers vector must fail in a continuum---it would surely be approaching zero.

But my point is: Why do you want to detect a finite value for the contour integral, in the first place? Why don't you guys simply require the contour integral to evaluate to a finite, ``quantized,'' value? Thus, for example, for displacement $\oint d\vec{x} = n \vec{b}$ where $\vec{x}$ is the displacement along the integration path, $n = 0, 1, 2, 3 \dots$, and $\vec{b}$ is the Burgers vector.

So, my idea is, why not first assume such a quantization relation to hold always---at any scale, including also in the continuum limit---and then conduct the analysis using this as a starting point?

This way, you could perhaps get what you are looking for. ... [Just an idea; have no knowledge of this field...]
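Appending here the numerical sketch mentioned under point 1 — a minimal check of scaling (1) with the correction above (the values of $N_1$, $b_1$ and $L$ are arbitrary placeholders, not material data):

```python
import math

# Arbitrary placeholder values, not material data
N1, b1, L = 100.0, 1.0, 10.0

for eps in [1.0, 0.1, 0.01, 0.001]:
    b_eps = eps * b1                    # Burgers vector under scaling (1)
    N_eps = N1 * b1 / b_eps             # from N_eps * b_eps = N1 * b1, i.e. N_eps = N1 / eps
    rho_eps = N_eps / L**2              # areal dislocation density
    spacing = 1.0 / math.sqrt(rho_eps)  # mean dislocation spacing, ~ sqrt(eps)
    print(f"eps={eps:7.3f}  b_eps={b_eps:7.4f}  N_eps={N_eps:9.1f}  "
          f"rho_eps={rho_eps:9.2f}  spacing={spacing:7.4f}")
```

The net content $N_\epsilon b_\epsilon = N_1 b_1$ stays fixed, $\rho_\epsilon$ comes out independent of $b$, and the spacing falls like $\sqrt{\epsilon}$, consistent with the $\epsilon^{0.5}$ you quoted.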

--Ajit

[E&OE]

 

Amit Acharya's picture

Kaushik: I hesitate to say much more than I have - primarily because at this point I do not have much more to say. With regard to your main points: I understood your setup (there was the small issue that Ajit pointed out in scaling 1). The way I view things would be a little different. You view the dislocations as evenly distributed in the L x L fixed-size block. I view it a little differently: I would view a region L_\epsilon x L_\epsilon in an infinite domain populated by the N_\epsilon dislocations, and as \epsilon goes to zero (zoom out), I would scale L_\epsilon as well - my gut feeling being faster than linear in \epsilon. Your notion would then fit in as L_\epsilon = L_1 fixed. This is how I would see coalescence happening. I do feel that somehow the dislocation spacing has to decrease on zoom out, or else the energy would not behave well - and I feel that having the energy start to behave well in the macroscopic limit is important for these static considerations - but as I said, this is neither here nor there and needs to be tested in some concrete way.

The worst part is, for all the models for upscaling microscopic dislocation dynamics I like to think of, an averaged energy does not show up at all!

As to your side comments:

re: 1) I like transverse shear too! Although I could not get the distinction you make between asymptotic and variational methods - I think of analysis based on Gamma convergence as both asymptotic and variational. And asymptotic models are always meant to be appropriate for certain regimes - shell theories are parametrized by load magnitudes... Btw, I heard Paolo Podio Guidugli once give a talk on some work of his with Roberto Paroni where he talked about deducing Reissner-Mindlin via Gamma convergence. It would be good if you explained a little bit more about the difference you have in mind. It is interesting...

re: 2) I will think and read about the dipole business as soon as I find a bit more time and get back if I have thoughts/questions.

re: 3) How I wish the Burgers vector were not a physical length! I can see where you are coming from, but in the dislocation business it seems to me that no matter what macroscopic limit you go to, you cannot forget about the Burgers vector, which sets a length scale - no matter how small, whether the contour integral is zero or not matters. Honestly, I would also like it if it were more clear cut.

To continue with this mumbo-jumbo (which, if you get rid of it, you get elasticity and no dislocations), I would like to think of the scalings in the following manner: non-dimensionalize all lengths with the magnitude of the Burgers vector. Then send L/b to infinity. As you do this, the contour integral value does not change - it stays fixed at 1. And you 'resolve the theory on the scale of x/L', or in other words the non-dimensionalized spatial variable $\tilde x = x/b$ runs over domain sizes of $-L/b$ to $+L/b$. If this does not make any sense at all :), you will know that it is now time to leave me alone on this - I can't do better at this stage... but I cannot get rid of b. Btw, what I am saying is not so different (in my mind) from what Ajit has already said about quantization... but of course, I could be very wrong - as on everything related to this scaling business.
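In symbols (a minimal restatement, with $\boldsymbol{\beta}$ my shorthand here for the elastic distortion whose circuit integral detects the Burgers vector):

$$\tilde{\boldsymbol{x}}=\frac{\boldsymbol{x}}{b},\qquad \oint_{C}\boldsymbol{\beta}\,d\boldsymbol{x}=\boldsymbol{b}\ \Longrightarrow\ \oint_{\tilde{C}}\boldsymbol{\beta}\,d\tilde{\boldsymbol{x}}=\frac{\boldsymbol{b}}{b}\quad(\text{unit magnitude}),\qquad \frac{L}{b}\to\infty.$$

So the circuit value stays pinned at 1 while the non-dimensional domain grows without bound.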

Kaushik Dayal's picture

Hi Amit and Ajit,

 

The quantization idea seems linked to treating the Burgers vector as a quantity that scales in a different way than the lattice spacing.  That was why I asked about a setting where nothing is being scaled, but we are simply examining a “real” body with finite-but-small lattice spacing.  In that case, I do not see why the Burgers vector will be detectable unless one can evaluate the contour integral exactly.  In other words, a body with a single dislocation in it will look just like a perfect crystal to me.

 

Going to Amit’s related point that, no matter the limit, one needs a finite Burgers vector while the Burgers vector is also intimately linked to the lattice spacing: that suggests to me again that perhaps asymptotic analysis is not the best tool, or perhaps one needs corrections analogous to strain gradients.

 

I didn’t get the point about the relation $N_\epsilon b_\epsilon = N_1 b_1$.  The right-hand side is just constants — the total Burgers vector content before rescaling (denoted by the subscript 1).  I’m just dividing after that.

 

About Arnold’s remarks, perhaps his terminology was not ideal, but I think he classed the scaling methods used in shells as “asymptotic”.  The “variational” methods did not assume anything explicitly about thickness, but just made an ansatz for the deformation, and used variational principles to optimize the ansatz.  Of course, the ansatz is implicitly motivated by the slenderness of the body.

 

For your scaling, I think I see your physical picture, but I guess that I would also want something reasonable for my special case of uniformly-spaced dislocations?  Do you see that happening with your scaling?  Anyway, maybe easier to discuss when you return to Pittsburgh next year.

Amit Acharya's picture

Kaushik: the small thing was that in what you wrote you forgot to cancel the $b_1$... no big deal. When you say, "In other words, a body with a single dislocation in it will look just like a perfect crystal to me", I agree if it is only geometry. But how would that explain the energy of the unloaded 'perfect crystal'? So I hope you agree that something would not be right with that picture as well.

I think what you would get for your scaling of uniformly spaced dislocations would be the expected result - in linear theory, the energy for N_1 singular dislocations placed uniformly on an L x L grid. If L is large, then the energy will most probably blow up as for isolated dislocations in linear theory, but if L is small there may be interesting screening effects - a calculation that can be done. If you turn on nonlinear elasticity with softer growth for large elastic strains, then even though the dislocations will be singular, their energy can be well behaved. Your limit will essentially be what you would expect from the case of discrete dislocations on the grid. Quite reasonable, if you ask me. The reason I invoked what I did was that I have the feeling that, on upscaling, multiple dislocation fields can combine to give a smeared density with less energy concentration, but this scaling should not give an absurd (to me) answer in the single-dislocation case - not only in terms of the kinematics of Burgers vector identification, but also in terms of energy and stress...
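(For reference, the standard linear-elastic estimate I have in mind for the energy per unit line length of an isolated dislocation - textbook form, with $r_0$ a core cutoff and $R$ the outer radius - is

$$E \;\approx\; \frac{\mu b^{2}}{4\pi(1-\nu)}\,\ln\frac{R}{r_{0}} \quad(\text{edge; drop the } 1-\nu \text{ for a screw}),$$

which indeed grows without bound as $R\to\infty$.)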

Kaushik Dayal's picture

Hi Amit, I did note your point about the energy of a crystal with a single dislocation earlier, and agree with it.  That seems a good reason to explore other scalings besides the one motivated by kinematics.

Amit Acharya's picture

A non-zero spatially homogeneous dislocation density does not have zero stress at finite deformation. I was able to prove this in the intervening time; interestingly, it yields a new result in *classical* continuum mechanics: the brothers Cosserat knew that (on a simply connected domain) if a rotation field has vanishing curl then the rotation field is spatially constant. The paper below shows that, actually, if a rotation field has constant curl (not necessarily zero), then it has to be constant.

Amit Acharya (2019) Stress of a spatially uniform dislocation density field,   Journal of Elasticity, 137, 151-155. (electronically published, January 7, 2019)

The above results are in 2D.

In forthcoming joint work with Janusz Ginster, these results are shown to hold  in space dimension 3 as well.

celiareina's picture

Dear Amit,

I just want to clarify that the limiting elastic and plastic deformations are not necessarily compatible. We even characterize the dislocation density tensor and obtain its continuum limit, which is given by the Curl of the plastic tensor. I added a comment regarding the scaling at the end, which I think will help clarify this.

The analyses that we carry out are done at the mesoscopic scale (similar to the scale of dislocation dynamics simulations). The crystallographic nature of the material is carried over to the mesoscopic scale in the fact that the slip surfaces in the reference configuration (considered a perfect crystal) are planar and that the jump cannot be arbitrary (see, for instance, Eq. (2) for a single slip).

Best regards,

Celia

 

Amit Acharya's picture

Thanks for the clarification, Celia, I did understand what you say above the first time around.

Amit Acharya's picture

As has been discussed in this thread, evolution equations for plasticity are of utmost importance. Postulated rate decompositions, as well as those emerging from the multiplicative decomposition, require the constitutive specification of a tensor-valued function (3x3; assuming no plastic spin is also a constitutive assumption) - and it should be kept in mind that the evolution of plastic deformation physically need not depend on the order in which slips take place at a mesoscale point (at a mesoscale point where multiple slip systems are being considered, if there are dislocations on these slip systems ready to move, they do not take turns to do so - if they can, they all move at the same time).

The fundamental mesoscale rate decomposition I mentioned above reduces this specification to that of the constitutive response of a vector-valued function. This set-up has been approximately tested in Sec. 8.1.2 of "Single theory..." with comparison to MD results. Even without any constitutive specification (i.e. based only on the kinematics of the decomposition), its 'equivalent' posed in terms of dislocation density evolution makes a successful prediction of dislocation nucleation, which can be found here.

M. Jahanshahi's picture

Dear Amit,

Combining dislocations and their motion with multiple slip systems that are active at a point poses some difficulties. Could you suggest some references on this topic?

Sincerely,

Mohsen

Amit Acharya's picture

Mohsen: The small paper here

Anisotropic yield, plastic spin, and dislocation mechanics

has some preliminary development of a point of view on the subject.

celiareina's picture

Dear Amit,

I totally agree with your point that the evolution of the plastic deformation does not depend on the order of slip. The expression $\dot{F}^p (F^p)^{-1} = \sum_\alpha \dot{\gamma}^\alpha\, s^\alpha \otimes m^\alpha$ expresses such a fact, where the sum is, of course, commutative. I would like to add that Fp will, however, depend on the order of the slips, as is natural in finite kinematics. Quoting Rice, from his famous 1971 paper: "Note that Fp is not generally a point function of the shears, but depends instead on their sequence of application" (page 448).
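For instance, composing two finite single slips (shears $\gamma_1$ and $\gamma_2$ on systems $(s_1, m_1)$ and $(s_2, m_2)$ — written out here just as an illustration):

$$(I+\gamma_2\, s_2\otimes m_2)(I+\gamma_1\, s_1\otimes m_1)-(I+\gamma_1\, s_1\otimes m_1)(I+\gamma_2\, s_2\otimes m_2)=\gamma_1\gamma_2\big[(m_2\cdot s_1)\, s_2\otimes m_1-(m_1\cdot s_2)\, s_1\otimes m_2\big],$$

which is non-zero in general, so the composite deformation remembers which slip acted first.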

Best regards,

Celia

 

Amit Acharya's picture

Hi Celia: I know you do. But one may think that if the ordering is not possible to know through dislocation motion, it is best not to bring it up in the definition of F^p?... For instance, if you calculate F^p from the evolution statement you have written down, with a suitable constitutive assumption on \dot{\gamma}, would the slipping sequence be identifiable? If not, wouldn't it be preferable to work with concepts where this invariance is manifest? (I am just pulling your leg.)

celiareina's picture

Dear Amit,

One cannot avoid bringing the ordering into the definition of Fp, since finite deformations do not commute, and that information necessarily needs to be contained in Fp. Mathematically, it can be obtained via integration of the rate equation, and physically it can also be obtained from a Lagrangian description of the mesoscopic deformation gradient. For instance, the support of the singular part of Fp is just the set of surfaces of discontinuity of the mesoscopic deformation mapping, and it can thus be identified at that scale (it contains information about the order).

To study the evolution of a material with dislocations for modeling purposes, I also think that it would be impractical and not very useful to find Fp from Eq. (3), although it is possible; rather, one would obtain it from the rate equation, as you mention (the equivalence between the two descriptions can be found on page 57 of Reina and Conti 2014). In our studies, however, we were interested in providing a proof for F=FeFp, and for that one has to work with the expression for Fp given in Eq. (3), in which the order of the slips matters.

Best regards,

Celia

Re. Kaushik and Amit's exchange:

1. I can get the idea that as the lattice spacing goes to zero (e.g., by increasing the size of the crystal without bound), an infinity of dislocations would be needed to have a finite area for an internal slip region, because the Burgers vector of each dislocation would individually tend to zero. Here, due to the infinity of dislocations, the finite slip region could still imply an infinity of energy; also see the next point.

2. I can also get the idea that there may not be an external step (of the slip) even if a finite crystal carries dislocations within its volume. In the absence of external forces or the supply of an activation energy, these dislocations will stay put in the metastable configuration, and not move so as to cancel each other out. It may so happen that their density distribution drops rapidly enough that only the elastic effect predominates as you go radially out from the center of the crystal and approach the external surface. If you then pan out---in the sense that you increase the size of such a crystal without any bound---then I can see that the (normalized) lattice constant would approach zero, and yet you would get a finite (non-zero) energy. The reason would be the rapidly falling dislocation density away from the center of the crystal.

Thus, as far as I can see, it's the dislocation density distribution which decides whether the energy remains bounded or not, even as the crystal size approaches infinity. I can get this part.

However, the part I do not get is this: how the Burgers vector of a (single) dislocation might remain finite even in the limit that the crystal size increases without any upper bound (which I take to be tantamount to having the lattice constant tend to zero). I would be happy to see a concrete description of such an arrangement and of the limiting process.

--Ajit

[E&OE]

 

celiareina's picture

Dear Amit, Kaushik and Ajit,

Let me clarify here the scaling. In a real material the lattice parameter ($a$) is fixed, and as you zoom out, in an experimental observation for instance, the domain which you observe increases in size. Let’s denote the characteristic length of that domain by $L$. The way in which we treat this (which is common in the mathematics literature) is by rescaling all lengths with $L$. Then, the normalized lattice parameter is $\epsilon = a/L$. As you zoom out, $\epsilon$ tends to zero and the domain is of fixed size ($L/L = 1$); and this is the limit that we consider. One may say that in reality $\epsilon$ will be really, really small but still finite. Yet, the limit that one is interested in here is still that of $\epsilon$ tending to 0, as we are trying to obtain the continuum formulation, where by construction the displacement field ought to be continuous (not jump) and the lattice parameter is vanishingly small.

The Burgers vector is proportional to the lattice parameter and therefore scales with $\epsilon$. Regarding the number of dislocations, if one has a constant density of dislocations, then as $L$ increases to infinity, the number of dislocations would naturally increase to infinity. But this is not at all required. Note that what we impose on the number of dislocations is an inequality and not an equality. More precisely, the number of dislocations ($N_\epsilon$) should satisfy

$$N_\epsilon \leq \frac{C}{\epsilon},$$

where $C$ is a global constant. This allows for dislocation walls, for a finite number of dislocations, as well as zero dislocations. In all these cases, the multiplicative decomposition holds.
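As a minimal sketch of what this inequality allows (the constant $C$ and the sample values below are arbitrary illustrations, not taken from our analysis):

```python
# Minimal check of the bound N_eps <= C/eps for three example microstructures.
# C and the sample values are arbitrary illustrations, not taken from the paper.
C = 10.0

def admissible(N_eps, eps):
    """True if the number of dislocations satisfies N_eps <= C/eps."""
    return N_eps <= C / eps

for eps in [1e-2, 1e-4, 1e-6]:
    cases = {
        "wall  (N ~ 1/eps)": 1.0 / eps,
        "finite (N = 5)": 5,
        "none   (N = 0)": 0,
    }
    for name, N in cases.items():
        print(f"eps={eps:.0e}  {name:18s}  admissible={admissible(N, eps)}")
```

All three families satisfy the bound, which is why dislocation walls, finitely many dislocations, and the dislocation-free case are all covered.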

I hope this clarifies all the previous discussions regarding the scaling. If there are any other questions, I will be happy to answer them.

Kind regards,

Celia

Amit Acharya's picture

Hi Celia: Thanks for getting back on this.

1) Re. your comments on scaling, thanks for stating the physical basis behind your assumptions. As I discussed with Kaushik, I am not convinced that is the only reasonable physical scenario to consider.

2) You bring up the subject of walls, so let me ask you something. Are you familiar with a recent paper of Muller, Scardia and Zeppieri in the Indiana Univ. Math Journal on the Gamma limit of a nonlinear elastic theory with dislocations? If so, I would ask you whether you think your scaling assumptions are the same as theirs. And then I would ask you to explain the implication of their result related to polygonization that I explained to Kaushik in my comment "Re: Burgers vector scaling." I know this is not completely fair, but this is casual talk on iMechanica! If you don't want to get into that, I would of course understand. If you do, and want more clarification on my question (and the precise reference to their paper), just ask.

Hi Celia,

Thanks for the very pertinent and easily understandable clarifications, here and also in your other replies above.

Best,

--Ajit

[E&OE]

 

Amit Acharya's picture

Hi Celia: Maybe I should be a little more specific about the issue I am talking about here. Take N_\epsilon = 1 with C = 1. This satisfies your hypothesis. My physical expectation in this case would be to get, in the macroscopic limit, basically your mesoscale solution itself, i.e. a Dirac for the dislocation density, with the curls of F^p and F^e appropriately defined. If you were bothered about energy or stress, then the total energy would blow up for quadratic growth (of the energy density), but if the growth is slower at infinity then you can get bounded total energy.

However, this would be true only if the Burgers vector magnitude remained non-zero in the limit, which is not allowed by your hypothesis (last ten words/symbols on page 6 of your paper with Schlomerkemper and Conti).

My feeling is that your limit in this case would be that there is no dislocation in the limit, and F^e and F^p are compatible and F = F^e - maybe I am wrong (I have thought about this for only a second), so you can clarify...

Please know that I am not suggesting your result, whatever it is, is incorrect, for the hypotheses made.....

celiareina's picture

Dear Amit, Kaushik and Ajit,

Since there seems to be some confusion about the scaling, let me try to explain it in another way, which is similar to Kaushik’s approach. In particular, instead of considering the continuum limit of an arbitrary sequence of mesoscopic structures, I will approximate a uniform density via discrete points, and I will do so first for the concept of mass density and discrete mass points, as it is a little simpler than that of dislocations.

Imagine that you would like to approximate a uniform mass density (corresponding to a perfect crystal). A natural sequence to consider is that of the image below, which corresponds to the physical sequence in a zoom out process.

The mass that one would put in each of the squares corresponds to the density multiplied by the volume of each square (this would keep the total mass constant). If the grid has spacing $\epsilon$, then the mass would be proportional (in this 2D example) to $\epsilon^2$. As the grid spacing tends to zero, the mass would naturally tend to zero as well. In a real material $\epsilon$ physically has the value of the lattice parameter divided by $L$, which is really, really small, yet finite; and the discrete masses correspond to the atomic masses, which are thus very, very small as well. However, in order to model the macroscopic medium as a continuum (even though nature is discrete), one actually has to go to the limit of $\epsilon$ tending to zero (quantization disappears in the continuum formulation by construction). The mass points in each of the squares would then tend to zero as well, as the grid becomes vanishingly fine. In a continuum model, one can therefore not identify an atom any more, as the discrete nature of the material has faded, even though one can measure the atomic mass in an experiment.
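(As a very small numerical illustration of this point — the density value and domain size below are arbitrary placeholders:)

```python
# Uniform 2D mass density approximated by point masses on a grid of spacing eps.
# The density value is arbitrary; the point is that the mass per cell vanishes
# like eps**2 while the total mass stays constant.
rho = 2.0       # uniform mass density (arbitrary)
domain = 1.0    # rescaled domain size, L/L = 1

for eps in [0.1, 0.01, 0.001]:
    n_cells = int(round(domain / eps)) ** 2   # number of grid cells in 2D
    mass_per_cell = rho * eps ** 2            # mass assigned to each cell
    total_mass = n_cells * mass_per_cell      # equals rho * domain**2 throughout
    print(f"eps={eps:6.3f}  mass/cell={mass_per_cell:.2e}  total={total_mass:.3f}")
```

The mass per cell vanishes like $\epsilon^2$ while the total mass never changes, which is exactly the sense in which the discrete masses fade in the continuum limit.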

A very similar thing occurs with the approximation of a continuous distribution of dislocations via sequences of discrete dislocation points. Even though the Burgers vector is finite but of very, very small size in a real crystal (like the atomic mass), in order to achieve a continuum formulation the Burgers vector, which measures the jump across the slip surfaces, has to tend to zero. And in a continuum description one can no longer single out a single dislocation and its Burgers vector, since the discrete nature of individual dislocations has faded in a continuum macroscopic description, where one works with densities of dislocations.

Along the sequence, the Burgers vector, being physically proportional to the lattice parameter, has to go to zero, scaling like $\epsilon$. In order to achieve the desired scaling for the Burgers vector, the grid then needs to have a spacing (in 2D) of $\sqrt{\epsilon}$, as pointed out by Kaushik. This implies that the number of dislocations scales as $1/\epsilon$, which is precisely the upper bound for the number of dislocations that we consider.

I hope the scaling of the Burgers vector and number of dislocations is now more clear. I would also be happy to have a phone or skype discussion with any of you, if questions remain.

Kind regards,

Celia

 

Amit Acharya's picture

Celia - We understand what you have said, and it was already clear from the discussion with Kaushik. And, from our previous discussion, this scaling does not work for finite numbers of dislocations. Also, as should be amply clear from my discussion with Kaushik, this is not the only scaling possible and cannot be forced to be a unique choice. For instance, choosing a scaling of \epsilon^2 |log \epsilon|^2 for the energy (along with exactly the \epsilon b scaling of the Burgers vector of individual dislocations) can give a non-frame-indifferent limit energy function for materials with dislocations arrived at from finite elasticity (because the scaling allows only deformations that are close to compatible and have very small elastic strains). So, again, scalings can do weird things.

celiareina's picture

Dear Amit,

I certainly respect and value your opinion, as you know, and I find it very enriching to have more than one point of view on the same topic. For this same reason, I would like to understand your perspective on this a little better, since we have different approaches to it. However, a blog does not seem to be the most efficient communication pathway, and a dialogue seems more appropriate for it. I will contact you via email to continue the conversation offline.

Kind regards,

Celia

vicky.nguyen's picture

I just wanted to thank everyone here for an informative and lively discussion!

Amit Acharya's picture

See Sec. 4.2 in

Paper

to appear in J. Mech. Phys. Solids.

A correction on something I had said in 2015 above! In discussing the paper of Muller, Scardia and Zeppieri, I had said that their limit energy was not frame indifferent. That is not right, as pointed out in this paper, even though that limit energy would give the wrong energy content if applied to a dislocation wall (as also shown). (Their notation for what \beta^\epsilon means and what the limit \beta means was the source of my confusion!)
