
Journal Club Theme of Sept. 15 2008: Defects in Solids---Where Mechanics Meets Quantum Mechanics

Vikram Gavini

Defects in solids have been studied by the mechanics community for over five decades, some of the earliest works on this topic dating back to Eshelby. Yet, they still remain interesting, challenging, and often spring surprises—one example being the observed hardening behavior in surface dominated structures (as discussed in past journal club themes by Wei Cai and Julia Greer). In this journal theme, I wish to concentrate on the underlying physics behind defect behavior and motivate the need to combine quantum mechanical and mechanics descriptions of materials behavior. Through this discussion, I hope to bring forth: (i) The need to bridge mechanics with quantum mechanics; (ii) The challenges in quantum mechanical calculations; (iii) How the mechanics community can have a great impact.

(i) The need to bridge mechanics with quantum mechanics:

Defects play a crucial role in influencing the macroscopic properties of solids—examples include the role of dislocations in plastic deformation, dopants in semiconductor properties, domain walls in ferroelectric properties, and the list goes on. These defects are present in very small concentrations (a few parts per million), yet produce a significant macroscopic effect on materials behavior through the long-ranged elastic and electrostatic fields they generate. But the strength and nature of these fields, as well as other critical aspects of the defect core, are all determined by the electronic structure of the material at the quantum-mechanical length-scale. Hence, there is a wide range of interacting length-scales, from electronic structure to continuum, that must be resolved to accurately describe defects in materials and their influence on the macroscopic properties of materials.

At this point, I wish to stress the importance of both electronic structure (quantum-mechanical effects) and long-ranged elastic fields by presenting some known results on the energetics of a single vacancy. The vacancy formation energy in aluminum computed from electronic-structure (ab-initio) calculations is about 0.7 eV, of which the contribution of elastic effects (atomic relaxations) is less than 10%; the rest is electronic (quantum-mechanical) effects! In mechanics, these electronic effects are lumped into the core energy, which is treated as an inconsequential constant, and we deal only with elastic effects. Computational materials scientists, on the other hand, often work only with core energies, as these appear to be the major contribution to the total defect energy. In my opinion, both are equally important, neither can be neglected, and I will present some evidence to corroborate this claim. Some recent electronic-structure calculations have investigated the influence of homogeneous macroscopic strain on the energetics of vacancies (some of which are presented in Ho et al., Phys. Chem. Chem. Phys. 9, 4951 (2007)): in one case atomic relaxations are suppressed, so the energetics are solely due to electronic effects; in the other, atomic relaxations are allowed, so the energetics contain both electronic effects and elastic interactions with the macroscopic fields. In the first case, the vacancy formation energy changed from 0.7 eV at no imposed macroscopic strain to 0.2 eV at 0.15 volumetric strain. This suggests that the defect core energy is very strongly influenced by the macroscopic deformation at the core site, and is not an inconsequential constant! This dependence is quantum-mechanical, and there is no obvious way to determine it other than resorting to electronic-structure (ab-initio) calculations.
On the other hand, in the second case, upon relaxing the atoms and accounting for elastic effects, the contribution of these elastic effects changed from 10% of the total formation energy at no macroscopic deformation to 50% at 0.15 volumetric strain. These results provide strong evidence that the core of a defect and the long-ranged elastic fields are equally important in understanding the behavior of defects, and that the two are inherently coupled through the electronic structure of the material.
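To keep the bookkeeping of the quoted numbers explicit, here is a sketch restating the figures above (the values are as quoted from the discussion of Ho et al. 2007, not recomputed):

```python
# Bookkeeping of the numbers quoted above (illustrative arithmetic only).

E_f = 0.7              # vacancy formation energy in Al at zero strain (eV)
elastic_share = 0.10   # elastic (relaxation) share of E_f at zero strain
E_core_strained = 0.2  # electronic-only formation energy at 0.15 strain (eV)

elastic = elastic_share * E_f     # the part mechanics usually keeps
electronic = E_f - elastic        # the part lumped into the "core energy"
print(round(elastic, 2), round(electronic, 2))   # 0.07 eV vs 0.63 eV

# the supposedly "inconsequential constant" core energy itself swings
# by half an electron-volt under macroscopic strain:
print(round(E_f - E_core_strained, 1))           # → 0.5
```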

(ii) The challenges in electronic structure calculations:

The basis of all electronic structure calculations is quantum mechanics, which has the mathematical structure of an eigenvalue problem. Though the physics behind quantum mechanics has been well known for almost seven decades, the challenge arises from the computational complexity of the resulting governing equations (Schrödinger's equation). Unfortunately, solutions of the full Schrödinger equation are intractable beyond a few electrons (<10), making any meaningful computation of materials properties beyond reach. The direction pursued by the computational physics community in the mid-twentieth century was beautifully summarized by Paul Dirac: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of quantum mechanics should be developed, which can lead to an explanation of the main features of the complex atomic systems without too much computation." These approximate methods are what constitute the electronic structure calculations widely used in the present day. The starting point of all electronic structure theories for computing ground-state materials properties is a variational principle, something very familiar in mechanics. I have written a brief overview (for readers interested in more details) of the various electronic structure theories and the approximations involved in arriving at them:

node/3813 
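As a toy illustration of the variational principle just mentioned (my own minimal sketch, not drawn from the overview linked above): the ground-state energy of a particle in a box (hbar = m = 1, box length 1, exact E0 = pi^2/2 ≈ 4.935) can be estimated from the Rayleigh quotient of a simple trial function, and the estimate necessarily lands above the exact value.

```python
# Variational estimate of the particle-in-a-box ground-state energy
# from the trial function psi(x) = x(1 - x), which satisfies the
# boundary conditions psi(0) = psi(1) = 0.

import math

def rayleigh_quotient(n=10000):
    """<psi|H|psi> / <psi|psi> with H = -1/2 d^2/dx^2, via the
    midpoint rule; <psi|H|psi> = 1/2 * integral of (psi')^2."""
    num = den = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        psi = x * (1.0 - x)
        dpsi = 1.0 - 2.0 * x
        num += 0.5 * dpsi * dpsi
        den += psi * psi
    return num / den          # the 1/n quadrature weights cancel

E_trial = rayleigh_quotient()
E_exact = math.pi ** 2 / 2
print(round(E_trial, 4))      # → 5.0, an upper bound on 4.9348...
```

The variational estimate (5.0) sits about 1.3% above the exact ground-state energy; richer trial spaces can only lower it. This upper-bound structure is exactly what electronic structure theories exploit.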

One of the most popular electronic structure theories in wide use is density-functional theory (DFT). It has its roots in the seminal work of Kohn, who rigorously proved that the ground-state properties of a material system are a functional of the electron density alone, a result which has made electronic-structure calculations of materials possible. Despite many theoretical developments in this field and the advent of supercomputing, the computational complexity of these calculations still restricts computational domains to a couple of hundred atoms. Thus, historically, it was natural to concentrate on periodic properties of materials. DFT has been very successful in capturing a wide range of bulk properties, including elastic moduli, band structure, phase transformations, etc. The interest in periodic properties has resulted in the use of plane waves as a basis set for solving the variational problem associated with density-functional theory. Such a Fourier-space formulation has limitations, especially in the context of defects: it requires periodic boundary conditions, thus limiting an investigation to a periodic array of defects. This periodicity restriction, in conjunction with the cell-size limitations (about 200 atoms) arising from the enormous computational cost of electronic-structure calculations, limits the scope of these studies to very high concentrations of defects that rarely—if ever—are realized in nature. Thus, recently, there has been an increasing thrust towards real-space formulations using a finite-element or wavelet basis, or a finite-difference scheme. The following three articles are good representatives of these methods.

1. J.E. Pask, B.M. Klein, C.Y. Fong, P.A. Sterne, Real-space local polynomial basis for solid-state electronic-structure calculations: A finite-element approach, Phys. Rev. B 59, 12352 (1999).
2. T.A. Arias, Multiresolution analysis of electronic structure: semicardinal and wavelet bases, Rev. Mod. Phys. 71, 267 (1999).
3. C.J. García-Cervera, An efficient real-space method for orbital-free density functional theory, Comm. Comp. Phys. 2, 334 (2006).
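As a drastically simplified stand-in for the real-space methods referenced above (my own 1D finite-difference sketch, not taken from any of these papers): discretize H = -1/2 d^2/dx^2 for a particle in a box on a uniform grid, then extract the lowest eigenpair by inverse iteration, solving the tridiagonal system with the Thomas algorithm.

```python
# Real-space sketch: 1D particle-in-a-box Schrodinger problem,
# finite differences + inverse iteration (hbar = m = 1, box length 1).

import math

n = 200                     # interior grid points
h = 1.0 / (n + 1)
# -1/2 d^2/dx^2 -> tridiagonal: diagonal 1/h^2, off-diagonal -1/(2h^2)
diag = [1.0 / h ** 2] * n
off = -0.5 / h ** 2

def solve_tridiag(d, e, b):
    """Thomas algorithm for a symmetric tridiagonal system with
    constant off-diagonal e."""
    m = len(d)
    c = [0.0] * m
    x = [0.0] * m
    c[0] = e / d[0]
    x[0] = b[0] / d[0]
    for i in range(1, m):
        piv = d[i] - e * c[i - 1]
        c[i] = e / piv
        x[i] = (b[i] - e * x[i - 1]) / piv
    for i in range(m - 2, -1, -1):
        x[i] -= c[i] * x[i + 1]
    return x

# inverse iteration: repeatedly solve H psi_new = psi and normalize;
# this converges to the eigenvector of the smallest eigenvalue
psi = [1.0] * n
for _ in range(50):
    psi = solve_tridiag(diag, off, psi)
    norm = math.sqrt(sum(p * p for p in psi))
    psi = [p / norm for p in psi]

# Rayleigh quotient of the normalized eigenvector gives the energy
Hpsi = [diag[i] * psi[i]
        + (off * psi[i - 1] if i > 0 else 0.0)
        + (off * psi[i + 1] if i < n - 1 else 0.0)
        for i in range(n)]
E0 = sum(p * q for p, q in zip(psi, Hpsi))
print(round(E0, 3))         # → 4.935; exact: pi**2/2 = 4.9348...
```

The point of the sketch is the structure: in real space the Hamiltonian is a sparse (here tridiagonal) matrix, so no periodicity is built in, and the grid can in principle be refined or coarsened locally, which is what the finite-element and wavelet formulations above exploit.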


(iii) How the mechanics community can have a great impact:

Although real-space formulations provide freedom from periodicity, the computational complexity still restricts calculations to a few hundred atoms. However, an accurate description of defects requires resolving both the electronic structure of the core and the long-ranged elastic effects. Some multi-scale methods based on embedding schemes have been proposed to address this problem. One representative article for these methods is the following:

4. G. Lu, E. Tadmor, and E. Kaxiras, From electrons to finite elements: A concurrent multiscale approach for metals, Phys. Rev. B 73, 024108 (2005).


The philosophy behind these embedding schemes is to embed a refined electronic-structure calculation (inside a small domain) in a coarser atomistic simulation using empirical potentials, which in turn is embedded in a continuum theory. Valuable as these schemes are, they suffer from some notable shortcomings. In some cases, uncontrolled approximations are made, such as the assumption of separation of scales, the validity of which cannot be asserted. Moreover, these schemes are not seamless and are not based solely on a single electronic-structure theory. In particular, they introduce undesirable overlaps between regions of the model governed by heterogeneous and mathematically unrelated theories.

I feel there is tremendous potential for the mechanics community to contribute to the development of multi-scale schemes based solely on electronic-structure calculations, schemes which are seamless, have controlled approximations, assure a notion of convergence, and provide insights into the behavior of defects. To start the discussion, let me provide an analogy: the electronic structure of a defect in a material has a structure similar to a composite problem with a damage zone. Homogenization techniques and adaptive finite-element basis sets are common solutions to such composite problems! With a little care, I believe the mechanics community can make a huge impact on electronic-structure calculations of defects.
 

Comments

N. Sukumar

Vikram,

Thanks for initiating this interesting theme, and for posting your overview on electronic-structure theories. The topic is timely, given the continued and growing interest of mechanicians in the development of multiscale computational methods. I am aware of the papers you have suggested for reading that pertain to the solution of the equations of DFT (Schrödinger and Poisson) via real-space methods, but am not well versed in the details and intricacies of concurrent methods that attempt to bridge the scales. On taking a peek at Lu et al. (2005), I gathered that they use the QC framework, with the distinction that in the region in the vicinity of the defect, ab initio (first principles) calculations are done, unlike the original QC paper where classical potentials are used. Would it be possible to indicate, via a step-by-step procedure, what is entailed within a concurrent method if one would like to include quantum-to-continuum simulations within a single umbrella, and to also briefly indicate some of the issues (e.g., interfacial conditions) that need to be addressed therein? I was trying to see if something along the lines of a flowchart for an FE analysis can be crafted for a concurrent multiscale simulation, so that one can better see and appreciate the various links. Possibly my question is a tad vague and overreaching, but I hope you get my drift. There are many experts on iMechanica who have contributed to this endeavor, and hence it would be beneficial to hear from others too about the strengths/weaknesses of competing methods, the current state of the art if it exists, and the main challenges that remain.

Vikram Gavini

Sukumar,

Many thanks for raising this question, as it opens up discussion on more than one aspect of multi-scale methods and also provides an opportunity to put forth my viewpoint on what the real QC method is.

In my opinion there are two kinds of concurrent methods: one where different physics is used to model different regions of the domain, and another where the same physics is used everywhere but multiple length scales are spanned by coarse-graining the numerical scheme. The article of Lu et al. falls in the former category. Here DFT is used to describe the physics right at the defect core, which is embedded in a region described by empirical potentials. Two very important questions that come up in such a formulation are: (i) what happens on the boundary between these heterogeneous theories---what conditions are imposed? (ii) what errors do they introduce into the energetics of the system? Firstly, there is no right answer to the first question, as the attempt is to stitch together two heterogeneous theories; in Lu et al. this is done by introducing an interaction energy term which is computed using empirical potentials everywhere. Also, there is no way of quantifying the error associated with this approximation, even numerically for a particular problem (let alone rigorous bounds), because of the limitations on the size of DFT calculations. This turns out to be the major limitation of concurrent schemes which use heterogeneous mathematical theories to describe different regions of the domain.
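The stitching of two theories described above can be made concrete with a schematic (hypothetical stub energies standing in for EAM and DFT; this shows only the shape of the decomposition, not the actual functionals of Lu et al.):

```python
# Schematic energy decomposition of an embedding scheme of the
# first kind. All energies here are toy stand-ins.

def e_empirical(atoms):
    """Stand-in for an empirical (EAM-like) energy: a toy pairwise
    Lennard-Jones sum over 1D atomic positions."""
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            r = abs(atoms[i] - atoms[j])
            e += 4.0 * (r ** -12 - r ** -6)
    return e

def e_quantum(atoms):
    """Stand-in for a DFT energy of the small defect region; a real
    scheme would invoke an electronic structure code here. The fake
    5% correction just makes the two theories disagree."""
    return e_empirical(atoms) * 1.05

def e_total(region_I, region_II):
    """Stitched total energy: empirical energy everywhere, with the
    defect region I upgraded from empirical to quantum accuracy.
    The I-II interaction remains purely empirical -- this is the
    uncontrolled approximation discussed above."""
    all_atoms = region_I + region_II
    return e_empirical(all_atoms) - e_empirical(region_I) + e_quantum(region_I)

region_I = [0.0, 1.1]     # "defect core" atoms (hypothetical positions)
region_II = [2.2, 3.3]    # surrounding atoms
E_coupled = e_total(region_I, region_II)
print(E_coupled)
```

By construction, E_coupled differs from the all-empirical energy only by the quantum correction inside region I; whatever error the empirical interaction term carries across the I-II boundary is never quantified, which is exactly the difficulty raised above.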

Now coming to the second type: here one uses the same physics everywhere, but coarse-grains the numerical scheme to span the multiple length scales. The quasi-continuum method was developed in this spirit. It relies on a variational principle: minimize the energy of the system with respect to the positions of atoms. The key idea of the QC method is to choose some special atoms, called representative atoms, and constrain the positions of all other atoms with respect to these representative atoms (through the shape functions of the FE). This is tantamount to looking for a solution of the variational problem in a sub-space spanned by this constraint. This is nice because one can test for convergence (at least numerically) of the method, as the same theory is used everywhere and the subspace becomes increasingly richer as the number of representative nodes increases. Further, I would also like to clarify a common misconception that the QC method uses only empirical potentials. The QC method is a method which coarse-grains the basis functions; it can be developed for any physics. It is most popular for empirical potentials, but there are some efforts to develop the QC method for DFT too (by this I mean DFT is the sole input physics, unlike what is done in Lu et al.). Below is a reference where the QC method for orbital-free DFT is discussed.

Gavini, V., Bhattacharya, K., Ortiz, M., Quasi-continuum orbital-free density-functional theory: A route to multi-million atom non-periodic DFT calculation, J. Mech. Phys. Solids 55, 697 (2007).
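The constrained-subspace idea can be sketched on a toy problem (a 1D harmonic chain under an end load; my own illustrative model, not the QC-OFDFT formulation of the reference above): every atomic displacement is slaved to a few representative atoms through linear shape functions, and the energy is minimized over the representative degrees of freedom only.

```python
# Toy QC sketch: energy minimization in the subspace spanned by
# representative atoms of a 1D harmonic chain.

N = 64                        # number of springs; atoms 0..N, atom 0 clamped
f = 0.1                       # end load on atom N
reps = [0, 16, 32, 48, 64]    # representative atoms (uniform spacing here)

def interpolate(U):
    """Constrain every atomic displacement to linear interpolation
    between the representative atoms (piecewise-linear FE shape
    functions), as in the QC construction described above."""
    u = [0.0] * (N + 1)
    for a, b, Ua, Ub in zip(reps, reps[1:], U, U[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a)
            u[i] = (1 - t) * Ua + t * Ub
    return u

def energy(u):
    """Toy physics: unit harmonic springs plus work of the end load."""
    return sum(0.5 * (u[i + 1] - u[i]) ** 2 for i in range(N)) - f * u[N]

def minimize(U, sweeps=1000, lr=4.0):
    """Gradient descent on the representative displacements only:
    the variational problem restricted to the coarse subspace
    (U[0] stays zero: clamped end; lr tuned for this toy stiffness)."""
    U = list(U)
    h = 1e-6
    for _ in range(sweeps):
        for k in range(1, len(U)):
            Uh = list(U)
            Uh[k] += h
            g = (energy(interpolate(Uh)) - energy(interpolate(U))) / h
            U[k] -= lr * g
    return U

U = minimize([0.0] * len(reps))
u = interpolate(U)
print(abs(u[N] - f * N) < 1e-3)   # → True
```

Because the exact displacement field here (u_i = f*i) is linear, the coarse subspace contains it and five representative atoms reproduce the full 65-atom solution: the "smooth field, few rep atoms" situation. Near a defect core, where the field varies rapidly, one would instead place representative atoms at every site.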

Now coming to the heart of your question: "Is there a flowchart for concurrent multi-scale simulations?" I would say that if one follows the first approach there is none, as there is no systematic way to address the question of how to stitch together heterogeneous mathematical theories. However, the QC method, in its true spirit, provides a systematic solution. Of course there are caveats to this statement and there are open questions, but, in my opinion, it provides hope.

I hope I have provided some inputs to the questions you have raised, at least partially.

 

Dear Vikram,

This journal club in general and your replies in particular make for an interesting reading. I have a few questions, even if they might be a bit subsidiary in nature.

In your above reply, you say:

The key idea of the QC method is to choose some special atoms called representative atoms and constrain the positions of other atoms with respect to these representative atoms (through the shape functions of the FE). This is tantamount to looking for a solution of the variational problem in a sub-space spanned by this constraint.

My questions are:

How does one pick the representative atoms? What guiding principle is available in this respect? Roughly how many--what volume fraction--are they?

How does one know that the sub-space not spanned by this constraint does not carry something that is interesting (perhaps even critical) to the physical phenomena being modeled? What, possibly, could be the nature of such things (that possibly might get left out)?

BTW, please note that I am not challenging the very idea of having such an approach. I realize that there must be some great practical benefits following it, e.g., dramatically increasing the number of atoms being modeled or altering the kind of BCs that are being handled.... It is just that I want to make sure that I understand the approach conceptually and physically right before progressing further, that's all. (As you know already from our last year's discussions, I was, and still am, pretty much a novice to both DFT and QC.)

Thanks in advance.

Vikram Gavini

Dear Ajith,

The representative atoms are picked in such a way that they account for every atom around the defect core, whereas away from the defect core fewer representative atoms are picked. You have brought up an interesting question as to how one chooses these representative atoms. Often the displacement field governs the choice: if the displacement field is rapidly varying or has large gradients (as near the defect core), more representative atoms are introduced. If the field is smooth, then fewer representative atoms suffice, as a smooth variation can be captured by a few representative atoms.

Depending on whether one has prior knowledge of the "nature" of the displacement field, there are two methods of choosing the rep atoms:

(i) A priori mesh adaptation: here, prior knowledge of the nature of the displacement field can be used to derive error estimates associated with the interpolation introduced through the rep atoms. These error estimates provide the optimal distribution of the rep atoms for a given problem.

(ii) A posteriori mesh adaptation: often the nature of the displacement field is not known. In such a situation, one starts with a choice of the rep atoms and solves the problem. Then one performs h-adaptation, i.e., introduces an additional rep atom at various locations and checks whether the energy change exceeds a tolerance. If it does, the addition of the rep atom is accepted; if not, it is rejected.

These adaptation schemes are analogous to mesh adaptation schemes in standard finite-elements but applied to a discrete lattice. For more information I suggest the following reference:

Knap, J., Ortiz, M., 2001. An analysis of the quasicontinuum method. J. Mech. Phys. Solids 49, 1899.

Further, in any QC calculation, convergence of the energy is always checked with respect to the rep atoms, and once convergence is attained, the solution you are looking for has been captured. If some crucial information about the solution were present in the sub-space not spanned by the constraint, then one would not achieve convergence with respect to the rep atoms.
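The a posteriori h-adaptation loop described above can be sketched on a toy chain with one soft "defect" spring (my own hypothetical model; for this series chain under an end load, the minimum energy over a piecewise-linear subspace has a closed form, which keeps the sketch short):

```python
# Toy a posteriori h-adaptation on a 1D spring chain with a defect.

N = 8
k = [1.0] * N
k[4] = 0.1          # soft spring between atoms 4 and 5: the "defect"
f = 0.1             # end load; atom 0 clamped

def min_energy(reps):
    """Minimum energy over the piecewise-linear subspace defined by
    the representative atoms. For a series chain under an end load,
    each segment deforms at uniform strain and acts as an effective
    spring k_seg = sum(k_i)/m**2, so E_min = -f**2/2 * compliance."""
    C = 0.0  # total compliance of the segments in series
    for a, b in zip(reps, reps[1:]):
        m = b - a
        C += m * m / sum(k[a:b])
    return -0.5 * f * f * C

tol = 1e-4
reps = [0, N]               # start coarse: only the two end atoms
E = min_energy(reps)
while True:
    # try every candidate rep atom; keep the one lowering E the most
    best, best_E = None, E
    for c in range(1, N):
        if c in reps:
            continue
        Et = min_energy(sorted(reps + [c]))
        if Et < best_E - tol:
            best, best_E = c, Et
    if best is None:
        break               # no candidate lowers E beyond tol: done
    reps = sorted(reps + [best])
    E = best_E

print(reps)   # → [0, 4, 5, 8]: refinement clusters at the defect
```

The loop keeps adding rep atoms only while the energy keeps dropping by more than the tolerance, and the accepted atoms end up bracketing the soft spring, mirroring both the accept/reject rule of item (ii) and the convergence-with-respect-to-rep-atoms check above.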

N. Sukumar

Dear Vikram,

Thanks for the explanation, and for further clarifications on QC. The DFT-everywhere-and-coarse-graining route does have its appeal over the approach of having different physics in different regions and then having to deal with `apt' matching conditions on the interface. In DFT calculations, systematic improvability and the ability to have strict control of the error are crucial, hence the leaning toward the former. In variational formulations, the basis-function viewpoint is often forgotten (in lieu of the common usage of shape functions, FE implementation details, etc.); your view of QC as a coarse-graining of basis functions is a valuable perspective.

Pradeep Sharma

Dear Vikram,

Thanks for your interesting and thoughtful post. I have closely followed the literature on electronic structure calculations, although more from the perspective of the "user" than the "developer" of computational methods. As you know, electronic structure calculations are central to the study of quantum dots---another area (like defects) where quantum mechanics meets solid mechanics head on. I have a general question for you which struck me after reading your jclub issue: the finite element method has been around for a while; why has it then taken the electronic structure community so long to get around to exploring its utility?

You did not mention your own work on orbital free DFT which I find to be quite interesting as well. Could you post a few lines describing that? In particular, I would be curious to hear what challenges one is likely to face if your work is to be extended for application towards semi-conductors (covalent solids) and ferroelectrics (highly polar dielectrics).

Vikram Gavini

Pradeep,

Since this is a question on the history of electronic structure calculations, what I am putting forth is the opinion I have gathered through the literature and discussions with others working in this area. Firstly, I should mention that there were attempts in the mid-80's to use FE in electronic structure calculations. But the number of elements required to achieve the chemical accuracy that chemists desire was far more than the computational power at that time could handle. On the other hand, plane waves provided an ideal solution to at least the problems involving perfect materials, and there were plenty of interesting problems in perfect materials to address, given that the field was just beginning to build. Thus a lot of effort went into developing commercial codes based on a plane-wave basis, and, in my opinion, it became an accepted norm in the community. The issues started to surface when people wanted to address defects in materials using electronic structure calculations in the late 90's, as Fourier-space formulations could not account for defects at their realistic concentrations. This prompted the concurrent multi-scale schemes of the first kind that I discussed in my response to Sukumar's question---one where an electronic structure calculation is embedded in a less accurate model like empirical potentials or tight binding. But with the computational power available today, and the fact that FE formulations can be implemented with ease in a parallel computing framework, it is worth revisiting the problem of using FE in electronic structure calculations. What the FE framework offers, which other real-space and Fourier-space methods lack, is the power of coarse-graining---being able to adapt the resolution of the basis set as necessary. I think this holds the key to developing a seamless multi-scale scheme with DFT as the sole input, spanning length scales from electronic structure to the continuum.

 

I have mentioned the QC-OFDFT work in my response to Sukumar. One of the limitations of the orbital-free approach to DFT is that the kinetic-energy functionals used are not good enough to describe covalently and ionically bonded systems (they are good only for metallic systems). Thus the QC formulation of standard DFT needs to be developed to address problems in semiconductors and ferroelectrics where defects play an important role, like quantum dots in semiconductors and domain walls in ferroelectrics. One of the major hurdles in developing the QC version for DFT is that the wavefunctions are delocalized, and handling such a system with QC is tricky because QC relies on local perturbations of the system. There may be ways to get around these issues, but it is too early to give a definitive answer.

Dear Vikram and others,

Thank you for your interesting posts. I'd like to offer a few comments here, since our work has been mentioned in the discussions (Lu et al., 2005):

(1) In our original paper in 2005, we coupled Kohn-Sham DFT (KS-DFT) to the embedded atom method (EAM) for nonlocal QC simulations. In this approach, we viewed the empirical EAM model as an approximation to DFT, because EAM was derived based on DFT and EAM potentials are often fitted to DFT data. In this sense, there are some physical connections between DFT and EAM. The main motivation of the work was to develop a simple and efficient method that can deal with extended defects, such as dislocations and cracks, with the necessary quantum-level accuracy maintained at the defect core. Anyone who is familiar with DFT and EAM simulations can implement the method easily, and the computational overhead for the coupling is minimal.

(2) Although the 2005 paper involved two disparate theories (DFT and EAM) in exchange for simplicity, one could develop a QC method that is entirely based on either OFDFT or KS-DFT. We have done so with OFDFT (Q. Peng, X. Zhang, L. Hung, E.A. Carter and G. Lu, Quantum simulation of materials at micron scales and beyond, Phys. Rev. B 78, 054118 (2008)), and the extension to KS-DFT is straightforward and currently under way. The paper of X. Zhang and G. Lu, Quantum mechanics/molecular mechanics methodology for metals based on orbital-free density functional theory, Phys. Rev. B 76, 245111 (2007), points out the direction for achieving this goal. In our recent 2008 paper, we have performed QC-OFDFT calculations for nanoindentation of an Al thin film with micron dimensions. Our method is based on a nonlocal formulation of OFDFT which is applicable to simple metals, like Al, Mg and Li. In the interest of full disclosure, I should also mention a major difference between our QC-OFDFT method and the one developed by Vikram et al.: in its present form, Vikram's method takes a local approximation of the kinetic energy. The problems associated with the local approximation have been discussed in a recent paper by Emily Carter's group at Princeton (see p. 11 of Ho et al., Phys. Chem. Chem. Phys. 9, 4951 (2007)). It is not clear to me how the method of Vikram can be extended to nonlocal OFDFT or to KS-DFT. In his post, Vikram has also acknowledged the difficulty of combining QC and KS-DFT within his approach. Having said that, it would be truly fantastic if someone figures out a way to do the coupling within the framework of Vikram et al. To their credit, Vikram's work has certainly laid down a solid foundation for future developments in this direction, and I'm looking forward to its success.

(3) Vikram mentioned at the end of his last post that "One of the major hurdles in developing the QC version for DFT is that the wavefunctions are delocalized and handling such a system with QC is tricky because QC relies on local perturbations of the system." I wish to comment that although the wavefunctions are delocalized, the electron density is not. Maybe one should think in terms of the electron density in QCDFT, and that is actually what we did in our QCDFT method. Our method can be applied to semiconductor quantum dots or other systems that may be of interest to this community. Overall, I agree with Vikram's posts; as in any scientific debate, people often have different perspectives on the same issues, and I have just offered mine.

Vikram Gavini

Dear Gang,

Many thanks for sharing your views and providing additional information. As you rightly pointed out, for a scientific problem, especially one in its infancy, people have different perspectives, and it is great that these perspectives are coming out in this forum for readers to appreciate the importance and the challenge this problem poses. I believe that with time all the various methods proposed will continue to improve, and the issues in these methods will get addressed systematically: like the need for rigorous justification of the quadrature rules proposed in Gavini et al., J. Mech. Phys. Solids 55, 697 (2007), or the assumption of separation of length scales and the issues concerning ghost forces in the formulations proposed in your recent articles.

I wish to clarify some of the points brought up in your comments:

(i) The choice of a local kinetic-energy functional in the QC-OFDFT method proposed in Gavini et al., J. Mech. Phys. Solids 55, 697 (2007), was only for the purpose of demonstrating the method. Your comment is very accurate in pointing out that non-local kinetic-energy functionals must be used for an accurate description of the system. However, the incorporation of these non-local kinetic-energy functionals into QC-OFDFT is not difficult, and is in fact straightforward. The way to achieve this was already indicated in the appendix of the same paper. The numerical implementation of these functionals is work in progress.

(ii) As I pointed out in a previous comment, developing QC techniques for KS-DFT is tricky because of the delocalized nature of the wave functions in terms of which the energy of the system is described. However, there are ways in which this issue can be circumvented without introducing any further approximations. It is too early for me to give a definitive answer, as these are early days and this is work in progress.

(iii) Regarding your comment that, instead of considering the wave-functions, which are delocalized, it may be better to address the problem in terms of the electron density, which is local: I am not sure how one would achieve this without resorting to an approximation on the kinetic-energy functionals.
