
Experimental determination of crack driving forces in integrated structures


Jun He and Guanghai Xu (Intel Corporation)

Zhigang Suo (Harvard University)

pp. 3-14 in Proceedings of the 7th International Workshop on Stress-Induced Phenomena in Metallization, Austin, Texas, 14-16 June 2004, edited by P.S. Ho, S.P. Baker, T. Nakamura, C.A. Volkert, American Institute of Physics, New York.

A crack extends in a brittle material by breaking atomic bonds along the crack front. The physics of crack growth may well be understood, from electrons to atoms to microstructures. This statement by itself, however, is of limited value; it offers little help to the engineer trying to prevent cracking in an integrated structure. Hype about multi-scale computation aside, no reliable method exists today to predict cracking by computation alone. The pragmatic approach is to divide the labor between computation and measurement within the framework of fracture mechanics. Some quantities are easier to compute, and others easier to measure. A combination of computation and measurement solves problems economically.

Of course, what is easy changes with circumstances. As new tools and applications emerge, it behooves us to renegotiate a more economical division of labor. The history of fracture mechanics makes an excellent case study of such divisions and renegotiations. For the last few years, in the course of studying cracks in interconnect structures, we have found it necessary to make a new division of labor. This paper gives a preliminary account of our work.

For a crack in a structure, the crack driving force, G, is the reduction of the elastic energy in the structure per unit area of crack extension, when the external mechanical load is rigidly held and does no work. An existing protocol is to calculate G by solving a boundary value problem. Such solutions are accumulating for thin film structures. To prevent a crack from growing, the engineer must ensure that G is below a threshold value. The latter is estimated from experimental measurements.
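In symbols, the definition above can be restated as follows, with U the elastic energy of the structure, A the crack area, and G_th denoting the threshold value (this is only a restatement of the paragraph, not new content):

```latex
G \;=\; -\left.\frac{\partial U}{\partial A}\right|_{\text{load points fixed}},
\qquad
\text{no crack growth if } G < G_{\mathrm{th}} .
```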

The calculation of G is prohibitively difficult for three-dimensional structures that integrate diverse materials. This is particularly so when the stress-strain relations of the constituent materials, as well as the residual stress fields in the structures, are poorly characterized. On the other hand, compared to large structures such as airplanes and ships, small structures such as on-chip interconnects are inexpensive. It costs little to make many replicates of an integrated small structure, so that massive full-structure testing is practical. An integrated structure is an analog computer, and massive testing a form of parallel computing. Indeed, massive testing has been a key to the spectacular success of the microelectronic industry.

These considerations have motivated us to develop a method to measure the crack driving force experimentally. Our method relies on a familiar phenomenon: moisture-assisted crack growth. Water (and some other molecules) in the environment may participate in the process of breaking atomic bonds along the crack front. Because the environmental molecules must reach the crack front to break atomic bonds there, the crack extends at a velocity far below the sound speed in the material. The crack velocity V is an increasing function of the crack driving force G. In recent years, such V-G functions have been measured for various dielectric films.

The V-G function is specific to a given material and its environment. Once determined, the same function applies when this material is integrated in a structure with other materials, provided environmental molecules reach the crack front. In the integrated structure, an observed crack velocity, together with the known V-G function, provides a reading of the crack driving force. The observed crack velocity can be used to measure deformation properties of ultrathin films. We also describe a procedure to measure the crack driving force GR due to the residual stress field in the integrated structures, even when GR by itself is too low for the crack to extend at a measurable velocity.
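The reading of G from an observed velocity can be sketched numerically. As one common idealization (not the paper's calibration), suppose the measured V-G data for a film are fit by a power law V = V0 (G/G0)^n; the constants below are purely illustrative. Inverting the fit turns an observed velocity into a reading of the driving force:

```python
# Hypothetical power-law fit to measured V-G data for a dielectric film in a
# moist environment: V = V0 * (G / G0)**n.
# V0, G0, n must come from experiments; these values are illustrative only.
V0 = 1e-6   # crack velocity (m/s) at the reference driving force G0
G0 = 5.0    # reference crack driving force (J/m^2)
n = 20.0    # subcritical crack-growth exponent (typically large)

def velocity(G):
    """Crack velocity (m/s) predicted for a driving force G (J/m^2)."""
    return V0 * (G / G0) ** n

def driving_force(V):
    """Invert the V-G relation: read G (J/m^2) from an observed velocity V (m/s)."""
    return G0 * (V / V0) ** (1.0 / n)

# A velocity observed in the integrated structure becomes a reading of G:
V_obs = 3e-7  # m/s, hypothetical observation
G_read = driving_force(V_obs)
print(f"Observed V = {V_obs:.1e} m/s  ->  G = {G_read:.2f} J/m^2")
```

Because the exponent n is large, a wide range of velocities maps to a narrow range of G, which is what makes the crack a sensitive gauge of the driving force.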

Update on 3 November 2006. See also reports on channel cracks from IBM and Texas Instruments.

PDF attachment: Channel Crack.pdf (620.06 KB)


Rui Huang:

Dear Zhigang:

I read this paper a couple of years ago. As usual, you have got a novel idea and put it nicely in historical perspective. For this one, however, I was fascinated and puzzled at the same time, partly because I am trying to work on some similar problems. As you just mentioned in this post, "The calculation of G is prohibitively difficult for three-dimensional structures that integrate diverse materials." On this I completely agree, as one of my students has been trying to do this. Following your suggestion, this should not even be attempted. Instead, "massive full-structure testing" should give a measure of G, based on the known V-G functions. However, I am not sure how one could measure V in a 3D full structure with scales from centimeters (after packaging) to below 100 nm (backend interconnects). One possibility, of course, is this: one makes many samples of different designs, from first-level interconnects all the way to packaging, and subjects them to some thermomechanical tests, after which the samples are stripped off level by level and layer by layer to see if there are any cracks and how far they have grown (from which the velocity V may be roughly estimated). If this is what you mean by "massive full-structure testing", I don't think it is cheap and practical. The question that has been bugging me for a long time is: how do we design 3D integrated structures at all levels with some predictive tool that can give a measure of the crack driving force for each design? It seems to me G or something equivalent has to be calculated at the design stage.

I would appreciate your counter-comments on mine. Thanks.


Zhigang Suo:


You are absolutely correct that the technique is useful only when you can measure the crack velocity. However, the technique may not be as restrictive as you might think.

At this point, people at Intel, IBM, TI, and elsewhere have dealt only with cracks on the surfaces of structures. The structures are 3D in that the materials underneath and adjacent to the cracks can be heterogeneous.

If a crack is in the interior of a structure, you will have to find a way to "watch" the crack. I'll leave it to an experimental colleague to say something about possible techniques and resolutions.

Ting Tsui:

Hi Rui, Zhigang, 

It is possible to measure the crack growth velocity in the 3-D interconnect structures by using electrical signals or automated in-fab defect review methods.  

The electrical method requires special cracking test structures: for example, metal structures with different dielectric spacings between them and different geometries. Dielectric cracks may form and propagate during chemical mechanical polish (CMP). In most cases, they will be filled with conductive materials from the slurry. This creates electrical shorts that can be detected during post-CMP electrical probing or even after packaging. The parametric yield results of these cracking structures will indicate dielectric crack growth velocities and distributions under different processing conditions. The best part is that this technique is fully automated, but the resolution depends on how many test modules with different metal spacings are on a die. It is a good qualitative method to screen out different processing conditions.

Automated in-fab defect review is another good technique to detect cracks. These instruments will locate and process SEM images of dielectric cracks automatically. However, human intervention is required to measure the crack growth rate from the micrograph after the inspection. The resolution is excellent (same as the SEM imaging capability).

Hope this helps.


Rui Huang:

Hi Ting:

Thanks for your response, and I am glad to see you come onboard (congrats to Zhigang for another success in bringing together industry and academia!).

As a theoretician, I am amazed by the techniques you just described and have no basis to argue. Allow me to ask two questions:

(1) Has anyone used these techniques to measure crack velocities inside 3D interconnect structures? If yes, could you provide a link to the work so that I can learn more?

(2) At the design stage, do you think it is practical to run massive tests with different designs so that no more computation is needed? In fact, I have been wondering whether any computer-aided design (CAD) is used in the microelectronics industry, the way the automobile industry uses CAD tools to design cars.



Jun He:


As Ting described in his reply, using specially designed test structures to electrically detect cracks in fully integrated samples is fairly common in the microelectronics industry. Acoustic and visual techniques such as CSAM, optical/SEM, and x-ray imaging are sometimes used to complement the electrical testing results, providing higher spatial resolution and morphological information about the cracks. However, those approaches are performed on a much more limited volume because of the overhead of the measurement. CAD is critical in the early stage of development to guide the design of the test structures and also to aid the interpretation of the experimental data. However, in order to provide an accurate quantitative prediction of cracking behavior in 3D integrated structures, all the material properties have to be mapped out. In most cases, this turns out to be much more challenging than directly monitoring the fracture evolution, because of the limitations of characterization metrology on thin films and small-scale structures.


Rui Huang:


Thanks for your clarifications. I can understand the difficulty and inaccuracy in computational calculation of the crack driving force, and now have learned more about the possibilities and limitations of the experimental methods from Ting and you. It appears to me there is no perfect solution at this point. On the other hand, the microelectronics industry has been doing well for many years with empirical methods and some inaccurate calculations. Does it make sense to make an effort to improve the accuracy of the calculations and to better design the experiments with lower overhead? Maybe the best approach is to push from both ends and make some compromise in between? I have no idea. From the industry point of view, is there anything academics can do to help in this regard?

Seems like I have endless questions. I will tell you why later. Thanks.  


Zhigang Suo:


Is there any value in mechanics research in the world of massive testing? Perhaps we should let our industrial colleagues speak to this, or ask your colleague Paul Ho, who has more and stronger ties with industry, to comment. But to me, the answer is a clear YES. From the paper under discussion, we can already make several remarks:

  1. A better understanding is always valuable. Just because you can observe a phenomenon or measure a quantity is no reason to stop asking why things happen the way they do. In writing this paper, Jun, Jessica and I have gone through many rounds of discussion, many of which are concerned with basic mechanics understanding.
  2. The method is based on past mechanics research. Although this work does not invent any new mechanics, it uses fracture mechanics at a fundamental level. Past research in formulating fracture mechanics and in understanding moisture-assisted cracking provided the foundation for our method. By inference, basic mechanics research done today will help solve practical problems in the future.
  3. The method also points to new opportunities in mechanics research. One idea we discussed in the paper is that our method may provide a new approach to determining material properties at small scales. G depends sensitively on the properties of the materials surrounding a crack. We now have a way to measure G. We can use the measured G to determine material properties if we know how G relates to those properties. This last link requires careful design of experiments and accurate mechanics calculations. In principle, this method can measure mechanical properties at very small scales, e.g., for films of a few atomic layers.
  4. How to interpret accelerated tests is a good question of mechanics. The method is only good if the crack velocity is measurable. It means that you have to accelerate the test, perhaps by increasing temperature, humidity, or load. How to predict lifetime under service conditions has always been a playground for mechanics research.
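On point 4, a minimal sketch of how an accelerated test might be mapped back to service conditions: assuming, purely for illustration, that the crack velocity is thermally activated with an Arrhenius temperature dependence (the activation energy and temperatures below are hypothetical, and real interconnect tests would also have to account for humidity and load):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(Ea_eV, T_service_K, T_test_K):
    """Arrhenius acceleration factor: the ratio of crack velocity in the
    accelerated test to that under service conditions, assuming the same
    driving force and V proportional to exp(-Ea / (k_B * T))."""
    return math.exp(Ea_eV / K_B * (1.0 / T_service_K - 1.0 / T_test_K))

# Hypothetical numbers: Ea = 0.7 eV, service at 55 C, oven test at 125 C.
af = acceleration_factor(0.7, 328.15, 398.15)
print(f"Acceleration factor: {af:.0f}x")
# Under these illustrative assumptions, a crack that needs t hours to traverse
# the structure in the oven would need roughly af * t hours in service.
```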

I am repeating things that you already know, and the conclusion must be what you want to hear: Yes, mechanics research is valuable in the world of massive testing. Massive testing simply provides a new context to do mechanics research.

Ravi-Chandar:

It seems to me that the problem does not lie in the experimental arena. The industry is able to fabricate complicated 3D architectures at will. Perhaps they can even control the residual stress distribution (I am not certain that they do this actively, but I can imagine ways to accomplish this). Certainly, for purposes of material property characterization, controlled cracks can be introduced at specific locations, and other randomly located cracks are generated inadvertently. Tracking (interior) crack growth may be a challenging task, but from the comments above, it is not a pipe dream. So, subcritical V-G curves or the critical fracture energy Gamma can all be determined, if only with "massive testing".

From the point of view of theoretical fracture mechanics, concepts of cohesive crack growth, interfacial fracture, kinking or penetration across an interface are all reasonably well understood (except for some esoteric residual problems).

So, the difficulty of applying fracture mechanics ideas to "prediction" of reliability must lie in computing the correct quantities. Rui tells me that it is extremely difficult to calculate the energy release rates in these structures with finite element packages. The difficulty may be twofold:

  • first, that the computations are quite complex and expensive. I do not believe that this is the case. Simulations in other industries - aerospace, automobile, power, oil exploration and production - routinely tackle problems of equal complexity, with millions of degrees of freedom.
  • second, and potentially a more serious problem, that we do not know how to formulate the correct (appropriate) computational problem. The structure is heavily residually stressed, and one does not really have a handle on the detailed spatial distribution of residual stress. While some steps in the processing, such as the deposition of different materials, can be handled appropriately, other processes, such as CMP, are not modeled as cleanly. In such circumstances, crack nucleation sites are not readily identified.

Since residual stresses and cracks are intimately related, not knowing the initial state of the structure may pose about the most serious challenge to predictive application of fracture mechanics. Experiments (not tests) may still play a crucial role in sorting out this issue, but we don't have to be Edisonian about it.

I've been following this discussion from afar, and it seems like a good time to throw my hat in the ring on this issue.

As an undergraduate, I was involved in a research project sponsored by IBM to examine the failure of some of their high-density interconnect devices. This was in the early-to-mid nineties. The project resulted in two publications, one examining the relationship between material properties and the formation of stress in three-dimensional structures (online here), and another developing the interaction integral for curved cracks on bimaterial interfaces (online here).

Since that time, there have certainly been a number of advances in computational fracture mechanics for material interfaces, etc. Nonetheless, if we consider a three-dimensional interconnect structure and randomly seeded flaws (even if the state of residual stress is known), I would still consider this an expensive calculation to perform with existing commercial software.

We need to remember that this involves careful mesh generation and construction, a process that by itself can be quite time-consuming. I believe that many of these CAD->mesh packages that are used to design such structures are not necessarily well suited to introduce flaws into the geometry. The nature of the singularity, particularly with bimaterial interfaces, does require quite a bit of local refinement to attain spatial convergence. Further, few packages have the capability to adaptively refine the mesh as flaws propagate (although I'm not convinced this would necessarily need to be part of a reliability analysis).

So would these calculations be prohibitively expensive? Perhaps not. Certainly such calculations can be performed, but I'd be a little surprised if they could be effected in a matter of days. For industry, expense is often intimately tied to person-hours. Along these same lines, however, I guess I would be surprised if the experimental characterization described here would not also require quite a bit of time to complete?

Jun He:

Ravi's comparison of aerospace vs. chip-level interconnects is an interesting one, and here are some of my thoughts.

Being a UCSB graduate back in, arguably, the golden age of ceramic matrix composites, I agree that fracture mechanics computation in aerospace is very advanced and capable of solving complicated problems. One key difference, however, is the scale. With interconnect dimensions at tens of nanometers, most conventional characterization methods for mechanical properties, including ones designed for "thin films", are no longer applicable. As John correctly pointed out, even a complex CAD model usually runs faster than experiments once properly formulated. The problem is that it usually takes more effort to do a detailed property mapping, which is required for accurate model prediction, than to measure the end response of a fully integrated structure.

The other big challenge in nano-scale fracture mechanics is the characterization of the "flaw". Every time I try to persuade one of our process engineers that his or her process step is responsible for cracking or delamination due to defects, the typical reply is: what do these "flaws" look like, and how do we detect them during processing? Unfortunately, this is a rather complicated question. The "flaw" in an interconnect is not a small crack or delamination that can be detected optically or even under SEM. It can be local topology that leads to stress concentration, contaminants that change the local chemistry, or subtle composition gradients on the nano-scale, most of which are only visible under TEM. So detailed characterizations are not feasible compared to large-scale electrical testing of the fully integrated structures.

When Zhigang and I advocated large-scale testing of fully integrated structures, one fact to keep in mind was that industry has been doing large-scale product qualification for various other purposes, such as functionality, performance, and process window. So it is not prohibitively expensive to add some test structures to measure the end response of fracture behavior electrically. Putting all that infrastructure in place just for mechanical characterization is an entirely different matter.

Zhigang Suo:

I am grateful for all your perceptive comments.

I have just pasted part of the abstract of the original paper into the post as the last paragraph, so that people who do not have time to read the full paper can still get a rough idea.

Ting Tsui, of Texas Instruments, has also uploaded a paper on channel cracks, which contains several beautiful micrographs. Students of fracture mechanics must love to see them.

Xiao-Yan Gong:

I've just realized how much fun I missed.  These are wonderful discussions and I'd like to add a few comments for medical device and implant industry.

People always point us to the aerospace industry, just as they did to the electronics industry, every time reliability becomes an issue. The truth is, every industry has its uniqueness. The medical implant industry is rapidly transferring open surgery into minimally invasive surgery, so that implants such as today's stents and heart valves are at the sub-millimeter length scale. No, we are not micron or nano yet, but the scale is small enough to share exactly the challenges that the electronics industry has. So Jun, job well done.

Often, flaws are difficult to detect. Cracks created during processing are closed because of residual stress. What proof test can we do on metals to ensure the critical flaw size is below the threshold, if there is one? Simulations run into walls with convergence issues due to nonlinearity. It is an art to compute the J-integral for very small cracks with substantial material property changes during cycling, let alone that the material constitutive law remains questionable. These are a few examples of our challenges; sound familiar? Luckily, mechanics remains, and the good news is that this is our great opportunity to make a difference as computationalists, theorists, and experimentalists. I strongly believe that today we need to work together to customize the experiments so that we can predict the next experiment or learn something for the future. A test is only a test without a full understanding that expands its implications beyond it.

In a recent industry poll regarding the prediction of Nitinol fatigue, 9 out of 10 voted negative. We have a long way to go, but I am sure we are glad to be accompanied by the electronics industry.


Ravi-Chandar:

Both Jun He and Xiao-Yan Gong bring to the table a list of real challenges in the microelectronic and biomedical industries; these challenges are at the periphery of traditional mechanics - how do you characterize, define and detect flaws, how do you determine appropriate residual stress distributions in complex structures, how do you account for statistical variabilities in geometry, constitutive, fracture, and interfacial properties, etc - but many mechanicians work on these problems intensively. (Incidentally, the point of my previous comment was that these are the real issues and not the computational cost of calculating the stress field and energy release rates). The upshot is, of course, that if we can do all these, we will still use traditional continuum mechanics, fracture mechanics, fatigue limits, etc to estimate reliability.

The beauty of mechanics is that when the underlying similarity and scaling are elucidated, such applications are automatic. The facts that the absolute length scale of the device is small or large, that devices are in benign or hostile environments, that they are subjected to deterministic or stochastic loading, etc., while different in each practical application and technologically important, are irrelevant from the point of view of mechanics; of course, one has to account for the appropriate deformation mechanisms, force interactions, microstructural effects, statistical variability, etc.

So, perhaps we need to focus more on how to handle the above-mentioned uncertainties better. Engineers in the aerospace, nuclear, and other industries do not demand that their structures/machines/devices be defect-free; they admit material/structural variabilities and design their artifacts to be flaw-tolerant - a practical strategy to work around the uncertainties. Can the designers of microelectronic and biomedical devices develop such strategies? There might be an impulse to say no, but consider that biological entities - at many scales: cells, tissues, organs, organisms - are flaw-tolerant as well. I fully recognize that the flaw-tolerance strategies may not be the same in each application, but there ought to be pathways to explore in each case.

Xiao-Yan Gong:

I think we all admit that flaws exist. The issue is that the flaw size does not scale down as the device dimension goes down. Therefore the similarity doesn't really exist. It is relatively easy to detect a flaw of meter or even millimeter size, but it is hard when the critical flaw size goes down to microns. (It may be easier to say this now after they've done it. :-))

We were (at least, I was) told that developing a fatigue crack growth rate curve down to the micro scale is very difficult when we were pushing the durability assessment in this direction. However, nobody said that just because it was difficult, it couldn't be done.

I think the real questions are:

1. Is there anything more effective? I am a true believer that there is not, because "Statistics Doesn't Tell Science". However, knowing that the flaw size will be a factor, how do we extend the traditional flaw-tolerant fatigue assessment to the small scale? At least there is one more variable that needs to be included, i.e., the flaw size. Is there anything else we are missing?

2. Suppose that we worked out the experimental and computational methods and proved that the flaw-tolerant assessment applies to small-scale structures and nonlinear materials. Is the technology mature enough to build instruments to detect these flaws? If so, industry would love to see the case studies, and they may want to standardize it.

Xiao-Yan Gong, PhD
