Estimating Condition Number

tlaverne:

The convergence of iterative methods like conjugate gradient depends strongly on the condition number of the matrix. I use a preconditioned conjugate gradient (with a Gauss-Seidel preconditioner or ILU(0)) and I would like to know if there is a way to estimate the condition number of the matrix easily. The system matrix arises from the basic structural equations of mechanics in large deformation. Is there any estimate that can be computed cheaply (I have in mind a method which uses element aspect ratio, for example)? My goal is to have more control over the solution process and the ability to predict when the solver will fail.

Thank you !

Moreover, if someone knows how to estimate the condition number of the preconditioned system, that would be great too (and more accurate)!



ramdas chennamsetti:

R. Chennamsetti, Scientist, R&D Engineers, India

You may find the largest and smallest eigenvalues of the matrix. Then the condition number = lambda_max / lambda_min.
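For a symmetric positive definite matrix this ratio can be checked directly against a library condition-number routine. A minimal NumPy sketch (the 1-D Laplacian here is only an illustrative stand-in for a stiffness matrix):

```python
import numpy as np

# Small SPD test matrix: a 1-D Laplacian, an illustrative stand-in for
# a stiffness matrix from structural mechanics.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

ev = np.linalg.eigvalsh(A)      # eigenvalues of a symmetric matrix, ascending
kappa = ev[-1] / ev[0]          # condition number = lambda_max / lambda_min

print(kappa)
print(np.linalg.cond(A))        # agrees for symmetric positive definite A
```

For large sparse systems the dense `eigvalsh` call is, of course, exactly the cost one wants to avoid; it is used here only to verify the definition.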


tlaverne:

As far as I know, methods for computing eigenvalues of sparse matrices are computationally expensive, and I would like to avoid the numerical cost of such methods. That's why even rough estimates that could be cheaply computed would be very helpful. But maybe there are fast methods to obtain the two extreme eigenvalues of the spectrum. Any suggestions?

If you want an order of magnitude estimate then you can note that (for many problems in mechanics) the eigenvalues will have the same orders of magnitude as the diagonal terms of the matrix.  So you can get an estimate of the condition number by just finding the maximum and minimum values of the diagonal terms and finding their ratio.  This approach works reasonably well for fully coupled poroelastic problems where the diagonal terms differ by many orders of magnitude.  However, I'm not sure how well it will apply to your problem.
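A sketch of that diagonal heuristic (the matrix below is made up for illustration; its diagonal spans twelve orders of magnitude, as in a badly scaled coupled problem):

```python
import numpy as np

def diag_ratio_estimate(A):
    """Order-of-magnitude condition estimate from the diagonal alone.

    Purely a heuristic: reasonable when the diagonal spans many orders
    of magnitude and the matrix is diagonally dominant; it can badly
    underestimate the true condition number otherwise.
    """
    d = np.abs(np.diag(A))
    return d.max() / d.min()

# Illustrative badly scaled, diagonally dominant symmetric matrix.
A = np.diag([1e9, 1e6, 1.0, 1e-3]) + 1e-6 * np.ones((4, 4))

est = diag_ratio_estimate(A)
print(est)                  # about 1e12
print(np.linalg.cond(A))    # same order of magnitude here
```

The cost is O(n): one pass over the diagonal, with no factorization or eigenvalue solve at all.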



Temesgen Markos:

There are computationally cheap numerical algorithms that can give you the largest and smallest eigenvalues. For symmetric matrices, which I guess is true in your case, the process can be made even cheaper.

If you get the book "Scientific computing: an introductory survey" by Michael Heath, read the section "Methods for computing selected eigenvalues".
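One of the cheapest of the selected-eigenvalue methods covered there is the power iteration. A minimal sketch (the test matrix and the function name are made up for illustration):

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Estimate the largest-magnitude eigenvalue of A by power iteration.

    Needs only matrix-vector products, so the cost per step is a single
    multiply -- cheap for sparse matrices.  (For the smallest eigenvalue
    one applies the same iteration to A^-1, i.e. inverse iteration.)
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ (A @ v)              # Rayleigh quotient at the converged vector

A = np.diag([10.0, 3.0, 1.0, 0.1])  # illustrative matrix with a known answer
lam_est = power_iteration(A)
print(lam_est)                      # close to 10
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, so it slows down when they are close; Lanczos-type methods are the usual remedy.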


I have been following this thread with some interest.

I was wondering why it is that the ratio of the max/min eigenvalues estimates the condition number, especially for solution via iterative algorithms.

I suppose there ought to be a physically based explanation for this fact. After all, things like this particular ratio are only heuristics. They certainly aren't like corollaries to proved theorems, to be sure! 

If anyone has run into a clear explanation of the reason why this ratio (or similar measures) works, then please leave a note or a link here.

The sought explanation should preferably be in physical terms. However, the explanation could also be stated in "purely" mathematical terms. But if the latter, the explanation should be of the kind that has been written to be understood. ... I do doubt if there is any such explanation here. This seems to be one of those topics about which everyone thinks that someone knows for sure, but in fact, none does, actually!!

Thanks in advance!

ramdas chennamsetti's picture

R. Chennamsetti, Scientist, R&D Engineers, India

Yes, as stated by T. Markos, there are algorithms that will directly give you the maximum and minimum eigenvalues. You can make use of these for computing the condition number.

tlaverne:

Thanks to all for your wise answers! I'll try some implementations of numerical algorithms to compute the extremal eigenvalues, and I will compare them with the min and max diagonal terms to see whether that is an appropriate approximation in my case. Is Lanczos a good choice to compute the eigenvalues, followed by a tridiagonal QR? Yes indeed, the matrices are symmetric.
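For what it's worth, SciPy's `eigsh` wraps an ARPACK Lanczos-type solver and can target the two extreme eigenvalues of a sparse symmetric matrix directly. A sketch on an illustrative 2-D Laplacian (standing in for a stiffness matrix):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Sparse SPD test matrix: a 2-D Laplacian, standing in for a stiffness matrix.
n = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()   # 1600 x 1600

# eigsh is a Lanczos-type solver; 'LM' targets the largest eigenvalue, and
# sigma=0 (shift-invert) targets the eigenvalue closest to zero, i.e. the
# smallest one, which converges much faster than which='SM'.
lam_max = eigsh(A, k=1, which='LM', return_eigenvectors=False)[0]
lam_min = eigsh(A, k=1, sigma=0, return_eigenvectors=False)[0]

kappa_est = lam_max / lam_min
print(kappa_est)    # estimated condition number
```

The shift-invert step requires a sparse factorization of A, but only one, and it is then reused across the Lanczos iterations.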

ramdas chennamsetti:

R. Chennamsetti, Scientist, R&D Engineers, India

Dear Ajith Sir,

For a well-conditioned matrix, in general, all the diagonal terms will be of the same order. As mentioned by Biswajit Sir, the eigenvalues will be close to the diagonal terms for a diagonally dominant matrix.

Now the ratio of the biggest to the smallest (= condition number) will be a small number if the matrix is well-conditioned, since all the diagonal terms are of the same order. If the condition number is big, the difference in order among the diagonal terms is larger, which says that the matrix is ill-conditioned.

With regards,

- Ramdas

Temesgen Markos:

As rightly pointed out by Ajit Jadhav, this is one of those things that everyone "feels" works, but which becomes tricky when one is asked to prove it or give a concrete physical example. I was very surprised that, when I checked my numerical linear algebra books, they just state these things casually, and none provides a proper proof or justification.

I am writing up a proof of why the condition number of symmetric matrices is related to the ratio of the extreme eigenvalues. A word of caution here: it is actually the ratio of singular values, not eigenvalues, that gives the condition number for general matrices. But if the matrix is symmetric, the singular values are the absolute values of the eigenvalues. For non-symmetric matrices, the eigenvalues alone tell you essentially nothing about the condition number.

Here is the line of argument I think will lead into the proof

1. Show that cond(A) = ||A||·||A⁻¹||, where ||A|| is the matrix norm induced by the vector norm: ||A|| = sup ||Ay|| over all vectors y with ||y|| = 1. Many math books directly start with this as the definition of the condition number, but for a physically inclined forum like this we have to show that this equality follows from the definition of conditioning in terms of sensitivity to error.

2. Show that the Euclidean norm of a matrix equals its biggest singular value (the square root of the largest eigenvalue of AᵀA, which equals |λ_max(A)| if A is symmetric). Note here too that the matrix norm can be related to singular values this way only if we use the Euclidean norm.

3. Show that the singular values of a matrix and those of its inverse are reciprocals.

If we prove these statements then we are done.

cond(A) = ||A|| · ||A⁻¹||            ... from statement 1
        = σ_max(A) · σ_max(A⁻¹)      ... from statement 2
        = σ_max(A) / σ_min(A)        ... from statement 3

I think I have to fill out the details and write it up properly, maybe in LaTeX.
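The three statements can at least be checked numerically. A sketch with a random symmetric matrix (any invertible symmetric matrix would do):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T                          # symmetric, like a stiffness matrix

Ainv = np.linalg.inv(A)
sv = np.linalg.svd(A, compute_uv=False)        # singular values, descending
sv_inv = np.linalg.svd(Ainv, compute_uv=False)

lhs = np.linalg.cond(A)                        # NumPy's 2-norm condition number
rhs = np.linalg.norm(A, 2) * np.linalg.norm(Ainv, 2)

print(lhs, rhs)                                # statement 1: equal
print(np.linalg.norm(A, 2), sv[0])             # statement 2: equal
print(sv_inv[0], 1.0 / sv[-1])                 # statement 3: reciprocals
```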

Check in Matlab the condition number of A = [1 1000; 0 1]. It is about 1e6. But both eigenvalues are 1, which gives a ratio of eigenvalues of 1. Used as a condition number, that is not just wrong but misleading: very ill-conditioned matrices can have closely clustered eigenvalues. If you take the singular values instead ('svds' is the command in Matlab), the ratio gives the same thing as the 'cond' command.
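The same check, sketched in NumPy:

```python
import numpy as np

A = np.array([[1.0, 1000.0],
              [0.0,    1.0]])

eigvals = np.linalg.eigvals(A)              # both eigenvalues equal 1
sv = np.linalg.svd(A, compute_uv=False)     # singular values, descending

eig_ratio = np.abs(eigvals).max() / np.abs(eigvals).min()
sv_ratio = sv[0] / sv[-1]

print(eig_ratio)            # 1.0 -- misleading as a condition number
print(sv_ratio)             # about 1e6
print(np.linalg.cond(A))    # agrees with the singular-value ratio
```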

Now to the physical side of it. What happens when the eigenvalues are on very different scales? A precise explanation depends on the physical subject under consideration, but the starting point for most is that you can decompose a solution, a transformation, etc. in terms of the eigenvalues and what the operator does to the eigenvectors.

For example, if you have a linear system of ODEs dx/dt = Ax, the solution is a linear combination of exponentials of the eigenvalues multiplied by the eigenvectors. If there is a very small eigenvalue, its term hardly shows in the solution, because the terms with bigger eigenvalues dominate the sum. It's as if the problem has a lower dimension: the direction along the eigenvector with the small eigenvalue is practically not contributing. Experimentalists use the same reasoning with eigenvalues to see whether some of the parameters are insignificant.
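A sketch of that eigen-decomposition for a made-up two-mode system with widely separated eigenvalue scales:

```python
import numpy as np

# dx/dt = A x with symmetric A: x(t) = sum_i c_i exp(lambda_i t) v_i, where
# (lambda_i, v_i) are the eigenpairs and c_i come from the initial condition.
A = np.array([[-1.0,    0.0],
              [ 0.0, -100.0]])       # illustrative: widely separated scales

lam, V = np.linalg.eigh(A)
x0 = np.array([1.0, 1.0])
c = V.T @ x0                         # expansion coefficients of x(0)

t = 0.5
x_t = V @ (c * np.exp(lam * t))      # eigen-decomposition solution

# The fast mode exp(-100 t) is already negligible at t = 0.5: the dynamics
# are effectively one-dimensional.
print(x_t)
```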

Think of a stress tensor. You can express it in terms of normal stresses on the principal directions (eigenvectors). The eigenvalues give the normal stresses on the principal planes, and if one of them is much smaller in scale, the problem can practically reduce from 3-D to 2-D, or from plane stress to uniaxial stress.
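A small made-up numerical illustration of that reduction (the stress values are invented for the example):

```python
import numpy as np

# Illustrative symmetric stress tensor (arbitrary units) whose third
# principal stress is nearly zero.
sigma = np.array([[50.0, 10.0, 0.1 ],
                  [10.0, 30.0, 0.2 ],
                  [ 0.1,  0.2, 0.01]])

principal, directions = np.linalg.eigh(sigma)  # principal stresses, ascending
print(principal)   # smallest is near zero: practically a plane-stress state
```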

You people out there can come up with additional or different reasoning, or refine mine, and collectively we should be able to get to the heart of this thing.


You can find a slightly technical but still readable discussion of the concept of condition in

J. Rice (1966), "A theory of condition", SIAM J. Numer. Anal. 3, 287-310.


A concise description of the idea can be found in Matrix Computations by Golub and Van Loan (2nd ed.), pages 79 through 84 (the chapter is called "The sensitivity of square systems"). Algorithms for estimating the condition number can be found on page 128 of the same book. They require some structure in the matrix for efficiency. I'm sure there are more efficient algorithms in the recent literature, but you'll have to ask an expert in solver technology.
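One classical estimator of that kind is Hager's 1-norm estimator, which SciPy exposes as `scipy.sparse.linalg.onenormest`. A sketch, assuming an illustrative sparse 1-D Laplacian and a single sparse LU factorization to apply A⁻¹ without ever forming it:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, onenormest, splu

# Illustrative sparse SPD matrix: a 1-D Laplacian.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

lu = splu(A)   # one LU factorization, reused for every solve below

# Hager-style estimate of ||A^-1||_1 without forming the inverse;
# rmatvec = matvec is valid here because A (hence A^-1) is symmetric.
Ainv = LinearOperator((n, n), matvec=lu.solve, rmatvec=lu.solve)
kappa1_est = onenormest(A) * onenormest(Ainv)

print(kappa1_est)                         # estimated 1-norm condition number
print(np.linalg.cond(A.toarray(), 1))     # exact value, for comparison
```

The estimator only needs a handful of solves with A, so it is far cheaper than computing the inverse or the full spectrum.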



mohamedlamine:

close all
clear all
A = [1 1000; 0 1]

disp('Condition number with the cond function:')
cond(A)

disp('Condition number with the eigenvalues:')
vp = eig(A);
max(abs(vp))/min(abs(vp))
% => the eigenvalue ratio is 1, far below cond(A): wrong for this matrix

disp('Condition number according to Matlab, with 2-norms:')
norm(A,2)*norm(inv(A),2)

disp('Condition number with the svds function:')
vd1 = svds(A,2);               % the two singular values of A, descending
vd1(1)/vd1(2)                  % sigma_max/sigma_min, about 1e6

disp('Singular values from the formulas if A is non-symmetric:')
% D = adjugate (classical adjoint) of A; here det(A) = 1, so D = inv(A):
D = [1 -1000; 0 1]
sqrt(eig(A'*A))                % singular values of A

disp('Condition number with the singular values:')
sqrt(max(eig(A'*A)))*sqrt(max(eig(D'*D)))   % agrees with cond(A)


Mohammed lamine 

mohamedlamine:

Checking with several methods in the m-file above, the condition number calculated with the Matlab cond function agrees with the 2-norm and singular-value computations (about 1e6); only the eigenvalue ratio disagrees, and that ratio is not a valid condition number for this non-symmetric matrix. Note that the adjugate (classical adjoint) has to be entered by hand: it is not the conjugate transpose that Matlab's ' operator computes, and the documentation's use of "adjoint" for the latter is easy to misread.




Mohammed lamine 
