Eigenvectors of an upper triangular matrix

Open discussion regarding features, bugs, issues, vendors, etc.

Eigenvectors of an upper triangular matrix

Postby JnBiz » Mon May 29, 2006 5:29 pm

I'm sorry for my English; I'm French.

I'm developing a linear algebra program in Delphi that computes the eigenvalues and eigenvectors of a matrix.
For a general matrix I use the complex Schur decomposition to find the eigenvalues.
To find the eigenvectors I use the upper triangular matrix T from the Schur decomposition and simply solve the system T*X = v*X, where v is an eigenvalue computed before.
It works most of the time, but when the eigenvalues vary quite a bit in size, the computed eigenvectors are not precise at all.
So my question is: are there other methods to find the eigenvectors of a non-Hermitian matrix when some eigenvalues are "almost degenerate"?
In particular, what is the algorithm used by LAPACK?
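For concreteness, here is a minimal NumPy sketch (not the poster's Delphi code; the function name and the deliberately unguarded division are my own illustrative choices) of the plain backward recursion on the triangular factor T that the question describes:

```python
import numpy as np

def tri_eigvec(T, k):
    """Eigenvector of upper-triangular T for the eigenvalue T[k, k],
    by plain backward recursion (no overflow guards -- a sketch only)."""
    n = T.shape[0]
    lam = T[k, k]
    x = np.zeros(n, dtype=complex)
    x[k] = 1.0
    # Solve (T - lam*I) x = 0 for components k-1 down to 0.
    for i in range(k - 1, -1, -1):
        s = T[i, i + 1:k + 1] @ x[i + 1:k + 1]
        d = T[i, i] - lam
        # When d is nearly zero (a nearby eigenvalue on the diagonal),
        # this division blows up -- exactly the failure mode discussed here.
        x[i] = -s / d
    return x / np.linalg.norm(x)

# Example: a small upper-triangular matrix with well-separated eigenvalues.
T = np.array([[3.0, 1.0, 2.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 5.0]], dtype=complex)
v = tri_eigvec(T, 2)
print(np.linalg.norm(T @ v - T[2, 2] * v))  # residual near machine precision
```

With well-separated eigenvalues the residual is tiny; the thread below discusses what breaks when the diagonal entries nearly coincide.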

Thank you for your answers.
Posts: 3
Joined: Mon May 29, 2006 5:25 pm
Location: France

Re: Eigenvectors of an upper triangular matrix

Postby kressner » Sat Jun 03, 2006 2:48 pm


LAPACK essentially uses the backward recursion procedure
described in the Golub/Van Loan book with some extra care
taken to avoid overflow in the presence of identical or nearby
eigenvalues. From the description you have given, I guess
that you are using a similar approach. Could you give an
example when your implementation fails to compute the
eigenvector accurately?
(I should perhaps mention that in the case of nearby eigenvalues
the eigenvector will be ill-conditioned, so high accuracy cannot be expected.)

Good luck with Delphi,
Posts: 4
Joined: Fri Mar 18, 2005 6:14 am

Postby JnBiz » Sat Jun 03, 2006 4:51 pm


To compute eigenvectors, I use a backward recursion with a deflation procedure.
It fails when the relative difference between two eigenvalues is less than 1E-15.
To evaluate the procedure for nearby eigenvalues, I use a 100×100 complex matrix which is almost Hermitian, so I can also compute the eigenvectors with another appropriate procedure in order to verify the results.
In that matrix, some eigenvalues are identical, and a dozen pairs are almost identical. (It is the Hamiltonian of a one-dimensional quantum harmonic oscillator.)

I tried the same matrix with Matlab, which uses LAPACK, and the accuracy is good.

Could you be more precise about the "extra care"? Because it's impossible to change the diagonal elements with unitary transformations.

Thank you for your answers.

Postby kressner » Sun Jun 04, 2006 11:33 am

The LAPACK routine dynamically multiplies the eigenvector
being computed by a scalar to avoid overflow. If the
difference between two eigenvalues is as small as 1E-15, it
might just be coincidence that Matlab/LAPACK yield the
results you expected. In such cases, perturbation theory
tells us that no accuracy at all can be expected. Depending
on the application, the right quantity to look at would be
the invariant subspace spanned by both eigenvectors
(or the corresponding spectral projector).
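To illustrate the last point, here is a small NumPy sketch (the 4×4 matrix and tolerances are made up for the example): with a double eigenvalue, the individual eigenvectors are not uniquely determined, but every computed eigenvector lies in the same invariant subspace.

```python
import numpy as np

# Made-up example: symmetric A with eigenvalue 2 of multiplicity two.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([2.0, 2.0, 1.0, -1.0]) @ Q.T

w, V = np.linalg.eig(A)
U = Q[:, :2]          # exact invariant subspace for the eigenvalue 2
P = U @ U.T           # orthogonal projector onto that subspace

for j in np.where(np.abs(w - 2.0) < 1e-8)[0]:
    v = V[:, j].real
    # Each computed eigenvector sits (almost) inside the subspace, even
    # though different solvers may pick different vectors within it.
    print(np.linalg.norm(v - P @ v))  # tiny
```

The residuals are at machine-precision level because the cluster {2, 2} is well separated from the rest of the spectrum, so the *subspace* is well-conditioned even though the individual vectors are not.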


Postby Julien Langou » Mon Jun 05, 2006 8:47 am


Daniel is right, but I just want to give an example to illustrate what he is saying; this is a problem that often comes up.

Take for example
A = [ 2 0 0; 0 2 0; 0 0 1];
A is a diagonal matrix with two 2s and one 1 on the diagonal. A has one eigenvalue (2) of multiplicity two, and one (1) of multiplicity one.
Then a suitable diagonalization of A is:
V = [ 1 0 0; 0 1 0; 0 0 1];
D = [ 2 0 0; 0 2 0; 0 0 1];
but
V = [ sqrt(2)/2 -sqrt(2)/2 0; sqrt(2)/2 sqrt(2)/2 0; 0 0 1];
D = [ 2 0 0; 0 2 0; 0 0 1];
is just as suitable.
Actually, any orthonormal basis of the 2-D plane z = 0 (i.e. the invariant subspace of A corresponding to the eigenvalue 2) gives you an acceptable V matrix.

This simple example means that when you have a multiple eigenvalue (or two eigenvalues that are close up to machine precision), you cannot expect two different solvers to return the same eigenvectors. They will hopefully return the same invariant subspace, but there is no reason/constraint for them to choose the same basis to span that subspace.
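The same check can be run numerically; this is a small NumPy transcription of the Matlab-style example above, verifying that both candidate bases satisfy A*V = V*D:

```python
import numpy as np

A = np.diag([2.0, 2.0, 1.0])
D = np.diag([2.0, 2.0, 1.0])
V1 = np.eye(3)                       # the trivial eigenvector basis
r = np.sqrt(2.0) / 2.0
V2 = np.array([[r, -r, 0.0],         # a rotated basis of the same 2-D
               [r,  r, 0.0],         # invariant subspace for eigenvalue 2
               [0.0, 0.0, 1.0]])

# Both bases diagonalize A equally well:
for V in (V1, V2):
    print(np.linalg.norm(A @ V - V @ D))  # numerically zero in both cases
```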

Julien Langou
Posts: 835
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

Postby JnBiz » Mon Jun 05, 2006 3:52 pm

Alright, thanks a lot for your answers.

After some tests on different matrices, I would say that the method described by Daniel really works!

Here is a description of what gives (for the moment) the best results. When the difference between two (or more) eigenvalues is less than c*n*epsilon, where n is the size of the matrix, epsilon is the machine precision, and c is an adjustable coefficient, the program considers the eigenvalues to be equal, and it then computes the corresponding eigenvectors from the invariant subspace given by the modified Schur form of the matrix.
For the matrices I've tested, the worst relative precision is 1E-12, which is quite acceptable.
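As a sketch of the tolerance rule just described (the helper name, the default c, and the relative-scale choice are my own assumptions, not the poster's Delphi code), grouping eigenvalues whose gap falls below c*n*epsilon might look like:

```python
import numpy as np

def cluster_eigenvalues(w, c=10.0):
    """Group eigenvalues whose gap is below c*n*eps relative to their
    magnitude -- a sketch of the clustering rule described above."""
    n = len(w)
    tol = c * n * np.finfo(float).eps
    order = np.argsort(np.real(w)).tolist()
    clusters, current = [], [order[0]]
    for i, j in zip(order, order[1:]):
        scale = max(abs(w[i]), abs(w[j]), 1.0)
        if abs(w[j] - w[i]) <= tol * scale:
            current.append(j)      # j joins the cluster of i
        else:
            clusters.append(current)
            current = [j]
    clusters.append(current)
    return clusters

# Two eigenvalues differing by ~1e-15 end up in the same cluster:
w = np.array([1.0, 2.0, 2.0 + 1e-15, 5.0])
print(cluster_eigenvalues(w))  # -> [[0], [1, 2], [3]]
```

Eigenvectors would then be computed per cluster, from the invariant subspace of the reordered Schur form, rather than individually per eigenvalue.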

If you are interested in my program, you can find the Delphi sources here http://www.delphifr.com/codes/CLASS-TCM ... 37701.aspx
(it will be updated soon)
and a simple example here http://www.delphifr.com/codes/RESOLUTIO ... 37617.aspx
(send me a mail if you want the exe file).
