It seems I don't really know what a left eigenvector is. You surprised me with
the equation: x^H * A = (lambda) * x^H.
This is the standard definition. You can find it in any textbook.
Does this mean that in order to apply the output VL to A one must take the Hermitian adjoint?
Yes, if you want to apply the output VL from the left to A, you need to use the Hermitian adjoint (in order for it to make sense). Or you can apply the output VL from the right to A^H. (Since your matrix A is real, this means A^T.) But then the eigenvalue is not lambda but conj(lambda). Either you view it as:
x^H * A = lambda * x^H
or
A^H * x = conj(lambda) * x
where x is a column of VL.
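If you want to see this in action, here is a quick sketch in Python (scipy.linalg.eig calls DGEEV under the hood for a real general matrix; the matrix A below is just an arbitrary example I made up):

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 2.0]])   # arbitrary real test matrix

    # w: eigenvalues; vl, vr: left/right eigenvectors stored as columns
    w, vl, vr = eig(A, left=True, right=True)

    for i in range(A.shape[0]):
        x = vl[:, i]
        # From the left, with the Hermitian adjoint: x^H * A = lambda * x^H
        print(np.allclose(x.conj().T @ A, w[i] * x.conj().T))   # True
        # Or from the right on A^H: A^H * x = conj(lambda) * x
        print(np.allclose(A.conj().T @ x, np.conj(w[i]) * x))   # True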
The crucial question for me is: is inv(M) = VL or VL^T?
Let's rename your matrix M as VR to make things clearer. The relation is:
(1) inv(VR) = VL^H
But the '=' sign is not 'exactly exact'; it's slightly overstated. Say rather that inv(VR) is a valid left eigenvector basis. The problem is that there is no uniqueness of a basis of eigenvectors for a diagonalizable matrix.
Just a few lines to convince you about Eq. (1).
From the right eigenvector decomposition we have:
(2) A*VR = VR*D
where A is a real diagonalizable n-by-n matrix, VR is a basis of
right eigenvectors (they might be complex), D is a diagonal complex n-by-n
matrix. VR and D are outputs from DGEEV (or DGEEVX). Fine.
Multiplying Eq. (2) on the left and then on the right by inv(VR), you get:
inv(VR) * A = D * inv(VR)
Then recall the definition of the left eigenvectors:
(3) VL^H * A = D * VL^H
So you can take inv(VR) as your VL^H, which is Eq. (1).
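Numerically (a sketch under the same setup as above; computing inv(VR) explicitly is for illustration only, in practice you would solve a linear system instead):

    import numpy as np
    from scipy.linalg import eig, inv

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 2.0]])
    w, vl, vr = eig(A, left=True, right=True)
    D = np.diag(w)

    print(np.allclose(A @ vr, vr @ D))             # Eq. (2): A*VR = VR*D
    print(np.allclose(inv(vr) @ A, D @ inv(vr)))   # inv(VR)*A = D*inv(VR)
    # The rows of inv(VR) are left eigenvectors, but inv(VR) is generally
    # NOT equal to the VL^H that LAPACK returns (different scaling):
    print(np.allclose(inv(vr), vl.conj().T))       # usually False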
I think all this is pretty clear. Now there is an important subtlety: you need to keep in mind that Eqs. (2) and (3) do not define VR resp. VL uniquely, for the following reasons:
(1) There is the order of the eigenvalues in the matrix D. In LAPACK we ensure that the order is the same for VL and for VR (there is just one D), so this is not an issue.
(2) In any case you can always multiply a column of VR, or a column of VL, by a constant. Since we impose that each eigenvector has norm 1, the constant can only be +/- 1 when the eigenvector is real, or any complex number of the form e^(i*theta) (a complex number of modulus 1) when the eigenvector is complex. Consequently, LAPACK does not guarantee that inv(VR) and VL^H are the same, and in general those two matrices are not the same. (A small demonstration of this scaling freedom follows this list.)
(3) Finally, there is the case of multiple eigenvalues. If an eigenvalue is multiple, you have an invariant subspace, and any basis of this invariant subspace (an infinite number of possibilities) is a valid solution and satisfies Eq. (2) (or Eq. (3)). So if your eigenvalues are distinct, this case cannot happen. (And keep in mind that, working in finite precision arithmetic, 'distinct' means that you would like the relative gap between your eigenvalues to be fairly large.)
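Here is the small demonstration of the scaling freedom promised in point (2) (same sketchy Python setup as before; the phase 0.7 is arbitrary):

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 2.0]])
    w, vl, vr = eig(A, left=True, right=True)

    # Multiply one column of VR by a modulus-1 constant e^(i*theta):
    # the result is still a unit-norm right eigenvector basis, so
    # Eq. (2) alone cannot pin VR down uniquely.
    vr2 = vr.astype(complex)
    vr2[:, 0] *= np.exp(0.7j)
    print(np.allclose(A @ vr2, vr2 @ np.diag(w)))        # still True
    print(np.isclose(np.linalg.norm(vr2[:, 0]), 1.0))    # still unit norm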
What's the conclusion of all this? It looks like for your application you really want inv(VR), and VL alone is not enough. Even if your eigenvalues are distinct, LAPACK will not guarantee you
VL^H * VR = I
But (if the eigenvalues are distinct) VL^H * VR is a diagonal matrix with nonzero diagonal entries, so you can compute, for each i = 1 to n:
alpha = VL(:,i)^H * VR(:,i)
Since both columns have unit norm, the modulus of alpha lies between 0 and 1. It is exactly 1 only when the eigenvalue is perfectly conditioned (for instance, when A is normal); a modulus much smaller than 1 signals an ill-conditioned, nearly multiple eigenvalue, and the truly multiple case is a little harder to handle, so I'll skip it.
Then you just need to divide column i of VL by conj(alpha), or column i of VR by alpha. (Either one or the other, but not both!) And that should do it. After this scaling, you will have inv(VR) = VL^H.
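Putting the recipe together (a sketch under the same assumptions as the earlier snippets; it presumes the eigenvalues are distinct, so every alpha is safely away from zero):

    import numpy as np
    from scipy.linalg import eig

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 2.0]])
    w, vl, vr = eig(A, left=True, right=True)
    n = A.shape[0]

    vl = vl.astype(complex)
    for i in range(n):
        alpha = vl[:, i].conj() @ vr[:, i]   # nonzero for distinct eigenvalues
        vl[:, i] /= np.conj(alpha)           # rescale VL only (not VR as well!)

    # After the scaling, VL^H is exactly inv(VR):
    print(np.allclose(vl.conj().T @ vr, np.eye(n)))        # True
    print(np.allclose(vl.conj().T @ A @ vr, np.diag(w)))   # recovers D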
As a side note: why are you using DGEEVX and not DGEEV?