
### Fast Accuracy Loss in DSTEVD and DSTEVR

Posted: Mon Jan 18, 2016 3:39 am
I noticed that DSTEVD and DSTEVR routines are much less numerically stable compared to DSTEV in computing eigenvectors.

Let's consider tridiagonal symmetric matrix of the form:
Code:
```
J =  1  1  0  0  0  ...
     1  3  2  0  0  ...
     0  2  5  3  0  ...
     0  0  3  7  4  ...
     0  0  0  4  9  ...
    ...     ...     ...
```

It has 2*i-1 on the main diagonal and i on the sub- and super-diagonals.
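For reference, the matrix can be built programmatically. This is a small sketch (not part of the original post) using NumPy; the helper name `laguerre_jacobi` is my own:

```python
import numpy as np

def laguerre_jacobi(n):
    """Build the n x n symmetric tridiagonal matrix with 2*i-1 on the
    main diagonal and i on the sub-/super-diagonals (i = 1..n)."""
    i = np.arange(1, n + 1, dtype=float)
    return (np.diag(2 * i - 1)
            + np.diag(i[:-1], 1)
            + np.diag(i[:-1], -1))

J = laguerre_jacobi(5)
```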

The eigenvectors of this problem are sensitive, and their ill-conditioning grows with matrix size.
Any routine working in fixed precision is expected to deviate from the true result beyond some matrix size.

But I noticed that DSTEVD and DSTEVR break down at much smaller matrix sizes than DSTEV does.
DSTEV gives correct results for 128x128, whereas DSTEVD and DSTEVR start to show accuracy loss even at a relatively small size of 32x32.

For example, in the 64x64 case, DSTEV computes all eigenvectors correctly, whereas DC and MRRR already fail on 30% of them (those corresponding to the largest eigenvalues).
MRRR just returns zeros; DC is off by several orders of magnitude.

(We compare only the first component of each eigenvector.)
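A minimal way to reproduce this comparison is via SciPy's `eigh_tridiagonal` wrapper, assuming its `lapack_driver='stev'` and `'stemr'` options, which select the QR and MRRR paths respectively (the DSTEVD path is not directly selectable through this wrapper):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 64
d = 2 * np.arange(1, n + 1, dtype=float) - 1  # 1, 3, 5, ...
e = np.arange(1, n, dtype=float)              # 1, 2, 3, ...

_, Z_qr = eigh_tridiagonal(d, e, lapack_driver='stev')   # QR path
_, Z_mr = eigh_tridiagonal(d, e, lapack_driver='stemr')  # MRRR path

# First components of every eigenvector, up to sign.
first_qr = np.abs(Z_qr[0, :])
first_mr = np.abs(Z_mr[0, :])
```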

This is a practical problem, related to computing the coefficients of Gauss-type quadratures (Gauss-Laguerre in this particular case).
This is probably one of the most common areas where DC and MRRR find direct application.
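For context, this is the Golub–Welsch construction: for the weight e^{-x} on [0, inf) the zeroth moment is mu_0 = 1, so the Gauss–Laguerre nodes are the eigenvalues of J and the weights are the squared first components of the normalized eigenvectors. A sketch (the NumPy routine `laggauss` is used only as a cross-check and is not part of the original post):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 10
d = 2 * np.arange(1, n + 1, dtype=float) - 1  # main diagonal of J
e = np.arange(1, n, dtype=float)              # off-diagonal of J

nodes, Z = eigh_tridiagonal(d, e)
weights = Z[0, :] ** 2   # mu_0 = integral of e^{-x} over [0, inf) = 1

# Reference values from NumPy's Gauss-Laguerre routine.
ref_nodes, ref_weights = np.polynomial.laguerre.laggauss(n)
```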

Is such rapid accuracy loss inherent to the algorithms, or is it possibly an issue in the implementation?

(I suspect it is the latter, since MRRR returns plain zeros whenever an eigenvector's first component falls below some threshold. Could it be an algorithm parameter?)

### Re: Fast Accuracy Loss in DSTEVD and DSTEVR

Posted: Thu Jan 28, 2016 5:22 am
This approach is not the usual one.
In this case, one would usually proceed with the following steps.

1) J is irreducibly diagonally dominant, so all of its eigenvalues are positive.
Hence we can factor J = B^T*B using the Cholesky decomposition.
The i-th diagonal element of B is sqrt(i), and so is the i-th off-diagonal element.

2) We compute the singular value decomposition B = U*\Sigma*V^T (equivalently B^T = V*\Sigma*U^T).
3) From \Sigma and V, we get J = V*\Sigma^2*V^T.

Therefore, this problem can be regarded as the problem of the singular value decomposition.
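The three steps above can be sketched numerically with NumPy, assuming the closed-form factor B stated above (this sketch is mine, not part of the original reply):

```python
import numpy as np

n = 8
i = np.arange(1, n + 1, dtype=float)

# Step 1: closed-form Cholesky factor, J = B^T B, with B upper bidiagonal.
B = np.diag(np.sqrt(i)) + np.diag(np.sqrt(i[:-1]), 1)
J = np.diag(2 * i - 1) + np.diag(i[:-1], 1) + np.diag(i[:-1], -1)

# Step 2: singular value decomposition of B.
U, s, Vt = np.linalg.svd(B)

# Step 3: eigenvalues of J are the squared singular values of B,
# and the eigenvectors of J are the columns of V.
evals = s ** 2
V = Vt.T
```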

If you compute a singular value decomposition, please use our implementation of the orthogonal qd algorithm at http://www-is.amp.i.kyoto-u.ac.jp/kkimu ... ROGNC.html.

Our orthogonal qd algorithm is more accurate than DBDSQR.

The details are included in the attached file.

### Re: Fast Accuracy Loss in DSTEVD and DSTEVR

Posted: Fri Jan 29, 2016 11:39 pm
Thank you very much for your input.

Indeed, there are other, more efficient ways to solve this task.
Probably the fastest and most accurate would be simply finding the roots of the Laguerre polynomials (or another quadrature-oriented algorithm).

But searching for the optimal algorithm was not my initial goal.

My main point was to compare different routines for finding eigenvectors of tridiagonal matrices with regard to numerical accuracy.
MRRR, the most advanced and fastest O(n^2) method, seems to be much less robust than the classic QR algorithm it is supposed to replace.

### Re: Fast Accuracy Loss in DSTEVD and DSTEVR

Posted: Sat Jan 30, 2016 8:35 am
> MRRR, the most advanced and fastest O(n^2) method, seems to be much less robust than the classic QR algorithm it is supposed to replace.

In any case, J is not a suitable test matrix, because J is irreducibly diagonally dominant,
and the diagonal and off-diagonal elements of its Cholesky factor B are exactly sqrt(i).

### Re: Fast Accuracy Loss in DSTEVD and DSTEVR

Posted: Wed Feb 24, 2016 4:04 pm
Dear LAPACK user,

For an eigenvalue problem T*z = lambda*z, where n is the dimension of the (tridiagonal) matrix T, the metrics of success for the eigensolvers implemented in LAPACK are

resd = norm( T - Z*LAMBDA*Z**T ) / ( norm( T )*n*eps )
orth = norm( I - Z*Z**T ) / ( n*eps )

where Z = [z_1 z_2 ... z_n] and LAMBDA = diag(lambda_1, lambda_2, ..., lambda_n). Taking your matrix with n=64, our experiments (on an Intel-based desktop, with the Intel compiler) indicate that those numbers are always small (as desired), in particular:

DSTEDC:
resd = 1.2825e-01
orth = 1.9810e-01
DSTEMR:
resd = 3.4763e-01
orth = 8.0929e-01
DSTEQR:
resd = 3.4866e-01
orth = 4.3032e-01
DSTEVX:
resd = 5.7184e-02
orth = 6.2238e-01
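The two metrics above can be computed as in the following sketch, which uses SciPy's tridiagonal driver rather than calling the LAPACK routines directly; the exact numbers depend on the compiler and BLAS, so they will not match the table exactly:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 64
d = 2 * np.arange(1, n + 1, dtype=float) - 1
e = np.arange(1, n, dtype=float)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
eps = np.finfo(float).eps

# MRRR path (DSTEMR); other drivers can be substituted.
lam, Z = eigh_tridiagonal(d, e, lapack_driver='stemr')

# Scaled residual and loss of orthogonality, as defined above.
resd = np.linalg.norm(T - Z @ np.diag(lam) @ Z.T) / (np.linalg.norm(T) * n * eps)
orth = np.linalg.norm(np.eye(n) - Z @ Z.T) / (n * eps)
```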

We also performed another experiment: taking the eigenvectors computed by DSTEQR (which presumably returns eigenvectors close to "exact"), we compared them with the eigenvectors computed by DSTEDC,
DSTEMR, and DSTEVX, in the following way: orth_rel = norm( eye(n) - abs( (eigenvectors_QR)**T * (eigenvectors_from_DC_or_MR_or_VX) ) ). This is what we obtained:

DSTEQR x DSTEDC:
orth_rel = 2.0233e-13
DSTEQR x DSTEMR:
orth_rel = 2.7266e-13
DSTEQR x DSTEVX:
orth_rel = 2.6645e-13

so the differences are negligible.
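This cross-check can be sketched as follows, comparing the QR and MRRR drivers through SciPy's wrapper (again an assumption on my part; the forum post used the Fortran routines directly):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 64
d = 2 * np.arange(1, n + 1, dtype=float) - 1
e = np.arange(1, n, dtype=float)

_, Z_qr = eigh_tridiagonal(d, e, lapack_driver='stev')   # QR-based
_, Z_mr = eigh_tridiagonal(d, e, lapack_driver='stemr')  # MRRR

# Eigenvectors are defined only up to sign, hence the elementwise abs:
# if both solvers agree, abs(Z_qr^T Z_mr) should be close to I.
orth_rel = np.linalg.norm(np.eye(n) - np.abs(Z_qr.T @ Z_mr))
```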

I take the opportunity to refer to "Accurate and Efficient Expression Evaluation and Linear Algebra", J. Demmel, I. Dumitriu, O. Holtz and P. Koev,
Acta Numerica, vol. 17, ed. A. Iserles, Cambridge UP, 2008 (also arXiv:0712.4027), for a recent survey on when eigenproblems (and other linear algebra problems)
can be computed with high relative accuracy.

With best regards,

Osni

PS. The INFO=22 that user 'resultant' makes reference to plays no role here. That issue is currently under investigation.