Consider a symmetric tridiagonal matrix of the form:

```
J = 1 1 0 0 0
    1 3 2 0 0
    0 2 5 3 0   ...
    0 0 3 7 4
    0 0 0 4 9
          . . .
```
It has 2*i-1 on the main diagonal and i on the sub-diagonals (i = 1, 2, ...).
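For reference, a minimal NumPy sketch (my own construction, not code from this post) that builds this matrix for a given n:

```python
import numpy as np

def laguerre_jacobi(n):
    """Symmetric tridiagonal matrix with 2*i-1 on the main diagonal
    and i on the sub/super-diagonals, i = 1..n."""
    i = np.arange(1, n + 1)
    J = np.diag(2 * i - 1).astype(float)
    off = i[:-1].astype(float)               # off-diagonal: 1, 2, ..., n-1
    J += np.diag(off, 1) + np.diag(off, -1)  # place on super- and sub-diagonal
    return J

print(laguerre_jacobi(5))
```

For n = 5 this reproduces the 5x5 block shown above.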

The eigenvectors of this problem are sensitive, and their ill-conditioning grows with matrix size.

All routines working in fixed precision are expected to deviate from the true result beyond some matrix size.

But I noticed that DSTEVD and DSTEVR break down at a much smaller matrix size than DSTEV.

DSTEV gives correct results at 128x128, whereas DSTEVD and DSTEVR start to show accuracy loss even at the relatively small size of 32x32.

For example, in the 64x64 case, DSTEV computes all eigenvectors correctly, whereas DC (DSTEVD) and MRRR (DSTEVR) already fail on about 30% of them (those corresponding to the largest eigenvalues).

MRRR returns plain zeros, while DC is off by several orders of magnitude.

Please see the comparison in the attachment (we compare only the first component of every eigenvector).
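To make the comparison reproducible without the attachment, here is a sketch (my own, not the original test code) that pits the QR-based driver (DSTEV) against the MRRR driver (DSTEMR, the kernel that DSTEVR calls) through SciPy's `eigh_tridiagonal`, looking only at the first component of each eigenvector:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 64
i = np.arange(1, n + 1, dtype=float)
d, e = 2 * i - 1, i[:-1]          # main diagonal and sub-diagonal

# 'stev' = QR iteration (DSTEV); 'stemr' = MRRR (the core of DSTEVR)
_, v_qr = eigh_tridiagonal(d, e, lapack_driver='stev')
_, v_mrrr = eigh_tridiagonal(d, e, lapack_driver='stemr')

# The sign of a computed eigenvector is arbitrary, so compare magnitudes
first_qr = np.abs(v_qr[0, :])
first_mrrr = np.abs(v_mrrr[0, :])

# Relative disagreement between the two drivers on the first components
rel = np.abs(first_qr - first_mrrr) / np.maximum(first_qr, 1e-300)
print("worst relative disagreement:", rel.max())
print("first components returned as exact zeros by MRRR:",
      int((first_mrrr == 0).sum()))
```

The first components decay extremely fast with the eigenvalue index here, which is exactly the regime where the reported breakdown shows up.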

This is a practical problem, related to computing the coefficients of Gauss-type quadratures (Gauss-Laguerre in this particular case).

Probably this is one of the most common areas where DC and MRRR find direct application.
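For context, a minimal Golub-Welsch sketch (again my own, under the assumption that the first moment of the Laguerre weight is mu_0 = 1): the quadrature nodes are the eigenvalues of the Jacobi matrix and the weights are mu_0 times the squared first components of the normalized eigenvectors, which is why the first components matter so much here:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def gauss_laguerre(n):
    """Golub-Welsch for Gauss-Laguerre: nodes = eigenvalues of the
    Jacobi matrix, weights = mu_0 * (first eigenvector component)^2,
    with mu_0 = integral of exp(-x) over [0, inf) = 1."""
    i = np.arange(1, n + 1, dtype=float)
    x, v = eigh_tridiagonal(2 * i - 1, i[:-1], lapack_driver='stev')
    return x, v[0, :] ** 2

x, w = gauss_laguerre(20)
# An n-point rule is exact for polynomials up to degree 2n-1, so this
# should reproduce the integral of x^3 * exp(-x) over [0, inf), i.e. 3! = 6
print(np.dot(w, x ** 3))
```

A wrong or zeroed first component translates directly into a wrong or zero quadrature weight.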

Is such rapid accuracy loss inherent to the algorithms, or could it be an issue in the implementation?

(I suspect the latter, since MRRR returns plain zeros whenever an eigenvector's first component is smaller than some threshold. Could this be controlled by some algorithm parameter?)