dgesvd and MATLAB SVD

Postby srao12 » Wed Oct 18, 2017 3:06 pm

I see inconsistencies between dgesvd and MATLAB svd.

I am using LAPACK dgesvd to compute the SVD of a real-valued Hankel matrix as part of my C program, and am comparing the results against the MATLAB function svd. I notice that while the worst difference between the singular values is of the order of 10^-6 or 10^-7 for various cases (which isn't too bad), as the matrix size increases the worst difference between the left and right singular vector matrices from LAPACK and MATLAB grows, reaching as high as ~0.08. I was also accounting for the possibility of sign differences while computing the differences (i.e. (-U)*S*(-V') also being a valid solution). From what I was able to find, MATLAB also uses the dgesvd function internally. Could someone explain what would lead to the larger differences between the singular vectors, and why these differences are not reflected in the singular values? MATLAB possibly does some preconditioning internally to improve its accuracy. Any suggestions for improving the accuracy obtained from LAPACK?

Side notes: The condition number of the matrix generally worsens as the matrix size grows, reaching about 10^10 for a 400x400 matrix. Also, I can confirm that MATLAB's solution is more accurate by computing the difference between the matrix and U*S*V' for the two approaches: MATLAB's reconstruction error is always 10^-11 or better, while LAPACK's is always worse than 10^-5, the worst case being about 0.0177.
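
For reference, the backward-error check described above can be sketched with SciPy, whose lapack_driver='gesvd' calls the same LAPACK dgesvd routine; the random Hankel matrix below is only a stand-in for the actual data:

```python
import numpy as np
from scipy.linalg import hankel, svd

n = 50
rng = np.random.default_rng(0)
# Build a sample real-valued Hankel matrix: c is the first column,
# r the last row (r[0] is ignored by scipy.linalg.hankel).
c = rng.standard_normal(n)
r = rng.standard_normal(n)
A = hankel(c, r)

# SVD via the LAPACK dgesvd driver, then the backward error max|A - U*S*V'|.
U, s, Vt = svd(A, lapack_driver='gesvd')
backward_error = np.max(np.abs(A - (U * s) @ Vt))
print(f"max |A - U*S*V'| = {backward_error:.3e}")
```

For a well-behaved case like this, the backward error should be near machine precision; the interesting question in the thread is why it degrades for the poster's ill-conditioned matrices.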

Re: dgesvd and MATLAB SVD

Postby Julien Langou » Wed Oct 18, 2017 11:35 pm


Thanks for the report.

(1) The better check is ``finding the difference between the matrix and U*S*V'``. This is called the ``backward error``. Comparing the ``difference between the left and right singular vector matrices from LAPACK and MATLAB`` has some flaws.

(2) With respect to comparing the ``difference between the left and right singular vector matrices from LAPACK and MATLAB``: if you have close singular values, then the singular vectors can be very different for two different methods. The reason is that finding the singular vectors of a multiple singular value is an ill-conditioned problem. If you have a singular value with multiplicity 3 (say), you are asking for a left or right singular subspace of dimension 3, and any orthonormal basis of this 3-dimensional subspace is a correct answer. The problem is ill conditioned as far as vectors go — there is no single good answer — but well conditioned as far as subspaces go. For a non-multiple singular value the subspace has dimension 1, so there are only two orthonormal bases: +u or -u. This is why you are playing with the signs. Things get much more complicated as soon as the dimension is greater than 1: any orthonormal basis is fine, so two different methods are likely to return two different sets of singular vectors. All the above holds in "exact arithmetic" with an "exact" multiple singular value. In the computer, with floating-point errors, as soon as two distinct singular values get close (say within 1e-5), the difference between the singular vectors returned by two methods is likely to start growing.
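
This point can be illustrated numerically: rotate the singular vectors belonging to a repeated singular value by an arbitrary angle, and the reconstruction U*S*V' is unchanged even though the vectors themselves differ substantially. A sketch with NumPy (the 3x3 matrix and the angle are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
# Build A with singular values (2, 1, 1): the value 1 has multiplicity 2.
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
s = np.array([2.0, 1.0, 1.0])
A = (Q1 * s) @ Q2.T

U, sv, Vt = np.linalg.svd(A)

# Rotate the two singular vectors of the repeated value by an arbitrary
# angle within their 2-dimensional subspace: an equally valid answer.
theta = 0.7
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 1:] = U[:, 1:] @ G
Vt2[1:, :] = G.T @ Vt[1:, :]

err_original = np.max(np.abs(A - (U * sv) @ Vt))
err_rotated  = np.max(np.abs(A - (U2 * sv) @ Vt2))
vec_diff     = np.max(np.abs(U - U2))
print(err_original, err_rotated, vec_diff)
```

Both reconstructions are accurate to machine precision, while the singular vector matrices differ by O(1) — exactly the behavior described above.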

(3) I do not know why MATLAB returns a (significantly) better backward error than LAPACK. Did you try some of our other SVD solvers? I would try DGEJSV, and also another SVD method in LAPACK, DGESDD. Since your matrices are relatively small, I would use DGEJSV. No hesitation.
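
For a quick comparison of the drivers mentioned here, SciPy's scipy.linalg.svd exposes both DGESVD and DGESDD via its lapack_driver argument (DGEJSV is not reachable through this interface). The Hilbert matrix below is used as a stand-in for an ill-conditioned Hankel matrix:

```python
import numpy as np
from scipy.linalg import hilbert, svd

# The Hilbert matrix is a Hankel matrix with condition number that grows
# rapidly with n — a convenient stand-in for the poster's matrices.
A = hilbert(100)

errs = {}
for driver in ("gesvd", "gesdd"):
    U, s, Vt = svd(A, lapack_driver=driver)
    errs[driver] = np.max(np.abs(A - (U * s) @ Vt))
    print(driver, errs[driver])
```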

Feel welcome to come back to us.

Julien Langou

Re: dgesvd and MATLAB SVD

Postby resultant » Sat Oct 21, 2017 4:44 am

A real-valued Hankel matrix is always symmetric, since its entries depend only on i+j.

So, as another approach, we can use the eigendecomposition.
From the eigendecomposition H = V*D*V^T, we can build a matrix U as follows:
if D_{ii} >= 0, set u_i = v_i and sigma_i = D_{ii}; otherwise (D_{ii} < 0), set u_i = -v_i and sigma_i = -D_{ii}.
Then H = U*S*V^T is an SVD of H.
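
This eigendecomposition route can be sketched as follows, assuming SciPy's hankel and eigh helpers (note the singular values come out unsorted; reordering into descending order is omitted):

```python
import numpy as np
from scipy.linalg import eigh, hankel

n = 6
vals = np.arange(1.0, 2 * n)        # entries h_0 .. h_{2n-2} along the antidiagonals
H = hankel(vals[:n], vals[n - 1:])  # every real Hankel matrix is symmetric

w, V = eigh(H)                       # H = V * diag(w) * V^T
S = np.abs(w)                        # sigma_i = |D_ii|
U = V * np.where(w >= 0, 1.0, -1.0)  # u_i = v_i if D_ii >= 0, else u_i = -v_i

err = np.max(np.abs(H - (U * S) @ V.T))
print(err)
```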

Fortunately, if the real-valued Hankel matrix is symmetric positive definite,
then we can use the Cholesky decomposition H = R^T*R.
We then compute the SVD R = U*S*V^T, which means H = V*S^2*V^T.
Therefore, setting U = V, we obtain the SVD H = U*S^2*V^T.
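
A sketch of this Cholesky route, using the Hilbert matrix as an example of a Hankel matrix that is also symmetric positive definite:

```python
import numpy as np
from scipy.linalg import cholesky, hilbert, svd

H = hilbert(8)    # Hankel and symmetric positive definite
R = cholesky(H)   # upper triangular, H = R^T @ R
U, s, Vt = svd(R) # R = U*S*V^T, hence H = V*S^2*V^T

# Check the reconstruction H = V * S^2 * V^T.
err = np.max(np.abs(H - (Vt.T * s**2) @ Vt))
print(err)
```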
