4 posts
• Page **1** of **1**

I need to check positive definiteness of a real symmetric matrix, and if it is not positive definite, I would like to continue by computing its eigenvalues (and eigenvectors). I wonder what the best strategy is to save computation time. Thanks a lot!

- bsmile
**Posts:** 8 **Joined:** Sat Dec 04, 2010 2:40 pm

To check positive definiteness of a real symmetric matrix, the best way is to perform its Cholesky factorization. If the Cholesky factorization succeeds, the matrix is symmetric positive definite; if it does not succeed, the matrix is not symmetric positive definite. The LAPACK routine for the Cholesky factorization is DPOTRF.

To compute the eigenvalues of a real symmetric matrix, you can use DSYEV, DSYEVD, DSYEVR or DSYEVX.
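In SciPy, these four drivers are reachable through a single wrapper; a minimal sketch (the `driver` keyword is an assumption about your SciPy version, roughly 1.5 or later):

```python
import numpy as np
from scipy.linalg import eigh

a = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # small symmetric test matrix

# driver selects the underlying LAPACK routine:
# 'ev' -> dsyev, 'evd' -> dsyevd, 'evr' -> dsyevr, 'evx' -> dsyevx
w, v = eigh(a, driver='evr')
print(w)   # eigenvalues in ascending order: [1. 3.]
```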

Of course, you can avoid DPOTRF and always compute the eigenvalues with DSYEV (or one of the others), then check whether any eigenvalue is negative to assess positive definiteness. But if your matrix is likely to be positive definite, a check with DPOTRF first is much faster than using DSYEV (or the others).
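This "try Cholesky first, fall back to an eigendecomposition" strategy can be sketched in NumPy, whose `cholesky` and `eigh` call LAPACK's `*potrf` and `*syevd` under the hood (the function name here is just illustrative):

```python
import numpy as np

def eigvals_if_not_spd(a):
    """Check positive definiteness via a Cholesky factorization;
    only if the check fails, fall back to the (more expensive)
    full symmetric eigendecomposition."""
    try:
        np.linalg.cholesky(a)       # ~(1/3) n^3 flops; raises if not SPD
        return None                 # matrix is SPD, nothing more to do
    except np.linalg.LinAlgError:
        w, v = np.linalg.eigh(a)    # eigenvalues/eigenvectors, ~O(n^3)
        return w, v

print(eigvals_if_not_spd(np.eye(3)))            # SPD -> None
print(eigvals_if_not_spd(np.diag([-1.0, 2.0]))) # indefinite -> (w, v)
```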

Cheers,

Julien.


- Julien Langou
**Posts:** 740 **Joined:** Thu Dec 09, 2004 12:32 pm **Location:** Denver, CO, USA

Thanks, Julien. I am wondering whether I can reuse the computation performed by DPOTRF and feed its output to DSYEV to speed up the eigenvalue calculation. From the documentation, it seems DSYEV does not support this, but DPTEQR does. Isn't there a routine like DPTEQR for general real symmetric matrices as well? Would the time for DSYEV equal the time for DPOTRF plus the time for the rest? I guess you know what I mean by writing it this way. By the way, can you comment on the relative speed of a Cholesky factorization versus, say, a QR decomposition? Thanks again.

- bsmile
**Posts:** 8 **Joined:** Sat Dec 04, 2010 2:40 pm

1) The use of the Cholesky factorization in the subroutine DPTEQR is not to speed up the computation but to enable high relative accuracy.

Indeed, DPTEQR is for tridiagonal matrices; there is no specialized routine that computes the eigenvalues to high relative accuracy for a general SPD matrix.

2) The Cholesky factorization of an SPD matrix A is of no use (in the LAPACK setting) for accelerating the computation of the eigenvalues of A.

(The statement is slightly less valid when you use iterative methods, where you can use the Cholesky factorization of A as a preconditioner for the iterative method, to find some eigenvalues (the ones close to zero) faster.)
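As a minimal sketch of that idea (a NumPy illustration of inverse power iteration reusing a precomputed Cholesky factor, not a LAPACK routine):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.diag([1.0, 5.0, 10.0])      # SPD test matrix, smallest eigenvalue 1
L = np.linalg.cholesky(a)          # A = L L^T, factored once up front

# inverse power iteration: each step solves A y = x via the two
# triangular solves L z = x and L^T y = z, reusing the factor L
x = rng.standard_normal(3)
for _ in range(50):
    z = np.linalg.solve(L, x)
    y = np.linalg.solve(L.T, z)
    x = y / np.linalg.norm(y)

lam = x @ a @ x                    # Rayleigh quotient -> smallest eigenvalue
print(lam)                         # -> approximately 1.0
```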

3) A Cholesky factorization of an n-by-n matrix A requires (1/3)n^3 flops, while a QR factorization of an n-by-n matrix requires (4/3)n^3 flops, so there is a factor of four there. In practice you should see at least this ratio: the kernels for the Cholesky factorization are faster than those for QR, and Cholesky parallelizes more easily than QR, so in the end Cholesky is often more than four times faster than a QR factorization.
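A quick sanity check of those counts, plus a rough timing comparison (timings will vary with your BLAS and machine; the flop formulas are the ones quoted above):

```python
import time
import numpy as np

# worked flop counts for n = 2000
n = 2000
chol_flops = n**3 / 3            # Cholesky: (1/3) n^3 flops
qr_flops = 4 * n**3 / 3          # Householder QR: (4/3) n^3 flops
print(qr_flops / chol_flops)     # -> 4.0

# rough timing sketch on a well-conditioned SPD matrix
rng = np.random.default_rng(0)
m = rng.standard_normal((n, n))
a = m @ m.T + n * np.eye(n)      # SPD by construction

t0 = time.perf_counter()
np.linalg.cholesky(a)
t1 = time.perf_counter()
np.linalg.qr(a)
t2 = time.perf_counter()
print(f"Cholesky: {t1 - t0:.3f}s, QR: {t2 - t1:.3f}s")
```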

Julien


- Julien Langou
**Posts:** 740 **Joined:** Thu Dec 09, 2004 12:32 pm **Location:** Denver, CO, USA
