About technical details of mixed precision routine DSGESV

Postby jgpallero » Fri Dec 16, 2016 3:00 pm

Hello:

I'm thinking of using the DSGESV routine to solve some linear systems Ax=b, and I have some questions about its accuracy and its implementation in LAPACK. First of all, the big question: can DSGESV attain the same accuracy in x as the full double precision DGESV? As I understand it after reading [1], the answer is yes. But in [2, page 234], [3, pages 108-109] and [4, page 140] it is stated that the maximum attainable relative accuracy for x (in the infinity norm, using mixed precision with unit roundoffs u_1 and u_2, where u_2 = u_1^2) is

Code:
||x-hat{x}||/||x|| ~ u_1


I think my doubt is due to a misunderstanding on my part: in the mixed precision iterative refinement algorithms presented in those books, the original A and b are stored in precision u_1 (although the residual is accumulated in precision u_2), whereas DSGESV stores them in precision u_2 when computing r = b - Ax. Am I right?
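To make sure I understand the data flow, here is a minimal sketch of the scheme in Python (a toy model only: float32 rounding via struct stands in for the single precision arithmetic, and Cramer's rule on a 2x2 system stands in for the SGETRF/SGETRS factorization pair):

```python
import math
import struct

def to_single(x):
    """Round a Python double to IEEE single precision (simulated u_1)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def solve_single(A, b):
    """Solve a 2x2 system entirely in simulated single precision.
    (Cramer's rule stands in here for the SGETRF/SGETRS pair.)"""
    a11, a12 = to_single(A[0][0]), to_single(A[0][1])
    a21, a22 = to_single(A[1][0]), to_single(A[1][1])
    b1, b2 = to_single(b[0]), to_single(b[1])
    det = to_single(to_single(a11 * a22) - to_single(a12 * a21))
    x1 = to_single(to_single(to_single(b1 * a22) - to_single(a12 * b2)) / det)
    x2 = to_single(to_single(to_single(a11 * b2) - to_single(a21 * b1)) / det)
    return [float(x1), float(x2)]

def dsgesv_sketch(A, b, u2=2.0**-53, itermax=30):
    """Mixed precision iterative refinement: the solves are done in
    single precision, the residual r = b - A*x in full double,
    with A and b themselves stored in double (u_2)."""
    n = len(b)
    x = solve_single(A, b)                               # initial solve, u_1
    norm_a = max(sum(abs(a) for a in row) for row in A)  # ||A||_Inf
    for _ in range(itermax):
        # residual in double precision (u_2)
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        normr = max(abs(v) for v in r)
        normx = max(abs(v) for v in x)
        if normr <= normx * norm_a * u2 * math.sqrt(n):  # LAPACK criterion
            break
        d = solve_single(A, r)                           # correction, u_1
        x = [x[i] + d[i] for i in range(n)]              # update, u_2
    return x
```

For A = [[4, 1], [1, 3]] and b = [1, 2] this converges to the exact solution (1/11, 7/11) to double precision accuracy, even though every factorization/solve is done in single.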

Another question concerns the stopping criterion used in DSGESV. I can see in [5] and [6] that the stopping criterion is

Code:
||r||_2 <= ||x||_2*||A||_F*u_2*sqrt(N)


but in the current LAPACK the criterion is

Code:
||r||_Inf <= ||x||_Inf*||A||_Inf*u_2*sqrt(N)


So my question is: what is the difference between the criterion based on the 2-norm and the Frobenius norm and the one based on the infinity norm?

I've read [5], but I can't find the derivation of this stopping criterion there. Is there any document where I could find it?
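For what it's worth, this is how I evaluate the two criteria side by side (a toy Python example; the approximate solution and residual values are made up just to exercise the formulas):

```python
import math

A = [[4.0, 1.0], [1.0, 3.0]]
x = [0.0909, 0.6364]       # an approximate solution (made-up values)
r = [1e-16, -3e-17]        # a hypothetical residual
n = len(x)
u2 = 2.0**-53              # double precision unit roundoff

norm2 = lambda v: math.sqrt(sum(t * t for t in v))
norminf = lambda v: max(abs(t) for t in v)
normA_F = math.sqrt(sum(a * a for row in A for a in row))  # ||A||_F
normA_inf = max(sum(abs(a) for a in row) for row in A)     # ||A||_Inf

# criterion from [5] and [6]
crit_2F = norm2(r) <= norm2(x) * normA_F * u2 * math.sqrt(n)
# criterion in the current LAPACK DSGESV
crit_inf = norminf(r) <= norminf(x) * normA_inf * u2 * math.sqrt(n)
```

Since the norms involved are equivalent up to dimension-dependent constants (for instance ||A||_2 <= ||A||_F and ||A||_Inf <= sqrt(N)*||A||_2), I would expect the two thresholds to differ only by modest factors of N rather than in kind.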

In [3, page 99] it is shown that, for the solution of Ax=b via LU with partial pivoting, provided the growth factor is not large, it holds that

Code:
||r||_Inf/(||x||_Inf*||A||_Inf) <= N*u,


which is similar to the DSGESV stopping criterion, but with N instead of sqrt(N). Is there any objective reason for this, or am I mixing concepts?

I can also see in [2, page 104, exercise 7.2] that the forward error is bounded by (for any subordinate norm)

Code:
||r||/(||A||*||x||) <= ||x-hat{x}||/||x|| <= k(A)*||r||/(||A||*||x||),


so an upper bound for ||x-hat{x}||/||x|| is k(A)*||r||/(||A||*||x||), and using the relation ||r||_Inf/(||x||_Inf*||A||_Inf) <= N*u I could write

Code:
||x-hat{x}||/||x|| <= N*u_2*k_Inf(A)


or

Code:
||x-hat{x}||/||x|| <= sqrt(N)*u_2*k_Inf(A)


if I use the stopping criterion from LAPACK. Is this upper bound for the forward error of DSGESV in u_2 right?
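As a small sanity check of this bound, one can compute k_Inf(A) explicitly for a 2x2 matrix (a Python sketch; the inverse is written out by hand):

```python
import math

A = [[4.0, 1.0], [1.0, 3.0]]
n = 2
u2 = 2.0**-53

# explicit 2x2 inverse, to get k_Inf(A) = ||A||_Inf * ||A^-1||_Inf
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]
norminf = lambda M: max(sum(abs(a) for a in row) for row in M)
kA = norminf(A) * norminf(Ainv)

# once the LAPACK stopping criterion holds, the proposed bound reads
bound = math.sqrt(n) * u2 * kA    # ||x-hat{x}||_Inf/||x||_Inf <= bound
```

For this well-conditioned matrix k_Inf(A) = 25/11, so the bound is a small multiple of u_2, consistent with double precision accuracy at convergence.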

References

[1] A. Buttari, J. Dongarra, J. Langou, P. Luszczek and J. Kurzak (2007), Mixed precision iterative refinement techniques for the solution of dense linear systems
[2] N.J. Higham (2002), Accuracy and stability of numerical algorithms, 2nd ed., SIAM
[3] A. Björck (2015), Numerical methods in matrix computations, Springer
[4] G.H. Golub and C.F. Van Loan (2013), Matrix computations, 4th ed., Johns Hopkins
[5] J. Langou, J. Langou, P. Luszczek, J. Kurzak, A. Buttari and J. Dongarra (2006), LAWN 175: Exploiting the performance of 32 bit floating point arithmetic in obtaining 64 bit accuracy (revisiting iterative refinement for linear systems)
[6] http://icl.cs.utk.edu/iter-ref/custom/i ... 6&slid=207

Thank you
jgpallero
 
Posts: 24
Joined: Thu Jul 29, 2010 2:29 pm
Location: Madrid, Spain

Re: About technical details of mixed precision routine DSGES

Postby Julien Langou » Fri Dec 16, 2016 4:52 pm

Some quick answers.

First of all, the big question: can DSGESV attain the same accuracy in x as the full double precision DGESV?


There are two questions: (1) whether iterative refinement converges, (2) what is the limiting accuracy of iterative refinement.

(1) is a real problem. If the matrix is too ill conditioned (condition number around 1e10, say), iterative refinement will not converge. In this case we fall back to double precision, and so we get the accuracy of double precision. (But the time is then higher than a plain double precision solve; about 1.5x higher.)

(2) The limiting accuracy of iterative refinement is in general better than the accuracy of a double precision solve. So, in other words, if you converge, then you converge to a very nice final accuracy. You just need to stop there.

Another question concerns the stopping criterion used in DSGESV. I can see in [5] and [6] that the stopping criterion is
||r||_2 <= ||x||_2*||A||_F*u_2*sqrt(N)
but in the current LAPACK the criterion is
||r||_Inf <= ||x||_Inf*||A||_Inf*u_2*sqrt(N)
So my question is: what is the difference between the criterion based on the 2-norm and the Frobenius norm and the one based on the infinity norm?
I've read [5], but I can't find the derivation of this stopping criterion there. Is there any document where I could find it?


We picked the infinity norm because it seemed more standard at the time. This is somewhat arbitrary; the Frobenius norm would have been OK too.
I feel good with the infinity norm.

In [3, page 99] it is shown that, for the solution of Ax=b via LU with partial pivoting, provided the growth factor is not large, it holds that
||r||_Inf/(||x||_Inf*||A||_Inf) <= N*u,
which is similar to the DSGESV stopping criterion, but with N instead of sqrt(N). Is there any objective reason for this, or am I mixing concepts?


Good catch. The idea is that N*u is a very pessimistic bound: it assumes that all round-off errors are working against you in the same direction, which is pretty much impossible. A better bound, which assumes that round-off errors are somewhat random, would use SQRT(N). So we were bold and used SQRT(N). This gives better precision. If you used N in the formula, you would stop the iteration too early and you would find that DGESV is more accurate than DSGESV. If you use SQRT(N), you are at about the level of DGESV, which is the goal.
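This random-accumulation argument can be illustrated with a quick toy simulation (a random walk of unit-size errors, not an actual rounding-error analysis; the numbers are arbitrary):

```python
import math
import random

random.seed(1)
n = 10000          # number of accumulated round-off errors
u = 1.0            # error size, normalized to 1 for illustration

# worst case: all n errors aligned in one direction -> magnitude n*u
worst = n * u

# random model: independent errors of size u with random signs
# (a random walk); the expected magnitude of the sum grows like sqrt(n)*u
trials = 200
avg = sum(abs(sum(random.choice((-u, u)) for _ in range(n)))
          for _ in range(trials)) / trials
```

The averaged magnitude comes out on the order of sqrt(n)*u, roughly a hundred times below the worst-case n*u for n = 10000, which is the rationale for the SQRT(N) in the stopping criterion.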

Julien.
Julien Langou
 
Posts: 820
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

Re: About technical details of mixed precision routine DSGES

Postby jgpallero » Fri Dec 23, 2016 9:55 am

Thank you very much for your answer

Julien Langou wrote:Some quick answers.

First of all, the big question: can DSGESV attain the same accuracy in x as the full double precision DGESV?


There are two questions: (1) whether iterative refinement converges, (2) what is the limiting accuracy of iterative refinement.

(1) is a real problem. If the matrix is too ill conditioned (condition number around 1e10, say), iterative refinement will not converge. In this case we fall back to double precision, and so we get the accuracy of double precision. (But the time is then higher than a plain double precision solve; about 1.5x higher.)


Would it not be possible to use one of the LAPACK condition estimators, which are O(n^2), to check whether the matrix A is too ill conditioned to be factorized in single precision, and in that case use the double precision algorithm directly?

In [3, page 99] it is shown that, for the solution of Ax=b via LU with partial pivoting, provided the growth factor is not large, it holds that
||r||_Inf/(||x||_Inf*||A||_Inf) <= N*u,
which is similar to the DSGESV stopping criterion, but with N instead of sqrt(N). Is there any objective reason for this, or am I mixing concepts?


Good catch. The idea is that N*u is a very pessimistic bound: it assumes that all round-off errors are working against you in the same direction, which is pretty much impossible. A better bound, which assumes that round-off errors are somewhat random, would use SQRT(N). So we were bold and used SQRT(N). This gives better precision. If you used N in the formula, you would stop the iteration too early and you would find that DGESV is more accurate than DSGESV. If you use SQRT(N), you are at about the level of DGESV, which is the goal.

Julien.


OK, I understand this as a direct application of
A well-known rule of thumb is that a more realistic error estimate for a numerical method is obtained by replacing the dimension-dependent constants in a rounding error bound by their square root; thus if the bound is f(n)*u, the rule of thumb says that the error is typically of order sqrt(f(n))*u

which is a rule of thumb stated in Higham's Accuracy and stability of numerical algorithms, 2nd ed., page 48, where he refers to

J.H. Wilkinson, (1963), Rounding errors in algebraic processes, page 26

Thanks
jgpallero

Re: About technical details of mixed precision routine DSGES

Postby Julien Langou » Fri Dec 23, 2016 10:05 am

Would it not be possible to use one of the LAPACK condition estimators, which are O(n^2), to check whether the matrix A is too ill conditioned to be factorized in single precision, and in that case use the double precision algorithm directly?


The LAPACK condition estimators are O(n^2), but we can use them only once the factorization (LU, Cholesky, etc., whichever) is done.

The factorization is O(n^3).

So yes, xPOCON and xGECON are O(n^2), but they rely on a previously computed O(n^3) factorization.

In any case, I thought there was no point in doing condition estimation here. Condition estimation is O(n^2). Iterative refinement is O(n^2). So I thought that the best way to know whether iterative refinement works was to... run iterative refinement and see if it succeeds or fails. To some extent, we can consider iterative refinement as a kind of condition estimation: if iterative refinement works, we are likely to have a well-conditioned enough matrix; if iterative refinement fails, we know the matrix was ill conditioned.

Please feel free to comment.

Julien.
Last edited by Julien Langou on Fri Dec 23, 2016 10:17 am, edited 1 time in total.
Julien Langou

Re: About technical details of mixed precision routine DSGES

Postby Julien Langou » Fri Dec 23, 2016 10:13 am

OK, I understand this as a direct application of "A well-known rule of thumb is that a more realistic error estimate for a numerical method is obtained by replacing the dimension-dependent constants in a rounding error bound by their square root; thus if the bound is f(n)*u, the rule of thumb says that the error is typically of order sqrt(f(n))*u", which is a rule of thumb stated in Higham's Accuracy and stability of numerical algorithms, 2nd ed., page 48, where he refers to J.H. Wilkinson (1963), Rounding errors in algebraic processes, page 26.


Exactly. This is it. Julien.
Julien Langou

Re: About technical details of mixed precision routine DSGES

Postby jgpallero » Fri Dec 23, 2016 10:15 am

Thank you again for your fast and clear answers.

Cheers
jgpallero

