DPOTRS in extended precision?


DPOTRS in extended precision?

Postby jgpallero » Thu Jun 20, 2019 5:35 pm

Hello:

I must deal with a symmetric system of equations, A*x=B, with a very badly conditioned matrix (cond ~ 1.0e21). Solving the problem with double precision routines (using GNU Octave, which calls LAPACK) I obtain a solution, but as the forward error bound (if I remember correctly) is FE <= cond(A)*epsilon, and epsilon is 2.22e-16, the forward error is bounded by roughly 2e5, which I find very large.

Then, I would like to check the solution of the problem using quadruple precision (128 bits), as the corresponding machine epsilon is ~2e-34. I know there is the XBLAS implementation of BLAS in extended precision, but what about LAPACK? Is there a LAPACK implementation in extended precision? I would like to use the DPOTRS subroutine in extended precision in order to solve my problem. Is it possible?

Thanks
jgpallero
 
Posts: 27
Joined: Thu Jul 29, 2010 2:29 pm
Location: Madrid, Spain

Re: DPOTRS in extended precision?

Postby Julien Langou » Thu Jun 20, 2019 5:58 pm

Is there a LAPACK implementation in extended precision?


1) There are some implementations out there; if you Google you will find a few. LAPACK itself does not support extended precision.

2) For Cholesky you can almost ``do it yourself``. Cholesky is pretty easy, and sometimes a compiler flag can change the precision at compile time.
You can have a look at DPOTF2+DPOTRS (or even better DPOTRF2+DPOTRS) and try to use a multi-precision package below it. It should work; this is not ``too`` hard. (A minimal sketch is included after point 4 below.)

3) You can look at https://github.com/langou/latl
It uses C++ templates, so if you know C++ you can use multiprecision with it. It has been a while since I worked on it, but, at the time, it was working nicely.

4) One of the main problems is that you will not have an optimized BLAS library, so the computation will be slow. If the matrices are small, this is fine; if they are large, it will take a long time.
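
As mentioned in point 2, here is a minimal ``do it yourself`` sketch of an unblocked Cholesky factorization and solve in quadruple precision, using GCC's __float128 and libquadmath. The names potrf_q and potrs_q are invented for this example, and the code is unblocked, so it is only reasonable for small matrices (compile with -lquadmath):

Code:
#include <quadmath.h>

/* Unblocked Cholesky A = L*L^T, lower triangle, column-major.
   Returns 0 on success, j+1 if the leading minor of order j+1 is
   not positive definite (same convention as DPOTRF's INFO). */
int potrf_q(int n, __float128 *A, int lda)
{
    for (int j = 0; j < n; j++) {
        __float128 d = A[j + j*lda];
        for (int k = 0; k < j; k++)
            d -= A[j + k*lda] * A[j + k*lda];   /* d = a_jj - sum l_jk^2 */
        if (d <= 0.0Q) return j + 1;
        d = sqrtq(d);
        A[j + j*lda] = d;
        for (int i = j + 1; i < n; i++) {       /* rest of column j of L */
            __float128 s = A[i + j*lda];
            for (int k = 0; k < j; k++)
                s -= A[i + k*lda] * A[j + k*lda];
            A[i + j*lda] = s / d;
        }
    }
    return 0;
}

/* Solve A*x = b from the factor L; b is overwritten with x. */
void potrs_q(int n, const __float128 *A, int lda, __float128 *b)
{
    for (int i = 0; i < n; i++) {               /* forward:  L*y   = b */
        __float128 s = b[i];
        for (int k = 0; k < i; k++) s -= A[i + k*lda] * b[k];
        b[i] = s / A[i + i*lda];
    }
    for (int i = n - 1; i >= 0; i--) {          /* backward: L^T*x = y */
        __float128 s = b[i];
        for (int k = i + 1; k < n; k++) s -= A[k + i*lda] * b[k];
        b[i] = s / A[i + i*lda];
    }
}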

I know there is the XBLAS implementation of BLAS in extended precision.


XBLAS uses extra precision internally: the idea is to use quadruple precision inside a routine to do accurate computations, and then return the result in double precision. What I am saying is that using XBLAS below Cholesky will not help in your case. (I am not suggesting that this is what you are suggesting.) We would need to write a fully multi-precision LAPACK.

I must deal with a symmetric system of equations, A*x=B, with a very badly conditioned matrix (cond ~ 1.0e21). Solving the problem with double precision routines (using GNU Octave, which calls LAPACK) I obtain a solution, but as the forward error bound (if I remember correctly) is FE <= cond(A)*epsilon, and epsilon is 2.22e-16, the forward error is bounded by roughly 2e5.


This is correct. If the ill-conditioning comes from ``scaling``, maybe you can do something about it. Using DPOSVX or DPOSVXX could help.

I would like to use the DPOTRS subroutine in extended precision in order to solve my problem.


Please note that it is not only the solve (TRS) that needs to be in extended precision: you also want the factorization (TRF) in extended precision. Unless you use iterative refinement, which would be a good idea: you factor in double, then use iterative refinement in quadruple to improve the quality of the solution. (This is the idea behind DPOSVXX.)
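
A minimal sketch of that refinement loop, assuming A has already been factored in double precision with DPOTRF (only the residual is accumulated in __float128; the dpotrs_ prototype below omits gfortran's hidden string-length argument, which usually works for single-character flags but is compiler-dependent):

Code:
#include <stdlib.h>

extern void dpotrs_(const char *uplo, const int *n, const int *nrhs,
                    const double *a, const int *lda,
                    double *b, const int *ldb, int *info);

/* Refine x for A*x = b, with Af holding the Cholesky factor of A
   (from DPOTRF, UPLO='L'); A is full, column-major, lda = n.
   The residual r = b - A*x is accumulated in quadruple precision,
   rounded back to double, and used to correct x. */
void refine(int n, const double *A, const double *Af,
            double *x, const double *b, int iters)
{
    const int one = 1;
    int info;
    double *r = malloc(n * sizeof *r);
    for (int it = 0; it < iters; it++) {
        for (int i = 0; i < n; i++) {
            __float128 s = b[i];                /* quad-precision residual */
            for (int j = 0; j < n; j++)
                s -= (__float128)A[i + j*n] * (__float128)x[j];
            r[i] = (double)s;
        }
        dpotrs_("L", &n, &one, Af, &n, r, &n, &info);  /* solve A*d = r */
        for (int i = 0; i < n; i++)
            x[i] += r[i];                       /* x = x + d */
    }
    free(r);
}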

I must deal with a symmetric system of equations, A*x=B, with a very badly conditioned matrix (cond ~ 1.0e21).


Please note that what often happens, unfortunately, is that if the condition number of A computed in double precision comes out as 1e16, then the condition number computed in quadruple precision comes out as 1e34. In other words, the condition number of A is really "infinite", and increasing the precision does not help: the more precision, the more ``you see`` the ill-conditioning. The ill-conditioning is intrinsic to the problem. Let us hope that this is not the case for your problems.

Cheers,
Julien.
Julien Langou
 
Posts: 834
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

Re: DPOTRS in extended precision?

Postby jgpallero » Fri Jun 21, 2019 6:28 am

Thank you for your answer

2) For Cholesky you can almost ``do it yourself``. Cholesky is pretty easy, and sometimes a compiler flag can change the precision at compile time.


Gfortran has the flag -fdefault-real-16 in order to convert DOUBLE PRECISION variables into quadruple precision. So the question is: is it safe to compile the WHOLE LAPACK and BLAS implementation with this flag? The reason for doing this is that I want to use not only the DPOTRF and DPOTRS subroutines, but also the DPOCON subroutine, in order to check the condition number and look for the behavior you described in this paragraph:
Please note that what often happens, unfortunately, is that if the condition number of A computed in double precision comes out as 1e16, then the condition number computed in quadruple precision comes out as 1e34. In other words, the condition number of A is really "infinite", and increasing the precision does not help: the more precision, the more ``you see`` the ill-conditioning. The ill-conditioning is intrinsic to the problem. Let us hope that this is not the case for your problems.


I must call the compiled subroutines from C, so I must also write the corresponding headers for the Fortran calls. GCC has the __float128 data type, which is 128 bits long, but is this data type compatible with the result of applying the -fdefault-real-16 flag when compiling with Gfortran?
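
For what it is worth, on x86-64 gfortran's 16-byte REAL uses the same IEEE binary128 format as GCC's __float128, so a C prototype along the following lines should match. This is platform- and compiler-dependent, though, so please verify it with a small test program (the trailing size_t is gfortran's hidden length argument for the CHARACTER dummy):

Code:
#include <stddef.h>

/* Hypothetical C prototype for DPOTRF built with -fdefault-real-16:
   every DOUBLE PRECISION argument becomes a 16-byte real, which on
   x86-64 gfortran has the same ABI as GCC's __float128. */
extern void dpotrf_(const char *uplo, const int *n,
                    __float128 *a, const int *lda,
                    int *info, size_t uplo_len);

/* example call: dpotrf_("L", &n, a, &n, &info, 1); */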

I'll also try DPOSVXX, but I would also like to compare with quadruple precision.

Thanks

POST SCRIPTUM (EDIT)

On the other hand, I've tried applying a scaling to my symmetric matrix of the form B = diag(1.0/sqrt(diag(A))), so the problem is now B'*A*B*x1 = B'*b, with x = B*x1. Now the condition number of the matrix drops drastically, from ~1e21 to ~1e6. Could this mean that the ill-conditioning of my matrix comes from bad scaling and is not intrinsic to the underlying physical problem?
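
In code, this scaling step is just a symmetric Jacobi equilibration; a sketch (S here is the diagonal matrix I called B above, stored as the vector s):

Code:
#include <math.h>

/* Overwrite A (n x n, column-major) with S*A*S, where
   S = diag(1/sqrt(a_ii)); s must have room for n entries.
   After solving the scaled system for x1, recover the original
   solution with x[i] = s[i]*x1[i]. */
void jacobi_scale(int n, double *A, int lda, double *s)
{
    for (int i = 0; i < n; i++)
        s[i] = 1.0 / sqrt(A[i + i*lda]);
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++)
            A[i + j*lda] *= s[i] * s[j];
}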

I solve the problem with this new matrix and then apply the change x = B*x1 to obtain the solution vector of the original problem. Now, if I compare the solution obtained with the unscaled system, which I call x_orig, with the one obtained with the scaled system, which I call x_scaled, I see very small differences. Could the metric norm(x_orig-x_scaled)/norm(x_scaled) be an approximation to the forward error? In my particular problem I obtain in this way a value of ~1e-11, which is below the theoretical bound norm(x-x_true)/norm(x_true) <= cond(A)*epsilon, which is about ~1e-9.

If I consider x_scaled to be the true solution, does this mean that the unscaled solution is good, even though for the original problem the forward error bound was of the order of ~1e6?

Is there any mistake in my reasoning?
jgpallero
 
Posts: 27
Joined: Thu Jul 29, 2010 2:29 pm
Location: Madrid, Spain

Re: DPOTRS in extended precision?

Postby jgpallero » Sun Jun 23, 2019 5:43 pm

Hello again:

Finally, I've implemented three different solutions:

1. Solution in double precision, both with the system matrix normalized with S=diag(1/sqrt(diag(A))) and without normalization
2. Solution in quadruple precision, without normalization, compiling the corresponding LAPACK subroutines with Gfortran's flag -fdefault-real-16
3. Solution in double precision with DPOSVX

The condition number (based on the L1 norm) of the unnormalized matrix is ~1e21, and for the normalized version it is ~1e7. The 1e21 value bounds the forward error as FE <= epsilon_double*cond = 3e6, but using the condition number of the normalized matrix the forward error (L1 norm) is bounded by FE <= 4e-9. Is this error bound computation for the unnormalized solution, using the normalized matrix condition number, correct? In any case, the solutions with and without normalization are almost the same: the relative forward error, taking the solution obtained with the normalized system as the true one, is (L1 norm) ||x-x_true||/||x_true|| = 1.5e-10.

On the other hand, using the DPOCON subroutine compiled for quadruple precision, I've checked that the condition number of the matrix is indeed ~1e21, in order to rule out the possible problem you pointed out in your previous message. In this case, as the epsilon for quadruple precision is 1.9e-34, the forward error is bounded by FE <= 2e-12, so the solution can be considered reliable. But the solution in quadruple precision is almost the same as the ones obtained in double precision: taking the quadruple result as the true one, the relative error is ||x-x_true||/||x_true|| = 7e-11.
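
For reference, the call sequence I used for the condition estimate, shown here in its plain double-precision form (the quadruple build is the same code with __float128 in place of double; anorm must be the 1-norm of the original A, computed before the factorization, e.g. with DLANSY):

Code:
#include <stdlib.h>

extern void dpotrf_(const char *uplo, const int *n, double *a,
                    const int *lda, int *info);
extern void dpocon_(const char *uplo, const int *n, const double *a,
                    const int *lda, const double *anorm, double *rcond,
                    double *work, int *iwork, int *info);

/* Estimate cond_1(A) for a symmetric positive definite A:
   factor with DPOTRF, then estimate 1/cond with DPOCON.
   A is overwritten with its Cholesky factor. */
double cond_estimate(int n, double *A, double anorm)
{
    double rcond;
    int info;
    double *work = malloc(3 * n * sizeof *work);   /* DPOCON needs 3*N */
    int *iwork = malloc(n * sizeof *iwork);        /* and N integers   */
    dpotrf_("L", &n, A, &n, &info);
    dpocon_("L", &n, A, &n, &anorm, &rcond, work, iwork, &info);
    free(work); free(iwork);
    return 1.0 / rcond;                            /* cond_1(A) ~ 1/rcond */
}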

But using the DPOSVX subroutine it is mandatory to normalize the matrix, because with the unnormalized problem the matrix is detected as singular. Solving the normalized problem, the forward error estimate from DPOSVX is ~1.9e2, but, again taking the quadruple solution as the true one, I obtain a relative error of ||x-x_true||/||x_true|| = 2e-10.

So, finally, I'm a bit confused, because solving the problem in double precision without normalization (condition number 1e21) I obtain a correct result, fully compatible with the ones obtained in quadruple precision and with DPOSVX.

Thanks
jgpallero
 
Posts: 27
Joined: Thu Jul 29, 2010 2:29 pm
Location: Madrid, Spain

