LAPACK Archives

[Lapack] [Dongarra-local] similar lapack function

Thanks to Mark and Julien for these clarifications.
Actually, I was comparing the Matlab QR, and a QR that I implemented based on 
DLARFG, 
with the LAPACK one, and discovered these +/- sign differences. Then I checked 
LAPACK's DGEQRF and saw that it was using DLARFP.
Thanks
Azzam
 



On Nov 9, 2011, at 11:20 AM, Langou, Julien wrote:


On Nov 8, 2011, at 9:14 AM, Mark Gates wrote:


Does anyone know what is the difference between DLARFG and DLARFP

See 
      James W. Demmel, Mark Hoemmen, Yozo Hida, and E. Jason Riedy. 
Nonnegative Diagonals and High Performance on Low-Profile Matrices from 
Householder QR. SIAM J. Sci. Comput. 31, pp. 2832-2841 (10 pages).
      BUG :: Revert DLARFG to 3.1.0, move DLARFG to DLARFP and improved DLARFP

for the whole story. Here is the short story. You can print it and read it 
before bed time, it's a nice story. Azzam, please use DLARFG by default.

DLARFG is the "standard" one, older than LAPACK. Given a vector "a", it 
constructs the Householder reflector "v" such that "a" is mapped onto 
"e1" (up to a scaling: since the transformation is unitary, the scaling is 
+/- || a || in the real case, or e^(i.theta) || a || in the complex case).
Below, each time I write +/- or speak about a sign, I assume real arithmetic; 
please convert to e^(i.theta) for the complex case.
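To make the sign convention concrete, here is a minimal numpy sketch of a DLARFG-style reflector (the function name and structure are mine for illustration, not LAPACK's code):

```python
import numpy as np

def householder(a):
    """Build v and tau so that H = I - tau * v v^T maps a onto beta * e1.

    Sketch of the DLARFG-style convention: beta = -sign(a[0]) * ||a||,
    so v[0] = a[0] - beta never suffers cancellation. Illustrative only.
    """
    a = np.asarray(a, dtype=float)
    beta = -np.copysign(np.linalg.norm(a), a[0])
    v = a.copy()
    v[0] -= beta          # a[0] and -beta have the same sign: no cancellation
    tau = 2.0 / (v @ v)
    return v, tau, beta
```

Applying H to a gives beta * e1, and the sign of beta is opposite to that of a[0] -- exactly the +/- difference Azzam observed relative to a DLARFP-based factorization.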

DLARFP is new (SC08 -- LAPACK-3-2-0).
DLARFP guarantees that the diagonal entries of the R matrix are nonnegative ( 
>= 0 ). 
( DLARFG guarantees nothing, you can have a plus or a minus. Depends. ) 
DLARFP is a contribution from Berkeley. 

DLARFP was requested by Alan Edelman. Reason: this is how you should 
construct "true" random orthogonal matrices:
take a random A, do a QR factorization, get the Q. That's a random orthogonal 
matrix. However, to do this "right", you need
the diagonal entries of R to be positive. (I skip the details 
... )
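The recipe can be sketched in a few lines of numpy (an illustrative sketch of Edelman's point, not LAPACK code; the sign fix at the end plays the role of DLARFP's nonnegative diagonal):

```python
import numpy as np

def random_orthogonal(n, seed=None):
    """Haar-distributed random orthogonal matrix via QR of a Gaussian matrix.

    numpy's qr (Householder-based, DLARFG-style signs) leaves the signs of
    diag(R) uncontrolled, so we flip columns of Q to force diag(R) >= 0.
    Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    signs = np.where(np.diag(R) < 0.0, -1.0, 1.0)
    return Q * signs      # flip each column of Q to match diag(R) >= 0
```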

DLARFP is also convenient since it guarantees uniqueness of the R factor. 
(Otherwise you have +/- signs on the diagonal
that you cannot control.) So with DLARFG you would typically get a different R 
depending on the reduction tree,
whereas with DLARFP you get the same R for all reduction trees. (The one 
with the nonnegative diagonal.)
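This uniqueness is easy to check numerically: any two QR factorizations of the same nonsingular matrix agree once diag(R) is forced nonnegative. A small sketch of my own, comparing numpy's Householder QR against classical Gram-Schmidt (which yields a nonnegative diagonal by construction):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR; diag(R) >= 0 by construction.
    Illustrative only -- classical GS is numerically weak."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v = v - R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)   # a norm, hence nonnegative
        Q[:, j] = v / R[j, j]
    return Q, R
```

After flipping the sign of each row of the Householder R whose diagonal entry is negative, the two R factors coincide: once diag(R) >= 0 is imposed, R is unique.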

DLARFP also enables the binary operation ( R ) <- 
rfactor_of_the_qr_factorization( R1, R2 ) to be associative and commutative. 

Does anyone know what is the difference between DLARFG and DLARFP
which should do the same job thus should be the same function.

You are correct, the interfaces are 100% compatible. 
( So for LAPACK-3-2-0 we initially changed all calls to LARFG to calls 
to LARFP. We kept LARFG for backward compatibility. )

It appears LARFP was renamed LARFGP in a recent lapack release?

Correct. 

They look like they are the same function with very slight differences, 
probably related to different versions of LAPACK.
If yes, why don't the LAPACK people redirect the bad function to the good one?

The answer is that we discovered that there is no good one and no bad one, so 
we have both.
More specifically, there is a nasty numerical problem in LARFGP.
We discovered it quite late. This was reported by MathWorks. (And they were not 
happy, not happy at all.) I even had a student breaking Matlab
in my classroom and finding this bug while coding the QR algorithm for 
nonsymmetric matrices ... The bug is subtle. It is a denormalized-numbers
problem: we lose relative accuracy when denormalized numbers 
appear. (Were there no denorms, the code would work just fine.)
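The precision loss near the subnormal range is easy to see in isolation (a generic illustration of the denormal problem, not the actual LARFGP bug):

```python
import numpy as np

# Below ~2.2e-308 (the smallest normal double), the spacing between
# representable numbers stops shrinking with the magnitude, so the
# *relative* precision of a subnormal degrades as it gets smaller.
x_normal = 1e-300                  # a normal double
x_sub = 1e-310                     # a subnormal (denormalized) double

rel_gap_normal = np.spacing(x_normal) / x_normal   # ~2e-16: full precision
rel_gap_sub = np.spacing(x_sub) / x_sub            # ~5e-14: digits lost
```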

We fixed the bug. Took us a few months ... Finally Jim and Guillaume Revy 
ran an extensive test suite: the fixed DLARFP is OK, but it 
might still lose a digit of accuracy with respect to DLARFG. No idea why.

So we decided to roll LAPACK back to DLARFG for all routines and created a 
DGEQRFP which calls DLARFP.
(Just for Alan Edelman ;) ).

LARFG is used 3x more than LARFGP in lapack.

Maybe, but that's super negligible. The cost is O(n) in any case, and it is 
worth doing a good job in this kind of routine since
this is where all the subtleties are. 

I would think LARFG would be more accurate since it can avoid cancellation 
in cases where alpha is positive.

Sure: if you are not careful, there is a very nasty cancellation. 
This is well known; however, there is a so-called trick, Paige's trick (which 
is explained in Golub and Van Loan, for example),
to avoid this cancellation. The initial LARFP relies on this trick. Otherwise 
it would be disastrous.
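Here is a sketch of the trick in the scalar case (my own illustrative code; I write alpha for a[0] and tail for the norm of the rest of the vector). Instead of computing alpha - ||a|| directly, which cancels when alpha > 0, one rewrites alpha - ||a|| = (alpha^2 - ||a||^2) / (alpha + ||a||) = -tail^2 / (alpha + ||a||):

```python
import math

def v0_naive(alpha, tail):
    """a[0] - ||a|| computed directly: cancels badly when alpha > 0
    and tail is tiny compared to alpha."""
    beta = math.hypot(alpha, tail)       # ||a|| when a = (alpha, tail part)
    return alpha - beta

def v0_paige(alpha, tail):
    """Paige's trick: the same quantity rewritten with no subtraction of
    nearby numbers, so there is no cancellation when alpha > 0."""
    beta = math.hypot(alpha, tail)
    return -tail * tail / (alpha + beta)
```

With alpha = 1.0 and tail = 1e-9, the true value is about -5e-19; the direct formula loses essentially every correct digit (in double precision ||a|| rounds to 1.0), while the rewritten formula is accurate to full precision.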

Best wishes,
Julien.




----------------------------------------------------------------------------------------------------
Azzam Haidar; Innovative Computing Laboratory; 
Electrical Engineering and Computer Science Department;
1122 Volunteer Blvd; University of Tennessee; Knoxville TN 37996-3450;
+1-865-974-9308; haidar@Domain.Removed; http://www.cs.utk.edu/~haidar/
----------------------------------------------------------------------------------------------------






