LAPACK and ScaLAPACK Survey Results - ordered by question
Question #16. What information in the LAPACK guide is hard to find or is missing, if any?
| Responses |
| More examples would be nice (more examples for ScaLAPACK would be nice as well) |
| N/A |
| Information about sparse matrices could be better documented. |
| Performance information is difficult to find. For instance, it took me a while to confirm my observation that Cholesky factorization using packed symmetric format was much slower than Cholesky factorization using full dense format. |
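The performance gap this respondent observed is commonly attributed to storage format: packed storage essentially forces an unblocked, scalar-loop algorithm, while full storage lets LAPACK block the factorization so most of the work lands in Level-3 BLAS matrix-matrix calls. As an illustration (a minimal pure-Python sketch, not LAPACK's API), here is the unblocked loop structure that packed routines such as DPPTRF are largely limited to:

```python
import math

def cholesky_unblocked(a):
    """Unblocked lower-triangular Cholesky of a symmetric positive
    definite matrix `a` (list of lists). Returns L with a = L * L^T.

    This scalar-loop form is roughly what packed storage (DPPTRF)
    permits; with full storage (DPOTRF), LAPACK can instead process
    the matrix in blocks so most flops go through Level-3 BLAS,
    which is the usual explanation for packed being much slower."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: subtract squares of the already-computed row.
        d = a[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(d)
        # Column below the diagonal.
        for i in range(j + 1, n):
            s = a[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L
```

For example, `cholesky_unblocked([[4.0, 2.0], [2.0, 3.0]])` gives L with L[0][0] = 2, L[1][0] = 1, L[1][1] = sqrt(2).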
| Information is not missing; however, it can be difficult to find on netlib, and more examples are needed. Sometimes the documentation is difficult to understand (you can't assume that even the most hard-core of us know the nomenclature). |
| It is difficult to find the names of the high-level functions. |
| I would like to see a list of NAG routines which have corresponding LAPACK routines. |
| It's so hard to read; I would like to see examples. |
| Sometimes hard to find the right function name for a given computation. |
| It is ok. |
| Syntax of routines; meanings and order of parameters |
| Not everybody nowadays knows what LDA is - this scares off some users of LAPACK, especially younger ones. |
| The information is relevant and useful |
| algorithmic aspects and details |
| a precise description of the algorithm |
| It's not clear from the section on storage schemes how non-square packed storage works (the examples are square only). One has to figure out how it might work, e.g. for non-square packed band storage.
The use of m, n, k, etc. in the few topmost SVD routines is slightly confusing. It's not immediately clear how to implement it to produce a "non-full-span" U (for U*S*V^T = A, say) so as to be most efficient when solving least-squares with m >> n.
In the absence of information in the docs about whether the QR implementations (with or without column pivoting) are rank-revealing, it becomes prudent to always do an SVD for least-squares problems which might be rank-deficient. It might be useful if the docs said more on this. But that's asking for more mathematical education in the docs, which I realize is a big request.
Links in the LUG on netlib from the instances of LAPACK function names to their nearby source locations on netlib would be very useful, since the specification and calling-sequence information is in comments in the individual routines' source files.
NB: The LUG section "Specifications of Routines" (lug/node149.html), as it appears on netlib at least, is empty except for a brief note. Thus the individual routines' source comments appear to be the specs.
Details of the encoding of the results of the Bunch-Kaufman-Parlett decomposition of symmetric indefinite matrices seem to be missing from the guide. This makes it quite difficult to extract and form the individual factors computed by, say, dsytrf. I realize that this request is inefficient, unwise, and unnecessary when solving systems, etc., but sometimes users just really want to get their hands on the explicit matrix factors and are willing to lose the efficient encoding which dsycon, dsytrs, etc. understand. Using the details as supplied in the comments in, say, dsytrf's source is involved.
The approximation of condition numbers (Higham's modification of Hager's method) can be inaccurate. The documentation isn't very clear on how inaccurate it might be. |
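As background for the packed-storage comments above, the square case that the guide does document can be written down explicitly. This is a sketch of the 0-based, column-major, upper-triangular ('U') packing rule only; it does not cover the non-square band case the respondent asks about:

```python
def pack_upper(a):
    """Pack the upper triangle of a square matrix `a` (list of lists)
    into a flat list AP using LAPACK's 'U' convention: columns are
    stored one after another, so entry A[i][j] (with i <= j) lands
    at AP[i + j*(j+1)//2]."""
    n = len(a)
    ap = [0.0] * (n * (n + 1) // 2)
    for j in range(n):           # column by column
        for i in range(j + 1):   # rows 0..j of column j
            ap[i + j * (j + 1) // 2] = a[i][j]
    return ap

def packed_get(ap, i, j):
    """Read A[i][j] back from upper-packed AP (requires i <= j)."""
    return ap[i + j * (j + 1) // 2]
```

For a 3x3 upper-triangular matrix with rows [1, 2, 3], [0, 4, 5], [0, 0, 6], the packed vector is [1, 2, 4, 3, 5, 6]: column 0, then column 1, then column 2.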
| explicit examples |
| Program samples would be nice, especially from other languages. |
| Always found what I need. |
| See Scalapack comments below. |
| The guide does not seem to include a discussion of the individual functions and their arguments. I often have to go to the Fortran source to find out which arguments a function takes. |