## Solving linear systems?


### Solving linear systems?

Hello, everyone!

I have a question about solving *non-square* linear systems. I know that I can use dgelss to solve Ax = b, and I believe dgelss uses an SVD (singular value decomposition) as its backend.

I also know that I can solve a linear system via a QR factorization by 1) computing the QR factorization via dgeqrf or dgeqp3, 2) constructing Q via dorgqr, 3) forming Q^T b using dgemm, and 4) writing my own backsubstitution to solve Rx = Q^T b.
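As an illustration only (SciPy's wrappers stand in for the direct LAPACK calls named above: `scipy.linalg.qr` drives dgeqrf/dorgqr, and `solve_triangular` does the backsubstitution), the four steps might look like this:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Steps 1-2: QR factorization (dgeqrf + dorgqr under the hood).
# mode='economic' returns Q as m x n and R as n x n.
Q, R = qr(A, mode='economic')

# Step 3: form Q^T b.
c = Q.T @ b

# Step 4: backsubstitution to solve R x = Q^T b.
x = solve_triangular(R, c, lower=False)

# Sanity check against a library least-squares solve.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))
```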

Given that an SVD and a QR factorization both cost O(mn^2) flops, is there any reason to use the second method if I am only solving one linear system? I believe that if I am solving multiple linear systems (i.e., Ax = b1, Ax = b2, Ax = b3, etc.), then the QR factorization method is useful. I will be solving multiple linear systems.

I am really interested in whether the linear system is feasible. Thus, after I have a solution x to Ax = b, I should perform the multiplication Ax using dgemm and test whether Ax is close to b within some tolerance.
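A minimal sketch of that feasibility test, with `A @ x` playing the role of the dgemv/dgemm call (the tolerance choice here is just an example, scaled by ||b||):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true            # consistent system: feasible by construction

x, *_ = np.linalg.lstsq(A, b, rcond=None)

# Feasibility check: is A x close to b within a tolerance?
residual = np.linalg.norm(A @ x - b)
tol = 1e-10 * np.linalg.norm(b)
print(residual <= tol)    # b is in the range of A, so this should hold
```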

Are there any Lapack functions that perform a backsubstitution, or that would help with the feasibility check?

I would greatly appreciate any comments on either the methods proposed here, or the Lapack functions that I am using. Please share if you think I am missing something!

Very best regards,
Susan
susan_ex2_texas

Posts: 13
Joined: Sat Jun 20, 2009 3:54 pm

### Re: Solving linear systems?

> susan_ex2_texas wrote:
> I have a question about solving *non-square* linear systems. I know that I can use dgelss to solve Ax = b, and I believe dgelss uses an SVD (singular value decomposition) as its backend.

Yes, DGELSS is fine. DGELSD also uses the SVD, but computes it with the divide-and-conquer algorithm. DGELS is another option that solves the linear least-squares problem through a QR factorization, and there is also DGELSY, which uses a QR factorization with column pivoting. So you have a total of four different options.
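For what it's worth, SciPy exposes three of these four drivers through `scipy.linalg.lstsq`'s `lapack_driver` argument ('gelsd', 'gelss', 'gelsy'; plain DGELS is not available there). A quick sketch comparing them on the same full-rank problem:

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)

# The same least-squares problem through three different LAPACK drivers:
# DGELSD (divide-and-conquer SVD), DGELSS (plain SVD), DGELSY (pivoted QR).
xs = []
for driver in ('gelsd', 'gelss', 'gelsy'):
    x, res, rank, sv = lstsq(A, b, lapack_driver=driver)
    xs.append(x)

# All three agree on a full-rank problem.
print(np.allclose(xs[0], xs[1]) and np.allclose(xs[0], xs[2]))
```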

> susan_ex2_texas wrote:
> I also know that I can solve a linear system via a QR factorization by 1) computing the QR factorization via dgeqrf or dgeqp3, 2) constructing Q via dorgqr, 3) forming Q^T b using dgemm, and 4) writing my own backsubstitution to solve Rx = Q^T b.

This is (almost) what DGELS and DGELSY do. (Steps 2 and 3 are done in one step using DORMQR, which applies Q^T to b without forming Q explicitly.) DGELS uses DGEQRF; DGELSY uses DGEQP3.

> susan_ex2_texas wrote:
> Given that an SVD and a QR factorization both cost O(mn^2) flops, is there any reason to use the second method if I am only solving one linear system? I believe that if I am solving multiple linear systems (i.e., Ax = b1, Ax = b2, Ax = b3, etc.), then the QR factorization method is useful. I will be solving multiple linear systems.

(1) Both the SVD and the QR factorization are O(mn^2) flops when m >> n, so this is correct. If m is of the same order as n, you will find that the QR factorization is significantly faster than the SVD.
(2) You can solve multiple linear least-squares problems with any of DGELS, DGELSY, DGELSD, or DGELSS: pass the right-hand sides as the columns of B (NRHS > 1).
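A sketch of point (2), again through SciPy's `lstsq` wrapper: the right-hand sides b1, b2, b3 go in as columns of one matrix, so a single call (one factorization) solves all of them.

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(3)
m, n, nrhs = 9, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, nrhs))   # b1, b2, b3 as columns

# One call handles all right-hand sides (NRHS = 3 in LAPACK terms).
X, res, rank, sv = lstsq(A, B, lapack_driver='gelsd')
print(X.shape)   # one solution column per right-hand side
```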

> susan_ex2_texas wrote:
> I am really interested in whether the linear system is feasible. Thus, after I have a solution x to Ax = b, I should perform the multiplication Ax using dgemm and test whether Ax is close to b within some tolerance.
> Are there any Lapack functions that perform a backsubstitution, or that would help with the feasibility check?

Yes, that's it. You call DGEMV (one RHS) or DGEMM (several RHS) and then check the norm. Another way to do this is to use the input/output vector B in DGELS:

```
*  B       (input/output) DOUBLE PRECISION array, dimension (LDB,NRHS)
*          if TRANS = 'N' and m >= n, rows 1 to n of B contain the least
*          squares solution vectors; the residual sum of squares for the
*          solution in each column is given by the sum of squares of
*          elements N+1 to M in that column;
```

So the input vector B is "longer" than the output vector X. The first N entries of B are used to store X, and the remaining entries N+1 to M are such that norm( Bout(N+1:M, J) ) is the norm of the J-th residual: || Bin(1:M, J) - A(1:M,1:N) * X(1:N, J) || (where X(1:N, J) is Bout(1:N, J)). So all you need to do after DGELS is something like: NORMRES = DNRM2( M-N, B( N+1, J ), 1 ). (It's not a bad idea to check that this is the same as what comes out of a DGEMM on A, though ... )
The same is true for DGELSD and DGELSS (but not DGELSY).
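The math behind that trick can be checked with a full QR factorization (SciPy here stands in for the LAPACK calls): since Q is orthogonal, the trailing entries of Q^T b carry exactly the residual norm, with no extra matrix-vector product needed.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(4)
m, n = 7, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Full QR: Q is m x m, R is m x n with zeros below row n.
Q, R = qr(A)
c = Q.T @ b                      # the analogue of what DGELS leaves in B
x = solve_triangular(R[:n, :n], c[:n], lower=False)

# Entries n+1..m of the transformed b carry the residual:
# norm(c[n:]) == || b - A x ||, because Q is orthogonal.
print(np.isclose(np.linalg.norm(c[n:]), np.linalg.norm(b - A @ x)))
```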

--julien
Julien Langou

Posts: 827
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

### Re: Solving linear systems?

> susan_ex2_texas wrote:
> ...
> Are there any Lapack functions that perform a backsubstitution, or that would help with the feasibility check?
> ...

Very late :-) but as I was looking for this myself: xTRSV (e.g. DTRSV, BLAS Level 2) solves a linear system with a triangular matrix, i.e. it does the backsubstitution.
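For example, through SciPy's `solve_triangular` (which wraps the LAPACK triangular solver; DTRSV is the corresponding BLAS-2 routine for a single right-hand side):

```python
import numpy as np
from scipy.linalg import solve_triangular

# Backsubstitution on an upper-triangular system R x = y.
R = np.array([[2.0, 1.0, 0.5],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
y = np.array([1.0, 2.0, 4.0])

x = solve_triangular(R, y, lower=False)
print(np.allclose(R @ x, y))
```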
- Mathias
mkoerner

Posts: 1
Joined: Mon Jun 06, 2011 12:36 pm