Good-Quality Solver for Quartic Equation

Postby DavidB » Tue Aug 08, 2006 1:37 pm

Well, I am going back to basics, reviewing some code.

At the moment, I am using the basic algorithm for calculating the four roots of a Quartic Equation:

a*x^4 + b*x^3 + c*x^2 + d*x + e = 0

(a, b, c, d, and e are real)

In other words, I do very little checking for overflow and/or underflow as I go through the computations. I would now like to find more robust code, code that is better at dealing with round-off errors.

Does LAPACK include a routine for calculating the roots of a real, quartic equation? If so, what is the routine's name?

Thanks for your assistance.

David
DavidB
 
Posts: 38
Joined: Thu Dec 15, 2005 1:49 pm
Location: Vancouver, B.C. Canada

Postby Julie » Tue Aug 08, 2006 4:13 pm

Hi DavidB,

LAPACK does not have any "direct" routine for calculating the roots of a real quartic equation, i.e.:
Code: Select all
if you give (a,b,c,d,e) then you get the roots of the polynomial as output

But....
If you really want to use the LAPACK library, you can construct the companion matrix of your degree-four polynomial (assuming a .ne. 0), which gives:
Code: Select all
    A = [
    0  0  0  -e/a
    1  0  0  -d/a
    0  1  0  -c/a
    0  0  1  -b/a
    ];
   

Eigenvalues of A and roots of P are the same.
This technique is useful (and used) for large polynomials; it gives rise to an O(n^3) algorithm.
For a degree-4 polynomial, using the direct formulas might be faster, though I do not know whether timing matters at such a small problem scale.
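
As a rough illustration of the recipe above (my own sketch, not a LAPACK routine), here is how you could build that companion matrix and hand it to DGEEV through the LAPACKE C interface from C++. The test polynomial and the choice of the C interface are assumptions on my part; a direct Fortran call to DGEEV would be analogous.
Code: Select all
// Sketch only: roots of a*x^4 + b*x^3 + c*x^2 + d*x + e via the companion
// matrix and DGEEV (through the LAPACKE C interface, assumed to be installed).
#include <lapacke.h>
#include <cstdio>

int main() {
    // Example coefficients: (x-1)(x-2)(x-3)(x-4) = x^4 - 10x^3 + 35x^2 - 50x + 24
    const double a = 1.0, b = -10.0, c = 35.0, d = -50.0, e = 24.0;

    // 4x4 companion matrix, stored column by column (column-major);
    // this is exactly the matrix A shown above.
    double A[16] = {
         0.0,  1.0,  0.0,  0.0,    // column 1
         0.0,  0.0,  1.0,  0.0,    // column 2
         0.0,  0.0,  0.0,  1.0,    // column 3
        -e/a, -d/a, -c/a, -b/a     // column 4
    };

    double wr[4], wi[4];   // real and imaginary parts of the eigenvalues = roots
    double dummy[1];       // eigenvectors are not requested ('N'), so never referenced
    lapack_int info = LAPACKE_dgeev(LAPACK_COL_MAJOR, 'N', 'N', 4,
                                    A, 4, wr, wi, dummy, 1, dummy, 1);
    if (info != 0) { std::printf("DGEEV failed, info = %d\n", (int)info); return 1; }

    for (int i = 0; i < 4; ++i)
        std::printf("root %d: %g + %g*i\n", i + 1, wr[i], wi[i]);
    return 0;
}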

Sincerely,
Julie
Julie
 
Posts: 299
Joined: Wed Feb 23, 2005 12:32 am
Location: ICL, Denver. Colorado

Postby DavidB » Fri Aug 11, 2006 7:03 pm

Thanks, Julie.

That worked.

I have done some further searching and come across a NETLIB routine that is highly recommended. It is not a LAPACK routine, nor one that is specific to the quartic equation; it is a general solver for the roots of a polynomial with real coefficients:

http://www.netlib.org/toms/493

I also found a C++ translation of this routine:

http://www.crbond.com/download/misc/rpoly.cpp


Regards,


David
DavidB
 
Posts: 38
Joined: Thu Dec 15, 2005 1:49 pm
Location: Vancouver, B.C. Canada

Postby DavidB » Mon Aug 28, 2006 1:58 pm

A follow-up regarding the toms/493 routine I mentioned in my previous post
(http://www.netlib.org/toms/493).

Is anybody supporting TOMS algorithms?
Many routines posted on the NETLIB site include documentation within the comments (date written, author, etc.). However, this TOMS/493 routine does not. Whom would a person contact with questions about this routine? The reason I ask is that I am trying to write my own version of this routine in C++. Most of the program looks pretty straightforward; however, the four lines declaring machine constants are throwing me off:

Code: Select all
C THE VALUES BELOW CORRESPOND TO THE BURROUGHS B6700
      BASE = 8.
      ETA = .5*BASE**(1-26)
      INFIN = 4.3E68
      SMALNO = 1.0E-45


I am not familiar with the BURROUGHS B6700.

C++ provides some support for machine constants. For example, I am pretty sure that ETA could be replaced with the machine constant DBL_EPSILON provided by C++; confirmation of this assumption would be nice.
Same thing for BASE: I think on most PCs, BASE would simply be 2.

Could somebody please let me know the C++ equivalents of those four values (BASE, ETA, INFIN, and SMALNO)? Built-in constants or functions would be preferred over magic numbers.

Regards,


David
DavidB
 
Posts: 38
Joined: Thu Dec 15, 2005 1:49 pm
Location: Vancouver, B.C. Canada

Postby sven » Tue Aug 29, 2006 6:54 am

Dear David,

If you look at the original TOMS article it does say what these values are:

ETA    - the maximum relative representation error, which can be described as the
         smallest positive floating-point number such that 1 + ETA > 1 in
         floating-point arithmetic
INFIN  - a large floating-point number near the top of the range
SMALNO - a small positive floating-point number near zero
BASE   - the exponent base of the floating-point number system
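
For what it is worth, on an ordinary IEEE-754 double-precision machine these map roughly onto the following values from the standard C++ <limits> header. This mapping is my own reading of the definitions above rather than anything from the TOMS article, so please check it against your platform; note in particular that the original Fortran sets ETA to .5*BASE**(1-T), i.e. half of what C++ calls epsilon.
Code: Select all
// Rough IEEE-754 double-precision counterparts of the TOMS/493 machine
// constants, using the standard C++ <limits> header (a sketch -- verify
// against your own platform before relying on it).
#include <limits>

const double BASE   = std::numeric_limits<double>::radix;        // 2 on PCs
const double ETA    = std::numeric_limits<double>::epsilon();    // about 2.22e-16
                                                                  // (or half of this, to match .5*BASE**(1-T))
const double INFIN  = std::numeric_limits<double>::max();        // about 1.8e+308
const double SMALNO = std::numeric_limits<double>::min();        // about 2.2e-308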

TOMS algorithms are not formally supported, although contacting the authors is often successful. I appreciate that in this case the article is over thirty years old!

By the way, TOMS algorithms are copyrighted - see the top of the netlib web page:

"Use of ACM Algorithms is subject to the ACM Software Copyright and
License Agreement"

You are unlikely to have any problem with this, but formally you should ask for permission. A contact address is given on the netlib web page.

Best wishes,

Sven Hammarling.
sven
 
Posts: 146
Joined: Wed Dec 22, 2004 4:28 am

Re: Good-Quality Solver for Quartic Equation

Postby Julien Langou » Wed Oct 21, 2009 4:30 pm

Here are three emails we had with Peter Strobach. The emails are in
chronological order. (So the first email is the original email from Peter,
then you will read my answer, and finally his answer.)

In short, Peter reports that using closed-form formulas to solve
degree-4 polynomials ("quartic polynomials") is faster and more stable
than DGEEV + companion matrix.


>> From: "Peter Strobach"
>> Date: October 21, 2009 3:52:31 AM MDT
>> Subject: Good-Quality Solver for Quartic Equation
>>
>> Dear Julie,
>>
>> I have seen your comment on the above topic "good-quality solver for
>> quartic equation" in the Lapack/Scalapack forum user discussion.
>>
>> You wrote on Aug. 08, 2006: "Lapack does not have any 'direct' method
>> for calculating the roots of a real, quartic equation, but ...''
>>
>> and then you recommend the classical companion matrix method as an
>> alternative to the closed-form solution.
>>
>> Now it appears that I have a larger optimization problem where I have
>> to solve very many quartic equations, literally tens of thousands of
>> them, in one of these larger problems.
>>
>> Therefore, for me, it is a pressing issue, and the solution of the
>> quartic, in my case, is time and accuracy critical, as this
>> dominates runtimes and overall accuracy of my larger optimization
>> program.
>>
>> In 2008, I began writing a code (in double-precision Fortran) for
>> the closed-form quartic solver. By now, this program is well
>> developed.
>>
>> Now that I have seen your comment in this forum, I wanted
>> to run an experiment to see how fast and accurate a companion-matrix-
>> type (iterative) quartic solver based on unsymmetric eigenvalue
>> computation might be, in direct comparison with my closed-form
>> quartic solver.
>>
>> Thus, I readily implemented the companion-type solver and used
>> the LAPACK unsymmetric eigenvalue routine DGEEV.f (computation
>> of eigenvectors disabled, of course).
>>
>> The experiment was carried out on an Acer TravelMate 6592G, my favourite
>> simulation computer, with an Intel T9300 dual-core processor, 2.5 GHz,
>> 800 MHz FSB, 6 MB L2 cache. The compiler was the latest version
>> of the Intel vectorizing Fortran compiler.
>>
>> My test program comprises random generation of the quartic
>> coefficients followed by calls to the solver routines.
>>
>> The test amounts to the solution of 100,000 (one hundred thousand)
>> random quartics. The runtime was accumulated for the solver subroutines
>> using the cpu_time intrinsic in Fortran.
>>
>> The following overall cpu_time results were obtained this morning
>> for solving 100,000 random quartics on this TravelMate 6592G:
>>
>> - Closed-form quartic solver (Strobach): overall cpu_time = 0.25 sec
>> - Companion/LAPACK eigenvalue solver   : overall cpu_time = 2.78 sec
>>
>> You can see quite striking differences: the closed-form quartic
>> solver is more than 10 times faster than the companion/eigenvalue-
>> based routine.
>>
>> The accuracy of the solution was also significantly better overall
>> for the closed-form solver than for the eigenvalue-based solver,
>> whose overall accuracy was unsatisfactory.
>>
>> For me, this is very important, because the quality of my overall
>> optimization routine depends crucially on these two factors:
>> Runtime and Accuracy of the underlying quartic solver.
>>
>> Best regards,
>> Peter Strobach, Ph.D.

> Date: Wed, 21 Oct 2009 08:58:05 -0600 (MDT)
> From: Julien Langou
> To: Peter Strobach
> CC: Julie Langou
> Subject: Re: Fwd: Good-Quality Solver for Quartic Equation

>
> Dear Peter,
>
> Thanks for your comment. Yes, there is a closed-form solution for any
> polynomial of degree 4. (That stops there, though! You won't do this for
> degree 5 or higher!) In the nonsymmetric eigenvalue solver, LAPACK stops
> at degree 2: when we reach a 2-by-2 diagonal block, we get the eigenvalues
> using the discriminant formula. We could theoretically have stopped earlier,
> at degree 4 as you mention, or later, at degree 1. Odd degrees (1 and 3)
> are not particularly welcome since we want to keep complex conjugate
> pairs together (to avoid complex arithmetic).
>
> It is good to know that the closed formula for the quartic equation is
> faster than LAPACK on the companion matrix. Thanks for reporting your
> experiment to us. Yes, a factor of 10 is relevant. The stability report is
> even more interesting, and was at least unknown to me. I am not sure what
> we will do with this; the software implications seem quite heavy. But it
> is good to know for sure.
>
> For information, current research at the interface of 'numerical linear
> algebra' and 'polynomial root finding' is on how to develop algorithms that
> exploit the structure of the companion matrix. Exploiting the structure of
> the companion matrix should make it possible to obtain faster and more
> stable root-finding algorithms (for general n).
>
> May I forward your email to the lapackers community, and post it on our
> forum as well? I believe what you are saying is relevant for our community.
>
> Best wishes,
> Julien and Julie.
>


Date: Wed, 21 Oct 2009 09:31:21 -0600
To: "Langou, Julien"
Subject: Re: Fwd: Good-Quality Solver for Quartic Equation

Dear Julien and Julie,

There is no problem in forwarding this report to the lapackers
community. I have seen that there are relatively many views on this
subject.

I can see your argument below. With 2 x 2 blocks you can
stay within the real Schur form, avoiding complex arithmetic.

Avoiding complex arithmetic: Sometimes this is nice and sometimes
this is not wise ...

(for instance, my closed-form solver uses double complex in a
versatile way at some places ...)

Anyway, an old piece of wisdom applies: don't do anything iteratively
before you have done it in closed form (if you can)!

The quartic was only a subproblem in a larger problem, namely joint
matrix diagonalization: finding the best generalized eigenvectors
for a set of more than two matrices. That is a challenging
nonlinear optimization problem which can be handled very nicely
using the unit-determinant constraint.

Yes, this is enlightening: To discover how restrictive it is to think
in the world of the orthonormality constraint, and how much freedom
one gains as soon as one arrives in the world of the unit-determinant
constraint!

All the best,
Peter.
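
A side note for readers of this thread: the "2-by-2 block" step mentioned in the second email above is essentially the quadratic (discriminant) formula applied to a trailing block of the real Schur form. Below is a minimal, numerically careful sketch of that formula in C++ for a monic quadratic; this is my own illustration, not LAPACK's actual code, which works on the 2-by-2 matrix block rather than on polynomial coefficients.
Code: Select all
// Sketch: roots of the monic quadratic x^2 + p*x + q = 0, computed so that
// two real roots are never obtained by subtracting nearly equal numbers.
#include <cmath>
#include <complex>
#include <cstdio>
#include <utility>

std::pair<std::complex<double>, std::complex<double>>
quadratic_roots(double p, double q) {
    const double disc = p * p - 4.0 * q;
    if (disc >= 0.0) {
        // Compute the larger-magnitude root first, then recover the other one
        // from the product of the roots (x1 * x2 = q) to avoid cancellation.
        const double x1 = -0.5 * (p + std::copysign(std::sqrt(disc), p));
        const double x2 = (x1 != 0.0) ? q / x1 : -p - x1;
        return { {x1, 0.0}, {x2, 0.0} };
    }
    // Negative discriminant: complex conjugate pair.
    const double re = -0.5 * p;
    const double im = 0.5 * std::sqrt(-disc);
    return { {re, im}, {re, -im} };
}

int main() {
    auto r = quadratic_roots(-3.0, 2.0);   // x^2 - 3x + 2 has roots 1 and 2
    std::printf("%g and %g\n", r.first.real(), r.second.real());
    return 0;
}
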
Julien Langou
 
Posts: 835
Joined: Thu Dec 09, 2004 12:32 pm
Location: Denver, CO, USA

Re: Good-Quality Solver for Quartic Equation

Postby DavidB » Sat Oct 31, 2009 9:51 pm

Julien, this is good information from Peter.

Would Peter be willing to share his source code for the quartic equation solver?

When I first looked into this problem, I was told there was no way to take advantage of the closed-form formulas to create a robust numerical solver for a quartic equation. I was told an iterative approach was the best option because it could be programmed to deliver a solution to as many digits of accuracy as computer round-off error allows, which a simple coding of the closed-form formulas could not.

In any case, I am very curious to see Peter's source code; any chance he is willing to have it posted here?
DavidB
 
Posts: 38
Joined: Thu Dec 15, 2005 1:49 pm
Location: Vancouver, B.C. Canada

