floating point error causing segfault in dgelsd on mac os x


Postby aaronBabcock » Thu Mar 22, 2007 11:46 am

Hello All,

In the CLAPACK documentation for the dgelsd function, the computation of the workspace size involves a variable called NLVL, whose value is specified as:

NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 )

I am running into a specific case where floating-point error yields a too-small NLVL value, which in turn produces an inadequate workspace size and ultimately a segfault in dgelsd.

In my case MIN( M,N )/(SMLSIZ+1) evaluates to 8. On most platforms int(log2(8)) is 3, but on my platform it evaluates to 2. With the correct value, NLVL = 3 + 1 = 4; with the truncated value it becomes 2 + 1 = 3, so the computed workspace is smaller than what dgelsd actually uses.

My question is: how should I protect against this, and do you think the formula in the documentation should change? Or is the problem solely the fault of the Mac OS X math library?
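In the meantime, one defensive option in my own code would be to compute NLVL without floating point at all. Here is a minimal sketch of that idea (the ilog2 helper and the example M, N, and SMLSIZ values are mine, not from LAPACK; SMLSIZ is typically 25):

#include <stdio.h>

/* Hypothetical helper: floor(log2(x)) for x >= 1, computed by bit
   shifting so no floating point is involved. */
static int ilog2(int x) {
    int r = 0;
    while (x > 1) { x >>= 1; r++; }
    return r;
}

int main(void) {
    int m = 208, n = 300, smlsiz = 25;       /* example values only */
    int minmn = m < n ? m : n;
    int q = minmn / (smlsiz + 1);            /* 208 / 26 = 8 */
    int nlvl = (q >= 1) ? ilog2(q) + 1 : 0;  /* MAX(0, INT(LOG_2(q)) + 1) */
    printf("NLVL = %d\n", nlvl);             /* prints 4 */
    return 0;
}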

I can provide more information about my problem if necessary, but if anyone has access to an Intel Mac OS X machine, you can reproduce the root of the problem with this simple program:

#include <math.h>
#include <stdio.h>
int main(void) {
    printf("%d\n", (int)log2(8)); /* 2 on Mac OS X Intel, 3 elsewhere */
    return 0;
}

Platform information:
Mac OS X 10.4.9 Intel: int(log2(8)) = 2
Mac OS X 10.4.9 PPC: int(log2(8)) = 3
Linux on Intel: int(log2(8)) = 3

thanks,
Aaron
aaronBabcock
 
Posts: 1
Joined: Fri Mar 16, 2007 4:25 pm

Postby Julie » Fri Mar 30, 2007 2:35 pm

Yes... a very interesting problem indeed.
I checked and you are correct. The float-to-int conversion truncates rather than rounds, and on Mac OS X Intel log2(8) apparently comes back just below 3, so the truncation yields 2.
You will have to force the rounding: int(round(log2(8))) on the Mac Intel gives the correct result.
I need to check whether there is a standard that Mac OS X Intel is not respecting.
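A quick sketch of the workaround (my example, not code from LAPACK). Note that for inputs that are not exact powers of two, round() can give one more than INT(LOG_2(x)); that only overestimates the workspace, so it is harmless here:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Truncation exposes the tiny error in log2(8)... */
    printf("truncated: %d\n", (int)log2(8));
    /* ...while rounding to the nearest integer first absorbs it. */
    printf("rounded:   %d\n", (int)round(log2(8)));
    return 0;
}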
Julie
Julie
 
Posts: 299
Joined: Wed Feb 23, 2005 12:32 am
Location: ICL, Denver, Colorado

