
floating point error causing segfault in dgelsd on mac os x

PostPosted: Thu Mar 22, 2007 11:46 am
by aaronBabcock
Hello All,

In the clapack documentation for the dgelsd function, the formula for the workspace size contains a variable called NLVL, whose value is specified as:

NLVL = MAX( 0, INT( LOG_2( MIN( M,N )/(SMLSIZ+1) ) ) + 1 )

I am running into a specific case where floating point error produces an NLVL value that is too small, which leads to an inadequate workspace size and ultimately a segfault in dgelsd.

In my case MIN( M,N )/(SMLSIZ+1) evaluates to 8. On most platforms int(log2(8)) is 3, so NLVL comes out as 4. On my platform int(log2(8)) evaluates to 2, so NLVL comes out as 3 and the computed workspace is too small.

My question is, how should I protect against this, and do you think the formula in the documentation should change? Or is the problem solely the fault of the Mac OS X math library?
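In case it is useful to anyone hitting the same thing, one defensive workaround I can think of is to avoid the math library entirely and compute the log base 2 of the integer quotient with shifts. A minimal sketch (the function name and arguments are mine, not from the LAPACK sources):

#include <stdio.h>

/* floor(log2(q)) by repeated shifting; no floating point involved,
   so an inaccurate log2() cannot shrink the result */
static int nlvl_workspace(int minmn, int smlsiz)
{
    int q = minmn / (smlsiz + 1);   /* MIN(M,N)/(SMLSIZ+1) */
    int lg = 0;
    if (q < 1)
        return 1;   /* conservative; a slightly large NLVL only costs a little extra workspace */
    while (q > 1) {
        q >>= 1;
        lg++;
    }
    return lg + 1;  /* the MAX(0, ...) in the formula is moot once q >= 1 */
}

int main(void){
    /* e.g. MIN(M,N) = 208 and SMLSIZ = 25 give a quotient of 8, so this prints 4 */
    printf("%d\n", nlvl_workspace(208, 25));
}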

I can provide more information about my problem if necessary, but anyone with access to an Intel Mac OS X machine can reproduce the root of the problem with this simple program:

#include <stdio.h>
#include <math.h>

int main(void){
    printf("%d\n", (int)log2(8));
}
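If it helps to diagnose the root cause, printing the value at full precision shows what the library returns before the cast truncates it; on the affected machine I would expect something just under 3, e.g. 2.9999999999999996 (compile with -lm on most Unix-like systems):

#include <stdio.h>
#include <math.h>

int main(void){
    /* %.17g prints enough digits to reproduce the double exactly */
    printf("%.17g\n", log2(8.0));
}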

Platform information:
Mac OS X 10.4.9 Intel: int(log2(8)) = 2
Mac OS X 10.4.9 ppc: int(log2(8)) = 3
Linux on Intel: int(log2(8)) = 3

thanks,
Aaron

PostPosted: Fri Mar 30, 2007 2:35 pm
by Julie
Yes...very interesting problem indeed.
I checked and you are correct. It looks like the cast does not round but truncates the float on Mac OS X Intel, so a log2(8) that comes back even slightly below 3 turns into 2.
You will have to force the rounding yourself: just do int(round(log2(8))) on the Mac Intel to get the correct result.
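For example, a minimal sketch of that workaround applied to the NLVL formula (the helper name and arguments are mine, for illustration only):

#include <math.h>

/* round() rescues values like 2.999...; for quotients that are not
   exact powers of two it can round up where a truncating cast would
   round down, but that only makes NLVL (and the workspace) slightly
   larger, which is harmless */
static int nlvl_rounded(int minmn, int smlsiz)
{
    double q = (double) minmn / (double) (smlsiz + 1);
    int lg = (int) round(log2(q));
    int nlvl = lg + 1;
    return nlvl > 0 ? nlvl : 0;     /* the MAX(0, ...) in the formula */
}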
I need to check whether there is a standard that Mac OS X Intel is not respecting.
Julie