LAPACK Archives

[Lapack] SPECjvm2007: dataset for computations kernel question


I'm working for Intel and participating in SPECjvm2007 development.

I would like to ask your advice, as you are an expert in this area.

We would like to include several computational kernels (FFT, SPARSE, LU,
and SOR) and are now trying to decide what dataset size makes sense to
use, i.e., the size of the vectors and matrices _per_ hardware thread.


The alternatives are: a small dataset ("in-cache", vector or matrix size
of about 256K - 1M) or a large one ("out-of-cache", ~8M).

The advantage of the "in-cache" version is that the benchmark focuses on
computation rather than on the memory subsystem (as the "out-of-cache"
version does).
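As a back-of-envelope check of the distinction, one can compare each candidate working set against a cache size; the sketch below is illustrative only (the 8 MiB last-level-cache figure and the element counts are assumptions, not SPECjvm2007 parameters):

```java
// Back-of-envelope check: does each candidate working set of doubles
// fit in an assumed last-level cache? Illustrative only.
public class WorkingSet {
    // Classify a working set of `elems` doubles against an LLC of `llcBytes`.
    static String classify(long elems, long llcBytes) {
        long bytes = elems * Double.BYTES; // 8 bytes per double
        return bytes <= llcBytes ? "in-cache" : "out-of-cache";
    }

    public static void main(String[] args) {
        long llc = 8L << 20; // assume an 8 MiB last-level cache
        System.out.println(classify(256 * 1024, llc));      // 256K doubles = 2 MiB
        System.out.println(classify(8 * 1024 * 1024, llc)); // 8M doubles = 64 MiB
    }
}
```

On this assumption, the 256K-element vector (2 MiB) stays cache-resident while the 8M-element one (64 MiB) does not, so the latter exercises the memory subsystem.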


But which choice corresponds better to real-world applications? What do
you think?

For FFT, say: since these algorithms are recursive and FFT is used for
audio encoding/decoding, small sizes are probably more realistic. Do you
agree? Can we say the same for SPARSE, LU, and SOR?


Can you advise based on your experience, perhaps with some concrete
examples?


Thank you so much,




  • [Lapack] SPECjvm2007: dataset for computations kernel question, Maenkova, Evgeniya G <=

For additional information you may use the LAPACK/ScaLAPACK Forum or one
of the mailing lists.