Displaying 11-15 of 49 Entries
STREAM Benchmarking: Intel Xeon 5500 Nehalem vs AMD Opteron 2400 Istanbul

The STREAM memory benchmark is a widely used synthetic test that measures sustainable memory bandwidth in MB/s. Memory bandwidth has become increasingly important as CPU vendors add more cores to each chip.


HPCC 1.3.1 Released

We are pleased to announce the release of HPCC 1.3.1. This is a bug-fix release that fixes two bugs introduced in version 1.3.0, in the PTRANS and FFT components of the code. Please use this version instead of 1.3.0.

Summary of changes

  1. Updated version of HPL (import of HPL 2.0 source code)
    • Replaced the 32-bit Pseudo Random Number Generator (PRNG) with a 64-bit one.
    • Replaced 3 numerical checks of the solution residual with a single one.
    • Added support for 64-bit systems with large memory sizes (previously, index calculations would overflow 32-bit integers).
  2. Introduced a limit on FFT vector sizes so they fit in a 32-bit integer (only applicable when using FFTW version 2).

2008 Cluster Challenge Results

The second Cluster Challenge was held a few weeks ago in association with SC08 in Austin and, with its cycling theme, has been described as the “Tour de France of SC”. The peloton consisted of 7 teams from 4 countries who built and ran a supercomputer on benchmarks and applications for 46 straight hours. The level of activity associated with the event is amazing, with individuals working intensely until they exhaust themselves and slump in place to sleep (and I have photos to prove it).


Argonne, Oak Ridge labs sweep HPC Challenge

The Defense Advanced Research Projects Agency's High Performance Computing Challenge has recognized supercomputers at the Energy Department's Argonne National Laboratory and Oak Ridge National Laboratory for their superior performance.
The winners were announced at the SC 08 supercomputing conference held here this week.


With the petaflop barrier broken, is it time to change the benchmark?

At a presentation at SC08, the annual supercomputing conference in Austin, Texas, an engineer with Oak Ridge National Laboratory in Tennessee who is an expert on the Linpack benchmark suggested that the methodology used to determine supercomputer performance using Linpack may be behind the times. Specifically, Jack Dongarra -- the man credited with introducing the High-Performance Linpack (HPL) benchmark to the Top 500 program -- suggested that as supercomputers get bigger and can store more data, their run times grow sharply. This implies that making existing supercomputers bigger and faster eventually leads to a point of diminishing returns.


Jul 30 2014