Manufacturer/Processor Type, Speed, Count, Threads, Processes
Includes the manufacturer/processor type, processor speed, number of processors, number of threads, and number of processes.
Each row also carries additional system information: manufacturer, system name, interconnect, MPI, affiliation, and submission date.

Run Type
Indicates whether the benchmark was a base run or an optimized run.

Processors
The number of processors used in the benchmark, as entered by the benchmark submitter.








To highlight the HPC Challenge Class 1 Awards, which will be presented at SC14, we are not displaying new submissions until the Awards session. The Awards session is on November 18th at 12:15 CST in room 273.

Geometric Mean - Optimized Runs Only - 19 Systems - Generated on Thu Oct 30 11:16:21 2014
Each entry below gives the system information (manufacturer, processor type, speed, count, threads, processes, system name, interconnect, MPI, affiliation, submission date) followed by its results for G-HPL (TFlop/s), G-RandomAccess (Gup/s), EP-STREAM Sys (TB/s), G-FFT (TFlop/s), and the Geometric mean.
Manufacturer: NEC
Processor Type: NEC SX-7
Processor Speed: 0.552GHz
Processor Count: 32
Threads: 1
Processes: 32
System Name: NEC SX-7
Interconnect: non
MPI: MPI/SX 7.0.6
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.26 TFlop/s
G-RandomAccess: 0.26 Gup/s
EP-STREAM Sys: 0.88 TB/s
G-FFT: 0.08 TFlop/s
Geometric mean: 0.0058
Manufacturer: NEC
Processor Type: NEC SX-7
Processor Speed: 0.552GHz
Processor Count: 32
Threads: 16
Processes: 2
System Name: NEC SX-7
Interconnect: non
MPI: MPI/SX 7.0.6
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.18 TFlop/s
G-RandomAccess: 0.15 Gup/s
EP-STREAM Sys: 0.90 TB/s
G-FFT: 0.01 TFlop/s
Geometric mean: 0.0026
Manufacturer: NEC
Processor Type: NEC SX-8
Processor Speed: 2GHz
Processor Count: 40
Threads: 1
Processes: 40
System Name: NEC SX-7C
Interconnect: IXS
MPI: MPI/SX 7.1.3
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.61 TFlop/s
G-RandomAccess: 0.01 Gup/s
EP-STREAM Sys: 1.44 TB/s
G-FFT: 0.09 TFlop/s
Geometric mean: 0.0036
Manufacturer: NEC
Processor Type: NEC SX-8
Processor Speed: 2GHz
Processor Count: 40
Threads: 8
Processes: 5
System Name: NEC SX-7C
Interconnect: IXS
MPI: MPI/SX 7.1.3
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.30 TFlop/s
G-RandomAccess: 0.00 Gup/s
EP-STREAM Sys: 1.44 TB/s
G-FFT: 0.03 TFlop/s
Geometric mean: 0.0016
Manufacturer: IBM
Processor Type: IBM Power5+
Processor Speed: 2.2GHz
Processor Count: 64
Threads: 1
Processes: 64
System Name: P5 P575+
Interconnect: HPS
MPI: poe 4.2.2.3
Affiliation: IBM
Submission Date: 05-08-06
G-HPL: 0.49 TFlop/s
G-RandomAccess: 0.26 Gup/s
EP-STREAM Sys: 0.77 TB/s
G-FFT: 0.02 TFlop/s
Geometric mean: 0.0048
Manufacturer: IBM
Processor Type: IBM Power5+
Processor Speed: 2.2GHz
Processor Count: 128
Threads: 1
Processes: 128
System Name: P5 P575+
Interconnect: HPS
MPI: poe 4.2.2.3
Affiliation: IBM
Submission Date: 05-08-06
G-HPL: 0.99 TFlop/s
G-RandomAccess: 0.44 Gup/s
EP-STREAM Sys: 1.53 TB/s
G-FFT: 0.04 TFlop/s
Geometric mean: 0.0090
Manufacturer: Cray Inc.
Processor Type: Cray X1E
Processor Speed: 1.13GHz
Processor Count: 248
Threads: 1
Processes: 248
System Name: mfeg8
Interconnect: Modified 2D Torus
MPI: mpt 2.4
Affiliation: Cray
Submission Date: 06-15-05
G-HPL: 3.39 TFlop/s
G-RandomAccess: 1.85 Gup/s
EP-STREAM Sys: 3.28 TB/s
G-FFT: -0.00 TFlop/s
Geometric mean: 0.0000
Manufacturer: Cray Inc.
Processor Type: Cray X1E
Processor Speed: 1.13GHz
Processor Count: 1008
Threads: 1
Processes: 1008
System Name: X1
Interconnect: Cray Modified 2D torus
MPI: MPT
Affiliation: DOE/Office of Science/ORNL
Submission Date: 11-02-05
G-HPL: 12.27 TFlop/s
G-RandomAccess: 7.69 Gup/s
EP-STREAM Sys: 12.69 TB/s
G-FFT: 0.25 TFlop/s
Geometric mean: 0.0913
Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 1024
Threads: 1
Processes: 1024
System Name: Blue Gene/L
Interconnect: Custom
MPI: MPICH 1.0 customized for Blue Gene/L
Affiliation: Blue Gene Computational Center at IBM T.J. Watson Research Center
Submission Date: 04-11-05
G-HPL: 1.42 TFlop/s
G-RandomAccess: 0.13 Gup/s
EP-STREAM Sys: 0.86 TB/s
G-FFT: 0.05 TFlop/s
Geometric mean: 0.0066
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Laboratory, DOE Office of Science
Submission Date: 11-10-05
G-HPL: 20.42 TFlop/s
G-RandomAccess: 0.66 Gup/s
EP-STREAM Sys: 29.32 TB/s
G-FFT: 0.78 TFlop/s
Geometric mean: 0.0924
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Laboratories - DOE Office of Science
Submission Date: 11-12-05
G-HPL: 20.42 TFlop/s
G-RandomAccess: 0.66 Gup/s
EP-STREAM Sys: 29.32 TB/s
G-FFT: 0.78 TFlop/s
Geometric mean: 0.0924
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Lab - DOD Office of Science
Submission Date: 11-12-05
G-HPL: 20.34 TFlop/s
G-RandomAccess: 0.69 Gup/s
EP-STREAM Sys: 29.22 TB/s
G-FFT: 0.86 TFlop/s
Geometric mean: 0.0954
HPC Challenge Award Winner
2007 - 3rd place - G-FFT: 1.1 Tflop/s
2006 - 2nd place - G-FFT: 1.12 Tflop/s
2006 - 3rd place - G-RandomAccess: 10 GUPS

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.6GHz
Processor Count: 10404
Threads: 1
Processes: 10404
System Name: XT3 Dual-Core
Interconnect: Cray SeaStar
MPI: xt-mpt 1.5.25
Affiliation: Oak Ridge National Lab
Submission Date: 11-06-06
G-HPL: 43.51 TFlop/s
G-RandomAccess: 10.67 Gup/s
EP-STREAM Sys: 26.54 TB/s
G-FFT: 1.12 TFlop/s
Geometric mean: 0.2392
HPC Challenge Award Winner
2008 - 3rd place - G-RandomAccess: 34 GUPS
2007 - 2nd place - EP-STREAM system: 77 TB/s
2007 - 2nd place - G-RandomAccess: 33.6 GUPS
2007 - 2nd place - G-HPL: 94 Tflop/s

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12800
Threads: 1
Processes: 25600
System Name: Red Storm/XT3
Interconnect: Seastar
MPI: xt-mpt/1.5.39 based on MPICH 2.0
Affiliation: DOE/NNSA/Sandia National Laboratories
Submission Date: 11-06-07
G-HPL: 93.58 TFlop/s
G-RandomAccess: 33.56 Gup/s
EP-STREAM Sys: 77.13 TB/s
G-FFT: 1.52 TFlop/s
Geometric mean: 0.5429
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12960
Threads: 1
Processes: 25920
System Name: Red Storm/XT3
Interconnect: Cray custom
MPI: MPICH 2 v1.0.2
Affiliation: NNSA/Sandia National Laboratories
Submission Date: 11-10-06
G-HPL: 90.99 TFlop/s
G-RandomAccess: 29.82 Gup/s
EP-STREAM Sys: 53.89 TB/s
G-FFT: 1.53 TFlop/s
Geometric mean: 0.4796
HPC Challenge Award Winner
2008 - 2nd place - G-FFT: 2.87 Tflop/s
2007 - 1st place - G-FFT: 2.8 Tflop/s

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12960
Threads: 1
Processes: 25920
System Name: Red Storm/XT3
Interconnect: Seastar
MPI: xt-mpt/1.5.39 based on MPICH 2.0
Affiliation: DOE/NNSA/Sandia National Laboratories
Submission Date: 11-06-07
G-HPL: 93.24 TFlop/s
G-RandomAccess: 29.46 Gup/s
EP-STREAM Sys: 69.67 TB/s
G-FFT: 2.87 TFlop/s
Geometric mean: 0.6005
HPC Challenge Award Winner
2007 - 3rd place - G-HPL: 67 Tflop/s
2007 - 3rd place - G-RandomAccess: 17.3 GUPS
2006 - 2nd place - G-HPL: 67 Tflop/s
2006 - 2nd place - G-RandomAccess: 17 GUPS

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 32768
Threads: 1
Processes: 16384
System Name: Blue Gene/L
Interconnect: Blue Gene Custom Interconnect
MPI: MPICH 1.1
Affiliation: IBM T.J. Watson Research Center
Submission Date: 11-04-05
G-HPL: 67.12 TFlop/s
G-RandomAccess: 17.29 Gup/s
EP-STREAM Sys: 39.98 TB/s
G-FFT: 0.99 TFlop/s
Geometric mean: 0.3228
HPC Challenge Award Winner
2008 - 2nd place - EP-STREAM system: 160 TB/s
2008 - 2nd place - G-RandomAccess: 35 GUPS
2007 - 1st place - G-RandomAccess: 35.5 GUPS
2007 - 1st place - EP-STREAM system: 160 TB/s
2007 - 2nd place - G-FFT: 2.3 Tflop/s
2006 - 1st place - G-RandomAccess: 35 GUPS
2006 - 1st place - EP-STREAM system: 160 TB/s
2006 - 1st place - G-FFT: 2.3 Tflop/s
2005 - 1st place - G-FFT: 2.3 Tflop/s
2005 - 1st place - EP-STREAM system: 160 TB/s
2005 - 1st place - G-RandomAccess: 35 GUPS

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 131072
Threads: 1
Processes: 65536
System Name: Blue Gene/L
Interconnect: Custom Torus / Tree
MPI: MPICH2 1.0.1
Affiliation: National Nuclear Security Administration
Submission Date: 11-02-05
G-HPL: 252.30 TFlop/s
G-RandomAccess: 35.47 Gup/s
EP-STREAM Sys: 160.06 TB/s
G-FFT: 2.31 TFlop/s
Geometric mean: 0.9408
HPC Challenge Award Winner
2008 - 2nd place - G-HPL: 259 Tflop/s
2007 - 1st place - G-HPL: 259 Tflop/s
2006 - 1st place - G-HPL: 259 Tflop/s
2005 - 1st place - G-HPL: 259 Tflop/s

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 131072
Threads: 1
Processes: 65536
System Name: Blue Gene/L
Interconnect: Custom Torus / Tree
MPI: MPICH2 1.0.1
Affiliation: National Nuclear Security Administration
Submission Date: 11-02-05
G-HPL: 259.21 TFlop/s
G-RandomAccess: 32.98 Gup/s
EP-STREAM Sys: 159.90 TB/s
G-FFT: 2.23 TFlop/s
Geometric mean: 0.9215

 

Note:
Blank fields in the table above are from early benchmark runs that did not include the individual benchmark in question, in particular G-RandomAccess, G-FFT, and EP-DGEMM.

Column Definitions
G-HPL ( system performance )
Solves a randomly generated dense linear system of equations in double-precision (IEEE 64-bit) floating-point arithmetic using MPI. The linear system matrix is stored in a two-dimensional block-cyclic fashion, and multiple variants of code are provided for the computational kernels and communication patterns. The solution method is LU factorization through Gaussian elimination with partial row pivoting, followed by backward substitution. Unit: Tera Flops per Second
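As a rough single-process illustration (not the HPL code itself, which is a distributed MPI program written in C), the sketch below times a dense solve with NumPy and converts the time into a flop rate using the conventional operation count of roughly (2/3)n^3 plus lower-order terms; the problem size n is an arbitrary assumption.

    # Hypothetical single-process illustration of an HPL-style flop-rate calculation.
    import time
    import numpy as np

    n = 2000                                   # assumed problem size
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)                  # LU with partial pivoting, then substitution
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # approximate factor-plus-solve operation count
    print("residual norm:", np.linalg.norm(A @ x - b))
    print("Gflop/s: %.2f" % (flops / elapsed / 1e9))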
G-PTRANS (A=A+B^T, MPI) ( system performance )
Implements a parallel matrix transpose for two-dimensional block-cyclic storage. It is an important benchmark because it exercises the communications of the computer heavily on a realistic problem where pairs of processors communicate with each other simultaneously. It is a useful test of the total communications capacity of the network. Unit: Giga Bytes per Second
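A serial sketch of the A = A + B^T update that PTRANS performs is shown below; the real benchmark distributes both matrices block-cyclically over MPI processes, so the transposed blocks must cross the network. The matrix order and the byte-counting convention here are assumptions for illustration only.

    # Serial illustration of the PTRANS update and a rough bandwidth estimate.
    import time
    import numpy as np

    n = 4000                                   # assumed matrix order
    rng = np.random.default_rng(1)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    t0 = time.perf_counter()
    A += B.T                                   # the PTRANS kernel: A = A + B^T
    elapsed = time.perf_counter() - t0

    bytes_moved = 3 * A.nbytes                 # read A, read B, write A (approximation)
    print("GB/s (approx): %.2f" % (bytes_moved / elapsed / 1e9))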
G-RandomAccess ( system performance )
Global RandomAccess, also called GUPs, measures the rate at which the computer can update pseudo-random locations of its memory - this rate is expressed in billions (giga) of updates per second (GUP/s). Unit: Giga Updates per Second
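The heart of RandomAccess is a loop that XORs pseudo-random 64-bit values into pseudo-random locations of a large table. The sketch below mimics that idea in Python; the table size, update count, and random-number recurrence are simplified assumptions, not the official HPCC generator.

    # Toy GUP/s measurement in the spirit of RandomAccess (not the official kernel).
    import time

    LOG2_SIZE = 20
    TABLE_SIZE = 1 << LOG2_SIZE
    N_UPDATES = 4 * TABLE_SIZE
    MASK64 = (1 << 64) - 1
    POLY = 0x7                                  # simple 64-bit shift-and-XOR recurrence

    table = list(range(TABLE_SIZE))
    ran = 1
    t0 = time.perf_counter()
    for _ in range(N_UPDATES):
        ran = ((ran << 1) ^ (POLY if ran & (1 << 63) else 0)) & MASK64
        table[ran & (TABLE_SIZE - 1)] ^= ran    # update a pseudo-random table location
    elapsed = time.perf_counter() - t0
    print("GUP/s (pure Python, so very small): %.4f" % (N_UPDATES / elapsed / 1e9))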
G-FFT ( system performance )
Global FFT performs the same test as FFT but across the entire system by distributing the input vector in block fashion across all the processes. Unit: Giga Flops per Second
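Flop rates for FFT benchmarks are customarily derived from the 5*N*log2(N) operation-count convention rather than from operations actually executed. A single-process sketch of that rate calculation is shown below; the vector length is an assumption, and G-FFT itself runs the transform distributed across all processes.

    # Single-process illustration of turning an FFT timing into a Gflop/s figure.
    import math
    import time
    import numpy as np

    n = 1 << 22                                # assumed vector length (~4M complex points)
    rng = np.random.default_rng(2)
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    t0 = time.perf_counter()
    y = np.fft.fft(x)
    elapsed = time.perf_counter() - t0

    flops = 5.0 * n * math.log2(n)             # customary FFT operation count
    print("Gflop/s: %.2f" % (flops / elapsed / 1e9))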
EP-STREAM Triad ( per process )
The Embarrassingly Parallel STREAM benchmark is a simple synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple numerical vector kernels. It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is reported. Unit: Giga Bytes per Second
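The Triad kernel itself is just a[i] = b[i] + q*c[i], and STREAM counts 24 bytes of memory traffic per element (two 8-byte reads and one 8-byte write). A per-process NumPy sketch of the measurement follows; the array length and trial count are assumptions.

    # Per-process STREAM Triad sketch: a = b + q*c, best of several trials, in GB/s.
    import time
    import numpy as np

    n = 20_000_000                             # elements per array (about 160 MB each)
    q = 3.0
    b = np.full(n, 2.0)
    c = np.full(n, 1.0)
    a = np.empty(n)

    best = float("inf")
    for _ in range(5):                         # keep the fastest trial, as STREAM does
        t0 = time.perf_counter()
        np.multiply(c, q, out=a)               # a = q*c
        np.add(a, b, out=a)                    # a = b + q*c, without temporaries
        best = min(best, time.perf_counter() - t0)

    print("Triad GB/s: %.2f" % (24.0 * n / best / 1e9))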
EP-STREAM-sys ( system performance - derived )
The Embarrassingly Parallel STREAM benchmark is a simple synthetic benchmark program that measures sustainable memory bandwidth and the corresponding computation rate for simple numerical vector kernels. It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is multiplied by the number of processes to obtain this derived value. ( EP-STREAM Triad * MPI Processes ) Unit: Giga Bytes per Second
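For example, the Red Storm/XT3 entry submitted 11-06-07 lists an EP-STREAM Sys value of 77.13 TB/s over 25,600 MPI processes, which corresponds to a per-process EP-STREAM Triad figure of roughly 77,130 GB/s / 25,600, or about 3.0 GB/s.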
EP-DGEMM ( per process )
The Embarrassingly Parallel DGEMM benchmark measures the floating-point execution rate of a double-precision real matrix-matrix multiply performed by the DGEMM subroutine from the BLAS (Basic Linear Algebra Subprograms). It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is reported. Unit: Giga Flops per Second
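A multiply of two n-by-n double-precision matrices costs about 2*n^3 floating-point operations (n^3 multiplications and n^3 additions), which is the usual count for reporting the rate. The per-process sketch below uses NumPy, whose matmul typically dispatches to an optimized BLAS dgemm; the matrix order is an assumption.

    # Per-process DGEMM flop-rate sketch.
    import time
    import numpy as np

    n = 2000                                   # assumed matrix order
    rng = np.random.default_rng(3)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))

    t0 = time.perf_counter()
    C = A @ B                                  # double-precision matrix-matrix multiply
    elapsed = time.perf_counter() - t0

    print("Gflop/s: %.2f" % (2.0 * n**3 / elapsed / 1e9))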
Geometric mean
Geometric mean of the normalized results for G-HPL, G-RandomAccess, EP-STREAM-Sys, and G-FFT. These are the four performance results featured in the HPCC Awards. The normalization is done independently for each column rather than against a single machine's results. Consequently, the value of the mean will change over time as faster machines appear in the HPCC database. Unit: unitless
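A minimal sketch of the calculation, under the assumption that each column is normalized by the largest value appearing in that column among the runs being compared; with the rounded table values for the 65,536-process Blue Gene/L entry above, it approximately reproduces the listed mean of 0.9408.

    # Geometric mean of normalized results (the normalization reference is an assumption).
    from math import prod

    # (value, assumed column maximum) pairs: G-HPL, G-RandomAccess, EP-STREAM Sys, G-FFT
    entry = [(252.30, 259.21), (35.47, 35.47), (160.06, 160.06), (2.31, 2.87)]

    normalized = [value / column_max for value, column_max in entry]
    geo_mean = prod(normalized) ** (1.0 / len(normalized))
    print("geometric mean: %.4f" % geo_mean)   # about 0.94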
Random Ring Bandwidth ( per process )
Randomly Ordered Ring Bandwidth reports the bandwidth achieved in the ring communication pattern. The communicating processes are ordered randomly in the ring (with respect to the natural ordering of the MPI default communicator). The result is averaged over various random assignments of processes in the ring. Unit: Giga Bytes per second
Random Ring Latency ( per process )
Randomly-Ordered Ring Latency reports the latency in the ring communication pattern. The communicating processes are ordered randomlyly in the ring (with respect to the natural ordering of the MPI default communicator). The result is averaged over various random assignments of processes in the ring. Unit: micro-seconds
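For completeness, a minimal mpi4py sketch of a randomized ring exchange is shown below; the message size, repetition count, bandwidth accounting, and the use of mpi4py rather than the benchmark's own C harness are all assumptions for illustration.

    # Randomly-ordered ring exchange sketch (an illustration, not the HPCC harness).
    # Run with, e.g.: mpiexec -n 8 python ring_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Every rank builds the same random ordering by seeding identically.
    order = np.random.default_rng(seed=0).permutation(size)
    pos = int(np.where(order == rank)[0][0])
    right = int(order[(pos + 1) % size])       # neighbor to send to
    left = int(order[(pos - 1) % size])        # neighbor to receive from

    nbytes = 1 << 20                           # assumed 1 MiB message
    sendbuf = np.zeros(nbytes, dtype=np.uint8)
    recvbuf = np.empty(nbytes, dtype=np.uint8)
    reps = 20

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        comm.Sendrecv(sendbuf, dest=right, sendtag=0,
                      recvbuf=recvbuf, source=left, recvtag=0)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        # Count the bytes sent plus received by one process per iteration.
        print("per-process ring bandwidth: %.2f GB/s" % (2.0 * reps * nbytes / elapsed / 1e9))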



