Manufacturer/Processor Type, Speed, Count, Threads, Processes
Includes the manufacturer/processor type, processor speed, number of processors, threads, and number of processes.
Hovering over this column in each row displays additional information, including the manufacturer, system name, interconnect, MPI, affiliation, and submission date.

Run Type

Run Type indicates whether the benchmark was a base run or an optimized run.

Processors

Processors is the number of processors used in the benchmark, as entered in the submission form by the benchmark submitter.

G-HPL ( system performance )
HPL solves a randomly generated dense linear system of equations in double-precision (IEEE 64-bit) floating-point arithmetic using MPI. The linear system matrix is stored in a two-dimensional block-cyclic fashion, and multiple variants of code are provided for computational kernels and communication patterns. The solution method is LU factorization through Gaussian elimination with partial row pivoting, followed by backward substitution. Unit: Tera Flops per Second
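As a sketch of what HPL measures, a dense solve via LU with partial pivoting and backward substitution looks like the following serial Python fragment. The real benchmark performs the factorization block-wise over a 2-D process grid with tuned kernels; this is only an illustration of the algorithm, not HPL itself.

```python
def solve_lu(a, b):
    """Dense LU with partial row pivoting, then backward substitution.

    Serial sketch of the method HPL times; a is a list of row lists,
    b the right-hand side. Both are copied so inputs are untouched.
    """
    n = len(a)
    a = [row[:] for row in a]
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest remaining |a[i][k]| to row k
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        # eliminate column k below the diagonal
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    # backward substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / a[i][i]
    return x
```

HPL reports 2/3*n^3 + 2*n^2 flops divided by the solve time; the distributed version's difficulty lies in overlapping the broadcasts of pivot panels with the trailing-matrix updates.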
G-PTRANS (A=A+B^T, MPI) ( system performance )
PTRANS (A=A+B^T, MPI) implements a parallel matrix transpose for two-dimensional block-cyclic storage. It is an important benchmark because it heavily exercises the communications of the computer on a realistic problem where pairs of processors communicate with each other simultaneously. It is a useful test of the total communications capacity of the network. Unit: Giga Bytes per Second
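The core update PTRANS times is A = A + B^T. A serial sketch follows; in the benchmark the matrices are block-cyclically distributed, so forming B^T requires an all-to-all exchange of blocks among MPI ranks, which is what stresses the network.

```python
def ptrans_local(a, b):
    """Serial sketch of the PTRANS update A = A + B^T for square matrices.

    a and b are lists of row lists of equal dimension n x n. The parallel
    benchmark performs the same arithmetic, but the transpose turns into
    pairwise block exchanges between processors.
    """
    n = len(a)
    return [[a[i][j] + b[j][i] for j in range(n)] for i in range(n)]
```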
G-RandomAccess ( system performance )
Global RandomAccess, also called GUPS, measures the rate at which the computer can update pseudo-random locations of its memory; this rate is expressed in billions (giga) of updates per second (GUP/s). Unit: Giga Updates per Second
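A single-process toy version of the update loop conveys the idea. The real benchmark generates addresses from a specific 64-bit polynomial stream and runs across all MPI ranks with a table sized to half of total memory; the xorshift generator and small sizes here are stand-ins for illustration only.

```python
import time

def gups_sketch(log2_size=16, n_updates=100_000, seed=1):
    """Toy single-process RandomAccess (GUPS) kernel.

    Updates pseudo-random table locations with XOR and returns the
    achieved rate in GUP/s. Not the HPCC address stream.
    """
    size = 1 << log2_size
    mask = size - 1
    table = list(range(size))
    r = seed
    t0 = time.perf_counter()
    for _ in range(n_updates):
        # toy 64-bit xorshift stream in place of the HPCC polynomial stream
        r ^= (r << 13) & 0xFFFFFFFFFFFFFFFF
        r ^= r >> 7
        r ^= (r << 17) & 0xFFFFFFFFFFFFFFFF
        table[r & mask] ^= r          # the measured update
    elapsed = time.perf_counter() - t0
    return n_updates / elapsed / 1e9  # GUP/s
```

Because each update touches an unpredictable location, caches and prefetchers help very little, which is why GUPS figures are so far below peak memory rates.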
EP-STREAM Triad ( per process )
The Embarrassingly Parallel STREAM benchmark is a simple synthetic benchmark that measures sustainable memory bandwidth and the corresponding computation rate for simple numerical vector kernels. It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is reported. Unit: Giga Bytes per Second
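The reported kernel is the STREAM Triad, a[i] = b[i] + scalar * c[i]. A minimal serial sketch that also derives the bandwidth figure is below; the standard accounting charges three 8-byte operands per element. Pure Python is nowhere near achievable hardware bandwidth, so treat the returned number as illustrative.

```python
import time

def stream_triad(n=1_000_000, scalar=3.0):
    """Serial sketch of the STREAM Triad kernel and its bandwidth figure.

    Returns (result_array, rate_in_GB_per_s). STREAM counts 3 arrays of
    8-byte doubles moved per iteration: read b, read c, write a.
    """
    b = [1.0] * n
    c = [2.0] * n
    t0 = time.perf_counter()
    a = [b[i] + scalar * c[i] for i in range(n)]
    elapsed = time.perf_counter() - t0
    gbytes = 3 * 8 * n / 1e9
    return a, gbytes / elapsed
```

In EP mode every MPI process runs this loop simultaneously on its own arrays, so the per-process average reflects bandwidth under full memory-system contention.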
EP-STREAM-sys ( system performance - derived )
The Embarrassingly Parallel STREAM benchmark is a simple synthetic benchmark that measures sustainable memory bandwidth and the corresponding computation rate for simple numerical vector kernels. It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is multiplied by the number of processes to obtain this derived value. ( EP-STREAM Triad * MPI Processes ) Unit: Giga Bytes per Second
EP-DGEMM ( per process )
The Embarrassingly Parallel DGEMM benchmark measures the floating-point execution rate of a double-precision real matrix-matrix multiply performed by the DGEMM subroutine from the BLAS (Basic Linear Algebra Subprograms). It is run in an embarrassingly parallel manner: all computational processes perform the benchmark at the same time, and the arithmetic average rate is reported. Unit: Giga Flops per Second
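A naive triple loop shows the operation being timed and the standard 2*n^3 flop count used to compute the rate. The benchmark itself calls the platform's tuned BLAS DGEMM, which is orders of magnitude faster than this sketch.

```python
import time

def dgemm_rate(n=120):
    """Naive double-precision matrix multiply C = A * B with a GFlop/s figure.

    Serial illustration only; EP-DGEMM times the vendor BLAS routine.
    Returns (c, rate) where rate uses the conventional 2*n^3 flop count.
    """
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    c = [[0.0] * n for _ in range(n)]
    t0 = time.perf_counter()
    for i in range(n):
        for k in range(n):
            aik = a[i][k]           # hoist a[i][k] out of the inner loop
            for j in range(n):
                c[i][j] += aik * b[k][j]
    elapsed = time.perf_counter() - t0
    return c, (2.0 * n ** 3) / elapsed / 1e9
```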
G-FFT ( system performance )
Global FFT performs the same test as FFT but across the entire system, distributing the input vector in block fashion across all processes. Unit: Giga Flops per Second
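A recursive radix-2 FFT illustrates the computation; the conventional 5 * n * log2(n) flop count is what converts the measured time into a GFlop/s figure. The benchmark uses an optimized distributed FFT, not a toy recursion like this one.

```python
import cmath
import time

def fft_rate(log2n=10):
    """Recursive radix-2 FFT over a power-of-two-length complex vector.

    Returns (transform, rate_in_GFlop_per_s) using the conventional
    5 * n * log2(n) flop count. Serial sketch; G-FFT runs the transform
    block-distributed across all MPI processes.
    """
    def fft(x):
        n = len(x)
        if n == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])
        # twiddle factors applied to the odd half
        tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
        return [even[k] + tw[k] for k in range(n // 2)] + \
               [even[k] - tw[k] for k in range(n // 2)]
    n = 1 << log2n
    x = [complex(i % 7, 0) for i in range(n)]
    t0 = time.perf_counter()
    y = fft(x)
    elapsed = time.perf_counter() - t0
    return y, 5.0 * n * log2n / elapsed / 1e9
```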
Randomly Ordered Ring Bandwidth ( per process )
Randomly Ordered Ring Bandwidth reports the bandwidth achieved in the ring communication pattern. The communicating processes are ordered randomly in the ring (with respect to the natural ordering of the MPI default communicator). The result is averaged over various random assignments of processes in the ring. Unit: Giga Bytes per second
Randomly-Ordered Ring Latency ( per process )
Randomly-Ordered Ring Latency reports the latency in the ring communication pattern. The communicating processes are ordered randomly in the ring (with respect to the natural ordering of the MPI default communicator). The result is averaged over various random assignments of processes in the ring. Unit: micro-seconds
Geometric mean
Geometric mean of normalized results for G-HPL, G-RandomAccess, EP-STREAM-Sys, and G-FFT, the four performance results featured in the HPCC Awards. The normalization is done independently for each column rather than against a single machine's results; consequently, the value of the mean will change over time as faster machines appear in the HPCC database. Unit: unitless
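The table's last column can be reproduced along these lines. The page says only that normalization is done per column, so dividing each column by its maximum in the current database snapshot is an assumption of this sketch, as are the dictionary key names.

```python
import math

def hpcc_geometric_mean(rows):
    """Per-row geometric mean of per-column-normalized results.

    rows: list of dicts with the four featured results (hypothetical key
    names). Each column is normalized by its best value in the given
    snapshot, so the means shift as faster machines enter the database.
    """
    keys = ("g_hpl", "g_random_access", "ep_stream_sys", "g_fft")
    best = {k: max(r[k] for r in rows) for k in keys}
    out = []
    for r in rows:
        normalized = [r[k] / best[k] for k in keys]
        out.append(math.exp(sum(math.log(x) for x in normalized) / len(keys)))
    return out
```

The geometric mean rewards balance: a machine weak in any one of the four categories is pulled down multiplicatively, which is why the top entries below lead several columns at once.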







Geometric Mean - Optimized Runs Only - 35 Systems - Generated on Thu Aug 21 08:04:40 2014
System Information: Manufacturer / Processor Type / Processor Speed / Processor Count / Threads / Processes (each entry also lists system name, interconnect, MPI, affiliation, and submission date)
Results: G-HPL (TFlop/s) | G-RandomAccess (Gup/s) | EP-STREAM Sys (TB/s) | G-FFT (TFlop/s) | Geometric mean
Manufacturer: Cray Inc.
Processor Type: Cray X1E
Processor Speed: 1.13GHz
Processor Count: 248
Threads: 1
Processes: 248
System Name: mfeg8
Interconnect: Modified 2D Torus
MPI: mpt 2.4
Affiliation: Cray
Submission Date: 06-15-05
G-HPL: 3.39 TFlop/s | G-RandomAccess: 1.85 Gup/s | EP-STREAM Sys: 3.28 TB/s | G-FFT: -0.00 TFlop/s | Geometric mean: 0.0000
Manufacturer: NEC
Processor Type: SX-9
Processor Speed: 3.2GHz
Processor Count: 8
Threads: 1
Processes: 2
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.10
Affiliation: Japan Agency for Marine-Earth Science and Technology (JAMSTEC)
Submission Date: 11-16-09
G-HPL: 0.21 TFlop/s | G-RandomAccess: 0.16 Gup/s | EP-STREAM Sys: 1.52 TB/s | G-FFT: 0.00 TFlop/s | Geometric mean: 0.0000
Manufacturer: NEC
Processor Type: SX-9
Processor Speed: 3.2GHz
Processor Count: 16
Threads: 1
Processes: 2
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.10
Affiliation: Japan Agency for Marine-Earth Science and Technology (JAMSTEC)
Submission Date: 11-16-09
G-HPL: 0.59 TFlop/s | G-RandomAccess: 0.09 Gup/s | EP-STREAM Sys: 2.93 TB/s | G-FFT: 0.00 TFlop/s | Geometric mean: 0.0000
Manufacturer: NEC
Processor Type: NEC SX-7
Processor Speed: 0.552GHz
Processor Count: 32
Threads: 16
Processes: 2
System Name: NEC SX-7
Interconnect: non
MPI: MPI/SX 7.0.6
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.18 TFlop/s | G-RandomAccess: 0.15 Gup/s | EP-STREAM Sys: 0.90 TB/s | G-FFT: 0.01 TFlop/s | Geometric mean: 0.0001
Manufacturer: IBM
Processor Type: IBM Power5+
Processor Speed: 2.2GHz
Processor Count: 64
Threads: 1
Processes: 64
System Name: P5 P575+
Interconnect: HPS
MPI: poe 4.2.2.3
Affiliation: IBM
Submission Date: 05-08-06
G-HPL: 0.49 TFlop/s | G-RandomAccess: 0.26 Gup/s | EP-STREAM Sys: 0.77 TB/s | G-FFT: 0.02 TFlop/s | Geometric mean: 0.0001
Manufacturer: NEC
Processor Type: NEC SX-8
Processor Speed: 2GHz
Processor Count: 40
Threads: 8
Processes: 5
System Name: NEC SX-7C
Interconnect: IXS
MPI: MPI/SX 7.1.3
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.30 TFlop/s | G-RandomAccess: 0.00 Gup/s | EP-STREAM Sys: 1.44 TB/s | G-FFT: 0.03 TFlop/s | Geometric mean: 0.0000
Manufacturer: IBM
Processor Type: IBM Power5+
Processor Speed: 2.2GHz
Processor Count: 128
Threads: 1
Processes: 128
System Name: P5 P575+
Interconnect: HPS
MPI: poe 4.2.2.3
Affiliation: IBM
Submission Date: 05-08-06
G-HPL: 0.99 TFlop/s | G-RandomAccess: 0.44 Gup/s | EP-STREAM Sys: 1.53 TB/s | G-FFT: 0.04 TFlop/s | Geometric mean: 0.0002
Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 1024
Threads: 1
Processes: 1024
System Name: Blue Gene/L
Interconnect: Custom
MPI: MPICH 1.0 customized for Blue Gene/L
Affiliation: Blue Gene Computational Center at IBM T.J. Watson Research Center
Submission Date: 04-11-05
G-HPL: 1.42 TFlop/s | G-RandomAccess: 0.13 Gup/s | EP-STREAM Sys: 0.86 TB/s | G-FFT: 0.05 TFlop/s | Geometric mean: 0.0002
Manufacturer: NEC
Processor Type: NEC SX-9
Processor Speed: 3.2GHz
Processor Count: 32
Threads: 16
Processes: 2
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.0/ISC
Affiliation: TOHOKU UNIVERSITY
Submission Date: 11-06-08
G-HPL: 1.83 TFlop/s | G-RandomAccess: 0.10 Gup/s | EP-STREAM Sys: 5.54 TB/s | G-FFT: 0.06 TFlop/s | Geometric mean: 0.0002
Manufacturer: NEC
Processor Type: NEC SX-7
Processor Speed: 0.552GHz
Processor Count: 32
Threads: 1
Processes: 32
System Name: NEC SX-7
Interconnect: non
MPI: MPI/SX 7.0.6
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.26 TFlop/s | G-RandomAccess: 0.26 Gup/s | EP-STREAM Sys: 0.88 TB/s | G-FFT: 0.08 TFlop/s | Geometric mean: 0.0001
Manufacturer: NEC
Processor Type: NEC SX-8
Processor Speed: 2GHz
Processor Count: 40
Threads: 1
Processes: 40
System Name: NEC SX-7C
Interconnect: IXS
MPI: MPI/SX 7.1.3
Affiliation: Tohoku University, Information Synergy Center
Submission Date: 03-24-06
G-HPL: 0.61 TFlop/s | G-RandomAccess: 0.01 Gup/s | EP-STREAM Sys: 1.44 TB/s | G-FFT: 0.09 TFlop/s | Geometric mean: 0.0001
Manufacturer: Cray Inc.
Processor Type: Cray X1E
Processor Speed: 1.13GHz
Processor Count: 1008
Threads: 1
Processes: 1008
System Name: X1
Interconnect: Cray Modified 2D torus
MPI: MPT
Affiliation: DOE/Office of Science/ORNL
Submission Date: 11-02-05
G-HPL: 12.27 TFlop/s | G-RandomAccess: 7.69 Gup/s | EP-STREAM Sys: 12.69 TB/s | G-FFT: 0.25 TFlop/s | Geometric mean: 0.0021
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Laboratory, DOE Office of Science
Submission Date: 11-10-05
G-HPL: 20.42 TFlop/s | G-RandomAccess: 0.66 Gup/s | EP-STREAM Sys: 29.32 TB/s | G-FFT: 0.78 TFlop/s | Geometric mean: 0.0021
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Laboratories - DOE Office of Science
Submission Date: 11-12-05
G-HPL: 20.42 TFlop/s | G-RandomAccess: 0.66 Gup/s | EP-STREAM Sys: 29.32 TB/s | G-FFT: 0.78 TFlop/s | Geometric mean: 0.0021
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 5208
Threads: 1
Processes: 5208
System Name: XT3
Interconnect: Cray Seastar
MPI: xt-mpt/1.3.07
Affiliation: Oak Ridge National Lab - DOE Office of Science
Submission Date: 11-12-05
G-HPL: 20.34 TFlop/s | G-RandomAccess: 0.69 Gup/s | EP-STREAM Sys: 29.22 TB/s | G-FFT: 0.86 TFlop/s | Geometric mean: 0.0022
HPC Challenge Award Winner
2007 - 3rd place - G-HPL: 67 Tflop/s
2007 - 3rd place - G-RandomAccess: 17.3 GUPS
2006 - 2nd place - G-HPL: 67 Tflop/s
2006 - 2nd place - G-RandomAccess: 17 GUPS

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 32768
Threads: 1
Processes: 16384
System Name: Blue Gene/L
Interconnect: Blue Gene Custom Interconnect
MPI: MPICH 1.1
Affiliation: IBM T.J. Watson Research Center
Submission Date: 11-04-05
G-HPL: 67.12 TFlop/s | G-RandomAccess: 17.29 Gup/s | EP-STREAM Sys: 39.98 TB/s | G-FFT: 0.99 TFlop/s | Geometric mean: 0.0073
HPC Challenge Award Winner
2007 - 3rd place - G-FFT: 1.1 Tflop/s
2006 - 2nd place - G-FFT: 1.12 Tflop/s
2006 - 3rd place - G-RandomAccess: 10 GUPS

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.6GHz
Processor Count: 10404
Threads: 1
Processes: 10404
System Name: XT3 Dual-Core
Interconnect: Cray SeaStar
MPI: xt-mpt 1.5.25
Affiliation: Oak Ridge National Lab
Submission Date: 11-06-06
G-HPL: 43.51 TFlop/s | G-RandomAccess: 10.67 Gup/s | EP-STREAM Sys: 26.54 TB/s | G-FFT: 1.12 TFlop/s | Geometric mean: 0.0054
HPC Challenge Award Winner
2008 - 3rd place - G-RandomAccess: 34 GUPS
2007 - 2nd place - EP-STREAM system: 77 TB/s
2007 - 2nd place - G-RandomAccess: 33.6 GUPS
2007 - 2nd place - G-HPL: 94 Tflop/s

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12800
Threads: 1
Processes: 25600
System Name: Red Storm/XT3
Interconnect: Seastar
MPI: xt-mpt/1.5.39 based on MPICH 2.0
Affiliation: DOE/NNSA/Sandia National Laboratories
Submission Date: 11-06-07
G-HPL: 93.58 TFlop/s | G-RandomAccess: 33.56 Gup/s | EP-STREAM Sys: 77.13 TB/s | G-FFT: 1.52 TFlop/s | Geometric mean: 0.0124
Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12960
Threads: 1
Processes: 25920
System Name: Red Storm/XT3
Interconnect: Cray custom
MPI: MPICH 2 v1.0.2
Affiliation: NNSA/Sandia National Laboratories
Submission Date: 11-10-06
G-HPL: 90.99 TFlop/s | G-RandomAccess: 29.82 Gup/s | EP-STREAM Sys: 53.89 TB/s | G-FFT: 1.53 TFlop/s | Geometric mean: 0.0109
HPC Challenge Award Winner
2008 - 2nd place - G-HPL: 259 Tflop/s
2007 - 1st place - G-HPL: 259 Tflop/s
2006 - 1st place - G-HPL: 259 Tflop/s
2005 - 1st place - G-HPL: 259 Tflop/s

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 131072
Threads: 1
Processes: 65536
System Name: Blue Gene/L
Interconnect: Custom Torus / Tree
MPI: MPICH2 1.0.1
Affiliation: National Nuclear Security Administration
Submission Date: 11-02-05
G-HPL: 259.21 TFlop/s | G-RandomAccess: 32.98 Gup/s | EP-STREAM Sys: 159.90 TB/s | G-FFT: 2.23 TFlop/s | Geometric mean: 0.0210
HPC Challenge Award Winner
2008 - 2nd place - EP-STREAM system: 160 TB/s
2008 - 2nd place - G-RandomAccess: 35 GUPS
2007 - 1st place - G-RandomAccess: 35.5 GUPS
2007 - 1st place - EP-STREAM system: 160 TB/s
2007 - 2nd place - G-FFT: 2.3 Tflop/s
2006 - 1st place - G-RandomAccess: 35 GUPS
2006 - 1st place - EP-STREAM system: 160 TB/s
2006 - 1st place - G-FFT: 2.3 Tflop/s
2005 - 1st place - G-FFT: 2.3 Tflop/s
2005 - 1st place - EP-STREAM system: 160 TB/s
2005 - 1st place - G-RandomAccess: 35 GUPS

Manufacturer: IBM
Processor Type: IBM PowerPC 440
Processor Speed: 0.7GHz
Processor Count: 131072
Threads: 1
Processes: 65536
System Name: Blue Gene/L
Interconnect: Custom Torus / Tree
MPI: MPICH2 1.0.1
Affiliation: National Nuclear Security Administration
Submission Date: 11-02-05
G-HPL: 252.30 TFlop/s | G-RandomAccess: 35.47 Gup/s | EP-STREAM Sys: 160.06 TB/s | G-FFT: 2.31 TFlop/s | Geometric mean: 0.0214
Manufacturer: NEC
Processor Type: NEC SX-9
Processor Speed: 3.2GHz
Processor Count: 256
Threads: 1
Processes: 256
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.0/ISC
Affiliation: TOHOKU UNIVERSITY
Submission Date: 11-06-08
G-HPL: 20.19 TFlop/s | G-RandomAccess: 1.40 Gup/s | EP-STREAM Sys: 43.43 TB/s | G-FFT: 2.38 TFlop/s | Geometric mean: 0.0037
HPC Challenge Award Winner
2008 - 2nd place - G-FFT: 2.87 Tflop/s
2007 - 1st place - G-FFT: 2.8 Tflop/s

Manufacturer: Cray Inc.
Processor Type: AMD Opteron
Processor Speed: 2.4GHz
Processor Count: 12960
Threads: 1
Processes: 25920
System Name: Red Storm/XT3
Interconnect: Seastar
MPI: xt-mpt/1.5.39 based on MPICH 2.0
Affiliation: DOE/NNSA/Sandia National Laboratories
Submission Date: 11-06-07
G-HPL: 93.24 TFlop/s | G-RandomAccess: 29.46 Gup/s | EP-STREAM Sys: 69.67 TB/s | G-FFT: 2.87 TFlop/s | Geometric mean: 0.0137
HPC Challenge Award Winner
2011 - 2nd place - G-RandomAccess: 117 GUPS
2010 - 1st place - G-RandomAccess: 117 GUPS
2010 - 3rd place - G-HPL: 368 Tflop/s
2009 - 1st place - G-RandomAccess: 117 GUPS
2009 - 3rd place - G-HPL: 368 Tflop/s

Manufacturer: IBM
Processor Type: Power PC 450
Processor Speed: 0.85GHz
Processor Count: 131072
Threads: 4
Processes: 32768
System Name: Dawn
Interconnect: Custom Torus + Tree + Barrier
MPI: MPICH2 1.0.7
Affiliation: NNSA - Lawrence Livermore National Laboratory
Submission Date: 11-11-09
G-HPL: 367.82 TFlop/s | G-RandomAccess: 117.13 Gup/s | EP-STREAM Sys: 130.41 TB/s | G-FFT: 3.20 TFlop/s | Geometric mean: 0.0327
HPC Challenge Award Winner
2011 - 2nd place - EP-STREAM system: 398 TB/s
2010 - 1st place - EP-STREAM system: 398 TB/s
2010 - 3rd place - G-RandomAccess: 38 GUPS
2009 - 1st place - EP-STREAM system: 398 TB/s
2009 - 3rd place - G-RandomAccess: 38 GUPS

Manufacturer: Cray
Processor Type: AMD Opteron
Processor Speed: 2.6GHz
Processor Count: 223112
Threads: 2
Processes: 111556
System Name: XT5
Interconnect: Seastar
MPI: MPT 3.4.2
Affiliation: Oak Ridge National Laboratory
Submission Date: 11-10-09
G-HPL: 1467.66 TFlop/s | G-RandomAccess: 37.69 Gup/s | EP-STREAM Sys: 398.27 TB/s | G-FFT: 3.88 TFlop/s | Geometric mean: 0.0483
HPC Challenge Award Winner
2011 - 3rd place - G-RandomAccess: 103 GUPS
2010 - 2nd place - G-RandomAccess: 103 GUPS
2009 - 2nd place - G-RandomAccess: 103 GUPS
2008 - 1st place - G-FFT: 5.08 Tflop/s
2008 - 1st place - G-RandomAccess: 103 GUPS

Manufacturer: IBM
Processor Type: PowerPC 450
Processor Speed: 0.85GHz
Processor Count: 32768
Threads: 4
Processes: 32768
System Name: Blue Gene/P
Interconnect: Torus
MPI: MPICH 2
Affiliation: Argonne National Lab - LCF
Submission Date: 11-17-08
G-HPL: 173.36 TFlop/s | G-RandomAccess: 103.18 Gup/s | EP-STREAM Sys: 130.42 TB/s | G-FFT: 5.08 TFlop/s | Geometric mean: 0.0295
HPC Challenge Award Winner
2010 - 3rd place - G-FFT: 7.5 Tflop/s
2010 - 3rd place - EP-STREAM system: 233 TB/s
2009 - 3rd place - G-FFT: 7 Tflop/s
2009 - 3rd place - EP-STREAM system: 173 TB/s

Manufacturer: NEC
Processor Type: SX-9
Processor Speed: 3.2GHz
Processor Count: 960
Threads: 1
Processes: 960
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.10
Affiliation: Japan Agency for Marine-Earth Science and Technology (JAMSTEC)
Submission Date: 11-11-09
G-HPL: 79.55 TFlop/s | G-RandomAccess: 2.07 Gup/s | EP-STREAM Sys: 172.98 TB/s | G-FFT: 6.94 TFlop/s | Geometric mean: 0.0106
HPC Challenge Award Winner
2010 - 2nd place - G-FFT: 10.7 Tflop/s
2009 - 2nd place - G-FFT: 11 Tflop/s

Manufacturer: Cray, Inc.
Processor Type: AMD Opteron
Processor Speed: 2.6GHz
Processor Count: 98304
Threads: 3
Processes: 32768
System Name: XT5
Interconnect: SeaStar 2+
MPI: MPT 3.4.2
Affiliation: National Institute for Computational Sciences
Submission Date: 11-02-09
G-HPL: 657.62 TFlop/s | G-RandomAccess: 18.50 Gup/s | EP-STREAM Sys: 127.20 TB/s | G-FFT: 7.53 TFlop/s | Geometric mean: 0.0293
HPC Challenge Award Winner
2011 - 3rd place - G-FFT: 10.7 Tflop/s
2010 - 1st place - G-FFT: 11.88 Tflop/s
2009 - 1st place - G-FFT: 11 Tflop/s

Manufacturer: Cray
Processor Type: AMD Opteron
Processor Speed: 2.6GHz
Processor Count: 196608
Threads: 3
Processes: 65536
System Name: XT5
Interconnect: Seastar
MPI: MPT 3.4.2
Affiliation: Oak Ridge National Laboratory
Submission Date: 11-10-09
G-HPL: 1338.67 TFlop/s | G-RandomAccess: 36.43 Gup/s | EP-STREAM Sys: 243.32 TB/s | G-FFT: 10.70 TFlop/s | Geometric mean: 0.0533
HPC Challenge Award Winner
2011 - 2nd place - G-FFT: 11.9 Tflop/s

Manufacturer: NEC
Processor Type: SX-9
Processor Speed: 3.2GHz
Processor Count: 1280
Threads: 1
Processes: 1280
System Name: SX-9
Interconnect: IXS
MPI: MPI/SX 8.0.12a
Affiliation: Japan Agency for Marine-Earth Science and Technology (JAMSTEC)
Submission Date: 11-11-10
G-HPL: 100.28 TFlop/s | G-RandomAccess: 2.58 Gup/s | EP-STREAM Sys: 233.38 TB/s | G-FFT: 11.88 TFlop/s | Geometric mean: 0.0146
Manufacturer: Fujitsu Ltd.
Processor Type: Fujitsu SPARC64 VIIIfx
Processor Speed: 2GHz
Processor Count: 147456
Threads: 8
Processes: 18432
System Name: K computer
Interconnect: Tofu interconnect
MPI: Parallelnavi Technical Computing Language V1.0L20
Affiliation: RIKEN Advanced Institute for Computational Science (AICS)
Submission Date: 10-31-11
G-HPL: 2114.19 TFlop/s | G-RandomAccess: 77.61 Gup/s | EP-STREAM Sys: 797.38 TB/s | G-FFT: 32.50 TFlop/s | Geometric mean: 0.1282
HPC Challenge Award Winner
2011 - 1st place - G-FFT: 34.7 Tflop/s
2011 - 1st place - G-HPL: 2,118 Tflop/s
2011 - 1st place - G-RandomAccess: 121 GUPS
2011 - 1st place - EP-STREAM system: 812 TB/s

Manufacturer: Fujitsu Ltd.
Processor Type: Fujitsu SPARC64 VIIIfx
Processor Speed: 2GHz
Processor Count: 147456
Threads: 8
Processes: 18432
System Name: K computer
Interconnect: Tofu interconnect
MPI: Parallelnavi Technical Computing Language V1.0L20
Affiliation: RIKEN Advanced Institute for Computational Science (AICS)
Submission Date: 11-08-11
G-HPL: 2117.70 TFlop/s | G-RandomAccess: 121.10 Gup/s | EP-STREAM Sys: 812.13 TB/s | G-FFT: 34.72 TFlop/s | Geometric mean: 0.1464
Manufacturer: IBM
Processor Type: IBM Power7 Quad-Chip module
Processor Speed: 3.836GHz
Processor Count: 1470
Threads: 32
Processes: 1470
System Name: IBM Power775
Interconnect: IBM Hub Chip integrated interconnect
MPI: IBM PE MPI release 1206
Affiliation: IBM Development Engineering - DARPA Trial Subset
Submission Date: 07-15-12
G-HPL: 1067.79 TFlop/s | G-RandomAccess: 1571.91 Gup/s | EP-STREAM Sys: 389.99 TB/s | G-FFT: 94.86 TFlop/s | Geometric mean: 0.2507
Manufacturer: IBM
Processor Type: IBM Power7
Processor Speed: 3.836GHz
Processor Count: 1989
Threads: 32
Processes: 1989
System Name: Power 775
Interconnect: Custom IBM Hub Chip
MPI: IBM PE v1209
Affiliation: IBM Development Engineering
Submission Date: 11-08-12
G-HPL: 1343.67 TFlop/s | G-RandomAccess: 2020.77 Gup/s | EP-STREAM Sys: 525.41 TB/s | G-FFT: 132.66 TFlop/s | Geometric mean: 0.3312
Manufacturer: Fujitsu
Processor Type: Fujitsu SPARC64 VIIIfx
Processor Speed: 2GHz
Processor Count: 663552
Threads: 8
Processes: 82944
System Name: K computer
Interconnect: Tofu Interconnect
MPI: Parallelnavi Technical Computing Language V1.0L20
Affiliation: RIKEN Advanced Institute for Computational Science
Submission Date: 10-23-12
G-HPL: 9795.56 TFlop/s | G-RandomAccess: 471.94 Gup/s | EP-STREAM Sys: 3857.32 TB/s | G-FFT: 205.94 TFlop/s | Geometric mean: 0.6952

 

Note:
Blank fields in the table above are from early benchmark runs that did not include that individual benchmark, in particular G-RandomAccess, G-FFT, and EP-DGEMM.




