This research investigates a number of innovative approaches and techniques intended to enable revolutionary advances in the state of the art of compiler testing. These approaches and techniques encompass the most important stages undergone by any non-trivial compilation system:

  1. Target system characterization via micro-benchmarks, system configuration inquiry, and so on. This includes a full specification of the metadata used to configure and characterize various computing architectures.
  2. A comprehensive benchmark suite ranging from simple application kernels to nearly complete application codes.
  3. A testing harness and database for ongoing monitoring of development progress and elimination of potential regressions.
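The micro-benchmark idea in item 1 can be illustrated with a toy sketch: time strided traversals of progressively larger arrays and look for jumps in per-access cost, which suggest a working set outgrowing a cache level. This is an illustration only, not the project's characterization tool; a production tool would be written in a compiled language with precise control of memory layout, and every name below is hypothetical.

```python
# Toy cache-size probe: average time per element access as the
# working set grows. All names are illustrative assumptions.
import time

def time_traversal(n_bytes, stride=64, repeats=50):
    """Return average seconds per access for an array of roughly n_bytes."""
    n = max(n_bytes // 8, stride)          # assume ~8 bytes per slot
    data = list(range(n))
    indices = range(0, n, stride // 8 or 1)
    start = time.perf_counter()
    total = 0
    for _ in range(repeats):
        for i in indices:
            total += data[i]               # touch one slot per stride
    elapsed = time.perf_counter() - start
    return elapsed / (repeats * len(indices))

if __name__ == "__main__":
    # Per-access time typically rises once the array exceeds a cache level.
    for kb in (16, 64, 256, 1024, 4096):
        t = time_traversal(kb * 1024)
        print(f"{kb:5d} KiB: {t * 1e9:7.1f} ns/access")
```

Because Python lists store pointers rather than contiguous values, the absolute numbers are noisy; the sketch only conveys the measurement principle.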

We are collecting and creating a suite of benchmarks and applications with which to evaluate compilers. We focus here on our innovative claims in each of our six thrust areas.

  1. Metrics. Our goal in the first phase was to guide the compiler development teams in developing architectural characterization tools that will precisely, accurately, and completely describe the important characteristics of a system, taking into account the effects of system software (compilers, libraries, operating system) that can affect application performance.
  2. Compiler Configuration and System Characterization. The compiler environments can be guided by system information contained in configuration and characterization files. The system configuration file will contain information provided by the system vendor and/or system installation that could not be discovered automatically or would be inappropriate for automatic discovery. Examples of the information in a system configuration file include the number of nodes (for a multicomputer), processors per node (for multiprocessor nodes), cores per processor (for multicore processors), and the instruction set of the processor or processors (for heterogeneous nodes/processors).
  3. Benchmark and Application Evaluation Suites. Blackjack will use a number of existing benchmarks and applications to create an initial, thorough evaluation suite for performance, correctness, and productivity. Collectively, the members of the Blackjack team have access to a large number of benchmark and application suites from DARPA, DOD, and DOE.
  4. Evaluation Harness and Database. To evaluate the compiler offerings, Blackjack will design and implement a number of tools that implement the evaluation methodology. First, to enforce fairness across the evaluation, we will develop a Blackjack evaluation harness that tests the compiler systems using the benchmark and application suites. Second, to enable easy access to data across multiple dimensions, we will create a restricted-access evaluation database. This database will track the metrics of each compiler historically across all evaluations.
  5. Evaluation System Selection and Characterization. In cooperation with DARPA, we will select three target systems for our evaluation. Collectively, we have long experience with a wide variety of HPC systems and can direct the selection to comprise a broad range of systems that best test the richness of the AACE compiler environments. In addition, we have access to a collection of large-scale clusters and supercomputers for scaling tests.
  6. Compiler Evaluations. The overall goal and expected outcome for this thrust is to successfully measure the benchmark metrics on all of the selected compiler systems.
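The system configuration file of thrust 2 might be encoded as in the following sketch. The JSON format, the field names, and the `load_config` helper are assumptions made for illustration; they are not the project's actual schema.

```python
# Hypothetical system configuration file (thrust 2). Field names are
# illustrative assumptions, not the project's actual schema.
import json

EXAMPLE_CONFIG = """
{
  "system_name": "example-cluster",
  "nodes": 128,
  "processors_per_node": 2,
  "cores_per_processor": 8,
  "instruction_sets": ["x86-64"]
}
"""

def load_config(text):
    """Parse a configuration file and derive the total core count."""
    cfg = json.loads(text)
    cfg["total_cores"] = (cfg["nodes"]
                          * cfg["processors_per_node"]
                          * cfg["cores_per_processor"])
    return cfg

if __name__ == "__main__":
    cfg = load_config(EXAMPLE_CONFIG)
    print(cfg["total_cores"])   # 128 nodes * 2 processors * 8 cores = 2048
```

Keeping vendor-supplied facts in a declarative file like this lets the compiler environment consume them uniformly alongside automatically discovered characteristics.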
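A minimal sketch of the thrust-4 evaluation database, assuming an SQLite backing store; the schema, table name, and function names are illustrative assumptions rather than the project's design. It shows the key property described above: metrics for each compiler are tracked historically, so regressions across evaluations can be spotted.

```python
# Sketch of an evaluation database that records compiler metrics per
# benchmark over time (schema and names are illustrative assumptions).
import sqlite3

def open_db(path=":memory:"):
    """Open the database and create the results table if needed."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS results (
                    compiler TEXT, benchmark TEXT, metric TEXT,
                    value REAL, run_date TEXT)""")
    return db

def record(db, compiler, benchmark, metric, value, run_date):
    """Store one measured metric value from one evaluation run."""
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
               (compiler, benchmark, metric, value, run_date))

def history(db, compiler, benchmark, metric):
    """Return (run_date, value) pairs, oldest first, to spot regressions."""
    return db.execute(
        "SELECT run_date, value FROM results "
        "WHERE compiler=? AND benchmark=? AND metric=? ORDER BY run_date",
        (compiler, benchmark, metric)).fetchall()

if __name__ == "__main__":
    db = open_db()
    record(db, "compilerA", "kernel1", "runtime_s", 12.0, "2014-01-01")
    record(db, "compilerA", "kernel1", "runtime_s", 13.5, "2014-02-01")
    # The rising runtime across the two dated runs flags a regression.
    print(history(db, "compilerA", "kernel1", "runtime_s"))
```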

Apr 16 2014