As the hybrid CPU/GPU computing paradigm drives the evolution of computational hardware into the petascale era, computing architectures are changing rapidly. Equally important, however, are the programming tools, applications, and algorithms that must keep pace with the ever-growing demand for performance. This myriad of hardware/software configurations presents unique challenges, requiring the testing and development of applications that are often tailored to the platform on which they run. For this reason, it is imperative that we have access to a wide range of computing resources in order to conduct our cutting-edge research.
ICL has multiple heterogeneous systems in house, and access to numerous architectures around the country, due in large part to our many partners and collaborators. Locally, we maintain systems ranging from individual desktops to large, networked clusters.
Below is a summary of the local computing resources used by ICL:

Hybrid Systems
In addition to these resources, we have access to several server-class machines and HPC clusters within the EECS department. These clusters comprise over 100 machines spanning multiple architectures. All of our clusters are arranged in the classic Beowulf configuration, in which machines are connected by low-latency, high-speed network switches.
ICL also has access to many remote resources to help keep us at the forefront of enabling technology research, including some machines that regularly appear on the TOP500 list of the world’s fastest supercomputers. The recent modernization of the DOE’s National Center for Computational Sciences (NCCS), just 30 minutes away at the Oak Ridge National Laboratory (ORNL), has enabled us to leverage our ORNL collaborations to take advantage of what has become one of the world’s fastest scientific computing facilities. The NCCS houses Jaguar, a Cray XT5 that was the third fastest supercomputer in the world in mid-2011. The National Institute for Computational Sciences (NICS), another computing facility at ORNL, houses Kraken, UT’s Cray XT5 system, which is one of the world’s fastest open-science supercomputers. We also have access to resources on XSEDE — the successor to TeraGrid — and France’s Grid5000.
The following are some of the remote systems and architectures that we utilize: