MPI-Connect: integrating MPI applications

What's new


A software system that allows independent MPI applications to inter-communicate even if they are running under different MPI implementations using different language bindings.

Rationale behind the project

Under MPI-1, an application cannot communicate with processes outside its MPI_COMM_WORLD once that communicator has formed (without resorting to some other entity). The process model is therefore purely static. Such a model has its advantages, but its main disadvantages are:

- Although the process model itself is simple, the diverse methods available for starting jobs on MPPs meant that MPI-1 declared no standard way to launch jobs (even the support application mpirun varies between implementations).
- Many message-passing users came from network/cluster computing environments and may therefore feel restricted by MPI and by many of the MPP run-time environments.
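The closed-world restriction described above is visible in any standard MPI-1 program: every send and receive names a rank inside MPI_COMM_WORLD (or a communicator derived from it), and there is no standard way to reach a process started by a different mpirun. A minimal C sketch (standard MPI-1 calls only, not MPI-Connect's own API):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* All ranks 0..size-1 are fixed when the job starts; no new
       processes can join, and no outside process can be addressed. */
    if (size >= 2) {
        if (rank == 0) {
            int msg = 42;
            /* Destination "1" can only ever be rank 1 of this
               MPI_COMM_WORLD, never a process in another MPI job. */
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int msg;
            MPI_Status status;
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}
```

MPI-Connect's purpose is precisely to lift this restriction, letting independently started MPI jobs (possibly under different implementations) name and reach each other.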

This project aims to give MPI users the capabilities and functionality they have become accustomed to from systems like PVM or LAM, including some features that were not even considered by the MPI-2 Forum.

Goals of the project

The main goal of the MPI-Connect project is to allow different MPP vendor MPI implementations to inter-communicate to aid the use of multiple MPPs in solving challenging HPC problems.

Selected requirements and goals:

Targeted users of the project

Any user who requires multi-part, dynamic MPI applications.

Current work

Press release for the CEWES MSRC project that won the "Most Effective Engineering Methodology" award for the HPC Challenge at SC98. This project used MPI-Connect to perform the simulations across computers located at CEWES MSRC and ASC MSRC.

PowerPoint presentation on MPI-Connect

Graham Fagg and Kevin London, MPI Interconnection and Control, CEWES MSRC Technical Report 98-42.

SC 97 paper on SNIPE, which provides the name-lookup and intercommunication facilities for MPI-Connect.

Related projects and papers



EuroPVM/MPI 97 paper on PVMPI, the forerunner of MPI-Connect. PVMPI used PVM for control of, and intercommunication between, MPI applications.


MPI-Connect will be released in public form in the first quarter of 1999.
It should support C and Fortran 77 bindings under MPICH, LAM, IBM MPI, and SGI MPI.
Please watch for announcements on comp.parallel.pvm and comp.parallel.mpi.

Project team (ICL/UTK/ORNL)

Project design, implementation, support, and documentation: Graham Fagg

Testing (beta) team : Kevin London

Management and support: Jack Dongarra, Shirley V. Browne, and Al Geist.

Last updated Dec 14th 1998 by Shirley.