MPI-Connect is a software package that provides interoperability between
different MPI implementations.  MPI-Connect is transparent to MPI
applications: it allows intercommunication between different MPI
implementations, or between instances of the same implementation, using
normal MPI communication calls.  MPI-Connect uses the MPI profiling interface
to intercept MPI calls and determine whether the communication stays on the
machine or goes off it.  If the communication is on the machine, the native
MPI routine is called; if it is off the machine, MPI-Connect handles the
communication with the remote system.  For an MPI
application to interoperate with another MPI application using MPI-Connect,
the following two calls are required to initialize intercommunication:
  MPI_Conn_Register(appname, comm, &handle)
  MPI_Conn_Intercomm_create(handle, appname, &intercom)
MPI_Conn_Register registers the application with the MPI-Connect system.
MPI_Conn_Intercomm_create sets up an intercommunicator with another registered
MPI application.  After this initialization, the application can
intercommunicate by using the intercommunicator in normal MPI calls.
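As a minimal sketch, the following C fragment shows how an application
might initialize MPI-Connect and communicate with a peer.  The header name
mpi_connect.h, the handle type, and the application names are illustrative
assumptions; only the two MPI_Conn_* calls come from the description above:
  /* Minimal sketch of MPI-Connect initialization.  The header name,
   * handle type, and application names are assumptions.             */
  #include <mpi.h>
  #include "mpi_connect.h"            /* assumed header name */

  int main(int argc, char **argv)
  {
      MPI_Comm intercomm;
      int      handle, rank, value = 42;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Register with the MPI-Connect system under a well-known name,
       * then create an intercommunicator to the peer application.   */
      MPI_Conn_Register("app_one", MPI_COMM_WORLD, &handle);
      MPI_Conn_Intercomm_create(handle, "app_two", &intercomm);

      /* From here on, the intercommunicator is used in normal MPI
       * calls; MPI-Connect routes any off-machine traffic itself.   */
      if (rank == 0)
          MPI_Send(&value, 1, MPI_INT, 0, 0, intercomm);

      MPI_Finalize();
      return 0;
  }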
The MPI-Connect system includes a client library which implements the above
routines and the MPI profiling routines for intercommunication, and a server
that listens for communication requests. 
 
Example applications (Application 1, Application 2, and Application 3) are
available in both C and Fortran.
 
MPI-Connect-IO is an extension of MPI-Connect for situations where
distributed MPI applications need shared access to the same remote files.
MPI-Connect-IO can be used either together with MPI-Connect or in standalone
mode.  MPI-Connect-IO provides the following routines:
  MPI_Conn_getfile(globalfname, localfname, outsize, comm)
  MPI_Conn_getfile_view(globalfname, localfname, my_app, num_apps, dtype,
                        outsize, comm)
  MPI_Conn_releasefile(globalfname)
MPI_Conn_getfile provides a replicated copy of the specified global file to
the calling process.  MPI_Conn_getfile_view is a collective call that splits
the global file into contiguous pieces: num_apps determines the size of the
pieces, and my_app determines which piece the calling process receives.
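As an illustration, the fragment below sketches how one of several
cooperating applications might fetch its piece of a shared global file.
The header name, the file names, and the use of MPI_BYTE for dtype are
assumptions; only the MPI_Conn_* calls come from the list above:
  /* Sketch: one of num_apps applications fetches its contiguous piece
   * of a shared remote file.  Header and file names are assumptions. */
  #include <mpi.h>
  #include "mpi_connect_io.h"         /* assumed header name */

  void fetch_my_piece(int my_app, int num_apps)
  {
      int outsize;

      /* Collective call: the global file is split into num_apps
       * contiguous pieces; this process receives piece my_app as a
       * local file, with its size returned in outsize.              */
      MPI_Conn_getfile_view("/global/input.dat", "input.local",
                            my_app, num_apps, MPI_BYTE,
                            &outsize, MPI_COMM_WORLD);

      /* ... read and process input.local ... */

      MPI_Conn_releasefile("/global/input.dat");
  }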
In addition to the above routines, MPI-Connect-IO may be used together
with MPI-Connect so that MPI applications may access shared files using
normal MPI-2 IO calls.  MPI-Connect-IO uses the MPI profiling library to
intercept MPI-2 IO calls.  Arbitrary derived data types and semantic
consistency for updates are not currently supported for remote files.
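For example, an ordinary MPI-2 IO read such as the sketch below needs no
MPI-Connect-IO-specific calls; when the application is linked with the
profiling library, the MPI_File_* calls are intercepted and may refer to a
remote shared file.  The file name is illustrative, and a basic datatype is
used in keeping with the restriction on derived datatypes noted above:
  /* Sketch: standard MPI-2 IO calls, transparently intercepted when
   * the application is linked with MPI-Connect-IO.                  */
  #include <mpi.h>

  void read_shared(MPI_Comm comm)
  {
      MPI_File   fh;
      MPI_Status status;
      double     buf[1024];

      MPI_File_open(comm, "shared_data.dat",
                    MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
      MPI_File_read(fh, buf, 1024, MPI_DOUBLE, &status);
      MPI_File_close(&fh);
  }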
The MPI-Connect-IO system includes a client library which implements the
above routines and the MPI profiling routines for MPI-IO, and a server
that listens for file access requests.
  
A simple example of opening and closing a remote file can be found here.
A more complex example, showing an application that reads a portion of a
remote file using MPI-2 parallel IO calls, can be found here.  This
application computes the maximum, minimum, and average of the data in the
file.

The original PowerPoint presentation about MPI-Connect and Parallel IO can be found here.

The latest PowerPoint presentation about MPI-Connect and Parallel IO can be found here.

Below is a detailed diagram of the interaction between MPI-Connect and
Parallel IO; click on the image for a larger version.

[Diagram: MPI-Connect Parallel IO interaction]

For more information, please contact Dr. Graham Fagg at fagg@cs.utk.edu.