
FAQ:
What kinds of systems / networks / run-time environments does Open MPI support?


Table of contents:

  1. What operating systems does Open MPI support?
  2. What hardware platforms does Open MPI support?
  3. What network interconnects does Open MPI support?
  4. What run-time environments does Open MPI support?
  5. Does Open MPI support LSF?
  6. How much MPI does Open MPI support?
  7. Is Open MPI thread safe?
  8. Does Open MPI support 64 bit environments?
  9. Does Open MPI support execution in heterogeneous environments?
  10. Does Open MPI support parallel debuggers?


1. What operating systems does Open MPI support?

We primarily develop Open MPI on Linux, OS X, Solaris (32 and 64 bit, on all platforms), and Windows (Windows XP, Windows HPC Server 2003/2008, and Windows 7 RC).

Open MPI is fairly POSIX-neutral, so it will run without too many modifications on most POSIX-like systems. Hence, if we haven't listed your favorite operating system here, it should not be difficult to get Open MPI to compile and run properly. The biggest obstacle is typically the assembly language, but that's fairly modular and we're happy to provide information about how to port it to new platforms.

We are also quite open to accepting patches for operating systems that we do not currently support. If we do not have systems to test them on, we will probably only claim to "unofficially" support those systems.

Microsoft Windows support was added in v1.3.3; please see the README.WINDOWS file for details.


2. What hardware platforms does Open MPI support?

Essentially all the common platforms that the operating systems listed in the previous question support.

For example, Linux runs on a wide variety of platforms, and we certainly can't claim to support all of them (e.g., Open MPI does not run in an embedded environment), but we include assembly support for Intel, AMD, and PowerPC chips.


3. What network interconnects does Open MPI support?

Open MPI is based upon a component architecture; its MPI point-to-point functionality only utilizes a small number of components at run time. Adding native support for a new network interconnect was specifically designed to be easy; an example of selecting specific network components at run time is shown at the end of this answer.

Here's the list of networks that we natively support for point-to-point communication:

  • TCP / ethernet
  • Shared memory
  • Loopback (send-to-self)
  • Myrinet / GM
  • Myrinet / MX
  • Infiniband / OpenIB
  • Infiniband / mVAPI
  • Portals

Is there a network that you'd like to see supported that is not shown above? Contributions are welcome!
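
Which components are used is controlled through MCA parameters. As a sketch (the application name my_mpi_app is hypothetical), you can list the point-to-point (BTL) components your installation was built with, and restrict a run to, say, TCP, shared memory, and loopback:

    shell$ ompi_info | grep btl
    shell$ mpirun --mca btl tcp,sm,self -np 4 ./my_mpi_app

If the btl parameter is not specified, Open MPI normally selects the best available network components automatically at run time.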


4. What run-time environments does Open MPI support?

Open MPI is layered on top of the Open Run-Time Environment (ORTE), which originally started as a small portion of the Open MPI code base. However, ORTE has effectively spun off into its own sub-project.

ORTE is a modular system that was specifically architected to abstract away the back-end run-time environment (RTE) system, providing a neutral API to the upper-level Open MPI layer. Components can be written for ORTE that allow it to natively utilize a wide variety of back-end RTEs.

ORTE currently natively supports the following run-time environments:

  • Recent versions of BProc (e.g., Clustermatic)
  • Sun Grid Engine
  • PBS Pro, Torque, and Open PBS (the TM system)
  • LoadLeveler
  • LSF
  • POE
  • rsh / ssh
  • SLURM
  • XGrid
  • Yod (Red Storm)

Is there a run-time system that you'd like to use Open MPI with that is not listed above? Component contributions are welcome!
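
From the user's point of view this support is largely transparent: when a job is started inside a supported resource manager (e.g., SLURM or Torque), mpirun obtains the list of allocated nodes from the run-time environment itself, while under plain rsh / ssh a hostfile is typically supplied instead. A sketch (the hostnames and application name are hypothetical):

    # Inside a SLURM or Torque/PBS allocation, mpirun discovers the nodes automatically:
    shell$ mpirun -np 8 ./my_mpi_app

    # With rsh / ssh, list the nodes explicitly in a hostfile:
    shell$ cat myhosts
    node1 slots=4
    node2 slots=4
    shell$ mpirun --hostfile myhosts -np 8 ./my_mpi_app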


5. Does Open MPI support LSF?

Starting with Open MPI v1.3, yes!

Prior to Open MPI v1.3, Platform released a script-based integration in the LSF 6.1 and 6.2 maintenance packs around November of 2006. If you want this integration, please contact your normal Platform support channels.


6. How much MPI does Open MPI support?

Open MPI 1.2 supports all of MPI-2.0.

Open MPI 1.3 supports all of MPI-2.1.
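
You can check which version of the MPI standard an installation reports, either at compile time via the MPI_VERSION and MPI_SUBVERSION constants or at run time via MPI_Get_version(). A minimal sketch:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int version, subversion;

        MPI_Init(&argc, &argv);

        /* Ask the library which MPI standard version it supports */
        MPI_Get_version(&version, &subversion);
        printf("This MPI library supports MPI %d.%d\n", version, subversion);

        MPI_Finalize();
        return 0;
    }

Compile and run it with the usual wrappers, e.g., "mpicc version.c -o version" followed by "mpirun -np 1 ./version".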


7. Is Open MPI thread safe?

Support for MPI_THREAD_MULTIPLE (i.e., multiple threads executing within the MPI library) and asynchronous message passing progress (i.e., continuing message passing operations even while no user threads are in the MPI library) has been designed into Open MPI from its first planning meetings.

Support for MPI_THREAD_MULTIPLE is included in the first version of Open MPI, but it is only lightly tested and likely still has some bugs. Support for asynchronous progress is included in the TCP point-to-point device, but it, too, has only had light testing and likely still has bugs.

Completing the testing for full support of MPI_THREAD_MULTIPLE and asynchronous progress is planned in the near future.
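
If your application needs multiple threads calling MPI, request MPI_THREAD_MULTIPLE explicitly with MPI_Init_thread() and check the thread level that was actually provided; depending on how Open MPI was built and on the version, the provided level may be lower than requested. A minimal sketch:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int provided;

        /* Request full multi-threaded support; the library reports what it can deliver */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            printf("Warning: MPI_THREAD_MULTIPLE not provided (got level %d)\n", provided);
        }

        /* ... application code, possibly calling MPI from several threads ... */

        MPI_Finalize();
        return 0;
    }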


8. Does Open MPI support 64 bit environments?

Yes, Open MPI is 64 bit clean. You should be able to use Open MPI on 64 bit architectures and operating systems with no difficulty.


9. Does Open MPI support execution in heterogeneous environments?

As of v1.1, Open MPI requires that the size of C, C++, and Fortran datatypes be the same on all platforms within a single parallel application with the exception of types represented by MPI_BOOL and MPI_LOGICAL -- size differences in these types between processes are properly handled. Endian differences between processes in a single MPI job are properly and automatically handled.

Prior to v1.1, Open MPI did not include any support for data size or endian heterogeneity.
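
In practice, nothing special is required from the application as long as messages are described with the usual MPI datatypes; Open MPI performs any necessary endian conversion itself. A minimal sketch of a typed exchange that works between hosts of different endianness (v1.1 and later):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, values[4] = { 1, 2, 3, 4 };

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Because the buffer is described with MPI_INT rather than raw bytes,
           Open MPI can convert the representation if the two processes run
           on hosts with different endianness. */
        if (0 == rank) {
            MPI_Send(values, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (1 == rank) {
            MPI_Recv(values, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Received %d %d %d %d\n",
                   values[0], values[1], values[2], values[3]);
        }

        MPI_Finalize();
        return 0;
    }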


10. Does Open MPI support parallel debuggers?

Yes. Open MPI supports the TotalView API for parallel process attaching, which several parallel debuggers support (e.g., DDT, fx2). As part of v1.2.4 (released in September 2007), Open MPI also supports the TotalView API for viewing message queues in running MPI processes.

See the FAQ entries on debugging for details on how to run Open MPI jobs under TotalView and under DDT.
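
The usual way to start an MPI job under TotalView, for example, is to have the debugger launch mpirun itself (the process count and application name here are hypothetical):

    shell$ totalview mpirun -a -np 4 ./my_mpi_app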

NOTE: The integration of Open MPI message queue support is problematic with 64 bit versions of TotalView prior to v8.3:

  • The message queues views will be truncated
  • Both the communicators and requests list will be incomplete
  • Both the communicators and requests list may be filled with wrong values (such as an MPI_Send to the destination ANY_SOURCE)

There are two workarounds:

  • Use a 32 bit version of TotalView
  • Upgrade to TotalView v8.3