MVAPICH

InfiniBand, 10GigE/iWARP and RDMA over Converged Ethernet (RoCE) are emerging high-performance networking technologies that deliver low latency and high bandwidth to HPC users; they are also achieving widespread acceptance due to their open standards. MVAPICH is an open-source MPI implementation developed at the Network-Based Computing Laboratory (NBCL) of the Ohio State University that exploits the novel features and mechanisms of these networking technologies. Currently, there are two versions of this MPI library: MVAPICH, with MPI-1 semantics, and MVAPICH2, with MPI-2 semantics.

These MPI implementations are used by many organizations worldwide (national laboratories, universities and industry), and several InfiniBand systems using MVAPICH/MVAPICH2 are present in the TOP500 list. Many InfiniBand, 10GigE/iWARP and RoCE vendors, server vendors, systems integrators and Linux distributors have incorporated MVAPICH/MVAPICH2 into their software stacks. MVAPICH and MVAPICH2 are also available with the OpenFabrics Enterprise Distribution (OFED) stack (www.openfabrics.org) and through public anonymous MVAPICH SVN. Both MVAPICH and MVAPICH2 distributions are available under BSD licensing.

At the Institute of Physics Belgrade, MVAPICH MPI implementations are used within the PARADOX cluster, which provides InfiniBand interconnect between its servers.

MVAPICH is an implementation of the MPI-1 standard, based on MPICH and MVICH (MPI for the Virtual Interface Architecture). The latest release is MVAPICH 1.2 (which includes MPICH 1.2.7). MVAPICH 1.2 supports the following underlying transport interfaces:

  • High-performance support with scalability for the OpenFabrics/Gen2 interface, to work with InfiniBand and other RDMA interconnects.
  • High-performance support with scalability for the OpenFabrics/Gen2-RDMAoE interface.
  • High-performance support with scalability (for clusters with multi-thousand cores) for the OpenFabrics/Gen2-Hybrid interface, to work with InfiniBand.
  • Shared-memory-only channel, useful for running MPI jobs on multi-processor systems without any high-performance network: for example, multi-core servers, desktops and laptops, and clusters with serial nodes.
  • The InfiniPath interface for InfiniPath adapters.
  • The standard TCP/IP interface (provided by MPICH), which works with a range of networks and can also be used with InfiniBand's IPoIB support.

In addition, MVAPICH 1.2 supports many features for high performance, scalability, portability and fault tolerance. It also supports a wide range of platforms (architectures, operating systems, compilers and InfiniBand adapters).
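
To give a concrete sense of how an MVAPICH installation is used, the sketch below shows a minimal MPI-1 program together with a typical build-and-run session. The program uses only standard MPI calls and is not taken from the MVAPICH distribution; the file name, host file and process count are illustrative assumptions, and launcher options may differ between installations.

    /* hello_mpi.c - illustrative MPI-1 example, not part of MVAPICH itself */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

Assuming MVAPICH's mpicc compiler wrapper and mpirun_rsh launcher are on the PATH, the program could be built and started on four processes as follows:

    mpicc -o hello_mpi hello_mpi.c
    mpirun_rsh -np 4 -hostfile ./hosts ./hello_mpi

The same binary can run over the shared-memory channel on a single multi-core node or over InfiniBand across several nodes, depending on which transport interface the MVAPICH installation was built with.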
