Parallel code

Message Passing Model

Section contributed by IFIN-HH

In most distributed-memory systems, parallelization is achieved by using various implementations of the widely adopted Message Passing Interface (MPI) standard [mpis]. MPI presents a set of specifications for writing message-passing programs, that is, parallel programs in which interprocess communication is carried out through messages. There are two versions of the MPI standard currently in use, MPI-1 and MPI-2, and various library implementations of these, each tuned for specific target platforms.
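To make the model concrete, the following minimal C program (a hypothetical sketch, not code from an HP-SEE application) has one process send a single integer to another:

 #include <mpi.h>
 #include <stdio.h>
 
 /* Minimal message-passing sketch: rank 0 sends an integer to rank 1.
    Only the MPI-1 API is used, so it builds with any implementation
    mentioned in this section. */
 int main(int argc, char **argv)
 {
     int rank, value = 0;
     MPI_Status status;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     if (rank == 0) {
         value = 42;
         MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
         MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
         printf("rank 1 received %d\n", value);
     }
     MPI_Finalize();
     return 0;
 }

Compiled with mpicc and launched with mpirun -np 2, the same source runs unchanged under any standard-conforming implementation.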

The standard's specifications, and the implementations used within the HP-SEE ecosystem, are briefly described in HP-SEE deliverable D8.1 [D8.1].

Here, a detailed discussion is presented of the message-passing issues relevant to the migration, adaptation, and optimization of parallel applications on the HPC infrastructure, together with examples drawn from the developers' experience.

The discussion is restricted to the libraries actually in use in HP-SEE: implementations of MPI-1 (MPICH1 and its derivatives MPICH-MX and MVAPICH), Open MPI (which implements MPI-2), and MPICH2 (together with its derivatives MVAPICH2, MPIX, and Intel MPI), which implements both MPI-1 and MPI-2.

REFERENCES

[mpis] MPI standard, http://www.mcs.anl.gov/research/projects/mpi/

[D8.1] HP-SEE deliverable D8.1, Software Scalability Analysis and Interoperability Issues Assessment

MPICH implementations

Introduction contributed by IFIN-HH

Conceived as a freely available and portable implementation of the MPI standard, MPICH has evolved along with it, from MPICH1 [mpc1] (fulfilling the MPI-1 specifications and partially supporting some MPI-2 features such as parallel I/O) to MPICH2 [mpc2], which is fully compliant with the MPI-2 version of the standard. Although development of MPICH1 was frozen in 2005 at version 1.2.7p1, with the intention that it be replaced by MPICH2, it remains the most widely used MPI implementation worldwide.

REFERENCES

[mpc1] MPICH1, http://www.mcs.anl.gov/research/projects/mpi/mpich1-old/

[mpc2] MPICH2, http://www.mcs.anl.gov/research/projects/mpich2/

Open MPI

Introduction

Implementations of Open MPI are used in the HP-SEE infrastructure by ...

NEURON ParallelContext

Section contributed by IMBB-FORTH (CMSLTM application)

The preferred MPI environment is Open MPI (openmpi_gcc-1.4.3). NEURON was compiled with parallel support (MPI) using the gcc compiler:

 ./configure --without-iv --with-paranrn --prefix=/home/gkastel/src/nrn-7.1

We used NEURON's ParallelContext to distribute the simulated neurons evenly across the nodes, as sketched below.
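The same round-robin assignment can be sketched in plain C with MPI; this is a hypothetical illustration of the pattern only (simulate_cell stands in for the per-cell work), not NEURON's actual ParallelContext API:

 #include <mpi.h>
 #include <stdio.h>
 
 /* Hypothetical sketch: cell gid is handled by rank gid % nhost,
    the same round-robin pattern used when cells are assigned to
    hosts with ParallelContext. */
 static void simulate_cell(int gid)
 {
     printf("simulating cell %d\n", gid);  /* stand-in for real work */
 }
 
 int main(int argc, char **argv)
 {
     int rank, nhost, gid, ncells = 100;  /* ncells is arbitrary */
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &nhost);
     for (gid = 0; gid < ncells; gid++)
         if (gid % nhost == rank)  /* even, deterministic split */
             simulate_cell(gid);
     MPI_Finalize();
     return 0;
 }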

Porting between MPI implementations

Section contributed by IICT

The MPI standard has been developed with the aim of simplifying the development of cross-platform applications that use the distributed-memory model, as opposed to SMP. The first version of the MPI standard is well supported by the various MPI implementations, thus ensuring that a program tested with one such implementation will work correctly with another. Within the MPI specification there is some freedom of design choices, which are well documented and should serve as a warning to the user not to rely on specific implementation details. These considerations mostly affect the so-called asynchronous or non-blocking operations. For example, MPI_Isend is the non-blocking version of MPI_Send. When a process uses this function to send data, the function returns immediately, usually before the data has finished being sent. This means that the user must not modify the contents of the buffer that was passed as an argument until the send is known to have completed, which can be ensured by invoking MPI_Wait. Although the use of non-blocking operations adds complexity to the program, it also enables overlap between communication and computation, thus increasing parallel efficiency.
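The constraint described above can be illustrated with a hypothetical C fragment (buffer size and message tag are arbitrary):

 #include <mpi.h>
 
 /* Sketch of the non-blocking pattern: the send buffer must not be
    modified between MPI_Isend and the matching MPI_Wait. */
 int main(int argc, char **argv)
 {
     int i, rank;
     double buf[1000];
     MPI_Request req;
     MPI_Status status;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     if (rank == 0) {
         for (i = 0; i < 1000; i++) buf[i] = (double)i;
         MPI_Isend(buf, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
         /* computation that does NOT touch buf may overlap here */
         MPI_Wait(&req, &status);  /* only now is buf safe to reuse */
     } else if (rank == 1) {
         MPI_Recv(buf, 1000, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
     }
     MPI_Finalize();
     return 0;
 }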

Version 2 of the MPI standard added advanced new features such as parallel I/O, dynamic process management, and remote memory operations. Support for these features varies among MPI implementations and may lead to portability problems.
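As an illustration of one such feature, the following hypothetical sketch uses MPI-2 parallel I/O to let every rank write a disjoint block of one shared file (the file name out.dat is arbitrary); it only builds against implementations that provide MPI-IO:

 #include <mpi.h>
 
 /* Each rank writes 100 ints at its own offset in a shared file. */
 int main(int argc, char **argv)
 {
     int i, rank, buf[100];
     MPI_File fh;
     MPI_Offset offset;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     for (i = 0; i < 100; i++) buf[i] = rank;
     MPI_File_open(MPI_COMM_WORLD, "out.dat",
                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
                   MPI_INFO_NULL, &fh);
     offset = (MPI_Offset)rank * 100 * sizeof(int);
     MPI_File_write_at(fh, offset, buf, 100, MPI_INT, MPI_STATUS_IGNORE);
     MPI_File_close(&fh);
     MPI_Finalize();
     return 0;
 }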

Some MPI implementations offer a means of network topology discovery, which can be extremely useful for achieving good parallel efficiency, especially when running on heterogeneous resources; however, relying on such information may also create portability problems for the application.
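A portable lowest common denominator is MPI_Get_processor_name, which at least reveals which ranks share a host; anything richer (switch layout, interconnect distances) is implementation-specific. A minimal sketch:

 #include <mpi.h>
 #include <stdio.h>
 
 /* Print the host each rank runs on; grouping ranks by identical
    names is a portable first step toward topology-aware placement. */
 int main(int argc, char **argv)
 {
     int rank, len;
     char name[MPI_MAX_PROCESSOR_NAME];
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Get_processor_name(name, &len);
     printf("rank %d runs on %s\n", rank, name);
     MPI_Finalize();
     return 0;
 }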

Shared Memory Model

OpenMP

Pthreads

Hybrid Programming

CUDA
