Parallel code

Message Passing Model

Section contributed by IFIN-HH

In most distributed-memory systems, parallelization is achieved by using one of the various implementations of the widely adopted Message Passing Interface (MPI) standard [mpis]. MPI is a set of specifications for writing message-passing programs, that is, parallel programs in which processes communicate by exchanging messages. Two versions of the MPI standard are currently in use, MPI-1 and MPI-2, and there are various library implementations of each, often tuned for specific target platforms.
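To make the model concrete, the following minimal C sketch passes a single integer from process 0 to process 1; it uses only MPI-1 calls, so it builds unchanged against any of the implementations discussed below.

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     int rank, value;
     MPI_Status status;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     if (rank == 0) {
         value = 42;  /* message payload */
         MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* dest = 1, tag = 0 */
     } else if (rank == 1) {
         MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
         printf("rank 1 received %d\n", value);
     }
 
     MPI_Finalize();
     return 0;
 }

The program must be started with at least two processes (for example, mpirun -np 2 ./a.out); every process calls MPI_Init and MPI_Finalize, while the send and receive are executed only by the two ranks involved.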

The standard's specifications, and the implementations of it that are used within the HP-SEE ecosystem, have been briefly described in HP-SEE deliverable D8.1 [D8.1].

Here, a detailed discussion is presented of the message-passing issues that are relevant for the migration, adaptation and optimization of parallel applications on the HPC infrastructure, together with examples drawn from the developers' experience.

The discussion is restricted to the libraries that are effectively in use in HP-SEE: implementations of the MPI-1 version (MPICH1 and its derivatives MPICH-MX and MVAPICH), Open MPI (which implements MPI-2), and MPICH2 (together with its derivatives MVAPICH2, MPIX, and Intel MPI), which implements both MPI-1 and MPI-2.
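Because the supported standard version depends on which of these libraries a site provides, it can be checked at run time. The following C sketch prints the MPI standard level implemented by whichever library the program was compiled against:

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     int version, subversion, rank;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     /* MPI_Get_version reports the supported standard level
        (e.g. 2.2), not the release number of the library */
     MPI_Get_version(&version, &subversion);
     if (rank == 0)
         printf("MPI standard %d.%d\n", version, subversion);
 
     MPI_Finalize();
     return 0;
 }

Since all of these libraries provide a compiler wrapper (typically mpicc) and a launcher (mpirun or mpiexec), switching between implementations normally requires only recompiling and relaunching, not source changes.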

REFERENCES

[mpis] MPI standard, http://www.mcs.anl.gov/research/projects/mpi/

[D8.1] HP-SEE deliverable D8.1, Software Scalability Analysis and Interoperability Issues Assessment

MPICH implementations

Introduction contributed by IFIN-HH

Proposed as a freely available and portable implementation of the MPI standard, MPICH has evolved along with it: from the MPICH1 implementation [mpc1], which fulfils the MPI-1 specifications and partially supports some MPI-2 features such as parallel I/O, to MPICH2 [mpc2], which is fully compatible with the MPI-2 version of the standard. Although the development of MPICH1 has been frozen since 2005 at version 1.2.7p1, with the intention that it be replaced by MPICH2, it continues to be the most widely used MPI implementation worldwide.
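Parallel I/O, mentioned above as an MPI-2 feature already partially supported by MPICH1, lets every process write its own region of a shared file instead of funneling all data through a single rank. A minimal C sketch (the file name out.dat is arbitrary):

 #include <mpi.h>
 
 int main(int argc, char **argv)
 {
     int rank;
     MPI_File fh;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     /* all processes open the same file collectively ... */
     MPI_File_open(MPI_COMM_WORLD, "out.dat",
                   MPI_MODE_CREATE | MPI_MODE_WRONLY,
                   MPI_INFO_NULL, &fh);
 
     /* ... and each writes its rank at a rank-dependent offset */
     MPI_File_write_at(fh, rank * (MPI_Offset)sizeof(int),
                       &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
 
     MPI_File_close(&fh);
     MPI_Finalize();
     return 0;
 }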

REFERENCES

[mpc1] MPICH1, http://www.mcs.anl.gov/research/projects/mpi/mpich1-old/

[mpc2] MPICH2, http://www.mcs.anl.gov/research/projects/mpich2/

Open MPI

Implementations of Open MPI are used in the HP-SEE infrastructure by ...

NEURON ParallelContext

Section contributed by IMBB-FORTH (CMSLTM application)

The preferred MPI environment is Open MPI (openmpi_gcc-1.4.3). NEURON was compiled with parallel support (MPI) using the gcc compiler; --without-iv builds NEURON without the InterViews GUI, and --with-paranrn enables parallel simulations on top of MPI:

 ./configure --without-iv --with-paranrn --prefix=/home/gkastel/src/nrn-7.1

We used NEURON's ParallelContext to distribute the simulated neurons evenly across the nodes.
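With a build like the one above, a parallel run is typically launched through the MPI starter, with nrniv given the -mpi flag; the top-level script name model.hoc below is a placeholder:

 mpirun -np 8 nrniv -mpi model.hoc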

Deep sequencing for short fragment alignment (DeepAligner)

Section contributed by SZTAKI (DeepAligner application)

The DeepAligner application's workflow has been developed as a Parameter Study workflow, with an autogenerator port (the second small box attached to the top left box in Fig. xs1) and a collector job (the bottom right box in Fig. xs1). The preprocessor job generates a set of input files from pre-adjusted parameters; the second job (the middle box in Fig. xs1) is then executed as many times as the input files specify. This second job is an MPI-based BLAST executable (mpiBLAST) which aligns the short sequences. The inputs of the MPI job are the sets of sequences (defined by the researcher) and the already deployed fragments of the sequence database. The last job of the workflow is a Collector, which gathers several files and then processes them as a single input. Collectors delay job execution until the last file of the input set has arrived; the workflow engine computes the expected number of input files at run time. When all the expected inputs have arrived, the Collector processes the incoming files as a single input set. Finally, output files are generated and stored on a Storage Element of the DCI, shown as the small box attached to the Collector in Fig. xs1.

[[File:Blast_wf.jpg]]

Fig. xs1: DeepAligner workflow with the MPI-based BLAST job in the middle

Porting between MPI implementations

Shared Memory Model

OpenMP

Pthreads

Hybrid Programming

CUDA
