SFHG

From HP-SEE Wiki

Latest revision as of 09:40, 5 March 2013

General Information

  • Application's name: Self Avoiding Hamiltonian Walk on Gaskets
  • Application's acronym: SFHG
  • Virtual Research Community: Computational Physics Applications
  • Scientific contact: Sreten Lekic, slekic@blic.net
  • Technical contact: Sreten Lekic, slekic@blic.net
  • Developers: Sreten Lekic (Faculty of Mechanical Engineering, University of Banja Luka (UoBL), Bosnia and Herzegovina); Igor Sevo and Mihajlo Savic (Faculty of Electrical Engineering, University of Banja Luka (UoBL), Bosnia and Herzegovina)
  • Web site: http://wiki.hp-see.eu/index.php/SFHG

Short Description

Self-avoiding Hamiltonian walks on fractals (Sierpinski gaskets) are one of the promising models for describing the behavior of long polymers (DNA, RNA and other biopolymers).

We have developed a program for counting self-avoiding Hamiltonian walks that runs in parallel on multiple processors. We study Hamiltonian walks (HWs) on the family of two-dimensional modified Sierpinski gasket fractals, as a simple model for compact polymers in nonhomogeneous media in two dimensions. We apply an exact recursive method which allows for explicit enumeration of extremely long Hamiltonian walks of different types: closed and open, with end-points anywhere in the lattice, or with one or both ends fixed at the corner sites. The leading term in the number of walks is characterized by the value of the connectivity constant, which depends on the fractal type but not on the type of HW.
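
For illustration of the underlying counting problem, a minimal C++ sketch follows. It enumerates open Hamiltonian walks on the level-1 gasket graph by plain backtracking, which is not the exact recursive method described above; the vertex numbering and the countFrom helper are hypothetical choices made only for this example.

  #include <array>
  #include <cstdio>
  #include <vector>

  // Level-1 Sierpinski gasket: 3 corner vertices (0..2), 3 mid-edge
  // vertices (3..5), 9 edges forming three small triangles.
  // This vertex numbering is a hypothetical choice for illustration.
  static const std::array<std::vector<int>, 6> adj = {{
      {3, 5}, {3, 4}, {4, 5},                      // corner vertices
      {0, 1, 4, 5}, {1, 2, 3, 5}, {0, 2, 3, 4}     // mid-edge vertices
  }};

  // Count open Hamiltonian walks (paths visiting every vertex exactly once)
  // that continue from vertex v, by plain backtracking.
  static long countFrom(int v, std::vector<bool>& visited, int left) {
      if (left == 0) return 1;                     // all vertices visited
      long walks = 0;
      for (int w : adj[v]) {
          if (!visited[w]) {
              visited[w] = true;
              walks += countFrom(w, visited, left - 1);
              visited[w] = false;
          }
      }
      return walks;
  }

  int main() {
      long total = 0;
      for (int start = 0; start < 6; ++start) {
          std::vector<bool> visited(6, false);
          visited[start] = true;
          total += countFrom(start, visited, 5);
      }
      // Each undirected open walk is counted once from each of its two ends.
      std::printf("directed open Hamiltonian walks: %ld\n", total);
      return 0;
  }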

Problems Solved

A new serial C++ code was produced and then parallelized with OpenMP.

Scientific and Social Impact

Collaborations

Beneficiaries

Faculty of Science, Department of Physics

Number of users

3

Development Plan

  • Concept: 2012-06-01
  • Start of alpha stage: 2012-10-01
  • Start of beta stage: 2013-01-01
  • Start of testing stage: 2013-01-15
  • Start of deployment stage: 2013-02-15
  • Start of production stage: 2013-03-01

Resource Requirements

  • Number of cores required for a single run: up to 64
  • Minimum RAM/core required: <1GB
  • Storage space during a single run: <1GB
  • Long-term data storage: <1GB
  • Total core hours required: <32000

Technical Features and HP-SEE Implementation

  • Primary programming language: C/C++
  • Parallel programming paradigm: OpenMP
  • Main parallel code: Nested OpenMP (see the sketch after this list)
  • Pre/post processing code: N/A
  • Application tools and libraries: libgomp, gcc, gprof
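
To make the nested OpenMP entry above concrete, here is a minimal sketch, assuming gcc/libgomp and a hypothetical processWalkType kernel that stands in for the real per-walk-type work: an outer parallel region spreads walk types over threads, and each kernel opens an inner parallel region.

  #include <cstdio>
  #include <omp.h>

  // Hypothetical per-walk-type kernel; the loop body is a placeholder.
  static long processWalkType(int type) {
      long local = 0;
      // Inner parallel region, nested inside the outer one.
      #pragma omp parallel for reduction(+:local) num_threads(4)
      for (int task = 0; task < 1000000; ++task) {
          local += (task % (type + 2) == 0);
      }
      return local;
  }

  int main() {
      omp_set_nested(1);               // enable nested regions in libgomp
      omp_set_max_active_levels(2);    // one outer and one inner level

      long totals[4] = {0, 0, 0, 0};
      // Outer parallel region: one thread per walk type.
      #pragma omp parallel for num_threads(4)
      for (int type = 0; type < 4; ++type) {
          totals[type] = processWalkType(type);
      }
      for (int type = 0; type < 4; ++type) {
          std::printf("walk type %d: %ld items\n", type, totals[type]);
      }
      return 0;
  }

Building with g++ -fopenmp -pg and running the binary produces the gmon.out profile that gprof can then analyse.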

Usage Example

Infrastructure Usage

  • Home system: PARADOX
    • Applied for access on: 2010-09-01
    • Access granted on: 2010-09-01
    • Achieved scalability: 8
  • Accessed production systems:
  1. PARADOX
    • Applied for access on: 2010-09-01
    • Access granted on: 2010-09-01
    • Achieved scalability: 8
  2. Pecs SC
    • Applied for access on: 2013-01-18
    • Access granted on: 2013-01-21
    • Achieved scalability: 48
  3. Szeged SC
    • Applied for access on: 2013-01-18
    • Access granted on: 2013-01-21
    • Achieved scalability: 24
  • Porting activities: Emulated implementation of OpenMP functions that are not available in older versions of gcc/libgomp (see the sketch after this list)
  • Scalability studies: Performed at PARADOX (up to 8 CPU cores), BA-01-ETFBL (up to 16 CPU cores) and Pecs SC (up to 48 CPU cores); detailed results are available in Deliverable D8.4.
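
As an illustration of the kind of emulation mentioned under porting activities, the sketch below selects at compile time between the real omp_set_max_active_levels() (available since OpenMP 3.0, i.e. _OPENMP >= 200805) and a fallback built on omp_set_nested() for older libgomp runtimes. The sfhg_set_max_active_levels wrapper is a hypothetical name; the actual shims used in SFHG may differ.

  #include <omp.h>

  // Compile-time selection between the real runtime call and an emulated one.
  #if _OPENMP >= 200805
  static inline void sfhg_set_max_active_levels(int levels) {
      omp_set_max_active_levels(levels);        // real OpenMP 3.0+ call
  }
  #else
  static inline void sfhg_set_max_active_levels(int levels) {
      // Older runtimes only distinguish nested vs. non-nested regions.
      omp_set_nested(levels > 1 ? 1 : 0);
  }
  #endif

  int main() {
      sfhg_set_max_active_levels(2);            // request two nested levels
      return 0;
  }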

Running on Several HP-SEE Centres

Application is currently running at PARADOX, Pecs SC and Szeged SC.

  • Benchmarking activities and results: Scalability study performed for D8.4; achieved 207k Walks/s for level 8 and 132k Walks/s for level 9 (see the timing sketch after this list).
  • Other issues: Incomplete or missing support for specific OpenMP features in the Open64 and Portland Group compilers.
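
The Walks/s figures above are throughput values, i.e. walks enumerated divided by wall-clock time. A hypothetical timing harness along the following lines could produce such a number; enumerateWalks is only a placeholder for the real counting kernel and its workload is artificial.

  #include <cstdio>
  #include <omp.h>

  // Placeholder for the real SFHG counting kernel.
  static long enumerateWalks(int level) {
      long walks = 0;
      #pragma omp parallel for reduction(+:walks)
      for (long i = 0; i < 50000000L; ++i) {
          walks += (i % (level + 1) == 0);      // artificial workload
      }
      return walks;
  }

  int main() {
      const int level = 8;
      double t0 = omp_get_wtime();
      long walks = enumerateWalks(level);
      double elapsed = omp_get_wtime() - t0;
      std::printf("level %d: %ld walks, %.1f kWalks/s\n",
                  level, walks, walks / elapsed / 1000.0);
      return 0;
  }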

Achieved Results

  • We have calculated the exact number of walks for the previously unavailable levels 8 and 9, for all required types of walks.
  • We have developed a novel approach to the solution of the problem that does not resort to brute-force walk counting.

Publications

Foreseen Activities

  • Optimization of the parallelization approach to tasks (partially completed).
  • Creating MPI code.
  • Creating hybrid MPI+OpenMP code (see the sketch after this list).
  • Performing all types of walks at the same time (most likely via MPI).
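
A possible shape for the foreseen hybrid MPI+OpenMP code, sketched here only under assumptions (countWalksOfType and the type-per-rank distribution are hypothetical), is to spread the different walk types across MPI ranks, keep the existing OpenMP parallelism inside each rank, and combine the partial counts with an MPI reduction.

  #include <cstdio>
  #include <mpi.h>

  // Hypothetical stand-in for the per-walk-type OpenMP kernel.
  static long countWalksOfType(int type) {
      long walks = 0;
      #pragma omp parallel for reduction(+:walks)
      for (long i = 0; i < 10000000L; ++i) {
          walks += (i % (type + 2) == 0);       // placeholder workload
      }
      return walks;
  }

  int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      int rank = 0, size = 1;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      const int numTypes = 4;                   // e.g. closed, open, corner-fixed walks
      long local = 0;
      // Each rank handles a subset of walk types; OpenMP threads work inside.
      for (int type = rank; type < numTypes; type += size) {
          local += countWalksOfType(type);
      }

      long total = 0;
      MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
      if (rank == 0) {
          std::printf("items counted across all walk types: %ld\n", total);
      }
      MPI_Finalize();
      return 0;
  }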