SFHG

From HP-SEE Wiki

General Information

  • Application's name: Self Avoiding Hamiltonian Walk on Gaskets
  • Application's acronym: SFHG
  • Virtual Research Community: Computational Physics Applications
  • Scientific contact: Sreten Lekic, slekic@blic.net
  • Technical contact: Sreten Lekic, slekic@blic.net
  • Developers: Sreten Lekic, Faculty of Mechanical Engineering, University of Banja Luka (UoBL), Bosnia and Herzegovina; Igor Sevo and Mihajlo Savic, Faculty of Electrical Engineering, University of Banja Luka (UoBL), Bosnia and Herzegovina
  • Web site: http://wiki.hp-see.eu/index.php/SFHG

Short Description

Hamiltonian self-avoiding walks on fractals (Sierpinski gaskets) are one of the promising models for describing the behavior of long polymers (DNA, RNA and other biopolymers).

We have developed a program for counting self-avoiding Hamiltonian walks that runs on multiple processors in parallel. We study Hamiltonian walks (HWs) on the family of two-dimensional modified Sierpinski gasket fractals, as a simple model for compact polymers in nonhomogeneous media in two dimensions. We apply an exact recursive method which allows explicit enumeration of extremely long Hamiltonian walks of different types: closed and open, with end-points anywhere in the lattice, or with one or both ends fixed at the corner sites. The number of such walks grows exponentially with the number of sites N; its leading term ω^N is characterized by the value of the connectivity constant ω > 1, which depends on the fractal type, but not on the type of HW.
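
The exact recursive enumeration itself is specific to the gasket family and is documented in the related publications. Purely as an illustration of the underlying counting problem, the sketch below counts open Hamiltonian walks on the smallest (level-1) Sierpinski gasket graph by brute-force backtracking; the graph encoding and function names are illustrative and not taken from the SFHG code.

    // Illustrative only: brute-force counting of open Hamiltonian walks on the
    // level-1 Sierpinski gasket graph (6 sites). The SFHG application itself
    // relies on exact recursion relations, not on this brute-force scheme.
    #include <cstdio>
    #include <vector>

    // Adjacency list of the level-1 gasket: corner sites 0,1,2 and midpoint
    // sites 3,4,5; the midpoints form the inner triangle.
    static const std::vector<std::vector<int>> adj = {
        {3, 5},        // corner 0 touches midpoints 3 and 5
        {3, 4},        // corner 1 touches midpoints 3 and 4
        {4, 5},        // corner 2 touches midpoints 4 and 5
        {0, 1, 4, 5},  // midpoint 3
        {1, 2, 3, 5},  // midpoint 4
        {0, 2, 3, 4}   // midpoint 5
    };

    // Depth-first backtracking: extend the walk until every site is visited.
    long long countFrom(int site, int visited, int nVisited) {
        if (nVisited == (int)adj.size()) return 1;  // walk covers all sites
        long long walks = 0;
        for (int next : adj[site])
            if (!(visited & (1 << next)))
                walks += countFrom(next, visited | (1 << next), nVisited + 1);
        return walks;
    }

    int main() {
        long long total = 0;
        for (int start = 0; start < (int)adj.size(); ++start)
            total += countFrom(start, 1 << start, 1);
        // Each open walk is counted once per direction (twice in total).
        std::printf("open Hamiltonian walks (directed): %lld\n", total);
        return 0;
    }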

Problems Solved

A new serial C++ code was produced and subsequently parallelized with OpenMP.
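
A minimal sketch of that kind of OpenMP parallelization, reusing the illustrative countFrom() helper from the sketch in the Short Description (again, this is not the actual SFHG decomposition): the loop over starting sites is distributed across threads and the per-thread partial counts are combined with a reduction.

    // Illustrative OpenMP parallelization of a serial counting loop.
    // countFrom() is the backtracking helper from the previous sketch.
    long long countFrom(int site, int visited, int nVisited);

    long long countAllWalks(int nSites) {
        long long total = 0;
        // Each thread enumerates walks from a different set of starting sites;
        // the partial counts are combined with an OpenMP reduction.
        #pragma omp parallel for reduction(+:total) schedule(dynamic)
        for (int start = 0; start < nSites; ++start)
            total += countFrom(start, 1 << start, 1);
        return total;
    }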

Scientific and Social Impact

Collaborations

Beneficiaries

Faculty of Science, Department of Physics

Number of users

3

Development Plan

  • Concept: 2012-06-01
  • Start of alpha stage: 2012-10-01
  • Start of beta stage: 2013-01-01
  • Start of testing stage: 2013-01-15
  • Start of deployment stage: 2013-02-15
  • Start of production stage: 2013-03-01

Resource Requirements

  • Number of cores required for a single run: up to 64
  • Minimum RAM/core required: <1GB
  • Storage space during a single run: <1GB
  • Long-term data storage: <1GB
  • Total core hours required: <32000

Technical Features and HP-SEE Implementation

  • Primary programming language: C/C++
  • Parallel programming paradigm: OpenMP
  • Main parallel code: Nested OpenMP (see the sketch after this list)
  • Pre/post processing code: N/A
  • Application tools and libraries: libgomp, gcc, gprof
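
A minimal sketch of a nested OpenMP structure, under the assumption (for illustration only) that an outer thread team splits the work over walk types while inner teams split the per-type subtasks; the actual SFHG decomposition may differ.

    // Illustrative nested OpenMP structure: an outer team over coarse work
    // units and an inner team within each outer thread.
    #include <omp.h>
    #include <cstdio>

    int main() {
        omp_set_nested(1);             // enable nested regions (classic API)
        omp_set_max_active_levels(2);  // equivalent setting in newer OpenMP

        #pragma omp parallel num_threads(4)      // outer level, e.g. walk types
        {
            int outer = omp_get_thread_num();
            #pragma omp parallel num_threads(4)  // inner level, e.g. subtasks
            {
                int inner = omp_get_thread_num();
                std::printf("outer %d / inner %d\n", outer, inner);
            }
        }
        return 0;
    }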

Usage Example

Infrastructure Usage

  • Home system: PARADOX
    • Applied for access on: 2010-09-01
    • Access granted on: 2010-09-01
    • Achieved scalability: 8
  • Accessed production systems:
  1. PARADOX
    • Applied for access on: 2010-09-01
    • Access granted on: 2010-09-01
    • Achieved scalability: 8
  2. Pecs SC
    • Applied for access on: 2013-01-18
    • Access granted on: 2013-01-21
    • Achieved scalability: 48
  3. Szeged SC
    • Applied for access on: 2013-01-18
    • Access granted on: 2013-01-21
    • Achieved scalability: 24
  • Porting activities: Emulated OpenMP functions that are not implemented in older versions of gcc/libgomp (see the sketch after this list)
  • Scalability studies: Scalability studies performed at PARADOX (up to 8 CPU cores), BA-01-ETFBL (up to 16 CPU cores) and Pecs SC (up to 48 CPU cores). Detailed results available in Deliverable D8.4.
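
A minimal sketch of the porting pattern mentioned under "Porting activities": when the installed gcc/libgomp predates a needed OpenMP runtime routine, a conservative fallback is provided behind a check of the _OPENMP version macro. The routine shown (omp_in_final, introduced in OpenMP 3.1) is only an example; the functions actually emulated for SFHG may differ.

    // Illustrative compatibility shim for an OpenMP runtime routine that may be
    // missing from older libgomp releases. _OPENMP encodes the supported
    // standard as yyyymm; 201107 corresponds to OpenMP 3.1.
    #include <omp.h>

    #if !defined(_OPENMP) || (_OPENMP < 201107)
    // Older runtime: conservatively report that we are never in a final task.
    static inline int omp_in_final_compat(void) { return 0; }
    #else
    static inline int omp_in_final_compat(void) { return omp_in_final(); }
    #endif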

Running on Several HP-SEE Centres

The application is currently running at PARADOX, Pecs SC and Szeged SC.

  • Benchmarking activities and results: Performed scalability study for D8.4. Achieved 207k Walks/s for level 8 and 132k Walks/s for level 9.
  • Other issues: Issues with incomplete or missing support for specific OpenMP features with Open64 and Portland Group compilers.

Achieved Results

  • We have calculated the exact number of walks for the previously unavailable levels 8 and 9, for all required types of walks.
  • We have developed a novel approach to solving the problem that does not resort to brute-force walk counting.

Publications

Foreseen Activities

  • Optimization of the approach to task parallelization (partially completed).
  • Creating MPI code.
  • Creating hybrid MPI+OpenMP code (see the sketch after this list).
  • Performing all types of walks at the same time (most likely via MPI).
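
A minimal sketch of what the foreseen hybrid MPI+OpenMP layout could look like, assuming one MPI rank per walk type and an OpenMP-parallel loop over subtasks inside each rank; countWalksOfType() is a stub standing in for the real enumeration kernel.

    // Illustrative hybrid MPI+OpenMP skeleton: each MPI rank handles one walk
    // type, and OpenMP threads within the rank share that type's subtasks.
    #include <mpi.h>
    #include <cstdio>

    // Stub standing in for the real per-type enumeration kernel.
    static long long countWalksOfType(int /*walkType*/, int /*task*/) { return 1; }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, nRanks = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nRanks);

        const int walkType = rank;  // e.g. closed, open, corner-to-corner, ...
        const int nTasks = 64;      // illustrative number of subtasks per type
        long long count = 0;

        // Threads within this rank share the subtasks of its walk type.
        #pragma omp parallel for reduction(+:count) schedule(dynamic)
        for (int task = 0; task < nTasks; ++task)
            count += countWalksOfType(walkType, task);

        std::printf("rank %d of %d (walk type %d): %lld walks\n",
                    rank, nRanks, walkType, count);
        MPI_Finalize();
        return 0;
    }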