AppTemp

From HP-SEE Wiki

General Information

  • Application's name: Hadron Masses from Lattice QCD
  • Application's acronym: HMLQCD
  • Virtual Research Community: Computational Physics
  • Scientific contact: Artan BORICI, artanborici@yahoo.com
  • Technical contact: Dafina XHAKO, dafinaxhako@yahoo.com; Rudina ZEQIRLLARI, rudina_mj@hotmail.com
  • Developers: MSc. Dafina XHAKO, MSc. Rudina ZEQIRLLARI, Department of Physics, Faculty of Natural Sciences, University of Tirana, Albania
  • Web site:

Short Description

Lattice QCD has become an indispensable tool for both particle and nuclear physics. It plays a fundamental role in describing elementary particle interactions from first principles, and on the applications side it has become a tool for understanding small nuclear systems from first principles. Lattice QCD is a quantum field theory whose correlation functions are expressed as vacuum expectation values. These are path integrals whose measure is defined on four-dimensional hypercubic lattices. The path integrals are computed via Markov Chain Monte Carlo sampling of the underlying positive definite measure. The lattice QCD measure is a non-local function of the degrees of freedom, which makes the evolution in configuration space very slow, with large autocorrelation times for certain observables. At every Markov step several huge, sparse linear systems have to be solved. Once the gauge field configurations are produced, they are stored on disk for further analysis. The mass spectrum analysis involves the computation of quark propagators, which are solutions of huge linear systems built from the lattice Dirac operator. As a typical example, on a 32^3 x 64 lattice one needs thousands of Monte Carlo steps to produce one statistically independent configuration. One Krylov solve typically needs hundreds of iterations, and one multiplication by the Wilson-Dirac operator costs about 1 Gflop. In total, 1000 x 100 x 1 Gflop = 100 Tflop are needed for one configuration, so a typical sample of 100 configurations requires 10 Pflop in total. Given a 10% parallelisation efficiency, the requested CPU hours justify three such simulations at different lattice spacings, in order to carry out a scaling study of hadron masses in the continuum limit. The project idea is the computation of basic properties of matter by simulating the theory of strong interactions, Quantum Chromodynamics, on the lattice using massively parallel computers.
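Written out, the cost estimate in the paragraph above is the following back-of-the-envelope calculation (the per-step counts are the rough figures quoted in the text, not measured values):

\[
\underbrace{10^{3}}_{\text{MC steps}}
\times \underbrace{10^{2}}_{\text{Krylov iterations}}
\times \underbrace{1\ \text{Gflop}}_{\text{Wilson-Dirac apply}}
= 100\ \text{Tflop per configuration},
\qquad
100 \times 100\ \text{Tflop} = 10\ \text{Pflop per sample}.
\]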


Problems Solved

  • Hadron spectrum computation (an illustrative effective-mass sketch follows this list)
  • Decay constants and comparison with chiral perturbation theory
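
As an illustration of the hadron spectrum step, the minimal sketch below extracts an effective mass from a zero-momentum two-point correlator. It is a standalone illustration only: the correlator values and the helper name effective_mass are assumptions made for this example, and in practice the correlators are produced by the FermiQCD-based analysis rather than by this code.

#include <cmath>
#include <cstdio>
#include <vector>

// Effective mass from a zero-momentum two-point correlator C(t).
// For large t the correlator behaves as C(t) ~ A * exp(-m*t), so
// m_eff(t) = log( C(t) / C(t+1) ) plateaus at the ground-state
// hadron mass in lattice units.
std::vector<double> effective_mass(const std::vector<double>& C) {
    std::vector<double> meff;
    for (std::size_t t = 0; t + 1 < C.size(); ++t)
        meff.push_back(std::log(C[t] / C[t + 1]));
    return meff;
}

int main() {
    // Hypothetical correlator values, for illustration only.
    const std::vector<double> C = {1.0, 0.55, 0.30, 0.165, 0.091};
    const std::vector<double> meff = effective_mass(C);
    for (std::size_t t = 0; t < meff.size(); ++t)
        std::printf("t = %zu   m_eff = %.4f\n", t, meff[t]);
    return 0;
}

The mass estimate is then read off from the plateau of m_eff(t) at large times.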


Scientific and Social Impact

A full solution of QCD has not yet been achieved. Our lattice study aims to complement other studies carried out at different parameters and with different lattice actions.

Collaborations

  • CaSToRC, The Cyprus Institute, Cyprus

Beneficiaries

  • Main beneficiaries are research groups in Computational Physics.

Number of users

2

Development Plan

  • Concept: The concept was developed before the project started
  • Start of alpha stage: M01. Construction of the algorithm; creation of the program.
  • Start of beta stage: M06. Parallelization and debugging of the application.
  • Start of testing stage: M08. Testing on multiprocessor platforms.
  • Start of deployment stage: M10. Performing calculations.
  • Start of production stage:

Resource Requirements

  • Number of cores required for a single run: from 4 up to 32
  • Minimum RAM/core required: 1 GB
  • Storage space during a single run: 1-200 GB
  • Long-term data storage: 1 TB
  • Total core hours required: Unknown

Technical Features and HP-SEE Implementation

  • Primary programming language: C++
  • Parallel programming paradigm: OpenMP
  • Main parallel code: OpenMP (a minimal sketch of the loop-level parallelism is given after this list)
  • Pre/post processing code: Developed in-house
  • Application tools and libraries: FermiQCD, OpenMP
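
As referenced in the list above, the fragment below sketches the kind of loop-level OpenMP parallelism involved: a parallelised matrix-vector product for a generic sparse operator in compressed-row storage. It is purely illustrative; the structure layout and names are assumptions made for this sketch, not FermiQCD data structures, and the real Wilson-Dirac multiplication is considerably more structured.

#include <vector>

// Generic sparse operator in compressed-sparse-row (CSR) storage.
// This stands in for the (much more structured) lattice Wilson-Dirac
// operator; the layout and names are assumptions for this sketch only.
struct SparseOperator {
    std::vector<int>    row_ptr;  // size n+1: start of each row in col_idx/val
    std::vector<int>    col_idx;  // column index of each stored entry
    std::vector<double> val;      // value of each stored entry
};

// y = A * x, with the outer row loop shared among OpenMP threads.
void apply(const SparseOperator& A, const std::vector<double>& x,
           std::vector<double>& y) {
    const int n = static_cast<int>(A.row_ptr.size()) - 1;
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
            sum += A.val[k] * x[A.col_idx[k]];
        y[i] = sum;
    }
}

int main() {
    // Trivial smoke test: 2x2 identity applied to (3, 4).
    const SparseOperator A{{0, 1, 2}, {0, 1}, {1.0, 1.0}};
    const std::vector<double> x{3.0, 4.0};
    std::vector<double> y(2);
    apply(A, x, y);
    return (y[0] == 3.0 && y[1] == 4.0) ? 0 : 1;
}

Such a row-parallel operator application is the building block of the Krylov solves mentioned in the Short Description; compiling with OpenMP enabled (e.g. -fopenmp) distributes the rows over the available cores.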

Usage Example

...

Infrastructure Usage

  • Home system:
    • Applied for access on:
    • Access granted on:
    • Achieved scalability: ... cores
  • Accessed production systems:
  1. ...
    • Applied for access on: ...
    • Access granted on: ...
    • Achieved scalability: ... cores
  2. ...
    • Applied for access on: ...
    • Access granted on: ...
    • Achieved scalability: ... cores
  • Porting activities: ...
  • Scalability studies: ...

Running on Several HP-SEE Centres

  • Benchmarking activities and results: ...
  • Other issues: ...

Achieved Results

...

Publications

  • ...

Foreseen Activities

  • ...
  • ...