Hadron Masses from Lattice QCD (HMLQCD)

General Information

  • Application's name: Hadron Masses from Lattice QCD
  • Application's acronym: HMLQCD
  • Virtual Research Community: Computational Physics
  • Scientific contact: Artan BORICI, artanborici@yahoo.com
  • Technical contact: Dafina XHAKO, dafinaxhako@yahoo.com; Rudina ZEQIRLLARI, rudina_mj@hotmail.com
  • Developers: MSc. Dafina XHAKO, MSc. Rudina ZEQIRLLARI, Department of Physics, Faculty of Natural Sciences, University of Tirana, Albania
  • Web site:

Short Description

Lattice QCD has become an indispensable tool for both particle and nuclear physics. It plays a fundamental role in describing elementary particle interactions from first principles, and on the applied side it has become a tool for understanding small nuclear systems from first principles. LQCD is a quantum field theory whose correlation functions are expressed as vacuum expectation values. These are path integrals whose measure is defined on four-dimensional hypercubic lattices. The path integrals are computed via Markov Chain Monte Carlo sampling of the underlying positive definite measure. The lattice QCD measure is a non-local function of the degrees of freedom, which makes the evolution in configuration space very slow, with large autocorrelation times for certain observables. At every Markov step several huge, sparse linear systems have to be solved. Once the gauge field configurations are produced, they are stored on disk for further analysis. The mass spectrum analysis involves the computation of quark propagators, which are solutions of huge linear systems defined by the Dirac operator on the lattice.

As a typical example, on 32^3 x 64 lattices one needs thousands of Monte Carlo steps to obtain one statistically independent configuration. A Krylov solver typically needs hundreds of iterations, and one multiplication by the Wilson-Dirac operator costs about 1 Gflop, so in total 1000 x 100 x 1 Gflop = 100 Tflop are needed per configuration. A typical sample of 100 configurations therefore requires 10 Pflop in total. Given a 10% parallelization efficiency, the requested CPU hours justify three such simulations at different lattice spacings, allowing a scaling study of hadron masses in the continuum limit.

The project idea is the computation of basic properties of matter by simulating the theory of strong interactions, Quantum Chromodynamics, on the lattice on massively parallel computers.
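
As an illustration of the Krylov solver step described above, the following is a minimal conjugate-gradient sketch in C++ (the application's primary language) for a Hermitian positive-definite system A x = b. It is a stand-in only: in the actual application the operator is the Wilson-Dirac normal operator acting on complex spinor fields and is provided by FermiQCD, whereas the small symmetric tridiagonal operator apply_A below is a hypothetical placeholder.

    // Minimal conjugate-gradient sketch for a Hermitian positive-definite system A x = b.
    // Illustrative only: the real operator is the Wilson-Dirac normal operator on
    // complex spinor fields (from FermiQCD); apply_A is a small SPD stand-in.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    using Vec = std::vector<double>;

    // Stand-in operator: symmetric tridiagonal matrix (4 on the diagonal, -1 off it).
    Vec apply_A(const Vec& x) {
        const std::size_t n = x.size();
        Vec y(n, 0.0);
        for (std::size_t i = 0; i < n; ++i) {
            y[i] = 4.0 * x[i];
            if (i > 0)     y[i] -= x[i - 1];
            if (i + 1 < n) y[i] -= x[i + 1];
        }
        return y;
    }

    double dot(const Vec& a, const Vec& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    int main() {
        const std::size_t n = 64;   // real lattice systems have ~10^7-10^8 unknowns
        Vec b(n, 1.0), x(n, 0.0);   // right-hand side (source) and initial guess
        Vec r = b, p = r;           // residual and search direction
        double rr = dot(r, r);
        for (int k = 0; k < 1000 && std::sqrt(rr) > 1e-10; ++k) {
            Vec Ap = apply_A(p);
            double alpha = rr / dot(p, Ap);
            for (std::size_t i = 0; i < n; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
            double rr_new = dot(r, r);
            double beta = rr_new / rr;
            for (std::size_t i = 0; i < n; ++i) p[i] = r[i] + beta * p[i];
            rr = rr_new;
        }
        std::printf("final residual norm: %.3e\n", std::sqrt(rr));
        return 0;
    }

Each iteration consists of one operator application, two inner products and three vector updates; this is the structure behind the estimate above of hundreds of iterations, each dominated by one Wilson-Dirac multiplication.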


Problems Solved

  • Hadron spectrum computation (mass extraction from correlation functions is sketched below)
  • Decay constants and comparison with chiral perturbation theory
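
As standard background for the spectrum computation (textbook lattice methodology, not specific to this code): the mass of a hadron is extracted from the large-time decay of a two-point correlation function built from the quark propagators,

    C(t) = \sum_{\vec{x}} \langle O(\vec{x},t)\, O^{\dagger}(\vec{0},0) \rangle
         \;\xrightarrow{\,t \to \infty\,}\; A\, e^{-m t},
    \qquad
    m_{\mathrm{eff}}(t) = \frac{1}{a} \ln \frac{C(t)}{C(t+a)},

where O is an interpolating operator with the quantum numbers of the hadron, a is the lattice spacing, and the plateau of m_eff(t) at large t gives the ground-state mass.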


Scientific and Social Impact

A full solution of QCD has not yet been achieved. Our lattice study aims to complement other studies performed at different parameters and with different lattice actions.

Collaborations

  • CaSToRC, The Cyprus Institute, Cyprus

Beneficiaries

  • Main beneficiaries are research groups in Computational Physics.

Number of users

2

Development Plan

  • Concept: Completed before the project started
  • Start of alpha stage: M01. Construction of the algorithm and creation of the program.
  • Start of beta stage: M06. Parallelization and debugging of the application.
  • Start of testing stage: M08. Testing on multiprocessor platforms.
  • Start of deployment stage: M10. Performing calculations.
  • Start of production stage:

Resource Requirements

  • Number of cores required for a single run: from 4 up to 32
  • Minimum RAM/core required: 1 GB
  • Storage space during a single run: 1-200 GB (an illustrative size estimate is given below)
  • Long-term data storage: 1 TB
  • Total core hours required: Unknown
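
A rough, purely illustrative estimate of where these figures come from, assuming double-precision SU(3) gauge links and point-source propagators on the 32^3 x 64 lattice quoted in the Short Description (the exact data layout used by FermiQCD may differ):

    #include <cstdio>

    int main() {
        // Illustrative sizes only; not measured from the application itself.
        const long long sites      = 32LL * 32 * 32 * 64;      // 32^3 x 64 lattice
        const long long link_bytes = 3 * 3 * 2 * 8;            // SU(3) link: 3x3 complex doubles
        const long long gauge      = 4 * sites * link_bytes;   // four links per site
        const long long prop       = sites * 12 * 12 * 2 * 8;  // 12x12 complex matrix per site
        std::printf("gauge configuration:  %.2f GB\n", gauge / 1e9); // ~1.2 GB
        std::printf("one quark propagator: %.2f GB\n", prop / 1e9);  // ~4.8 GB
        return 0;
    }

On this rough basis one gauge configuration is about 1.2 GB and one propagator about 4.8 GB, which is consistent in order of magnitude with the per-run and long-term storage figures listed above.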

Technical Features and HP-SEE Implementation

  • Primary programming language: C++
  • Parallel programming paradigm: OpenMP (a minimal usage sketch is given below)
  • Main parallel code: OpenMP
  • Pre/post processing code: Developed in-house
  • Application tools and libraries: FermiQCD, OpenMP
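
A minimal sketch of the OpenMP pattern named above, applied to a generic loop over lattice sites; the per-site work here is a hypothetical placeholder, whereas in the real code it is the application of the Wilson-Dirac operator through FermiQCD:

    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int volume = 32 * 32 * 32 * 64;   // number of lattice sites
        std::vector<double> in(volume, 1.0), out(volume, 0.0);

        // Sites are independent, so the loop is split across threads.
        #pragma omp parallel for
        for (int site = 0; site < volume; ++site) {
            // Placeholder kernel; the real one combines neighbouring gauge
            // links and spinor components (Wilson-Dirac application).
            out[site] = 2.0 * in[site];
        }

        std::printf("threads: %d, out[0] = %f\n", omp_get_max_threads(), out[0]);
        return 0;
    }

Compiled with OpenMP enabled (e.g. g++ -fopenmp), the loop is distributed over the 4 to 32 cores listed under Resource Requirements.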

Usage Example

...

Infrastructure Usage

  • Home system: ...
    • Applied for access on: ...
    • Access granted on: ...
    • Achieved scalability: ... cores
  • Accessed production systems:
  1. ...
    • Applied for access on: ...
    • Access granted on: ...
    • Achieved scalability: ... cores
  2. ...
    • Applied for access on: ...
    • Access granted on: ...
    • Achieved scalability: ... cores
  • Porting activities: ...
  • Scalability studies: ...

Running on Several HP-SEE Centres

  • Benchmarking activities and results: ...
  • Other issues: ...

Achieved Results

...

Publications

  • ...

Foreseen Activities

  • ...
  • ...