CFDOF

General Information

  • Application's name : CFD Analysis of Combustion
  • Application's acronym: CFDOF
  • Virtual Research Communities : Computational Chemistry Applications
  • Scientific contact : Sreten Lekic, slekic@blic.net
  • Technical contact : Sreten Lekic, slekic@blic.net
  • Developers : Sreten Lekic, Faculty of Mech. Engineering, University of Banja Luka (UoBL), Bosnia and Herzegovina
  • Web site : http://wiki.hp-see.eu/index.php/CFDOF

Application and Short Description

As part of a European Union FP6 project for the Western Balkans, we developed a universal burner for gaseous fuel, as a component of an economical domestic boiler, using computational simulation. We encountered unpredictable timing problems during the iterative steps when using nodes located far apart: the data transfer speed between distant computers is a limiting factor, and the main problem is insufficient RAM during preprocessing and processing. Our latest analysis shows that the parallelization can be successful on cluster or grid configurations, and the processes are highly scalable because the partitioning divides the computational volume into sub-volumes with only small overlaps. Our aim is to develop a more sophisticated simulation model through computational simulation on a more powerful computer system.

We chose the open-source OpenFOAM CFD software under Linux for further development. We isolated one small segment of the burner and, using the finite volume method, built a mesh of about one million cells, including the effects of fluid dynamics and a solver module for the chemical reactions of the combustion process. With temperature, species properties and pressure difference as input parameters, we obtain the 3D dynamics of combustion, as well as the temperature, pressure and velocity fields and the distributions of species and pollutants. The segmented pilot program uses a limited amount of RAM and runs in parallel on 16 CPUs. By the end of the project we expect a full-scale solution, with a pre-processed mesh, on a grid or supercomputer utilizing 128-512 CPUs, with a linear increase in RAM requirements as well (roughly 1 GB per core).
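
The segmented pilot follows the standard OpenFOAM parallel workflow: the case is split into one sub-domain per core and the solver is launched under MPI. The commands below are only a minimal sketch; the solver name (reactingFoam) and the case layout are assumptions, not taken from the project files.

  # split the case into 16 sub-domains, as configured in system/decomposeParDict
  decomposePar
  # run the combustion solver in parallel on 16 cores (solver name is an assumption)
  mpirun -np 16 reactingFoam -parallel > log.solver 2>&1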

Problems Solved

The use of 64-bit architectures, HPC resources and software gives us the opportunity to simulate the complete burner and, in the future, to design and build a more realistic model of the real device. Simulations are carried out with OpenFOAM using its combustion and chemistry modules; the mesh size is over 13 million elements. At full scale, the partitioned parallel simulation will occupy approximately 1 GB of RAM per core involved in the computation. A particular challenge is dividing the domain in such a way that all CPUs are optimally utilized.
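
In OpenFOAM the partitioning is controlled by system/decomposeParDict. The fragment below is an illustrative sketch for a 128-core run; the choice of the scotch method (which balances the number of cells per processor automatically) versus an explicit hierarchical split is an assumption about how the load-balancing problem described above could be addressed, not the project's actual settings.

  // system/decomposeParDict (fragment; FoamFile header omitted)
  numberOfSubdomains  128;

  // scotch balances the cell count per processor automatically
  method              scotch;

  // alternatively, an explicit split along the coordinate directions:
  // method              hierarchical;
  // hierarchicalCoeffs
  // {
  //     n      (8 4 4);   // 8 x 4 x 4 = 128 sub-domains
  //     delta  0.001;
  //     order  xyz;
  // }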

Scientific and Social Impact

We expect better insight into the real chemical reactions during the combustion process and better control of the fluid dynamics parameters in the geometry design. We also expect that eddy-concept theory, combined with species calculations, can explain some nonlinear and acoustic influences on the combustion dynamics.

The goal is the design of a flexible burner for low-quality gaseous fuels, with high efficiency and low pollutant emissions.

Collaborations

  • Faculties of the University of Banja Luka: Faculty of Mechanical Engineering in Banja Luka, Dept. of Thermotechnics; Faculty of Science, Dept. of Physics; Technology Faculty in Banja Luka, Dept. of Chemistry; Electrotechnical Faculty in Banja Luka, Dept. of Computing Technology.
  • Faculties elsewhere in Bosnia and Herzegovina, such as the Faculty of Mechanical Engineering in Sarajevo.
  • In other Balkan countries: Faculty of Mechanical Engineering, Dept. of Fluid Dynamics in Belgrade; Institute Vinca in Belgrade.

Beneficiaries

Faculties of the University of Banja Luka: Faculty of Mechanical Engineering in Banja Luka, Dept. of Thermotechnics; Faculty of Science, Dept. of Physics; Technology Faculty in Banja Luka, Dept. of Chemistry; Electrotechnical Faculty in Banja Luka, Dept. of Computing Technology.

Number of users

7

Development Plan

  • Concept: Done before the project started.
  • Start of alpha stage: M4
  • Start of beta stage: M8
  • Start of testing stage: M10
  • Start of deployment stage: M13
  • Start of production stage: M18

Resource Requirements

  • Number of cores required for a single run: Up to 512
  • Minimum RAM/core required: 1 GB
  • Storage space during a single run: 40 GB
  • Long-term data storage: 200 GB
  • Total core hours required: 8000
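
For reference, a single run at the scale listed above could be requested with a batch script along the following lines. This is only a sketch assuming a PBS/Torque-style scheduler and 8-core nodes; the queue layout of the actual target systems may differ, and the solver name is an assumption.

  #!/bin/bash
  #PBS -N cfdof-burner
  #PBS -l nodes=64:ppn=8          # 512 cores in total (assuming 8 cores per node)
  #PBS -l walltime=16:00:00       # 512 cores x 16 h is roughly the 8000 core hours listed above
  cd $PBS_O_WORKDIR

  decomposePar
  mpirun -np 512 reactingFoam -parallel > log.solver 2>&1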

Technical Features and HP-SEE Implementation

  • Primary programming language: C, OpenFOAM
  • Parallel programming paradigm: MPI
  • Main parallel code: MPI
  • Pre/post processing code: OpenFOAM, ParaView
  • Application tools and libraries: OpenFOAM, ParaView
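
With the tools listed above, a typical pre/post-processing chain looks as follows. This is a generic OpenFOAM/ParaView sequence given as a sketch, not a record of the project's exact workflow; the mesher and solver names are assumptions.

  blockMesh                               # generate the mesh (or import it from another mesher)
  decomposePar                            # pre-processing: partition the case for MPI
  mpirun -np 128 reactingFoam -parallel   # parallel solve
  reconstructPar                          # merge the per-processor results
  paraFoam                                # open the case in ParaView for post-processing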

Usage Example

Exploring various configurations of universal gas burners (single layer, dual layer, different dimensions), varying the boundary conditions (concentration of CH4, inlet flow rate, etc.), and analyzing the simulation output.
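
Varying the CH4 concentration or the inlet flow rate amounts to editing the inlet boundary conditions of the corresponding fields. The fragment below is a hypothetical 0/CH4 file for a species mass-fraction field; the patch names and the inlet value are illustrative only.

  // 0/CH4 (fragment; FoamFile header omitted)
  dimensions      [0 0 0 0 0 0 0];

  internalField   uniform 0;

  boundaryField
  {
      inlet
      {
          type            fixedValue;
          value           uniform 0.055;   // CH4 mass fraction at the inlet (illustrative)
      }
      outlet
      {
          type            zeroGradient;
      }
      walls
      {
          type            zeroGradient;
      }
  }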

Infrastructure Usage

  • Home system: PARADOX/IPB
    • Applied for access on: 09.2010
    • Access granted on: 09.2010
    • Achieved scalability: 64 cores
  • Accessed production systems:
  1. PARADOX/IPB
    • Applied for access on: 09.2010
    • Access granted on: 09.2010
    • Achieved scalability: 128 cores
  2. Pecs SC
    • Applied for access on: 2013-01-18
    • Access granted on: 2013-01-21
    • Achieved scalability: (currently testing)
  • Porting activities: The application has been successfully ported to the x86_64 architecture and Linux OS from proprietary solutions on 32-bit Windows systems.
  • Scalability studies: Small-scale scalability and benchmarking for up to 32 cores - the application scales very well. Larger models also scale, but it is not feasible to run several 64- or 128-CPU runs just for benchmarking. A comparison of the transient and steady-state approaches shows significantly better performance for the steady-state approach (see the fragment below).
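
For context, the switch between a transient and a steady-state run in OpenFOAM is made mainly through the time-derivative scheme in system/fvSchemes (together with the choice of solver). The fragment below is a generic illustration, not the project's actual settings.

  // system/fvSchemes (fragment)
  ddtSchemes
  {
      default         steadyState;   // steady-state run: time derivatives are dropped
      // default      Euler;         // a transient run would use e.g. Euler instead
  }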

Running on Several HP-SEE Centres

  • Benchmarking activities and results: Not applicable yet
  • Other issues: After investigation, we gave up on porting the application to BlueGene/P; the only version that can currently be compiled on this architecture does not support all the modules needed for our simulation.

Achieved Results

Not applicable yet


Publications

Not applicable yet

Foreseen Activities

  • Analysis of migration to OpenFOAM 2.0
  • Scalability studies for up to 256 CPUs
  • Resolving bottlenecks in our workflow (recomposition of the model after simulation, visualization); see the sketch below
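
One way to ease the recomposition and visualization bottleneck, sketched here under the assumption of a standard OpenFOAM 2.x installation, is to reconstruct only the time steps that are actually needed and to convert them to VTK files for ParaView:

  reconstructPar -latestTime        # merge only the last written time step
  foamToVTK -latestTime             # convert it to VTK files for ParaView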