== Scientific and HPC presentations and posters ==

=== GR ===
* [GRNET IMBB] P. Poirazi<br>'''Dendrites and information processing: insights from compartmental models'''<br>''CNS*2011 workshop on “Dendrite function and wiring: experiments and theory”'', July 27, 2011, Stockholm, Sweden
* [GRNET IMBB] <br>'''Poster presentation'''<br>''EMBO Conference Series on THE ASSEMBLY AND FUNCTION OF NEURONAL CIRCUITS'', September 23-29, 2011, Ascona, Switzerland
* [GRNET IMBB] A. Oulas<br><br>''62nd Conference of the Hellenic Society for Biochemistry and Molecular Biology'', December 9-11, 2011, Athens, Greece
* [GRNET IMBB] P. Poirazi<br>'''Spatio-temporal encoding of input characteristics in biophysical model cells and circuits'''<br>''FENS-HERTIE-PENS Winter School'', January 8-15, 2012, Obergurgl, Austria

=== BG ===
* [IICT-BAS] E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova<br>'''“Two Algorithms for Modified Owen Scrambling on GPU”'''<br>''Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM’2010'', December 20-21, 2010, Sofia, Bulgaria
* [IICT-BAS] E. Atanassov, S. Ivanovska<br>'''“Efficient Implementation of Heston Stochastic Volatility Model Using GPGPU”, Special Session “High Performance Monte Carlo Simulation”'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova<br>'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] G. Bencheva<br>'''Computer Modelling of Haematopoietic Stem Cells Migration'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov<br>'''Improving the Efficiency of Parallel FEM Simulations on Voxel Domains'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov<br>'''Optimizing the Performance of a Parallel Unstructured Grid AMG Solver'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] S. Ivanovska, A. Karaivanova, N. Manev<br>'''Numerical Integration Using Sequences Generating Permutations'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] K. Shterev, S. Stefanov, E. Atanassov<br>'''A Parallel Algorithm with Improved Performance of Finite Volume Method (SIMPLE-TS)'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova<br>'''How to Use HPC Resources efficiently by a Message Oriented Framework'''<br>''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria
* [IICT-BAS] K. Georgiev, I. Lirkov, S. Margenov<br>'''Highly Parallel Alternating Directions Algorithm for Time Dependent Problems'''<br>''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria
* [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov<br>'''Comparison of Two Techniques for Radio-Frequency Hepatic Tumor Ablation through Numerical Simulation'''<br>''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria
* [IICT-BAS] T. Gurov, A. Karaivanova, N. Manev<br>'''Monte Carlo Simulations of Electron Transport using a Class of Sequences Generating Permutations'''<br>''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria
* [IICT-BAS] A. Karaivanova<br>'''“Message Oriented Framework for Efficient Use of HPC Resources”, Special Session “High-Performance Computations for Monte Carlo Applications”'''<br>''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria
* [IICT-BAS] E. Atanassov<br>'''“Efficient Implementation of Heston Model Using GPGPU”, Special Session “High-Performance Computations for Monte Carlo Applications”'''<br>''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria
* [IICT-BAS] E. Atanassov<br>'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''<br>''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM
* [IICT-BAS] N. Manev<br>'''Monte Carlo Methods using a New Class of Congruential Generators'''<br>''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM
* [IICT-BAS] T. Gurov<br>'''Study Scalability of SET Application using the Bulgarian HPC Infrastructure'''<br>''8th International Conference on Computer Science and Information Technologies – CSIT2011'', September 26-30, 2011, Yerevan, Armenia
* [IICT-BAS] E. Atanassov<br>'''Stochastic Modeling of Electron Transport on different HPC architectures'''<br>''PRACE Workshop on HPC approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria
* [IICT-BAS] N. Manev<br>'''ECM Integer factorization on GPU Cluster'''<br>''Jubilee 35th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO2012)'', May 21-25, 2012, Opatija, Croatia
* [IICT-BAS] E. Atanassov<br>'''Efficient Implementation of a Stochastic Electron Transport Simulation Algorithm Using GPGPU Computing'''<br>''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria
* [IICT-BAS] N. Manev<br>'''Twisted Edwards Curves Integer Factorization on GPU Cluster'''<br>''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria
* [IICT-BAS] E. Atanassov<br>'''Monte Carlo Methods for Electron Transport: Scalability Study'''<br>''11th ISPDC2012 Conference'', June 25-29, 2012, Munich, Germany
* [IICT-BAS] T. Gurov<br>'''Efficient Monte Carlo algorithms for Inverse Matrix Problems'''<br>''7th PMAA’12 Conference'', June 28-30, 2012, Birkbeck University of London, UK
* [IICT-BAS] A. Karaivanova<br>'''Randomized quasi-Monte Carlo for Matrix Computations'''<br>''7th PMAA’12 Conference'', June 28-30, 2012, Birkbeck University of London, UK
* [IICT-BAS IM-BAS] K. Shterev, N. Kulakarni, S. Stefanov<br>'''Influence of Reservoirs on Pressure Driven Gas Flow in a Micro-channel'''<br>''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria
* [IICT-BAS IM-BAS] K. Shterev<br>'''Comparison of Some Approximation Schemes for Convective Terms for Solving Gas Flow past a Square in a Microchannel'''<br>''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria
* [IICT-BAS] A. Karaivanova, “Monte Carlo methods for Electron Transport: Scalability Study Using HP-SEE Infrastructure”, invited talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.
* [IICT-BAS] E. Atanassov, “Efficient Parallel Simulations of Large-Ring Cyclodextrins on HPC cluster”, contributed talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.
* [IICT-BAS] D. Georgiev, “Number Theory Algorithms on GPU cluster”, contributed talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.
* [IICT-BAS] E. Atanassov, “Conformational Analysis of Kyotorphin Analogues Containing Unnatural Amino Acids”, poster presentation during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.
* [IM-BAS] K. Shterev, “Determination of zone of flow instability in a gas flow past a square particle in a narrow microchannel”, contributed talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.

=== RO ===
* [IFIN HH] Silviu Panica, Dana Petcu, Daniela Zaharie<br>'''Gaining Experience in BlueGene/P Application Development. A Case Study in Remote Sensing Data Analysis'''<br>''HPCe – High Performance Computing with application in environment, SYNASC 2011'', September 26-29, 2011, Timisoara, Romania
* [IFIN HH] Ionut Vasile, Dragos Ciobanu-Zabet<br>'''Software tools for HPC users at IFIN-HH'''<br>''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania
* [IFIN HH] Silviu Panica, Marian Neagul, Daniela Zaharie and Dana Petcu<br>'''Services for Earth observations: from cluster and cloud to HPC and Grid services'''<br>''COST Action 0805 Complex HPC'', January 26, 2012, Timisoara, Romania
* [IFIN HH] HP-SEE User Forum 2012, October 17-19, 2012, Belgrade, Serbia:
** Emergence of resonant waves in cigar-shaped Bose-Einstein condensates, Alexandru Nicolin
** On HPC for Hyperspectral Image Processing, Silviu Panica, Daniela Zaharie, Dana Petcu
** Investigations of biomolecular systems within the ISyMAB simulation framework, Ionut Vasile, Dragos Ciobanu-Zabet
** Formation of Faraday and Resonant Waves in Driven High-Density Bose-Einstein Condensates, Mihaela Carina Raportaru (poster)
* [IFIN HH] Proceedings of the RO-LCG 2012 IEEE International Conference, Cluj, Romania, October 25-27, 2012, IEEE CFP1232T-PRT, ISBN 978-973-662-710-1:
** National and regional organization of collaborations in advanced computing, Mihnea Dulea, pp. 63-66
** Computational Challenges in Processing Large Hyperspectral Images, Dana Petcu et al.
** Eagle Eye – Feature Extraction from Satellite Images on a 3D Map of Romania, Razvan Dobre et al.
** Integrated System for Modeling and data Analysis of complex Biomolecules (ISyMAB), I. Vasile, D. Ciobanu-Zabet

=== HU ===
* [NIIF] M. Kozlovszky, Gergely Windisch, Akos Balasko<br>'''Bioinformatics eScience gateway services using the HP-SEE infrastructure'''<br>''Summer School on Workflows and Gateways for Grids and Clouds 2012'', July 2-6, 2012, Budapest, Hungary
* [NIIF SZTAKI] M. Kozlovszky, A. Balasko, P. Kacsuk<br>'''Enabling JChem application on grid'''<br>''ISGC 2011 & OGF 31, International Symposium on Grids and Clouds (ISGC 2011) & Open Grid Forum (OGF 31)'', March 21-25, 2011, Taipei, Taiwan
* [NIIF SZTAKI] M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. Kacsuk<br>'''New developments of gUSE & WS-PGRADE to support e-science gateways'''<br>''EGI User Forum'', April 2011, Vilnius, Lithuania
* [NIIFI] Presentation at the HP-SEE User Forum 2012: “Performance and scalability evaluation of short fragment sequence alignment applications” by G. Windisch, A. Balasko, M. Kozlovszky.
* [NIIFI] Presentation at the HP-SEE User Forum 2012: “Advanced Vulnerability Assessment Tool for Distributed Systems” by S. Acs, M. Kozlovszky.

=== RS ===
* [IPB] D. Vudragovic<br>'''Extensions of the SPEEDUP Path Integral Monte Carlo Code'''<br>''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria
* [IPB] A. Balaz<br>'''Numerical Study of Surface Waves in Binary Bose-Einstein Condensates'''<br>''Seminar at Vinca Institute of Nuclear Sciences'', November 17, 2011, Belgrade, Serbia
* [IPB] A. Balaz<br>'''Numerical simulations of Faraday waves in binary Bose-Einstein condensates'''<br>''CompPhys11 conference'', November 24-26, 2011, Leipzig, Germany
* [IPB] A. Balaz<br>'''Parametric and Geometric Resonances of Collective Oscillation Modes in Bose-Einstein Condensates'''<br>''Photonica 2011 conference'', August 29 - September 2, 2011, Belgrade, Serbia
* [IPB] Antun Balaz, "Numerical Study of Ultracold Quantum Gases: Formation of Faraday Patterns, Geometric Resonances, and Fragmentation", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Nenad Vukmirovic, "Computational approaches for electronic properties of semiconducting materials and nanostructures", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Igor Stankovic, "Numerical Simulations of the Structure and Transport Properties of the Complex Networks", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Jaksa Vucicevic, "Iterative Perturbative Method for a Study of Disordered Strongly Correlated Systems", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Milos Radonjic, "Electronic Structure and Lattice Dynamics Calculations of FeSb2 and CoSb2", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Dusan Stankovic, "SCL Quantum Espresso Extensions", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Josip Jakic, "An Analysis of FFTW and FFTE Performance", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB] Petar Jovanovic, "GPAW optimisations", HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB-FCUB] Ivan Juranic, “Use of High Performance Computing in (Bio)Chemistry”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB-FCUB] Branko Drakulic, “Dynamics of uninhibited and covalently inhibited cysteine protease on non-physiological”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB-FCUB] Branko Drakulic, “Free-energy surfaces of 2-[(carboxymethyl)sulfanyl]-4-oxo-4-arylbutanoic acids. Molecular dynamics study in explicit solvents”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.
* [IPB-FCUB] Ilija Cvijetic, “In the search of the HDAC-1 inhibitors. The preliminary results of ligand based virtual screening”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.

=== AL ===
* [UPT] Neki Frasheri, Betim Cico<br>'''Analysis of the Convergence of Iterative Geophysical Inversion in Parallel Systems'''<br>''ICT Innovations 2011'', September 14-16, 2011, Skopje, FYROM
* [UPT] Neki Frasheri, Betim Cico<br>'''Convergence of Gravity Inversion Using OpenMP'''<br>''XVII Scientific-Professional Information Technology Conference 2012'', February 27 – March 2, 2012, Zabljak, Montenegro
* [UPT] N. Frasheri, B. Cico. Reflections on parallelization of Gravity Inversion. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia.
* [UPT] R. Zeqirllari. Quenched Hadron Spectroscopy Using FermiQCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)
* [UPT] D. Xhako. Using Parallel Computing to Calculate Quark-Antiquark Potential from Lattice QCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)

=== MK ===
* [UKIM] Anastas Misev<br>'''Software Engineering for HPC'''<br>''DAAD workshop in software engineering'', August 22-27, 2011, FYROM
* [UKIM] Jane Jovanovski, Boro Jakimovski and Dragan Jakimovski<br>'''Parallel Genetic Algorithms for Finding Solution of System of Ordinary Differential Equations'''<br>''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM
* [UKIM] Anastas Misev, Dragan Sahpaski and Ljupco Pejov<br>'''Implementation of Hybrid Monte Carlo (Molecular Dynamics) – Quantum Mechanical Methodology for Modelling of Condensed Phases on High Performance Computing Environment'''<br>''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM

=== ME ===
* [UOM] Luka Filipović, Danilo Mrdak, Božo Krstajić, "DNA Multigene approach on HPC Using RAxML Software", HP-SEE User Forum 2012, October 17-19, 2012, Belgrade, Serbia

=== MD ===
* [RENAM] Dr. P. Bogatencov, Dr. G. Secrieru<br>'''Numerical analysis of the coupled problem on interaction of ground and elastic-plastic shell under high-speed loads'''<br>''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria
* [RENAM] Dr. Petru Bogatencov, “Using Structured Adaptive Computational Grid for Solving Multidimensional Computational Physics Tasks”, presentation at the HP-SEE User Forum 2012, October 17-19, 2012, Belgrade, Serbia.

=== GE ===
* [GRENA] G. Mikuchadze, “Quantum-Chemical Calculations for the Quantitative Estimations of the Processes in DNA”, presentation at the HP-SEE User Forum 2012, October 17-19, 2012, Belgrade, Serbia.

== HP-SEE dissemination presentations and posters at external events ==

=== GR ===
* [GRNET] O. Prnjat<br>'''High-Performance Computing Infrastructure for South East Europe’s Research Communities'''<br>''EGI Technical Forum'', September 14-17, 2010, Amsterdam, The Netherlands
* [GRNET] I. Liabotis<br>'''HP-SEE - Regional HPC development activities in South-Eastern Europe'''<br>''2nd HellasHPC Workshop'', October 22, 2010, Athens, Greece
* [GRNET] I. Liabotis<br>'''HP-SEE project'''<br>''8th e-Infrastructure Concertation Meeting'', November 4-5, 2010, CERN, Geneva, Switzerland
* [GRNET] O. Prnjat<br>'''HP-SEE Project (overall SEE)'''<br>''The Forum on Research for Innovation in ICT for the Western Balkan Countries'', November 30, 2010, Belgrade, Serbia
* [GRNET] O. Prnjat<br>'''HP-SEE Project (overall SEE)'''<br>''2010 Euro-Africa Conference on e-Infrastructures'', December 9-10, 2010, Helsinki, Finland
* [GRNET] O. Prnjat<br>'''HP-SEE Project (overall SEE)'''<br>''SEERA-EI Sixth Networking Meeting'', January 2011, Sarajevo, Bosnia and Herzegovina
* [GRNET] <br>'''HP-SEE project'''<br>''SEERA-EI Seventh Networking Meeting'', April 2011, Chisinau, Moldova
* [GRNET] <br>'''HP-SEE project'''<br>''NATO Advanced Networking Workshop on "Advanced cooperation in the area of disaster prevention for human security in Central Asia"'', May 16, 2011, Dushanbe, Tajikistan
* [GRNET] <br>'''HP-SEE project'''<br>''SEERA-EI Eighth Networking Meeting'', July 2011, Tirana, Albania
* [GRNET] O. Prnjat<br>'''HP-SEE project'''<br>''e-Infrastructures Concertation Meeting'', September 2011, Lyon, France
* [GRNET] <br>'''HP-SEE project'''<br>''EuroRIs-Net Workshop'', October 11, 2011, Athens, Greece
* [GRNET] <br>'''HP-SEE project'''<br>''SEERA-EI Ninth Networking Meeting'', November 2011, Bucharest, Romania
* [GRNET] Dr. Tryfon Chiotis<br>'''HPC and Cloud Technologies in Greece and South Eastern Europe'''<br>''e-AGE 2011 - Integrating Arab e-Infrastructures in a global environment'', December 2011, Amman, Jordan
* [GRNET] <br>'''HP-SEE project'''<br>''SEERA-EI project conference for the Western audience'', February 2012, Istanbul, Turkey
* [GRNET] <br>'''HP-SEE project'''<br>''SEERA-EI cloud policy workshop'', April 2011, Chisinau, Moldova
* [GRNET] <br>'''Poster entitled “Advanced High Performance Computing Services for networked regional Virtual Research Communities”'''<br>''TNC2012'', May 21-24, 2012, Reykjavík, Iceland
* [GRNET] <br>'''Presentation to high-level European policy-makers'''<br>''e-Infrastructure Reflection Group workshop'', June 2012, Copenhagen, Denmark
* [GRNET] HP-SEE regional HPC policy, operations and user support approach presented to the widest possible worldwide audience during CHAIN project workshops in China, India and Sub-Saharan Africa.

=== BG ===
* [IICT-BAS] E. Atanassov<br>'''Bulgarian HPC Infrastructure'''<br>''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes'', May 17, 2011, Sofia, Bulgaria
* [IICT-BAS] T. Gurov<br>'''e-Infrastructure for e-Science'''<br>''Seminar at the Institute of Oceanology'', March 11, 2011, Varna, Bulgaria
* [IICT-BAS] T. Gurov<br>'''“High-Performance Computing in South East Europe and Monte Carlo Simulations”, Special Session “High-Performance Computations for Monte Carlo Applications”'''<br>''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria
* [IICT-BAS] A. Karaivanova<br>'''HP-SEE and SEERA-EI'''<br>''ICRI 2012'', March 21-23, 2012, Copenhagen, Denmark
* [IICT-BAS] E. Atanassov<br>'''National e-Infrastructure responsibilities of IICT-BAS''', during the “Day of open doors for young researchers in the field of mathematics and informatics”<br>''annual conference "European Student Conference in Mathematics - EUROMATH - 2012"'', March 21-25, 2012, Sofia, Bulgaria
* [IICT-BAS] E. Atanassov<br>'''Poster presentation of HP-SEE activities'''<br>''RIDE 2012'', May 21-25, 2012, Opatija, Croatia
* [IICT-BAS] T. Gurov<br>'''High-Performance Computing Infrastructure for South East Europe’s Research Communities'''<br>''2nd Workshop of Networking Initiatives for ICT related projects'', September 10-11, 2010, Varna, Bulgaria
* [IICT-BAS] E. Atanassov and T. Gurov, “e-Infrastructures for Scientific Computations”, talk during the Researchers’ Night at IICT-BAS, September 28, 2012, Sofia, Bulgaria.

=== RO ===
* [IFIN HH] M. Dulea<br>'''The HP-SEE Project'''<br>''RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations'', December 6-7, 2010, Bucharest, Romania
* [IFIN HH] M. Dulea<br>'''National infrastructure for high performance scientific computing'''<br>''"GRID and HPC in Modern Medical Environments" Conference'', May 08, 2011, Bucharest, Romania
* [IFIN HH] M. Dulea<br>'''Romanian HPC Infrastructure'''<br>''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes'', May 17, 2011, Sofia, Bulgaria
* [IFIN HH] M. Dulea<br>'''Romanian Association for the Promotion of the Advanced Computational Methods in Scientific Research (ARCAŞ)'''<br>''Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania
* [IFIN HH] M. Dulea<br>'''National and regional HPC collaboration'''<br>''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania
* [IFIN HH] M. Dulea<br>'''High performance computing for Romanian scientific community'''<br>''"Study for the operationalization of the National Supercomputing Centre"'', March 15, 2012, Bucharest, Romania

=== TR ===
* [TUBITAK-ULAKBIM] Murat Soysal<br>'''Integration of South Caucasus National Research and Education Networks to Europe and Limitations'''<br>''Policy Stakeholder Conference “EU – Eastern Europe/Central Asia Cooperation in Research and Innovation: The way towards 2020”'', November 15-16, 2011, Warsaw, Poland

=== HU ===
* [NIIF] M. Kozlovszky<br>'''HP-SEE project and the HPC Bioinformatics Life Science gateway'''<br>''Summer School on Workflows and Gateways for Grids and Clouds 2012'', July 2-6, 2012, Budapest, Hungary

=== RS ===
* [IPB] <br>'''HP-SEE'''<br>''50 years of IPB'', May 2011, Belgrade, Serbia
* [IPB] <br>'''E-Infrastructures for science in Europe'''<br>''ISC'11, BoF session'', June 20-22, 2011, Hamburg, Germany
* [IPB] D. Stojiljkovic<br>'''High Performance Computing Infrastructure for South East Europe’s Research Communities (HP-SEE) project along with some selected supported applications'''<br>''PRACE Workshop on HPC Approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria

=== AL ===
* [UPT] <br>'''Presentation of HP-SEE and the GMI application'''<br>''round table of AITA (Albanian IT Association)''
* [UPT] N. Frasheri, B. Cico. Computing developments in Albania and its applications. International Workshop on recent LHC results and related topics, 8-9 October 2012, Tirana, Albania
* [UPT] N. Frasheri. HP and High Performance Computing in Albania. HP Solutions event, Tirana, 26 September 2012
​
=== MK ===
* [UKIM] Anastas Misev<br>'''High performance computing – in Europe, region and at home'''<br>''first LANFest event'', May 20-22, 2011, Skopje, FYROM
* [UKIM] Anastas Misev<br>'''Presentation of the HP-SEE project at the special HPC session'''<br>''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM
* [UKIM] Anastas Mishev, "HPC in the industry", presentation at the „High Performance computing – main driver of modern e-science and economy“ dissemination and promotional event co-organized by HP and FINKI.

=== ME ===
* [UOM] <br>'''Presentation of HP-SEE project'''<br>''XVII Scientific-Professional Information Technology Conference 2012'', February 28, 2012, Zabljak, Montenegro

=== MD ===
* [RENAM] Dr. P. Bogatencov<br>'''Current State of Distributed Computing Infrastructure Deployment in Moldova'''<br>''10th RoEduNet International Conference - Networking in Education and Research'', June 23-25, 2011, Iasi, Romania
* [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru<br>'''eInfrastructure development projects in Moldova at FP7 part 1'''<br>''FP7 Projects Information Day organized by FP7 National Contact Points with international participation “New status - new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova
* [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru<br>'''eInfrastructure development projects in Moldova at FP7 part 2'''<br>''FP7 Projects Information Day organized by FP7 National Contact Points with international participation “New status - new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova
* [RENAM] Mr. Nicolai Iliuha<br>'''HPC and Cloud computing technologies'''<br>''Master-class „Cloud computing and computing in the Cloud”'', May 11, 2012, Chisinau, Moldova
* [RENAM] Dr. P. Bogatencov<br>'''CBF Solution Implementation for Linking Moldovan R&E Network to GEANT'''<br>''10th RoEduNet International Conference - Networking in Education and Research'', June 23-25, 2011, Iasi, Romania
* [RENAM] Nicolai Iliuha<br>'''High Performance Computing: current state of HP-SEE Project'''<br>''5th RENAM User’s Conference–2011: “Informational Services for Research and Educational Society of Moldova”'', September 22-23, 2011, Chisinau, Moldova
* [RENAM] Dr. P. Bogatencov<br>'''Participation in European eInfrastructure Projects - Experience and Case Studies'''<br>''Official Launching of Moldova Association to FP7'', January 27, 2012, Chisinau, Moldova
* [RENAM] Mr. Nicolai Iliuha<br>'''Computational Resources of the Regional South-East Europe High Performance Computing Infrastructure'''<br>''4th International Conference "Telecommunications, Electronics and Informatics"'', May 17-19, 2012, Chisinau, Moldova
* [RENAM] Dr. P. Bogatencov<br>'''Perspectives of Regional Cross-Border Fiber Connections Development'''<br>''4th International Conference "Telecommunications, Electronics and Informatics"'', May 17-19, 2012, Chisinau, Moldova
* [RENAM] Dr. G. Secrieru<br>'''“The Research and Education Network as an Infrastructure for Grid Applications” (presented in Russian)'''<br>''4th International Conference "Telecommunications, Electronics and Informatics"'', May 17-19, 2012, Chisinau, Moldova
* [RENAM] Dr. P. Bogatencov<br>'''International and regional projects for computing technologies development'''<br>''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova
* [RENAM] Mr. Nicolai Iliuha<br>'''Access to regional High Performance Computing (HPC) resources'''<br>''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova
* [RENAM IMI ASM] Dr. P. Bogatencov<br>'''eInfrastructure Calls in FP7 Capacities specific programme'''<br>''Launching new FP7 Calls for 2013'', July 12, 2012, Chisinau, Moldova
* [RENAM IMI ASM] Dr. G. Secrieru<br>'''Access to the Regional HPC Resources and Strategy of Their Development'''<br>''20th Conference on Applied and Industrial Mathematics'', August 22-25, 2012, Chisinau, Moldova
* [RENAM IMI ASM] Dr. P. Bogatencov and Dr. G. Secrieru<br>'''HP-SEE project, local HPC resources and complex application development in IMI ASM'''<br>''Joint meeting with specialists from the Global Initiatives for Proliferation Prevention (GIPP) of the US Department of Energy, organized in IMI ASM by the Science and Technology Centre'', June 22, 2012, Ukraine
* [RENAM] Nicolai Iliuha, "Computing Infrastructure and Services Deployment for Research Community of Moldova", presentation at the 6th International Conference on Application of Information and Communication Technologies, October 17-19, 2012, Tbilisi, Georgia (http://aict.info/2012/).
* [RENAM] Dr. P. Bogatencov, keynote presentation "Computational Resources of the Regional South-East Europe High Performance Computing Infrastructure" and session report "Modeling of three-dimensional gas dynamics problems on multiprocessor systems and graphical processors" (prepared together with Prof. Boris Rybakin), Fourth International Scientific Conference "Supercomputer Systems and Applications" (SSA`2012), October 23-25, 2012, UIIP NAS Belarus, Minsk, Belarus (http://ssa2012.bas-net.by/en/).
* [RENAM] Nicolai Iliuha, report „National, regional and European Grid infrastructures; participation of Moldova in the EGI-InSPIRE project and the regional HP-SEE project”, technical-scientific workshop „Computational resources, special software and procedures of obtaining access to the HPC cluster in the State University of Moldova”, November 7, 2012, State University of Moldova, Chisinau, Moldova.

=== AM ===
* [IIAP_NAS_RA] Yuri Shoukourian<br>'''Stimulating and Revealing Technological Innovations in Academic Institutions'''<br>''ArmTech Congress'', October 10-11, 2011, Yerevan, Armenia
* [IIAP_NAS_RA] Yuri Shoukourian<br>'''Armenian Research & Educational E-Infrastructure'''<br>''Eastern Partnership Event'', November 07-08, 2011, Bucharest, Romania

=== GE ===
* [GRENA] R. Kvatadze<br>'''Georgian Research and Educational Networking Association GRENA'''<br>''First ATLAS/South Caucasus Software / Computing Workshop & Tutorial'', October 26, 2010, Tbilisi, Georgia
* [GRENA] R. Kvatadze<br>'''E-Infrastructure in South Caucasus'''<br>''Eastern Europe Partnership Event – Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania
* [GRENA] R. Kvatadze<br>'''Participation of GRENA in European Commission projects'''<br>''Meeting of the EU Delegation in Georgia'', June 18, 2012, Tbilisi, Georgia
* [GRENA] R. Kvatadze<br>'''E-infrastructure in South Caucasus Countries for Science'''<br>''EC funded GEO-RECAP and IDEALIST projects meetings'', June 27-28, 2012, Tbilisi, Georgia
* [GRENA] R. Kvatadze<br>'''E-Infrastructure for Science in Georgia'''<br>''6th International Conference on Application of Information and Communication Technologies'', October 17-19, 2012, Tbilisi, Georgia. http://aict.info/2012/
* [GRENA] R. Kvatadze<br>'''E-Infrastructure for Science in Georgia'''<br>''2nd ATLAS-South Caucasus Software/Computing Workshop'', October 23-26, 2012, Tbilisi, Georgia. http://dmu-atlas.web.cern.ch/dmu-atlas/2012/index.html

<!--------- OLD LIST
=== GR ===
* (GRNET), O. Prnjat, "High-Performance Computing Infrastructure for South East Europe’s Research Communities", EGI Technical Forum, September 2010.
* (GRNET), I. Liabotis, Presentation of the HP-SEE project at the 8th e-Infrastructure Concertation Meeting, 4-5 November 2010, CERN, Geneva (http://www.e-sciencetalk.org/econcertation/).
* (GRNET), I. Liabotis, Presentation of the project entitled "HP-SEE - Regional HPC development activities in South-Eastern Europe" at the 2nd HellasHPC Workshop, 22 October 2010, Athens, Greece.
* (GRNET), O. Prnjat, 2010 Euro-Africa Conference on e-Infrastructures, 9-10 December 2010, Helsinki, Finland
* (GRNET), O. Prnjat, The Forum on Research for Innovation in ICT for the Western Balkan Countries, 30 November 2010, Belgrade, Serbia
* (GRNET), O. Prnjat, SEERA-EI Sixth Networking Meeting, January 2011, Sarajevo
* (GRNET), I. Liabotis, Presentation of the project at the National HPC conference, 9 December 2010, Sofia, Bulgaria
* (GRNET), Presentation of HP-SEE at the NATO Advanced Networking Workshop on "Advanced cooperation in the area of disaster prevention for human security in Central Asia", May 2011, Dushanbe.
* (GRNET), HP-SEE was presented as part of the overall SEE activities’ presentation under the umbrella of the SEERA-EI project, at the following events:
** SEERA-EI Seventh Networking Meeting, April 2011, Chisinau.
** SEERA-EI cloud policy workshop, April 2011, Chisinau.

=== BG ===
* (IPP-BAS/IICT-BAS) T. Gurov, "High-Performance Computing Infrastructure for South East Europe's Research Communities", 2nd Workshop of Networking Initiatives for ICT related projects, 10-11.09.2010, Varna, Bulgaria.
* (IPP-BAS/IICT-BAS) A. Karaivanova, T. Gurov and E. Atanassov, "HP-SEE Overview", HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.
* (IPP-BAS/IICT-BAS) E. Atanassov, "HP-SEE Infrastructure and Access", HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.
* (IPP-BAS/IICT-BAS) T. Gurov, "Introduction to Parallel Computing: Message Passing and Shared Memory", HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.
* (IPP-BAS/IICT-BAS) E. Atanassov, "Advanced Application Porting and Optimization", HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.
* (IPP-BAS/IICT-BAS) E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova, "Two Algorithms for Modified Owen Scrambling on GPU", Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM'2010, 20-21 December 2010, Sofia, Bulgaria.
* (IICT-BAS), T. Gurov, "Overview of the HP-SEE project", 1st National HPC training, 23-24 March, Sofia, Bulgaria.
* (IICT-BAS), E. Atanassov, "HPC Cluster at IICT-BAS and HP-SEE Infrastructure", 1st National HPC training, 23-24 March, Sofia, Bulgaria.
* (IICT-BAS), E. Atanassov, "Introduction to GPU Computing", 1st National HPC training, 23-24 March, Sofia, Bulgaria.
* (IICT-BAS), T. Gurov, "e-Infrastructure for e-Science", presented at a seminar at the Institute of Oceanology, 11 March 2011, Varna, Bulgaria.
* (IICT-BAS), G. Bencheva, "Introduction to Parallel Computing", 1st National HPC training, 23-24 March, Sofia, Bulgaria.
* (IICT-BAS), Y. Vutov, N. Kosturski, "Application software deployed on BG/P", 1st National HPC training, 23-24 March, Sofia, Bulgaria.
* (IICT-BAS), Sv. Margenov, "Supercomputer applications on BG/P", National workshop for supercomputer applications, 20-22 May 2011, Hissar, Bulgaria.
* (IICT-BAS), K. Shterev, "Computer Simulation on micro-gas flows in elements of Micro-Electro-Mechanical Systems (MEMS)", National workshop for supercomputer applications, 20-22 May 2011, Hissar, Bulgaria.
* (IICT-BAS), Y. Vutov, "Optimizing the Performance of a Parallel Unstructured Grid AMG Solver", National workshop for supercomputer applications, 20-22 May 2011, Hissar, Bulgaria.
* (IICT-BAS), E. Atanassov, "Bulgarian HPC Infrastructure", presented at the "SEERA-EI training series on best practices: Bulgarian HPC policy and programmes", 17 May 2011, Sofia, Bulgaria.

=== RO ===
* (IFIN HH), "The HP-SEE Project", M. Dulea, RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations, Bucharest, 6-7 December 2010, http://wlcg10.nipne.ro
* (IFIN), "National infrastructure for high performance scientific computing", M. Dulea, at the "GRID and HPC in Modern Medical Environments" Conference, Carol Davila University of Medicine and Pharmacy, Bucharest, 08.03.2011.
* (IFIN), "High performance computing for Romanian scientific community", M. Dulea, in the framework of the "Study for the operationalization of the National Supercomputing Centre", Ministry of Communications and Informational Society, Bucharest, 15.03.2011.
* (IFIN), "Romanian HPC Infrastructure", M. Dulea, SEERA-EI training series on best practices: Bulgarian HPC policy and programmes, Sofia, 17.04.2011.
* (UPT), Presentation of the project during the training event in Tirana, 21 December 2010.
* (UPB), Computing Infrastructure for Scientific Computing – the UPB-NCIT-Cluster, October 25, 2010, Bucharest, Romania, CEEMEA Blue Gene Research Collaboration and Community Building.

=== HU ===
* (NIIF), S. Péter, "The new national HPC infrastructure and services, operated by NIIF", 20th Networkshop, April 2011.
* (NIIF), R. Gábor, "South East European HPC Project", 20th Networkshop, April 2011.
* (SZTAKI), B. Ákos, "Workflow level interoperability between ARC and gLite middlewares for HPC and Grid applications", 20th Networkshop, April 2011.
* (SZTAKI), M. Kozlovszky, A. Balasko, P. Kacsuk, "Enabling JChem application on grid", ISGC 2011 & OGF 31, International Symposium on Grids and Clouds (ISGC 2011) & Open Grid Forum (OGF 31), 21-25 March 2011, Taipei, Taiwan.
* (SZTAKI), M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. Kacsuk, "New developments of gUSE & WS-PGRADE to support e-science gateways", EGI User Forum, April 2011, Vilnius, Lithuania.

=== RS ===
* (IPB), In May 2011 the Institute of Physics Belgrade officially celebrated 50 years of its establishment. For this occasion, a dedicated exhibition was hosted at the Gallery of Science and Technology of the Serbian Academy of Sciences and Arts in Belgrade, Serbia. As part of the exhibition, a set of HPC oriented sessions was organized, during which the HP-SEE project was presented.
* (IPB), The International Particle Physics MasterClass Belgrade 2011 was organized by the University of Belgrade, in collaboration with the European Particle Physics Outreach Group, and was held on 14 March 2011. SCL's Antun Balaz gave a short talk to high school students on Grid computing, HPC resources and their applications in particle physics.
* (IPB), At the "Advanced School in High Performance and Grid Computing" organized by the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, from 11 to 22 April 2011, Antun Balaz (who was also one of the organizers) gave a lecture on "Advanced Profiling and Optimization Techniques", and Milan Zezelj presented his work on "Parallel Implementation of a Monte Carlo Molecular Simulation Program".
* (IPB), The Executive Agency "Electronic Communications Networks and Information Systems" of the Bulgarian government, as part of SEERA-EI project activities, organized a meeting on "HPC related policy and programs" for South Eastern Europe policy makers. Antun Balaz presented Serbian experiences, plans and expectations towards the development of SEE high performance computing infrastructures during the round table on "HPC initiatives in other SEE countries: Romania, Serbia, FYROM, Greece". The presentation was titled "Infrastructure for the National Supercomputing Initiative".

=== MK ===
* (UKIM), A presentation titled "High performance computing – in Europe, region and at home" was delivered by Anastas Misev at the first LANFest event in Skopje, 20-22.05.2011 (http://lanfest.mk/ex/LANFestConferenceAgenda.pdf). The presentation contains a special part on the HP-SEE project.

=== MD ===
* (RENAM), A presentation by Mr. Nicolai Iliuha about the HP-SEE kick-off meeting was made on 30.09.2010 for members of the RENAM Scientific and Technical Council and invited representatives from universities and research institutions.
* (RENAM), A presentation by Mr. Nicolai Iliuha about the HP-SEE training event in Bulgaria (29-30.11.2010), and about the access procedures and parameters of the HPC resources available at IICT-BAS, was made for members of the RENAM Scientific and Technical Council and invited representatives from universities and research institutions of the Academy of Sciences of Moldova.
* (RENAM), On 25 and 26 May 2011 RENAM presented two reports at the joint training and dissemination event entitled "Computational structures and Technologies for Solving Large Scale Problems" for specialists of the Institute of Mathematics and Computer Science and the Institute of Economy, Finance and Statistics of ASM:
** Dr. Peter Bogatencov - "Scientific computing – current state and perspectives of development";
** Mr. Nicolai Iliuha - "Computational structures and Technologies to meet challenges of modeling".

=== GE ===
* (GRENA), On October 26, 2010 R. Kvatadze made a presentation about the activities of the Georgian Research and Educational Networking Association GRENA at the First ATLAS/South Caucasus Software / Computing Workshop & Tutorial held in Tbilisi, Georgia on October 25-29, 2010.
* (GRENA), R. Kvatadze participated in the kick-off meeting of the project on September 6-8, 2010 in Athens.
* (GRENA), GRENA, as a co-organizer of the First ATLAS/South Caucasus Software / Computing Workshop & Tutorial, took part in the preparation and conduct of the event. The workshop was held in Tbilisi, Georgia during October 25-29, 2010 (http://dmu-atlas.web.cern.ch/dmuatlas/2010/).
* (GRENA), N. Gamtsemlidze participated in the HP-SEE Regional Training on November 29-30, 2010 in Sofia.
--------------->
Poirazi&lt;br&gt;'''Dendrites and information processing: insights from compartmental models'''&lt;br&gt;''CNS*2011 workshop on ‘Dendrite function and wiring: experiments and theory”,'', July 27, 2011, Stockholm, Sweden<br /> * [GRNET IMBB] &lt;br&gt;'''Poster presentation'''&lt;br&gt;''EMBO Conference Series on THE ASSEMBLY AND FUNCTION OF NEURONAL CIRCUITS'', September 23-29, 2011, Ascona, Switzerland<br /> * [GRNET IMBB] A. Oulas&lt;br&gt;&lt;br&gt;''62nd Conference of the Hellenic Society for Biochemistry and Molecular Biology'', December 9-11, 2011, Athens,Greece<br /> * [GRNET IMBB] P. Poirazi&lt;br&gt;'''Spatio-temporal encoding of input characteristics in biophysical model cells and circuits'''&lt;br&gt;''FENS-HERTIE-PENS Winter School'', January 8-15, 2012, Oburgurgl, Austria<br /> <br /> === BG ===<br /> * [IICT-BAS] E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova,&lt;br&gt;'''“Two Algorithms for Modified Owen Scrambling on GPU”,'''&lt;br&gt;''Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM’2010'', December 20-21, 2010 Sofia, Bulgaria<br /> * [IICT-BAS] E. Atanassov, S. Ivanovska&lt;br&gt;'''“Efficient Implementation of Heston Stochastic Volatility Model Using GPGPU”, Special Session “High Performance Monte Carlo Simulation”'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova,&lt;br&gt;'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] G. Bencheva&lt;br&gt;'''Computer Modelling of Haematopoietic Stem Cells Migration'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Improving the Efficiency of Parallel FEM Simulations on Voxel Domains'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Optimizing the Performance of a Parallel Unstructured Grid AMG Solver'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] S. Ivanovska, A. Karaivanova, N. Manev&lt;br&gt;'''Numerical Integration Using Sequences Generating Permutations'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] K. Shterev, S. Stefanov, E. Atanassov&lt;br&gt;'''A Parallel Algorithm with Improved Performance of Finite Volume Method (SIMPLE-TS)'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova&lt;br&gt;'''How to Use HPC Resources efficiently by a Message Oriented Framework'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] K. Georgiev, I. Lirkov, S. Margenov&lt;br&gt;'''Highly Parallel Alternating Directions Algorithm for Time Dependent Problems'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Comparison of Two Techniques for Radio-Frequency Hepatic Tumor Ablation through Numerical Simulation'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] T. Gurov, A. Karaivanova, N. 
Manev&lt;br&gt;'''Monte Carlo Simulations of Electron Transport using a Class of Sequences Generating Permutations'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] A. Karaivanova&lt;br&gt;'''Message Oriented Framework for Efficient Use of HPC Resources”, Special Session “High-Performance Computations for Monte Carlo Applications”,'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Efficient Implementation of Heston Model Using GPGPU”, Special Session “High-Performance Computations for Monte Carlo Applications”'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''Monte Carlo Methods using a New Class of Congruential Generators'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''Study Scalability of SET Application using The Bulgarian HPC Infrastructure'''&lt;br&gt;''8th International Conference on Computer Science and Information Technologies – CSIT2011'', September 26-30, 2011, Yerevan, Armenia<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Stochastic Modeling of Electron Transport on different HPC architectures'''&lt;br&gt;''PRACE Workshop on HPC approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''ECM Integer factorization on GPU Cluster'''&lt;br&gt;''Jubilee 35th International Convention on Information and Communication Technology, electronics and microelectronics (MIPRO2012)'', May 21-25, 2012, Opatija, Croatia.<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Efficient Implementation of a Stochastic Electron Transport Simulation Algorithm Using GPGPU Computing'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''Twister Edwards Curves Integer Factorization on GPU Cluster'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Monte Carlo Methods for Electron Transport: Scalability Study'''&lt;br&gt;''11th ISPDC2012 Conference'', June 25-29, 2012, Munich, Germany<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''Efficient Monte Carlo algorithms for Inverse Matrix Problems'''&lt;br&gt;''7th PMAA’12 Conference'', June 28-30, Birkbeck University of London, UK<br /> * [IICT-BAS] A. Karaivanova&lt;br&gt;'''Randomized quasi-Monte Carlo for Matrix Computations'''&lt;br&gt;''7th PMAA’12 Conference'', June 28-30, Birkbeck University of London, UK<br /> * [IICT-BAS IM-BAS] K. Shterev, N. Kulakarni, S. Stefanov&lt;br&gt;'''Influence of Reservoirs on Pressure Driven Gas Glowin a Micro-channel'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS IM-BAS] K. Shterev&lt;br&gt;'''Comparison of Some Approximation Schemes for Convective Terms for Solving Gas Flow past a Square in a Microchannel'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS]: A. 
Karaivanova, “Monte Carlo methods for Electron Transport: Scalability Study Using HP-SEE Infrastructure”, invited talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IICT-BAS]: E. Atanassov, “Efficient Parallel Simulations of Large-Ring Cyclodextrins on HPC cluster”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IICT-BAS]: D. Georgiev, “Number Theory Algorithms on GPU cluster”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia. <br /> * [IICT-BAS]: E. Atanassov, “Conformational Analysis of Kyotorphin Analogues Containing Unnatural Amino Acids”, poster presentation during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IM-BAS]: K. Shterev, “Determination of zone of flow instability in a gas flow past a square particle in a narrow microchannel”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> <br /> <br /> === RO ===<br /> * [IFIN HH] Silviu Panica, Dana Petcu, Daniela Zaharie&lt;br&gt;'''Gaining Experience in BlueGene/P Application Development. A Case Study in Remote Sensing Data Analysis, HPCe – High Performance Computing with application in environment'''&lt;br&gt;''SYNASC 2011'', September 26-29, 2011, Timisoara, Romania<br /> * [IFIN HH] Ionut Vasile, Dragos Ciobanu-Zabet&lt;br&gt;'''Software tools for HPC users at IFIN-HH'''&lt;br&gt;''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania<br /> * [IFIN HH] Silviu Panica, Marian Neagul, Daniela Zaharie and Dana Petcu&lt;br&gt;'''Services for Earth observations: from cluster and cloud to HPC and Grid services'''&lt;br&gt;''COST Action 0805 Complex HPC'', January 26, 2012, Timisoara, Romania<br /> * [IFIN HH] HP-SEE User Forum, Belgrade:<br /> ** Emergence of resonant waves in cigar-shaped Bose-Einstein condensates Alexandru NICOLIN <br /> ** On HPC for Hyperspectral Image Processing, Silviu Panica, Daniela Zaharie, Dana Petcu<br /> ** Investigations of biomolecular systems within the ISyMAB simulation framework Ionut VASILE, Dragos Ciobanu-Zabet <br /> ** Formation of Faraday and Resonant Waves in Driven High-Density Bose-Einstein Condensates Mihaela Carina RAPORTARU (poster)<br /> * [IFIN HH] Procs. of the RO-LCG 2012 IEEE International Conference, Cluj, Romania, 25-27.10.2012, IEEE CFP1232T-PRT, ISBN 978-973-662-710-1:<br /> ** National and regional organization of collaborations in advanced computing, Mihnea Dulea, pp. 63-66 <br /> ** Computational Challenges in Processing Large Hyperspectral Images, Dana Petcu et al <br /> ** Eagle Eye – Feature Extraction from Satellite Images on a 3D Map of Romania, Razvan Dobre et al<br /> ** Integrated System for Modeling and data Analysis of complex Biomolecules (ISyMAB), I. Vasile, D. Ciobanu-Zabet<br /> <br /> === HU ===<br /> * [NIIF] M. Kozlovszky, Gergely Windisch, Akos Balasko&lt;br&gt;'''Bioinformatics eScience gateway services using the HP-SEE infrastructure'''&lt;br&gt;''Summer School on Workflows and Gateways for Grids and Clouds 2012'', July 2-6, 2012, Budapest ,Hungary<br /> * [NIIF SZTAKI] M. Kozlovszky, A. Balasko, P. Kacsuk&lt;br&gt;'''Enabling JChem application on grid'''&lt;br&gt;''ISGC 2011 &amp; OGF 31, International Symposium on Grids and Clouds (ISGC 2011) &amp; Open Grid Forum (OGF 31)'', March 21-25, 2011, Taipei Taiwan<br /> * [NIIF SZTAKI] M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. 
Kacsuk&lt;br&gt;'''New developments of gUSE &amp; WS-PGRADE to support e-science gateways'''&lt;br&gt;''EGI User Forum Lithuania'', April, 2011, Vilnius, Lithuania<br /> * [NIIFI] Presentation at the HP-SEE User Forum 2012 (“Performance and scalability evaluation of short fragment sequence alignment applications“ by G. Windisch , A. Balaso, M.Kozlovszky).<br /> * [NIIFI] Presentation at the HP-SEE User Forum 2012 (“Advanced Vulnerability Assessment Tool for Distributed Systems “ by S. Acs, M.Kozlovszky).<br /> <br /> === RS ===<br /> * [IPB] D. Vudragovic&lt;br&gt;'''Extensions of the SPEEDUP Path Integral Monte Carlo Code'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IPB] A. Balaz&lt;br&gt;'''Numerical Study of Surface Waves in Binary Bose-Einstein Condensates'''&lt;br&gt;''Seminar at Vinca Institute of Nuclear Sciences'', November 17, 2011, Belgrade, Serbia<br /> * [IPB] A. Balaz&lt;br&gt;'''Numerical simulations of Faraday waves in binary Bose-Einstein condensates'''&lt;br&gt;''CompPhys11 conference'', November 24-26, 2011, Leipzig, Germany<br /> * [IPB] A. Balaz&lt;br&gt;'''Parametric and Geometric Resonances of Collective Oscillation Modes in Bose-Einstein Condensates'''&lt;br&gt;''Photonica 2011 conference'', August 29 - September 2, 2011, Belgrade, Serbia<br /> * [IPB] Antun Balaz: &quot;Numerical Study of Ultracold Quantum Gases: Formation of Faraday Patterns, Geometric Resonances, and Fragmentation&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Nenad Vukmirovic, &quot;Computational approaches for electronic properties of semiconducting materials and nanostructures&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Igor Stankovic, &quot;Numerical Simulations of the Structure and Transport Properties of the Complex Networks&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Jaksa Vucicevic, &quot;Iterative Perturbative Method for a Study of Disordered Strongly Correlated Systems&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Milos Radonjic, &quot;Electronic Structure and Lattice Dynamics Calculations of FeSb2 and CoSb2&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Dusan Stankovic, &quot;SCL Quantum Espresso Extensions&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Josip Jakic, &quot;An Analysis of FFTW and FFTE Performance&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Petar Jovanovic, &quot;GPAW optimisations&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Ivan Juranic, “Use of High Performance Computing in (Bio)Chemistry”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Branko Drakulic, “Dynamics of uninhibited and covalently inhibited cysteine protease on non-physiological”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Branko Drakulic, “Free-energy surfaces of 2-[(carboxymethyl)sulfanyl]-4-oxo-4-arylbutanoic acids. 
Molecular dynamics study in explicit solvents”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Ilija Cvijetic, “In the search of the HDAC-1 inhibitors. The preliminary results of ligand based virtual screening”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> <br /> <br /> === AL ===<br /> * [UPT] Neki Frasheri, Betim Cico&lt;br&gt;'''Analysis of the Convergence of Iterative Geophysical Inversion in Parallel Systems'''&lt;br&gt;''ICT Innovations 2012'', September 14-16, 2011, Skopje, FYROM<br /> * [UPT] Neki Frasheri, Betim Cico&lt;br&gt;'''Convergence of Gravity Inversion Using OpenMP'''&lt;br&gt;''XVII Scientific-Professional Information Technology Conference 2012'', February 27 – March 2, 2012, Zabljak, Montenegro<br /> * [UPT] N. Frasheri, B. Cico. Reflections on parallelization of Gravity Inversion. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia.<br /> * [UPT] R. Zeqirllari. Quenched Hadron Spectroscopy Using FermiQCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)<br /> * [UPT] D. Xhako. Using Parallel Computing to Calculate Quark-Antiquark Potential from Lattice QCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)<br /> <br /> === MK ===<br /> * [UKIM] Anastas Misev&lt;br&gt;'''Software Engineering for HPC'''&lt;br&gt;''DAAD workshop in software engineering'', August 22-27, 2011, FYROM<br /> * [UKIM] Jane Jovanovski, Boro Jakimovski and Dragan Jakimovski&lt;br&gt;'''Parallel Genetic Algorithms for Finding Solution of System of Ordinary Differential Equations'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [UKIM] Anastas Misev, Dragan Sahpaski and Ljupco Pejov&lt;br&gt;'''Implementation of Hybrid Monte Carlo (Molecular Dynamics) – Quantum Mechanical Methodology for Modelling of Condensed Phases on High Performance Computing Environment'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> <br /> === ME ===<br /> * [UOM] Luka Filipović, Danilo Mrdak, Božo Krstajić, &quot;DNA Multigene approach on HPC Using RAxML Software&quot;, HP-SEE User Forum, Belgrade, Serbia<br /> <br /> === MD ===<br /> * [RENAM] Dr. P. Bogatencov, Dr. G. Secrieru&lt;br&gt;'''Numerical analysis of the coupled problem on interaction of ground and elastic-plastic shell under high-speed loads'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [RENAM] Participation of RENAM representative Dr. Petru Bogatencov in &quot;HP-SEE User Forum 2012&quot;, Belgrade, Serbia on October 17-19, 2012 with presentation “Using Structured Adaptive Computational Grid for Solving Multidimensional Computational Physics Tasks”;<br /> <br /> === GE ===<br /> * [GRENA] G. Mikuchadze presented “Quantum-Chemical Calculations for the Quantitative Estimations of the Processes in DNA” at the HP-SEE User Forum on October 17-19, 2012 in Belgrade, Serbia.<br /> <br /> == HP-SEE dissemination presentations and posters at external events ==<br /> === GR ===<br /> <br /> * [GRNET] O. Prnjat&lt;br&gt;'''High-Performance Computing Infrastructure for South East Europe’s Research Communities'''&lt;br&gt;''EGI technical forum'', September 14-17, 2010, Amsterdam, The Netherlands<br /> * [GRNET] I.
Liabotis&lt;br&gt;'''HP-SEE - Regional HPC development activities in South-Eastern Europe'''&lt;br&gt;''2nd HellasHPC Workshop'', October 22, 2010, Athens, Greece.<br /> * [GRNET] I. Liabotis&lt;br&gt;'''HP-SEE project'''&lt;br&gt;''8th e-Infrastructure Concertation Meeting'', November 4-5, 2010, CERN, Geneva, Switzerland<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''The Forum on Research for Innovation in ICT for the Western Balkan Countries'', November 30, 2010, Belgrade, Serbia<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''2010 Euro-Africa Conference on e-Infrastructures'', December 9-10, 2010, Helsinki, Finland<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''SEERA-EI Sixth Networking Meeting'', January, 2011, Sarajevo, Bosnia and Herzegovina<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Seventh Networking Meeting'', April, 2011, Chisinau, Moldova<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''NATO Advanced Networking Workshop on &quot;Advanced cooperation in the area of disaster prevention for human security in Central Asia&quot;'', May 16, 2011, Dushanbe, Tajikistan<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Eighth Networking Meeting'', July, 2011, Tirana, Albania<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE project'''&lt;br&gt;''eInfrastructures concertation meeting'', September 2011, Lyon, France<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''EuroRIs-Net Workshop'', October 11, 2011, Athens, Greece.<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Ninth Networking Meeting'', November, 2011, Bucharest, Romania<br /> * [GRNET] Dr. Tryfon Chiotis&lt;br&gt;'''HPC and Cloud Technologies in Greece and South Eastern Europe'''&lt;br&gt;''e-AGE 2011 - Integrating Arab e-Infrastructures in a global environment'', December, 2011, Amman, Jordan<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI project conference for the Western audience'', February, 2012, Istanbul, Turkey<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI cloud policy workshop'', April, 2011, Chisinau, Moldova<br /> * [GRNET] &lt;br&gt;'''poster entitled “Advanced High Performance Computing Services for networked regional Virtual Research Communities”'''&lt;br&gt;''TNC2012'', May 21-24, 2012, Reykjavík, Iceland<br /> * [GRNET] &lt;br&gt;'''presentation to the high-level European policy-makers'''&lt;br&gt;''eInfrastructure Reflection Group workshop'', June, 2012, Copenhagen, Denmark<br /> * [GRNET] HP-SEE regional HPC policy, operations and user support approach presented to the highest-possible worldwide audience during CHAIN project workshops in China, India and Sub-Saharan Africa.<br /> <br /> === BG ===<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Bulgarian HPC Infrastructure'''&lt;br&gt;''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes'', May 17, 2011, Sofia, Bulgaria<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''e-Infrastructure for e-Science'''&lt;br&gt;''seminar at the Institute on Oceanology'', March 11, 2011, Varna, Bulgaria<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''“High-Performance Computing in South East Europe and Monte Carlo Simulations”, Special Session “High-Performance Computations for Monte Carlo Applications”'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] A.
Karaivanova&lt;br&gt;'''HP-SEE and SEERA-EI'''&lt;br&gt;''ICRI 2012'', March 21-23, 2012, Copenhagen, Denmark.<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''National e-Infrastructure responsibilities of IICT-BAS, during the “Day of open doors for young researchers in field of mathematics and informatics”'''&lt;br&gt;''annual conference &quot;European Student Conference in Mathematics - EUROMATH - 2012&quot;'', March 21-25, 2012, Sofia, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Poster presentation of HP-SEE activities'''&lt;br&gt;''RIDE 2012'', May 21-25, 2012, Opatija, Croatia.<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''High-Performance Computing Infrastructure for South East Europe’s Research Communities'''&lt;br&gt;''2nd Workshop of Networking Initiatives for ICT related projects'', September 10-11, 2010, Varna, Bulgaria.<br /> * [IICT-BAS]: E. Atanassov and T. Gurov, “e-Infrastructures for Scientific Computations”, talk during the Researchers’ Night at IICT-BAS, 28 September, 2012, Sofia, Bulgaria.<br /> <br /> === RO ===<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''The HP-SEE Project'''&lt;br&gt;''RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations'', December 6-7, 2010, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''National infrastructure for high performance scientific computing'''&lt;br&gt;''&quot;GRID and HPC in Modern Medical Environments&quot; Conference'', May 08, 2011, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''Romanian HPC Infrastructure'''&lt;br&gt;''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes'', May 17, 2011, Sofia, Bulgaria<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''Romanian Association for the Promotion of the Advanced Computational Methods in Scientific Research (ARCAŞ)'''&lt;br&gt;''Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''National and regional HPC collaboration'''&lt;br&gt;''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''High performance computing for Romanian scientific community'''&lt;br&gt;''“Study for the operationalization of the National Supercomputing Centre”'', March 15, 2012, Bucharest, Romania<br /> <br /> <br /> === TR ===<br /> * [TUBITAK-ULAKBIM] Murat Soysal&lt;br&gt;'''Integration of South Caucasus National Research and Education Networks to Europe and Limitations'''&lt;br&gt;''Policy Stakeholder Conference “EU – Eastern Europe/Central Asia Cooperation in Research and Innovation: The way towards 2020”'', November 15-16, 2011, Warsaw, Poland<br /> <br /> === HU ===<br /> <br /> * [NIIF] M. Kozlovszky&lt;br&gt;'''HP-SEE project and the HPC Bioinformatics Life Science gateway'''&lt;br&gt;''Summer School on Workflows and Gateways for Grids and Clouds 2012'', July 2-6, 2012, Budapest, Hungary<br /> <br /> === RS ===<br /> * [IPB] &lt;br&gt;'''HP-SEE'''&lt;br&gt;''50 years of IPB'', May 2011, Belgrade, Serbia<br /> * [IPB] &lt;br&gt;'''eInfrastructures for science in Europe'''&lt;br&gt;''ISC'11, BoF session'', June 20-22, 2011, Hamburg, Germany<br /> * [IPB] D.
Stojiljkovic&lt;br&gt;'''High Performance Computing Infrastructure for South East Europe’s Research Communities (HP-SEE) project along with some selected supported applications'''&lt;br&gt;''PRACE Workshop on HPC Approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria<br /> <br /> === AL ===<br /> * [UPT] &lt;br&gt;'''Presentation of HP-SEE and application GMI'''&lt;br&gt;''round table of AITA (Albanian IT Association)'', <br /> * [UPT] N. Frasheri, B. Cico. Computing developments in Albania and its applications. International Workshop on recent LHC results and related topics. 8-9 October 2012, Tirana, Albania<br /> * [UPT] N. Frasheri. HP and High Performance Computing in Albania. HP Solutions event, Tirana, 26 September 2012<br /> <br /> === MK ===<br /> * [UKIM] Anastas Misev&lt;br&gt;'''High performance computing – in Europe, region and at home'''&lt;br&gt;''first LANFest event'', May 20-22, 2011, Skopje, FYROM<br /> * [UKIM] Anastas Misev&lt;br&gt;'''Presentation of the HP-SEE project at the special HPC session'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [UKIM] &quot;HPC in the industry&quot;, Anastas Mishev, presentation at the „High Performance computing – main driver of modern e-science and economy“ dissemination and promotional event co-organized by HP and FINKI.<br /> <br /> === ME ===<br /> * [UOM] &lt;br&gt;'''Presentation of the HP-SEE project'''&lt;br&gt;''XVII Scientific-Professional Information Technology Conference 2012'', February 28, 2012, Zabljak, Montenegro<br /> <br /> === MD ===<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''CURRENT STATE OF DISTRIBUTED COMPUTING INFRASTRUCTURE DEPLOYMENT IN MOLDOVA'''&lt;br&gt;''10th RoEduNet International Conference - Networking in Education and Research'', June 23–25, 2011, Iasi, Romania<br /> * [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru&lt;br&gt;'''eInfrastructure development projects in Moldova at FP7 part 1'''&lt;br&gt;''FP7 Projects Information Day organized by FP7 National contact points with international participation “New status-new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova<br /> * [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru&lt;br&gt;'''eInfrastructure development projects in Moldova at FP7 part 2'''&lt;br&gt;''FP7 Projects Information Day organized by FP7 National contact points with international participation “New status-new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova<br /> * [RENAM] Mr. Nicolai Iliuha&lt;br&gt;'''HPC and Cloud computing technologies'''&lt;br&gt;''Master-class „Cloud computing and computing in the Cloud”'', May 11, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''CBF SOLUTION IMPLEMENTATION FOR LINKING MOLDOVAN R&amp;E NETWORK TO GEANT'''&lt;br&gt;''10th RoEduNet International Conference - Networking in Education and Research'', June 23–25, 2011, Iasi, Romania<br /> * [RENAM] Nicolai Iliuha&lt;br&gt;'''High Performance Computing: current state of HP-SEE Project'''&lt;br&gt;''5th RENAM User’s Conference–2011: “Informational Services for Research and Educational Society of Moldova”'', September 22-23, 2011, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''Participation in European eInfrastructure Projects - Experience and Case Studies'''&lt;br&gt;''Official Launching of Moldova Association to FP7'', January 27, 2012, Chisinau, Moldova<br /> * [RENAM] Mr.
Nicolai Iliuha&lt;br&gt;'''COMPUTATIONAL RESOURCES OF THE REGIONAL SOUTH-EAST EUROPE HIGH PERFORMANCE COMPUTING INFRASTRUCTURE'''&lt;br&gt;''4th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''PERSPECTIVES OF REGIONAL CROSS-BORDER FIBER CONNECTIONS DEVELOPMENT'''&lt;br&gt;''4th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. G. Secrieru&lt;br&gt;'''“СЕТЬ НАУКИ И ОБРАЗОВАНИЯ КАК ИНФРАСТРУКТУРА ДЛЯ GRID – ПРИЛОЖЕНИЙ” [Research and Education Network as an Infrastructure for Grid Applications]'''&lt;br&gt;''4th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''International and regional projects for computing technologies development'''&lt;br&gt;''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova<br /> * [RENAM] Mr. Nicolai Iliuha&lt;br&gt;'''Access to regional High Performance Computing (HPC) resources'''&lt;br&gt;''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. P. Bogatencov&lt;br&gt;'''eInfrastructure Calls in FP7 Capacities specific programme'''&lt;br&gt;''Launching new FP7 Calls for 2013'', July 12, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. G. Secrieru&lt;br&gt;'''Access to the Regional HPC Resources and Strategy of Their Development'''&lt;br&gt;''20th Conference on Applied and Industrial Mathematics'', August 22-25, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. P. Bogatencov and Dr. G. Secrieru&lt;br&gt;'''HP-SEE project, local HPC resources and complex applications developing in IMI ASM'''&lt;br&gt;''Joint meeting with specialists from Global Initiatives for Proliferation Prevention (GIPP) of the US Department of Energy organized in IMI ASM by the Science and Technology Centre'', June 22, 2012, Ukraine<br /> * [RENAM] Participation of RENAM representative Nicolai Iliuha at “The 6th International Conference on Application of Information and Communication Technologies”, Georgia, Tbilisi, 17-19.10.2012 with presentation &quot;Computing Infrastructure and Services Deployment for Research Community of Moldova&quot; (http://aict.info/2012/);<br /> * [RENAM] Participation of RENAM representative Dr. P. Bogatencov at “The Fourth International Scientific Conference &quot;Supercomputer Systems and Applications&quot; (SSA'2012)”, Minsk, UIIP NAS Belarus, 23-25.10.2012 with keynote presentation &quot;COMPUTATIONAL RESOURCES OF THE REGIONAL SOUTH-EAST EUROPE HIGH PERFORMANCE COMPUTING INFRASTRUCTURE&quot; (http://ssa2012.bas-net.by/en/) and presentation of the session report entitled “Modeling of three-dimensional gas dynamics problems on multiprocessor systems and graphical processors” prepared together with Prof.
Boris Rybakin.<br /> * [RENAM] Participation of RENAM representative Nicolai Iliuha in the technical-scientific workshop „Computational resources, special software and procedures of obtaining access to the HPC cluster in the State University of Moldova”, 7 November 2012, the State University of Moldova with report „National, regional and European Grid infrastructures; participation of Moldova in EGI-Inspire project and regional HP-SEE project”.<br /> <br /> === AM ===<br /> * [IIAP_NAS_RA] Yuri Shoukourian&lt;br&gt;'''Stimulating and Revealing Technological Innovations in Academic Institutions,'''&lt;br&gt;''ArmTech Congress'', October 10-11, 2011, Yerevan, Armenia<br /> * [IIAP_NAS_RA] Yuri Shoukourian&lt;br&gt;'''Armenian Research &amp; Educational E-Infrastructure'''&lt;br&gt;''Eastern Partnership Event'', November 07-08, 2011, Bucharest, Romania<br /> <br /> === GE ===<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''Georgian Research and Educational Networking Association GRENA'''&lt;br&gt;''First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial'', October 26, 2010, Tbilisi, Georgia<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''E-Infrastructure in South Caucasus'''&lt;br&gt;''Eastern Europe Partnership Event – Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''Participation of GRENA in European Commission projects'''&lt;br&gt;''Meeting of EU Delegation in Georgia'', June 18, 2012, Tbilisi, Georgia<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''E-infrastructure in South Caucasus Countries for Science'''&lt;br&gt;''EC funded GEO-RECAP and IDEALIST projects meetings '', June 27-28, 2012, Tbilisi, Georgia<br /> *[GRENA] R. Kvatadze&lt;br&gt;'''E-Infrastructure for Science in Georgia'''&lt;br&gt;''6th International Conference on Application of Information and Communication Technologies '', October 17-19, 2012 in Tbilisi, Georgia. http://aict.info/2012/ <br /> * [GRENA] R. Kvatadze made presentation “E-Infrastructure for Science in Georgia” at the 2nd ATLAS-SouthCaucasus Software/Computing Workshop on October 23-26, 2012 in Tbilisi, Georgia. http://dmu-atlas.web.cern.ch/dmu-atlas/2012/index.html<br /> <br /> <br /> <br /> &lt;!--------- OLD LIST<br /> === GR ===<br /> * (GRNET), O. Prnjat, &quot;High-Performance Computing Infrastructure for South East Europe’s Research Communities&quot; EGI technical forum, September 2010.<br /> * (GRNET), I. Liabotis, Presentation of the HP-SEE project in the 8th e-Infrastructure Concertation Meeting 4-5 November 2010, CERN, Geneva. (http://www.e-sciencetalk.org/econcertation/)<br /> * (GRNET), I. Liabotis, Presentation of the project entitled &quot;HP-SEE - Regional HPC development activities in South-Eastern Europe&quot; in the 2nd HellasHPC Workshop. 22nd of October 2010, Athens, Greece.<br /> * (GRNET), O. Prnjat, 2010 Euro-Africa Conference on e- Infrastructures 9-10 December 2010, Helsinki, Finland<br /> * (GRNET), O. Prnjat, The Forum on Research for Innovation in ICT for the Western Balkan Countries, 30 November 2010, Belgrade, Serbia<br /> * (GRNET), O. Prnjat, SEERA-EI Sixth Networking Meeting, January 2011, Sarajevo<br /> * (GRNET), I. 
Liabotis Presentation of the project in the National HPC conference, 9 December 2010 Sofia Bulgaria,<br /> * (GRNET), Presentation of HP-SEE at the NATO Advanced Networking Workshop on &quot;Advanced cooperation in the area of disaster prevention for human security in Central Asia&quot;, May 2011, Dushanbe.<br /> * (GRNET), HP-SEE was presented as part of overall SEE activities’ presentation under the umbrella of the SEERAEI project, at the following events:<br /> ** SEERA-EI Seventh Networking Meeting, April 2011, Chisinau.<br /> ** SEERA-EI cloud policy workshop, April 2011, Chisinau.<br /> <br /> === BG ===<br /> * (IPP-BAS/IICT-BAS) T. Gurov, &quot;High-Performance Computing Infrastructure for South East’s Research Communities&quot;, 2nd Workshop of Networking Initiatives for ICT related projects, 10-11.09.2010, Varna, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) A. Karaivanova, T. Gurov and E. Atanassov, &quot;HP-SEE Overview&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, &quot;HP-SEE Infrastructure and Access&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) T. Gurov, &quot;Introduction to Parallel Computing: Message Passing and Shared Memory&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, &quot;Advanced Application Porting and Optimization&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova, “Two Algorithms for Modified Owen Scrambling on GPU”, Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM’2010, 20-21 December, 2010 Sofia, Bulgaria.<br /> * (IICT-BAS), T. Gurov, “Overview of the HP-SEE project”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “HPC Cluster at IICT-BAS and HP-SEE Infrastructure”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “Introduction to GPU Computing”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), T. Gurov, “e-Infrastructure for e-Science”, presented on a seminar at the Institute on Oceanology, 11 March, 2011, Varna, Bulgaria.<br /> * (IICT-BAS), G. Bencheva, “Introduction to Parallel Computing”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), Y. Vutov, N. Kosturski, “Application software deployed on BG/P”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), Sv. Margenov, “Supercomputer applications on BG/P”, National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), K. Shterev, “Computer Simulation on micro-gas flows in elements of Micro-Electro-Mechanical Systems (MEMS), National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), Y. Vutov, “Optimizing the Performance of a Parallel Unstructured Grid AMG Solver”, National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “Bulgarian HPC Infrastructure”, presented on “SEERA-EI training series on best practices: Bulgarian HPC policy and programmes”, 17 May, 2011, Sofia, Bulgaria.<br /> <br /> === RO ===<br /> * (IFIN HH), “The HP-SEE Project”, M. 
Dulea, RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations, Bucharest, 6-7 December 2010, http://wlcg10.nipne.ro<br /> * (IFIN), “National infrastructure for high performance scientific computing”, M. Dulea, at the ”GRID and HPC in Modern Medical Environments&quot; Conference, Carol Davila University of Medicine and Pharmacy, Bucharest, 08.03.2011.<br /> * (IFIN), “High performance computing for Romanian scientific community”, M. Dulea, in the framework of the “Study for the operationalization of the National Supercomputing Centre”, Ministry of Communications and Informational Society, Bucharest, 15.03.2011.<br /> * (IFIN), “Romanian HPC Infrastructure”, M. Dulea, SEERA-EI training series on best practices: Bulgarian HPC policy and programmes, Sofia, 17.04.2011.<br /> * (UPT), Presentation of the project during the training event in Tirana, 21 Dec. 2010<br /> * (UPB), Computing Infrastructure for Scientific Computing – the UPB-NCIT-Cluster, October 25th 2010, Bucharest, Romania. CEEMEA Blue Gene Research Collaboration and Community Building.<br /> <br /> === HU ===<br /> * (NIIF), S. Péter, &quot;The new national HPC infrastructure and services, operated by NIIF&quot;, 20th Networkshop, April 2011.<br /> * (NIIF), R. Gábor, &quot;South East European HPC Project&quot;, 20th Networkshop, April 2011.<br /> * (SZTAKI), B. Ákos, “Workflow level interoperability between ARC and Glite middlewares for HPC and Grid applications&quot;, 20th Networkshop, April 2011.<br /> * (SZTAKI), M. Kozlovszky, A. Balasko, P. Kacsuk, Enabling JChem application on grid, ISGC 2011 &amp; OGF 31, International Symposium on Grids and Clouds (ISGC 2011) &amp; Open Grid Forum (OGF 31) 21-25 March 2011, Taipei Taiwan.<br /> * (SZTAKI), M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. Kacsuk: New developments of gUSE &amp; WS-PGRADE to support e-science gateways, EGI User Forum Lithuania, Vilnius, 2011 April.<br /> <br /> === RS ===<br /> * (IPB), During the May the Institute of Physics Belgrade officially celebrated 50 years of its establishment. For this occasion, the dedicated exhibition was hosted at the Gallery of Science and Technology of the Serbian Sciences and Arts in Belgrade, Serbia. As a part of the exhibition a set of HPC oriented sessions were organized, during which HP-SEE project was presented.<br /> * (IPB), The International Particle Physics MasterClass Belgrade 2011 was organized by the University of Belgrade, in collaboration with the European Particle Physics Outreach Group, and was held on 14 March 2011. SCL's Antun Balaz gave a short talk to high school students on Grid computing, HPC resourcers and its applications in particle physics.<br /> * (IPB), At the &quot;Advanced School in High Performance and Grid Computing&quot; organized by the International Centre for Theoretical Physics (ICTP) in Trieste, Italy from 11 to 22 April 2011, Antun Balaz (who was also one of the organizers) gave the Lecture on &quot;Advanced Profiling and Optimization Techniques&quot; by A. Balaz. Milan Zezelj presented his work on &quot;Parallel Implementation of a Monte Carlo Molecular Simulation Program&quot;<br /> * (IPB), The Executive Agency “Electronic Communications Networks and Information Systems” of the Bulgarian government, as a part of SEERA –EI project activities, has organized a meeting on “HPC related policy and programs” for South Eastern Europe policy makers. 
Antun Balaz present Serbian experiences, plans and expectations towards SEE high performance computing infrastructures development during the round table on &quot;HPC initiatives in other SEE countries: Romania, Serbia, FYROM, Greece&quot;. Presentation was titled &quot;Infrastructure for the National Supercomputing Initiative&quot;.<br /> <br /> === MK ===<br /> * (UKIM), Presentation titled: High performance computing – in Europe, region and at home was delivered by Anastas Misev at the first LANFest event in Skopje, 20.-22.05.2011 . http://lanfest.mk/ex/LANFestConferenceAgenda.pdf The presentation contains a special part on HP-SEE project.<br /> <br /> === MD ===<br /> * (RENAM), Presentation of Mr. Nicolai Iliuha about HP-SEE Kick-off-meeting was made on 30.09. 2010 for members of RENAM Scientific and Technical Council and invited representatives from universities and research institutions<br /> * (RENAM), Presentation of Mr. Nicolai Iliuha about HP-SEE training Event in Bulgaria (29-30.11.2010), ability to access and parameters of HPC resources available in IICT-BAS was made for members of RENAM Scientific and Technical Council and invited representatives from universities and research institutions of the Academy of Sciences of Moldova<br /> * (RENAM), On 25th and 26th of May 2011 RENAM presented two reports at the joint training and dissemination event entitled “Computational structures and Technologies for Solving Large Scale Problems” for specialists of the Institute of Mathematics and Computer Science&quot; and the Institute of Economy, Finance and Statistics of ASM:<br /> ** Dr. Peter Bogatencov - &quot;Scientific computing – current state and perspectives of development&quot;;<br /> ** Mr. Nicolai Iliuha - &quot;Computational structures and Technologies to meet challenges of modeling&quot;<br /> <br /> <br /> === GE ===<br /> * (GRENA), On October 26, 2010 R. Kvatadze made presentation about Georgian Research and Educational Networking Association GRENA activities at First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial held in Tbilisi, Georgia on October 25 – 29, 2010.<br /> * (GRENA), R. Kvatadze participated in kick-off meeting of the project on September 6-8, 2010 in Athens.<br /> * (GRENA), GRENA as a co-organizer of the First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial took part in preparation and conduction of the event. Workshop was held in Tbilisi, Georgia during October 25 – 29, 2010 http://dmu-atlas.web.cern.ch/dmuatlas/2010/.<br /> * (GRENA), N. Gamtsemlidze participated in HP-SEE Regional Training on November 29-30, 2010 in Sofia.<br /> <br /> ---------------&gt;</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/Presentations Presentations 2013-04-05T13:14:58Z <p>Lifesci: /* GE */</p> <hr /> <div>== Scientific and HPC presentations and posters ==<br /> <br /> === GR ===<br /> * [GRNET IMBB] P. Poirazi&lt;br&gt;'''Dendrites and information processing: insights from compartmental models'''&lt;br&gt;''CNS*2011 workshop on ‘Dendrite function and wiring: experiments and theory”,'', July 27, 2011, Stockholm, Sweden<br /> * [GRNET IMBB] &lt;br&gt;'''Poster presentation'''&lt;br&gt;''EMBO Conference Series on THE ASSEMBLY AND FUNCTION OF NEURONAL CIRCUITS'', September 23-29, 2011, Ascona, Switzerland<br /> * [GRNET IMBB] A. Oulas&lt;br&gt;&lt;br&gt;''62nd Conference of the Hellenic Society for Biochemistry and Molecular Biology'', December 9-11, 2011, Athens,Greece<br /> * [GRNET IMBB] P. 
Poirazi&lt;br&gt;'''Spatio-temporal encoding of input characteristics in biophysical model cells and circuits'''&lt;br&gt;''FENS-HERTIE-PENS Winter School'', January 8-15, 2012, Oburgurgl, Austria<br /> <br /> === BG ===<br /> * [IICT-BAS] E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova,&lt;br&gt;'''“Two Algorithms for Modified Owen Scrambling on GPU”,'''&lt;br&gt;''Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM’2010'', December 20-21, 2010 Sofia, Bulgaria<br /> * [IICT-BAS] E. Atanassov, S. Ivanovska&lt;br&gt;'''“Efficient Implementation of Heston Stochastic Volatility Model Using GPGPU”, Special Session “High Performance Monte Carlo Simulation”'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova,&lt;br&gt;'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] G. Bencheva&lt;br&gt;'''Computer Modelling of Haematopoietic Stem Cells Migration'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Improving the Efficiency of Parallel FEM Simulations on Voxel Domains'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Optimizing the Performance of a Parallel Unstructured Grid AMG Solver'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] S. Ivanovska, A. Karaivanova, N. Manev&lt;br&gt;'''Numerical Integration Using Sequences Generating Permutations'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] K. Shterev, S. Stefanov, E. Atanassov&lt;br&gt;'''A Parallel Algorithm with Improved Performance of Finite Volume Method (SIMPLE-TS)'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [IICT-BAS] E. Atanassov, T. Gurov, A. Karaivanova&lt;br&gt;'''How to Use HPC Resources efficiently by a Message Oriented Framework'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] K. Georgiev, I. Lirkov, S. Margenov&lt;br&gt;'''Highly Parallel Alternating Directions Algorithm for Time Dependent Problems'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] N. Kosturski, S. Margenov, Y. Vutov&lt;br&gt;'''Comparison of Two Techniques for Radio-Frequency Hepatic Tumor Ablation through Numerical Simulation'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] T. Gurov, A. Karaivanova, N. Manev&lt;br&gt;'''Monte Carlo Simulations of Electron Transport using a Class of Sequences Generating Permutations'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS] A. Karaivanova&lt;br&gt;'''Message Oriented Framework for Efficient Use of HPC Resources”, Special Session “High-Performance Computations for Monte Carlo Applications”,'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Efficient Implementation of Heston Model Using GPGPU”, Special Session “High-Performance Computations for Monte Carlo Applications”'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] E. 
Atanassov&lt;br&gt;'''Message Oriented Framework with Low Overhead for Efficient Use of HPC Resources'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''Monte Carlo Methods using a New Class of Congruential Generators'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''Study Scalability of SET Application using The Bulgarian HPC Infrastructure'''&lt;br&gt;''8th International Conference on Computer Science and Information Technologies – CSIT2011'', September 26-30, 2011, Yerevan, Armenia<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Stochastic Modeling of Electron Transport on different HPC architectures'''&lt;br&gt;''PRACE Workshop on HPC approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''ECM Integer factorization on GPU Cluster'''&lt;br&gt;''Jubilee 35th International Convention on Information and Communication Technology, electronics and microelectronics (MIPRO2012)'', May 21-25, 2012, Opatija, Croatia.<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Efficient Implementation of a Stochastic Electron Transport Simulation Algorithm Using GPGPU Computing'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS] N. Manev&lt;br&gt;'''Twister Edwards Curves Integer Factorization on GPU Cluster'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Monte Carlo Methods for Electron Transport: Scalability Study'''&lt;br&gt;''11th ISPDC2012 Conference'', June 25-29, 2012, Munich, Germany<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''Efficient Monte Carlo algorithms for Inverse Matrix Problems'''&lt;br&gt;''7th PMAA’12 Conference'', June 28-30, Birkbeck University of London, UK<br /> * [IICT-BAS] A. Karaivanova&lt;br&gt;'''Randomized quasi-Monte Carlo for Matrix Computations'''&lt;br&gt;''7th PMAA’12 Conference'', June 28-30, Birkbeck University of London, UK<br /> * [IICT-BAS IM-BAS] K. Shterev, N. Kulakarni, S. Stefanov&lt;br&gt;'''Influence of Reservoirs on Pressure Driven Gas Glowin a Micro-channel'''&lt;br&gt;''3rd AMITANS’11 Conference'', June 20-25, 2011, Albena, Bulgaria.<br /> * [IICT-BAS IM-BAS] K. Shterev&lt;br&gt;'''Comparison of Some Approximation Schemes for Convective Terms for Solving Gas Flow past a Square in a Microchannel'''&lt;br&gt;''4th AMITANS 2012 Conference'', June 11-16, 2012, Varna, Bulgaria<br /> * [IICT-BAS]: A. Karaivanova, “Monte Carlo methods for Electron Transport: Scalability Study Using HP-SEE Infrastructure”, invited talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IICT-BAS]: E. Atanassov, “Efficient Parallel Simulations of Large-Ring Cyclodextrins on HPC cluster”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IICT-BAS]: D. Georgiev, “Number Theory Algorithms on GPU cluster”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia. <br /> * [IICT-BAS]: E. Atanassov, “Conformational Analysis of Kyotorphin Analogues Containing Unnatural Amino Acids”, poster presentation during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> * [IM-BAS]: K. 
Shterev, “Determination of zone of flow instability in a gas flow past a square particle in a narrow microchannel”, contribution talk during HP-SEE User Forum 2012, October 17-19, Belgrade, Serbia.<br /> <br /> <br /> === RO ===<br /> * [IFIN HH] Silviu Panica, Dana Petcu, Daniela Zaharie&lt;br&gt;'''Gaining Experience in BlueGene/P Application Development. A Case Study in Remote Sensing Data Analysis, HPCe – High Performance Computing with application in environment'''&lt;br&gt;''SYNASC 2011'', September 26-29, 2011, Timisoara, Romania<br /> * [IFIN HH] Ionut Vasile, Dragos Ciobanu-Zabet&lt;br&gt;'''Software tools for HPC users at IFIN-HH'''&lt;br&gt;''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania<br /> * [IFIN HH] Silviu Panica, Marian Neagul, Daniela Zaharie and Dana Petcu&lt;br&gt;'''Services for Earth observations: from cluster and cloud to HPC and Grid services'''&lt;br&gt;''COST Action 0805 Complex HPC'', January 26, 2012, Timisoara, Romania<br /> * [IFIN HH] HP-SEE User Forum, Belgrade:<br /> ** Emergence of resonant waves in cigar-shaped Bose-Einstein condensates Alexandru NICOLIN <br /> ** On HPC for Hyperspectral Image Processing, Silviu Panica, Daniela Zaharie, Dana Petcu<br /> ** Investigations of biomolecular systems within the ISyMAB simulation framework Ionut VASILE, Dragos Ciobanu-Zabet <br /> ** Formation of Faraday and Resonant Waves in Driven High-Density Bose-Einstein Condensates Mihaela Carina RAPORTARU (poster)<br /> * [IFIN HH] Procs. of the RO-LCG 2012 IEEE International Conference, Cluj, Romania, 25-27.10.2012, IEEE CFP1232T-PRT, ISBN 978-973-662-710-1:<br /> ** National and regional organization of collaborations in advanced computing, Mihnea Dulea, pp. 63-66 <br /> ** Computational Challenges in Processing Large Hyperspectral Images, Dana Petcu et al <br /> ** Eagle Eye – Feature Extraction from Satellite Images on a 3D Map of Romania, Razvan Dobre et al<br /> ** Integrated System for Modeling and data Analysis of complex Biomolecules (ISyMAB), I. Vasile, D. Ciobanu-Zabet<br /> <br /> === HU ===<br /> * [NIIF] M. Kozlovszky, Gergely Windisch, Akos Balasko&lt;br&gt;'''Bioinformatics eScience gateway services using the HP-SEE infrastructure'''&lt;br&gt;''Summer School on Workflows and Gateways for Grids and Clouds 2012'', July 2-6, 2012, Budapest ,Hungary<br /> * [NIIF SZTAKI] M. Kozlovszky, A. Balasko, P. Kacsuk&lt;br&gt;'''Enabling JChem application on grid'''&lt;br&gt;''ISGC 2011 &amp; OGF 31, International Symposium on Grids and Clouds (ISGC 2011) &amp; Open Grid Forum (OGF 31)'', March 21-25, 2011, Taipei Taiwan<br /> * [NIIF SZTAKI] M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. Kacsuk&lt;br&gt;'''New developments of gUSE &amp; WS-PGRADE to support e-science gateways'''&lt;br&gt;''EGI User Forum Lithuania'', April, 2011, Vilnius, Lithuania<br /> * [NIIFI] Presentation at the HP-SEE User Forum 2012 (“Performance and scalability evaluation of short fragment sequence alignment applications“ by G. Windisch , A. Balaso, M.Kozlovszky).<br /> * [NIIFI] Presentation at the HP-SEE User Forum 2012 (“Advanced Vulnerability Assessment Tool for Distributed Systems “ by S. Acs, M.Kozlovszky).<br /> <br /> === RS ===<br /> * [IPB] D. Vudragovic&lt;br&gt;'''Extensions of the SPEEDUP Path Integral Monte Carlo Code'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IPB] A. 
Balaz&lt;br&gt;'''Numerical Study of Surface Waves in Binary Bose-Einstein Condensates'''&lt;br&gt;''Seminar at Vinca Institute of Nuclear Sciences'', November 17, 2011, Belgrade, Serbia<br /> * [IPB] A. Balaz&lt;br&gt;'''Numerical simulations of Faraday waves in binary Bose-Einstein condensates'''&lt;br&gt;''CompPhys11 conference'', November 24-26, 2011, Leipzig, Germany<br /> * [IPB] A. Balaz&lt;br&gt;'''Parametric and Geometric Resonances of Collective Oscillation Modes in Bose-Einstein Condensates'''&lt;br&gt;''Photonica 2011 conference'', August 29 - September 2, 2011, Belgrade, Serbia<br /> * [IPB] Antun Balaz: &quot;Numerical Study of Ultracold Quantum Gases: Formation of Faraday Patterns, Geometric Resonances, and Fragmentation&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Nenad Vukmirovic, &quot;Computational approaches for electronic properties of semiconducting materials and nanostructures&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Igor Stankovic, &quot;Numerical Simulations of the Structure and Transport Properties of the Complex Networks&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Jaksa Vucicevic, &quot;Iterative Perturbative Method for a Study of Disordered Strongly Correlated Systems&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Milos Radonjic, &quot;Electronic Structure and Lattice Dynamics Calculations of FeSb2 and CoSb2&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Dusan Stankovic, &quot;SCL Quantum Espresso Extensions&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Josip Jakic, &quot;An Analysis of FFTW and FFTE Performance&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB] Petar Jovanovic, &quot;GPAW optimisations&quot;, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Ivan Juranic, “Use of High Performance Computing in (Bio)Chemistry”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Branko Drakulic, “Dynamics of uninhibited and covalently inhibited cysteine protease on non-physiological”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Branko Drakulic, “Free-energy surfaces of 2-[(carboxymethyl)sulfanyl]-4-oxo-4-arylbutanoic acids. Molecular dynamics study in explicit solvents”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> * [IPB-FCUB] Ilija Cvijetic, “In the search of the HDAC-1 inhibitors. 
The preliminary results of ligand based virtual screening”, HP-SEE User Forum 2012, 17-19 October 2012, National Library of Serbia, Belgrade, Serbia.<br /> <br /> <br /> === AL ===<br /> * [UPT] Neki Frasheri, Betim Cico&lt;br&gt;'''Analysis of the Convergence of Iterative Geophysical Inversion in Parallel Systems'''&lt;br&gt;''ICT Innovations 2012'', September 14-16, 2011, Skopje, FYROM<br /> * [UPT] Neki Frasheri, Betim Cico&lt;br&gt;'''Convergence of Gravity Inversion Using OpenMP'''&lt;br&gt;''XVII Scientific-Professional Information Technology Conference 2012'', February 27 – March 2, 2012, Zabljak, Montenegro<br /> * [UPT] N. Frasheri, B. Cico. Reflections on parallelization of Gravity Inversion. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia.<br /> * [UPT] R. Zeqirllari. Quenched Hadron Spectroscopy Using FermiQCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)<br /> * [UPT] D. Xhako. Using Parallel Computing to Calculate Quark-Antiquark Potential from Lattice QCD. HP-SEE User Forum, 17-19 October 2012, Belgrade, Republic of Serbia. (poster)<br /> <br /> === MK ===<br /> * [UKIM] Anastas Misev&lt;br&gt;'''Software Engineering for HPC'''&lt;br&gt;''DAAD workshop in software engineering'', August 22-27, 2011, FYROM<br /> * [UKIM] Jane Jovanovski, Boro Jakimovski and Dragan Jakimovski&lt;br&gt;'''Parallel Genetic Algorithms for Finding Solution of System of Ordinary Differential Equations'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [UKIM] Anastas Misev, Dragan Sahpaski and Ljupco Pejov&lt;br&gt;'''Implementation of Hybrid Monte Carlo (Molecular Dynamics) – Quantum Mechanical Methodology for Modelling of Condensed Phases on High Performance Computing Environment'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> <br /> === ME ===<br /> * [UOM] Luka Filipović, Danilo Mrdak, Božo Krstajić, &quot;DNA Multigene approach on HPC Using RAxML Software&quot;, HP-SEE User Forum, Belgrade, Serbia<br /> <br /> === MD ===<br /> * [RENAM] Dr. P. Bogatencov, Dr. G. Secrieru&lt;br&gt;'''Numerical analysis of the coupled problem on interaction of ground and elastic-plastic shell under high-speed loads'''&lt;br&gt;''8th LSSC’11 Conference'', June 6-10, 2011, Sozopol, Bulgaria<br /> * [RENAM] Participation of RENAM representative Dr. Petru Bogatencov in &quot;HP-SEE User Forum 2012&quot;, Belgrade, Serbia on October 17-19, 2012 with presentation “Using Structured Adaptive Computational Grid for Solving Multidimensional Computational Physics Tasks”;<br /> <br /> === GE ===<br /> * [GRENA] G. Mikuchadze presented “Quantum-Chemical Calculations for the Quantitative Estimations of the Processes in DNA” at the HP-SEE User Forum on October 17-19, 2012 in Belgrade, Serbia.<br /> <br /> == HP-SEE dissemination presentations and posters at external events ==<br /> === GR ===<br /> <br /> * [GRNET] O. Prnjat&lt;br&gt;'''High-Performance Computing Infrastructure for South East Europe’s Research Communities'''&lt;br&gt;''EGI technical forum'', September 14-17, 2010, Amsterdam, The Netherlands<br /> * [GRNET] I. Liabotis&lt;br&gt;'''HP-SEE - Regional HPC development activities in South-Eastern Europe'''&lt;br&gt;''2nd HellasHPC Workshop'', October 22, 2010, Athens, Greece.<br /> * [GRNET] I. Liabotis&lt;br&gt;'''HP-SEE project'''&lt;br&gt;''8th e-Infrastructure Concertation Meeting'', November 4-5, 2010, CERN, Geneva, Switzerland<br /> * [GRNET] O.
Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''The Forum on Research for Innovation in ICT for the Western Balkan Countries'', November 30, 2010, Belgrade, Serbia<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''2010 Euro-Africa Conference on e-Infrastructures'', December 9-10, 2010, Helsinki, Finland<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE Project (overall SEE)'''&lt;br&gt;''SEERA-EI Sixth Networking Meeting'', January, 2011, Sarajevo, Bosnia and Herzegovina<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Seventh Networking Meeting'', April, 2011, Chisinau, Moldova<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''NATO Advanced Networking Workshop on &quot;Advanced cooperation in the area of disaster prevention for human security in Central Asia&quot;'', May 16, 2011, Dushanbe, Tajikistan<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Eighth Networking Meeting'', July, 2011, Tirana, Albania<br /> * [GRNET] O. Prnjat&lt;br&gt;'''HP-SEE project'''&lt;br&gt;''eInfrastructures concertation meeting'', September 2011, Lyon, France<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''EuroRIs-Net Workshop'', October 11, 2011, Athens, Greece.<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI Ninth Networking Meeting'', November, 2011, Bucharest, Romania<br /> * [GRNET] Dr. Tryfon Chiotis&lt;br&gt;'''HPC and Cloud Technologies in Greece and South Eastern Europe'''&lt;br&gt;''e-AGE 2011 - Integrating Arab e-Infrastructures in a global environment'', December, 2011, Amman, Jordan<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI project conference for the Western audience'', February, 2012, Istanbul, Turkey<br /> * [GRNET] &lt;br&gt;'''HP-SEE project'''&lt;br&gt;''SEERA-EI cloud policy workshop'', April, 2011, Chisinau, Moldova<br /> * [GRNET] &lt;br&gt;'''poster entitled “Advanced High Performance Computing Services for networked regional Virtual Research Communities”'''&lt;br&gt;''TNC2012'', May 21-24, 2012, Reykjavík, Iceland<br /> * [GRNET] &lt;br&gt;'''presentation to the high-level European policy-makers'''&lt;br&gt;''eInfrastructure Reflection Group workshop'', June, 2012, Copenhagen, Denmark<br /> * [GRNET] HP-SEE regional HPC policy, operations and user support approach presented to the highest-possible worldwide audience during CHAIN project workshops in China, India and Sub-Saharan Africa.<br /> <br /> === BG ===<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Bulgarian HPC Infrastructure'''&lt;br&gt;''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes'', May 17, 2011, Sofia, Bulgaria<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''e-Infrastructure for e-Science'''&lt;br&gt;''seminar at the Institute on Oceanology'', March 11, 2011, Varna, Bulgaria<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''“High-Performance Computing in South East Europe and Monte Carlo Simulations”, Special Session “High-Performance Computations for Monte Carlo Applications”'''&lt;br&gt;''8th IMACS Seminar on Monte Carlo Methods'', August 29 - September 2, 2011, Borovets, Bulgaria<br /> * [IICT-BAS] A. Karaivanova&lt;br&gt;'''HP-SEE and SEERA-EI'''&lt;br&gt;''ICRI 2012'', March 21-23, 2012, Copenhagen, Denmark.<br /> * [IICT-BAS] E.
Atanassov&lt;br&gt;'''National e-Infrastructure responsibilities of IICT-BAS, during the “Day of open doors for young researchers in field of mathematics and informatics”'''&lt;br&gt;''annual conference &quot;European Student Conference in Mathematics - EUROMATH - 2012&quot;'', March 21-25, 2012, Sofia, Bulgaria<br /> * [IICT-BAS] E. Atanassov&lt;br&gt;'''Poster presentation of HP-SEE activities'''&lt;br&gt;''RIDE 2012'', May 21-25, 2012, Opatija, Croatia.<br /> * [IICT-BAS] T. Gurov&lt;br&gt;'''High-Performance Computing Infrastructure for South East’s Research Communities'''&lt;br&gt;''2nd Workshop of Networking Initiatives for ICT related projects'', September 10- 11, 2010, Varna, Bulgaria.<br /> * [IICT-BAS]: E. Atanassov and T. Gurov, ë-Infrastructures for Scientific Computations”, talk during the Researchers’ Night at IICT-BAS, 28 September, 2012, Sofia, Bulgaria. <br /> <br /> === RO ===<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''The HP-SEE Project'''&lt;br&gt;''RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations'', 6-7 December 6-7, 2010, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''National infrastructure for high performance scientific computing'''&lt;br&gt;''GRID and HPC in Modern Medical Environments&quot; Conference'', May 08, 2011, Bucharest, Romania,<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''Romanian HPC Infrastructure'''&lt;br&gt;''SEERA-EI training series on best practices: Bulgarian HPC policy and programmes,'', May 17, 2011, Sofia, Bulgaria<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''Romanian Association for the Promotion of the Advanced Computational Methods in Scientific Research (ARCAŞ)'''&lt;br&gt;''Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''National and regional HPC collaboration,'''&lt;br&gt;''RO-LCG 2011 Workshop - Applications of Grid Technology and High Performance Computing in Advanced Research'', November 29-30, 2011, Bucharest, Romania<br /> * [IFIN HH] M. Dulea&lt;br&gt;'''High performance computing for Romanian scientific community'''&lt;br&gt;''“Study for the operationalization of the National Supercomputing Centre”,'', March 15, 2012, Bucharest, Romania<br /> <br /> <br /> === TR ===<br /> * [TUBITAK-ULAKBIM] Murat Soysal&lt;br&gt;'''Integration of South Caucasus National Research and Education Networks to Europe and Limitations'''&lt;br&gt;''Policy Stakeholder Conference “EU – Eastern Europe/Central Asia Cooperation in Research and Innovation: The way towards 2020”'', November 15-16, 2011, Warsaw, Poland<br /> <br /> === HU ===<br /> <br /> * [NIIF] M.Kozlovszky&lt;br&gt;'''HP-SEE project and the HPC Bioinformatics Life Science gateway'''&lt;br&gt;''Summer School on Workflows and Gateways for Grids and Clouds 2012 '', July 2-6, 2012, Budapest, Hungary<br /> <br /> === RS ===<br /> * [IPB] &lt;br&gt;'''HP-SEE'''&lt;br&gt;''50 years of IPB'', May 2011, Belgrade, Serbia<br /> * [IPB] &lt;br&gt;'''Einfrastructures for science in Europe'''&lt;br&gt;''ISC'11, BoF session'', June 20-22, 2011, Hamburg<br /> * [IPB] D. 
Stojiljkovic&lt;br&gt;'''High Performance Computing Infrastructure for South East Europe’s Research Communities (HP-SEE) project along with some selected supported applications'''&lt;br&gt;''PRACE Workshop on HPC Approaches on Life Sciences and Chemistry'', February 17-18, 2012, Sofia, Bulgaria<br /> <br /> === AL ===<br /> * [UPT] &lt;br&gt;'''Presentation of HP-SEE and application GMI'''&lt;br&gt;''round table of AITA (Albanian IT Association)'', <br /> * [UPT] N. Frasheri, B. Cico. Computing developments in Albania and it’s applications. International Workshop on recent LHC results and related topics. 8-9 October 2012, Tirana, Albania<br /> * [UPT] N. Frasheri. HP and High Performance Computing in Albania. HP Solutions event, Tirana, 26 September 2012<br /> <br /> === MK ===<br /> * [UKIM] Anastas Misev&lt;br&gt;'''High performance computing – in Europe, region and at home'''&lt;br&gt;''first LANFest event'', May 20-22, 2011, Skopje, FYROM<br /> * [UKIM] Anastas Misev&lt;br&gt;'''Presentation of the HP-SEE project at the special HPC session'''&lt;br&gt;''ICT Innovations 2011 conference'', September 14-16, 2011, Skopje, FYROM<br /> * [UKIM] &quot;HPC in the industry&quot;, Anastas Mishev, presentation at the „High Performance computing – main driver of modern e-science and economy“ dissemination and promotional event co-organized by HP and FINKI.<br /> <br /> === ME ===<br /> * [UOM] &lt;br&gt;'''Presentation of HP SEE project'''&lt;br&gt;''XVII Scientific-Professional Information Technology Conference 2012'', February 28, 2012, Zabljak, Montenegro<br /> <br /> === MD ===<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''CURRENT STATE OF DISTRIBUTED COMPUTING INFRASTRUCTURE DEPLOYMENT IN MOLDOVA'''&lt;br&gt;''10th RoEduNet International Conference - Networking in Education and Research'', June 23–25, 2011, Iasi, Romania<br /> * [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru&lt;br&gt;'''eInfrastructure development projects in Moldova at FP7 part 1'''&lt;br&gt;''FP7 Projects Information Day organized by FP7 National contact points with international participation “New status-new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova<br /> * [RENAM] A. Andries, A. Altuhov, P. Bogatencov, N. Iliuha, G. Secrieru&lt;br&gt;'''eInfrastructure development projects in Moldova at FP7 part 2'''&lt;br&gt;''FP7 Projects Information Day organized by FP7 National contact points with international participation “New status-new opportunities for Moldova”'', September 05, 2011, Chisinau, Moldova<br /> * [RENAM] Mr. Nicolai Iliuha&lt;br&gt;'''HPC and Cloud computing technologies'''&lt;br&gt;''Master-class „Cloud computing and computing in the Cloud”'', May 11, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''CBF SOLUTION IMPLEMENTATION FOR LINKING MOLDOVAN R&amp;E NETWORK TO GEANT'''&lt;br&gt;''10th RoEduNet International Conference - Networking in Education and Research'', June 23–25, 2011, Iasi, Romania<br /> * [RENAM] Nicolai Iliuha&lt;br&gt;'''High Performance Computing: current state of HP-SEE Project'''&lt;br&gt;''5th RENAM User’s Conference–2011: “Informational Services for Research and Educational Society of Moldova”'', September 22-23, 2011, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''Participation in European eInfrastructure Projects - Experience and Case Studies”'''&lt;br&gt;''Official Launching of Moldova Association to FP7'', January 27, 2012, Chisinau, Moldova<br /> * [RENAM] Mr. 
Nicolai Iliuha&lt;br&gt;'''COMPUTATIONAL RESOURCES OF THE REGIONAL SOUTH-EAST EUROPE HIGH PERFORMANCE COMPUTING INFRASTRUCTURE'''&lt;br&gt;''4-th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''PERSPECTIVES OF REGIONAL CROSS-BORDER FIBER CONNECTIONS DEVELOPMENT'''&lt;br&gt;''4-th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. G. Secrieru&lt;br&gt;'''“СЕТЬ НАУКИ И ОБРАЗОВАНИЯ КАК ИНФРАСТРУКТУРА ДЛЯ GRID – ПРИЛОЖЕНИЙ”'''&lt;br&gt;''4-th International Conference &quot;Telecommunications, Electronics and Informatics&quot;'', May 17-19, 2012, Chisinau, Moldova<br /> * [RENAM] Dr. P. Bogatencov&lt;br&gt;'''International and regional projects for computing technologies development'''&lt;br&gt;''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova<br /> * [RENAM] Mr. Nicolai Iliuha&lt;br&gt;'''Access to regional High Performance Computing (HPC) resources'''&lt;br&gt;''Seminar entitled “Access to regional High Performance Computing (HPC) resources”'', May 30, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. P. Bogatencov&lt;br&gt;'''eInfrastructure Calls in FP7 Capacities specific programme'''&lt;br&gt;''Launching new FP7 Calls for 2013'', July 12, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. G. Secrieru&lt;br&gt;'''Access to the Regional HPC Resources and Strategy of Their Development'''&lt;br&gt;''20th Conference on Applied and Industrial Mathematics'', August 22-25, 2012, Chisinau, Moldova<br /> * [RENAM IMI ASM] Dr. P. Bogatencov and Dr. G. Secrieru&lt;br&gt;'''HP-SEE project, local HPC resources and complex applications developing in IMI ASM '''&lt;br&gt;''Joint meeting with specialists from Global Initiatives for Proliferation Prevention (GIPP) of the US Department of Energy organized in IMI ASM by the Science and Technology Centre'', June 22, 2012, Ukraine<br /> * [RENAM] Participation of RENAM representative Nicolai Iliuha at “The 6th International Conference on Application of Information and Communication Technologies&quot; Georgia, Tbilisi, 17-19.10. 2012 with presentation &quot;Computing Infrastructure and Services Deployment for Research Community of Moldova&quot; (http://aict.info/2012/);<br /> * [RENAM] Participation of RENAM representative Dr. P. Bogatencov at “The Fourth International Scientific Conference &quot;Supercomputer Systems and Applications&quot; (SSA`2012)”, Minsk, UIIP NAS Belarus, 23-25.10. 2012 with keynote presentation &quot;COMPUTATIONAL RESOURCES OF THE REGIONAL SOUTH-EAST EUROPE HIGH PERFORMANCE COMPUTING INFRASTRUCTURE&quot; (http://ssa2012.bas-net.by/en/) and presentation of the session report entitled “Modeling of three-dimensional gas dynamics problems on multiprocessor systems and graphical processors” prepared together with Prof. 
Boris Rybakin.<br /> * [RENAM] Participation of RENAM representative Nicolai Iliuha in the technical-scientific workshop „Computational resources, special software and procedures of obtaining access to the HPC cluster in the State University of Moldova”, 7 November 2012, the State University of Moldova with report „National, regional and European Grid infrastructures; participation of Moldova in EGI-Inspire project and regional HP-SEE project”.<br /> <br /> === AM ===<br /> * [IIAP_NAS_RA] Yuri Shoukourian&lt;br&gt;'''Stimulating and Revealing Technological Innovations in Academic Institutions,'''&lt;br&gt;''ArmTech Congress'', October 10-11, 2011, Yerevan, Armenia<br /> * [IIAP_NAS_RA] Yuri Shoukourian&lt;br&gt;'''Armenian Research &amp; Educational E-Infrastructure'''&lt;br&gt;''Eastern Partnership Event'', November 07-08, 2011, Bucharest, Romania<br /> <br /> === GE ===<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''Georgian Research and Educational Networking Association GRENA'''&lt;br&gt;''First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial'', October 26, 2010, Tbilisi, Georgia<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''E-Infrastructure in South Caucasus'''&lt;br&gt;''Eastern Europe Partnership Event – Policies for Development of E-Infrastructures in Eastern European Countries'', November 07-08, 2011, Bucharest, Romania<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''Participation of GRENA in European Commission projects'''&lt;br&gt;''Meeting of EU Delegation in Georgia'', June 18, 2012, Tbilisi, Georgia<br /> * [GRENA] R. Kvatadze&lt;br&gt;'''E-infrastructure in South Caucasus Countries for Science'''&lt;br&gt;''EC funded GEO-RECAP and IDEALIST projects meetings '', June 27-28, 2012, Tbilisi, Georgia<br /> * [GRENA] R. Kvatadze made presentation “E-Infrastructure for Science in Georgia” at the 6th International Conference on Application of Information and Communication Technologies on October 17-19, 2012 in Tbilisi, Georgia. http://aict.info/2012/ <br /> * [GRENA] R. Kvatadze made presentation “E-Infrastructure for Science in Georgia” at the 2nd ATLAS-SouthCaucasus Software/Computing Workshop on October 23-26, 2012 in Tbilisi, Georgia. http://dmu-atlas.web.cern.ch/dmu-atlas/2012/index.html<br /> <br /> <br /> <br /> &lt;!--------- OLD LIST<br /> === GR ===<br /> * (GRNET), O. Prnjat, &quot;High-Performance Computing Infrastructure for South East Europe’s Research Communities&quot; EGI technical forum, September 2010.<br /> * (GRNET), I. Liabotis, Presentation of the HP-SEE project in the 8th e-Infrastructure Concertation Meeting 4-5 November 2010, CERN, Geneva. (http://www.e-sciencetalk.org/econcertation/)<br /> * (GRNET), I. Liabotis, Presentation of the project entitled &quot;HP-SEE - Regional HPC development activities in South-Eastern Europe&quot; in the 2nd HellasHPC Workshop. 22nd of October 2010, Athens, Greece.<br /> * (GRNET), O. Prnjat, 2010 Euro-Africa Conference on e- Infrastructures 9-10 December 2010, Helsinki, Finland<br /> * (GRNET), O. Prnjat, The Forum on Research for Innovation in ICT for the Western Balkan Countries, 30 November 2010, Belgrade, Serbia<br /> * (GRNET), O. Prnjat, SEERA-EI Sixth Networking Meeting, January 2011, Sarajevo<br /> * (GRNET), I. 
Liabotis Presentation of the project in the National HPC conference, 9 December 2010 Sofia Bulgaria,<br /> * (GRNET), Presentation of HP-SEE at the NATO Advanced Networking Workshop on &quot;Advanced cooperation in the area of disaster prevention for human security in Central Asia&quot;, May 2011, Dushanbe.<br /> * (GRNET), HP-SEE was presented as part of overall SEE activities’ presentation under the umbrella of the SEERAEI project, at the following events:<br /> ** SEERA-EI Seventh Networking Meeting, April 2011, Chisinau.<br /> ** SEERA-EI cloud policy workshop, April 2011, Chisinau.<br /> <br /> === BG ===<br /> * (IPP-BAS/IICT-BAS) T. Gurov, &quot;High-Performance Computing Infrastructure for South East’s Research Communities&quot;, 2nd Workshop of Networking Initiatives for ICT related projects, 10-11.09.2010, Varna, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) A. Karaivanova, T. Gurov and E. Atanassov, &quot;HP-SEE Overview&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, &quot;HP-SEE Infrastructure and Access&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) T. Gurov, &quot;Introduction to Parallel Computing: Message Passing and Shared Memory&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, &quot;Advanced Application Porting and Optimization&quot;, HP-SEE Regional Training, 29-30.11.2010, Sofia, Bulgaria.<br /> * (IPP-BAS/IICT-BAS) E. Atanassov, A. Karaivanova, S. Ivanovska, M. Durchova, “Two Algorithms for Modified Owen Scrambling on GPU”, Fifth Annual meeting of the Bulgarian Section of SIAM – BGSIAM’2010, 20-21 December, 2010 Sofia, Bulgaria.<br /> * (IICT-BAS), T. Gurov, “Overview of the HP-SEE project”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “HPC Cluster at IICT-BAS and HP-SEE Infrastructure”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “Introduction to GPU Computing”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), T. Gurov, “e-Infrastructure for e-Science”, presented on a seminar at the Institute on Oceanology, 11 March, 2011, Varna, Bulgaria.<br /> * (IICT-BAS), G. Bencheva, “Introduction to Parallel Computing”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), Y. Vutov, N. Kosturski, “Application software deployed on BG/P”, 1st National HPC training, 23-24 March, Sofia, Bulgaria.<br /> * (IICT-BAS), Sv. Margenov, “Supercomputer applications on BG/P”, National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), K. Shterev, “Computer Simulation on micro-gas flows in elements of Micro-Electro-Mechanical Systems (MEMS), National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), Y. Vutov, “Optimizing the Performance of a Parallel Unstructured Grid AMG Solver”, National workshop for supercomputer applications, 20-22, May, 2011, Hissar, Bulgaria.<br /> * (IICT-BAS), E. Atanassov, “Bulgarian HPC Infrastructure”, presented on “SEERA-EI training series on best practices: Bulgarian HPC policy and programmes”, 17 May, 2011, Sofia, Bulgaria.<br /> <br /> === RO ===<br /> * (IFIN HH), “The HP-SEE Project”, M. 
Dulea, RO-LCG 2010 Conference - Grid and High Performance Computing in Scientific Collaborations, Bucharest, 6-7 December 2010, http://wlcg10.nipne.ro<br /> * (IFIN), “National infrastructure for high performance scientific computing”, M. Dulea, at the ”GRID and HPC in Modern Medical Environments&quot; Conference, Carol Davila University of Medicine and Pharmacy, Bucharest, 08.03.2011.<br /> * (IFIN), “High performance computing for Romanian scientific community”, M. Dulea, in the framework of the “Study for the operationalization of the National Supercomputing Centre”, Ministry of Communications and Informational Society, Bucharest, 15.03.2011.<br /> * (IFIN), “Romanian HPC Infrastructure”, M. Dulea, SEERA-EI training series on best practices: Bulgarian HPC policy and programmes, Sofia, 17.04.2011.<br /> * (UPT), Presentation of the project during the training event in Tirana, 21 Dec. 2010<br /> * (UPB), Computing Infrastructure for Scientific Computing – the UPB-NCIT-Cluster, October 25th 2010, Bucharest, Romania. CEEMEA Blue Gene Research Collaboration and Community Building.<br /> <br /> === HU ===<br /> * (NIIF), S. Péter, &quot;The new national HPC infrastructure and services, operated by NIIF&quot;, 20th Networkshop, April 2011.<br /> * (NIIF), R. Gábor, &quot;South East European HPC Project&quot;, 20th Networkshop, April 2011.<br /> * (SZTAKI), B. Ákos, “Workflow level interoperability between ARC and Glite middlewares for HPC and Grid applications&quot;, 20th Networkshop, April 2011.<br /> * (SZTAKI), M. Kozlovszky, A. Balasko, P. Kacsuk, Enabling JChem application on grid, ISGC 2011 &amp; OGF 31, International Symposium on Grids and Clouds (ISGC 2011) &amp; Open Grid Forum (OGF 31) 21-25 March 2011, Taipei Taiwan.<br /> * (SZTAKI), M. Kozlovszky, A. Balasko, I. Marton, K. Karoczkai, A. Szikszay Fabri, P. Kacsuk: New developments of gUSE &amp; WS-PGRADE to support e-science gateways, EGI User Forum Lithuania, Vilnius, 2011 April.<br /> <br /> === RS ===<br /> * (IPB), During the May the Institute of Physics Belgrade officially celebrated 50 years of its establishment. For this occasion, the dedicated exhibition was hosted at the Gallery of Science and Technology of the Serbian Sciences and Arts in Belgrade, Serbia. As a part of the exhibition a set of HPC oriented sessions were organized, during which HP-SEE project was presented.<br /> * (IPB), The International Particle Physics MasterClass Belgrade 2011 was organized by the University of Belgrade, in collaboration with the European Particle Physics Outreach Group, and was held on 14 March 2011. SCL's Antun Balaz gave a short talk to high school students on Grid computing, HPC resourcers and its applications in particle physics.<br /> * (IPB), At the &quot;Advanced School in High Performance and Grid Computing&quot; organized by the International Centre for Theoretical Physics (ICTP) in Trieste, Italy from 11 to 22 April 2011, Antun Balaz (who was also one of the organizers) gave the Lecture on &quot;Advanced Profiling and Optimization Techniques&quot; by A. Balaz. Milan Zezelj presented his work on &quot;Parallel Implementation of a Monte Carlo Molecular Simulation Program&quot;<br /> * (IPB), The Executive Agency “Electronic Communications Networks and Information Systems” of the Bulgarian government, as a part of SEERA –EI project activities, has organized a meeting on “HPC related policy and programs” for South Eastern Europe policy makers. 
Antun Balaz present Serbian experiences, plans and expectations towards SEE high performance computing infrastructures development during the round table on &quot;HPC initiatives in other SEE countries: Romania, Serbia, FYROM, Greece&quot;. Presentation was titled &quot;Infrastructure for the National Supercomputing Initiative&quot;.<br /> <br /> === MK ===<br /> * (UKIM), Presentation titled: High performance computing – in Europe, region and at home was delivered by Anastas Misev at the first LANFest event in Skopje, 20.-22.05.2011 . http://lanfest.mk/ex/LANFestConferenceAgenda.pdf The presentation contains a special part on HP-SEE project.<br /> <br /> === MD ===<br /> * (RENAM), Presentation of Mr. Nicolai Iliuha about HP-SEE Kick-off-meeting was made on 30.09. 2010 for members of RENAM Scientific and Technical Council and invited representatives from universities and research institutions<br /> * (RENAM), Presentation of Mr. Nicolai Iliuha about HP-SEE training Event in Bulgaria (29-30.11.2010), ability to access and parameters of HPC resources available in IICT-BAS was made for members of RENAM Scientific and Technical Council and invited representatives from universities and research institutions of the Academy of Sciences of Moldova<br /> * (RENAM), On 25th and 26th of May 2011 RENAM presented two reports at the joint training and dissemination event entitled “Computational structures and Technologies for Solving Large Scale Problems” for specialists of the Institute of Mathematics and Computer Science&quot; and the Institute of Economy, Finance and Statistics of ASM:<br /> ** Dr. Peter Bogatencov - &quot;Scientific computing – current state and perspectives of development&quot;;<br /> ** Mr. Nicolai Iliuha - &quot;Computational structures and Technologies to meet challenges of modeling&quot;<br /> <br /> <br /> === GE ===<br /> * (GRENA), On October 26, 2010 R. Kvatadze made presentation about Georgian Research and Educational Networking Association GRENA activities at First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial held in Tbilisi, Georgia on October 25 – 29, 2010.<br /> * (GRENA), R. Kvatadze participated in kick-off meeting of the project on September 6-8, 2010 in Athens.<br /> * (GRENA), GRENA as a co-organizer of the First ATLAS/South Caucasus Software / Computing Workshop &amp; Tutorial took part in preparation and conduction of the event. Workshop was held in Tbilisi, Georgia during October 25 – 29, 2010 http://dmu-atlas.web.cern.ch/dmuatlas/2010/.<br /> * (GRENA), N. 
Gamtsemlidze participated in HP-SEE Regional Training on November 29-30, 2010 in Sofia.&lt;br /&gt; &lt;br /&gt; ---------------&gt;</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/MSBP MSBP 2013-03-10T11:41:27Z <p>Lifesci: /* Short Description */</p> <hr /> <div>== General Information ==&lt;br /&gt; &lt;br /&gt; * Application's name: ''Modeling of some biochemical processes with the purpose of realization of their thin and purposeful synthesis''&lt;br /&gt; * Application's acronym: ''MSBP''&lt;br /&gt; * Virtual Research Community: ''Life Sciences'' &lt;br /&gt; * Scientific contact: ''Jumber Kereselidze, Ramaz Kvatadze ramaz[at]grena.ge''&lt;br /&gt; * Technical contact: ''George Mikuchadze gmikuchadze[at]gmail.com''&lt;br /&gt; * Developers: ''Scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University''&lt;br /&gt; * Web site: http://wiki.hp-see.eu/index.php/MSBP&lt;br /&gt; &lt;br /&gt; == Short Description ==&lt;br /&gt; &lt;br /&gt; One of the priority directions of modern natural sciences is research into, and the creation of opportunities for, the thin and purposeful synthesis of nucleotide bases. The solution of this problem is directly connected to the application of modern methods of quantum chemistry (DFT - Density Functional Theory) and molecular mechanics. In recent years the scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University have been engaged in modeling the transformations of biochemical macromolecular systems (amino acids, proteins and DNA) with the use of the appropriate computer programs (program package Nature - Moscow State University). The Nature quantum-chemical program is designed for the study of complex molecular systems by density functional theory, at the MP2, MP3, and MP4 levels of many-body perturbation theory, and by the coupled-cluster single and double excitations method (CCSD), with the application of parallel computing.&lt;br /&gt; &lt;br /&gt; == Problems Solved ==&lt;br /&gt; &lt;br /&gt; The energy characteristics of the tautomeric transformations of cytosine, thymine, and uracil have been calculated within the framework of density functional theory. It was found that the directions of the tautomeric conversions are characterized by the activation energies calculated according to density functional theory.&lt;br /&gt; The published data on the prototropic tautomerism of some carbonyl and nitrogen-containing acyclic and heterocyclic compounds are systematized. Mechanisms of the intramolecular and intermolecular proton transfer in tautomerisation reactions were considered. On the basis of the results of semiempirical and quantum-chemical calculations, preference is given to an intermolecular collective (dimeric, trimeric, tetrameric or oligomeric) mechanism. A new approach to the description of the solvent effect on the prototropic tautomeric equilibrium was proposed.&lt;br /&gt; &lt;br /&gt; == Scientific and Social Impact ==&lt;br /&gt; &lt;br /&gt; The solution of this problem is directly connected to the application of modern methods of quantum chemistry (DFT - Density Functional Theory) and molecular mechanics. The obtained results will be important for the prediction of the denaturation of DNA. Working on this project will significantly improve the research capacity of the scientists involved and will raise the educational level in biophysical chemistry at the Tbilisi State University and the Sokhumi State University. The researchers will improve their experience in participation in European Programmes and will contribute to the integration of the Georgian research potential into the European Research Area.&lt;br /&gt; &lt;br /&gt; == Collaborations ==&lt;br /&gt; &lt;br /&gt; * Tbilisi State University, Georgia&lt;br /&gt; * Sokhumi State University, Georgia&lt;br /&gt; * Moscow State University, Russia&lt;br /&gt; &lt;br /&gt; == Beneficiaries ==&lt;br /&gt; &lt;br /&gt; Primary beneficiaries will be research groups from the Tbilisi, Sokhumi and Moscow State Universities; however, the obtained results can be used by all scientists working on the thin and purposeful synthesis of nucleotide bases. Students involved in this research will gain experience in scientific collaboration.&lt;br /&gt; &lt;br /&gt; == Number of users ==&lt;br /&gt; &lt;br /&gt; 9&lt;br /&gt; &lt;br /&gt; == Development Plan ==&lt;br /&gt; &lt;br /&gt; * Concept: ''Done before the project started.''&lt;br /&gt; * Start of alpha stage: ''M1''&lt;br /&gt; * Start of beta stage: ''M6''&lt;br /&gt; * Start of testing stage: ''M8''&lt;br /&gt; * Start of deployment stage: ''M11''&lt;br /&gt; * Start of production stage: ''M15''&lt;br /&gt; * Production is in progress, different configurations are under investigation: ''M16-M36''&lt;br /&gt; &lt;br /&gt; == Resource Requirements ==&lt;br /&gt; &lt;br /&gt; * Number of cores required for a single run: ''From 8 up to 64''&lt;br /&gt; * Minimum RAM/core required: ''1 GB/24''&lt;br /&gt; * Storage space during a single run: ''200 - 500 MB''&lt;br /&gt; * Long-term data storage: ''not required''&lt;br /&gt; * Total core hours required: ''not clear yet''&lt;br /&gt; &lt;br /&gt; == Technical Features and HP-SEE Implementation ==&lt;br /&gt; &lt;br /&gt; * Primary programming language: ''C, Fortran''&lt;br /&gt; * Parallel programming paradigm: ''MPI/OpenMP''&lt;br /&gt; * Main parallel code: ''OpenMPI/OpenMP''&lt;br /&gt; * Pre/post processing code: ''in-house development, C''&lt;br /&gt; * Application tools and libraries: ''Intel C/Fortran compilers, GCC/GFortran, PGI Fortran''&lt;br /&gt; &lt;br /&gt; == Usage Example ==&lt;br /&gt; &lt;br /&gt; &lt;br /&gt; == Infrastructure Usage ==&lt;br /&gt; &lt;br /&gt; * Home system: ''NCIT-Cluster'' &lt;br /&gt; ** Applied for access on: ''04.2011''&lt;br /&gt; ** Access granted on: ''05.2011''&lt;br /&gt; ** Achieved scalability: ''32 cores''&lt;br /&gt; * Accessed production systems:&lt;br /&gt; # ''HPC centre in Debrecen (Debrecen SC)''&lt;br /&gt; #* Applied for access on: ''07.2012''&lt;br /&gt; #* Access granted on: ''08.2012''&lt;br /&gt; #* Achieved scalability: ''32 cores''&lt;br /&gt; * Porting activities: ''The application has been successfully ported to the NCIT-Cluster. George Mikuchadze was assisted by Mihnea Dulea and then by Emil Slusanschi, Associate Professor at the Department of Computer Science and Engineering of the University Politehnica of Bucharest. In August 2012 the application was also successfully ported to the HPC centre in Debrecen.''&lt;br /&gt; * Scalability studies: ''Tests on 8, 16, 24, 32 and 64 cores.''&lt;br /&gt; &lt;br /&gt; == Running on Several HP-SEE Centres ==&lt;br /&gt; &lt;br /&gt; * Benchmarking activities and results: ''After successful deployment on 8 cores, benchmarking was initiated for 16, 32 and 64 cores.''&lt;br /&gt; * Other issues: ''Further study for higher scaling is still required.''&lt;br /&gt; &lt;br /&gt; == Achieved Results ==&lt;br /&gt; &lt;br /&gt; * Quantum-chemical modeling of proton transfer in nitrogen-containing biologically active compounds using the modern non-empirical method, Density Functional Theory. As a result of the calculations of the energetic, electronic and structural characteristics of proton transfer in the nucleotide bases, the mutation processes in DNA are quantitatively described.
The quantum-chemical model of the stacking and pentameric mechanisms of the tautomeric transformations of the heterocyclic compounds is constructed.&lt;br /&gt; &lt;br /&gt; == Publications ==&lt;br /&gt; &lt;br /&gt; * T. Zarqua, J. Kereselidze and Z. Pachulia. Quantum-chemical description of the influence of electronic effects of proton transfer in guanine-cytosine base pairs. J. Biol. Phys. Chem., v.10, pp.71-73 (2010).&lt;br /&gt; * J. Kereselidze, T. Zarqua, Z. Paculia, M. Kvaraia. Quantum Chemical Modeling of the Mechanism of Formation of the Peptide Bond. International Conference on Computational Biology. Tokyo, Japan, May 26-28, 2010. WASET, 65, p.1469 (2010).&lt;br /&gt; * M. Kvaraia, J. Kereselidze, Z. Pachulia and T. Zarqua. Quantum-Chemical Study of the Solvent Effect on Process of a Proton Transfer in Nucleotide Bases. Proceed. Georgian NA Sciences, v. 36, pp 306-308 (2010).&lt;br /&gt; * J. Kereselidze, Z. Pachulia and M. Kvaraia. Quantum-chemical modeling of tendency of DNA to denaturation. J. Biol. Phys. Chem., 11, 51-53 (2011).&lt;br /&gt; &lt;br /&gt; &lt;br /&gt; == Foreseen Activities ==&lt;br /&gt; &lt;br /&gt; * The DNA tendency to denaturation is caused by the elevation of the ethanol concentration in the environment. The proton transfer between nucleobases of DNA (Adenine-Thymine, Guanine-Cytosine) causes rare tautomeric transformations of the nucleobase pair, which in turn increase both the probability of denaturation and the frequency of mutation. In the next runs we will increase the number of nucleobases (from 4 to 8) in the molecular structures. &lt;br /&gt; * Preparation of the publication &quot;Solvatochromic effect for the denaturation and mutation processes in DNA. Computational study&quot;.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DNAMA DNAMA 2013-03-06T13:28:43Z <p>Lifesci: /* Foreseen Activities */</p> <hr /> <div>== General Information ==&lt;br /&gt; &lt;br /&gt; * Application's name: ''DNA Multicore Analysis''&lt;br /&gt; * Application's acronym: ''DNAMA''&lt;br /&gt; * Virtual Research Community: ''Life Sciences''&lt;br /&gt; * Scientific contact: ''Danilo Mrdak, danilomrdak@gmail.com''&lt;br /&gt; * Technical contact: ''Luka Filipovic, lukaf@ac.me''&lt;br /&gt; * Developers: ''Center of Information System &amp; Faculty of Natural Sciences - University of Montenegro''&lt;br /&gt; * Web site: http://wiki.hp-see.eu/index.php/DNAMA&lt;br /&gt; &lt;br /&gt; == Short Description ==&lt;br /&gt; &lt;br /&gt; Using a networked computer cluster with supercomputer-class performance for DNA sequence analysis will give us unlimited potential for DNA research, both in terms of the number of analyzed sequences and of the time needed for an analysis to be carried out. As much of the DNA comparison and analysis software uses Monte Carlo and Markov chain algorithms that are time consuming with respect to the number of sequences, a supercomputing resource will speed up our work and make robust and comprehensive analyses possible. Using all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene…) will give us a more detailed insight into their relationships and phylogeny.&lt;br /&gt; &lt;br /&gt; The DNAMA application is based on the RAxML application from The Exelixis Lab.&lt;br /&gt; &lt;br /&gt; == Problems Solved ==&lt;br /&gt; &lt;br /&gt; The working resources that can be used through network computer clustering will allow us to put into the analysis as many samples as we wish, and those analyses will be finished in one to a few hours. Moreover, we will try to modify the algorithms in order to have multi-loci analysis and get a consensus tree that will suggest the most probable pathways of phylogeny with a much higher level of confidence.&lt;br /&gt; &lt;br /&gt; == Scientific and Social Impact ==&lt;br /&gt; &lt;br /&gt; Use of a networked computer cluster with supercomputer-class performance for DNA sequence comparison analysis will give unlimited potential for DNA research. Use of all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene…) will give a more detailed insight into their relationships and phylogeny relationships (RAxML). &lt;br /&gt; &lt;br /&gt; Problems solved : Analyze as many samples as possible within a few hours.
Modification of the algorithms in order to have multi-loci analysis and get a consensus tree that will suggest the most probable pathways of phylogeny with a much higher level of confidence.&lt;br /&gt; &lt;br /&gt; Impact : Access to reliable computing resources is one of the main obstacles to making scientific breakthroughs in the field of Molecular Biology and Phylogeny (Evolution). HPC allows for faster and more reliable results. Enhancement of competitiveness in terms of regional and European collaboration. Drawing the attention of national stakeholders to the future building of Montenegro as a &quot;society of knowledge&quot;.&lt;br /&gt; &lt;br /&gt; == Collaborations ==&lt;br /&gt; &lt;br /&gt; * &lt;br /&gt; &lt;br /&gt; == Beneficiaries ==&lt;br /&gt; &lt;br /&gt; * University of Montenegro - Faculty of Natural Sciences - Biology Department&lt;br /&gt; &lt;br /&gt; == Number of users ==&lt;br /&gt; 15-20 from the Faculty of Natural Sciences, University of Montenegro&lt;br /&gt; &lt;br /&gt; == Development Plan ==&lt;br /&gt; &lt;br /&gt; * Concept: ''before the start of the project - finished by RAxML developers''&lt;br /&gt; * Start of alpha stage: ''before the start of the project''&lt;br /&gt; * Start of beta stage: ''09.2010''&lt;br /&gt; * Start of testing stage: ''03.2011''&lt;br /&gt; * Start of deployment stage: ''11.2011''&lt;br /&gt; * Start of production stage: ''11.2011''&lt;br /&gt; &lt;br /&gt; == Resource Requirements ==&lt;br /&gt; &lt;br /&gt; * Number of cores required for a single run: ''up to 512 cores''&lt;br /&gt; * Minimum RAM/core required: ''1 GB''&lt;br /&gt; * Storage space during a single run: ''256 MB''&lt;br /&gt; * Long-term data storage: ''1 GB''&lt;br /&gt; * Total core hours required: ''.''&lt;br /&gt; &lt;br /&gt; == Technical Features and HP-SEE Implementation ==&lt;br /&gt; &lt;br /&gt; * Primary programming language: ''C''&lt;br /&gt; * Parallel programming paradigm: ''MPI, OpenMPI''&lt;br /&gt; * Main parallel code: ''MPI, OpenMPI''&lt;br /&gt; * Pre/post processing code: ''C, Dendroscope (for visualization of results)''&lt;br /&gt; * Application tools and libraries: ''RAxML''&lt;br /&gt; &lt;br /&gt; == Usage Example ==&lt;br /&gt; &lt;br /&gt; Execution from command line :&lt;br /&gt; /opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8&lt;br /&gt; &lt;br /&gt;
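Batch execution (illustrative) :&lt;br /&gt; The same run can also be submitted through the site's batch system instead of being started interactively. The sketch below is only an illustration and assumes a PBS/Torque-style scheduler; the job name, node/core layout and walltime are placeholder values, while the mpiexec and RAxML paths and options are exactly those of the example above.&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -N dnama-raxml&lt;br /&gt;
#PBS -l nodes=16:ppn=8&lt;br /&gt;
#PBS -l walltime=02:00:00&lt;br /&gt;
# run from the directory the job was submitted from&lt;br /&gt;
cd $PBS_O_WORKDIR&lt;br /&gt;
# 128 MPI ranks, same RAxML options as in the interactive example above&lt;br /&gt;
/opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8&lt;br /&gt; &lt;br /&gt;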
== Infrastructure Usage ==&lt;br /&gt; &lt;br /&gt; * Home system: ''HPCG, Bulgaria''&lt;br /&gt; ** Applied for access on: ''05.2011''&lt;br /&gt; ** Access granted on: ''05.2011''&lt;br /&gt; ** Achieved scalability: ''up to 256 cores''&lt;br /&gt; * Accessed production systems:&lt;br /&gt; # ''Debrecen SC, NIIF''&lt;br /&gt; #* Applied for access on: ''02.2012''&lt;br /&gt; #* Access granted on: ''03.2012''&lt;br /&gt; #* Achieved scalability: ''240 cores''&lt;br /&gt; # ''Pecs SC, NIIF, HU''&lt;br /&gt; #* Applied for access on: ''02.2012''&lt;br /&gt; #* Access granted on: ''03.2012''&lt;br /&gt; #* Achieved scalability: ''... cores''&lt;br /&gt; * Porting activities: ''...''&lt;br /&gt; * Scalability studies: ''...''&lt;br /&gt; &lt;br /&gt; == Running on Several HP-SEE Centres ==&lt;br /&gt; &lt;br /&gt; * Benchmarking activities and results: ''.''&lt;br /&gt; * Other issues: ''.''&lt;br /&gt; &lt;br /&gt; == Achieved Results ==&lt;br /&gt; &lt;br /&gt; &lt;br /&gt; == Publications ==&lt;br /&gt; &lt;br /&gt; * Luka Filipović, Danilo Mrdak and Božo Krstajić, &quot;Performance evaluation of computational phylogeny software in parallel computing environment&quot;, ICT Innovations 2012&lt;br /&gt; &lt;br /&gt; == Foreseen Activities ==&lt;br /&gt; &lt;br /&gt; * Analysis of new DNA sequence sets&lt;br /&gt; * Multigene analysis and benchmarks on multigene datasets</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DiseaseGene DiseaseGene 2013-01-25T13:36:07Z <p>Lifesci: /* Scalability */</p> <hr /> <div>== General Information ==&lt;br /&gt; &lt;br /&gt; * Application's name: ''In-silico Disease Gene Mapper''&lt;br /&gt; * Application's acronym: ''DiseaseGene''&lt;br /&gt; * Virtual Research Community: ''Life Sciences''&lt;br /&gt; * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''&lt;br /&gt; * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''&lt;br /&gt; * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics'' &lt;br /&gt; * Web site: &lt;br /&gt; http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5&lt;br /&gt; http://ls-hpsee.nik.uni-obuda.hu&lt;br /&gt; &lt;br /&gt; == Short Description ==&lt;br /&gt; &lt;br /&gt; A complex data mining and data processing tool using large-scale external open-access databases. The aim of the task is to port a data mining tool to the SEE-HPC infrastructure, which can help researchers to do comparative analysis and target candidate genes for further research of polygene type diseases. The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With small modifications the application can be used to target human genes too. &lt;br /&gt; &lt;br /&gt; == Problems Solved ==&lt;br /&gt; &lt;br /&gt; The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With small modifications the application can be used to target human genes too. The Grid's reliability parameters and response times (1-5 min) are not suitable for such a service.&lt;br /&gt; &lt;br /&gt; == Scientific and Social Impact ==&lt;br /&gt; &lt;br /&gt; Researchers in the region will be able to target candidate genes for further research of polygene type diseases.&lt;br /&gt; Creation of a data mining service on the SEE-HPC infrastructure, which can help researchers to do comparative analysis.&lt;br /&gt; &lt;br /&gt; == Collaborations ==&lt;br /&gt; &lt;br /&gt; Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University&lt;br /&gt; &lt;br /&gt; == Beneficiaries ==&lt;br /&gt; &lt;br /&gt; People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community.
We estimate that a number of 2-5 scientific groups (5-15 researchers) worldwide will use our service.<br /> <br /> == Number of users ==<br /> 6<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M19 (delayed due to storage access issues)''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128 – 256''<br /> * Minimum RAM/core required: ''4 - 8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''5-10 TB''<br /> * Total core hours required: ''1 300 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C/C++'' <br /> * Parallel programming paradigm: ''Clustered multiprocessing (e.g. using MPI) + Multiple serial jobs (data-splitting, parametric studies)''<br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as gene mapper and sequence alignment applications) and indirectly enables end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets to make the usage of the portal more customized for their work. To support the development of such application-specific UIs we have used the Application Specific Module (ASM) API of gUSE, by which such customization can be done easily and quickly. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig. 
1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''8 cores''<br /> * Accessed production systems:<br /> # ''NIIF's infrastructure/HU''<br /> #* Applied for access on: ''09.2010''<br /> #* Access granted on: ''10.2010''<br /> #* Achieved scalability: ''16 cores --&gt; 96 cores''<br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 8, 16 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 8 cores, benchmarking was initiated for 16 and 96 cores; further scaling to a higher number of cores is planned.''<br /> * Other issues: ''There were painful (ARC) authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Further study for higher scaling is still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The BLAST database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, and the average of the 10 executions was taken as the final result.<br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures, and it is the job of the middleware to choose the appropriate one for each execution. For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6-core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total capacity: ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA)<br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total capacity: ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stacks:<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms has multiple MPI environments. We have tested our applications with multiple versions; there is usually one preferred implementation at each of the HPC centres, which we used.<br /> <br /> 
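Because each centre provides several MPI stacks, the benchmark runs were repeated under different environments. The sketch below shows how such a sweep might be scripted; it assumes an Environment Modules ("module") setup and uses module names from the list above, while the rank count and the benchmark command are placeholders rather than the project's actual submission scripts.<br /> 
<pre>
#!/bin/bash
# Illustrative sweep over several MPI environments (assumes the
# "module" command is available; names are examples from the list above).
for MPI in openmpi_gcc-1.4.3 openmpi_open64-1.6 mpt-2.04; do
    module purge
    module load "$MPI"
    echo "=== benchmarking with $MPI ==="
    # Placeholder benchmark job; the real runs used mpiBLAST on the 5.1 GB database.
    mpirun -np 64 ./benchmark_job > "times_${MPI}.log"
done
</pre>
<br /> 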
*Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the HPC centre used, but they were more or less stable, so we only include the results from the Budapest server. The graphs show the result of multiple executions of mpiBLAST on the same database with the same input sequence on the same computer, the only difference being the number of CPU cores allocated to the MPI job. Figure 1 shows the execution times measured by mpiBLAST. If executed on just one CPU, it takes 3376 seconds for the job to finish (about 56 minutes). As we can see, the application scales well: the execution times drop as more and more CPUs are added.<br /> The scalability is linear up to 128 cores.<br /> <br /> [[File:e1.png]]<br /> [[File:e2.png]]<br /> [[File:e3.png]]<br /> <br /> Further optimization<br /> The first task when using mpiBLAST is to split the BLAST database into multiple fragments. According to previous research, the number of database fragments has a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different numbers of fragments. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments is an integer multiple of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share some of the code base, which results in similar behavior. Both applications consist of three jobs in a WS-PGrade workflow, with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing them to the user. The current implementation of the preprocessing is serial; we investigated parallelizing it, but according to our profiling approximately 0.02% of the total execution time is spent in Job 1 in Deep Aligner, so parallelization would yield no real performance gain while it could cause problems, and we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2, which consists mainly of mpiBLAST. The profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Disease Gene Mapper portlet:<br /> Job1: 0.09%<br /> Job2: 99.90%<br /> Job3: 0.01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1.79%<br /> BLAST: 97.18%<br /> Write: 0.19%<br /> Other: 0.84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application. The results come from the maxvmem parameter of qacct:<br /> 1: 1.257<br /> 2: 2.112<br /> 4: 3.345<br /> 8: 4.131<br /> 16: 5.434<br /> 32: 6.012<br /> 48: 4.153<br /> 64: 8.745<br /> 96: 9.897<br /> 128: 12.465 <br /> <br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBLAST uses a pre-segmented database and each node has its own part in which it searches for the input sequence, so the communication overhead is very small. 
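<br /> <br /> Both the memory figures above and the I/O figures below come from the batch system's accounting records. A minimal sketch of extracting them is shown here; it assumes a Grid Engine style scheduler whose qacct tool reports the maxvmem and io attributes, and the job-ID handling is purely illustrative.<br /> 
<pre>
#!/bin/bash
# Illustrative extraction of per-job accounting data with qacct (Grid Engine).
# The job ID is passed as the first argument.
JOBID=$1

# maxvmem: peak (virtual) memory used by the job; io: amount of data transferred.
qacct -j "$JOBID" | awk '
    $1 == "maxvmem" { print "maxvmem:", $2 }
    $1 == "io"      { print "io:",      $2 }
'
</pre>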
<br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct:<br /> 1: 0.001<br /> 2: 0.001<br /> 4: 0.002<br /> 8: 0.003<br /> 16: 0.004<br /> 32: 0.011<br /> 48: 0.016<br /> 64: 0.019<br /> 96: 0.027<br /> 128: 0.029 <br /> <br /> As the table above shows, I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests, we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected. The number of database fragments plays a significant role in the whole application, and the best result is obtained when that number is equal to or an integer multiple of the number of cores. We have also noted that, because of the high utilization of the supercomputing centres, real-life performance – wall-clock time measured from the initialization of the job until the results are provided – can be better with a smaller number of cores, because small jobs tend to be scheduled more easily and earlier.<br /> <br /> == Achieved Results ==<br /> In-silico Disease Gene Mapper was tested successfully with some polygene diseases (e.g. asthma). So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * G. Windisch, M. Kozlovszky, Á. Balaskó; Performance and scalability evaluation of short fragment sequence alignment applications; HP-SEE User Forum 2012<br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> More scientific publications about the porting of the data mining tool (showing results of some comparative data analysis targeting polygene type diseases).</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DeepAligner DeepAligner 2013-01-25T13:34:57Z <p>Lifesci: /* Scalability */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Deep sequencing for short fragment alignment''<br /> * Application's acronym: ''DeepAligner''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Mapping short fragment reads to open-access eukaryotic genomes is solvable with BLAST, BWA and other sequence alignment tools – BLAST is one of the most frequently used tools in bioinformatics, and BWA is a relatively new, fast, lightweight tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem sizes, therefore the procedure runs slowly, while web-based implementations cannot accept a high number of queries. The SEE-HPC infrastructure allows access to massively parallel architectures, and the sequence alignment code is distributed free for academia. 
Due to the response time and service reliability requirements, the Grid cannot be an option for the DeepAligner application.<br /> <br /> == Problems Solved ==<br /> <br /> The recently adopted deep sequencing techniques present a new data processing challenge: mapping short fragment reads to open-access eukaryotic (animal: focusing on mouse and rat) genomes at the scale of several hundred thousand reads.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The aim of the task is threefold: first, to port the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure; second, to create a BLAST/BWA service capable of serving the short fragment sequence alignment demand of the regional bioinformatics communities; and third, to do sequence analysis with high-throughput short fragment sequence alignments against the eukaryotic genomes in order to search for regulatory mechanisms controlled by short fragments.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> Planned collaboration with the MoSGrid consortium (D-GRID based project, Germany) <br /> <br /> == Beneficiaries ==<br /> <br /> Serve the short fragment sequence alignment demand of the regional bioinformatics communities.<br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that a number of 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, C++''<br /> * Parallel programming paradigm: ''Master-slave MPI + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as BLAST, BWA or gene mapper applications) and indirectly enables end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as a set of Web services. 
WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets to make the usage of the portal more customized for their work. To support the development of such application-specific UIs we have used the Application Specific Module (ASM) API of gUSE, by which such customization can be done easily and quickly. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Applications first need to be ported for use with gUSE/WS-PGRADE. Our porting methodology includes two main steps: workflow development and user-specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE uses a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing elements. Ports represent the input/output files the jobs receive or produce. Arcs between ports represent file transfer operations. gUSE supports Parameter Study type high-level parallelization. In the workflow, special Generator ports can be used to generate the input files for all parallel jobs automatically, while Collector jobs run after all parallel executions to collect all parallel outputs. During the BLAST porting, we have exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment requires parameter assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on, the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure input parameters like the “e” value or the number of MPI tasks, and they can submit the alignment into the DCI environment with arbitrarily large parameter fields.<br /> During the development of the workflow structure, we aimed to construct a workflow that is able to handle the main properties of the parallel BLAST application. To exploit the Parameter Study mechanism of gUSE, the workflow has been developed as a Parameter Study workflow using an auto-generator port (the second small box around the top left box in Fig. 5) and a collector job (the bottom right box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameters. Then the second job (the middle box in Fig. 5) is executed as many times as the input files specify. <br /> The last job of the workflow is a Collector, which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived at the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs have arrived at the Collector, it starts to process all the incoming input files as a single input set. Finally, output files are generated and stored on a Storage Element of the DCI, shown as a little box around the Collector. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic blast workflow]]<br /> <br /> Due to the strict HPC security constraints, end users must possess a valid certificate to utilize the HP-SEE Bioinformatics eScience Gateway. Users can seamlessly utilize the developed workflows on ARC-based infrastructures (like NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI-based infrastructures (Service Grids like SEE-GRID-SCI or SHIWA). After login, users should create their own workflow-based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> 
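Outside of gUSE, the same generator-worker-collector pattern can be illustrated with a plain shell sketch: a generator splits the query set into chunks, one alignment job runs per chunk, and a collector merges the outputs. This is only a conceptual illustration of the Parameter Study idea described above, not the actual WS-PGRADE workflow; the file names, chunk size and mpirun/mpiBLAST invocation are assumptions.<br /> 
<pre>
#!/bin/bash
# Conceptual shell equivalent of the generator / worker / collector workflow
# described above (not the actual gUSE implementation). Assumes the query
# file is multi-FASTA and starts with a ">" header line.
QUERIES=queries.fas        # multi-FASTA query set (placeholder name)
CHUNK=100                  # sequences per generated input file (assumed)

# Generator: split the query set into numbered chunk files.
awk -v n="$CHUNK" '/^>/ { if (c % n == 0) f = sprintf("chunk_%04d.fas", c / n); c++ }
                   { print > f }' "$QUERIES"

# Worker: one alignment job per chunk (each would be a parallel DCI job).
for f in chunk_*.fas; do
    mpirun -np 64 mpiblast -p blastn -d eukaryotic_db -i "$f" -o "${f%.fas}.out"
done

# Collector: merge the per-chunk outputs into a single result file.
cat chunk_*.out > all_results.out
</pre>
<br /> 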
== Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes, 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 32 cores, benchmarking was initiated for 59 and 96 cores.''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study of higher scaling are still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The BLAST database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, and the average of the 10 executions was taken as the final result.<br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures, and it is the job of the middleware to choose the appropriate one for each execution. 
For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6-core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total capacity: ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA)<br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total capacity: ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stacks:<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms has multiple MPI environments. We have tested our applications with multiple versions; there is usually one preferred implementation at each of the HPC centres, which we used.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the HPC centre used, but they were more or less stable, so we only include the results from the Budapest server. The graphs show the result of multiple executions of mpiBLAST on the same database with the same input sequence on the same computer, the only difference being the number of CPU cores allocated to the MPI job. Figure 1 shows the execution times measured by mpiBLAST. If executed on just one CPU, it takes 3376 seconds for the job to finish (about 56 minutes). As we can see, the application scales well: the execution times drop as more and more CPUs are added.<br /> The scalability is linear up to 128 cores.<br /> <br /> [[File:e1.png]]<br /> [[File:e2.png]]<br /> [[File:e3.png]]<br /> <br /> Further optimization<br /> The first task when using mpiBLAST is to split the BLAST database into multiple fragments. According to previous research, the number of database fragments has a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different numbers of fragments. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments is an integer multiple of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> 
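The fragment-count rule above can be applied when the database is pre-segmented. The sketch below shows how such a fragmentation step might look with mpiformatdb, choosing the number of fragments as an integer multiple of the planned core count; the database name, core count, multiplier and the exact option spelling are assumptions used only for illustration and should be checked against the local mpiBLAST installation.<br /> 
<pre>
#!/bin/bash
# Illustrative pre-segmentation of the BLAST database so that the number of
# fragments is an integer multiple of the number of CPU cores.
DB=eukaryotic_db.fasta     # FASTA database to be fragmented (placeholder)
CORES=64                   # cores planned for the mpiBLAST run (assumed)
MULTIPLE=2                 # fragments per core (assumed)

NFRAGS=$((CORES * MULTIPLE))

# mpiformatdb wraps NCBI formatdb and splits the database into NFRAGS pieces;
# -p F marks a nucleotide database (option names as in the mpiBLAST docs).
mpiformatdb --nfrags="$NFRAGS" -i "$DB" -p F
</pre>
<br /> 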
*Profiling<br /> The two applications we have created share some of the code base, which results in similar behavior. Both applications consist of three jobs in a WS-PGrade workflow, with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing them to the user. The current implementation of the preprocessing is serial; we investigated parallelizing it, but according to our profiling approximately 0.02% of the total execution time is spent in Job 1 in Deep Aligner, so parallelization would yield no real performance gain while it could cause problems, and we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2, which consists mainly of mpiBLAST. The profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Deep Aligner portlet:<br /> Job1: 0.02%<br /> Job2: 99.97%<br /> Job3: 0.01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1.79%<br /> BLAST: 97.18%<br /> Write: 0.19%<br /> Other: 0.84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application. The results come from the maxvmem parameter of qacct:<br /> 1: 1.257<br /> 2: 2.112<br /> 4: 3.345<br /> 8: 4.131<br /> 16: 5.434<br /> 32: 6.012<br /> 48: 4.153<br /> 64: 8.745<br /> 96: 9.897<br /> 128: 12.465 <br /> <br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBLAST uses a pre-segmented database and each node has its own part in which it searches for the input sequence, so the communication overhead is very small. <br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct:<br /> 1: 0.001<br /> 2: 0.001<br /> 4: 0.002<br /> 8: 0.003<br /> 16: 0.004<br /> 32: 0.011<br /> 48: 0.016<br /> 64: 0.019<br /> 96: 0.027<br /> 128: 0.029 <br /> <br /> As the table above shows, I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests, we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected. The number of database fragments plays a significant role in the whole application, and the best result is obtained when that number is equal to or an integer multiple of the number of cores. We have also noted that, because of the high utilization of the supercomputing centres, real-life performance – wall-clock time measured from the initialization of the job until the results are provided – can be better with a smaller number of cores, because small jobs tend to be scheduled more easily and earlier.<br /> <br /> == Achieved Results ==<br /> The DeepAligner application was tested successfully with parallel short DNA sequence searches. So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * G. Windisch, M. Kozlovszky, Á. Balaskó; Performance and scalability evaluation of short fragment sequence alignment applications; HP-SEE User Forum 2012<br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G. 
Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignment optimisation of the GUI, more scientific publications about short sequence alignment.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:E3.png File:E3.png 2013-01-25T13:34:07Z <p>Lifesci: </p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:E2.png File:E2.png 2013-01-25T13:33:58Z <p>Lifesci: </p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:E1.png File:E1.png 2013-01-25T13:33:34Z <p>Lifesci: </p> <hr /> <div></div>
We estimate that a number of 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 Gb''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C,C++''<br /> * Parallel programming paradigm: ''Master-slave, MPI, + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides unified GUI of different bioinformatics applications (such as BLAST, BWA, or gene mapper applications) and enables end-user access indirectly to some open European bioinformatics databases. gUSE is basically a virtualization environment providing large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All part of gUSE is implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE specific Web service calls. Our bioinformaticians need application specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application specific UI we have used the Application Specific Module (ASM) API of the gUSE by which such customization can easily and quickly be done. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168 compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Normal applications need to be firstly ported for use with gUSE/WS-PGRADE. Our used porting methodology includes two main steps: workflow development and user specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE is using a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing element. Ports represent input/output files the jobs receiving or producing. Arcs between ports represent file transfer operations. gUSE supports Parameter Study type high level parallelization. 
In the workflow special Generator ports can be used to generate the input files for all parallel jobs automatically while Collector jobs can run after all parallel execution to collect all parallel outputs. During the BLAST porting, we have exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment needs to have parameter assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure the input parameter like the “e” value or the number of MPI tasks and they can submit the alignment into the DCI environment with arbitrary large parameter fields.<br /> During the development of the workflow structure, we have aimed to construct a workflow that will be able to handle the main properties of the parallel BLAST application. To exploit the mechanism of Parameter Study used by gUSE the workflow has developed as a Parameter Study workflow with usage of autogenerator port (second small box around left top box in Fig 5.) and collector job (right bottom box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameter. Then the second job (middle box in Fig. 5) will be executed as many times as the input files specify. <br /> The last job of the workflow is a Collector which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived to the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs arrived to the Collector it starts to process all the incoming inputs files as a single input set. Finally output files will be generated, and will be stored on a Storage Element of the DCI shown as little box around the Collector in. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic blast workflow]]<br /> <br /> Due to the strict HPC security constraints, end users should posses valid certificate to utilize the HP-SEE Bioinformatics eScience Gateway. Users can utilize seamlessly the developed workflows on ARC based infrastructure (like the NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI based infrastructure (Service Grids like SEE-GRID-SCI, or SHIWA). 
After login, the users should create their own workflow based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported,the core workflow was successfully created, the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''At initial phase the application was benchmarkedand optimized on the OE's cluster. After successfull deployment on 32 cores benchmaring was initiated for 59 and 96 cores''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study for higher scaling is still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The blast database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, the average of the 10 executions was taken as the final result <br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures and it is the job of the middleware to choose the appropriate for each execution. For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total Capacity ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA) <br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB memory<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total Capacity ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stack<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms have multiple MPI environments. We have tested our applications with multiple versions. 
There usually is one specific preferred at each of the HPC centers which we preferred using.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the hpc ceter used, but they were more or less stable so we only include the results from the Budapest server. The following graphs show the result of multiple executions of mpiBlast on the same database with the same input sequence on the same computer. The only difference being the number of CPU cores allocated to the MPI job . Figure 1 shows the execution times measured by mpiBlast. If executed on just one CPU it takes 3376 seconds for the job to finish (about 53 minutes). As we can see the applications scales well, the execution times drop when we add more and more CPUs.<br /> The scalability is linear until 128 cores.<br /> <br /> Further optimization<br /> The first task when using mpiBlast is to split the blast database into multiple fragments. According to previous research, the number of database fragments have a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different sizes. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments are integer multiples of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share some of the code base which results in a similar behavior. Both applications consist of three jobs in a WS-PGrade workflow with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing it to the user. The current implementation for the preprocessing is serial, we have investigated parallelizing but according to our profiling approximately 0.02 % of the total execution time is spent on Job 1 in Deep Aligner, so yields no real performance gain but can cause problems so we voted againts it. Job3 is 0.01% - most of the work is done in Job2. Job2 consists mainly of mpiBlast, the profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Deep Aligner portlet.<br /> Job1: 0,02%<br /> Job2: 99,97%<br /> Job3: 0,01%<br /> <br /> <br /> Execution time ratio inside Job2<br /> Init: 1,79%<br /> BLAST: 97,18%<br /> Write: 0,19%<br /> Other: 0,84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application. The results come from the maxvmem parameter of qacct:<br /> 1: 1,257<br /> 2: 2,112<br /> 4: 3,345<br /> 8: 4,131<br /> 16: 5,434<br /> 32: 6,012<br /> 48: 4,153<br /> 64: 8,745<br /> 96: 9,897<br /> 128:12,465 <br /> <br /> <br /> As we can see the memory consumtion (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBlast uses a pre-segmented database and each node have their own part where it searches for the input sequence so the communication overhead is very small. 
<br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct:<br /> 1: 0,001<br /> 2: 0,001<br /> 4: 0,002<br /> 8: 0,003<br /> 16: 0,004<br /> 32: 0,011<br /> 48: 0,016<br /> 64: 0,019<br /> 96: 0,027<br /> 128:0,029 <br /> <br /> As we can see on the previous table the I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests, we conclude that our application scales reasonably well up until about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure the performance figures are quite similar – the scalability results are within the same region as expected. The number of database fragments play a significant role in the whole application and the best result can be obtained when that number is equal to or is an integer multiple of the number of cores. We have also noted that because of the high utilization of the supercomputing centers real life performance – wall clock time measured from the initialization of the job until the results are provided – could be better when using a smaller number of cores because small jobs tend to get scheduled easier and earlier.<br /> <br /> <br /> == Achieved Results ==<br /> The DeepAligner application was tested with parallel short DNA sequence searches successfully. So far publications are targeting mainly the porting of the application, publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * G. Windisch, M. Kozlovszky, Á. Balaskó;Performance and scalability evaluation of short fragment sequence alignment applications;HPSEE User Forum 2012<br /> * M. Kozlovszky, G. Windisch, Á. Balaskó;Short fragment sequence alignment on the HP-SEE infrastructure;MIPRO 2012<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignements optimisation of the GUI, more scientific publications about short sequence alignment.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DiseaseGene DiseaseGene 2013-01-25T13:31:19Z <p>Lifesci: /* Publications */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''In-silico Disease Gene Mapper''<br /> * Application's acronym: ''DiseaseGene''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics <br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Complex data mining and data processing tool using large-scale external open-access databases. The aim of the task is to port a data mining tool to the SEE-HPC infrastructure, which can help researchers to do comparative analysis and target candidate genes for further research of polygene type diseases. The implemented solution is capable to target candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and search for unexplored potential target genes. 
With small modification the application is useful to target human genes too. <br /> <br /> == Problems Solved ==<br /> <br /> The implemented solution is capable to target candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and search for unexplored potential target genes. With small modification the application is useful to target human genes too. Grid's reliability parameters and response time (1-5 min) is not suitable for such service.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Researchers in the region will be able to target candidate genes for further research of polygene type diseases.<br /> Create a data mining a service to the SEE-HPC infrastructure, which can help researchers to do comparative analysis.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> <br /> == Beneficiaries ==<br /> <br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that a number of 2-5 scientific groups (5-15 researchers) world wide will use our service.<br /> <br /> == Number of users ==<br /> 6<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M19 (delayed for storage access issues)''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128 – 256''<br /> * Minimum RAM/core required: ''4 - 8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''5-10TB''<br /> * Total core hours required: ''1 300 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C/C++'' <br /> * Parallel programming paradigm: ''Clustered multiprocessing (ex. using MPI) + Multiple serial jobs (data-splitting, parametric studies)''<br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides unified GUI of different bioinformatics applications (such as a gene mapper applications and sequence alignment applications) and enables end-user access indirectly to some open European bioinformatics databases. gUSE is basically a virtualization environment providing large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All part of gUSE is implemented as a set of Web services. 
WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE specific Web service calls. Our bioinformaticians need application specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application specific UI we have used the Application Specific Module (ASM) API of the gUSE by which such customization can easily and quickly be done. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168 compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''8 cores''<br /> * Accessed production systems:<br /> # ''NIIF's infrastructure/HU''<br /> #* Applied for access on: ''09.2010''<br /> #* Access granted on: ''10.2010''<br /> #* Achieved scalability: ''16 cores--&gt;96 cores''<br /> * Porting activities: ''The application has been successfully ported,the core workflow was successfully created, the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 8 and 16 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''At initial phase the application was benchmarkedand optimized on the OE's cluster. After successfull deployment on 8 cores benchmaring was initiated for 16 and 96 cores, further scaling is planned to higher number of cores. ''<br /> * Other issues: ''There were painful (ARC) authentication problems/access issues and with the supercomputing infrastructure's local storage during porting. Further study for higher scaling is still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The blast database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, the average of the 10 executions was taken as the final result <br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures and it is the job of the middleware to choose the appropriate for each execution. 
For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total Capacity ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA) <br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB memory<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total Capacity ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stack<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms have multiple MPI environments. We have tested our applications with multiple versions. There usually is one specific preferred at each of the HPC centers which we preferred using.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the hpc ceter used, but they were more or less stable so we only include the results from the Budapest server. The following graphs show the result of multiple executions of mpiBlast on the same database with the same input sequence on the same computer. The only difference being the number of CPU cores allocated to the MPI job . Figure 1 shows the execution times measured by mpiBlast. If executed on just one CPU it takes 3376 seconds for the job to finish (about 53 minutes). As we can see the applications scales well, the execution times drop when we add more and more CPUs.<br /> The scalability is linear until 128 cores.<br /> <br /> Further optimization<br /> The first task when using mpiBlast is to split the blast database into multiple fragments. According to previous research, the number of database fragments have a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different sizes. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments are integer multiples of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share some of the code base which results in a similar behavior. Both applications consist of three jobs in a WS-PGrade workflow with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing it to the user. 
The current implementation of the preprocessing is serial. We investigated parallelizing it, but according to our profiling only about 0.02% of the total execution time is spent in Job 1 of Deep Aligner, so parallelization would yield no real performance gain while it could introduce problems, so we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2. Job 2 consists mainly of mpiBlast; the profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Deep Aligner portlet:<br /> Job1: 0,02%<br /> Job2: 99,97%<br /> Job3: 0,01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1,79%<br /> BLAST: 97,18%<br /> Write: 0,19%<br /> Other: 0,84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application, taken from the maxvmem parameter of qacct (per core count):<br /> 1: 1,257<br /> 2: 2,112<br /> 4: 3,345<br /> 8: 4,131<br /> 16: 5,434<br /> 32: 6,012<br /> 48: 4,153<br /> 64: 8,745<br /> 96: 9,897<br /> 128: 12,465<br /> <br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBlast uses a pre-segmented database, and each node has its own part in which it searches for the input sequence, so the communication overhead is very small.<br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct (per core count):<br /> 1: 0,001<br /> 2: 0,001<br /> 4: 0,002<br /> 8: 0,003<br /> 16: 0,004<br /> 32: 0,011<br /> 48: 0,016<br /> 64: 0,019<br /> 96: 0,027<br /> 128: 0,029<br /> <br /> As the values above show, the I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected. The number of database fragments plays a significant role in the whole application, and the best result is obtained when that number is equal to, or an integer multiple of, the number of cores. We have also noted that, because of the high utilization of the supercomputing centers, real-life performance – wall-clock time measured from the initialization of the job until the results are provided – can be better when using a smaller number of cores, because small jobs tend to get scheduled earlier.<br /> <br /> == Achieved Results ==<br /> The In-silico Disease Gene Mapper was successfully tested with some polygene diseases (e.g. asthma). So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * G. Windisch, M. Kozlovszky, Á. Balaskó; Performance and scalability evaluation of short fragment sequence alignment applications; HP-SEE User Forum 2012<br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G.
Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> More scientific publications about the porting of the data mining tool (showing results of some comparative data analysis targeting polygene type diseases).</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DiseaseGene DiseaseGene 2013-01-25T13:28:58Z <p>Lifesci: </p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''In-silico Disease Gene Mapper''<br /> * Application's acronym: ''DiseaseGene''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> A complex data mining and data processing tool using large-scale external open-access databases. The aim of the task is to port a data mining tool to the SEE-HPC infrastructure which can help researchers to do comparative analysis and target candidate genes for further research of polygene type diseases. The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With small modifications the application can also target human genes. <br /> <br /> == Problems Solved ==<br /> <br /> The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With small modifications the application can also target human genes. The Grid's reliability parameters and response times (1-5 min) are not suitable for such a service.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Researchers in the region will be able to target candidate genes for further research of polygene type diseases.<br /> The application also creates a data mining service on the SEE-HPC infrastructure, which can help researchers to do comparative analysis.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> <br /> == Beneficiaries ==<br /> <br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community.
We estimate that 2-5 scientific groups (5-15 researchers) worldwide will use our service.<br /> <br /> == Number of users ==<br /> 6<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M19 (delayed due to storage access issues)''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128 – 256''<br /> * Minimum RAM/core required: ''4 - 8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''5-10 TB''<br /> * Total core hours required: ''1 300 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C/C++'' <br /> * Parallel programming paradigm: ''Clustered multiprocessing (e.g. using MPI) + Multiple serial jobs (data-splitting, parametric studies)''<br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as gene mapper and sequence alignment applications) and enables indirect end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets that tailor the portal to their work. To support the development of such application-specific UIs we used the Application Specific Module (ASM) API of gUSE, with which such customization can be done easily and quickly. The remaining features were taken over from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig.
1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''8 cores''<br /> * Accessed production systems:<br /> # ''NIIF's infrastructure/HU''<br /> #* Applied for access on: ''09.2010''<br /> #* Access granted on: ''10.2010''<br /> #* Achieved scalability: ''16 cores--&gt;96 cores''<br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 8, 16 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 8 cores, benchmarking was carried out on 16 and 96 cores; further scaling to a higher number of cores is planned.''<br /> * Other issues: ''During porting there were painful (ARC) authentication problems and access issues with the supercomputing infrastructure's local storage. Further study for higher scaling is still required.''<br /> <br /> == Scalability ==<br /> '''Benchmark dataset'''<br /> The BLAST database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, and the average of the 10 executions was taken as the final result.<br /> '''Hardware platforms'''<br /> A number of hardware platforms have been used for testing the applications. The portlet we have developed is connected to all these different HPC infrastructures, and it is the job of the middleware to choose the appropriate one for each execution. For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total capacity: ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA)<br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total capacity: ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> '''Software platforms'''<br /> The applications were tested using multiple software stacks:<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the hardware platforms has multiple MPI environments. We tested our applications with several versions.
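How a particular MPI stack is selected differs per center; on module-based systems it can look roughly like the sketch below. The module names simply mirror the versions listed above and are assumptions, not the exact names used at the centers.<br />
<pre>
# Illustrative only: picking one of the site-provided MPI stacks with environment
# modules before running the benchmark (module names are assumed).
module avail 2>&1 | grep -i mpi          # list MPI stacks installed at the center
module load openmpi_gcc-1.4.3            # assumed module name
mpirun -np 64 mpiblast -p blastn -d reference_db -i input.fasta -o out.txt
</pre>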
Each HPC center usually has one preferred MPI implementation, and that is the one we used.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the HPC center used, but they were more or less stable, so we only include the results from the Budapest system. The graphs show the results of multiple executions of mpiBlast on the same database with the same input sequence on the same computer; the only difference is the number of CPU cores allocated to the MPI job. Figure 1 shows the execution times measured by mpiBlast. Executed on just one CPU core, the job takes 3376 seconds to finish (about 56 minutes). As we can see, the application scales well: the execution times drop as more CPU cores are added.<br /> The scalability is linear up to 128 cores.<br /> <br /> '''Further optimization'''<br /> The first task when using mpiBlast is to split the BLAST database into multiple fragments. According to previous research, the number of database fragments has a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different numbers of fragments. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments is an integer multiple of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share part of the code base, which results in similar behavior. Both applications consist of three jobs in a WS-PGrade workflow, with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing them to the user. The current implementation of the preprocessing is serial. We investigated parallelizing it, but according to our profiling only about 0.02% of the total execution time is spent in Job 1 of Deep Aligner, so parallelization would yield no real performance gain while it could introduce problems, so we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2. Job 2 consists mainly of mpiBlast; the profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Deep Aligner portlet:<br /> Job1: 0,02%<br /> Job2: 99,97%<br /> Job3: 0,01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1,79%<br /> BLAST: 97,18%<br /> Write: 0,19%<br /> Other: 0,84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application, taken from the maxvmem parameter of qacct (per core count):<br /> 1: 1,257<br /> 2: 2,112<br /> 4: 3,345<br /> 8: 4,131<br /> 16: 5,434<br /> 32: 6,012<br /> 48: 4,153<br /> 64: 8,745<br /> 96: 9,897<br /> 128: 12,465<br /> <br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBlast uses a pre-segmented database, and each node has its own part in which it searches for the input sequence, so the communication overhead is very small.
<br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct (per core count):<br /> 1: 0,001<br /> 2: 0,001<br /> 4: 0,002<br /> 8: 0,003<br /> 16: 0,004<br /> 32: 0,011<br /> 48: 0,016<br /> 64: 0,019<br /> 96: 0,027<br /> 128: 0,029<br /> <br /> As the values above show, the I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected. The number of database fragments plays a significant role in the whole application, and the best result is obtained when that number is equal to, or an integer multiple of, the number of cores. We have also noted that, because of the high utilization of the supercomputing centers, real-life performance – wall-clock time measured from the initialization of the job until the results are provided – can be better when using a smaller number of cores, because small jobs tend to get scheduled earlier.<br /> <br /> == Achieved Results ==<br /> The In-silico Disease Gene Mapper was successfully tested with some polygene diseases (e.g. asthma). So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> More scientific publications about the porting of the data mining tool (showing results of some comparative data analysis targeting polygene type diseases).</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DeepAligner DeepAligner 2013-01-25T13:27:00Z <p>Lifesci: </p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Deep sequencing for short fragment alignment''<br /> * Application's acronym: ''DeepAligner''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Mapping short fragment reads to open-access eukaryotic genomes can be done with BLAST, BWA and other sequence alignment tools. BLAST is one of the most frequently used tools in bioinformatics, and BWA is a relatively new, fast, lightweight tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem sizes, so the procedure runs slowly, while web-based implementations cannot accept a high number of queries. The SEE-HPC infrastructure allows access to massively parallel architectures, and the sequence alignment code is distributed free for academia.
Due to the response time and service reliability requirements, Grid cannot be an option for the DeepAligner application.<br /> <br /> == Problems Solved ==<br /> <br /> The recently introduced deep sequencing techniques present a new data processing challenge: mapping short fragment reads to open-access eukaryotic (animal: focusing on mouse and rat) genomes at the scale of several hundred thousand reads.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The aim of the task is threefold: first, to port the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure; second, to create a BLAST/BWA service capable of serving the short fragment sequence alignment demand of the regional bioinformatics communities; and third, to do sequence analysis with high-throughput short fragment sequence alignments against the eukaryotic genomes to search for regulatory mechanisms controlled by short fragments.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> Planned collaboration with the MoSGrid consortium (D-GRID based project, Germany) <br /> <br /> == Beneficiaries ==<br /> <br /> The service will serve the short fragment sequence alignment demand of the regional bioinformatics communities.<br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, C++''<br /> * Parallel programming paradigm: ''Master-slave, MPI, + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as BLAST, BWA, or gene mapper applications) and enables indirect end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as Web services.
WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets that tailor the portal to their work. To support the development of such application-specific UIs we used the Application Specific Module (ASM) API of gUSE, with which such customization can be done easily and quickly. The remaining features were taken over from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Applications first need to be ported for use with gUSE/WS-PGRADE. Our porting methodology includes two main steps: workflow development and user-specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE uses a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing elements. Ports represent the input/output files the jobs receive or produce. Arcs between ports represent file transfer operations. gUSE supports Parameter Study (PS) type high-level parallelization. In the workflow, special Generator ports can be used to generate the input files for all parallel jobs automatically, while Collector jobs run after all parallel executions to collect all parallel outputs. During the BLAST porting we exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment requires the assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on, the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure the input parameters, such as the “e” value or the number of MPI tasks, and submit the alignment into the DCI environment with arbitrarily large parameter fields.<br /> During the development of the workflow structure, we aimed to construct a workflow able to handle the main properties of the parallel BLAST application. To exploit the Parameter Study mechanism of gUSE, the workflow has been developed as a Parameter Study workflow with an auto-generator port (second small box around the left top box in Fig. 5) and a collector job (right bottom box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameters. The second job (middle box in Fig. 5) is then executed as many times as the input files specify. <br /> The last job of the workflow is a Collector, which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived at the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs have arrived at the Collector, it starts to process all the incoming input files as a single input set.
Finally, the output files are generated and stored on a Storage Element of the DCI, shown as the little box around the Collector in Fig. 5. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic blast workflow]]<br /> <br /> Due to the strict HPC security constraints, end users should possess a valid certificate to utilize the HP-SEE Bioinformatics eScience Gateway. Users can seamlessly utilize the developed workflows on ARC-based infrastructure (like NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI-based infrastructure (Service Grids like SEE-GRID-SCI or SHIWA). After login, users should create their own workflow-based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes, 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 32 cores, benchmarking was carried out on 59 and 96 cores.''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study for higher scaling are still required.''<br /> <br /> == Scalability ==<br /> '''Benchmark dataset'''<br /> The BLAST database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, and the average of the 10 executions was taken as the final result.<br /> '''Hardware platforms'''<br /> A number of hardware platforms have been used for testing the applications. The portlet we have developed is connected to all these different HPC infrastructures, and it is the job of the middleware to choose the appropriate one for each execution.
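When a specific site has to be targeted instead of letting the middleware decide (as was done for the benchmarks below), submission through the ARC client can look roughly as follows. The host name, file names and xRSL attributes are illustrative assumptions, not the project's actual job descriptions.<br />
<pre>
# Illustrative only: submitting the alignment job to a chosen ARC-based site.
cat > blast_job.xrsl <<'EOF'
&(executable="run_blast.sh")
 (jobName="deepaligner-benchmark")
 (count=64)
 (inputFiles=("run_blast.sh" "") ("input.fasta" ""))
 (outputFiles=("out.txt" ""))
EOF
arcproxy                                     # create a proxy from the user certificate
arcsub -c example-ce.niif.hu blast_job.xrsl  # assumed computing element host name
arcstat -a                                   # follow the job status
</pre>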
For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total capacity: ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA)<br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blades<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total capacity: ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> '''Software platforms'''<br /> The applications were tested using multiple software stacks:<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the hardware platforms has multiple MPI environments. We tested our applications with several versions; each HPC center usually has one preferred implementation, which we used.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the HPC center used, but they were more or less stable, so we only include the results from the Budapest system. The graphs show the results of multiple executions of mpiBlast on the same database with the same input sequence on the same computer; the only difference is the number of CPU cores allocated to the MPI job. Figure 1 shows the execution times measured by mpiBlast. Executed on just one CPU core, the job takes 3376 seconds to finish (about 56 minutes). As we can see, the application scales well: the execution times drop as more CPU cores are added.<br /> The scalability is linear up to 128 cores.<br /> <br /> '''Further optimization'''<br /> The first task when using mpiBlast is to split the BLAST database into multiple fragments (a command-level sketch is given below). According to previous research, the number of database fragments has a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different numbers of fragments. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments is an integer multiple of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share part of the code base, which results in similar behavior. Both applications consist of three jobs in a WS-PGrade workflow, with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing them to the user.
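Returning to the database fragmentation discussed above, a minimal command sketch is shown below. It assumes the usual mpiBLAST practice of pre-formatting the database with mpiformatdb; the option names, fragment count and file names are assumptions about the local installation rather than the exact commands used.<br />
<pre>
# Illustrative only: pre-fragment the BLAST database so that the number of
# fragments (128) is an integer multiple of the planned core count (64).
mpiformatdb -N 128 -i reference_db.fasta -p F   # -p F: nucleotide database (assumed options)
mpirun -np 64 mpiblast -p blastn -d reference_db.fasta -i input.fasta -o out.txt
</pre>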
The current implementation of the preprocessing is serial. We investigated parallelizing it, but according to our profiling only about 0.02% of the total execution time is spent in Job 1 of Deep Aligner, so parallelization would yield no real performance gain while it could introduce problems, so we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2. Job 2 consists mainly of mpiBlast; the profiling shows the following results.<br /> <br /> Execution time ratio of the jobs in the whole Deep Aligner portlet:<br /> Job1: 0,02%<br /> Job2: 99,97%<br /> Job3: 0,01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1,79%<br /> BLAST: 97,18%<br /> Write: 0,19%<br /> Other: 0,84%<br /> <br /> <br /> *Memory<br /> Memory usage while executing the application, taken from the maxvmem parameter of qacct (per core count):<br /> 1: 1,257<br /> 2: 2,112<br /> 4: 3,345<br /> 8: 4,131<br /> 16: 5,434<br /> 32: 6,012<br /> 48: 4,153<br /> 64: 8,745<br /> 96: 9,897<br /> 128: 12,465<br /> <br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBlast uses a pre-segmented database, and each node has its own part in which it searches for the input sequence, so the communication overhead is very small. <br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct (per core count):<br /> 1: 0,001<br /> 2: 0,001<br /> 4: 0,002<br /> 8: 0,003<br /> 16: 0,004<br /> 32: 0,011<br /> 48: 0,016<br /> 64: 0,019<br /> 96: 0,027<br /> 128: 0,029<br /> <br /> As the values above show, the I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected. The number of database fragments plays a significant role in the whole application, and the best result is obtained when that number is equal to, or an integer multiple of, the number of cores. We have also noted that, because of the high utilization of the supercomputing centers, real-life performance – wall-clock time measured from the initialization of the job until the results are provided – can be better when using a smaller number of cores, because small jobs tend to get scheduled earlier.<br /> <br /> <br /> == Achieved Results ==<br /> The DeepAligner application was successfully tested with parallel short DNA sequence searches. So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> <br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G.
Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignment optimisation of the GUI, and more scientific publications about short sequence alignment.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DeepAligner DeepAligner 2013-01-25T13:21:56Z <p>Lifesci: </p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Deep sequencing for short fragment alignment''<br /> * Application's acronym: ''DeepAligner''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Mapping short fragment reads to open-access eukaryotic genomes can be done with BLAST, BWA and other sequence alignment tools. BLAST is one of the most frequently used tools in bioinformatics, and BWA is a relatively new, fast, lightweight tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem sizes, so the procedure runs slowly, while web-based implementations cannot accept a high number of queries. The SEE-HPC infrastructure allows access to massively parallel architectures, and the sequence alignment code is distributed free for academia. Due to the response time and service reliability requirements, Grid cannot be an option for the DeepAligner application.<br /> <br /> == Problems Solved ==<br /> <br /> The recently introduced deep sequencing techniques present a new data processing challenge: mapping short fragment reads to open-access eukaryotic (animal: focusing on mouse and rat) genomes at the scale of several hundred thousand reads.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The aim of the task is threefold: first, to port the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure; second, to create a BLAST/BWA service capable of serving the short fragment sequence alignment demand of the regional bioinformatics communities; and third, to do sequence analysis with high-throughput short fragment sequence alignments against the eukaryotic genomes to search for regulatory mechanisms controlled by short fragments.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> Planned collaboration with the MoSGrid consortium (D-GRID based project, Germany) <br /> <br /> == Beneficiaries ==<br /> <br /> The service will serve the short fragment sequence alignment demand of the regional bioinformatics communities.<br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community.
We estimate that a number of 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 Gb''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C,C++''<br /> * Parallel programming paradigm: ''Master-slave, MPI, + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides unified GUI of different bioinformatics applications (such as BLAST, BWA, or gene mapper applications) and enables end-user access indirectly to some open European bioinformatics databases. gUSE is basically a virtualization environment providing large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All part of gUSE is implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE specific Web service calls. Our bioinformaticians need application specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application specific UI we have used the Application Specific Module (ASM) API of the gUSE by which such customization can easily and quickly be done. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168 compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Normal applications need to be firstly ported for use with gUSE/WS-PGRADE. Our used porting methodology includes two main steps: workflow development and user specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE is using a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing element. Ports represent input/output files the jobs receiving or producing. Arcs between ports represent file transfer operations. gUSE supports Parameter Study type high level parallelization. 
In the workflow special Generator ports can be used to generate the input files for all parallel jobs automatically while Collector jobs can run after all parallel execution to collect all parallel outputs. During the BLAST porting, we have exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment needs to have parameter assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure the input parameter like the “e” value or the number of MPI tasks and they can submit the alignment into the DCI environment with arbitrary large parameter fields.<br /> During the development of the workflow structure, we have aimed to construct a workflow that will be able to handle the main properties of the parallel BLAST application. To exploit the mechanism of Parameter Study used by gUSE the workflow has developed as a Parameter Study workflow with usage of autogenerator port (second small box around left top box in Fig 5.) and collector job (right bottom box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameter. Then the second job (middle box in Fig. 5) will be executed as many times as the input files specify. <br /> The last job of the workflow is a Collector which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived to the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs arrived to the Collector it starts to process all the incoming inputs files as a single input set. Finally output files will be generated, and will be stored on a Storage Element of the DCI shown as little box around the Collector in. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic blast workflow]]<br /> <br /> Due to the strict HPC security constraints, end users should posses valid certificate to utilize the HP-SEE Bioinformatics eScience Gateway. Users can utilize seamlessly the developed workflows on ARC based infrastructure (like the NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI based infrastructure (Service Grids like SEE-GRID-SCI, or SHIWA). 
After login, the users should create their own workflow based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported,the core workflow was successfully created, the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''At initial phase the application was benchmarkedand optimized on the OE's cluster. After successfull deployment on 32 cores benchmaring was initiated for 59 and 96 cores''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study for higher scaling is still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The blast database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, the average of the 10 executions was taken as the final result <br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures and it is the job of the middleware to choose the appropriate for each execution. For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total Capacity ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA) <br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB memory<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total Capacity ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stack<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms have multiple MPI environments. We have tested our applications with multiple versions. 
Each HPC center usually has one preferred MPI implementation, and that is the one we used.<br /> <br /> *Execution times<br /> The following graphs show the results of the executions. The execution times varied a little depending on the HPC center used, but they were more or less stable, so we only include the results from the Budapest system. The graphs show the results of multiple executions of mpiBlast on the same database with the same input sequence on the same computer; the only difference is the number of CPU cores allocated to the MPI job. Figure 1 shows the execution times measured by mpiBlast. Executed on just one CPU core, the job takes 3376 seconds to finish (about 56 minutes). As we can see, the application scales well: the execution times drop as more CPU cores are added.<br /> The scalability is linear up to 128 cores.<br /> <br /> '''Further optimization'''<br /> The first task when using mpiBlast is to split the BLAST database into multiple fragments. According to previous research, the number of database fragments has a direct impact on the performance of the application. Finding an optimal number was essential, so our database was split into different numbers of fragments. Figure 4 shows the measured execution times. The measurements were executed on 64 cores.<br /> The execution times show that the application performs best when the number of DB segments is an integer multiple of the number of CPU cores. The reason is straightforward: this is the only way an even data distribution can be achieved amongst the cores.<br /> *Profiling<br /> The two applications we have created share part of the code base, which results in similar behavior. Both applications consist of three jobs in a WS-PGrade workflow, with job 1 being the preprocessor, job 2 doing the calculations and job 3 collecting the results and providing them to the user. The current implementation of the preprocessing is serial. We investigated parallelizing it, but according to our profiling only about 0.02% of the total execution time is spent in Job 1 of Deep Aligner, so parallelization would yield no real performance gain while it could introduce problems, so we decided against it. Job 3 accounts for 0.01%; most of the work is done in Job 2. Job 2 consists mainly of mpiBlast; the profiling shows the following results.<br /> <br /> Execution time ratio of the jobs:<br /> Job1: 0,02%<br /> Job2: 99,97%<br /> Job3: 0,01%<br /> <br /> <br /> Execution time ratio inside Job2:<br /> Init: 1,79%<br /> BLAST: 97,18%<br /> Write: 0,19%<br /> Other: 0,84%<br /> <br /> *Memory<br /> Memory usage (maxvmem from qacct) per core count:<br /> 1: 1,257<br /> 2: 2,112<br /> 4: 3,345<br /> 8: 4,131<br /> 16: 5,434<br /> 32: 6,012<br /> 48: 4,153<br /> 64: 8,745<br /> 96: 9,897<br /> 128: 12,465<br /> <br /> As we can see, the memory consumption (measured by qacct) increases as the number of cores is increased.<br /> <br /> *Communication<br /> mpiBlast uses a pre-segmented database, and each node has its own part in which it searches for the input sequence, so the communication overhead is very small. <br /> <br /> *I/O<br /> I/O as measured using the io parameter of qacct, per core count:<br /> 1: 0,001<br /> 2: 0,001<br /> 4: 0,002<br /> 8: 0,003<br /> 16: 0,004<br /> 32: 0,011<br /> 48: 0,016<br /> 64: 0,019<br /> 96: 0,027<br /> 128: 0,029<br /> <br /> As the values above show, the I/O use increases as we increase the number of CPU cores in the job.<br /> <br /> *Analysis<br /> From our tests we conclude that our application scales reasonably well up to about 128 cores. When the appropriate MPI implementation is used on the HPC infrastructure, the performance figures are quite similar – the scalability results are within the same region, as expected.
The number of database fragments play a significant role in the whole application and the best result can be obtained when that number is equal to or is an integer multiple of the number of cores. We have also noted that because of the high utilization of the supercomputing centers real life performance – wall clock time measured from the initialization of the job until the results are provided – could be better when using a smaller number of cores because small jobs tend to get scheduled easier and earlier.<br /> <br /> <br /> == Achieved Results ==<br /> The DeepAligner application was tested with parallel short DNA sequence searches successfully. So far publications are targeting mainly the porting of the application, publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> <br /> * M. Kozlovszky, G. Windisch, Á. Balaskó;Short fragment sequence alignment on the HP-SEE infrastructure;MIPRO 2012<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignements optimisation of the GUI, more scientific publications about short sequence alignment.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DeepAligner DeepAligner 2013-01-25T12:01:20Z <p>Lifesci: </p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Deep sequencing for short fragment alignment''<br /> * Application's acronym: ''DeepAligner''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Mapping short fragment reads to open-access eukaryotic genomes is solvable by BLAST and BWA and other sequence alignment tools - BLAST is one of the most frequently used tool in bioinformatics and BWA is a relative new fast light-weighted tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem size therefore the procedure runs slowly, while web based implementations cannot accept high number of queries. SEE-HPC infrastructure allows accessing massively parallel architectures and the sequence alignment code is distributed free for academia. 
Due to the response time and service reliability requirements grid can not be an option for the DeepAligner application.<br /> <br /> == Problems Solved ==<br /> <br /> The recently used deep sequencing techniques present a new data processing challenge: mapping short fragment reads to open-access eukaryotic (animal: focusing on mouse and rat) genomes at the scale of several hundred thousands.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The aim of the task is threefold, the first task is to port the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure create a BLAST/BWA service, which is capable to serve the short fragment sequence alignment demand of the regional bioinformatics communities, to do sequence analysis with high throughput short fragment sequence alignments against the eukaryotic genomes to search for regulatory mechanisms controlled by short fragments.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> Planned collaboration with the MoSGrid consortium (D-GRID based project, Germany) <br /> <br /> == Beneficiaries ==<br /> <br /> Serve the short fragment sequence alignment demand of the regional bioinformatics communities.<br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that a number of 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 Gb''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C,C++''<br /> * Parallel programming paradigm: ''Master-slave, MPI, + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides unified GUI of different bioinformatics applications (such as BLAST, BWA, or gene mapper applications) and enables end-user access indirectly to some open European bioinformatics databases. gUSE is basically a virtualization environment providing large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All part of gUSE is implemented as a set of Web services. 
WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE specific Web service calls. Our bioinformaticians need application specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application specific UI we have used the Application Specific Module (ASM) API of the gUSE by which such customization can easily and quickly be done. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168 compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Normal applications need to be firstly ported for use with gUSE/WS-PGRADE. Our used porting methodology includes two main steps: workflow development and user specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE is using a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing element. Ports represent input/output files the jobs receiving or producing. Arcs between ports represent file transfer operations. gUSE supports Parameter Study type high level parallelization. In the workflow special Generator ports can be used to generate the input files for all parallel jobs automatically while Collector jobs can run after all parallel execution to collect all parallel outputs. During the BLAST porting, we have exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment needs to have parameter assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure the input parameter like the “e” value or the number of MPI tasks and they can submit the alignment into the DCI environment with arbitrary large parameter fields.<br /> During the development of the workflow structure, we have aimed to construct a workflow that will be able to handle the main properties of the parallel BLAST application. To exploit the mechanism of Parameter Study used by gUSE the workflow has developed as a Parameter Study workflow with usage of autogenerator port (second small box around left top box in Fig 5.) and collector job (right bottom box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameter. Then the second job (middle box in Fig. 5) will be executed as many times as the input files specify. <br /> The last job of the workflow is a Collector which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived to the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs arrived to the Collector it starts to process all the incoming inputs files as a single input set. 
Finally, output files will be generated and stored on a Storage Element of the DCI, shown as a little box around the Collector. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic BLAST workflow]]<br /> <br /> Due to the strict HPC security constraints, end users must possess a valid certificate to use the HP-SEE Bioinformatics eScience Gateway. Users can seamlessly use the developed workflows on ARC-based infrastructures (like NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI-based infrastructures (Service Grids like SEE-GRID-SCI or SHIWA). After login, users create their own workflow-based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes, 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE's cluster. After successful deployment on 32 cores, benchmarking was initiated for 59 and 96 cores.''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study for higher scaling are still required.''<br /> <br /> == Scalability ==<br /> Benchmark dataset<br /> The BLAST database size was 5.1 GB, and the input sequence size was 29.13 kB. Each measurement was executed 10 times, and the average of the 10 executions was taken as the final result. <br /> Hardware platforms<br /> A number of hardware platforms have been used for the testing of the applications. The portlet we have developed is connected to all these different HPC infrastructures, and it is the job of the middleware to choose the appropriate one for each execution. 
For our benchmarks we specified the infrastructure the application was supposed to use.<br /> The benchmarks were executed on five different HPC infrastructures:<br /> *Debrecen<br /> **Intel Xeon X5680 (Westmere EP) 6 core nodes, SGI Altix ICE8400EX<br /> **1536 CPU cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~18 TFlops<br /> *Budapest (NIIF)<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **~700 cores<br /> **Total Capacity ~5 TFlops<br /> *Pecs<br /> **SGI UltraViolet 1000 - SMP (ccNUMA) <br /> **CPU: Intel Xeon X7542 (Nehalem EX) - 6 cores<br /> **1152 cores<br /> **6 TB memory<br /> **0.5 PB storage<br /> **Total capacity: ~10 TFlops<br /> *Szeged<br /> **fat-node cluster using CP4000BL blade<br /> **AMD Opteron 6174 CPUs, 12 cores (Magny Cours)<br /> **2112 cores<br /> **5.6 TB memory<br /> **0.25 PB storage<br /> **Total Capacity ~14 TFlops<br /> *Bulgaria<br /> **Blue Gene/P with PowerPC CPUs<br /> **2048 PowerPC 450 based compute nodes<br /> **8192 cores<br /> **4 TB memory<br /> <br /> Software platforms<br /> The applications were tested using multiple software stacks:<br /> *Different MPI implementations<br /> **openmpi_gcc-1.4.3<br /> **openmpi_open64-1.6<br /> **mpt-2.04<br /> **openmpi-1.4.2<br /> **openmpi-1.3.2<br /> *Different compilers<br /> **opencc<br /> **icc<br /> **openmpi-gcc<br /> <br /> Each of the different hardware platforms has multiple MPI environments. We have tested our applications with multiple versions. There is usually one preferred implementation at each of the HPC centers, which we used.<br /> <br /> <br /> == Achieved Results ==<br /> The DeepAligner application was successfully tested with parallel short DNA sequence searches. So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> <br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignment optimisation of the GUI, and more scientific publications about short sequence alignment.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM CMSLTM 2013-01-16T14:22:42Z <p>Lifesci: /* Running on Several HP-SEE Centres */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Computational Models of Short and Long Term Memory''<br /> * Application's acronym: ''CMSLTM'' <br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Dr. Panayiota Poirazi, poirazi@imbb.forth.gr''<br /> * Technical contact: ''George Kastellakis, gkastel@imbb.forth.gr''<br /> * Developers: ''George Kastellakis, IMBB-FORTH, Greece''<br /> * Web site: http://www.imbb.forth.gr/people/poirazi/drupal/?q=node/7<br /> <br /> == Short Description ==<br /> <br /> This project involves the development of biologically relevant compartmentalized models of neurons and neuronal networks. We are interested in the modeling of processes related to:<br /> <br /> 1. Models of sustained activity in the Prefrontal Cortex <br /> <br /> Neurons in the prefrontal cortex display sustained activity in response to environmental or internal stimuli, that is, they continue to fire until the behavioral outcome or a reward signal. 
Large-scale modeling studies have mostly proposed intensive recurrence and slow excitation mediated by NMDA receptors as the crucial mechanisms able to support the sustained excitation in these neurons. In addition, electrophysiological studies suggest that single-cell intrinsic currents also underlie the delayed excitation of prefrontal neurons.<br /> This project is focused on the interplay of both the computational and electrophysiological approaches in characterizing the activity observed in layer V prefrontal pyramidal neurons. Towards this goal we use morphologically simplified compartmental models of layer V neurons (both pyramidal and interneurons) implemented in the NEURON simulation environment. These neurons are fully interconnected in a small network, the properties of which are extensively based on anatomical and electrophysiological data. <br /> <br /> 2. Models of Fear Memory Allocation in the Amygdala<br /> <br /> One of the goals of neuroscience is to understand the process via which memories are encoded and stored in the brain. Recent experiments have demonstrated how memories are encoded in specific neuron groups in the brain. Traditionally, it is thought that the strengthening of synaptic connections via synaptic plasticity is the mechanism underlying memory formation in the cortex. New insights indicate that other factors, such as neuronal excitability and competition among neurons, may crucially affect the formation of a memory trace. The transcription factor CREB has been shown to modulate the probability of allocation of memory to specific groups of neurons in the Lateral Amygdala. The goal of our computational work is to investigate the process of memory allocation and the properties of the memory trace.<br /> <br /> == Problems Solved ==<br /> <br /> a) The PFC microcircuit is used to characterize:<br /> <br /> - the generation of Up and Down states, observed during both in vivo and in vitro recordings;<br /> <br /> - the interplay of single-cell ionic currents with synaptic currents for the emergence of sustained excitability;<br /> <br /> - the role of both synaptic and intrinsic plasticity in long term memory formation in the prefrontal cortex.<br /> <br /> b) By creating a large-scale computational model of the lateral amygdala, we aim to investigate how the modulation of excitability, synaptic plasticity, homeostatic plasticity and neuronal inhibition affects the formation of fear memories in the lateral amygdala. <br /> <br /> == Scientific and Social Impact ==<br /> <br /> Understanding the properties that make these neurons special in carrying temporally distinct information by using a bottom-up approach is a key issue in unraveling the complicated dynamics and flexibility of prefrontal neurons during behavioral tasks. <br /> <br /> Our fear memory simulations can provide insights into the relationships between memory traces and the role of CREB. 
Our results can be useful in understanding the outcomes of related behavioral and electrophysiological studies.<br /> <br /> == Collaborations ==<br /> * University of Crete - Biology Department<br /> * Buzsáki Lab, Rutgers Univ.<br /> <br /> == Beneficiaries ==<br /> * University of Crete - Biology Department<br /> * Computational Biology Lab - IMBB/FORTH<br /> <br /> == Number of users ==<br /> 2<br /> <br /> == Development plan == <br /> <br /> * Concept: ''Completed.''<br /> * Start of alpha stage: ''M5''<br /> * Start of beta stage: ''M8''<br /> * Start of testing stage: ''M9''<br /> * Start of deployment stage: ''M10''<br /> * Start of production stage: ''M15''<br /> <br /> == Resource requirements == <br /> <br /> * Number of cores required for a single run: ''60''<br /> * Minimum RAM/core required: ''2 GB''<br /> * Storage space during a single run: ''6 GB''<br /> * Long-term data storage: ''10 GB''<br /> * Total core hours required: ''15000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''NEURON''<br /> * Parallel programming paradigm: ''MPI''<br /> * Main parallel code: ''OpenMPI''<br /> * Pre/post processing code: ''Python and MATLAB scripts''<br /> * Application tools and libraries: ''NEURON, SciPy, Matplotlib''<br /> <br /> == Usage Example ==<br /> The application requires compilation of the NEURON mechanism libraries on the target platform:<br /> cd APP_DIR/mechanisms; nrnivmodl<br /> <br /> The launcher script &quot;start.sh&quot; starts a NEURON process with the correct parameters and is submitted to the HPC cluster through PBS:<br /> mpiexec ../mechanism/x86_64/special -mpi main_code.hoc<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''HPCG/BG''<br /> ** Applied for access on: ''02.2011''<br /> ** Access granted on: ''02/2011''<br /> ** Achieved scalability: ''50 cores''<br /> <br /> * Accessed production systems:<br /> ** Applied for access on: ''02.2011''<br /> ** Access granted on: ''02/2011''<br /> ** Achieved scalability: ''50 cores''<br /> <br /> * Porting activities: ''Application ported and running since 04/2011''<br /> * Scalability studies: ''5-20 Cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> [[CMSLTM/Scalability|Click here for full Benchmark Analysis]]<br /> <br /> <br /> * Benchmarking activities and results: ''Project deployed on HPCG-BG''<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> <br /> * Other issues: ''.''<br /> <br /> == Achieved Results ==<br /> * Parallel Simulations scaled our existing models up by a factor of 10<br /> <br /> * Extensive parameter exploration of the CA1 stimulation model has been completed<br /> [[Image:Pex.jpg]]<br /> <br /> == Publications ==<br /> * Daphne Krioneriti*, Athanasia Papoutsi and Panayiota Poirazi (2011) Mechanisms underlying the emergence of Up and Down states in a model PFC microcircuit. BMC Neuroscience 2011, 12 (Suppl 1):O7<br /> * Papoutsi A, Sidiropoulou K and Poirazi P., “Temporal Dynamics underlie bi-stability in a model PFC microcircuit” (submitted)<br /> * Poirazi P. “Dendrites and Information processing” invited seminar, Bernstein Center Freiburg, 27/3/2012<br /> <br /> == Foreseen activities ==<br /> * Simulations for larger time frames. <br /> * Simulations with variable connectivity (i.e. 
not all-to-all networks)<br /> * Assessment of the role of feedback inhibition in sustained activity</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM CMSLTM 2013-01-16T14:22:24Z <p>Lifesci: /* Running on Several HP-SEE Centres */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Computational Models of Short and Long Term Memory''<br /> * Application's acronym: ''CMSLTM'' <br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Dr. Panayiota Poirazi, poirazi@imbb.forth.gr''<br /> * Technical contact: ''George Kastellakis, gkastel@imbb.forth.gr''<br /> * Developers: ''George Kastellakis, IMBB-FORTH, Greece''<br /> * Web site: http://www.imbb.forth.gr/people/poirazi/drupal/?q=node/7<br /> <br /> == Short Description ==<br /> <br /> This project involves the development of biologically relevant compartmentalized models of neurons and neuronal networks. We are interested in the modeling of processes related to:<br /> <br /> 1. Models of sustained activity in the Prefrontal Cortex <br /> <br /> Neurons in the prefrontal cortex display sustained activity in response to environmental or internal stimuli, that is, they continue to fire until the behavioral outcome or a reward signal. Large-scale modeling studies have mostly proposed intensive recurrence and slow excitation mediated by NMDA receptors as the crucial mechanisms able to support the sustained excitation in these neurons. In addition, electrophysiological studies suggest that single-cell intrinsic currents also underlie the delayed excitation of prefrontal neurons.<br /> This project is focused on the interplay of both the computational and electrophysiological approaches in characterizing the activity observed in layer V prefrontal pyramidal neurons. Towards this goal we use morphologically simplified compartmental models of layer V neurons (both pyramidal and interneurons) implemented in the NEURON simulation environment. These neurons are fully interconnected in a small network, the properties of which are extensively based on anatomical and electrophysiological data. <br /> <br /> 2. Models of Fear Memory Allocation in the Amygdala<br /> <br /> One of the goals of neuroscience is to understand the process via which memories are encoded and stored in the brain. Recent experiments have demonstrated how memories are encoded in specific neuron groups in the brain. Traditionally, it is thought that the strengthening of synaptic connections via synaptic plasticity is the mechanism underlying memory formation in the cortex. New insights indicate that other factors, such as neuronal excitability and competition among neurons, may crucially affect the formation of a memory trace. The transcription factor CREB has been shown to modulate the probability of allocation of memory to specific groups of neurons in the Lateral Amygdala. 
The goal of our computational work is to investigate the process of memory allocation and the properties of the memory trace.<br /> <br /> == Problems Solved ==<br /> <br /> a) The PFC microcircuit is used to characterize:<br /> <br /> - the generation of Up and Down states, observed during both in vivo and in vitro recordings;<br /> <br /> - the interplay of single-cell ionic currents with synaptic currents for the emergence of sustained excitability;<br /> <br /> - the role of both synaptic and intrinsic plasticity in long term memory formation in the prefrontal cortex.<br /> <br /> b) By creating a large-scale computational model of the lateral amygdala, we aim to investigate how the modulation of excitability, synaptic plasticity, homeostatic plasticity and neuronal inhibition affects the formation of fear memories in the lateral amygdala. <br /> <br /> == Scientific and Social Impact ==<br /> <br /> Understanding the properties that make these neurons special in carrying temporally distinct information by using a bottom-up approach is a key issue in unraveling the complicated dynamics and flexibility of prefrontal neurons during behavioral tasks. <br /> <br /> Our fear memory simulations can provide insights into the relationships between memory traces and the role of CREB. Our results can be useful in understanding the outcomes of related behavioral and electrophysiological studies.<br /> <br /> == Collaborations ==<br /> * University of Crete - Biology Department<br /> * Buzsáki Lab, Rutgers Univ.<br /> <br /> == Beneficiaries ==<br /> * University of Crete - Biology Department<br /> * Computational Biology Lab - IMBB/FORTH<br /> <br /> == Number of users ==<br /> 2<br /> <br /> == Development plan == <br /> <br /> * Concept: ''Completed.''<br /> * Start of alpha stage: ''M5''<br /> * Start of beta stage: ''M8''<br /> * Start of testing stage: ''M9''<br /> * Start of deployment stage: ''M10''<br /> * Start of production stage: ''M15''<br /> <br /> == Resource requirements == <br /> <br /> * Number of cores required for a single run: ''60''<br /> * Minimum RAM/core required: ''2 GB''<br /> * Storage space during a single run: ''6 GB''<br /> * Long-term data storage: ''10 GB''<br /> * Total core hours required: ''15000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''NEURON''<br /> * Parallel programming paradigm: ''MPI''<br /> * Main parallel code: ''OpenMPI''<br /> * Pre/post processing code: ''Python and MATLAB scripts''<br /> * Application tools and libraries: ''NEURON, SciPy, Matplotlib''<br /> <br /> == Usage Example ==<br /> The application requires compilation of the NEURON mechanism libraries on the target platform:<br /> cd APP_DIR/mechanisms; nrnivmodl<br /> <br /> The launcher script &quot;start.sh&quot; starts a NEURON process with the correct parameters and is submitted to the HPC cluster through PBS:<br /> mpiexec ../mechanism/x86_64/special -mpi main_code.hoc<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''HPCG/BG''<br /> ** Applied for access on: ''02.2011''<br /> ** Access granted on: ''02/2011''<br /> ** Achieved scalability: ''50 cores''<br /> <br /> * Accessed production systems:<br /> ** Applied for access on: ''02.2011''<br /> ** Access granted on: ''02/2011''<br /> ** Achieved scalability: ''50 cores''<br /> <br /> * Porting activities: ''Application ported and running since 04/2011''<br /> * Scalability studies: ''5-20 Cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br
/> [[CMSLTM/Scalability|Click here for full Benchmark Analysis]]<br /> <br /> <br /> * Benchmarking activities and results: ''Project deployed on HPCG-BG''<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> <br /> * Other issues: ''.''<br /> <br /> == Achieved Results ==<br /> * Parallel Simulations scaled our existing models up by a factor of 10<br /> <br /> * Extensive parameter exploration of the CA1 stimulation model has been completed<br /> [[Image:Pex.jpg]]<br /> <br /> == Publications ==<br /> * Daphne Krioneriti*, Athanasia Papoutsi and Panayiota Poirazi (2011) Mechanisms underlying the emergence of Up and Down states in a model PFC microcircuit. BMC Neuroscience 2011, 12 (Suppl 1):O7<br /> * Papoutsi A, Sidiropoulou K and Poirazi P., “Temporal Dynamics underlie bi-stability in a model PFC microcircuit” (submitted)<br /> * Poirazi P. “Dendrites and Information processing” invited seminar, Bernstein Center Freiburg, 27/3/2012<br /> <br /> == Foreseen activities ==<br /> * Simulations for larger time frames. <br /> * Simulations with variable connectivity (i.e. not all-to-all networks)<br /> * Assessment of the role of feedback inhibition in sustained activity</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM/Scalability CMSLTM/Scalability 2013-01-16T14:21:35Z <p>Lifesci: /* Examples */</p> <hr /> <div>__TOC__<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): George Kastellakis<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Life Sciences<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: NEURON<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 2000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/CMSLTM<br /> |-<br /> |}<br /> <br /> == Implemented scalability actions ==<br /> <br /> Actions:<br /> * Estimation of performance by running a smaller network on different numbers of processors<br /> * Our simulations were implemented in the NEURON simulator, which uses MPI for parallelization. We weren't able to evaluate different MPI implementations, because it was only possible to compile NEURON with OpenMPI.<br /> * Different compilers: We attempted to compile NEURON using the Intel compilers; however, it was not possible due to incompatibilities. Although a precompiled binary is provided in the system, NEURON needs to recompile its own modules every time a change is made; therefore it is not possible to run the Intel-compiled binary.<br /> <br /> == Benchmark dataset ==<br /> The simulation consisted of a network configuration that was smaller than the one used in production. 
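As a concrete illustration of how such a benchmark run can be launched, a minimal PBS job script in the spirit of the start.sh launcher described on the application page is sketched below. This is a hedged sketch only: the node and core counts, walltime and paths are assumptions rather than the actual production script, and the scaling points are obtained simply by resubmitting with different node/core requests.<br />
 #!/bin/bash<br />
 #PBS -N cmsltm_benchmark<br />
 #PBS -l nodes=4:ppn=8<br />
 #PBS -l walltime=02:00:00<br />
 cd $PBS_O_WORKDIR<br />
 # build the NEURON mechanism libraries once on the target platform<br />
 cd mechanisms; nrnivmodl; cd ..<br />
 # run the reduced benchmark network on the cores granted by PBS<br />
 # (the MPI launcher is assumed to pick up the PBS allocation)<br />
 mpiexec mechanisms/x86_64/special -mpi main_code.hoc<br />
<br />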
<br /> <br /> == Hardware platforms ==<br /> Simulations were run on HPCG/BG in Sofia, Bulgaria.<br /> <br /> == Execution times ==<br /> <br /> Simulation times and graphs for different numbers of simulated neurons and number of computational nodes are shown below<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> == Memory Usage ==<br /> Memory usage is stable &lt; 2GB as our application is not memory-intensive.<br /> <br /> == Profiling ==<br /> We could not perform profiling due to lack of profiling support by the underlying platform (NEURON)<br /> <br /> == Communication ==<br /> Interprocess communication through MPI. OpenMPI was the MPI implementation we used. We were unable to compile NEURON to work correctly with other MPI implementations.<br /> <br /> == I/O ==<br /> We did not perform benchmarks, as our application is not IO-intensive<br /> <br /> == CPU and cache ==<br /> No data<br /> <br /> == Derived metrics ==<br /> <br /> <br /> == Analysis ==<br /> <br /> Our simulations show sub-linear scaling as new computational nodes are added. We identified optimal configurations that compromise between simulation size and number of nodes.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM/Scalability CMSLTM/Scalability 2013-01-16T14:21:05Z <p>Lifesci: /* I/O */</p> <hr /> <div>__TOC__<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): George Kastellakis<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Life Sciences<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: NEURON<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 2000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/CMSLTM<br /> |-<br /> |}<br /> <br /> == Implemented scalability actions ==<br /> <br /> Actions:<br /> * Estimation of performance by running a smaller network in different number of processors<br /> * Our simulations were implemented in the NEURON simulator, which uses MPI for parallelization. We weren't able to evaluate different MPI implementations, because it was only possible to compile NEURON with openMPI.<br /> * Different compilers: We attempted to compile NEURON using intel compilers, however it was not possible due to incompatibilities. Although a precompiled binary is provided in the system, NEURON needs to recompile its own modules every time a change is made, therefore it is not possible to run the intel-compiled binary.<br /> <br /> == Benchmark dataset ==<br /> The simulation consisted of a network configuration that was smaller than the one used in production. 
<br /> <br /> == Hardware platforms ==<br /> Simulations were run on HPCG/BG in Sofia, Bulgaria.<br /> <br /> == Execution times ==<br /> <br /> Simulation times and graphs for different numbers of simulated neurons and computational nodes are shown below.<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> == Memory Usage ==<br /> Memory usage is stable (&lt; 2 GB) as our application is not memory-intensive.<br /> <br /> == Profiling ==<br /> We could not perform profiling due to lack of profiling support by the underlying platform (NEURON).<br /> <br /> == Communication ==<br /> Interprocess communication is through MPI. OpenMPI was the MPI implementation we used. We were unable to compile NEURON to work correctly with other MPI implementations.<br /> <br /> == I/O ==<br /> We did not perform benchmarks, as our application is not I/O-intensive.<br /> <br /> == CPU and cache ==<br /> No data<br /> <br /> == Derived metrics ==<br /> <br /> <br /> == Analysis ==<br /> <br /> Our simulations show sub-linear scaling as new computational nodes are added. We identified optimal configurations that compromise between simulation size and number of nodes.<br /> <br /> <br /> <br /> = Examples =<br /> <br /> == SET ==<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): Team leader Emanouil Atanassov<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Computational Physics<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: C/C++<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 6000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/SET<br /> |-<br /> |}<br /> <br /> === Implemented scalability actions ===<br /> <br /> * Our focus in this application was to achieve the optimal output from the hardware platforms that were available to us. Achieving good scalability depends mostly on avoiding bottlenecks and using good parallel pseudorandom number generators and generators for low-discrepancy sequences. Because of the high requirements for computing time, we took several actions in order to achieve the optimal output. <br /> * The parallelization has been performed with MPI. Different versions of MPI were tested, and we found that the particular choice of MPI does not change the scalability results much. This was a fortunate outcome, as it allowed porting to the Blue Gene/P architecture without substantial changes. <br /> * Once we ensured that the MPI parallelization model we implemented achieves good parallel efficiency, we concentrated on achieving the best possible results from using a single CPU core. <br /> * We performed profiling and benchmarking, and also tested and compared different pseudo-random number generators and low-discrepancy sequences. <br /> * We tested various compilers and we concluded that the Intel compiler currently provides the best results for the CPU version running at our Intel Xeon cluster. 
For the IBM Blue Gene/P architecture the obvious choice was the IBM XL compiler suite, since it has an advantage over the GNU Compiler Collection in that it supports the double-hummer mode of the CPUs, achieving twice the floating point calculation speed. For the GPU-based version that we developed recently we rely on the C++ compiler supplied by NVIDIA. <br /> * For all the chosen compilers we performed tests to choose the best possible compiler and linker options. For the Intel-based cluster one important source of ideas for the options was the website of the SPEC tests, where one can see what options were used for each particular sub-test of the SPEC suite. From there we also took the idea to perform two-pass compilation, where the results from profiling on the first pass were fed to the second pass of the compilation to optimise further. <br /> * For the HPCG cluster we also measured the performance of the parallel code with and without hyperthreading. It is well known that hyperthreading does not always improve the overall speed of calculations, because the floating point units of the processor are shared between the threads, and thus if the code is highly intensive in such computations, there is no gain to be made from hyperthreading. Our experience with other applications of the HP-SEE project yields such examples. But for the SET application we found about 30% improvement when hyperthreading is turned on, which should be considered a good result and also shows that our overall code is efficient in the sense that most of it is now floating point computations, unlike some earlier versions where the gain from hyperthreading was larger. <br /> * For the NVIDIA-based version we found that we have much better performance using the newer M2090 cards versus the old GTX 295, which was to be expected because the integer performance of the GTX 295 is comparable to that of the M2090, but the floating-point performance of the GTX is many times smaller. <br /> <br /> === Benchmark dataset ===<br /> <br /> For the benchmarking we fixed a particular division of the domain into 800 by 260 points, an electric field of 15 and an evolution time of 180 femtoseconds. The computational time in such a case becomes proportional to the number of Markov Chain Monte Carlo trajectories. In most tests we used 1 billion (10^9) trajectories, but for some tests we decreased that in order to shorten the overall testing time. <br /> <br /> === Hardware platforms ===<br /> <br /> HPCG cluster and Blue Gene/P supercomputer. <br /> <br /> Four distinct hardware platforms were used:<br /> * the HPCG cluster with Intel Xeon X5560 CPUs @ 2.8 GHz, <br /> * Blue Gene/P with PowerPC CPUs, <br /> * our GTX 295-based GPU cluster (with Intel Core i7 920 processors), <br /> * our new M2090-based resource with Intel Xeon X5650 processors. <br /> <br /> === Execution times ===<br /> <br /> [[File:Scalability_example1.png|center|500px]]<br /> <br /> A comparison of the execution time and parallel efficiency of the SET application is shown for HPCG (table below) and Blue Gene/P (table above).<br /> <br /> [[File:Scalability_example2.png|center|500px]]<br /> <br /> [[File:SET-scalability-graph-BG-P.jpg|center|500px]]<br /> <br /> === Memory Usage ===<br /> <br /> The maximum memory usage of a single computational thread is relatively small, on the order of 100 MB. On the GPUs there are several different kinds of memory, some of them rather limited. 
The available registers and the shared memory are especially problematic, since there is a risk that, if the available registers are all used, some local variables will be spilled to global memory, incurring high latency and other issues. Still, we found reasonable performance using 256 GPU threads, which is an acceptable number.<br /> <br /> === Profiling ===<br /> <br /> Profiling was performed in order to improve the compiler optimisation during the second pass and also in order to understand what kind of issues we may be having in the application. We found, as expected, that most of the computational time is spent in computing transcendental functions like sin, cos and exp, and also in the generation of pseudorandom numbers. In the GPU version we attempted to replace the regular sin, cos, etc., with the less accurate versions that are more efficient, but we found that the gain from that is relatively small and is not worth the loss of accuracy. For the GPU-based version we observed a relatively high percentage of divergence within warps, which means that some logical statements are resolved differently within threads of the same warp and there is a substantial loss of performance. So far we have not been able to re-order the computation so as to avoid it.<br /> <br /> === Communication ===<br /> <br /> The communication for this application is not critical in the sense that the communication takes less than 10% of the execution time.<br /> <br /> === I/O ===<br /> <br /> The input for the application is small, containing the parameters of the problem at hand. The output is written out at the end of the computation and its size depends on the parameters. For a reasonable size of the domain the output is on the order of several megabytes. A more accurate mesh is reasonable only for smaller evolution times, and the output size will be proportional to the size of the mesh.<br /> <br /> === CPU and cache ===<br /> <br /> We believe that most of the computations of the CPU-based version fit in the cache on the Intel-based systems. For the PowerPC processors of the Blue Gene/P, some lookup operations when sampling the random variables use the main memory and thus incur higher latency. For the GPU-based version the situation is similar, since some of the tables are larger than the so-called shared memory. In both cases, the overall significance of these operations is less than 5%. <br /> <br /> === Analysis ===<br /> <br /> From our testing we concluded that hyperthreading should be used when available, that production Tesla cards have much higher performance than essentially gaming cards like the GTX 295, that two passes of compilation should be used for the Intel compiler targeting Intel CPUs, and that the application is scalable to the maximum number of available cores/threads at our disposal. For future work it remains to find an efficient strategy of reordering the computations on the GPUs in order to avoid warp divergence. For the CPU-based version we have also developed an MPI meta-program that measures the variation and uses a genetic algorithm (from the galib library) to optimise the transition density. 
This step will be added as a pre-processing stage of the program in order to provide a speedup on the order of 20% to the overall computations, but to do so we need to find the right balance between this stage and the main computational stage.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM/Scalability CMSLTM/Scalability 2013-01-16T14:20:34Z <p>Lifesci: </p> <hr /> <div>__TOC__<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): George Kastellakis<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Life Sciences<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: NEURON<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 2000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/CMSLTM<br /> |-<br /> |}<br /> <br /> == Implemented scalability actions ==<br /> <br /> Actions:<br /> * Estimation of performance by running a smaller network on different numbers of processors<br /> * Our simulations were implemented in the NEURON simulator, which uses MPI for parallelization. We weren't able to evaluate different MPI implementations, because it was only possible to compile NEURON with OpenMPI.<br /> * Different compilers: We attempted to compile NEURON using the Intel compilers; however, it was not possible due to incompatibilities. Although a precompiled binary is provided in the system, NEURON needs to recompile its own modules every time a change is made; therefore it is not possible to run the Intel-compiled binary.<br /> <br /> == Benchmark dataset ==<br /> The simulation consisted of a network configuration that was smaller than the one used in production. <br /> <br /> == Hardware platforms ==<br /> Simulations were run on HPCG/BG in Sofia, Bulgaria.<br /> <br /> == Execution times ==<br /> <br /> Simulation times and graphs for different numbers of simulated neurons and computational nodes are shown below.<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> == Memory Usage ==<br /> Memory usage is stable (&lt; 2 GB) as our application is not memory-intensive.<br /> <br /> == Profiling ==<br /> We could not perform profiling due to lack of profiling support by the underlying platform (NEURON).<br /> <br /> == Communication ==<br /> Interprocess communication is through MPI. OpenMPI was the MPI implementation we used. We were unable to compile NEURON to work correctly with other MPI implementations.<br /> <br /> == I/O ==<br /> No data<br /> <br /> == CPU and cache ==<br /> No data<br /> <br /> == Derived metrics ==<br /> <br /> <br /> == Analysis ==<br /> <br /> Our simulations show sub-linear scaling as new computational nodes are added. 
We identified optimal configurations that compromise between simulation size and number of nodes.<br /> <br /> <br /> <br /> = Examples =<br /> <br /> == SET ==<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): Team leader Emanouil Atanassov<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Computational Physics<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: C/C++<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 6000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/SET<br /> |-<br /> |}<br /> <br /> === Implemented scalability actions ===<br /> <br /> * Our focus in this application was to achieve the optimal output from the hardware platforms that were available to us. Achieving good scalability depends mostly on avoiding bottlenecks and using good parallel pseudorandom number generators and generators for low-discrepancy sequences. Because of the high requirements for computing time we took several actions in order to achieve the optimal output. <br /> * The parallelization has been performed with MPI. Different version of MPI were tested and we found that the particular choice of MPI does not change much the scalability results. This was fortunate outcome as it allowed porting to the Blue Gene/P architecture without substantial changes. <br /> * Once we ensured that the MPI parallelization model we implemented achieves good parallel efficiency, we concentrated on achieving the best possible results from using single CPU core. <br /> * We performed profiling and benchmarking, also tested different generators and compared different pseudo-random number generators and low-discrepancy sequences. <br /> * We tested various compilers and we concluded that the Intel compiler currently provides the best results for the CPU version running at our Intel Xeon cluster. For the IBM Blue Gene/P architecture the obvious choice was the IBM XL compiler suite since it has advantage versus the GNU Compiler Collection in that it supports the double-hammer mode of the CPUs, achieving twice the floating point calculation speeds. For the GPU-based version that we developed recently we relay on the C++ compiler supplied by NVIDIA. <br /> * For all the choosen compilers we performed tests to choose the best possible compiler and linker options. For the Intel-based cluster one important source of ideas for the options was the website of the SPEC tests, where one can see what options were used for each particular sub-test of the SPEC suite. From there we also took the idea to perform two-pass compilation, where the results from profiling on the first pass were fed to the second pass of the compilation to optimise further. <br /> * For the HPCG cluster we also measured the performance of the parallel code with and without hyperthreading. 
It is well known that hyperthreading does not always improve the overall speed of calculations, because the floating point units of the processor are shared between the threads and thus if the code is highly intensive in such computations, there is no gain to be made from hyperthreading. Our experience with other application of the HP-SEE project yields such examples. But for the SET application we found about 30% improvement when hyperthreading is turned on, which should be considered a good results and also shows that our overall code is efficient in the sense that most of it is now floating point computations, unlike some earlier version where the gain from hyperthreading was larger. <br /> * For the NVIDIA-based version we found that we have much better performance using the newer M2090 cards versus the old GTX295, which was to be expected because the integer performance of the GTX 295 is comparable to that of M2090, but the floating performance of the GTX is many times smaller. <br /> <br /> === Benchmark dataset ===<br /> <br /> For the benchmarking we fixed a particular division of the domain into 800 by 260 points, electric field of 15 and 180 femto-seconds evolution time. The computational time in such case becomes proportational to the number of Markov Chain Monte Carlo trajectories. In most tests we used 1 billion (10^9) trajectories, but for some tests we decreased that in order to shorten the overall testing time. <br /> <br /> === Hardware platforms ===<br /> <br /> HPCG cluster and Blue Gene/P supercomputer. <br /> <br /> Four distinct hardware platforms were used:<br /> * the HPCG cluster with Intel Xeon X5560 CPU @2.8 Ghz, <br /> * Blue Gene/P with PowerPC CPUs, <br /> * our GTX 295-based GPU cluster (with processors Intel Core i7 920) <br /> * our new M2090-based resource with processors Intel Xeon X5650. <br /> <br /> === Execution times ===<br /> <br /> [[File:Scalability_example1.png|center|500px]]<br /> <br /> Comparison of the execution time and parallel efficiency of SET application are shown on HPCG (Table below) and BlueGene/P ( Table above).<br /> <br /> [[File:Scalability_example2.png|center|500px]]<br /> <br /> [[File:SET-scalability-graph-BG-P.jpg|center|500px]]<br /> <br /> === Memory Usage ===<br /> <br /> The maximum memory usage of a single computational thread is relatively small, in the order of 100 MB. On the GPUs there are several different kinds of memory, some of them rather limited. The available registers and the shared memory are especially problematic, since there is a risk if the available registers are all used some local variables to be spilled to global memory, encountering high latency and other issues. Still we found reasonable performance using 256 GPU threads, which is an acceptable number.<br /> <br /> === Profiling ===<br /> <br /> Profiling was performed in order to improve the compiler optimisation during the second pass and also in order to understand what kind of issues we may be having in the application. We found as expected that most of the computational time is spent in computing of transcendental function like sin, cos, exp, and also in the generation of pseudorandom numbers. We attempted in the GPU version to replace the regular sin, cos, etc., with the less-accurate versions that are more efficient, but we found that the gain from that is relatively small and is not worth the loss of accuracy. 
For the GPU-based version we obtained relatively high percentage of divergence within warps, which means that some logical statements are resolved differently within threads of the same warp and there is substantial loss of performance. So far we have not been able to re-order the computation so as to avoid it.<br /> <br /> === Communication ===<br /> <br /> The communication for this application is not critical in the sense that the communication takes less than 10% of the execution time.<br /> <br /> === I/O ===<br /> <br /> The input for the application is small, containing the parameters of the problem at hand. The output is written out at the end of the computation and its size depends on the parameters. For a reasonable size of the domain the output is in the order of several megabytes. More accurate mesh is reasonable only for smaller evolution times and the output size will be proportional to the size of the mesh<br /> <br /> === CPU and cache ===<br /> <br /> We believe that most of the computations of the CPU-based version fit in the cache for the Intel-based version. For the PowerPC processors of the Blue Gene/P some lookup operations when sampling the random variables use the main memory and thus entice higher latency. For the GPU-based version the situation is similar, since some of the tables are larger than the size of the so-called shared-memory. In both cases, the overall significance of these operations is less than 5%. <br /> <br /> === Analysis ===<br /> <br /> From our testing we concluded that hyperthreading should be used when available, production Tesla cards have much higher performance than essentially gaming cards like GTX 295, two passes of compilation should be used for the Intel compiler targeting Intel CPUs and that the application is scalable to the maximum number of available cores/threads at our disposable. For future work it remains to find an efficient strategy of reordering of the computations on the GPUs in order to avoid warp divergence. For the CPU-based version we have also developed an MPI meta-program that measures the variation and uses genetic algorithm (from galib library) to optimise the transition density. 
This step will be added as a pre-processing stage of the program in order to provide some speedup in order of 20% to the overall computations, but to do so we need to find the right balance between this stage and the main computational stage.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM/Scalability CMSLTM/Scalability 2013-01-16T14:10:54Z <p>Lifesci: </p> <hr /> <div>__TOC__<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): George Kastellakis<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Life Sciences<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: NEURON<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 2000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/CMSLTM<br /> |-<br /> |}<br /> <br /> == Implemented scalability actions ==<br /> <br /> Actions:<br /> * Estimation of performance by running a smaller network in different number of processors<br /> * Usage of parallel libraries and different compilers<br /> <br /> == Benchmark dataset ==<br /> The simulation consisted of a smaller network configuration<br /> <br /> == Hardware platforms ==<br /> Simulations were run on HPCG/BG in Sofia, Bulgaria.<br /> <br /> == Execution times ==<br /> Important: please provide here a graph too, how your application scales<br /> <br /> [[Image:cmsltm_B1.jpg]]<br /> <br /> == Memory Usage ==<br /> == Profiling ==<br /> == Communication ==<br /> == I/O ==<br /> == CPU and cache ==<br /> == Derived metrics ==<br /> == Analysis ==<br /> <br /> Hints: Please provide here a summary how the scalability has been improved. Explain what you did and what are the improvements.<br /> <br /> = PRACE document about application scalabilities =<br /> <br /> http://www.prace-ri.eu/IMG/pdf/D6-2-2.pdf <br /> <br /> You do not need to follow its format, this is just only a reference.<br /> <br /> = Examples =<br /> <br /> == SET ==<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): Team leader Emanouil Atanassov<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Computational Physics<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: C/C++<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 6000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/SET<br /> |-<br /> |}<br /> <br /> === Implemented scalability actions ===<br /> <br /> * Our focus in this application was to achieve the optimal output from the hardware platforms that were available to us. 
Achieving good scalability depends mostly on avoiding bottlenecks and using good parallel pseudorandom number generators and generators for low-discrepancy sequences. Because of the high requirements for computing time we took several actions in order to achieve the optimal output. <br /> * The parallelization has been performed with MPI. Different version of MPI were tested and we found that the particular choice of MPI does not change much the scalability results. This was fortunate outcome as it allowed porting to the Blue Gene/P architecture without substantial changes. <br /> * Once we ensured that the MPI parallelization model we implemented achieves good parallel efficiency, we concentrated on achieving the best possible results from using single CPU core. <br /> * We performed profiling and benchmarking, also tested different generators and compared different pseudo-random number generators and low-discrepancy sequences. <br /> * We tested various compilers and we concluded that the Intel compiler currently provides the best results for the CPU version running at our Intel Xeon cluster. For the IBM Blue Gene/P architecture the obvious choice was the IBM XL compiler suite since it has advantage versus the GNU Compiler Collection in that it supports the double-hammer mode of the CPUs, achieving twice the floating point calculation speeds. For the GPU-based version that we developed recently we relay on the C++ compiler supplied by NVIDIA. <br /> * For all the choosen compilers we performed tests to choose the best possible compiler and linker options. For the Intel-based cluster one important source of ideas for the options was the website of the SPEC tests, where one can see what options were used for each particular sub-test of the SPEC suite. From there we also took the idea to perform two-pass compilation, where the results from profiling on the first pass were fed to the second pass of the compilation to optimise further. <br /> * For the HPCG cluster we also measured the performance of the parallel code with and without hyperthreading. It is well known that hyperthreading does not always improve the overall speed of calculations, because the floating point units of the processor are shared between the threads and thus if the code is highly intensive in such computations, there is no gain to be made from hyperthreading. Our experience with other application of the HP-SEE project yields such examples. But for the SET application we found about 30% improvement when hyperthreading is turned on, which should be considered a good results and also shows that our overall code is efficient in the sense that most of it is now floating point computations, unlike some earlier version where the gain from hyperthreading was larger. <br /> * For the NVIDIA-based version we found that we have much better performance using the newer M2090 cards versus the old GTX295, which was to be expected because the integer performance of the GTX 295 is comparable to that of M2090, but the floating performance of the GTX is many times smaller. <br /> <br /> === Benchmark dataset ===<br /> <br /> For the benchmarking we fixed a particular division of the domain into 800 by 260 points, electric field of 15 and 180 femto-seconds evolution time. The computational time in such case becomes proportational to the number of Markov Chain Monte Carlo trajectories. In most tests we used 1 billion (10^9) trajectories, but for some tests we decreased that in order to shorten the overall testing time. 
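To make the stated relation between trajectory count and run time explicit, the following small BASH sketch estimates the cost of a full benchmark run from a short calibration run and recalls how the parallel efficiency reported in the tables below is obtained; all numbers in it are placeholders, not measured SET timings.<br />
 # run time scales linearly with the number of Monte Carlo trajectories,<br />
 # so a short calibration run can be used to estimate a full-length run<br />
 SHORT_TRAJ=10000000      # 10^7 trajectories in the calibration run (placeholder)<br />
 SHORT_TIME=42            # measured wall-clock seconds for the calibration run (placeholder)<br />
 FULL_TRAJ=1000000000     # 10^9 trajectories used in most benchmark runs<br />
 echo Estimated full run: $(( SHORT_TIME * (FULL_TRAJ / SHORT_TRAJ) )) seconds<br />
 # parallel efficiency on p cores is then E(p) = T(1) / (p * T(p)),<br />
 # which is the quantity reported alongside the execution times below<br />
<br />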
<br /> <br /> === Hardware platforms ===<br /> <br /> HPCG cluster and Blue Gene/P supercomputer. <br /> <br /> Four distinct hardware platforms were used:<br /> * the HPCG cluster with Intel Xeon X5560 CPU @2.8 Ghz, <br /> * Blue Gene/P with PowerPC CPUs, <br /> * our GTX 295-based GPU cluster (with processors Intel Core i7 920) <br /> * our new M2090-based resource with processors Intel Xeon X5650. <br /> <br /> === Execution times ===<br /> <br /> [[File:Scalability_example1.png|center|500px]]<br /> <br /> Comparison of the execution time and parallel efficiency of SET application are shown on HPCG (Table below) and BlueGene/P ( Table above).<br /> <br /> [[File:Scalability_example2.png|center|500px]]<br /> <br /> [[File:SET-scalability-graph-BG-P.jpg|center|500px]]<br /> <br /> === Memory Usage ===<br /> <br /> The maximum memory usage of a single computational thread is relatively small, in the order of 100 MB. On the GPUs there are several different kinds of memory, some of them rather limited. The available registers and the shared memory are especially problematic, since there is a risk if the available registers are all used some local variables to be spilled to global memory, encountering high latency and other issues. Still we found reasonable performance using 256 GPU threads, which is an acceptable number.<br /> <br /> === Profiling ===<br /> <br /> Profiling was performed in order to improve the compiler optimisation during the second pass and also in order to understand what kind of issues we may be having in the application. We found as expected that most of the computational time is spent in computing of transcendental function like sin, cos, exp, and also in the generation of pseudorandom numbers. We attempted in the GPU version to replace the regular sin, cos, etc., with the less-accurate versions that are more efficient, but we found that the gain from that is relatively small and is not worth the loss of accuracy. For the GPU-based version we obtained relatively high percentage of divergence within warps, which means that some logical statements are resolved differently within threads of the same warp and there is substantial loss of performance. So far we have not been able to re-order the computation so as to avoid it.<br /> <br /> === Communication ===<br /> <br /> The communication for this application is not critical in the sense that the communication takes less than 10% of the execution time.<br /> <br /> === I/O ===<br /> <br /> The input for the application is small, containing the parameters of the problem at hand. The output is written out at the end of the computation and its size depends on the parameters. For a reasonable size of the domain the output is in the order of several megabytes. More accurate mesh is reasonable only for smaller evolution times and the output size will be proportional to the size of the mesh<br /> <br /> === CPU and cache ===<br /> <br /> We believe that most of the computations of the CPU-based version fit in the cache for the Intel-based version. For the PowerPC processors of the Blue Gene/P some lookup operations when sampling the random variables use the main memory and thus entice higher latency. For the GPU-based version the situation is similar, since some of the tables are larger than the size of the so-called shared-memory. In both cases, the overall significance of these operations is less than 5%. 
<br /> <br /> === Analysis ===<br /> <br /> From our testing we concluded that hyperthreading should be used when available, production Tesla cards have much higher performance than essentially gaming cards like GTX 295, two passes of compilation should be used for the Intel compiler targeting Intel CPUs and that the application is scalable to the maximum number of available cores/threads at our disposable. For future work it remains to find an efficient strategy of reordering of the computations on the GPUs in order to avoid warp divergence. For the CPU-based version we have also developed an MPI meta-program that measures the variation and uses genetic algorithm (from galib library) to optimise the transition density. This step will be added as a pre-processing stage of the program in order to provide some speedup in order of 20% to the overall computations, but to do so we need to find the right balance between this stage and the main computational stage.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/CMSLTM/Scalability CMSLTM/Scalability 2013-01-16T14:09:28Z <p>Lifesci: Created page with &quot;__TOC__ {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot; |- |colspan=&quot;2&quot; align=&quot;left...&quot;</p> <hr /> <div>__TOC__<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): XY<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Computational Chemistry<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: C++<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 200<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/XY<br /> |-<br /> |}<br /> <br /> == Implemented scalability actions ==<br /> <br /> Hints: these actions are from the previous deliverable (D8.2). Please write here down how the following actions have been implemented (one by one line comment for each action).<br /> D8.2: http://www.hp-see.eu/files/public/HPSEE-WP8-HU-020-D8.2-f-2011-08-29.pdf (Table 18 - Summary of scalability and interoperability actions)<br /> <br /> Actions:<br /> * Estimation of performance by running a smaller network in different number of processors<br /> * Usage of parallel libraries and different compilers<br /> <br /> == Benchmark dataset ==<br /> The simulation consisted of a smaller network configuration<br /> <br /> == Hardware platforms ==<br /> Hints: which HPC sites have you used?<br /> <br /> == Execution times ==<br /> Important: please provide here a graph too, how your application scales<br /> <br /> == Memory Usage ==<br /> == Profiling ==<br /> == Communication ==<br /> == I/O ==<br /> == CPU and cache ==<br /> == Derived metrics ==<br /> == Analysis ==<br /> <br /> Hints: Please provide here a summary how the scalability has been improved. 
Explain what you did and what are the improvements.<br /> <br /> = PRACE document about application scalabilities =<br /> <br /> http://www.prace-ri.eu/IMG/pdf/D6-2-2.pdf <br /> <br /> You do not need to follow its format, this is just only a reference.<br /> <br /> = Examples =<br /> <br /> == SET ==<br /> <br /> {| style=&quot;width: 100%; border-collapse: separate; border-spacing: 0; border-width: 1px; border-style: solid; border-color: #000; padding: 0&quot;<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot; | Code author(s): Team leader Emanouil Atanassov<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| Application areas: Computational Physics<br /> |-<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Language: C/C++<br /> |align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;|Estimated lines of code: 6000<br /> |-<br /> |colspan=&quot;2&quot; align=&quot;left&quot; style=&quot;border-style: solid; border-width: 1px&quot;| URL: http://wiki.hp-see.eu/index.php/SET<br /> |-<br /> |}<br /> <br /> === Implemented scalability actions ===<br /> <br /> * Our focus in this application was to achieve the optimal output from the hardware platforms that were available to us. Achieving good scalability depends mostly on avoiding bottlenecks and using good parallel pseudorandom number generators and generators for low-discrepancy sequences. Because of the high requirements for computing time we took several actions in order to achieve the optimal output. <br /> * The parallelization has been performed with MPI. Different version of MPI were tested and we found that the particular choice of MPI does not change much the scalability results. This was fortunate outcome as it allowed porting to the Blue Gene/P architecture without substantial changes. <br /> * Once we ensured that the MPI parallelization model we implemented achieves good parallel efficiency, we concentrated on achieving the best possible results from using single CPU core. <br /> * We performed profiling and benchmarking, also tested different generators and compared different pseudo-random number generators and low-discrepancy sequences. <br /> * We tested various compilers and we concluded that the Intel compiler currently provides the best results for the CPU version running at our Intel Xeon cluster. For the IBM Blue Gene/P architecture the obvious choice was the IBM XL compiler suite since it has advantage versus the GNU Compiler Collection in that it supports the double-hammer mode of the CPUs, achieving twice the floating point calculation speeds. For the GPU-based version that we developed recently we relay on the C++ compiler supplied by NVIDIA. <br /> * For all the choosen compilers we performed tests to choose the best possible compiler and linker options. For the Intel-based cluster one important source of ideas for the options was the website of the SPEC tests, where one can see what options were used for each particular sub-test of the SPEC suite. From there we also took the idea to perform two-pass compilation, where the results from profiling on the first pass were fed to the second pass of the compilation to optimise further. <br /> * For the HPCG cluster we also measured the performance of the parallel code with and without hyperthreading. 
It is well known that hyperthreading does not always improve the overall speed of calculations, because the floating point units of the processor are shared between the threads and thus if the code is highly intensive in such computations, there is no gain to be made from hyperthreading. Our experience with other application of the HP-SEE project yields such examples. But for the SET application we found about 30% improvement when hyperthreading is turned on, which should be considered a good results and also shows that our overall code is efficient in the sense that most of it is now floating point computations, unlike some earlier version where the gain from hyperthreading was larger. <br /> * For the NVIDIA-based version we found that we have much better performance using the newer M2090 cards versus the old GTX295, which was to be expected because the integer performance of the GTX 295 is comparable to that of M2090, but the floating performance of the GTX is many times smaller. <br /> <br /> === Benchmark dataset ===<br /> <br /> For the benchmarking we fixed a particular division of the domain into 800 by 260 points, electric field of 15 and 180 femto-seconds evolution time. The computational time in such case becomes proportational to the number of Markov Chain Monte Carlo trajectories. In most tests we used 1 billion (10^9) trajectories, but for some tests we decreased that in order to shorten the overall testing time. <br /> <br /> === Hardware platforms ===<br /> <br /> HPCG cluster and Blue Gene/P supercomputer. <br /> <br /> Four distinct hardware platforms were used:<br /> * the HPCG cluster with Intel Xeon X5560 CPU @2.8 Ghz, <br /> * Blue Gene/P with PowerPC CPUs, <br /> * our GTX 295-based GPU cluster (with processors Intel Core i7 920) <br /> * our new M2090-based resource with processors Intel Xeon X5650. <br /> <br /> === Execution times ===<br /> <br /> [[File:Scalability_example1.png|center|500px]]<br /> <br /> Comparison of the execution time and parallel efficiency of SET application are shown on HPCG (Table below) and BlueGene/P ( Table above).<br /> <br /> [[File:Scalability_example2.png|center|500px]]<br /> <br /> [[File:SET-scalability-graph-BG-P.jpg|center|500px]]<br /> <br /> === Memory Usage ===<br /> <br /> The maximum memory usage of a single computational thread is relatively small, in the order of 100 MB. On the GPUs there are several different kinds of memory, some of them rather limited. The available registers and the shared memory are especially problematic, since there is a risk if the available registers are all used some local variables to be spilled to global memory, encountering high latency and other issues. Still we found reasonable performance using 256 GPU threads, which is an acceptable number.<br /> <br /> === Profiling ===<br /> <br /> Profiling was performed in order to improve the compiler optimisation during the second pass and also in order to understand what kind of issues we may be having in the application. We found as expected that most of the computational time is spent in computing of transcendental function like sin, cos, exp, and also in the generation of pseudorandom numbers. We attempted in the GPU version to replace the regular sin, cos, etc., with the less-accurate versions that are more efficient, but we found that the gain from that is relatively small and is not worth the loss of accuracy. 
For the GPU-based version we obtained relatively high percentage of divergence within warps, which means that some logical statements are resolved differently within threads of the same warp and there is substantial loss of performance. So far we have not been able to re-order the computation so as to avoid it.<br /> <br /> === Communication ===<br /> <br /> The communication for this application is not critical in the sense that the communication takes less than 10% of the execution time.<br /> <br /> === I/O ===<br /> <br /> The input for the application is small, containing the parameters of the problem at hand. The output is written out at the end of the computation and its size depends on the parameters. For a reasonable size of the domain the output is in the order of several megabytes. More accurate mesh is reasonable only for smaller evolution times and the output size will be proportional to the size of the mesh<br /> <br /> === CPU and cache ===<br /> <br /> We believe that most of the computations of the CPU-based version fit in the cache for the Intel-based version. For the PowerPC processors of the Blue Gene/P some lookup operations when sampling the random variables use the main memory and thus entice higher latency. For the GPU-based version the situation is similar, since some of the tables are larger than the size of the so-called shared-memory. In both cases, the overall significance of these operations is less than 5%. <br /> <br /> === Analysis ===<br /> <br /> From our testing we concluded that hyperthreading should be used when available, production Tesla cards have much higher performance than essentially gaming cards like GTX 295, two passes of compilation should be used for the Intel compiler targeting Intel CPUs and that the application is scalable to the maximum number of available cores/threads at our disposable. For future work it remains to find an efficient strategy of reordering of the computations on the GPUs in order to avoid warp divergence. For the CPU-based version we have also developed an MPI meta-program that measures the variation and uses genetic algorithm (from galib library) to optimise the transition density. This step will be added as a pre-processing stage of the program in order to provide some speedup in order of 20% to the overall computations, but to do so we need to find the right balance between this stage and the main computational stage.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/MSBP MSBP 2013-01-03T14:27:36Z <p>Lifesci: /* Infrastructure Usage */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Modeling of some biochemical processes with the purpose of realization of their thin and purposeful synthesis''<br /> * Application's acronym: ''MSBP''<br /> * Virtual Research Community: ''Life Sciences'' <br /> * Scientific contact: ''Jumber Kereselidze, Ramaz Kvatadze ramaz[at]grena.ge''<br /> * Technical contact: ''George Mikuchadze gmikuchadze[at]gmail.com''<br /> * Developers: ''Scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University''<br /> * Web site: http://wiki.hp-see.eu/index.php/MSBP<br /> <br /> == Short Description ==<br /> <br /> One of the priority directions of modern natural sciences is the research and creation of an opportunity of realization of thin end purposeful synthesis of nucleotide bases. 
Solution of this problem is directly connected to application of modern methods of quantum chemistry (DFT- Density Function Theory) and molecular mechanics. The scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University during last years are engaged in research of modeling of transformations of biochemical macromolecular systems (amino acids, proteins and DNA) with the use of the appropriate computer programs (program package „ PRIRODA – 04“, P6, P32 - Moscow State University). The main characteristics of the PRIRODA quantum-chemical program designed for the study of complex molecular systems by the density functional theory, at the MP2, MP3, and MP4 levels of multiparticle perturbation theory, and by the coupled-cluster single and double excitations method (CCSD) with the application of parallel computing.<br /> <br /> == Problems Solved ==<br /> <br /> The energy characteristics of the tautomeric transformations of cytosine, thymine, and uracil have been calculated within the framework of the quantum chemistry theory of functional density. It was obtained that the directions of the tautomeric conversions are characterized by energies of activation calculated according to the theory of functional density.<br /> The published data on the prototropic tautomerism of some carbonyl and nitrogen-containing acyclic and heterocyclic compounds are systematized. Mechanisms of the intramolecular and intermolecular proton transfer in tautomerisation reactions was considered. On the basis of the results of semiempirical and quantum-chemical calculations, preference is given to an intermolecular collective (dimeric, trimeric, tetrameric or oligomeric) mechanism. A new approach to the description of the solvent effect on the prototropic tautomeric equilibrium was proposed.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The solution of this problem is directly connected to application of modern methods of quantum chemistry (DFT- Density Function Theory) and molecular mechanics. Obtained results will be important for the prediction of denaturation of DNA. Working on this project will significantly improve research capacity of the scientists involved and will rise educational level in biophysical chemistry at the Tbilisi State University and Sokhumi State University. Researches will improve their experience in participation in European Programmes and will contribute to the integration of the Georgian research potential to the European Research Area.<br /> <br /> == Collaborations ==<br /> <br /> * Tbilisi Statae University, Georgia<br /> * Sokhumi State University, Georgia<br /> * Moscow State University, Russia<br /> <br /> == Beneficiaries ==<br /> <br /> Primary beneficiaries will be research groups from Tbilisi, Sokhumi and Moscow State Universities, however obtained results can be used by all scientists working on realization of thin end purposeful synthesis of nucleotide bases. 
Students involved in this research will gain experience in scientific collaboration.<br /> <br /> == Number of users ==<br /> <br /> 9<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''M1''<br /> * Start of beta stage: ''M6''<br /> * Start of testing stage: ''M8''<br /> * Start of deployment stage: ''M11''<br /> * Start of production stage: ''M15''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''From 8 to up to 64''<br /> * Minimum RAM/core required: ''1 GB/24''<br /> * Storage space during a single run: ''200 - 500 MB''<br /> * Long-term data storage: ''not required''<br /> * Total core hours required: ''not clear yet''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, Fortran''<br /> * Parallel programming paradigm: ''MPI/OpenMP''<br /> * Main parallel code: ''OpenMPI/OpenMP''<br /> * Pre/post processing code: ''in-house development, C''<br /> * Application tools and libraries: ''Intel C/Fortran compilers, GCC/GFortran, PGI Fortran''<br /> <br /> == Usage Example ==<br /> <br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''NCIT-Cluster'' <br /> ** Applied for access on: ''04.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''32 cores''<br /> * Accessed production systems:<br /> # ''HPC centre in Debrecen (Debrecen SC)''<br /> #* Applied for access on: ''07.2012''<br /> #* Access granted on: ''08.2012''<br /> #* Achieved scalability: ''32 cores''<br /> * Porting activities: ''The application has been successfully ported at NCIT-Cluster. George Mikuchadze was assisted by Mihnea Dulea and then by Emil Slusanschi Associate Professor of the Department of Computer Science and Engineering of University Politehnica of Bucharest. In August 2012 application was successfully ported at HPC centre in Debrecen.''<br /> * Scalability studies: ''Tests on 8, 16, 24, 32 and 64 cores.''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: '' After successful deployment on 8 cores benchmarking was initiated for 16, 32 and 64 cores.''<br /> * Other issues: ''Further study for higher scaling is still required.''<br /> <br /> == Achieved Results ==<br /> <br /> * The quantum-chemical modeling of a proton transfer in nitrogen containing biological active compounds using the modern non empirical method - Density Function Theory. As a result of calculations of the energetic, electronic and structural characteristics of protons transfer in the nucleotide bases the mutation processes in DNA are quantitatively described. The quantum-chemical model of the stacking and pentameric mechanisms of the tautomeric transformations of the heterocyclic compounds is constructed.<br /> <br /> == Publications ==<br /> <br /> * T. Zarqua, J. Kereselidze and Z. Pachulia. Quantum-chemical description of the influenceof electronic effects of proton transfer in guanine-cytosine base pairs. J.Biol. Phys. Chem., v.10, pp.71-73 (2010).<br /> * J. Kereselidze, T. Zarqua, Z. Paculia, M. Kvaraia. Quantum Chemical Modeling of the Mechanism of Formation of the Peptide Bond.International Conference on Computational Biology. Tokyo, Japan May 26-28, 2010.WASET, 65, p.1469 (2010).<br /> * M. Kvaraia, J. Kereselidze, Z. Pachulia and T. Zarqua. Quantum-Chemical Study of the Solvent Effect on Process of a Proton Transfer in Nucleotide Bases. Proceed. 
Georgian NA Sciences, v. 36, pp 306-308 (2010).<br /> * J. Kereselidze, Z. Pachulia and M. Kvaraia. Quantum-chemical modeling of tendency of DNA to denaturetion. J.Biol. Phys. Chem., 11, 51-53 (2011).<br /> <br /> <br /> == Foreseen Activities ==<br /> <br /> * DNA tendency to denaturation is stipulated by the elevation of ethanol’s concentration in the ambient. The proton transfer between nucleobases of DNA (Adenine-Thimine, Guanine-Cytosine) causes rare tautomeric transformation of the nucleobases pair, which in turn increases both probability of denaturation and frequency of mutation. In the next runs we will increase number of nucleobases (from 4 to 8) in the molecular structures. <br /> * Preparation of publication based on results of simulations on HP-SEE infrastructure.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/MSBP MSBP 2013-01-03T14:26:08Z <p>Lifesci: /* Resource Requirements */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Modeling of some biochemical processes with the purpose of realization of their thin and purposeful synthesis''<br /> * Application's acronym: ''MSBP''<br /> * Virtual Research Community: ''Life Sciences'' <br /> * Scientific contact: ''Jumber Kereselidze, Ramaz Kvatadze ramaz[at]grena.ge''<br /> * Technical contact: ''George Mikuchadze gmikuchadze[at]gmail.com''<br /> * Developers: ''Scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University''<br /> * Web site: http://wiki.hp-see.eu/index.php/MSBP<br /> <br /> == Short Description ==<br /> <br /> One of the priority directions of modern natural sciences is the research and creation of an opportunity of realization of thin end purposeful synthesis of nucleotide bases. Solution of this problem is directly connected to application of modern methods of quantum chemistry (DFT- Density Function Theory) and molecular mechanics. The scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University during last years are engaged in research of modeling of transformations of biochemical macromolecular systems (amino acids, proteins and DNA) with the use of the appropriate computer programs (program package „ PRIRODA – 04“, P6, P32 - Moscow State University). The main characteristics of the PRIRODA quantum-chemical program designed for the study of complex molecular systems by the density functional theory, at the MP2, MP3, and MP4 levels of multiparticle perturbation theory, and by the coupled-cluster single and double excitations method (CCSD) with the application of parallel computing.<br /> <br /> == Problems Solved ==<br /> <br /> The energy characteristics of the tautomeric transformations of cytosine, thymine, and uracil have been calculated within the framework of the quantum chemistry theory of functional density. It was obtained that the directions of the tautomeric conversions are characterized by energies of activation calculated according to the theory of functional density.<br /> The published data on the prototropic tautomerism of some carbonyl and nitrogen-containing acyclic and heterocyclic compounds are systematized. Mechanisms of the intramolecular and intermolecular proton transfer in tautomerisation reactions was considered. On the basis of the results of semiempirical and quantum-chemical calculations, preference is given to an intermolecular collective (dimeric, trimeric, tetrameric or oligomeric) mechanism. 
A new approach to the description of the solvent effect on the prototropic tautomeric equilibrium was proposed.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The solution of this problem is directly connected to application of modern methods of quantum chemistry (DFT- Density Function Theory) and molecular mechanics. Obtained results will be important for the prediction of denaturation of DNA. Working on this project will significantly improve research capacity of the scientists involved and will rise educational level in biophysical chemistry at the Tbilisi State University and Sokhumi State University. Researches will improve their experience in participation in European Programmes and will contribute to the integration of the Georgian research potential to the European Research Area.<br /> <br /> == Collaborations ==<br /> <br /> * Tbilisi Statae University, Georgia<br /> * Sokhumi State University, Georgia<br /> * Moscow State University, Russia<br /> <br /> == Beneficiaries ==<br /> <br /> Primary beneficiaries will be research groups from Tbilisi, Sokhumi and Moscow State Universities, however obtained results can be used by all scientists working on realization of thin end purposeful synthesis of nucleotide bases. Students involved in this research will gain experience in scientific collaboration.<br /> <br /> == Number of users ==<br /> <br /> 9<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''M1''<br /> * Start of beta stage: ''M6''<br /> * Start of testing stage: ''M8''<br /> * Start of deployment stage: ''M11''<br /> * Start of production stage: ''M15''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''From 8 to up to 64''<br /> * Minimum RAM/core required: ''1 GB/24''<br /> * Storage space during a single run: ''200 - 500 MB''<br /> * Long-term data storage: ''not required''<br /> * Total core hours required: ''not clear yet''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, Fortran''<br /> * Parallel programming paradigm: ''MPI/OpenMP''<br /> * Main parallel code: ''OpenMPI/OpenMP''<br /> * Pre/post processing code: ''in-house development, C''<br /> * Application tools and libraries: ''Intel C/Fortran compilers, GCC/GFortran, PGI Fortran''<br /> <br /> == Usage Example ==<br /> <br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''NCIT-Cluster'' <br /> ** Applied for access on: ''04.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''64 cores''<br /> * Accessed production systems:<br /> # ''HPC centre in Debrecen (Debrecen SC)''<br /> #* Applied for access on: ''07.2012''<br /> #* Access granted on: ''08.2012''<br /> #* Achieved scalability: ''64 cores''<br /> * Porting activities: ''The application has been successfully ported at NCIT-Cluster. George Mikuchadze was assisted by Mihnea Dulea and then by Emil Slusanschi Associate Professor of the Department of Computer Science and Engineering of University Politehnica of Bucharest. 
In August 2012 application was successfully ported at HPC centre in Debrecen.''<br /> * Scalability studies: ''Tests on 8, 16, 32 and 64 cores.''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: '' After successful deployment on 8 cores benchmarking was initiated for 16, 32 and 64 cores.''<br /> * Other issues: ''Further study for higher scaling is still required.''<br /> <br /> == Achieved Results ==<br /> <br /> * The quantum-chemical modeling of a proton transfer in nitrogen containing biological active compounds using the modern non empirical method - Density Function Theory. As a result of calculations of the energetic, electronic and structural characteristics of protons transfer in the nucleotide bases the mutation processes in DNA are quantitatively described. The quantum-chemical model of the stacking and pentameric mechanisms of the tautomeric transformations of the heterocyclic compounds is constructed.<br /> <br /> == Publications ==<br /> <br /> * T. Zarqua, J. Kereselidze and Z. Pachulia. Quantum-chemical description of the influenceof electronic effects of proton transfer in guanine-cytosine base pairs. J.Biol. Phys. Chem., v.10, pp.71-73 (2010).<br /> * J. Kereselidze, T. Zarqua, Z. Paculia, M. Kvaraia. Quantum Chemical Modeling of the Mechanism of Formation of the Peptide Bond.International Conference on Computational Biology. Tokyo, Japan May 26-28, 2010.WASET, 65, p.1469 (2010).<br /> * M. Kvaraia, J. Kereselidze, Z. Pachulia and T. Zarqua. Quantum-Chemical Study of the Solvent Effect on Process of a Proton Transfer in Nucleotide Bases. Proceed. Georgian NA Sciences, v. 36, pp 306-308 (2010).<br /> * J. Kereselidze, Z. Pachulia and M. Kvaraia. Quantum-chemical modeling of tendency of DNA to denaturetion. J.Biol. Phys. Chem., 11, 51-53 (2011).<br /> <br /> <br /> == Foreseen Activities ==<br /> <br /> * DNA tendency to denaturation is stipulated by the elevation of ethanol’s concentration in the ambient. The proton transfer between nucleobases of DNA (Adenine-Thimine, Guanine-Cytosine) causes rare tautomeric transformation of the nucleobases pair, which in turn increases both probability of denaturation and frequency of mutation. In the next runs we will increase number of nucleobases (from 4 to 8) in the molecular structures. <br /> * Preparation of publication based on results of simulations on HP-SEE infrastructure.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DNAMA DNAMA 2012-10-04T12:33:48Z <p>Lifesci: /* Publications */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''DNA Multicore Analysis''<br /> * Application's acronym: ''DNAMA''<br /> * Virtual Research Community: ''Life Sciences ''<br /> * Scientific contact: ''Danilo Mrdak, danilomrdak@gmail.com''<br /> * Technical contact: ''Luka Filipovic, lukaf@ac.me''<br /> * Developers: ''Center of Information System &amp; Faculty of Natural Sciences - University of Montenegro''<br /> * Web site: http://wiki.hp-see.eu/index.php/DNAMA<br /> <br /> == Short Description ==<br /> <br /> Using of Network Cluster Web with potential of super-computer performances for DNA sequences analyzing will give us unlimited potential for DNA research. This will give us unlimited potential in term of analyzed sequence number and time consumption for analysis to be carried out. 
Since much of the DNA comparison and analysis software relies on Monte Carlo and Markov chain algorithms whose running time grows with the number of sequences, a supercomputing resource will speed up our work and make robust, comprehensive analyses possible. Using all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene…) will give us a more detailed insight into their relationships and phylogeny.<br /> <br /> The DNAMA application is based on the RAxML application from The Exelixis Lab.<br /> <br /> == Problems Solved ==<br /> <br /> The computing resources available through networked computer clusters will allow us to include as many samples in the analysis as we wish, and those analyses will finish within one to a few hours. Moreover, we will try to modify the algorithms in order to perform multi-locus analyses and obtain a consensus tree that suggests the most probable pathways of phylogeny with a much higher level of confidence.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Use of a networked cluster with supercomputer-level performance for DNA sequence comparison analysis will give unlimited potential for DNA research. Use of all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene…) will give a more detailed insight into their relationships and phylogeny (RAxML). <br /> <br /> Problems solved: analyze as many samples as possible within a few hours; modify the algorithms in order to perform multi-locus analyses and obtain a consensus tree that suggests the most probable pathways of phylogeny with a much higher level of confidence.<br /> <br /> Impact: whether reliable computing resources are available is one of the main obstacles to making scientific breakthroughs in the fields of Molecular Biology and Phylogeny (Evolution). HPC allows for faster and more reliable results. Enhancement of competitiveness in terms of regional and European collaboration.
Drawing the attention of national stakeholders in future building of Montenegro as a &quot;society of knowledge”<br /> <br /> == Collaborations ==<br /> <br /> * <br /> <br /> == Beneficiaries ==<br /> <br /> * University of Montenegro - Faculty of Natural sciences - Biology Department<br /> <br /> == Number of users ==<br /> 15-20 from Faculty of natural Sciences, University of Montenegro<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''before the start of the project - finished by RAxML developers''<br /> * Start of alpha stage: ''before the start of the project''<br /> * Start of beta stage: ''09.2010''<br /> * Start of testing stage: ''03.2011''<br /> * Start of deployment stage: ''11.2011''<br /> * Start of production stage: ''11.2011''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''up to 512 cores''<br /> * Minimum RAM/core required: ''1 GB''<br /> * Storage space during a single run: ''256 MB''<br /> * Long-term data storage: ''1 GB''<br /> * Total core hours required: ''.''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C''<br /> * Parallel programming paradigm: ''MPI, OpenMPI''<br /> * Main parallel code: ''MPI, OpenMPI''<br /> * Pre/post processing code: ''C, Dendroscope (for visualization for results)''<br /> * Application tools and libraries: ''RAxML''<br /> <br /> == Usage Example ==<br /> <br /> Execution from command line :<br /> /opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''HPCG, Bulgaria''<br /> ** Applied for access on: ''05.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''up to 256 cores''<br /> * Accessed production systems:<br /> # ''Debrecen SC, NIIF''<br /> #* Applied for access on: ''02.2012''<br /> #* Access granted on: ''03.2012''<br /> #* Achieved scalability: ''240 cores''<br /> # ''Pesc SC, NIIF, HU''<br /> #* Applied for access on: ''02.2012''<br /> #* Access granted on: ''03.2012''<br /> #* Achieved scalability: ''... cores''<br /> * Porting activities: ''...''<br /> * Scalability studies: ''...''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''.''<br /> * Other issues: ''.''<br /> <br /> == Achieved Results ==<br /> <br /> <br /> == Publications ==<br /> <br /> * Luka Filipović, Danilo Mrdak and Božo Krstajić, &quot;Performance evaluation of computational phylogeny software in parallel computing environment&quot;, ICT Innovations 2012<br /> <br /> == Foreseen Activities ==<br /> <br /> * data analysis for new DNA sequences<br /> * multigene analysys (mt DNA, Cut B. 
… ) and 3 simulated genes<br /> * Benchmark activities for MPI, PThreads &amp; Hybrid version</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DNAMA DNAMA 2012-10-04T12:30:42Z <p>Lifesci: /* Infrastructure Usage */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''DNA Multicore Analysis''<br /> * Application's acronym: ''DNAMA''<br /> * Virtual Research Community: ''Life Sciences ''<br /> * Scientific contact: ''Danilo Mrdak, danilomrdak@gmail.com''<br /> * Technical contact: ''Luka Filipovic, lukaf@ac.me''<br /> * Developers: ''Center of Information System &amp; Faculty of Natural Sciences - University of Montenegro''<br /> * Web site: http://wiki.hp-see.eu/index.php/DNAMA<br /> <br /> == Short Description ==<br /> <br /> Using of Network Cluster Web with potential of super-computer performances for DNA sequences analyzing will give us unlimited potential for DNA research. This will give us unlimited potential in term of analyzed sequence number and time consumption for analysis to be carried out. As many of DNA comparing and analyzing software use Monte Carlo and Markov chain algorithms that are time consuming regarding to sequence numbers, super-computer resource will faster our job and make the robust and overall analysis possible.Using of all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrom b gene…) will give us more detailed insight in their relationships and phylogeny relationships.<br /> <br /> DNAMA application is based on RAxML application from The Exelixis Lab.<br /> <br /> == Problems Solved ==<br /> <br /> The working resource that is possible to use trough network computer clustering will allow us to put in analysis as much samples as we wish and that those analysis will be finished in one to few hours. Moreover, we will tray to modified the algorithms in order to have multi-loci analysis to get a consensus three that will suggest the most possible pathways of phylogeny with much higher level of confidence<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Use of Network Cluster Web with potential of super-computer performances for DNA sequence comparison analysis will give unlimited potential for DNA research. Use of all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrom b gene…) will give a more detailed insight in their relationships and phylogeny relationships (RAXML). <br /> <br /> Problems solved : Analyze as many samples as possible within a few hours. Modification of algorithms in order to have multi-loci analysis to get a consensus tree that will suggest the most probable pathways of phylogeny with much higher level of confidence.<br /> <br /> Impact : Assessing whether computer resources are reliable is one of the main obstacle for making scientific breakthroughs in field of Molecular Biology and Phylogeny (Evolution). HPC allows for faster and more reliable results. Enhancement of competitiveness in terms of regional and European collaboration. 
Drawing the attention of national stakeholders in future building of Montenegro as a &quot;society of knowledge”<br /> <br /> == Collaborations ==<br /> <br /> * <br /> <br /> == Beneficiaries ==<br /> <br /> * University of Montenegro - Faculty of Natural sciences - Biology Department<br /> <br /> == Number of users ==<br /> 15-20 from Faculty of natural Sciences, University of Montenegro<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''before the start of the project - finished by RAxML developers''<br /> * Start of alpha stage: ''before the start of the project''<br /> * Start of beta stage: ''09.2010''<br /> * Start of testing stage: ''03.2011''<br /> * Start of deployment stage: ''11.2011''<br /> * Start of production stage: ''11.2011''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''up to 512 cores''<br /> * Minimum RAM/core required: ''1 GB''<br /> * Storage space during a single run: ''256 MB''<br /> * Long-term data storage: ''1 GB''<br /> * Total core hours required: ''.''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C''<br /> * Parallel programming paradigm: ''MPI, OpenMPI''<br /> * Main parallel code: ''MPI, OpenMPI''<br /> * Pre/post processing code: ''C, Dendroscope (for visualization for results)''<br /> * Application tools and libraries: ''RAxML''<br /> <br /> == Usage Example ==<br /> <br /> Execution from command line :<br /> /opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''HPCG, Bulgaria''<br /> ** Applied for access on: ''05.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''up to 256 cores''<br /> * Accessed production systems:<br /> # ''Debrecen SC, NIIF''<br /> #* Applied for access on: ''02.2012''<br /> #* Access granted on: ''03.2012''<br /> #* Achieved scalability: ''240 cores''<br /> # ''Pesc SC, NIIF, HU''<br /> #* Applied for access on: ''02.2012''<br /> #* Access granted on: ''03.2012''<br /> #* Achieved scalability: ''... cores''<br /> * Porting activities: ''...''<br /> * Scalability studies: ''...''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''.''<br /> * Other issues: ''.''<br /> <br /> == Achieved Results ==<br /> <br /> <br /> == Publications ==<br /> <br /> * <br /> <br /> == Foreseen Activities ==<br /> <br /> * data analysis for new DNA sequences<br /> * multigene analysys (mt DNA, Cut B. 
… ) and 3 simulated genes<br /> * Benchmark activities for MPI, PThreads &amp; Hybrid version</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:SIMPLE-TS_Picture2.png File:SIMPLE-TS Picture2.png 2012-09-28T14:06:11Z <p>Lifesci: uploaded a new version of &amp;quot;File:SIMPLE-TS Picture2.png&amp;quot;</p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:SIMPLE-TS_Picture2.png File:SIMPLE-TS Picture2.png 2012-09-28T14:04:29Z <p>Lifesci: uploaded a new version of &amp;quot;File:SIMPLE-TS Picture2.png&amp;quot;</p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/SIMPLE-TS_2D SIMPLE-TS 2D 2012-09-28T14:03:52Z <p>Lifesci: /* Running on Several HP-SEE Centres */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Semi-Implicit Method for Pressure-Linked Equations - Time Step''<br /> * Application's acronym: ''SIMPLE-TS 2D''<br /> * Virtual Research Community: ''Computational Physics [A2]''<br /> * Scientific contact: ''Dr. Kiril Shterev, kshterev@imbm.bas.bg''<br /> * Technical contact: ''Dr. Kiril Shterev, kshterev@imbm.bas.bg''<br /> * Developers: ''Dr. Kiril Shterev and Prof. Stefan Stefanov, Department of Mathematical Modeling and Numerical Simulations, Institute of Mechanics - BAS, Bulgaria''<br /> * Web site: http://www.imbm.bas.bg/index.php/en_US/pressure-based-finite-volume-method<br /> <br /> == Short Description ==<br /> <br /> Micro mechanical devices are rapidly emerging technology, where new potential applications are continuously being found. A simulation of internal and external gas flows in or aroundthese devices is important for their design. The flow motions described on the basis of the Navier-Stokes-Fourier compressible equations with diffusion coefficients determined by the first approximation of the Chapman-Enskog theory for the low Knudsen numbers. The gas flows are characterized with areas of low speed flows (low Reynolds numbers). The flows can go from high speed supersonic to very low speed regimes down to the incompressible limit. This made the pressure based numerical methods very suitable to be used for calculation of this kind of gas flows. The finite volume method SIMPLE-TS (a modification of SIMPLE created by K. S. Shterev and S. K. Stefanov) is used.<br /> <br /> == Problems Solved ==<br /> <br /> Steady and unsteady flow past square in a microchannel.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Micro mechanical devices are rapidly emerging technology, where new potential applications are continuously being found. A simulation of internal and external gas flows in or aroundthese devices is important for their design.<br /> <br /> == Collaborations ==<br /> <br /> * Institute of Mechanics - BAS<br /> * Institute of Information and Communication Technologies - BAS<br /> <br /> == Beneficiaries ==<br /> <br /> * <br /> <br /> == Number of users ==<br /> 2<br /> <br /> == Development Plan ==<br /> <br /> * Concept: The concept was done before the project started<br /> * Start of alpha stage: It was done before the project started <br /> * Start of beta stage: It was done before the project started<br /> * Start of testing stage: M1<br /> * Start of deployment stage: М9<br /> * Start of production stage: М12<br /> <br /> SIMPLE-TS 2D is fully developed MPI application for calculation of 2 dimensional gas microflows. 
The next step of the development is to extend the application to the calculation of 3-dimensional gas microflows.<br /> Start of development of the 3-dimensional application:<br /> * Start of concept stage: M13<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''Strongly depends on the problem being calculated (from 1 to 600)''<br /> * Minimum RAM/core required: ''Strongly depends on the problem being calculated (from 10 MB to 2 GB per core)''<br /> * Storage space during a single run: ''Strongly depends on the problem being calculated (from 10 MB to 200 GB)''<br /> * Long-term data storage: ''50 GB''<br /> * Total core hours required: 400 000<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C++''<br /> * Parallel programming paradigm: ''SIMD''<br /> * Main parallel code: ''MPI''<br /> * Pre/post processing code: ''None''<br /> * Application tools and libraries: ''C++ compiler''<br /> <br /> == Usage Example ==<br /> <br /> Source code, some examples and publications describing the algorithm and the parallel organisation are freely available at: http://www.imbm.bas.bg/index.php/en_US/pressure-based-finite-volume-method<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: HPCG<br /> ** Applied for access on: 09.2010<br /> ** Access granted on: 09.2010<br /> ** Achieved scalability: ''.''<br /> * Accessed production systems:<br /> # BlueGene/P<br /> #* Applied for access on: 10.2010<br /> #* Access granted on: 11.2010<br /> #* Achieved scalability: ''.''<br /> <br /> * Porting activities: Access has been provided to the development team not only to a high-performance cluster with Infiniband interconnect but also to the IBM Blue Gene machine and to a small cluster with GPGPU capability. The issues that arose when porting the application to these architectures were discussed and paths to mitigate the problems were proposed. In the case of the IBM Blue Gene the main issue is the high number of processes that need to communicate with one another, which led to the idea of moving to a hybrid OpenMP-MPI programming model in order to achieve scalability beyond 1024 CPU cores.<br /> * Scalability studies: The scalability of this application was studied at different HPC centres. Good parallel efficiency is obtained on the HPCG cluster up to 450 cores, on BlueGene/P up to 512 cores, at the Debrecen HPC centre up to 700 cores, and at the Szeged HPC centre up to 700 cores.<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: Two versions of the SIMPLE-TS 2D application were created: (i) before optimization and (ii) after optimization. The speedup of the MPI realizations of both versions was measured on the HPCG cluster at IICT-BAS. The results for the speedup up to 100 cores and for two kinds of meshes, (i) 500x100 cells and (ii) 1000x200 cells, are shown below. <br /> <br /> [[File:Simple-TS-Figure1.PNG]]<br /> <br /> Speed-up tests of a real problem on HPC clusters: mesh of 150 000x30 cells = 4.5 million cells<br /> <br /> [[File:SIMPLE-TS_Picture1.png‎|400px]] [[File:SIMPLE-TS_Picture2.png‎|400px]]<br /> <br /> Speed-up (left) and calculation time (right) on HPCG, Szeged, Debrecen, IBM BlueGene/P<br /> <br /> <br /> * Other issues: none<br /> <br /> == Achieved Results ==<br /> A new version of the algorithm was obtained after the optimizations and a scalability study was performed.
Results for speed-up of parallel realization of the application after/before optimization of the algorithm for different meshes were compared. Problems connection with steady and unsteady flow past square in a microchannel were solved.<br /> <br /> == Publications ==<br /> <br /> * Kiril S. Shterev, Stefan K. Stefanov, and Emanouil I. Atanassov, A parallel algorithm with improved performance of Finite Volume Method (SIMPLE-TS), ”, 8th LSSC’11, June 6-10, 2011, Sozopol, Bulgaria, accepted to LNCS, 8 pages, 2011.<br /> * K. Shterev, Comparison of Some Approximation Schemes for Convective Terms for Solving Gas Flow past a Square in a Microchannel, AMITANS 2012, June 2012, Varna, Bulgaria (submitted)<br /> <br /> == Foreseen Activities ==<br /> * The application now is porting on Blue Gene/P and developers expect to obtain new scientific results using this supercomputer.<br /> * Derivation of a 3D numerical equation using the system of partial differential equation describing unsteady gas microflows.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/SIMPLE-TS_2D SIMPLE-TS 2D 2012-09-28T14:03:23Z <p>Lifesci: /* Running on Several HP-SEE Centres */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Semi-Implicit Method for Pressure-Linked Equations - Time Step''<br /> * Application's acronym: ''SIMPLE-TS 2D''<br /> * Virtual Research Community: ''Computational Physics [A2]''<br /> * Scientific contact: ''Dr. Kiril Shterev, kshterev@imbm.bas.bg''<br /> * Technical contact: ''Dr. Kiril Shterev, kshterev@imbm.bas.bg''<br /> * Developers: ''Dr. Kiril Shterev and Prof. Stefan Stefanov, Department of Mathematical Modeling and Numerical Simulations, Institute of Mechanics - BAS, Bulgaria''<br /> * Web site: http://www.imbm.bas.bg/index.php/en_US/pressure-based-finite-volume-method<br /> <br /> == Short Description ==<br /> <br /> Micro mechanical devices are rapidly emerging technology, where new potential applications are continuously being found. A simulation of internal and external gas flows in or aroundthese devices is important for their design. The flow motions described on the basis of the Navier-Stokes-Fourier compressible equations with diffusion coefficients determined by the first approximation of the Chapman-Enskog theory for the low Knudsen numbers. The gas flows are characterized with areas of low speed flows (low Reynolds numbers). The flows can go from high speed supersonic to very low speed regimes down to the incompressible limit. This made the pressure based numerical methods very suitable to be used for calculation of this kind of gas flows. The finite volume method SIMPLE-TS (a modification of SIMPLE created by K. S. Shterev and S. K. Stefanov) is used.<br /> <br /> == Problems Solved ==<br /> <br /> Steady and unsteady flow past square in a microchannel.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Micro mechanical devices are rapidly emerging technology, where new potential applications are continuously being found. 
A simulation of internal and external gas flows in or aroundthese devices is important for their design.<br /> <br /> == Collaborations ==<br /> <br /> * Institute of Mechanics - BAS<br /> * Institute of Information and Communication Technologies - BAS<br /> <br /> == Beneficiaries ==<br /> <br /> * <br /> <br /> == Number of users ==<br /> 2<br /> <br /> == Development Plan ==<br /> <br /> * Concept: The concept was done before the project started<br /> * Start of alpha stage: It was done before the project started <br /> * Start of beta stage: It was done before the project started<br /> * Start of testing stage: M1<br /> * Start of deployment stage: М9<br /> * Start of production stage: М12<br /> <br /> SIMPLE-TS 2D is fully developed MPI application for calculation of 2 dimensional gas microflows. The next step of development is to extend the application for calculation of 3 dimensional gas microflows.<br /> Start development of 3 dimentional application.<br /> * Start of concept stage: M13<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''Strongly depend from calculated problem (from 1 to 600)''<br /> * Minimum RAM/core required: ''Strongly depend from calculated problem (from 10MB to 2GB per core)''<br /> * Storage space during a single run: ''Strongly depend from calculated problem (from 10MB to 200GB)''<br /> * Long-term data storage: ''50GB''<br /> * Total core hours required: 400 000<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C++''<br /> * Parallel programming paradigm: ''SIMD''<br /> * Main parallel code: ''MPI''<br /> * Pre/post processing code: ''None''<br /> * Application tools and libraries: ''C++ compiler''<br /> <br /> == Usage Example ==<br /> <br /> Source code, some examples and publications describing the algorithm and parallel organisations are freely available on: http://www.imbm.bas.bg/index.php/en_US/pressure-based-finite-volume-method<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: HPCG<br /> ** Applied for access on: 09.2010<br /> ** Access granted on: 09.2010<br /> ** Achieved scalability: ''.''<br /> * Accessed production systems:<br /> # BlueGene/P<br /> #* Applied for access on: 10.2010<br /> #* Access granted on: 11.2010<br /> #* Achieved scalability: ''.''<br /> <br /> * Porting activities: Access has been provided to the development team not only to high performance cluster with Infiniband interconnect but also to the IBM Blue Gene machine and to a small cluster with GPGPU capability. The issues that arose when porting the application to these architectures were discussed and paths to mitigate the problems were proposed. In the case of IBM Blue Gene the main issue is the high number of processes that need to communicate with one another, which lead to the idea to move to a hybrid OpenMPI-MPI programming model in order to achieve scalability beyoung 1024 CPU cores.<br /> * Scalability studies: The scalability of this application was stadied on different HPC centers. A good parallel efficiency is optained on HPCG cluster up to 450 cores, on BlueGene/P up to 512 cores, on Debrecen HPC center up to 700 cores, and on Szeged HPC center up to 700 cores.<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: Two versions of the SIMPE-TS-2D application were created –(i) before optimization and (ii) after optimization. 
The speedup after the MPI realizations of the both versions was measured on HPCG cluster at IICT-BAS. The results for speedup up to 100 cores and for two kind of meshes: (i) 500X100 cells and (ii) 1000X200 cells are shown below. <br /> <br /> [[File:Simple-TS-Figure1.PNG]]<br /> <br /> Speed-up tests of real problem on HP Clusters: Mesh 150 000x30 cells = 4.5million cells<br /> <br /> [[File:SIMPLE-TS_Picture1.png‎|400px|left]] [[File:SIMPLE-TS_Picture2.png‎|400px|thumb|left]]<br /> <br /> Sped-up (left) and calculation time (right) on HPCG , Szeged , Debrecen, IBM BlueGene/P<br /> <br /> <br /> * Other issues: no<br /> <br /> == Achieved Results ==<br /> New version of the algorithm after optimizations was obtained and scalability study was performed. Results for speed-up of parallel realization of the application after/before optimization of the algorithm for different meshes were compared. Problems connection with steady and unsteady flow past square in a microchannel were solved.<br /> <br /> == Publications ==<br /> <br /> * Kiril S. Shterev, Stefan K. Stefanov, and Emanouil I. Atanassov, A parallel algorithm with improved performance of Finite Volume Method (SIMPLE-TS), ”, 8th LSSC’11, June 6-10, 2011, Sozopol, Bulgaria, accepted to LNCS, 8 pages, 2011.<br /> * K. Shterev, Comparison of Some Approximation Schemes for Convective Terms for Solving Gas Flow past a Square in a Microchannel, AMITANS 2012, June 2012, Varna, Bulgaria (submitted)<br /> <br /> == Foreseen Activities ==<br /> * The application now is porting on Blue Gene/P and developers expect to obtain new scientific results using this supercomputer.<br /> * Derivation of a 3D numerical equation using the system of partial differential equation describing unsteady gas microflows.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:SIMPLE-TS_Picture1.png File:SIMPLE-TS Picture1.png 2012-09-28T13:59:54Z <p>Lifesci: uploaded a new version of &amp;quot;File:SIMPLE-TS Picture1.png&amp;quot;</p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/File:SIMPLE-TS_Picture1.png File:SIMPLE-TS Picture1.png 2012-09-28T13:56:57Z <p>Lifesci: uploaded a new version of &amp;quot;File:SIMPLE-TS Picture1.png&amp;quot;</p> <hr /> <div></div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/MSBP MSBP 2012-09-24T12:21:52Z <p>Lifesci: /* Infrastructure Usage */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Modeling of some biochemical processes with the purpose of realization of their thin and purposeful synthesis''<br /> * Application's acronym: ''MSBP''<br /> * Virtual Research Community: ''Life Sciences'' <br /> * Scientific contact: ''Jumber Kereselidze, Ramaz Kvatadze ramaz[at]grena.ge''<br /> * Technical contact: ''George Mikuchadze gmikuchadze[at]gmail.com''<br /> * Developers: ''Scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University''<br /> * Web site: http://wiki.hp-see.eu/index.php/MSBP<br /> <br /> == Short Description ==<br /> <br /> One of the priority directions of modern natural sciences is the research and creation of an opportunity of realization of thin end purposeful synthesis of nucleotide bases. Solution of this problem is directly connected to application of modern methods of quantum chemistry (DFT- Density Function Theory) and molecular mechanics. 
Lifesci http://hpseewiki.ipb.ac.rs/index.php/MSBP MSBP 2012-09-24T12:21:52Z <p>Lifesci: /* Infrastructure Usage */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Modeling of some biochemical processes with the purpose of realization of their thin and purposeful synthesis''<br /> * Application's acronym: ''MSBP''<br /> * Virtual Research Community: ''Life Sciences'' <br /> * Scientific contact: ''Jumber Kereselidze, Ramaz Kvatadze ramaz[at]grena.ge''<br /> * Technical contact: ''George Mikuchadze gmikuchadze[at]gmail.com''<br /> * Developers: ''Scientific groups of biophysical chemistry of the Tbilisi State University and the Sokhumi State University''<br /> * Web site: http://wiki.hp-see.eu/index.php/MSBP<br /> <br /> == Short Description ==<br /> <br /> One of the priority directions of modern natural sciences is the research and creation of an opportunity for the realization of the thin and purposeful synthesis of nucleotide bases. The solution of this problem is directly connected to the application of modern methods of quantum chemistry (DFT - Density Functional Theory) and molecular mechanics. The scientific groups of biophysical chemistry at the Tbilisi State University and the Sokhumi State University have in recent years been engaged in modeling transformations of biochemical macromolecular systems (amino acids, proteins and DNA) with the use of appropriate computer programs (program package „PRIRODA-04“, P6, P32 - Moscow State University). The PRIRODA quantum-chemical program is designed for the study of complex molecular systems by density functional theory, at the MP2, MP3 and MP4 levels of many-particle perturbation theory, and by the coupled-cluster singles and doubles method (CCSD), with the application of parallel computing.<br /> <br /> == Problems Solved ==<br /> <br /> The energy characteristics of the tautomeric transformations of cytosine, thymine and uracil have been calculated within the framework of density functional theory. It was found that the directions of the tautomeric conversions are characterized by the activation energies calculated according to this theory.<br /> The published data on the prototropic tautomerism of some carbonyl- and nitrogen-containing acyclic and heterocyclic compounds are systematized. Mechanisms of the intramolecular and intermolecular proton transfer in tautomerisation reactions were considered. On the basis of the results of semiempirical and quantum-chemical calculations, preference is given to an intermolecular collective (dimeric, trimeric, tetrameric or oligomeric) mechanism. A new approach to the description of the solvent effect on the prototropic tautomeric equilibrium was proposed.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> The solution of this problem is directly connected to the application of modern methods of quantum chemistry (DFT - Density Functional Theory) and molecular mechanics. The obtained results will be important for the prediction of DNA denaturation. Working on this project will significantly improve the research capacity of the scientists involved and will raise the educational level in biophysical chemistry at the Tbilisi State University and the Sokhumi State University. Researchers will improve their experience in participation in European Programmes and will contribute to the integration of the Georgian research potential into the European Research Area.<br /> <br /> == Collaborations ==<br /> <br /> * Tbilisi State University, Georgia<br /> * Sokhumi State University, Georgia<br /> * Moscow State University, Russia<br /> <br /> == Beneficiaries ==<br /> <br /> Primary beneficiaries will be research groups from the Tbilisi, Sokhumi and Moscow State Universities; however, the obtained results can be used by all scientists working on the realization of thin and purposeful synthesis of nucleotide bases.
Students involved in this research will gain experience in scientific collaboration.<br /> <br /> == Number of users ==<br /> <br /> 9<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''M1''<br /> * Start of beta stage: ''M6''<br /> * Start of testing stage: ''M8''<br /> * Start of deployment stage: ''M11''<br /> * Start of production stage: ''M15''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''From 24 up to 200''<br /> * Minimum RAM/core required: ''1 GB/24''<br /> * Storage space during a single run: ''200 - 500 MB''<br /> * Long-term data storage: ''not required''<br /> * Total core hours required: ''not clear yet''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, Fortran''<br /> * Parallel programming paradigm: ''MPI/OpenMP''<br /> * Main parallel code: ''OpenMPI/OpenMP''<br /> * Pre/post processing code: ''in-house development, C''<br /> * Application tools and libraries: ''Intel C/Fortran compilers, GCC/GFortran, PGI Fortran''<br /> <br /> == Usage Example ==<br /> <br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''NCIT-Cluster'' <br /> ** Applied for access on: ''04.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''64 cores''<br /> * Accessed production systems:<br /> # ''HPC centre in Debrecen (Debrecen SC)''<br /> #* Applied for access on: ''07.2012''<br /> #* Access granted on: ''08.2012''<br /> #* Achieved scalability: ''64 cores''<br /> * Porting activities: ''The application has been successfully ported to the NCIT-Cluster. George Mikuchadze was assisted by Mihnea Dulea and then by Emil Slusanschi, Associate Professor at the Department of Computer Science and Engineering of the University Politehnica of Bucharest. In August 2012 the application was successfully ported to the HPC centre in Debrecen.''<br /> * Scalability studies: ''Tests on 8, 16, 32 and 64 cores.''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''After successful deployment on 8 cores, benchmarking was initiated for 16, 32 and 64 cores.''<br /> * Other issues: ''Further study for higher scaling is still required.''<br /> <br /> == Achieved Results ==<br /> <br /> * Quantum-chemical modeling of proton transfer in nitrogen-containing biologically active compounds using the modern non-empirical method Density Functional Theory. As a result of calculations of the energetic, electronic and structural characteristics of proton transfer in the nucleotide bases, the mutation processes in DNA are quantitatively described. The quantum-chemical model of the stacking and pentameric mechanisms of the tautomeric transformations of the heterocyclic compounds is constructed.<br /> <br /> == Publications ==<br /> <br /> * T. Zarqua, J. Kereselidze and Z. Pachulia. Quantum-chemical description of the influence of electronic effects of proton transfer in guanine-cytosine base pairs. J. Biol. Phys. Chem., v. 10, pp. 71-73 (2010).<br /> * J. Kereselidze, T. Zarqua, Z. Paculia, M. Kvaraia. Quantum Chemical Modeling of the Mechanism of Formation of the Peptide Bond. International Conference on Computational Biology, Tokyo, Japan, May 26-28, 2010. WASET, 65, p. 1469 (2010).<br /> * M. Kvaraia, J. Kereselidze, Z. Pachulia and T. Zarqua. Quantum-Chemical Study of the Solvent Effect on the Process of a Proton Transfer in Nucleotide Bases. Proceed.
Georgian NA Sciences, v. 36, pp. 306-308 (2010).<br /> * J. Kereselidze, Z. Pachulia and M. Kvaraia. Quantum-chemical modeling of tendency of DNA to denaturation. J. Biol. Phys. Chem., 11, 51-53 (2011).<br /> <br /> <br /> == Foreseen Activities ==<br /> <br /> * The tendency of DNA to denaturation is stipulated by the elevation of the ethanol concentration in the environment. Proton transfer between the nucleobases of DNA (Adenine-Thymine, Guanine-Cytosine) causes rare tautomeric transformations of the nucleobase pairs, which in turn increase both the probability of denaturation and the frequency of mutation. In the next runs we will increase the number of nucleobases (from 4 to 8) in the molecular structures. <br /> * Preparation of a publication based on the results of simulations on the HP-SEE infrastructure.</div>
Lifesci http://hpseewiki.ipb.ac.rs/index.php/DiseaseGene DiseaseGene 2012-06-10T23:35:16Z <p>Lifesci: /* Publications */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''In-silico Disease Gene Mapper''<br /> * Application's acronym: ''DiseaseGene''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> A complex data mining and data processing tool using large-scale external open-access databases. The aim of the task is to port a data mining tool to the SEE-HPC infrastructure which can help researchers to do comparative analysis and to target candidate genes for further research of polygene type diseases. The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With a small modification the application can be used to target human genes too. <br /> <br /> == Problems Solved ==<br /> <br /> The implemented solution is capable of targeting candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and searches for unexplored potential target genes. With a small modification the application can be used to target human genes too.
The Grid's reliability parameters and response time (1-5 min) are not suitable for such a service.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Researchers in the region will be able to target candidate genes for further research of polygene type diseases.<br /> Creation of a data mining service on the SEE-HPC infrastructure, which can help researchers to do comparative analysis.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> <br /> == Beneficiaries ==<br /> <br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that 2-5 scientific groups (5-15 researchers) worldwide will use our service.<br /> <br /> == Number of users ==<br /> 6<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M19 (delayed due to storage access issues)''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128 – 256''<br /> * Minimum RAM/core required: ''4 - 8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''5-10 TB''<br /> * Total core hours required: ''1 300 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C/C++'' <br /> * Parallel programming paradigm: ''Clustered multiprocessing (e.g. using MPI) + Multiple serial jobs (data-splitting, parametric studies)'' (a sketch of such query splitting is given after this entry)<br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as gene mapper and sequence alignment applications) and enables indirect end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application-specific UIs we have used the Application Specific Module (ASM) API of gUSE, by which such customization can easily and quickly be done. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig.
1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''8 cores''<br /> * Accessed production systems:<br /> # ''NIIF's infrastructure/HU''<br /> #* Applied for access on: ''09.2010''<br /> #* Access granted on: ''10.2010''<br /> #* Achieved scalability: ''16 cores --&gt; 96 cores''<br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, and the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 8, 16 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 8 cores, benchmarking was initiated for 16 and 96 cores; further scaling to a higher number of cores is planned.''<br /> * Other issues: ''There were painful (ARC) authentication problems/access issues, and issues with the supercomputing infrastructure's local storage, during porting. Further study for higher scaling is still required.''<br /> <br /> == Achieved Results ==<br /> In-silico Disease Gene Mapper was tested successfully with some polygene diseases (e.g. asthma). So far publications are targeting mainly the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> More scientific publications about the porting of the data mining tool (showing results of some comparative data analysis targeting polygene type diseases).</div>
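The Technical Features of this application list "multiple serial jobs (data-splitting)" alongside MPI-based clustered multiprocessing. The sketch below illustrates one common form of such data-splitting for alignment workloads: cutting a multi-FASTA query file into chunks that can be submitted as independent serial jobs. It is an illustration only; the file names, chunk naming and round-robin distribution are assumptions, not part of the DiseaseGene code.
<pre>
// Hypothetical illustration of the "multiple serial jobs (data-splitting)" paradigm:
// split a multi-FASTA query file into N chunk files, one per serial alignment job.
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 3) {
        std::cerr << "usage: fasta_split <queries.fasta> <num_chunks>\n";
        return 1;
    }
    std::ifstream in(argv[1]);
    const int chunks = std::stoi(argv[2]);

    // Read records; each record starts with a '>' header line.
    std::vector<std::string> records;
    std::string line, current;
    while (std::getline(in, line)) {
        if (!line.empty() && line[0] == '>' && !current.empty()) {
            records.push_back(current);
            current.clear();
        }
        current += line + "\n";
    }
    if (!current.empty()) records.push_back(current);

    // Write records round-robin into chunk_0.fasta .. chunk_{N-1}.fasta.
    std::vector<std::ofstream> out(chunks);
    for (int c = 0; c < chunks; ++c)
        out[c].open("chunk_" + std::to_string(c) + ".fasta");
    for (std::size_t i = 0; i < records.size(); ++i)
        out[i % chunks] << records[i];

    std::cout << records.size() << " records split into " << chunks << " chunks\n";
    return 0;
}
</pre>
Each resulting chunk file could then be aligned independently as a separate serial job and the per-chunk outputs merged afterwards.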
With small modification the application is useful to target human genes too. <br /> <br /> == Problems Solved ==<br /> <br /> The implemented solution is capable to target candidate genes for various diseases such as asthma, diabetes, epilepsy, hypertension or schizophrenia using external online open-access eukaryotic (animal: mouse, rat, B. rerio, etc.) databases. The application does an in-silico mapping between the genes coming from the different model animals and search for unexplored potential target genes. With small modification the application is useful to target human genes too. Grid's reliability parameters and response time (1-5 min) is not suitable for such service.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Researchers in the region will be able to target candidate genes for further research of polygene type diseases.<br /> Create a data mining a service to the SEE-HPC infrastructure, which can help researchers to do comparative analysis.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> <br /> == Beneficiaries ==<br /> <br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that a number of 2-5 scientific groups (5-15 researchers) world wide will use our service.<br /> <br /> == Number of users ==<br /> 6<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M19 (delayed for storage access issues)''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128 – 256''<br /> * Minimum RAM/core required: ''4 - 8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''5-10TB''<br /> * Total core hours required: ''1 300 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C/C++'' <br /> * Parallel programming paradigm: ''Clustered multiprocessing (ex. using MPI) + Multiple serial jobs (data-splitting, parametric studies)''<br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides unified GUI of different bioinformatics applications (such as a gene mapper applications and sequence alignment applications) and enables end-user access indirectly to some open European bioinformatics databases. gUSE is basically a virtualization environment providing large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All part of gUSE is implemented as a set of Web services. 
== Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as gene mapper and sequence alignment applications) and enables indirect end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application-specific UIs we have used the Application Specific Module (ASM) API of gUSE, by which such customization can be done easily and quickly. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''8 cores''<br /> * Accessed production systems:<br /> # ''NIIF's infrastructure/HU''<br /> #* Applied for access on: ''09.2010''<br /> #* Access granted on: ''10.2010''<br /> #* Achieved scalability: ''16 cores--&gt;96 cores''<br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 8, 16 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 8 cores, benchmarking was carried out on 16 and 96 cores; further scaling to a higher number of cores is planned (a simple measurement sketch follows below).''<br /> * Other issues: ''There were painful ARC authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Further study for higher scaling is still required.''<br /> <br />
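Such a core-count scaling study can be scripted in a few lines of BASH; the sketch below is purely illustrative (the executable name, input file and interactive mpirun launch are assumptions; on the production systems the same runs are submitted as batch jobs).<br /> #!/bin/bash<br /> # Illustrative strong-scaling measurement over the core counts mentioned above (assumed names)<br /> for CORES in 8 16 96; do<br />   START=$(date +%s)<br />   mpirun -np "$CORES" ./disease_gene_mapper candidate_genes.fasta   # assumed executable and input<br />   END=$(date +%s)<br />   echo "$CORES cores: $((END-START)) s" &gt;&gt; scaling_results.txt<br /> done<br /> <br />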
== Achieved Results ==<br /> In-silico Disease Gene Mapper was successfully tested with several polygene diseases (e.g. asthma). So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012, accepted<br /> <br /> == Foreseen Activities ==<br /> More scientific publications about the ported data mining tool, showing results of comparative data analyses targeting polygene type diseases.</div> Lifesci http://hpseewiki.ipb.ac.rs/index.php/DeepAligner DeepAligner 2012-06-10T23:25:05Z <p>Lifesci: /* General Information */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''Deep sequencing for short fragment alignment''<br /> * Application's acronym: ''DeepAligner''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Technical contact: ''Kozlovszky Miklos, Windisch Gergely; m.kozlovszky at sztaki.hu''<br /> * Developers: ''Windisch Gergely, Biotech Group, Obuda University – John von Neumann Faculty of Informatics''<br /> * Web site: <br /> http://ls-hpsee.nik.uni-obuda.hu:8080/liferay-portal-6.0.5<br /> http://ls-hpsee.nik.uni-obuda.hu<br /> <br /> == Short Description ==<br /> <br /> Mapping short fragment reads to open-access eukaryotic genomes is solvable by BLAST, BWA and other sequence alignment tools - BLAST is one of the most frequently used tools in bioinformatics, and BWA is a relatively new, fast, lightweight tool that aligns short sequences. Local installations of these algorithms are typically not able to handle such problem sizes, so the procedure runs slowly, while web-based implementations cannot accept a high number of queries. The SEE-HPC infrastructure allows access to massively parallel architectures, and the sequence alignment code is distributed free for academia. Due to the response time and service reliability requirements, grid cannot be an option for the DeepAligner application.<br /> <br /> == Problems Solved ==<br /> <br /> The recently adopted deep sequencing techniques present a new data processing challenge: mapping short fragment reads to open-access eukaryotic (animal: focusing on mouse and rat) genomes at the scale of several hundred thousand reads.<br /> <br />
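To illustrate the kind of short-read alignment the service wraps, a single-node BWA run typically looks like the sketch below; the reference and read file names are assumptions, and the ported service drives such steps through the gUSE workflow rather than interactively.<br /> #!/bin/bash<br /> # Illustrative short fragment alignment with BWA (assumed file names; not the ported service itself)<br /> REF=mouse_genome.fa            # open-access eukaryotic reference genome (assumption)<br /> READS=short_fragments.fastq    # deep-sequencing short reads (assumption)<br /> bwa index "$REF"                                         # build the reference index once<br /> bwa aln -t 8 "$REF" "$READS" &gt; fragments.sai             # align the short reads on 8 cores<br /> bwa samse "$REF" fragments.sai "$READS" &gt; fragments.sam  # produce SAM alignments for post-processing<br /> <br />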
== Scientific and Social Impact ==<br /> <br /> The aim of the task is threefold: to port the BLAST/BWA algorithms to the massively parallel HP-SEE infrastructure, to create a BLAST/BWA service capable of serving the short fragment sequence alignment demand of the regional bioinformatics communities, and to do sequence analysis with high-throughput short fragment sequence alignments against the eukaryotic genomes to search for regulatory mechanisms controlled by short fragments.<br /> <br /> == Collaborations ==<br /> <br /> Ongoing collaborations so far: Hungarian Bioinformatics Association, Semmelweis University<br /> Planned collaboration with the MoSGrid consortium (D-GRID based project, Germany) <br /> <br /> == Beneficiaries ==<br /> <br /> The service serves the short fragment sequence alignment demand of the regional bioinformatics communities.<br /> People who are interested in using short fragment alignments will greatly benefit from the availability of this service. The service will be freely available to the LS community. We estimate that 5-15 scientific groups worldwide will use our service.<br /> <br /> == Number of users ==<br /> 5<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''Done before the project started.''<br /> * Start of alpha stage: ''Done before the project started.''<br /> * Start of beta stage: ''M9''<br /> * Start of testing stage: ''M13''<br /> * Start of deployment stage: ''M16''<br /> * Start of production stage: ''M18''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''128-256''<br /> * Minimum RAM/core required: ''4-8 GB''<br /> * Storage space during a single run: ''2-5 GB''<br /> * Long-term data storage: ''1-2 TB''<br /> * Total core hours required: ''1 500 000''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C, C++''<br /> * Parallel programming paradigm: ''Master-slave, MPI, + Multiple serial jobs (data-splitting, parametric studies)'' <br /> * Main parallel code: ''WS-PGRADE/gUSE and C/C++''<br /> * Pre/post processing code: ''BASH script (in-house development)''<br /> * Application tools and libraries: ''BASH script / mpiBLAST (in-house development)''<br /> <br /> == Usage Example ==<br /> <br /> 1. HP-SEE’S BIOINFORMATICS ESCIENCE GATEWAY<br /> <br /> The Bioinformatics eScience Gateway is based on gUSE and operates within the Life Science VO of the HP-SEE infrastructure. It provides a unified GUI for different bioinformatics applications (such as BLAST, BWA or gene mapper applications) and enables indirect end-user access to some open European bioinformatics databases. gUSE is basically a virtualization environment providing a large set of high-level DCI services by which interoperation among classical service and desktop grids, clouds and clusters, unique web services and user communities can be achieved in a scalable way. gUSE has a graphical user interface, which is called WS-PGRADE. All parts of gUSE are implemented as a set of Web services. WS-PGRADE uses the client APIs of gUSE services to turn user requests into sequences of gUSE-specific Web service calls. Our bioinformaticians need application-specific portlets to make the usage of the portal more customized for their work. In order to support the development of such application-specific UIs we have used the Application Specific Module (ASM) API of gUSE, by which such customization can be done easily and quickly. Some other remaining features were included from WS-PGRADE. Our GUI is built up from JSR168-compliant portlets and can be accessed via normal Web browsers (shown in Fig. 1.).<br /> [[File:HP-SEE-Bioinformatics_Portal.jpg|200px|thumb|left|Login screen of the HP-SEE Bioinformatics eScience Gateway]]<br /> <br /> 2. IMPLEMENTATION OF THE GENERIC BLAST WORKFLOW<br /> <br /> Normal applications need to be ported first for use with gUSE/WS-PGRADE. Our porting methodology includes two main steps: workflow development and user-specific web interface development based on gUSE’s ASM (shown in Fig. 2.). gUSE uses a DAG (directed acyclic graph) based workflow concept. In a generic workflow, nodes represent jobs, which are basically batch programs to be executed on one of the DCI’s computing elements. Ports represent the input/output files the jobs receive or produce. Arcs between ports represent file transfer operations. gUSE supports Parameter Study type high-level parallelization.
In the workflow, special Generator ports can be used to generate the input files for all parallel jobs automatically, while Collector jobs can run after all parallel executions to collect all parallel outputs. During the BLAST porting, we have exploited all the PS capabilities of gUSE.<br /> [[File:Devel wf.jpg|200px|thumb|left|Porting steps of the application]]<br /> <br /> Parallel job submission into the DCI environment needs parameter assignment of the generated parameters. gUSE’s PS workflow components were used to create a DCI-aware parallel BLAST application and realize a complex DCI workflow as a proof of concept. Later on, the web-based DCI user interface was created using the Application Specific Module (ASM) of gUSE. On this web GUI, end-users can configure input parameters like the “e” value or the number of MPI tasks, and they can submit the alignment into the DCI environment with arbitrarily large parameter fields.<br /> During the development of the workflow structure, we have aimed to construct a workflow that is able to handle the main properties of the parallel BLAST application. To exploit the Parameter Study mechanism of gUSE, the workflow has been developed as a Parameter Study workflow with the usage of an autogenerator port (second small box around the left top box in Fig. 5) and a collector job (right bottom box in Fig. 5). The preprocessor job generates a set of input files from some pre-adjusted parameters. Then the second job (middle box in Fig. 5) is executed as many times as the input files specify. <br /> The last job of the workflow is a Collector, which is used to collect several files and then process them as a single input. Collectors force delayed job execution until the last file of the input file set to be collected has arrived at the Collector job. The workflow engine computes the expected number of input files at run time. When all the expected inputs have arrived at the Collector, it starts to process all the incoming input files as a single input set. Finally, output files are generated and stored on a Storage Element of the DCI, shown as the little box around the Collector. <br /> <br /> [[File:Blast wf.jpg|200px|thumb|left|Internal architecture of the generic blast workflow]]<br /> <br />
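The following BASH sketch mirrors the three roles of this Parameter Study workflow (generator, parallel alignment job, collector); the chunk count, file names and the legacy blastall invocation are illustrative assumptions, not the deployed workflow code.<br /> #!/bin/bash<br /> # Schematic generator / parallel job / collector roles of the PS workflow (all names are assumptions)<br /> case "$1" in<br />   generate)   # Generator: split the query FASTA into one input file per parallel BLAST job<br />     awk -v n=32 '/^&gt;/{f=sprintf("chunk_%02d.fasta",(++s)%n)} {print &gt;&gt; f}' queries.fasta ;;<br />   align)      # Parallel job: align one chunk against the reference database (run once per chunk)<br />     blastall -p blastn -d reference.fasta -i "$2" -e 1e-5 -m 8 -o "$2.out" ;;<br />   collect)    # Collector: runs only after every chunk output has arrived and merges them into one result<br />     cat chunk_*.fasta.out &gt; merged_alignments.out ;;<br /> esac<br /> <br />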
Due to the strict HPC security constraints, end users should possess a valid certificate to utilize the HP-SEE Bioinformatics eScience Gateway. Users can seamlessly utilize the developed workflows on ARC based infrastructure (like NIIF’s Hungarian supercomputing infrastructure) or on gLite/EMI based infrastructure (Service Grids like SEE-GRID-SCI or SHIWA). After login, the users should create their own workflow-based application instances, which are derived from pre-developed and well-tested workflows.<br /> <br /> == Infrastructure Usage ==<br /> <br /> * Home system: ''OE cluster/HU''<br /> ** Applied for access on: ''08.2010''<br /> ** Access granted on: ''08.2010''<br /> ** Achieved scalability: ''4 nodes, 8 cores''<br /> * Accessed production systems:<br /> ''NIIF's infrastructure/HU''<br /> ** Applied for access on: ''09.2010''<br /> ** Access granted on: ''10.2010''<br /> ** Achieved scalability: ''96 cores''<br /> <br /> * Porting activities: ''The application has been successfully ported, the core workflow was successfully created, the GUI portlet was designed and created.''<br /> * Scalability studies: ''Tests on 32, 59 and 96 cores''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''In the initial phase the application was benchmarked and optimized on the OE cluster. After successful deployment on 32 cores, benchmarking was carried out on 59 and 96 cores.''<br /> * Other issues: ''There were painful authentication problems and access issues with the supercomputing infrastructure's local storage during porting. Some input parameter assignment optimisation and further study for higher scaling are still required.''<br /> <br /> == Achieved Results ==<br /> The DeepAligner application was successfully tested with parallel short DNA sequence searches. So far, publications have mainly targeted the porting of the application; publication of more scientific results is planned.<br /> <br /> == Publications ==<br /> <br /> * M. Kozlovszky, G. Windisch, Á. Balaskó; Short fragment sequence alignment on the HP-SEE infrastructure; MIPRO 2012<br /> * M. Kozlovszky, G. Windisch; Supported bioinformatics applications of the HP-SEE project’s infrastructure; Networkshop 2012<br /> <br /> == Foreseen Activities ==<br /> Parameter assignment optimisation of the GUI, more scientific publications about short sequence alignment.</div> Lifesci
http://hpseewiki.ipb.ac.rs/index.php/DNAMA DNAMA 2012-06-10T13:00:15Z <p>Lifesci: /* Foreseen Activities */</p> <hr /> <div>== General Information ==<br /> <br /> * Application's name: ''DNA Multicore Analysis''<br /> * Application's acronym: ''DNAMA''<br /> * Virtual Research Community: ''Life Sciences''<br /> * Scientific contact: ''Danilo Mrdak, danilomrdak@gmail.com''<br /> * Technical contact: ''Luka Filipovic, lukaf@ac.me''<br /> * Developers: ''Center of Information System &amp; Faculty of Natural Sciences - University of Montenegro''<br /> * Web site: http://wiki.hp-see.eu/index.php/DNAMA<br /> <br /> == Short Description ==<br /> <br /> Using a networked computer cluster with supercomputer-class performance for DNA sequence analysis opens up great possibilities for DNA research.
It gives us practically unlimited capacity in terms of the number of analyzed sequences and the time needed for the analyses to be carried out. As much of the DNA comparison and analysis software uses Monte Carlo and Markov chain algorithms whose running time grows with the number of sequences, supercomputing resources speed up our work and make robust, comprehensive analyses possible. Using all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene, etc.) will give us a more detailed insight into their relationships and phylogeny.<br /> <br /> The DNAMA application is based on the RAxML application from The Exelixis Lab.<br /> <br /> == Problems Solved ==<br /> <br /> The computing resources available through network computer clustering allow us to include as many samples in an analysis as we wish, and those analyses are finished in one to a few hours. Moreover, we will try to modify the algorithms in order to perform multi-locus analyses and obtain a consensus tree that suggests the most probable pathways of phylogeny with a much higher level of confidence.<br /> <br /> == Scientific and Social Impact ==<br /> <br /> Use of a networked computer cluster with supercomputer-class performance for DNA sequence comparison analysis gives great potential for DNA research. Use of all published sequences for one group (e.g. for all salmonid species: salmons, trout, grayling, river huchon) from the same DNA region (mitochondrial D-loop DNA, Cytochrome b gene, etc.) will give a more detailed insight into their relationships and phylogeny (RAxML). <br /> <br /> Problems solved: analyze as many samples as possible within a few hours; modify the algorithms in order to perform multi-locus analyses and obtain a consensus tree that suggests the most probable pathways of phylogeny with a much higher level of confidence.<br /> <br /> Impact: the reliability of available computing resources is one of the main obstacles to making scientific breakthroughs in the fields of Molecular Biology and Phylogeny (Evolution). HPC allows for faster and more reliable results. Enhancement of competitiveness in terms of regional and European collaboration.
Drawing the attention of national stakeholders to the future building of Montenegro as a &quot;society of knowledge&quot;.<br /> <br /> == Collaborations ==<br /> <br /> * <br /> <br /> == Beneficiaries ==<br /> <br /> * University of Montenegro - Faculty of Natural Sciences - Biology Department<br /> <br /> == Number of users ==<br /> 15-20 from the Faculty of Natural Sciences, University of Montenegro<br /> <br /> == Development Plan ==<br /> <br /> * Concept: ''before the start of the project - finished by RAxML developers''<br /> * Start of alpha stage: ''before the start of the project''<br /> * Start of beta stage: ''09.2010''<br /> * Start of testing stage: ''03.2011''<br /> * Start of deployment stage: ''11.2011''<br /> * Start of production stage: ''11.2011''<br /> <br /> == Resource Requirements ==<br /> <br /> * Number of cores required for a single run: ''up to 512 cores''<br /> * Minimum RAM/core required: ''1 GB''<br /> * Storage space during a single run: ''256 MB''<br /> * Long-term data storage: ''1 GB''<br /> * Total core hours required: ''.''<br /> <br /> == Technical Features and HP-SEE Implementation ==<br /> <br /> * Primary programming language: ''C''<br /> * Parallel programming paradigm: ''MPI, OpenMPI''<br /> * Main parallel code: ''MPI, OpenMPI''<br /> * Pre/post processing code: ''C, Dendroscope (for visualization of results)''<br /> * Application tools and libraries: ''RAxML''<br /> <br /> == Usage Example ==<br /> <br /> Execution from the command line:<br /> /opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8<br /> <br />
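In this invocation -m GTRGAMMA selects the substitution model, -s points to the input alignment, -# 1000 requests 1000 alternative runs/bootstrap replicates and -n labels the output files. On the production systems the same command is normally wrapped in a batch job script; the sketch below is illustrative and the scheduler directives are assumptions about the site configuration, not the actual HPCG or Debrecen setup.<br /> #!/bin/bash<br /> #PBS -l nodes=16:ppn=8        # 128 cores in total; PBS/Torque directive syntax is an assumption<br /> #PBS -l walltime=04:00:00<br /> cd "$PBS_O_WORKDIR"<br /> # Same RAxML invocation as above, launched inside the batch allocation<br /> /opt/exp_software/mpi/mpiexec/mpiexec-0.84-mpich2-pmi/bin/mpiexec -np 128 /home/lukaf/raxml/RAxML-7.2.6/raxmlHPC-MPI -m GTRGAMMA -s /home/lukaf/raxml/trutte_input.txt -# 1000 -n T16x8<br /> <br />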
== Infrastructure Usage ==<br /> <br /> * Home system: ''HPCG, Bulgaria''<br /> ** Applied for access on: ''05.2011''<br /> ** Access granted on: ''05.2011''<br /> ** Achieved scalability: ''up to 256 cores''<br /> * Accessed production systems:<br /> # ''Debrecen SC, NIIF''<br /> #* Applied for access on: ''02.2012''<br /> #* Access granted on: ''03.2012''<br /> #* Achieved scalability: ''240 cores''<br /> # ''...''<br /> #* Applied for access on: ''...''<br /> #* Access granted on: ''...''<br /> #* Achieved scalability: ''... cores''<br /> * Porting activities: ''...''<br /> * Scalability studies: ''...''<br /> <br /> == Running on Several HP-SEE Centres ==<br /> <br /> * Benchmarking activities and results: ''.''<br /> * Other issues: ''.''<br /> <br /> == Achieved Results ==<br /> <br /> <br /> == Publications ==<br /> <br /> * <br /> <br /> == Foreseen Activities ==<br /> <br /> * data analysis for new DNA sequences<br /> * multigene analysis (mtDNA, Cyt b, … ) and 3 simulated genes<br /> * benchmark activities for the MPI, PThreads &amp; Hybrid versions</div> Lifesci