Resource centre HPCG


[Image: BG01-IPP Grid Cluster]

Resource centre HPCG

The HPCG cluster is located at IICT-BAS. It has 576 computing cores organized in a blade system.

  • HP Cluster Platform Express 7000 enclosures with 36 BL280c blades, each with dual Intel Xeon X5560 @ 2.8 GHz (576 cores in total).
  • Non-blocking DDR interconnection via Voltaire Grid Director 2004, with 2.5 μs latency and 20 Gbps bandwidth.
  • Two SAN switches for redundant access.

    The storage and management nodes have 128 cores.

  • MSA2312fc with 48 TB of storage, Lustre filesystem.
  • P2000 G3 with 48 TB storage.
  • More than 92% efficiency on LINPACK (>3 TFlops, peak performance 3.2 TFlops).

    A smaller cluster with powerful GPU computing cards is also attached to it. The extended cluster has:

  • 4 NVIDIA GTX 295 GPU cards (each card counts as 2 graphics devices), Intel Core i7 CPU @ 2.66 GHz, 12 GB RAM.
  • Total number of threads for GPU computing: 4 × 2 × 240 = 1920 (see the sketch below).
  • High-performance Lustre filesystems.

    The main scientific areas covered by this centre are environmental modelling, computational physics, computational chemistry and biomedicine.
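
    As a quick check that all eight graphics devices of the four GTX 295 cards are visible, the following minimal C sketch counts the GPU devices through OpenCL. It is an illustration only: the source file name and the link flag (-lOpenCL) are assumptions, not site documentation.

    /* Minimal OpenCL device-count sketch (illustrative only).
       Build, e.g.: gcc devcount.c -lOpenCL -- exact flags may differ on HPCG. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_uint num_platforms = 0, num_gpus = 0;

        /* Take the first available OpenCL platform. */
        if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS
            || num_platforms == 0) {
            fprintf(stderr, "no OpenCL platform found\n");
            return 1;
        }

        /* Count GPU devices; each GTX 295 card exposes two devices,
           so the 4 cards are expected to show up as 8 devices (4 x 2). */
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
        printf("OpenCL GPU devices visible: %u\n", num_gpus);
        return 0;
    }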

    Parallel programming models

    The parallel programming paradigms supported by the HPCG cluster are: message passing, with several MPI implementations available (MVAPICH1/2, OpenMPI); shared memory via OpenMP, available through the GNU Compiler Collection (GCC); a hybrid approach, obtained by combining the two; and GPU computing using CUDA and/or OpenCL. The 36 nodes have a relatively large amount of RAM (24 GB per node), which makes the shared-memory and hybrid approaches practical. A minimal hybrid example is sketched below.
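
    The sketch below combines MPI and OpenMP in a single C program as a minimal illustration of the hybrid approach. The build line (mpicc -fopenmp) and the file name are assumptions; the actual compiler wrappers on HPCG may differ.

    /* Minimal hybrid MPI + OpenMP sketch (illustrative only).
       Build, e.g.: mpicc -fopenmp hybrid.c -o hybrid */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request a threading level compatible with OpenMP regions. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Typically one MPI process per node, with OpenMP threads sharing
           the node's 24 GB of RAM. */
        #pragma omp parallel
        {
            printf("MPI rank %d/%d, OpenMP thread %d/%d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }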

    Development software

    Several versions of the GCC toolchain are available, to provide flexibility and to resolve portability issues with some software packages. Performance and debugging tools include the standard gdb and gprof as well as MPE, mpiP and SCALASCA.
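
    As an example of the basic profiling workflow, a serial hotspot can be profiled with gprof by compiling with GCC's -pg flag, as sketched below; the function is purely illustrative and does not come from the site documentation.

    /* Illustrative gprof usage: compile with `gcc -pg example.c -o example`,
       run ./example to produce gmon.out, then inspect it with
       `gprof ./example gmon.out`. */
    #include <stdio.h>

    /* Hypothetical compute kernel standing in for real application code. */
    double harmonic_sum(long n)
    {
        double s = 0.0;
        for (long i = 1; i <= n; ++i)
            s += 1.0 / (double)i;
        return s;
    }

    int main(void)
    {
        printf("sum = %f\n", harmonic_sum(100000000L));
        return 0;
    }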

    Infrastructural services

    User administration, authentication, authorization, security

    The main way to use the cluster for HPC work is a standard login with username/password or public-key authentication. It is also possible to submit jobs using the gLite Grid middleware, provided the user has an X.509 certificate and is a member of an appropriate supported Virtual Organization.

    Workflow and batch management

    The HPCG cluster uses a Torque + Maui combination. Resource utilization is managed mainly through the Maui configuration.

    Distributed filesystem and data management

    There are two main filesystems (/home and /gscratch), both based on the high-performance Lustre filesystem. The latter is used only for large temporary files created during job execution.
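
    The sketch below illustrates that intended usage pattern: large intermediate data is written under /gscratch during the job, while only small final results are kept under /home. The per-user directory layout shown here is an assumption, not taken from the site documentation.

    /* Sketch of scratch-space usage (paths are illustrative assumptions). */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *user = getenv("USER");
        char scratch_path[512];

        /* Assumed per-user directory on the large Lustre scratch filesystem;
           adjust to the actual layout on HPCG. */
        snprintf(scratch_path, sizeof scratch_path,
                 "/gscratch/%s/intermediate.dat", user ? user : "unknown");

        FILE *tmp = fopen(scratch_path, "wb");
        if (!tmp) { perror("fopen scratch file"); return 1; }
        /* ... large temporary data produced during job execution goes here ... */
        fclose(tmp);

        /* Small final results would instead be written under /home. */
        printf("intermediate data written to %s\n", scratch_path);
        return 0;
    }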

    Accounting and resource management

    A custom solution that gathers accounting data from several Bulgarian clusters has been developed and deployed. This solution provides accurate low-level data for all jobs run at these clusters and may be used not only for aggregate accounting but also for performance monitoring.

    Operational monitoring

    Monitoring of the HPCG cluster is performed through the Nagios portal. Additional tools such as Pakiti are also available.

    Helpdesk and user support

    User support is provided through mailing lists or through the regional helpdesk.

    Libraries and application tools

    1) Software libraries:

    ATLAS, LAPACK, LINPACK, ScaLAPACK, GotoBLAS, FFTW, Lustre, SPRNG, MPI (MVAPICH1/2, OpenMPI), BLACS, BLAS, Maple, VMD, CUDA, OpenCL, OpenFOAM, Octave

    2) Development and application software available:

    Charm++, CPMD, GAMESS, GROMACS, NAMD, NWChem, Quantum Espresso, mpiBLAST, WRF, CMAQ, SMOKE

    Access the HPCG cluster

    To get access to HPCG, you have to register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult the Resource management system page.

    Support

    HP-SEE researchers should use the helpdesk at https://helpdesk.hp-see.eu/. If you don't have an account, send mail to Ioannis Liaboti (iliaboti at grnet dot gr).
