Resource centre HPCG

BG01-IPP Grid Cluster

Resource centre HPCG

The HPCG cluster is located at IICT-BAS. It has 576 computing cores organized in a blade system.

  • HP Cluster Platform Express 7000 enclosures with 36 BL280c blades, each with dual Intel Xeon X5560 @ 2.8 GHz (576 cores in total).
  • Non-blocking DDR interconnection via Voltaire Grid Director 2004, with 2.5 μs latency and 20 Gbps bandwidth.
  • Two SAN switches for redundant access.

The storage and management nodes have 128 cores.

  • MSA2312fc with 48 TB storage, Lustre filesystem
  • P2000 G3 with 48 TB storage.
  • More than 92% efficiency on LINPACK (>3 TFlops, peak performance 3.2 TFlops).

A smaller cluster with powerful GPU computing cards is also attached to it. The extended cluster has:

  • 4 NVIDIA GTX 295 GPU cards (each card counts as 2 graphical devices), Intel Core i7 CPU @ 2.66 GHz, 12 GB RAM.
  • Total number of threads for GPU computing: 4×2×240 = 1920.
  • High performance Lustre filesystems.

The HPCG resource centre also has two GPGPU server nodes equipped with a total of 7 NVIDIA Tesla M2090 graphics cards: the first node has 6 cards and the second has 1 card. Their technical characteristics are described in the table below.

GPGPU-enabled servers

Name: HP ProLiant SL390s G7 Server series
Processor: Intel Xeon X5650 (6 cores, 2.66 GHz, 12 MB L3, 95 W)
Number of processors: 2
Processor cores available: 12 (24)
Memory, standard: 12 GB
Memory slots: 18 DIMM slots
Memory, maximum: 96 GB
Memory type: PC3-10600R (RDIMM)
Expansion slots: 2 PCIe
Network controller: 1GbE NC382i Multifunction, 4 ports
Power supply type: 460 W, 92% efficiency, hot plug, redundant
Storage controller: Smart Array P410i / 1 GB FBWC
Internal mass storage: SAS: 4 TB; SAS: 500 GB
Management software: HP Insight Control
Form factor: 1U
Max GPU slots: 8


The main scientific areas covered by this centre are environmental modelling, computational physics, computational chemistry and biomedicine.

Parallel programming models

The parallel programming paradigms supported by the HPCG cluster are:

  • Message passing, with several MPI implementations available: MVAPICH 1/2 and OpenMPI.
  • Shared memory via OpenMP, available through the GNU Compiler Collection (GCC); the 36 compute nodes have a relatively large amount of RAM (24 GB per node).
  • A hybrid approach, obtained by combining the two paradigms above (a minimal sketch is given after this list).
  • GPU computing, using CUDA and/or OpenCL.
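As an illustration of the hybrid approach, below is a minimal MPI + OpenMP sketch in C. It is a hypothetical example, not part of the cluster documentation; it assumes an MPI compiler wrapper such as mpicc (from MVAPICH or OpenMPI) and GCC's -fopenmp support. Each MPI rank reports the node it runs on, and every OpenMP thread within the rank identifies itself.

 #include <mpi.h>
 #include <omp.h>
 #include <stdio.h>

 int main(int argc, char **argv)
 {
     int provided, rank, size, namelen;
     char host[MPI_MAX_PROCESSOR_NAME];

     /* Request funneled threading: only the master thread makes MPI calls. */
     MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     MPI_Get_processor_name(host, &namelen);

     #pragma omp parallel
     {
         /* Each OpenMP thread inside this MPI rank prints its identity. */
         printf("host %s, MPI rank %d/%d, OpenMP thread %d/%d\n",
                host, rank, size,
                omp_get_thread_num(), omp_get_num_threads());
     }

     MPI_Finalize();
     return 0;
 }

A build along the lines of "mpicc -fopenmp hybrid.c -o hybrid" (file name illustrative) would typically be used, with the number of MPI processes and the OMP_NUM_THREADS setting chosen to match the nodes and cores allocated to the job.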

Development software

Several versions of the GCC toolchain are available in order to provide flexibility and to resolve portability issues with some software packages. Performance and debugging tools include the standard gdb and gprof as well as MPE, mpiP and SCALASCA.
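As a small, hypothetical illustration of the gprof workflow (the file name and numbers below are made up, not taken from the cluster documentation): the program contains a deliberate hotspot; compiling with GCC's -pg flag, running the binary, and then invoking gprof on the generated gmon.out shows where the time is spent.

 /* profile_demo.c
  * Build:   gcc -O2 -pg profile_demo.c -o profile_demo
  * Run:     ./profile_demo
  * Inspect: gprof ./profile_demo gmon.out */
 #include <stdio.h>

 /* Deliberately expensive function, so it shows up as a hotspot in the profile. */
 static double harmonic_sum(long n)
 {
     long i;
     double s = 0.0;
     for (i = 1; i <= n; ++i)
         s += 1.0 / (double)i;
     return s;
 }

 int main(void)
 {
     printf("harmonic sum: %f\n", harmonic_sum(50000000L));
     return 0;
 }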

Infrastructural services

User administration, authentication, authorization, security

The main way to use the cluster for HPC work is through standard authentication with a username/password or a public key. It is also possible to submit jobs using the gLite Grid middleware, provided the user has an X.509 certificate and is a member of an appropriate supported Virtual Organization.

Workflow and batch management

The HPCG cluster uses a Torque + Maui combination. The main way to manage resource utilization is through the Maui configuration.

Distributed filesystem and data management

There are two main filesystems (/home and /gscratch), both based on the high-performance Lustre filesystem. The latter is used only for large temporary files created during job execution.

Accounting and resource management

A custom solution that gathers accounting data from several Bulgarian clusters has been developed and deployed. This solution provides accurate low-level data for all jobs run at these clusters and may be used not only for aggregate accounting but also for performance monitoring.

Operational monitoring

Monitoring of the HPCG cluster is performed through the Nagios portal. Some additional tools, such as Pakiti, are also available.

Helpdesk and user support

User support is provided through e-mail lists or through the regional helpdesk.

Libraries and application tools

1) Software libraries:

ATLAS, LAPACK, LINPACK, ScaLAPACK, GotoBLAS, FFTW, Lustre, SPRNG, MPI (MVAPICH 1/2, OpenMPI), BLACS, BLAS, Maple, VMD, CUDA, OpenCL, OpenFOAM, Octave (a short BLAS usage sketch is given after these lists)

2) Development and application software available:

Charm++, CPMD, GAMESS, GROMACS, NAMD, NWChem, Quantum Espresso, mpiBLAST, WRF, CMAQ, SMOKE
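
As a sketch of how the numerical libraries listed above are typically called, the hypothetical example below multiplies two 2×2 matrices with cblas_dgemm. It assumes the CBLAS interface and header cblas.h provided by ATLAS/GotoBLAS, and illustrative link flags such as -lcblas -latlas; the exact library names on the cluster may differ.

 #include <stdio.h>
 #include <cblas.h>

 int main(void)
 {
     /* Row-major 2x2 matrices: compute C = 1.0 * A * B + 0.0 * C */
     double A[4] = { 1.0, 2.0,
                     3.0, 4.0 };
     double B[4] = { 5.0, 6.0,
                     7.0, 8.0 };
     double C[4] = { 0.0, 0.0, 0.0, 0.0 };

     cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                 2, 2, 2,       /* M, N, K       */
                 1.0, A, 2,     /* alpha, A, lda */
                 B, 2,          /* B, ldb        */
                 0.0, C, 2);    /* beta, C, ldc  */

     printf("%6.1f %6.1f\n%6.1f %6.1f\n", C[0], C[1], C[2], C[3]);
     return 0;
 }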

Access the HPCG cluster

To get access to HPCG, you have to register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult Resource management system.

Support

HP-SEE researchers should use the helpdesk at https://helpdesk.hp-see.eu/. If you don't have an account, send an e-mail to Ioannis Liaboti (iliaboti at grnet dot gr).
