Resource centre HPGCC
From HP-SEE Wiki
The HPGCC cluster is located at FINKI-UKIM. It has 1008 computing cores organized in a blade system.
- HP Cluster Platform Express 7000 enclosures with 42 BL2x220c G7 blades, each with dual motherboards and dual Intel Xeon L5640 CPUs @ 2.26 GHz (1008 cores in total).
- Non-blocking QDR InfiniBand interconnect via 4 x Voltaire 4036 switches (1.5 μs latency, 40 Gbps bandwidth) and 6 x Mellanox InfiniScale IV switches inside the enclosures.
- Two SAN switches for redundant access.
The storage and management nodes have 72 cores.
- HP P2000 G3 SAS array with 36 TB of storage, running the Lustre filesystem
- More than 86% efficiency on LINPACK.
Parallel programming models
The HPGCC cluster supports two parallel programming paradigms: message passing, with several MPI implementations available (MVAPICH1/2, OpenMPI), and shared-memory programming with OpenMP, available through the GNU Compiler Collection (GCC). Each of the 84 nodes has a relatively large amount of RAM (24 GB per node).
Several versions of the GCC toolchain are available, to provide flexibility and to resolve portability issues with some software packages. Performance analysis and debugging tools include the standard gdb and gprof as well as MPE, mpiP and Scalasca.
User administration, authentication, authorization, security
The main way to use the cluster for HPC work is standard authentication through username/password or public-key authentication. It is also possible to submit jobs using the gLite Grid middleware, provided the user has an X.509 certificate and is a member of an appropriate supported Virtual Organization.
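A Grid submission via gLite can be sketched as follows (the VO name below is a placeholder; the JDL attributes are standard gLite ones):

```shell
# Create a minimal job description (JDL) file.
cat > hello.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "hello.out";
StdError      = "hello.err";
OutputSandbox = {"hello.out", "hello.err"};
EOF

# Obtain a VOMS proxy from the X.509 certificate, then submit and track the job.
voms-proxy-init --voms hpsee.vo.example   # placeholder VO name
glite-wms-job-submit -a -o jobid.txt hello.jdl
glite-wms-job-status -i jobid.txt
```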
Workflow and batch management
The HPGCC cluster uses a Torque + Maui combination for batch processing and scheduling. Resource utilization is managed mainly through the Maui scheduler configuration.
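A typical Torque job script looks like the sketch below; the queue name and application name are assumptions, not site-specific values:

```shell
#!/bin/bash
# Example Torque/PBS job script (sketch).
#PBS -N mpi_test
#PBS -l nodes=2:ppn=12      # two blade nodes, 12 cores per node
#PBS -l walltime=01:00:00
#PBS -q batch               # placeholder queue name

cd $PBS_O_WORKDIR           # run from the submission directory
mpirun -np 24 ./my_mpi_app  # one MPI rank per allocated core
```

The script is submitted with `qsub job.sh` and monitored with `qstat`; the Maui client commands `showq` and `checkjob` show the scheduler's view of the queue and of individual jobs.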
Distributed filesystem and data management
There is one main filesystem (/home), based on the high-performance Lustre 2.1 filesystem.
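Users can inspect and tune Lustre striping with the standard `lfs` client commands; the stripe count below is illustrative:

```shell
lfs df -h /home              # show capacity and usage per OST
lfs getstripe myfile.dat     # show how an existing file is striped
lfs setstripe -c 4 bigdir/   # stripe new files in bigdir across 4 OSTs
```

Striping large files across several OSTs generally improves parallel I/O bandwidth, at the cost of slightly higher metadata overhead for small files.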
Monitoring
Monitoring of the HPGCC cluster is performed through the Nagios portal. Additional tools such as Pakiti are also available.
Helpdesk and user support
User support is provided through mailing lists or through the regional helpdesk.
Libraries and application tools
1) Software libraries:
ATLAS, LAPACK, Linpack, ScaLAPACK, GotoBLAS, FFTW, LUSTRE, SPRNG, MPI (MVAPICH1/2, OpenMPI), BLACS, BLAS, Maple, VMD, Octave
2) Development and application software available:
Charm++, CPMD, GAMESS, GROMACS, NAMD, NWChem, mpiBLAST
Access the HPGCC cluster
To obtain access to HPGCC, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using it, consult the Resource management system page.
HP-SEE researchers should use the helpdesk at https://helpdesk.hp-see.eu/. If you do not have an account, send mail to Ioannis Liaboti (iliaboti at grnet dot gr).