Resource centre PARADOX

PARADOX HPC cluster

PARADOX is the largest HPC cluster in Serbia, consisting of 84 worker nodes (each with 2 x quad-core Xeon E5345 processors at 2.33 GHz and 8 GB of RAM). Its computing nodes are interconnected in a star-topology Gigabit Ethernet network through three stacked high-throughput Layer 3 switches, with each node connected to the switch by two Gigabit Ethernet cables in channel bonding. In terms of storage resources, PARADOX provides up to 50 TB of disk space to the HP-SEE community.

For training and educational purposes, the Scientific Computing Laboratory of the Institute of Physics Belgrade assembled the tPARADOX training cluster. The cluster is used for introductory and advanced training in programming and optimization techniques for different architectures, as well as for mastering parallel programming paradigms (both shared and distributed memory based). The tPARADOX cluster is based on IBM's BladeCenter technology and consists of an IBM BladeCenter H chassis, commonly used in high performance computing, and several types of blade servers that cover some of the major CPU architectures currently available: Intel's x86_64 and IBM's POWER and Cell/B.E.

tPARADOX


The following blades are present within the chassis:

  • 2 x HS21 XM blade servers, each based on 2 quad-core Intel Xeon E5405 processors running at 2.0 GHz.
  • 2 x JS22 blade servers, each based on four IBM POWER6 cores running at 4.0 GHz.
  • 2 x QS22 blade servers, each based on 2 multi-core IBM PowerXCell 8i processors featuring the Cell Broadband Engine Architecture (Cell/B.E.).
  • 1 x HS22 blade server based on 2 quad-core Intel Xeon E5540 (Nehalem) CPUs running at 2.53 GHz.

Besides standard 1 Gbps Ethernet, the tPARADOX cluster also features a high-throughput, low-latency DDR InfiniBand interconnect. The software stack available to trainees includes both open source and commercial libraries and compilers optimized for the specific architectures.



Parallel programming models

Supported parallel programming paradigms are MPI, OpenMP, and hybrid MPI+OpenMP; a minimal hybrid example is sketched below.
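
As a minimal illustration of the hybrid model, the following sketch combines MPI ranks with OpenMP thread teams. The compiler wrapper and launch commands in the comments are assumptions that depend on the locally loaded MPI toolchain.

  /*
   * Minimal hybrid MPI + OpenMP "hello" sketch.
   * Assumed compile line (GCC toolchain): mpicc -fopenmp hello.c -o hello
   * Assumed launch line: mpirun -np 4 ./hello
   */
  #include <stdio.h>
  #include <mpi.h>
  #include <omp.h>

  int main(int argc, char *argv[])
  {
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Each MPI process spawns its own OpenMP thread team. */
      #pragma omp parallel
      printf("Hello from thread %d of %d on MPI rank %d of %d\n",
             omp_get_thread_num(), omp_get_num_threads(), rank, size);

      MPI_Finalize();
      return 0;
  }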

Development software

Development is supported for the C/C++ and Fortran programming languages through the GCC, Intel, PGI and IBM XL compiler toolchains. For profiling and debugging purposes, different tools are available: TotalView Debugger, gdb, pgdbg, valgrind, gprof, etc.; an illustrative gprof workflow is sketched below.
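
As a hedged example of the profiling workflow, the commands below instrument a program for gprof; the file names and optimization flags are assumptions for illustration.

  # Build with gprof instrumentation (file names are assumptions).
  gcc -O2 -pg myapp.c -o myapp
  # Running the instrumented binary produces gmon.out in the working directory.
  ./myapp
  # Generate a human-readable profile from the collected data.
  gprof ./myapp gmon.out > profile.txt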

Infrastructural services

User administration, authentication, authorization, security

PARADOX supports standard authentication using username/password, together with the capability of job submission through the gLite Grid middleware stack, where authentication is based on the X.509 public key infrastructure (PKI); an illustrative submission sequence is sketched below.
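
The following sketch shows a typical gLite session based on an X.509 proxy; the VO name and file names are assumptions, so consult the site documentation for actual values.

  # Create a short-lived VOMS proxy from the user's X.509 certificate
  # (the VO name vo.example.org is an assumption).
  voms-proxy-init --voms vo.example.org
  # Submit a job described in a JDL file and track it via the job ID file.
  glite-wms-job-submit -a -o jobid.txt job.jdl
  glite-wms-job-status -i jobid.txt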

Workflow and batch management

Batch management is provided by the TORQUE Resource Manager coupled with the Maui Scheduler; a sample job script is sketched below.
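
A minimal TORQUE/PBS job script sketch follows; the queue name, resource limits and executable are illustrative assumptions rather than site defaults, so consult the user guide for actual values.

  #!/bin/bash
  # Minimal TORQUE/PBS job script sketch. The queue name, resource
  # request and executable name are assumptions for illustration.
  #PBS -q hpsee
  #PBS -N example-job
  #PBS -l nodes=2:ppn=8
  #PBS -l walltime=01:00:00

  cd $PBS_O_WORKDIR

  # Launch the MPI executable on all allocated cores.
  mpirun -np 16 -machinefile $PBS_NODEFILE ./my_mpi_app

Such a script would be submitted with qsub and monitored with qstat.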

Distributed filesystem and data management

Three filesystems are available to users (/home, /storage, /scratch), where /scratch is a local filesystem on the cluster nodes used for temporary files created during job execution; an illustrative usage pattern is sketched below. Besides standard Linux tools, data management is also available through the gLite middleware software stack.
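
The fragment below sketches one common pattern for using the node-local /scratch filesystem inside a batch job; the directory layout and file names are assumptions.

  # Stage data to node-local scratch space, run there, and copy results back
  # (directory layout and file names are assumptions).
  SCRATCH=/scratch/$USER/$PBS_JOBID
  mkdir -p $SCRATCH
  cp input.dat $SCRATCH && cd $SCRATCH
  ./my_app input.dat
  cp results.dat $PBS_O_WORKDIR && rm -rf $SCRATCH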

Accounting and resource management

Resource management is achieved through TORQUE, while accounting monitoring is performed through tools and portals provided by the gLite middleware.

Operational monitoring

Operational monitoring is performed using various tools: Ganglia, Nagios, Pakiti, and in-house solutions.

Helpdesk and user support

User support is provided through mailing lists and the dedicated rs_paradox queue at the HP-SEE Helpdesk.

Libraries and application tools

1) Software libraries:

LAPACK, BLAS, FFTW3, SPRNG, Intel MKL, ScaLAPACK, MPI (MVAPICH1/2, OpenMPI), IBM ESSL, IBM PESSL, IBM MASS (an FFTW3 usage sketch follows the list below)

2) Development and application software available:

MPICH, MPICH2, OpenMPI, gcc, gfortran, Intel compilers (C/C++, Fortran), Portland Group compilers (C/C++, Fortran), NAMD, CPMD, Firefly, AutoDock Vina, OpenEye (OMEGA, EON, ROCS, FRED, SZYBKI), Paraver, Dimemas
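
As a hedged example of using one of the libraries listed above, the following sketch computes a forward 1D complex DFT with FFTW3; the compile line is an assumption and actual library paths depend on the local module setup.

  /*
   * Minimal FFTW3 sketch: forward 1D complex DFT.
   * Assumed compile line: gcc fft_demo.c -lfftw3 -lm -o fft_demo
   */
  #include <stdio.h>
  #include <fftw3.h>

  int main(void)
  {
      const int n = 8;
      fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
      fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

      /* Create the plan before filling the input: planning with
         FFTW_MEASURE may overwrite the arrays (FFTW_ESTIMATE does not). */
      fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

      for (int i = 0; i < n; i++) {
          in[i][0] = (double)i;  /* real part */
          in[i][1] = 0.0;        /* imaginary part */
      }

      fftw_execute(plan);

      for (int i = 0; i < n; i++)
          printf("out[%d] = %f %+fi\n", i, out[i][0], out[i][1]);

      fftw_destroy_plan(plan);
      fftw_free(in);
      fftw_free(out);
      return 0;
  }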

Access the PARADOX Cluster

In order to obtain access to the PARADOX cluster, you have to register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult the Resource management system page.

Support

HP-SEE researchers should use the helpdesk at https://helpdesk.hp-see.eu/. If you don't have an account, send mail to Ioannis Liaboti (iliaboti at grnet dot gr). Support is also available through the direct contact address: hpc-admin at ipb dot ac dot rs.

The PARADOX cluster user guide is available here.
