Resource centre NIIFI SC

[[File:Niif_sc.jpg|right|NIIFI SC]]
__TOC__
== '''Resource centre NIIFI SC''' ==
The NIIFI supercomputer is a fat-node HP cluster built on the CP4000BL blade platform. It contains AMD Opteron 6174 processors (12-core Magny-Cours), for a total of 768 cores, and uses an InfiniBand network for internal high-performance communication. The machine has 1.9 TB of memory and a computing power of 5.4 Tflop/s. The rack is water-cooled, which increases energy efficiency. Each node is a powerful 24-core SMP computer, so the system runs mixed parallel programming paradigms very effectively.
= Parallel programming models =
Supported parallel programming paradigms are MPI, OpenMP, PVM, and hybrid combinations of these (see the sketch below).
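Since each node is a 24-core SMP machine, hybrid MPI+OpenMP codes map naturally onto the system. A minimal sketch (the file name and the rank/thread mapping in the comment are illustrative assumptions, not site defaults):

<pre>
/* hello_hybrid.c - each MPI rank spawns a team of OpenMP threads */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* A common mapping on fat nodes: one MPI rank per node,
       one OpenMP thread per core (up to 24 per node here). */
    #pragma omp parallel
    printf("rank %d of %d, thread %d of %d\n",
           rank, nranks, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
</pre>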
= Development software =
Languages: C/C++ via the gcc compiler toolchain; MPI libraries and profiling and debugging tools are also available.
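For instance, the hybrid sketch above could be compiled with an MPI compiler wrapper (assuming mpicc from the installed OpenMPI or MPICH2 is on the path; the exact module setup may vary):

<pre>
mpicc -fopenmp -O2 -o hello_hybrid hello_hybrid.c
</pre>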
== ''Infrastructural services'' ==
'''User administration, authentication, authorization, security'''
Users can access the system with SSH public-key authentication.
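For example (the username is a placeholder; the frontend machine is listed in the access section below):

<pre>
ssh username@login.budapest.hpc.niif.hu
</pre>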
'''Workflow and batch management'''
Batch jobs are managed by the [https://wiki.niif.hu/index.php?title=PRACE_User_Support#Usage_of_the_SLURM_scheduler SLURM scheduler].
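A minimal batch script sketch for a hybrid job (job name, node count, time limit, and binary are illustrative assumptions; see the linked guide for site-specific options):

<pre>
#!/bin/bash
#SBATCH --job-name=hello_hybrid
#SBATCH --nodes=2              # two 24-core fat nodes
#SBATCH --ntasks-per-node=1    # one MPI rank per node
#SBATCH --cpus-per-task=24     # one OpenMP thread per core
#SBATCH --time=00:10:00

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./hello_hybrid
</pre>

The script is submitted with sbatch, and the queue can be inspected with squeue.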
'''Distributed filesystem and data management'''
Two filesystems are available to users (/home and /scratch); /scratch is for temporary files created during job execution and is shared between the worker nodes.
'''Accounting and resource management'''
Accounting is provided by Sun Grid Engine's accounting facility.
'''Operational monitoring'''
Operational monitoring is performed with Nagios and Munin.
'''Helpdesk and user support'''
User support is provided through the dedicated hu_niifi queue at the HP-SEE Helpdesk.
= Libraries and application tools =
1) '''Software libraries:'''
PVM3, FFTW3, BLAS, LAPACK, MPICH2, OpenMPI (an FFTW3 usage sketch appears at the end of this section)
2) '''Development and application software available:'''
AMD compiler, GDB, GNU compilers (gcc, g++, gfortran)
A more detailed description of the current software stack is available at http://www.niif.hu/en/node/689
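As a small illustration of using the installed libraries, a forward FFT with FFTW3 in C (array length and test signal are arbitrary choices; link with -lfftw3):

<pre>
/* fft_demo.c - forward FFT of a short complex array with FFTW3 */
#include <fftw3.h>
#include <stdio.h>

int main(void)
{
    const int n = 8;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    /* Plan first (FFTW_ESTIMATE does not touch the arrays), then fill. */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    for (int i = 0; i < n; i++) {
        in[i][0] = i;   /* real part: a simple ramp as test input */
        in[i][1] = 0.0; /* imaginary part */
    }

    fftw_execute(p);
    for (int i = 0; i < n; i++)
        printf("out[%d] = %g %+gi\n", i, out[i][0], out[i][1]);

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
</pre>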
== ''Access the NIIFI SC'' ==
To get access to the NIIFI SC, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult [[Resource_management_system|Resource management system]].
Frontend machine: login.budapest.hpc.niif.hu
Usage of the Hungarian supercomputers is described at http://www.niif.hu/en/services/supercomputing/usage_of_the_niifi_supercomputers
== ''Support'' ==
HP-SEE researchers should use the helpdesk at https://helpdesk.hp-see.eu/. If you do not have an account, send mail to Ioannis Liaboti (iliaboti at grnet dot gr).
