Resource centre IFIN


Resource centre IFIN-HH

IFIN_Bio Cluster

Server name: Dell PowerEdge 1950 III

 - Processor: Intel Xeon E5430
 - Clock frequency: 2.66 GHz
 - Cores per CPU: 4
 - CPUs per node: 2
 - RAM on node: 16 GB
 - Overall RAM: 512 GB
 - Nodes within cluster: 32
 - Overall number of CPUs: 64
 - Overall number of cores: 256
 - Network: 1x Myrinet 2000, 2 Gbps

IFIN_BC Cluster

Server name: IBM QS22

 - Processor: IBM PowerXCell 8i
 - Clock frequency: 3.2 GHz
 - Cores per CPU: 1x PPE + 8x SPE
 - CPUs per node: 2
 - RAM on node: 32 GB
 - Overall RAM: 512 GB
 - Nodes within cluster: 16
 - Overall number of CPUs: 32 PowerXCell
 - Overall number of cores: 32x PPE + 256x SPE
 - Network: InfiniBand 4x QDR, 40 Gbps

Server name: IBM LS22

 - Processor: AMD Opteron 2376
 - Clock frequency: 2.3 GHz
 - Cores per CPU: 4
 - CPUs per node: 2
 - RAM on node: 8 GB
 - Overall RAM: 80 GB
 - Nodes within cluster: 10
 - Overall number of CPUs: 20
 - Overall number of cores: 80
 - Network: InfiniBand 4x QDR, 40 Gbps

Server name: IBM HS22

 - Processor: Intel Xeon X5650
 - Clock frequency: 2.67 GHz
 - Cores per CPU: 6
 - CPUs per node: 2
 - RAM on node: 24 GB
 - Overall RAM: 672 GB
 - Nodes within cluster: 28
 - Overall number of CPUs: 56
 - Overall number of cores: 336
 - Network: InfiniBand 4x QDR, 40 Gbps

IFIN_Bio Cluster

Parallel programming models

Supported parallel programming paradigms are MPI and OpenMP.
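
A minimal hybrid MPI/OpenMP sketch in C is given below for illustration; the compiler wrapper (mpicc) and the -fopenmp flag are assumptions and may differ on this installation.

 /* hello_hybrid.c - minimal hybrid MPI + OpenMP example (sketch) */
 #include <mpi.h>
 #include <omp.h>
 #include <stdio.h>

 int main(int argc, char **argv)
 {
     int rank, size;

     MPI_Init(&argc, &argv);                  /* start the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
     MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of MPI processes */

     #pragma omp parallel                     /* spawn OpenMP threads inside each MPI process */
     {
         printf("rank %d of %d, thread %d of %d\n",
                rank, size, omp_get_thread_num(), omp_get_num_threads());
     }

     MPI_Finalize();
     return 0;
 }

Assumed build and launch commands (actual wrapper names and queue setup may differ):

 mpicc -fopenmp hello_hybrid.c -o hello_hybrid
 mpirun -np 4 ./hello_hybrid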

Development software

Languages: C/C++ (gcc compiler), Fortran, Java

Infrastructural services

User administration, authentication, authorization, security

The main authentication and authorization method is based on passwords and private keys.

Workflow and batch management

Torque-PBS

Operational monitoring

Operational monitoring is done with open-source tools (Cacti).

Helpdesk and user support

e-mail lists

Libraries and application tools

Software libraries:

GotoBLAS, FFTW
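
As an illustration of the GotoBLAS library listed above, the sketch below calls the standard CBLAS dgemm routine from C; the cblas.h header and the -lgoto2 link flag are assumptions and may differ on this installation.

 /* dgemm_example.c - 2x2 matrix product C = A * B through the CBLAS interface (sketch) */
 #include <stdio.h>
 #include <cblas.h>

 int main(void)
 {
     /* Row-major 2x2 matrices */
     double A[4] = {1.0, 2.0,
                    3.0, 4.0};
     double B[4] = {5.0, 6.0,
                    7.0, 8.0};
     double C[4] = {0.0, 0.0,
                    0.0, 0.0};

     /* C = 1.0 * A * B + 0.0 * C */
     cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                 2, 2, 2,         /* M, N, K        */
                 1.0, A, 2,       /* alpha, A, lda  */
                 B, 2,            /* B, ldb         */
                 0.0, C, 2);      /* beta, C, ldc   */

     printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
     return 0;
 }

Assumed build line: gcc dgemm_example.c -o dgemm_example -lgoto2 -lpthread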

Development and application software available:

MPICH-MX, OpenMP, CHARM++, NAMD, MMTSB

Access the IFIN_Bio Cluster

To gain access to the IFIN_Bio Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult the Resource management system page.

IFIN_BC Cluster

Parallel programming models

Supported parallel programming paradigms are MPI and OpenMP.

Development software

Languages: C/C++ (gcc compiler), Fortran, Java

Infrastructural services

User administration, authentication, authorization, security

The main authentication and authorization method is based on passwords and private keys.

Workflow and batch management

Torque-PBS

Operational monitoring

Operational monitoring is done with open-source tools (Cacti).

Helpdesk and user support

e-mail lists

Libraries and application tools

Software libraries:

FFTW
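
A minimal FFTW usage sketch in C follows; the fftw3.h header and the -lfftw3 link flag are standard for the FFTW3 API but are stated here as assumptions about this installation.

 /* fftw_example.c - one-dimensional complex DFT with FFTW3 (sketch) */
 #include <fftw3.h>
 #include <stdio.h>

 int main(void)
 {
     const int n = 8;
     fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
     fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

     /* Create the plan before filling the input: some planner modes overwrite the arrays */
     fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

     for (int i = 0; i < n; i++) {   /* simple test signal */
         in[i][0] = (double) i;      /* real part */
         in[i][1] = 0.0;             /* imaginary part */
     }

     fftw_execute(plan);             /* compute the forward DFT */

     for (int i = 0; i < n; i++)
         printf("out[%d] = %g + %gi\n", i, out[i][0], out[i][1]);

     fftw_destroy_plan(plan);
     fftw_free(in);
     fftw_free(out);
     return 0;
 }

Assumed build line: gcc fftw_example.c -o fftw_example -lfftw3 -lm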

Development and application software available:

Open MPI, MVAPICH, MVAPICH2, IBM Software Kit for Multicore Acceleration

Access the IFIN_BC Cluster

To gain access to the IFIN_BC Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult the Resource management system page.
