Resource centre IFIN
From HP-SEE Wiki
Resource centre IFIN-HH
IFIN_Bio Cluster
Server name: Dell PowerEdge 1950 III
- Processor: Intel Xeon E5430
- Clock frequency: 2.66 GHz
- Cores per CPU: 4
- CPUs per node: 2
- RAM on node: 16 GB
- Overall RAM: 512 GB
- Nodes within cluster: 32
- Overall number of CPUs: 64
- Overall number of cores: 256
- Network: 1x Myrinet 2000, 2 Gbps
IFIN_BC Cluster
Server name: IBM QS22
- Processor: IBM PowerXCell 8i
- Clock frequency: 3.2 GHz
- Cores per CPU: 1x PPE + 8x SPE
- CPUs per node: 2
- RAM on node: 32 GB
- Overall RAM: 512 GB
- Nodes within cluster: 16
- Overall number of CPUs: 32 PowerXCell
- Overall number of cores: 32x PPE + 256x SPE
- Network: Infiniband 4x QDR, 40 Gbps
Server name: IBM LS22
- Processor: AMD Opteron 2376
- Clock frequency: 2.3 GHz
- Cores per CPU: 4
- CPUs per node: 2
- RAM on node: 8 GB
- Overall RAM: 80 GB
- Nodes within cluster: 10
- Overall number of CPUs: 20
- Overall number of cores: 80
- Network: Infiniband 4x QDR, 40 Gbps
Server name: IBM HS22
- Processor: Intel Xeon X5650
- Clock frequency: 2.67 GHz
- Cores per CPU: 6
- CPUs per node: 2
- RAM on node: 24 GB
- Overall RAM: 672 GB
- Nodes within cluster: 28
- Overall number of CPUs: 56
- Overall number of cores: 336
- Network: Infiniband 4x QDR, 40 Gbps
IFIN_Bio Cluster
Parallel programming models
The supported parallel programming paradigms are MPI and OpenMP.
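A minimal OpenMP sketch in C (a generic illustration, not IFIN-specific code) shows the shared-memory paradigm on a single node: the loop iterations are distributed across threads and the per-thread partial sums are combined with a reduction.

 #include <omp.h>
 #include <stdio.h>
 
 int main(void) {
     const int n = 1000000;
     double sum = 0.0;
 
     /* Split the loop across the available threads and merge
        the per-thread partial sums with a reduction. */
     #pragma omp parallel for reduction(+:sum)
     for (int i = 0; i < n; i++) {
         sum += 1.0 / (i + 1);
     }
 
     printf("Harmonic sum using up to %d threads: %f\n",
            omp_get_max_threads(), sum);
     return 0;
 }

With the GCC compiler listed below, such a program is typically built with "gcc -fopenmp", and the thread count is usually controlled through the OMP_NUM_THREADS environment variable.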
Development software
Languages: C/C++ (gcc compiler), Fortran, Java.
Infrastructural services
User administration, authentication, authorization, security
The main authentication and authorization methods are passwords and private keys.
Workflow and batch management
Torque-PBS
Operational monitoring
Operational monitoring is done with the open-source tool Cacti.
Helpdesk and user support
E-mail lists
Libraries and application tools
Software libraries:
GotoBLAS, FFTW
Development and application software available:
MPICH-MX, OpenMP, CHARM++, NAMD, MMTSB
Access the IFIN_Bio Cluster
To gain access to the IFIN_Bio Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult Resource management system.
IFIN_BC Cluster
Parallel programming models
The supported parallel programming paradigms are MPI and OpenMP.
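A minimal MPI sketch in C (a generic illustration; it assumes one of the MPI implementations listed under the development and application software below, such as Open MPI or MVAPICH) shows the distributed-memory counterpart of the OpenMP reduction above: each process contributes a value and rank 0 collects the global sum.

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv) {
     MPI_Init(&argc, &argv);
 
     int rank, size;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     /* Each rank contributes its rank number; rank 0 receives the sum. */
     int local = rank, total = 0;
     MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
 
     if (rank == 0)
         printf("Sum of ranks over %d processes: %d\n", size, total);
 
     MPI_Finalize();
     return 0;
 }

Such a program is typically compiled with mpicc and launched through the Torque-PBS batch system; the exact submission procedure depends on the site's configuration.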
Development software
Languages: C/C++ (gcc compiler), Fortran, Java.
Infrastructural services
User administration, authentication, authorization, security
The main authentication and authorization methods are passwords and private keys.
Workflow and batch management
Torque-PBS
Operational monitoring
Operational monitoring is done with the open-source tool Cacti.
Helpdesk and user support
E-mail lists
Libraries and application tools
Software libraries:
FFTW
Development and application software available:
Open MPI, MVAPICH, MVAPICH2, IBM Software Kit for Multicore Acceleration
Access the IFIN_BC Cluster
To gain access to the IFIN_BC Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult Resource management system.