Resource centre IFIN
From HP-SEE Wiki
Latest revision as of 14:53, 17 October 2012
Resource centre IFIN-HH
IFIN_Bio Cluster
Server name: Dell PowerEdge 1950 III
- Processor: Intel Xeon E5430
- Clock frequency: 2.66 GHz
- Cores per CPU: 4
- CPUs per node: 2
- RAM per node: 16 GB
- Overall RAM: 512 GB
- Nodes within cluster: 32
- Overall number of CPUs: 64
- Overall number of cores: 256
- Network: 1x Myrinet 2000, 2 Gbps
IFIN_BC Cluster
Server name: IBM QS22
- Processor: IBM PowerXCell 8i
- Clock frequency: 3.2 GHz
- Cores per CPU: 1x PPE + 8x SPE
- CPUs per node: 2
- RAM per node: 32 GB
- Overall RAM: 512 GB
- Nodes within cluster: 16
- Overall number of CPUs: 32 PowerXCell
- Overall number of cores: 32x PPE + 256x SPE
- Network: InfiniBand 4x QDR, 40 Gbps
Server name: IBM LS22
- Processor: AMD Opteron 2376
- Clock frequency: 2.3 GHz
- Cores per CPU: 4
- CPUs per node: 2
- RAM per node: 8 GB
- Overall RAM: 80 GB
- Nodes within cluster: 10
- Overall number of CPUs: 20
- Overall number of cores: 80
- Network: InfiniBand 4x QDR, 40 Gbps
Server name: IBM HS22
- Processor: Intel Xeon X5650
- Clock frequency: 2.67 GHz
- Cores per CPU: 6
- CPUs per node: 2
- RAM per node: 24 GB
- Overall RAM: 672 GB
- Nodes within cluster: 28
- Overall number of CPUs: 56
- Overall number of cores: 336
- Network: InfiniBand 4x QDR, 40 Gbps
IFIN_Bio Cluster
Parallel programming models
The supported parallel programming paradigms are MPI and OpenMP.
Development software
Languages: C/C++ (gcc compiler), Fortran, Java.
Infrastructural services
User administration, authentication, authorization, security
The main authentication and authorization methods are passwords and SSH key pairs.
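For key-based logins, a user typically generates an SSH key pair locally and sends the public key to the cluster administrators. A minimal sketch (the key path and the remote host name are hypothetical examples, not site-specific values):

```shell
# Generate a 4096-bit RSA key pair. A scratch directory is used here
# only to keep the example self-contained; in practice you would use
# ~/.ssh and set a real passphrase instead of an empty one.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N '' -f "$KEYDIR/id_rsa_hpsee"

# The .pub file is what gets installed on the cluster, e.g.:
#   ssh-copy-id -i "$KEYDIR/id_rsa_hpsee.pub" user@cluster.example  # hypothetical host
cat "$KEYDIR/id_rsa_hpsee.pub"
```

Only the public key ever leaves the user's machine; the private key stays local and should be protected by a passphrase.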
Workflow and batch management
Torque-PBS
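Work is submitted to Torque-PBS through a job script. A minimal sketch follows; the job name, resource limits, and queue behaviour are placeholder assumptions, so the site defaults should be checked before use:

```shell
# Write a minimal Torque/PBS job script (all values illustrative).
cat > hello.pbs <<'EOF'
#!/bin/bash
# Request one node with 8 cores for at most 10 minutes,
# merging stdout and stderr into one output file.
#PBS -N hello_test
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:10:00
#PBS -j oe

# Torque sets PBS_O_WORKDIR to the directory qsub was called from.
cd "${PBS_O_WORKDIR:-.}"
echo "Job running on $(hostname)"
EOF

# On the cluster this would be submitted with: qsub hello.pbs
# and monitored with: qstat
bash hello.pbs   # executed directly here only to show the script is valid
```

The `#PBS` lines are directives read by the batch system at submission time; to the shell they are ordinary comments.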
Operational monitoring
Operational monitoring is done with open-source tools (Cacti).
Helpdesk and user support
e-mail lists
Libraries and application tools
Software libraries:
GotoBLAS, FFTW
Development and application software available:
MPICH-MX, OpenMP, Charm++, NAMD, MMTSB
Access the IFIN_Bio Cluster
To obtain access to the IFIN_Bio Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult Resource management system.
IFIN_BC Cluster
Parallel programming models
The supported parallel programming paradigms are MPI and OpenMP.
Development software
Languages: C/C++ (gcc compiler), Fortran, Java.
Infrastructural services
User administration, authentication, authorization, security
The main authentication and authorization methods are passwords and SSH key pairs.
Workflow and batch management
Torque-PBS
Operational monitoring
Operational monitoring is done with open-source tools (Cacti).
Helpdesk and user support
e-mail lists
Libraries and application tools
Software libraries:
FFTW
Development and application software available:
Open MPI, MVAPICH, MVAPICH2, IBM Software Kit for Multicore Acceleration
Access the IFIN_BC Cluster
To obtain access to the IFIN_BC Cluster, register in the HP-SEE Resource Management System at https://portal.ipp.acad.bg:8443/hpseeportal/. For more information on using the Resource Management System, consult Resource management system.