HPC Resources:
sciCORE currently operates a high-performance computing infrastructure divided into three environments tailored to specific scientific needs. The infrastructure comprises nearly 400 InfiniBand-interconnected nodes and around 8000 CPU cores, providing 60 TB of distributed memory and a high-performance (GPFS) cluster file system with a disk-storage capacity of 7.5 PB. The technical details are provided in the tables below.
The sciCORE cluster is updated regularly to match the growing needs of the Life Sciences and of demanding parallel applications. Today, our roughly 800 users consume almost 30 million CPU hours per year, amounting to more than 14 million jobs run per year.
sciCORE cluster

Cluster | Total nodes | Total cores | Total RAM | Total GPUs | Interconnect | Total disk
sciCORE | 360 | 7624 | 50 TB | 64 | InfiniBand | 7.5 PB
DMZ

Cluster | Total nodes | Total cores | Total RAM | Interconnect | Total disk
sciCORE | 16 | 304 | 4.4 TB | InfiniBand | 55 TB
BioMedIT

Cluster | Total nodes | Total cores | Total RAM | Total GPUs | Interconnect | Total disk
BioMedIT | 21 | 440 (200 user) | 6.3 TB | 2x2 | 100-Gigabit Ethernet | 250 TB
sciCORE PUMA (Network Attached Storage)

Protocol | Total disk
NFS, SMB | 1.9 PB
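As a quick sanity check, the headline figures in the introduction ("nearly 400 nodes", "around 8000 CPU cores", "60 TB of distributed memory") can be reproduced by summing the per-cluster tables. The sketch below uses only the numbers quoted above; note that the sums are approximate matches, not exact ones, and that BioMedIT cores are counted at their full total of 440:

```python
# Per-cluster figures taken directly from the tables above.
clusters = {
    "sciCORE":  {"nodes": 360, "cores": 7624, "ram_tb": 50.0},
    "DMZ":      {"nodes": 16,  "cores": 304,  "ram_tb": 4.4},
    "BioMedIT": {"nodes": 21,  "cores": 440,  "ram_tb": 6.3},
}

total_nodes = sum(c["nodes"] for c in clusters.values())
total_cores = sum(c["cores"] for c in clusters.values())
total_ram = sum(c["ram_tb"] for c in clusters.values())

# 397 nodes ("nearly 400"), 8368 cores ("around 8000"), 60.7 TB RAM ("60 TB")
print(total_nodes, total_cores, round(total_ram, 1))
```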