Cluster Overview
Projects and user accounts are grouped based on the purpose of the compute time and on their association with the groups/institutions that funded parts of the cluster.
Project/Account Groups and Purposes
SCC
The SCC (Scientific Compute Cluster) provides HPC resources for
- Georg-August-University of Göttingen
- Universitätsmedizin Göttingen (UMG)
- Max-Planck Gesellschaft (MPG)
- Deutsches Primatenzentrum (DPZ)
- Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
NHR (formerly HLRN)
NHR-NORD@Göttingen (NHR for short) is one of the centers in the NHR (Nationales Hochleistungsrechnen) Alliance of Tier 2 HPC centers, which provide HPC resources to all German universities (application required). NHR-NORD@Göttingen was previously part of HLRN-IV (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen), NHR’s predecessor, which provided HPC resources to universities in Northern Germany.
KISSKI
The KISSKI project provides AI compute resources for critical and sensitive infrastructure sectors.
REACT
The REACT program is an EU-funded initiative supporting various economic and social developments; it funded one of our GPU partitions.
Institution/Research-Group Specific
Some institutions and research groups have their own dedicated islands or nodes as part of GWDG’s HPC Hosting service in addition to being able to use SCC resources. These islands or nodes can consist of
- Dedicated compute nodes, usually with their own Slurm partition (see CPU partitions and GPU partitions)
- SCRATCH/WORK data store
- Login node(s) (see Logging In)
An example is the CIDBN island (also known as “Sofja”) with its own storage system, dedicated login nodes, and CPU partition.
DLR CARO
The other cluster GWDG operates is DLR CARO, which is for exclusive use by DLR employees. CARO is not connected to the GWDG HPC Cluster in any way. CARO’s documentation is only available on the DLR intranet. If you are a CARO user, you should go to its documentation and ignore the rest of this site.
Cluster Islands
The GWDG HPC system is composed of several smaller groups of nodes that are not quite HPC clusters on their own; we call them “cluster islands”. Nodes within a cluster island share the same or very similar hardware and are more closely networked together, meaning connections within an island are much faster and have higher bandwidth than connections to another island. Each island also connects differently to our storage systems, so performance when accessing a given storage system can differ significantly depending on which island the accessing nodes belong to. Some storage systems are integrated into the high-performance fabric of a specific cluster island and are not accessible from other islands at all. For more information, see the links below.
General CPU node islands are called “Emmy Phase X” where X indicates the hardware generation (1, 2, 3, …). General GPU node islands are called “Grete Phase X” where X indicates the hardware generation (1, 2, 3, …).
| Island | CPUs | GPUs | Fabric |
|---|---|---|---|
| Emmy Phase 3 | Intel Sapphire Rapids | – | Omni-Path (100 Gb/s) |
| Emmy Phase 2 | Intel Cascade Lake | – | Omni-Path (100 Gb/s) |
| Emmy Phase 1 | Intel Skylake | – | Omni-Path (100 Gb/s) |
| Grete Phase 3 | Intel Sapphire Rapids | Nvidia H100 | Infiniband (2 × 200 Gb/s) |
| Grete Phase 2 | AMD Zen 3 / AMD Zen 2 | Nvidia A100 | Infiniband (2 × 200 Gb/s) |
| Grete Phase 1 | Intel Skylake | Nvidia V100 | Infiniband (100 Gb/s) |
| SCC Legacy (CPU) | Intel Cascade Lake / Intel Skylake | – | Omni-Path (100 Gb/s) / none (Ethernet only) |
| SCC Legacy (GPU) | Intel Cascade Lake | Nvidia V100 / Nvidia Quadro RTX 5000 | Omni-Path (2 × 100 Gb/s) / Omni-Path (100 Gb/s) |
| CIDBN | AMD Zen 2 | – | Infiniband (100 Gb/s) |
| FG | AMD Zen 3 | – | RoCE (25 Gb/s) |
| SOE | AMD Zen 2 | – | RoCE (25 Gb/s) |
Info
See CPU partitions and GPU partitions for the Slurm partitions in each island.
See Logging In for the best login nodes for each island (other login nodes will often work, but may have access to different storage systems and their hardware will be less of a match).
See Cluster Storage Map for the storage systems accessible from each island and their relative performance characteristics.
See Software Stacks for the available and default software stacks for each island.