Cluster Overview

The GWDG HPC Cluster is composed of several islands, each with largely uniform hardware. Projects and user accounts are grouped based on the purpose of their computations and on their association with the groups/institutions that funded parts of the cluster.

Project/Account Groups and Purposes

SCC

The SCC (Scientific Compute Cluster) provides HPC resources for

  • the Georg-August University of Göttingen, including the University Medical Center Göttingen (UMG)
  • the Max Planck Society
  • and other related research institutions

NHR (formerly HLRN)

NHR-NORD@Göttingen (NHR for short) is one of the centers in the NHR (Nationales Hochleistungsrechnen) Alliance of Tier 2 HPC centers, which provide HPC resources to all German universities (an application is required). NHR-NORD@Göttingen was previously part of HLRN IV (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen), the NHR Alliance's predecessor, which provided HPC resources to universities in Northern Germany.

KISSKI

The KISSKI project provides AI compute resources for work in critical and sensitive infrastructure sectors.

REACT

The REACT program is an EU-funded initiative supporting various economic and social development efforts; it funded one of our GPU partitions.

Institution/Research-Group Specific

Some institutions and research groups have their own dedicated islands or nodes as part of GWDG’s HPC Hosting service, in addition to being able to use SCC resources. These can consist of dedicated compute nodes and, in some cases, their own login nodes and storage systems.

An example is the CIDBN island (also known as “Sofja”) with its own storage system, dedicated login nodes, and CPU partition.

DLR CARO

The other cluster GWDG operates is DLR CARO, which is for exclusive use by DLR employees. CARO is not connected to the GWDG HPC Cluster in any way. CARO’s documentation is only available on the DLR intranet. If you are a CARO user, you should go to its documentation and ignore the rest of this site.

Islands

The nodes can be grouped into islands that share the same or similar hardware, are more closely networked together, and have access to the same storage systems with similar performance. General CPU-node islands are called “Emmy Phase X” and general GPU-node islands “Grete Phase X”, where X indicates the hardware generation (1, 2, 3, …). The islands, with a brief summary of their hardware, are listed below:

| Island           | CPUs                              | GPUs                                | Fabric                                           |
|------------------|-----------------------------------|-------------------------------------|--------------------------------------------------|
| Emmy Phase 3     | Intel Sapphire Rapids             | –                                   | Omni-Path (100 Gb/s)                             |
| Emmy Phase 2     | Intel Cascade Lake                | –                                   | Omni-Path (100 Gb/s)                             |
| Emmy Phase 1     | Intel Skylake                     | –                                   | Omni-Path (100 Gb/s)                             |
| Grete Phase 3    | Intel Sapphire Rapids             | Nvidia H100                         | Infiniband (2 × 200 Gb/s)                        |
| Grete Phase 2    | AMD Zen 3, AMD Zen 2              | Nvidia A100                         | Infiniband (2 × 200 Gb/s)                        |
| Grete Phase 1    | Intel Skylake                     | Nvidia V100                         | Infiniband (100 Gb/s)                            |
| SCC Legacy (CPU) | Intel Cascade Lake, Intel Skylake | –                                   | Omni-Path (100 Gb/s) or none (Ethernet only)     |
| SCC Legacy (GPU) | Intel Cascade Lake                | Nvidia V100, Nvidia Quadro RTX 5000 | Omni-Path (2 × 100 Gb/s) or Omni-Path (100 Gb/s) |
| CIDBN            | AMD Zen 2                         | –                                   | Infiniband (100 Gb/s)                            |
| FG               | AMD Zen 3                         | –                                   | RoCE (25 Gb/s)                                   |
| SOE              | AMD Zen 2                         | –                                   | RoCE (25 Gb/s)                                   |
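
If you want to verify which hardware a particular node in an island provides, standard Slurm commands can report it directly. The following is a minimal sketch using generic Slurm tools; the node name is a placeholder, not a real GWDG hostname.

```bash
# Show CPU count, memory, GPUs (Gres), and feature tags for one node.
# <node-name> is a placeholder; take a real name from the partition lists.
scontrol show node <node-name>

# Or list all visible nodes with their partition, CPUs, memory, GRES, and features.
sinfo -N -o "%15N %20P %5c %10m %15G %f"
```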
Info

See CPU partitions and GPU partitions for the Slurm partitions in each island.
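
As a quick orientation before consulting those pages, Slurm itself can list the partitions visible from your current login node, and a batch job simply names the partition it wants. Below is a minimal sketch using standard Slurm commands; the partition name in the batch script is a placeholder and should be replaced with one from the partition documentation.

```bash
# List visible partitions with node counts, time limits, and GPUs (GRES).
sinfo -o "%20P %6D %12l %15G"

# Minimal batch script requesting a specific partition.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=some-partition   # placeholder, see the partition docs
#SBATCH --nodes=1
#SBATCH --time=00:10:00
hostname
EOF
sbatch job.sh
```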

See Logging In for the best login nodes for each island (other login nodes will often work, but they may have access to different storage systems and their hardware may be a poorer match).
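
Once you have picked the recommended login node for your island from that page, connecting is an ordinary SSH session. The hostname below is a placeholder; use the exact login node name given in Logging In.

```bash
# Replace <username> and <login-node> with your cluster account and the
# login node recommended for your island.
ssh <username>@<login-node>
```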

See Cluster Storage Map for the storage systems accessible from each island and their relative performance characteristics.
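
Which of those storage systems are actually mounted on the node you are currently logged into can be checked directly from the shell; the sketch below uses generic Linux commands and makes no assumptions about GWDG-specific paths.

```bash
# List mounted filesystems with their types and sizes; network filesystems
# (e.g. Lustre, BeeGFS, NFS) correspond to the cluster storage systems.
df -hT

# Show which filesystem a specific directory (here the current one) lives on.
df -hT .
```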

See Software Stacks for the available and default software stacks for each island.
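
In practice the software stacks are exposed through environment modules, so a first look at what is available on a given island can be taken directly from a login node. This is a minimal sketch using standard module/Lmod commands; the package name is only an example.

```bash
# List the modules (and thus software) available in the currently active stack.
module avail

# Search across the whole stack for a package (Lmod-based stacks only).
module spider gcc

# Load a module into the current shell environment.
module load gcc
```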

Legacy SCC users only have access to the SCC Legacy island, unless they are also CIDBN, FG, or SOE users, in which case they have access to those islands as well.
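
If you are unsure which islands or partitions your account can actually use, Slurm's accounting tools offer a generic way to check, assuming access is mapped to accounting associations on this cluster (the authoritative details are in the linked pages).

```bash
# Show the Slurm accounts, partitions, and QOS associated with your user.
sacctmgr show associations user=$USER format=Account,Partition,QOS

# Alternatively, list the partitions visible from the current login node.
sinfo -o "%P"
```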