The Clusters
The GWDG operates various HPC systems, separated into different clusters or “cluster islands”, most of which are documented here.
SCC
The SCC (Scientific Compute Cluster) provides HPC resources for
- Georg-August University, including the University Medical Center Göttingen (UMG)
- Max Planck Society
- and other related research institutions
NHR
NHR-NORD@Göttingen (NHR for short) is one of the centers in the NHR (Nationales Hochleistungsrechnen) Alliance of Tier 2 HPC centers, which provide HPC resources to all German universities (an application is required). The NHR cluster consists of the CPU island Emmy and the GPU island Grete. Emmy was previously part of HLRN IV (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen), NHR’s predecessor, which provided HPC resources to universities in Northern Germany.
KISSKI
The KISSKI project provides a compute partition, which is listed under the GPU partitions.
REACT
The REACT program is an EU-funded initiative to support various economic and social developments, which funded one of our GPU partitions.
CARO
The CARO cluster is for exclusive use by DLR employees. CARO’s documentation is only available on the DLR intranet.
CIDBN
The CIDBN cluster “Sofja” is operated as an island within our HPC infrastructure and shares the same setup and software stack as the SCC. It is located at a different site, has its own storage system, and is reached via dedicated login nodes, but can otherwise be considered another CPU partition.
Organization
The current organization of the SCC and NHR clusters is shown in the diagram below:
```mermaid
---
title: Current Cluster Organization
---
flowchart TB
    %% zib[\"NHR@ZIB Frontend Nodes<br>blogin[1-9]"/]
    internet([Internet])
    jumphosts{{"SCC Jumphosts<br>gwdu[19-20]"}}
    vpn[/GWDG VPN/]
    goenet[["GÖNET (Göttingen Campus)"]]
    subgraph SCC
        scc_frontends("SCC Frontends<br>gwdu[101-102]")
        scc_compute[Compute Nodes]
        scc_storage[(Storage)]
    end
    subgraph NHR
        nhr_frontends("NHR Frontends<br>glogin[1-13]")
        emmy_compute[Emmy Compute Nodes]
        grete_compute[Grete Compute Nodes]
        nhr_storage[(Storage)]
    end
    internet --> jumphosts
    internet --> vpn
    internet --> nhr_frontends
    %% zib --> nhr_frontends
    vpn --> goenet
    jumphosts --> scc_frontends
    goenet --> scc_frontends
    scc_frontends --> scc_compute
    scc_frontends --> scc_storage
    scc_compute --> scc_storage
    nhr_frontends --> emmy_compute
    nhr_frontends --> grete_compute
    nhr_frontends --> nhr_storage
    emmy_compute --> nhr_storage
    grete_compute --> nhr_storage
```
Essentially, both clusters consist of their own separate set of the following nodes:
- Frontend nodes, which users log in to and which are connected to the compute nodes and the cluster storage
- Compute nodes, which run user-submitted compute jobs and are connected to the cluster storage over the internal network
- Storage systems, which contain shared software, user data, etc.
The NHR’s frontend nodes can be reached directly from the open internet.
The SCC’s frontend nodes, however, cannot be reached directly unless you are on GÖNET, the network of the Göttingen Campus, which includes Eduroam in Göttingen. Instead, you can use one of the NHR frontend nodes as a jumphost; more details here.
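For users outside GÖNET, a minimal SSH configuration along the following lines can set up that jump. This is only a sketch: the fully qualified hostnames (glogin.hpc.gwdg.de, gwdu101.hpc.gwdg.de) and the username u12345 are assumptions for illustration; use the addresses and account name from your actual login details.

```
# ~/.ssh/config -- minimal sketch of a ProxyJump setup.
# The hostnames and username below are assumptions for illustration only;
# substitute the actual addresses given in the login documentation.

Host nhr
    HostName glogin.hpc.gwdg.de      # assumed name for an NHR frontend (glogin[1-13])
    User u12345                      # your HPC username

Host scc
    HostName gwdu101.hpc.gwdg.de     # assumed name for an SCC frontend (gwdu[101-102])
    User u12345
    ProxyJump nhr                    # hop via the NHR frontend when outside GÖNET
```

With such a configuration, `ssh scc` first connects to the NHR frontend and tunnels on to the SCC frontend; the equivalent one-off command is `ssh -J nhr scc`.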
The SCC and NHR clusters will be progressively unified over the coming months, after which the organization will change somewhat. Specifically, there will be a shared set of frontend nodes, accessible directly from the internet, that works with all cluster islands, and eventually a unified shared storage. Some storage locations will remain specific to certain user groups (SCC/NHR/KISSKI/etc.), and some will be optimized for particular islands.