The Clusters
The GWDG provides three HPC clusters, two of which are documented here (SCC and NHR):
SCC
The SCC (Scientific Compute Cluster) provides HPC compute for
- Georg-August University, including the UMG
- Max Planck Society
- and some other research institutions
NHR
NHR-NORD@Göttingen (NHR for short in this documentation) is one center in the NHR (Nationales Hochleistungsrechnen) Alliance of Tier 2 HPC centers, which provide HPC resources to German universities (an application is required). It was formerly part of HLRN IV (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen), which provided HPC resources to universities in Northern Germany.
The NHR cluster consists of a CPU island Emmy and a GPU island Grete.
KISSKI
The KISSKI project provides a compute partition, which is listed in the GPU partitions table.
CARO
The CARO cluster is for exclusive use by DLR employees. CARO’s documentation is only available on the DLR intranet.
CIDBN
The CIDBN cluster is managed by the same system but is located separately, with its own storage system. It can be reached via a partition, as listed in the CPU partitions table under SCC.
Organization
The current organization of the SCC and NHR clusters is shown in the diagram below:
```mermaid
---
title: Current Cluster Organization
---
flowchart TB
    %% zib[/"NHR@ZIB Frontend Nodes<br>blogin[1-9]"/]
    internet([Internet])
    jumphosts{{"SCC Jumphosts<br>gwdu[19-20]"}}
    vpn[/GWDG VPN/]
    goenet[["GÖNET (Göttingen Campus)"]]
    subgraph SCC
        scc_frontends("SCC Frontends<br>gwdu[101-102]")
        scc_compute[Compute Nodes]
        scc_storage[(Storage)]
    end
    subgraph NHR
        nhr_frontends("NHR Frontends<br>glogin[1-13]")
        emmy_compute[Emmy Compute Nodes]
        grete_compute[Grete Compute Nodes]
        nhr_storage[(Storage)]
    end
    internet --> jumphosts
    internet --> vpn
    internet --> nhr_frontends
    %% zib --> nhr_frontends
    vpn --> goenet
    jumphosts --> scc_frontends
    goenet --> scc_frontends
    scc_frontends --> scc_compute
    scc_frontends --> scc_storage
    scc_compute --> scc_storage
    nhr_frontends --> emmy_compute
    nhr_frontends --> grete_compute
    nhr_frontends --> nhr_storage
    emmy_compute --> nhr_storage
    grete_compute --> nhr_storage
```
Essentially, both clusters consist of their own separate set of the following nodes:
- Frontend nodes, which users log in to and which are connected to the compute nodes and cluster storage
- Compute nodes, which run user-submitted compute jobs and are connected to the cluster storage over the internal network
- Storage, which contains shared software, user data, etc.
The NHR’s frontend nodes can be connected to directly over the internet.
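For illustration, the following minimal Python sketch shells out to the local OpenSSH client to open such a direct connection. The username and hostname are placeholders, not real values; use your own HPC username and the full hostname of one of the NHR frontends (glogin[1-13]).

```python
import subprocess

# Placeholder values for illustration only; substitute your HPC username and
# the full hostname of one of the NHR frontends (glogin[1-13]).
USER = "u12345"
NHR_FRONTEND = "glogin.example.org"

# A direct connection works because the NHR frontends are reachable from the
# internet; this simply invokes the local OpenSSH client.
subprocess.run(["ssh", f"{USER}@{NHR_FRONTEND}"], check=True)
```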
The SCC’s frontend nodes CANNOT be connected to directly over the internet. Instead, one must either:
- Be on GÖNET, the network of the Göttingen Campus, which includes Eduroam in Göttingen.
- Be connected to GÖNET via the GWDG VPN.
- Use the SCC jumphosts (not possible for HPC Project Portal accounts).
and then connect to the SCC’s frontend nodes.
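If you are outside GÖNET and not on the VPN, the jumphost route amounts to a two-hop SSH connection. The Python sketch below again shells out to the OpenSSH client and uses placeholder hostnames and a placeholder username; the actual jumphosts and frontends are gwdu[19-20] and gwdu[101-102] respectively (see the diagram above).

```python
import subprocess

# Placeholder values for illustration only; substitute your HPC username and
# the full hostnames of an SCC jumphost (gwdu[19-20]) and frontend (gwdu[101-102]).
USER = "u12345"
JUMPHOST = "jumphost.example.org"
FRONTEND = "frontend.example.org"

# OpenSSH's -J (ProxyJump) option makes the first hop through the jumphost and
# then opens the session on the SCC frontend, which is not reachable directly.
subprocess.run(
    ["ssh", "-J", f"{USER}@{JUMPHOST}", f"{USER}@{FRONTEND}"],
    check=True,
)
```

From inside GÖNET, or when connected via the GWDG VPN, the jumphost hop is unnecessary and you can connect to the SCC frontends directly.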
The SCC and NHR clusters will be unified more and more in the coming months, after which the organization will be quite different. Specifically, there will be a shared set of frontend nodes accessible directly from the internet that work with all three islands (SCC, Emmy, Grete) and a unified shared storage, though some storage will be restricted to SCC/NHR/KISSKI/etc. users and some will be specific or optimized for particular islands.