Cluster Overview

The Clusters

The GWDG provides three HPC clusters, two of which are documented here (SCC and NHR):

SCC

The SCC (Scientific Compute Cluster) provides HPC compute for

  • Georg-August University, including UMG (Universitätsmedizin Göttingen)
  • Max Planck Society
  • and some other research institutions

NHR

NHR-NORD@Göttingen (NHR for short in this documentation) is one center of the NHR (Nationales Hochleistungsrechnen) Alliance, the network of Tier 2 HPC centers that provides HPC resources to German universities (an application is required). It was formerly part of HLRN IV (Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen), which provided HPC resources to universities in Northern Germany.

The NHR cluster consists of a CPU island Emmy and a GPU island Grete.

CARO

The CARO cluster is for exclusive use by DLR employees. CARO’s documentation is only available on the DLR intranet.

Organization

The current organization of the SCC and NHR clusters is shown in the diagram below:

---
title: Current Cluster Organization
---
flowchart TB

    zib[\"NHR@ZIB Frontend Nodes<br>blogin[1-9]"/]
    internet([Internet])
    jumphosts{{SCC Jumphosts}}
    vpn[/GWDG VPN/]
    goenet[["GÖNET (Göttingen Campus)"]]

    subgraph SCC
    scc_frontends(SCC Frontends)
    scc_compute[Compute Nodes]
    scc_storage[(Storage)]
    end

    subgraph NHR
    nhr_frontends(NHR Frontends)
    emmy_compute[Emmy Compute Nodes]
    grete_compute[Grete Compute Nodes]
    nhr_storage[(Storage)]
    end

    internet --> jumphosts
    internet --> vpn
    internet --> nhr_frontends
    zib --> nhr_frontends
    vpn --> goenet
    jumphosts --> scc_frontends
    goenet --> scc_frontends

    scc_frontends --> scc_compute
    scc_frontends --> scc_storage
    scc_compute --> scc_storage

    nhr_frontends --> emmy_compute
    nhr_frontends --> grete_compute
    nhr_frontends --> nhr_storage
    emmy_compute --> nhr_storage
    grete_compute --> nhr_storage

Essentially, each cluster consists of its own separate set of the following (a typical workflow is sketched after the list):

  1. Frontend nodes, which users log in to and which are connected to the compute nodes and cluster storage over the internal network.
  2. Compute nodes, which run user submitted compute jobs and are connected to the cluster storage over the internal network.
  3. Storage, which contains shared software, user data, etc.
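As a rough sketch of how these pieces fit together (assuming the batch system is Slurm and using placeholder names for the frontend host, user, and job script; the actual hostnames and job submission workflow are described elsewhere in this documentation):

    # log in to a frontend node (placeholder hostname and username)
    ssh <username>@<frontend-hostname>

    # from the frontend, submit a job script; the scheduler runs it on the compute nodes
    sbatch jobscript.sh

    # check the job's status; its input and output live on the shared storage,
    # which is visible from both the frontend and the compute nodes
    squeue -u $USER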

The NHR’s frontend nodes can be connected to directly over the internet, as well as from the blogin[1-9] frontend nodes of the NHR@ZIB (Lise) cluster over a dedicated link.
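For example, a direct connection to an NHR frontend node could look like the following (hostname, username, and key path are placeholders, not confirmed values; see the connection instructions elsewhere in this documentation for the actual frontend hostnames):

    # connect directly from the internet to an NHR frontend node (placeholder values)
    ssh -i ~/.ssh/<private-key> <username>@<nhr-frontend-hostname>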

The SCC’s frontend nodes CANNOT be connected to directly over the internet. Instead, one must either:

  1. Be on GÖNET, the network of the Göttingen Campus including Eduroam in Göttingen.
  2. Be connected to GÖNET via the GWDG VPN.
  3. Use the SCC Jumphosts (Not possible for HPC Project Portal accounts).

and then connect to the SCC’s frontend nodes.
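As an illustration of option 3, the hop over a jumphost can be done with SSH’s ProxyJump feature (all hostnames and usernames here are placeholders, not confirmed values; the actual jumphost and frontend hostnames are listed in the connection instructions):

    # one-off connection through an SCC jumphost to an SCC frontend node (placeholder values)
    ssh -J <username>@<scc-jumphost> <username>@<scc-frontend-hostname>

    # or make the hop permanent in ~/.ssh/config:
    Host scc-frontend
        HostName <scc-frontend-hostname>
        User <username>
        ProxyJump <username>@<scc-jumphost>

The same ssh command without the -J option works for options 1 and 2, since the frontends are then reachable directly from GÖNET.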

Note

The SCC and NHR clusters will be unified later in 2024, after which the organization will be quite different. Specifically, there will be a shared set of frontend nodes, accessible directly from the internet, that works with all three islands (SCC, Emmy, Grete), as well as a unified shared storage, though some storage will be restricted to SCC/NHR/KISSKI/etc. users and some will be specific to or optimized for particular islands.