CPU Partitions
Nodes in these partitions provide many CPU cores for parallelizing calculations.
Islands
The islands with a brief overview of their hardware are listed below.
Island | CPUs | Fabric |
---|---|---|
Emmy Phase 1 | Intel Skylake | Omni-Path (100 Gb/s) |
Emmy Phase 2 | Intel Cascade Lake | Omni-Path (100 Gb/s) |
Emmy Phase 3 | Intel Sapphire Rapids | Omni-Path (100 Gb/s) |
SCC Legacy | Intel Cascade Lake, Intel Skylake | Omni-Path (100 Gb/s), none (Ethernet only) |
CIDBN | AMD Zen3 | InfiniBand (100 Gb/s) |
FG | AMD Zen3 | RoCE (25 Gb/s) |
SOE | AMD Zen2 | RoCE (25 Gb/s) |
See Logging In for the best login nodes for each island (other login nodes will often work, but may have access to different storage systems and their hardware will be less of a match).
See Cluster Storage Map for the storage systems accessible from each island and their relative performance characteristics.
See Software Stacks for the available and default software stacks for each island.
Legacy SCC users only have access to the SCC Legacy island unless they are also CIDBN, FG, or SOE users, in which case they have access to those islands as well.
Partitions
The NHR partitions follow the naming scheme `sizeCORES[suffix]`, where `size` indicates the amount of RAM (`medium`, `standard`, `large`, or `huge`), `CORES` indicates the number of cores, and `suffix` is only included to differentiate partitions with the same `size` and `CORES`.
SCC, KISSKI, REACT, etc. partitions do not follow this scheme.
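For example, a minimal sketch of a batch script selecting one of these partitions could look like the following (the executable name is a placeholder):

```bash
#!/bin/bash
#SBATCH --partition=standard96s   # "standard" RAM, 96 cores; the "s" suffix distinguishes it from standard96
#SBATCH --nodes=2                 # whole nodes; must stay within the partition's node limit
#SBATCH --ntasks-per-node=96      # one MPI rank per core
#SBATCH --time=12:00:00           # must not exceed the partition's maximum walltime

srun ./my_program                 # placeholder for your MPI executable
```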
The partitions are listed in the table below, grouped by the users who can use them and by island, without hardware details.
See Types of User Accounts to determine which kind of user you are.
Note that some users are members of multiple classifications (e.g. all CIDBN/FG/SOE users are also SCC users).
| Users | Island | Partition | OS | Shared | Max. walltime | Max. nodes per job | Core-hr per core |
|---|---|---|---|---|---|---|---|
| NHR | Emmy P3 | medium96s | Rocky 8 | | 12 hr | 256 | 0.75 |
| | | medium96s:test | Rocky 8 | | 1 hr | 64 | 0.75 |
| | | standard96s | Rocky 8 | | 12 hr | 256 | 1 |
| | | standard96s:shared | Rocky 8 | yes | 48 hr | 1 | 1 |
| | | standard96s:test | Rocky 8 | | 1 hr | 64 | 1 |
| | | large96s | Rocky 8 | | 12 hr | 2 | 1.5 |
| | | large96s:shared | Rocky 8 | yes | 48 hr | 1 | 2 |
| | | large96s:test | Rocky 8 | | 1 hr | 2 | 1.5 |
| | | huge96s | Rocky 8 | | 24 hr | 1 | 2 |
| | | huge96s:shared | Rocky 8 | yes | 24 hr | 1 | 2 |
| | Emmy P2 | standard96 | Rocky 8 | | 12 hr | 256 | 1 |
| | | standard96:shared | Rocky 8 | yes | 48 hr | 64 | 1 |
| | | standard96:test | Rocky 8 | | 1 hr | 64 | 1 |
| | | large96 | Rocky 8 | | 12 hr | 2 | 1.5 |
| | | large96:shared | Rocky 8 | yes | 48 hr | 1 | 2 |
| | | large96:test | Rocky 8 | | 1 hr | 2 | 1.5 |
| | | huge96 | Rocky 8 | | 24 hr | 256 | 2 |
| SCC | Emmy P3 | scc-cpu | Rocky 8 | yes | 48 hr | inf | 1 |
| | SCC Legacy | medium | Rocky 8 | yes | 48 hr | inf | 1 |
| | | sgiz | Rocky 8 | yes | 48 hr | inf | |
| all | Emmy P1 | jupyter (jupyter) | Rocky 8 | yes | 24 hr | 1 | 1 |
| NHR, KISSKI, REACT | Emmy P2 | jupyter:cpu (jupyter) | Rocky 8 | yes | 24 hr | 1 | 1 |
| CIDBN | CIDBN | cidbn | Rocky 8 | yes | 14 days | inf | |
| FG | FG | fg | Rocky 8 | yes | 30 days | inf | |
| SOEDING | SOE | soeding | Rocky 8 | yes | 14 days | inf | |
JupyterHub sessions run on the partitions marked with jupyter in the table above.
These partitions are oversubscribed (multiple jobs share resources).
Additionally, the `jupyter` partition is composed of both CPU and GPU nodes.
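On the partitions marked as shared, jobs can request a subset of a node's resources instead of whole nodes. A minimal sketch (the program name is a placeholder):

```bash
#!/bin/bash
#SBATCH --partition=standard96s:shared  # shared partition: a node can host several jobs
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24              # request only 24 of the node's 96 cores
#SBATCH --time=48:00:00                 # up to this partition's 48 hr limit

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
./my_threaded_program                   # placeholder for a multi-threaded executable
```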
The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, having nodes with different hardware. Additionally, many nodes are in more than one partition.
| Partition | Nodes | CPU | RAM per node | Cores | SSD |
|---|---|---|---|---|---|
| medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| standard96 | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 148 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:shared | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 138 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:test | 856 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 140 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| large96 | 12 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:shared | 9 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:test | 4 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| huge96 | 2 | 2 × Cascadelake 9242 | 1 522 000 MB | 96 | yes |
| huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| jupyter | 16 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| jupyter:cpu | 8 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| medium | 94 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| scc-cpu | ≤ 49 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| | ≤ 49 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| | ≤ 24 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| | ≤ 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| sgiz | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |
| fg | 8 | 2 × Zen3 EPYC 7413 | 512 000 MB | 48 | yes |
| soeding | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |
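Because the scc-cpu partition mixes nodes with four different RAM sizes, one way to target a larger-memory tier is to request memory explicitly; a sketch, assuming your job genuinely needs that much memory (the executable name is a placeholder):

```bash
#!/bin/bash
#SBATCH --partition=scc-cpu
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=96
#SBATCH --mem=900G         # above what the 256 000 MB and 514 000 MB tiers provide,
                           # so the job can only land on a 1 030 000 MB or larger node
#SBATCH --time=24:00:00

./my_program               # placeholder executable
```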
The CPUs
For partitions that have heterogeneous hardware, you can give Slurm options to request the particular hardware you want. For CPUs, pass a `-C`/`--constraint` option to Slurm specifying the kind of CPU you want. Use `-C ssd` or `--constraint=ssd` to request a node with a local SSD on the NHR cluster, and `-C local` or `--constraint=local` to request a node with a local SSD on the SCC cluster.
See Slurm for more information.
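As an illustration (`job_script.sh` is a placeholder for your own batch script):

```bash
# NHR cluster: run on a standard96 node that has a local SSD
sbatch --constraint=ssd --partition=standard96 job_script.sh

# SCC cluster: the SSD constraint is named "local" instead
sbatch --constraint=local --partition=scc-cpu job_script.sh

# CPU types can be selected the same way, using the -C options
# from the table below, e.g. for Cascade Lake nodes:
sbatch -C cascadelake --partition=standard96 job_script.sh
```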
The CPUs, the options to request them, and some of their properties are given in the table below.
CPU | Cores per CPU | -C option | Architecture |
---|---|---|---|
AMD Zen3 EPYC 7763 | 64 | milan or zen3 | zen3 |
AMD Zen3 EPYC 7413 | 24 | milan or zen3 | zen3 |
AMD Zen2 EPYC 7742 | 64 | rome or zen2 | zen2 |
Intel Sapphire Rapids Xeon Platinum 8468 | 48 | sapphirerapids | sapphirerapids |
Intel Cascadelake Xeon Platinum 9242 | 48 | cascadelake | cascadelake |
Intel Skylake Xeon Gold 6148 | 20 | skylake | skylake_avx512 |
Intel Skylake Xeon Gold 6130 | 16 | skylake | skylake_avx512 |
Hardware Totals
The total nodes, cores, and RAM for each island are given in the table below.
Island | Nodes | Cores | RAM (TiB) |
---|---|---|---|
Emmy Phase 1 | 16 | 640 | 11.6 |
Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
Emmy Phase 3 | 411 | 39,456 | 173.4 |
SCC Legacy | 105 | 9,376 | 43.6 |
CIDBN | 30 | 3,840 | 14.1 |
FG | 8 | 384 | 3.9 |
SOE | 7 | 896 | 6.6 |
TOTAL | 1,599 | 152,704 | 616 |