CPU Partitions
Nodes in these partitions provide many CPU cores for parallelizing calculations.
Partitions
The NHR partitions follow the naming scheme `sizeCORES[suffix]`, where `size` indicates the amount of RAM (`medium`, `standard`, `large`, or `huge`), `CORES` indicates the number of cores, and `suffix` is only included to differentiate partitions with the same `size` and `CORES`.
SCC partitions do not follow this scheme.
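As an illustration, the naming scheme can be expressed as a regular expression. This is just a sketch; the pattern and the optional `:variant` group (covering names like `standard96s:shared`) are not part of any cluster tooling:

```python
import re

# Sketch of the sizeCORES[suffix] scheme, with an optional ":variant"
# (e.g. ":shared" or ":test") as used by the partition names.
PARTITION_RE = re.compile(
    r"^(?P<size>medium|standard|large|huge)"  # RAM class
    r"(?P<cores>\d+)"                         # number of cores
    r"(?P<suffix>[a-z]*)"                     # disambiguating suffix, if any
    r"(?::(?P<variant>\w+))?$"                # optional :shared / :test part
)

m = PARTITION_RE.match("standard96s:shared")
print(m.group("size"), m.group("cores"), m.group("suffix"), m.group("variant"))
# standard 96 s shared
```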
The partitions are listed in the table below without hardware details.
Cluster | Partition | OS | Shared | Max. walltime | Max. nodes per job | Core-hours per node |
---|---|---|---|---|---|---|
NHR | medium96s | Rocky 8 | | 12 hr | 256 | 96 |
 | medium96s:test | Rocky 8 | | 1 hr | 64 | 96 |
 | medium40 | CentOS 7 | | 48 hr | 1 | 40 |
 | medium40:shared | CentOS 7 | yes | 12 hr | 2 | 40 |
 | medium40:test | CentOS 7 | | 1 hr | 64 | 40 |
 | standard96 | CentOS 7 | | 12 hr | 256 | 96 |
 | standard96:test | CentOS 7 | | 1 hr | 64 | 96 |
 | standard96s | Rocky 8 | | 12 hr | 256 | 96 |
 | standard96s:shared | Rocky 8 | yes | 48 hr | 1 | 96 |
 | standard96s:test | Rocky 8 | | 1 hr | 64 | 96 |
 | large40 | CentOS 7 | | 48 hr | 4 | 80 |
 | large40:shared | CentOS 7 | yes | 48 hr | 1 | 80 |
 | large40:test | CentOS 7 | | 1 hr | 2 | 80 |
 | large96 | CentOS 7 | | 12 hr | 2 | 144 |
 | large96:shared | CentOS 7 | yes | 48 hr | 1 | 144 |
 | large96:test | CentOS 7 | | 1 hr | 2 | 144 |
 | large96s | Rocky 8 | | 12 hr | 2 | 144 |
 | large96s:shared | Rocky 8 | yes | 48 hr | 1 | 144 |
 | large96s:test | Rocky 8 | | 1 hr | 2 | 144 |
 | huge96 | CentOS 7 | | 24 hr | 256 | 192 |
 | huge96s | Rocky 8 | | 24 hr | 1 | 192 |
 | huge96s:shared | Rocky 8 | yes | 24 hr | 1 | 192 |
 | jupyter:cpu (jupyter) | CentOS 7 | yes | 24 hr | 1 | 40 |
SCC | medium | SL 7 | yes | 48 hr | inf | |
 | fat | SL 7 | yes | 48 hr | inf | |
 | fat+ | SL 7 | yes | 48 hr | inf | |
 | int (jupyter) | SL 7 | yes | 48 hr | inf | |
 | sgiz (reserved) | SL 7 | yes | 48 hr | inf | |
 | sa (reserved) | Rocky 8 | yes | 30 days | inf | |
 | hh (reserved) | Rocky 8 | yes | 14 days | inf | |
 | cidbn (reserved) | Rocky 8 | yes | 14 days | inf | |
The partitions you are allowed to use may be restricted by the kind of account you have and/or your POSIX groups. For example, the partitions marked as reserved in the table above are restricted to specific research groups.
JupyterHub sessions run on the partitions marked with jupyter in the table above.
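The "Core-hours per node" column can be read as a per-node billing weight. Assuming a job's accounting charge is nodes × (core-hours per node) × wall-clock hours, which is a plausible reading of the table but not an official billing formula, a job's cost can be estimated as follows:

```python
# Sketch: estimated accounting charge, ASSUMING
# charge = nodes * (core-hours per node) * wall-clock hours.
# Check the official accounting documentation before relying on this.
def billed_core_hours(nodes: int, core_hours_per_node: int, hours: float) -> float:
    return nodes * core_hours_per_node * hours

# e.g. a 12-hour job on 4 large40 nodes (80 core-hours per node each):
print(billed_core_hours(4, 80, 12))  # 3840
```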
The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, having nodes with different hardware. Additionally, many nodes are in more than one partition.
Partition | Nodes | CPU | RAM per node | Cores | SSD |
---|---|---|---|---|---|
medium40 | 416 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
medium40:shared | 416 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
medium40:test | 424 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
standard96 | 857 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
 | 149 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
standard96:test | 864 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
 | 140 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
large40 | 12 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
large40:shared | 8 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
large40:test | 4 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
large96 | 12 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
large96:shared | 9 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
large96:test | 4 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
huge96 | 2 | 2 × Cascadelake 9242 | 1 522 000 MB | 96 | yes |
huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
jupyter:cpu | 8 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
medium | 94 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
 | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
 | 3 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
 | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
fat | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
 | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
 | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
fat+ | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
 | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
int | 2 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
 | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
sgiz | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
sa | 8 | 2 × Zen3 EPYC 4713 | 512 000 MB | 48 | yes |
hh | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |
cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |
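When deciding how much memory to request per process (e.g. via Slurm's `--mem-per-cpu`), it helps to know roughly how much RAM is available per core. A quick sketch dividing the "RAM per node" column by the "Cores" column for a few node types (the dictionary keys are illustrative labels, not partition identifiers):

```python
# RAM-per-core estimates taken from the hardware table:
# {label: (RAM per node in MB, cores per node)}
node_types = {
    "medium40 (Skylake 6148)": (182_000, 40),
    "standard96 (Cascadelake 9242)": (364_000, 96),
    "large96s (Sapphire Rapids 8468)": (1_030_000, 96),
}
for name, (ram_mb, cores) in node_types.items():
    print(f"{name}: ~{ram_mb // cores} MB per core")
```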
The CPUs
For partitions that have heterogeneous hardware, you can give Slurm options to request the particular hardware you want.
For CPUs, specify the kind of CPU you want by passing the `-C`/`--constraint` option to Slurm.
Use `-C ssd` or `--constraint=ssd` to request a node with a local SSD on the NHR cluster, and `-C local` or `--constraint=local` to request a node with a local SSD on the SCC cluster.
See Slurm for more information.
The CPUs, the options to request them, and some of their properties are given in the table below.
CPU | Cores | -C option | Architecture |
---|---|---|---|
AMD Zen3 EPYC 7763 | 64 | milan | zen3 |
AMD Zen3 EPYC 4713 | 24 | milan | zen3 |
AMD Zen2 EPYC 7742 | 64 | rome | zen2 |
Intel Sapphire Rapids Xeon Platinum 8468 | 48 | sapphirerapids | sapphirerapids |
Intel Cascadelake Xeon Platinum 9242 | 48 | cascadelake | cascadelake |
Intel Skylake Xeon Gold 6148 | 20 | skylake | skylake_avx512 |
Intel Skylake Xeon Gold 6130 | 16 | skylake | skylake_avx512 |
Intel Broadwell Xeon E5-2650 V4 | 12 | broadwell | broadwell |
Intel Haswell Xeon E5-4620 V3 | 10 | haswell | haswell |
Intel Haswell Xeon E7-4809 V3 | 8 | haswell | haswell |
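For example, a batch script requesting Cascade Lake nodes that also have a local SSD on the NHR cluster might look like the following sketch. The partition, node count, time limit, and program name are illustrative, not prescribed values:

```shell
#!/bin/bash
#SBATCH --partition=standard96          # example partition
#SBATCH --constraint="cascadelake&ssd"  # AND-combine features with &
#SBATCH --nodes=2                       # illustrative resource request
#SBATCH --time=12:00:00

srun ./my_program                       # hypothetical executable
```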
Hardware Totals
The total nodes, cores, and RAM for each cluster and sub-cluster are given in the table below.
Cluster | Sub-cluster | Nodes | Cores | RAM (TiB) |
---|---|---|---|---|
NHR | Emmy Phase 1 | 448 | 17,920 | 86.6 |
 | Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
 | Emmy Phase 3 | 411 | 39,456 | 173.4 |
 | TOTAL | 1,881 | 155,488 | 622.8 |
SCC | main | 113 | 9,920 | 46.8 |
 | CIDBN | 30 | 3,840 | 14.1 |
 | sa | 8 | 384 | 3.9 |
 | hh | 7 | 896 | 6.6 |
 | sgiz | 11 | 352 | 1.0 |
 | TOTAL | 169 | 15,392 | 72.6 |