CPU Partitions

Nodes in these partitions provide many CPU cores for parallelizing calculations.

Islands

The islands are listed below with a brief overview of their hardware.

| Island | CPUs | Fabric |
|---|---|---|
| Emmy Phase 1 | Intel Skylake | Omni-Path (100 Gb/s) |
| Emmy Phase 2 | Intel Cascade Lake | Omni-Path (100 Gb/s) |
| Emmy Phase 3 | Intel Sapphire Rapids | Omni-Path (100 Gb/s) |
| SCC Legacy | Intel Cascade Lake, Intel Skylake | Omni-Path (100 Gb/s), none (Ethernet only) |
| CIDBN | AMD Zen3 | Infiniband (100 Gb/s) |
| FG | AMD Zen3 | RoCE (25 Gb/s) |
| SOE | AMD Zen2 | RoCE (25 Gb/s) |
Info

See Logging In for the best login nodes for each island (other login nodes will often work, but they may have access to different storage systems and their hardware may be a poorer match for the island's compute nodes).

See Cluster Storage Map for the storage systems accessible from each island and their relative performance characteristics.

See Software Stacks for the available and default software stacks for each island.

Legacy SCC users only have access to the SCC Legacy island unless they are also CIDBN, FG, or SOE users, in which case they also have access to those islands.

Partitions

The NHR partitions follow the naming scheme `sizeCORES[suffix]`, where `size` indicates the amount of RAM (medium, standard, large, or huge), `CORES` indicates the number of cores per node, and `suffix` is included only to differentiate partitions with the same `size` and `CORES`. For example, `standard96` and `standard96s` both provide standard-RAM nodes with 96 cores, with the `s` suffix distinguishing the two hardware generations. SCC, KISSKI, REACT, etc. partitions do not follow this scheme. The partitions are listed in the table below by which users can use them and by island, without hardware details; an example job script selecting a partition follows the table. See Types of User Accounts to determine which kind of user you are. Note that some users are members of multiple classifications (e.g. all CIDBN/FG/SOE users are also SCC users).

| Users | Island | Partition | OS | Shared | Max. walltime | Max. nodes per job | Core-hr per core |
|---|---|---|---|---|---|---|---|
| NHR | Emmy P3 | medium96s | Rocky 8 | | 12 hr | 256 | 0.75 |
| | | medium96s:test | Rocky 8 | | 1 hr | 64 | 0.75 |
| | | standard96s | Rocky 8 | | 12 hr | 256 | 1 |
| | | standard96s:shared | Rocky 8 | yes | 48 hr | 1 | 1 |
| | | standard96s:test | Rocky 8 | | 1 hr | 64 | 1 |
| | | large96s | Rocky 8 | | 12 hr | 2 | 1.5 |
| | | large96s:shared | Rocky 8 | yes | 48 hr | 1 | 2 |
| | | large96s:test | Rocky 8 | | 1 hr | 2 | 1.5 |
| | | huge96s | Rocky 8 | | 24 hr | 1 | 2 |
| | | huge96s:shared | Rocky 8 | yes | 24 hr | 1 | 2 |
| | Emmy P2 | standard96 | Rocky 8 | | 12 hr | 256 | 1 |
| | | standard96:shared | Rocky 8 | yes | 48 hr | 64 | 1 |
| | | standard96:test | Rocky 8 | | 1 hr | 64 | 1 |
| | | large96 | Rocky 8 | | 12 hr | 2 | 1.5 |
| | | large96:shared | Rocky 8 | yes | 48 hr | 1 | 2 |
| | | large96:test | Rocky 8 | | 1 hr | 2 | 1.5 |
| | | huge96 | Rocky 8 | | 24 hr | 256 | 2 |
| SCC | Emmy P3 | scc-cpu | Rocky 8 | yes | 48 hr | inf | 1 |
| | SCC Legacy | medium | Rocky 8 | yes | 48 hr | inf | 1 |
| | | sgiz | Rocky 8 | yes | 48 hr | inf | |
| all | Emmy P1 | jupyter (jupyter) | Rocky 8 | yes | 24 hr | 1 | 1 |
| NHR, KISSKI, REACT | Emmy P2 | jupyter:cpu (jupyter) | Rocky 8 | yes | 24 hr | 1 | 1 |
| CIDBN | CIDBN | cidbn | Rocky 8 | yes | 14 days | inf | |
| FG | FG | fg | Rocky 8 | yes | 30 days | inf | |
| SOEDING | SOE | soeding | Rocky 8 | yes | 14 days | inf | |
Info

JupyterHub sessions run on the partitions marked with jupyter in the table above. These partitions are oversubscribed (multiple jobs share resources). Additionally, the jupyter partition is composed of both CPU and GPU nodes.
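A minimal example job script (the resource numbers and program name are placeholders, not recommendations) that selects one of the partitions above with `-p`/`--partition`:

```bash
#!/bin/bash
#SBATCH --partition=standard96s   # non-shared NHR partition from the table above
#SBATCH --nodes=2                 # within the partition's 256-node limit
#SBATCH --ntasks-per-node=96      # one task per core on each 96-core node
#SBATCH --time=12:00:00           # the partition's maximum walltime

srun ./my_program                 # placeholder; replace with your application
```

On partitions marked as shared, nodes host multiple jobs at once, so request only the cores and memory you need; on the other partitions, jobs are allocated whole nodes.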

The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, having nodes with different hardware. Additionally, many nodes are in more than one partition.

| Partition | Nodes | CPU | RAM per node | Cores | SSD |
|---|---|---|---|---|---|
| medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| standard96 | 853 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | |
| | 148 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | yes |
| standard96:shared | 853 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | |
| | 138 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | yes |
| standard96:test | 856 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | |
| | 140 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | yes |
| standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| large96 | 12 | 2 × Cascade Lake 9242 | 747 000 MB | 96 | yes |
| large96:shared | 9 | 2 × Cascade Lake 9242 | 747 000 MB | 96 | yes |
| large96:test | 4 | 2 × Cascade Lake 9242 | 747 000 MB | 96 | yes |
| large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| huge96 | 2 | 2 × Cascade Lake 9242 | 1 522 000 MB | 96 | yes |
| huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| jupyter | 16 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| jupyter:cpu | 8 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | |
| medium | 94 | 2 × Cascade Lake 9242 | 364 000 MB | 96 | yes |
| scc-cpu | ≤ 49 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| | ≤ 49 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| | ≤ 24 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| | ≤ 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| sgiz | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |
| fg | 8 | 2 × Zen3 EPYC 7413 | 512 000 MB | 48 | yes |
| soeding | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |

The CPUs

For partitions that have heterogeneous hardware, you can pass Slurm options to request the particular hardware you want. To get a specific kind of CPU, pass the `-C`/`--constraint` option with the CPU's constraint name from the table below. The same mechanism selects local SSDs: use `-C ssd` (or `--constraint=ssd`) to request a node with a local SSD on the NHR cluster, and `-C local` (or `--constraint=local`) on the SCC cluster. See Slurm for more information.
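For example, a sketch of a job script that keeps a job on the standard96 partition (which is heterogeneous in this respect) off the nodes without local SSDs; the program name is a placeholder:

```bash
#!/bin/bash
#SBATCH --partition=standard96   # only some of its nodes have a local SSD
#SBATCH --constraint=ssd         # NHR cluster: run only on nodes that do
#SBATCH --nodes=1
#SBATCH --time=01:00:00

srun ./my_io_heavy_program       # placeholder; replace with your application
```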

The CPUs, the options to request them, and some of their properties are given in the table below.

| CPU | Cores | -C option | Architecture |
|---|---|---|---|
| AMD Zen3 EPYC 7763 | 64 | `milan` or `zen3` | zen3 |
| AMD Zen3 EPYC 7413 | 24 | `milan` or `zen3` | zen3 |
| AMD Zen2 EPYC 7742 | 64 | `rome` or `zen2` | zen2 |
| Intel Sapphire Rapids Xeon Platinum 8468 | 48 | `sapphirerapids` | sapphirerapids |
| Intel Cascade Lake Xeon Platinum 9242 | 48 | `cascadelake` | cascadelake |
| Intel Skylake Xeon Gold 6148 | 20 | `skylake` | skylake_avx512 |
| Intel Skylake Xeon Gold 6130 | 16 | `skylake` | skylake_avx512 |
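As an illustration (the partition and resource numbers are examples only), the same option requests a particular CPU, and `sinfo` can show which constraint names the nodes of each partition advertise:

```bash
# Interactive shell on one Cascade Lake node. The constraint is redundant on an
# all-Cascade-Lake partition, but shows the syntax for heterogeneous ones.
srun --partition=standard96 --constraint=cascadelake \
     --nodes=1 --time=01:00:00 --pty bash

# List the features (valid -C values) advertised by each partition's nodes.
sinfo --format="%P %f"
```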

Hardware Totals

The total nodes, cores, and RAM for each island are given in the table below.

| Island | Nodes | Cores | RAM (TiB) |
|---|---|---|---|
| Emmy Phase 1 | 16 | 640 | 11.6 |
| Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
| Emmy Phase 3 | 411 | 39,456 | 173.4 |
| SCC Legacy | 105 | 9,376 | 43.6 |
| CIDBN | 30 | 3,840 | 14.1 |
| FG | 8 | 384 | 3.9 |
| SOE | 7 | 896 | 6.6 |
| TOTAL | 1,599 | 152,704 | 616 |