CPU Partitions

Nodes in these partitions provide many CPU cores for parallelizing calculations.

Partitions

The NHR partitions follow the naming scheme sizeCORES[suffix], where size indicates the amount of RAM (medium, standard, large, or huge), CORES indicates the number of cores, and suffix is only included to differentiate partitions with the same size and CORES. SCC partitions do not follow this scheme. The partitions are listed in the table below without hardware details.

| Cluster | Partition | OS | Shared | Max. walltime | Max. nodes per job | Core-hours per node |
|---|---|---|---|---|---|---|
| NHR | medium96s | Rocky 8 | | 12 hr | 256 | 96 |
| NHR | medium96s:test | Rocky 8 | | 1 hr | 64 | 96 |
| NHR | standard96 | Rocky 8 | | 12 hr | 256 | 96 |
| NHR | standard96:shared | Rocky 8 | yes | 48 hr | 64 | 96 |
| NHR | standard96:test | Rocky 8 | | 1 hr | 64 | 96 |
| NHR | standard96s | Rocky 8 | | 12 hr | 256 | 96 |
| NHR | standard96s:shared | Rocky 8 | yes | 48 hr | 1 | 96 |
| NHR | standard96s:test | Rocky 8 | | 1 hr | 64 | 96 |
| NHR | large96 | Rocky 8 | | 12 hr | 2 | 192 |
| NHR | large96:shared | Rocky 8 | yes | 48 hr | 1 | 192 |
| NHR | large96:test | Rocky 8 | | 1 hr | 2 | 192 |
| NHR | large96s | Rocky 8 | | 12 hr | 2 | 192 |
| NHR | large96s:shared | Rocky 8 | yes | 48 hr | 1 | 192 |
| NHR | large96s:test | Rocky 8 | | 1 hr | 2 | 192 |
| NHR | huge96 | Rocky 8 | | 24 hr | 256 | 192 |
| NHR | huge96s | Rocky 8 | | 24 hr | 1 | 192 |
| NHR | huge96s:shared | Rocky 8 | yes | 24 hr | 1 | 192 |
| NHR | jupyter:cpu (jupyter) | Rocky 8 | yes | 24 hr | 1 | 192 |
| SCC | medium | Rocky 8 | yes | 48 hr | inf | |
| SCC | fat | Rocky 8 | yes | 48 hr | inf | |
| SCC | fat+ | Rocky 8 | yes | 48 hr | inf | |
| SCC | int (jupyter) | Rocky 8 | yes | 48 hr | inf | |
| SCC | sgiz (reserved) | Rocky 8 | yes | 48 hr | inf | |
| SCC | fg (reserved) | Rocky 8 | yes | 30 days | inf | |
| SCC | soeding (reserved) | Rocky 8 | yes | 14 days | inf | |
| SCC | cidbn (reserved) | Rocky 8 | yes | 14 days | inf | |
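As a rough illustration of how the "Core-hours per node" column translates into job cost (the exact charging formula is defined by the compute center; the calculation below is an assumption for illustration only), a whole-node job's charge scales with nodes × core-hours per node × walltime:

```python
# Hypothetical illustration of node-based core-hour accounting.
# Assumption: a whole-node job is charged
#   (nodes used) x (core-hours per node) x (walltime in hours);
# check your compute center's actual accounting rules.

def job_core_hours(nodes: int, core_hours_per_node: int, walltime_hours: float) -> float:
    """Estimate the core-hour charge for a whole-node job."""
    return nodes * core_hours_per_node * walltime_hours

# Example: 4 nodes on large96 (192 core-hours per node) for 6 hours.
print(job_core_hours(4, 192, 6))  # -> 4608
```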
Warning

The partitions you are allowed to use may be restricted by the kind of account you have and/or your POSIX group. For example, the partitions marked as reserved in the table above are reserved for (i.e. restricted to) specific research groups.

Info

JupyterHub sessions run on the partitions marked with jupyter in the table above.
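A minimal batch script targeting one of the partitions above might look like the following sketch (the partition, walltime, and node count are example values taken from the table; `my_mpi_program` is a placeholder):

```shell
#!/bin/bash
#SBATCH --partition=standard96   # a partition from the table above
#SBATCH --nodes=4                # must not exceed "Max. nodes per job"
#SBATCH --time=12:00:00          # must not exceed "Max. walltime" (12 hr here)

srun ./my_mpi_program            # placeholder for your application
```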

The NHR nodes are grouped into subclusters of different hardware. The partitions for each NHR subcluster are listed in the table below where the parts of partition names in square brackets are optional (e.g. medium96s[:*] includes medium96s and medium96s:test).

| NHR Subcluster | CPUs | Partitions |
|---|---|---|
| Emmy Phase 2 | Intel Cascade Lake | jupyter:cpu, standard96[:*], large96[:*], huge96 |
| Emmy Phase 3 | Intel Sapphire Rapids | medium96s[:*], standard96s[:*], large96s[:*], huge96s[:*] |

The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, containing nodes with different hardware. Additionally, many nodes are in more than one partition.

| Partition | Nodes | CPU | RAM per node | Cores | SSD |
|---|---|---|---|---|---|
| medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| standard96 | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| standard96 | 148 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:shared | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| standard96:shared | 138 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:test | 856 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| standard96:test | 140 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| large96 | 12 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:shared | 9 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:test | 4 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| huge96 | 2 | 2 × Cascadelake 9242 | 1 522 000 MB | 96 | yes |
| huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| jupyter:cpu | 8 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| medium | 94 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| medium | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
| medium | 3 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| medium | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| fat | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
| fat | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| fat | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| fat+ | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| fat+ | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| int | 2 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| int | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| sgiz | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| fg | 8 | 2 × Zen3 EPYC 4713 | 512 000 MB | 48 | yes |
| soeding | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |
| cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |

The CPUs

For partitions that have heterogeneous hardware, you can give Slurm options to request the particular hardware you want. To select a kind of CPU, pass the -C/--constraint option to Slurm. Use -C ssd (or --constraint=ssd) to request a node with a local SSD on the NHR cluster, and -C local (or --constraint=local) to request one on the SCC cluster. See Slurm for more information.
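For example, on the SCC's heterogeneous medium partition a constraint can pin a job to one node type. The constraint names below come from the CPU table in this section; `my_program` and `job.sh` are placeholders:

```shell
# SCC: request a Broadwell node in the heterogeneous "medium" partition
srun --partition=medium --constraint=broadwell ./my_program

# SCC: request a node with a local SSD (SCC uses the "local" feature name)
srun --partition=medium -C local ./my_program

# NHR: request an SSD-equipped node in standard96 ("ssd" feature name)
sbatch -p standard96 -C ssd job.sh
```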

The CPUs, the options to request them, and some of their properties are given in the table below.

| CPU | Cores | -C option | Architecture |
|---|---|---|---|
| AMD Zen3 EPYC 7763 | 64 | milan or zen3 | zen3 |
| AMD Zen3 EPYC 4713 | 24 | milan or zen3 | zen3 |
| AMD Zen2 EPYC 7742 | 64 | rome or zen2 | zen2 |
| Intel Sapphire Rapids Xeon Platinum 8468 | 48 | sapphirerapids | sapphirerapids |
| Intel Cascadelake Xeon Platinum 9242 | 48 | cascadelake | cascadelake |
| Intel Skylake Xeon Gold 6130 | 16 | skylake | skylake_avx512 |
| Intel Broadwell Xeon E5-2650 V4 | 12 | broadwell | broadwell |
| Intel Haswell Xeon E5-4620 V3 | 10 | haswell | haswell |
| Intel Haswell Xeon E7-4809 V3 | 8 | haswell | haswell |

The entire CPU table (downloadable)

| Cluster | Partition | Cores per node | Core-hours per node | Max. walltime | Max. nodes per job | Nodes | RAM per node | OS | CPU | -C option | Cores per CPU | SSD | Shared |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| NHR | medium96s | 96 | 96 | 12 hr | 256 | 380 | 256 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | medium96s:test | 96 | 96 | 1 hr | 64 | 164 | 256 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | standard96 | 96 | 96 | 12 hr | 256 | 853 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | no | no |
| NHR | standard96 | 96 | 96 | 12 hr | 256 | 148 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | standard96:shared | 96 | 96 | 48 hr | 64 | 853 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | no | yes |
| NHR | standard96:shared | 96 | 96 | 48 hr | 64 | 138 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | standard96:test | 96 | 96 | 1 hr | 64 | 856 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | no | no |
| NHR | standard96:test | 96 | 96 | 1 hr | 64 | 140 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | standard96s | 96 | 96 | 12 hr | 256 | 220 | 514 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | standard96s:shared | 96 | 96 | 48 hr | 1 | 220 | 514 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | yes |
| NHR | standard96s:test | 96 | 96 | 1 hr | 64 | 224 | 514 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | – |
| NHR | large96 | 96 | 192 | 12 hr | 2 | 12 | 747 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | large96:shared | 96 | 192 | 48 hr | 1 | 9 | 747 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | yes |
| NHR | large96:test | 96 | 192 | 1 hr | 2 | 4 | 747 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | large96s | 96 | 192 | 12 hr | 2 | 13 | 1030 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | large96s:shared | 96 | 192 | 48 hr | 1 | 9 | 1030 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | yes |
| NHR | large96s:test | 96 | 192 | 1 hr | 2 | 4 | 1030 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | huge96 | 96 | 192 | 24 hr | 256 | 2 | 1522 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | no |
| NHR | huge96s | 96 | 192 | 24 hr | 1 | 2 | 2062 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | no |
| NHR | huge96s:shared | 96 | 192 | 24 hr | 1 | 2 | 2062 GB | Rocky 8 | 2 × Sapphire Rapids 8468 | sapphirerapids | 48 | yes | yes |
| NHR | jupyter:cpu (jupyter) | 96 | 192 | 24 hr | 1 | 8 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | – | yes |
| SCC | medium | 96 | – | 48 hr | inf | 94 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | yes |
| SCC | medium | 48 | – | 48 hr | inf | 12 | 512 GB | Rocky 8 | 2 × Broadwell E5-2650v4 | broadwell | 12 | yes | no |
| SCC | medium | 24 | – | 48 hr | inf | 3 | 1500 GB | Rocky 8 | 4 × Haswell E5-4620v3 | haswell | 10 | yes | no |
| SCC | medium | 32 | – | 48 hr | inf | 1 | 2048 GB | Rocky 8 | 4 × Haswell E7-4809v3 | haswell | 8 | yes | no |
| SCC | fat | 48 | – | 48 hr | inf | 12 | 512 GB | Rocky 8 | 2 × Broadwell E5-2650v4 | broadwell | 12 | yes | yes |
| SCC | fat | 24 | – | 48 hr | inf | 4 | 1500 GB | Rocky 8 | 4 × Haswell E5-4620v3 | haswell | 10 | yes | no |
| SCC | fat | 32 | – | 48 hr | inf | 1 | 2048 GB | Rocky 8 | 4 × Haswell E7-4809v3 | haswell | 8 | yes | no |
| SCC | fat+ | 24 | – | 48 hr | inf | 4 | 1500 GB | Rocky 8 | 4 × Haswell E5-4620v3 | haswell | 10 | yes | yes |
| SCC | fat+ | 32 | – | 48 hr | inf | 1 | 2048 GB | Rocky 8 | 4 × Haswell E7-4809v3 | haswell | 8 | yes | no |
| SCC | int (jupyter) | 96 | – | 48 hr | inf | 2 | 364 GB | Rocky 8 | 2 × Cascadelake 9242 | cascadelake | 48 | yes | yes |
| SCC | int (jupyter) | 32 | – | 48 hr | inf | 11 | 95 GB | Rocky 8 | 2 × Skylake 6130 | skylake | 16 | yes | no |
| SCC | sgiz (reserved) | 32 | – | 48 hr | inf | 11 | 95 GB | Rocky 8 | 2 × Skylake 6130 | skylake | 16 | yes | yes |
| SCC | fg (reserved) | 48 | – | 30 days | inf | 8 | 512 GB | Rocky 8 | 2 × Zen3 EPYC 4713 | milan or zen3 | 24 | yes | yes |
| SCC | soeding (reserved) | 128 | – | 14 days | inf | 7 | 1000 GB | Rocky 8 | 2 × Zen2 EPYC 7742 | rome or zen2 | 64 | yes | yes |
| SCC | cidbn (reserved) | 128 | – | 14 days | inf | 30 | 496 GB | Rocky 8 | 2 × Zen3 EPYC 7763 | milan or zen3 | 64 | yes | yes |

Hardware Totals

The total nodes, cores, and RAM for each cluster and sub-cluster are given in the table below.

| Cluster | Sub-cluster | Nodes | Cores | RAM (TiB) |
|---|---|---|---|---|
| NHR | Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
| NHR | Emmy Phase 3 | 411 | 39,456 | 173.4 |
| NHR | TOTAL | 1,433 | 137,568 | 537.2 |
| SCC | main | 113 | 9,920 | 46.8 |
| SCC | CIDBN | 30 | 3,840 | 14.1 |
| SCC | sa | 8 | 384 | 3.9 |
| SCC | hh | 7 | 896 | 6.6 |
| SCC | sgiz | 11 | 352 | 1.0 |
| SCC | TOTAL | 169 | 15,392 | 72.6 |