CPU Partitions

Nodes in these partitions provide many CPU cores for parallelizing calculations.

Partitions

The NHR partitions follow the naming scheme sizeCORES[suffix], where size indicates the amount of RAM (medium, standard, large, or huge), CORES indicates the number of cores, and suffix is included only to differentiate partitions with the same size and CORES. For example, medium96s is a medium-RAM partition whose nodes have 96 cores, with suffix s. SCC partitions do not follow this scheme. The partitions are listed in the table below without hardware details.

| Cluster | Partition | OS | Shared | Max. walltime | Max. nodes per job | Core-hours per node |
|---|---|---|---|---|---|---|
| NHR | medium96s | Rocky 8 | | 12 hr | 256 | 96 |
| | medium96s:test | Rocky 8 | | 1 hr | 64 | 96 |
| | medium40 | CentOS 7 | | 48 hr | 1 | 40 |
| | medium40:shared | CentOS 7 | yes | 12 hr | 2 | 40 |
| | medium40:test | CentOS 7 | | 1 hr | 64 | 40 |
| | standard96 | CentOS 7 | | 12 hr | 256 | 96 |
| | standard96:test | CentOS 7 | | 1 hr | 64 | 96 |
| | standard96s | Rocky 8 | | 12 hr | 256 | 96 |
| | standard96s:shared | Rocky 8 | yes | 48 hr | 1 | 96 |
| | standard96s:test | Rocky 8 | | 1 hr | 64 | 96 |
| | large40 | CentOS 7 | | 48 hr | 4 | 80 |
| | large40:shared | CentOS 7 | yes | 48 hr | 1 | 80 |
| | large40:test | CentOS 7 | | 1 hr | 2 | 80 |
| | large96 | CentOS 7 | | 12 hr | 2 | 144 |
| | large96:shared | CentOS 7 | yes | 48 hr | 1 | 144 |
| | large96:test | CentOS 7 | | 1 hr | 2 | 144 |
| | large96s | Rocky 8 | | 12 hr | 2 | 144 |
| | large96s:shared | Rocky 8 | yes | 48 hr | 1 | 144 |
| | large96s:test | Rocky 8 | | 1 hr | 2 | 144 |
| | huge96 | CentOS 7 | | 24 hr | 256 | 192 |
| | huge96s | Rocky 8 | | 24 hr | 1 | 192 |
| | huge96s:shared | Rocky 8 | yes | 24 hr | 1 | 192 |
| | jupyter:cpu (jupyter) | CentOS 7 | yes | 24 hr | 1 | 40 |
| SCC | medium | SL 7 | yes | 48 hr | inf | |
| | fat | SL 7 | yes | 48 hr | inf | |
| | fat+ | SL 7 | yes | 48 hr | inf | |
| | int (jupyter) | SL 7 | yes | 48 hr | inf | |
| | sgiz (reserved) | SL 7 | yes | 48 hr | inf | |
| | sa (reserved) | Rocky 8 | yes | 30 days | inf | |
| | hh (reserved) | Rocky 8 | yes | 14 days | inf | |
| | cidbn (reserved) | Rocky 8 | yes | 14 days | inf | |
Warning

The partitions you are allowed to use may be restricted by your kind of account and/or POSIX group. For example, the partitions marked as reserved in the table above are restricted to specific research groups.

Info

JupyterHub sessions run on the partitions marked with jupyter in the table above.
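For example, a minimal batch script for the NHR standard96s partition could look like the following sketch. The partition name and limits are taken from the table above; the program name my_program is a placeholder, and depending on your account you may also need to select a project with the -A/--account option.

```bash
#!/usr/bin/env bash
#SBATCH --partition=standard96s   # NHR partition from the table above
#SBATCH --nodes=2                 # well within the 256 nodes-per-job limit
#SBATCH --time=12:00:00           # must not exceed the 12 hr max. walltime
#SBATCH --job-name=cpu-example

# Each standard96s node has 96 cores, so this job can use up to 2 x 96 ranks.
srun ./my_program
```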

The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, having nodes with different hardware. Additionally, many nodes are in more than one partition.

| Partition | Nodes | CPU | RAM per node | Cores | SSD |
|---|---|---|---|---|---|
| medium40 | 416 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
| medium40:shared | 416 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
| medium40:test | 424 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
| medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| standard96 | 857 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 149 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:test | 864 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 140 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| large40 | 12 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| large40:shared | 8 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| large40:test | 4 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| large96 | 12 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:shared | 9 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:test | 4 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| huge96 | 2 | 2 × Cascadelake 9242 | 1 522 000 MB | 96 | yes |
| huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| jupyter:cpu | 8 | 2 × Skylake 6148 | 182 000 MB | 40 | yes |
| medium | 94 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
| | 3 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| fat | 12 | 2 × Broadwell E5-2650v4 | 512 000 MB | 48 | yes |
| | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| fat+ | 4 | 4 × Haswell E5-4620v3 | 1 500 000 MB | 24 | yes |
| | 1 | 4 × Haswell E7-4809v3 | 2 048 000 MB | 32 | yes |
| int | 2 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| sgiz | 11 | 2 × Skylake 6130 | 95 000 MB | 32 | yes |
| sa | 8 | 2 × Zen3 EPYC 7413 | 512 000 MB | 48 | yes |
| hh | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |
| cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |
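To cross-check the table above against the live configuration, you can ask Slurm directly. The following sketch uses standard sinfo format fields (%n node hostname, %c CPUs, %m memory in MB, %f features); the partition name is only an example.

```bash
# List each node of a partition with its core count, RAM, and features.
sinfo -p standard96 --Node -o "%n %c %m %f"
```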

The CPUs

For partitions that have heterogeneous hardware, you can pass Slurm options to request the particular hardware you want. For CPUs, specify the kind of CPU by passing the -C/--constraint option to Slurm. Use -C ssd or --constraint=ssd to request a node with a local SSD on the NHR cluster, and -C local or --constraint=local to request a node with a local SSD on the SCC cluster. See Slurm for more information.
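As an illustration, the following sketch requests an SSD-equipped node in the standard96 partition, which according to the hardware table above contains both nodes with and without local SSDs; my_program is a placeholder.

```bash
#!/usr/bin/env bash
#SBATCH --partition=standard96
#SBATCH --constraint=ssd   # NHR: node with a local SSD (on the SCC, use 'local')
#SBATCH --time=01:00:00

# Runs only on the subset of standard96 nodes that have a local SSD.
srun ./my_program
```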

The CPUs, the options to request them, and some of their properties are given in the table below.

| CPU | Cores | -C option | Architecture |
|---|---|---|---|
| AMD Zen3 EPYC 7763 | 64 | milan | zen3 |
| AMD Zen3 EPYC 7413 | 24 | milan | zen3 |
| AMD Zen2 EPYC 7742 | 64 | rome | zen2 |
| Intel Sapphire Rapids Xeon Platinum 8468 | 48 | sapphirerapids | sapphirerapids |
| Intel Cascadelake Xeon Platinum 9242 | 48 | cascadelake | cascadelake |
| Intel Skylake Xeon Gold 6148 | 20 | skylake | skylake_avx512 |
| Intel Skylake Xeon Gold 6130 | 16 | skylake | skylake_avx512 |
| Intel Broadwell Xeon E5-2650 V4 | 12 | broadwell | broadwell |
| Intel Haswell Xeon E5-4620 V3 | 10 | haswell | haswell |
| Intel Haswell Xeon E7-4809 V3 | 8 | haswell | haswell |
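For instance, to pin a job in the heterogeneous SCC medium partition to one CPU generation, you could combine the partition with the matching -C option from the table above (a sketch; job_script.sh is a placeholder):

```bash
# The SCC medium partition mixes Cascadelake, Broadwell, and Haswell nodes;
# this restricts the job to the Cascadelake 9242 nodes only.
sbatch -p medium -C cascadelake job_script.sh
```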

Hardware Totals

The total nodes, cores, and RAM for each cluster and sub-cluster are given in the table below.

| Cluster | Sub-cluster | Nodes | Cores | RAM (TiB) |
|---|---|---|---|---|
| NHR | Emmy Phase 1 | 448 | 17,920 | 86.6 |
| | Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
| | Emmy Phase 3 | 411 | 39,456 | 173.4 |
| | TOTAL | 1,881 | 155,488 | 622.8 |
| SCC | main | 113 | 9,920 | 46.8 |
| | CIDBN | 30 | 3,840 | 14.1 |
| | sa | 8 | 384 | 3.9 |
| | hh | 7 | 896 | 6.6 |
| | sgiz | 11 | 352 | 1.0 |
| | TOTAL | 169 | 15,392 | 72.6 |