CPU Partitions

Nodes in these partitions provide many CPU cores for parallelizing calculations.

Islands

The islands are listed below, with a brief overview of their hardware.

| Island | CPUs | Fabric |
|---|---|---|
| Emmy Phase 1 | Intel Skylake | Omni-Path (100 Gb/s) |
| Emmy Phase 2 | Intel Cascade Lake | Omni-Path (100 Gb/s) |
| Emmy Phase 3 | Intel Sapphire Rapids | Omni-Path (100 Gb/s) |
| SCC Legacy | Intel Cascade Lake | Omni-Path (100 Gb/s) |
| CIDBN | AMD Zen3 | Infiniband (100 Gb/s) |
| FG | AMD Zen3 | RoCE (25 Gb/s) |
| SOE | AMD Zen2 | RoCE (25 Gb/s) |
Info

See Logging In for the best login nodes for each island (other login nodes will often work, but may have access to different storage systems and their hardware will be less of a match).

See Cluster Storage Map for the storage systems accessible from each island and their relative performance characteristics.

See Software Stacks for the available and default software stacks for each island.

Legacy SCC users only have access to the SCC Legacy island, unless they are also CIDBN, FG, or SOE users, in which case they also have access to those islands.

Partitions

The NHR partitions follow the naming scheme sizeCORES[suffix], where size indicates the amount of RAM (medium, standard, large, or huge), CORES indicates the number of cores per node, and suffix is only included to differentiate partitions with the same size and CORES. The SCC, KISSKI, REACT, etc. partitions do not follow this scheme. The partitions are listed in the table below, grouped by which users can use them and by island, without hardware details; a minimal job submission sketch follows the table. See Types of User Accounts to determine which kind of user you are. Note that some users belong to multiple classifications (e.g. all CIDBN/FG/SOE users are also SCC users).

| Users | Island | Partition | OS | Shared | Default/Max. Time Limit | Max. Nodes per Job | Core-hr per Core |
|---|---|---|---|---|---|---|---|
| NHR | Emmy P3 | medium96s | Rocky 8 | | 12/48 hr | 256 | 0.75 |
| | | medium96s:test | Rocky 8 | | 1/1 hr | 64 | 0.75 |
| | | standard96s | Rocky 8 | | 12/48 hr | 256 | 1 |
| | | standard96s:shared | Rocky 8 | yes | 12/48 hr | 1 | 1 |
| | | standard96s:test | Rocky 8 | | 1/1 hr | 64 | 1 |
| | | large96s | Rocky 8 | | 12/48 hr | 2 | 1.5 |
| | | large96s:shared | Rocky 8 | yes | 12/48 hr | 1 | 2 |
| | | large96s:test | Rocky 8 | | 1/1 hr | 2 | 1.5 |
| | | huge96s | Rocky 8 | | 12/24 hr | 1 | 2 |
| | | huge96s:shared | Rocky 8 | yes | 12/24 hr | 1 | 2 |
| | Emmy P2 | standard96 | Rocky 8 | | 12/48 hr | 256 | 1 |
| | | standard96:shared | Rocky 8 | yes | 12/48 hr | 64 | 1 |
| | | standard96:test | Rocky 8 | | 1/1 hr | 64 | 1 |
| | | large96 | Rocky 8 | | 12/48 hr | 2 | 1.5 |
| | | large96:shared | Rocky 8 | yes | 12/48 hr | 1 | 2 |
| | | large96:test | Rocky 8 | | 1/1 hr | 2 | 1.5 |
| | | huge96 | Rocky 8 | | 12/24 hr | 256 | 2 |
| SCC | Emmy P3 | scc-cpu | Rocky 8 | yes | 12/48 hr | inf | 1 |
| | SCC Legacy | medium | Rocky 8 | yes | 12/48 hr | inf | 1 |
| all | Emmy P2, Emmy P1 | jupyter | Rocky 8 | yes | 12/48 hr | 1 | 1 |
| CIDBN | CIDBN | cidbn | Rocky 8 | yes | 12 hr/14 days | inf | |
| FG | FG | fg | Rocky 8 | yes | 12 hr/30 days | inf | |
| SOEDING | SOE | soeding | Rocky 8 | yes | 12 hr/14 days | inf | |
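A job is steered to one of these partitions with Slurm's -p/--partition option. The following batch script is a minimal sketch (the script and program names are placeholders) for an NHR user submitting to standard96s within the limits from the table above:

```bash
#!/bin/bash
#SBATCH --partition=standard96s    # NHR partition on Emmy Phase 3 (see table above)
#SBATCH --nodes=2                  # well below the 256-node maximum per job
#SBATCH --ntasks-per-node=96       # one task per core on a 96-core node
#SBATCH --time=08:00:00            # below the 48 hr maximum for this partition

srun ./my_mpi_program              # placeholder application
```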
Info

JupyterHub sessions run on the jupyter partition. This partition is oversubscribed (multiple jobs share resources) and consists of both CPU and GPU nodes.

Info

The default time limit for most partitions is 12 hours. Failed jobs that requested a longer runtime are only refunded for the first 12 hours. This is detailed on the Slurm page about job runtime.
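As a short sketch, an explicit time limit above the 12 hour default (but below a partition's maximum) is requested with -t/--time; jobscript.sh below is a placeholder:

```bash
# Ask for 36 hours on standard96 (maximum 48 hr); if the job fails,
# at most the first 12 hours are refunded, as described above.
sbatch --partition=standard96 --time=36:00:00 jobscript.sh
```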

The hardware for the different nodes in each partition is listed in the table below. Note that some partitions are heterogeneous, having nodes with different hardware. Additionally, many nodes are in more than one partition.

| Partition | Nodes | CPU | RAM per node | Cores per Node | SSD |
|---|---|---|---|---|---|
| medium96s | 380 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| medium96s:test | 164 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| standard96 | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 148 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:shared | 853 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 138 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96:test | 856 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| | 140 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| standard96s | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:shared | 220 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| standard96s:test | 224 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| large96 | 12 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:shared | 9 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96:test | 4 | 2 × Cascadelake 9242 | 747 000 MB | 96 | yes |
| large96s | 13 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:shared | 9 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| large96s:test | 4 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| huge96 | 2 | 2 × Cascadelake 9242 | 1 522 000 MB | 96 | yes |
| huge96s | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| huge96s:shared | 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| jupyter | 16 | 2 × Skylake 6148 | 763 000 MB | 40 | yes |
| | 8 | 2 × Cascadelake 9242 | 364 000 MB | 96 | |
| medium | 94 | 2 × Cascadelake 9242 | 364 000 MB | 96 | yes |
| scc-cpu | ≤ 49 | 2 × Sapphire Rapids 8468 | 256 000 MB | 96 | yes |
| | ≤ 49 | 2 × Sapphire Rapids 8468 | 514 000 MB | 96 | yes |
| | ≤ 24 | 2 × Sapphire Rapids 8468 | 1 030 000 MB | 96 | yes |
| | ≤ 2 | 2 × Sapphire Rapids 8468 | 2 062 000 MB | 96 | yes |
| cidbn | 30 | 2 × Zen3 EPYC 7763 | 496 000 MB | 128 | yes |
| fg | 8 | 2 × Zen3 EPYC 7413 | 512 000 MB | 48 | yes |
| soeding | 7 | 2 × Zen2 EPYC 7742 | 1 000 000 MB | 128 | yes |

The CPUs

For partitions that have heterogeneous hardware, you can pass Slurm options to request the particular hardware you want. To select a specific kind of CPU, pass a -C/--constraint option to Slurm. Use -C ssd or --constraint=ssd to request a node with a local SSD. If you need a particularly large amount of memory, use the --mem option to request an appropriate amount (per node). See Slurm for more information; a combined sketch follows the table below.

The CPUs, the options to request them, and some of their properties are given in the table below.

| CPU | Cores per CPU | -C option | Architecture |
|---|---|---|---|
| AMD Zen3 EPYC 7763 | 64 | milan or zen3 | zen3 |
| AMD Zen3 EPYC 7413 | 24 | milan or zen3 | zen3 |
| AMD Zen2 EPYC 7742 | 64 | rome or zen2 | zen2 |
| Intel Sapphire Rapids Xeon Platinum 8468 | 48 | sapphirerapids | sapphirerapids |
| Intel Cascadelake Xeon Platinum 9242 | 48 | cascadelake | cascadelake |
| Intel Skylake Xeon Gold 6148 | 20 | skylake | skylake_avx512 |
| Intel Skylake Xeon Gold 6130 | 16 | skylake | skylake_avx512 |
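As a sketch combining these options (the script and program names are placeholders), the following batch script asks the heterogeneous scc-cpu partition for Sapphire Rapids nodes that have a local SSD and more memory than the smallest nodes provide:

```bash
#!/bin/bash
#SBATCH --partition=scc-cpu                 # heterogeneous partition (see hardware table above)
#SBATCH --constraint="sapphirerapids&ssd"   # CPU type from the table above AND a local SSD
#SBATCH --mem=400G                          # per-node memory; more than the 256 000 MB nodes offer
#SBATCH --ntasks=96
#SBATCH --time=12:00:00

srun ./my_program                           # placeholder application
```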

Hardware Totals

The total nodes, cores, and RAM for each island are given in the table below.

| Island | Nodes | Cores | RAM (TiB) |
|---|---|---|---|
| Emmy Phase 1 | 16 | 640 | 11.6 |
| Emmy Phase 2 | 1,022 | 98,112 | 362.8 |
| Emmy Phase 3 | 411 | 39,456 | 173.4 |
| SCC Legacy | 94 | 9,024 | 32.6 |
| CIDBN | 30 | 3,840 | 14.1 |
| FG | 8 | 384 | 3.9 |
| SOE | 7 | 896 | 6.6 |
| TOTAL | 1,588 | 152,352 | 605 |