Unified system - Transition guide

During the May 2024 downtime and maintenance, we set up new SLURM controllers. Some of the changes especially affect SCC users.

Warning

The default partition of the unified SLURM controller is now standard96. This is an NHR partition, to which SCC users do not have access. SCC users therefore need to specify the partition explicitly in their batch files with -p medium.

Partitions

The default partition is now standard96, which is an NHR partition. In fact, all NHR partitions are now visible on the SCC and vice versa. SCC users must not simply submit jobs to the default partition: such jobs will remain in the queue indefinitely with the reason PartitionConfig. Instead, explicitly specify the partition with -p medium (which was previously the default on the SCC).

Please use the appropriate login nodes to submit your jobs: glogin-p3.hpc.gwdg.de for scc-cpu and login-mdc.hpc.gwdg.de for medium.
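A minimal batch script sketch illustrating the required partition flag. The wall time, node count, and workload are placeholder assumptions; only the -p medium line is the documented change:

```shell
#!/bin/bash
#SBATCH -p medium          # required: the former SCC default partition must now be named explicitly
#SBATCH -t 01:00:00        # example wall time (assumption)
#SBATCH -N 1               # example node count (assumption)

# replace with your actual workload
srun hostname
```

Submit it from the matching login node, e.g. login-mdc.hpc.gwdg.de for the medium partition.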

SCC Slurm Account Names

Most SCC users have only a single Slurm account associated with their username, which is usually the default account (you do not have to pass -A ACCOUNT to use the default account). Its name has been changed from all to scc_users. The renamed Slurm accounts for SCC users are listed in the table below.

Old       New
all       scc_users
cidbn     cidbn_legacy
cramer    cramer_legacy
gailing   gailing_legacy
gizon     gizon_legacy
gpukurs   gpukurs_legacy
soeding   soeding_legacy
workshop  workshop_legacy

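If you use one of the renamed non-default accounts, pass the new name explicitly when submitting. A hedged example, where job.sh is a placeholder for your own batch script:

```shell
# old (no longer valid): sbatch -A cidbn job.sh
sbatch -A cidbn_legacy job.sh
```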
To see all Slurm accounts you have access to, run the following from a login node.

sacctmgr show assoc user=$USER

QoS

The QoS system has changed, mainly in the names used on the SCC, but some changes also affect the NHR names.

Old    New
short  2h
long   7d
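A sketch of how the renamed QoS appears in a batch file, assuming the standard --qos Slurm flag; the partition line follows the change described above:

```shell
#SBATCH -p medium
#SBATCH --qos=2h     # previously: --qos=short
```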

Selecting GPUs

Only the GPU names change slightly: they must now be written in upper case.

Old          New
-G rtx5000   -G RTX5000
-G v100      -G V100
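For example, a GPU request on the command line now looks like the following sketch, where job.sh is a placeholder for your own batch script:

```shell
# old (no longer valid): sbatch -G v100 job.sh
sbatch -G V100 job.sh
```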