Unified system - Transition guide

We have updated the Slurm controller as part of unifying the systems, which brings several changes that affect users of the old SCC system in particular.

Warning

The default partition of the unified Slurm controller is now standard96, which is an NHR partition. All SCC users therefore need to explicitly specify the partition with -p medium.

Partitions

The default partition is now standard96, which is an NHR partition. In fact, all NHR partitions are visible on the SCC and vice versa. SCC users who previously simply ran on the default partition must now explicitly specify the partition with -p medium.
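For example, an SCC batch job that previously relied on the default partition can be adjusted as follows (a minimal sketch; the job name, time limit, task count, and executable are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=example   # placeholder job name
#SBATCH -p medium            # explicitly request the SCC partition
#SBATCH -t 01:00:00          # placeholder time limit
#SBATCH -n 1                 # placeholder task count

./my_program                 # placeholder executable
```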

Also, you can only submit to the NHR partitions from the NHR front end nodes (glogin.hpc.gwdg.de), and to the SCC partitions from the SCC front end nodes (login-mdc.hpc.gwdg.de).

Currently it is not possible to submit to the NHR partitions from the SCC front ends and vice versa because the storage systems are still separate. We are working on unifying the underlying storage as well.
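For example, to submit to an SCC partition, first log in to an SCC front end node (USERNAME and jobscript.sh are placeholders):

```shell
# Log in to the SCC front end node
ssh USERNAME@login-mdc.hpc.gwdg.de

# From there, submit to an SCC partition
sbatch -p medium jobscript.sh
```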

SCC Slurm Account Names

Most SCC users have only a single Slurm account associated with their username, which is usually the default account (you do not have to pass -A ACCOUNT to use the default account). Its name has changed from all to scc_users. The renamed Slurm accounts for SCC users are listed in the table below.

| Old      | New             |
|----------|-----------------|
| all      | scc_users       |
| cidbn    | cidbn_legacy    |
| cramer   | cramer_legacy   |
| gailing  | gailing_legacy  |
| gizon    | gizon_legacy    |
| gpukurs  | gpukurs_legacy  |
| soeding  | soeding_legacy  |
| workshop | workshop_legacy |

To see all Slurm accounts you have access to, run the following from a login node.

sacctmgr show assoc user=$USER
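To submit under one of the renamed accounts instead of the default, pass it explicitly with -A (the account name below is one example from the table above; jobscript.sh is a placeholder):

```shell
# Submit a job under an explicitly chosen Slurm account
sbatch -A cidbn_legacy jobscript.sh
```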

QoS

The QoS names have changed, mainly for the SCC, but some of the changes also affect the NHR names.

| Old      | New | Previously |
|----------|-----|------------|
| short    | 2h  |            |
| long     | 7d  | 5d         |
| verylong | 14d | 10d        |
| twoweeks | 14d |            |
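A job requesting one of the renamed QoS levels could look like this (a sketch; everything except the QoS name is a placeholder):

```shell
#!/bin/bash
#SBATCH --job-name=example   # placeholder job name
#SBATCH --qos=2h             # new name of the former "short" QoS
#SBATCH -n 1                 # placeholder task count

./my_program                 # placeholder executable
```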

Selecting GPUs

Only the names have changed slightly: they are now written in upper-case letters.

| Old        | New        |
|------------|------------|
| -G rtx5000 | -G RTX5000 |
| -G v100    | -G V100    |
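For example, an interactive job that previously requested -G v100 would now use the upper-case name (the count of 1 is an example, following Slurm's --gpus [type:]count syntax):

```shell
# Request one V100 GPU with the new upper-case name
srun -G V100:1 --pty bash
```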