SCRATCH/WORK
Warning
Permanent long-lived SCRATCH/WORK directories are being phased out in favor of dynamically created workspaces with limited lifetimes. The phase-out schedule is:
| | SCRATCH SCC | SCRATCH RZG |
|---|---|---|
| User permanent directories no longer issued | 1st of October 2025 | 1st of October 2025 |
| Project permanent directories no longer issued | 1st of October 2025 | |
After this point, only workspaces will continue to be available on SCRATCH RZG (formerly “SCRATCH Grete”); each workspace directory has a limited lifetime.
The SCRATCH SCC filesystem will be retired on March 31, 2026.
Info
The old SCRATCH EMMY filesystems available at /scratch-emmy, /mnt/lustre-emmy-hdd and /mnt/lustre-emmy-ssd (as well as the symlink /scratch on the Emmy P2 nodes) were already retired and should no longer be used in compute jobs.
SCRATCH/WORK data stores are meant for active data and are configured for high performance at the expense of robustness. The characteristics of the SCRATCH/WORK data stores are:
- Optimized for performance from the sub-clusters located in the same building
- Optimized for high input/output bandwidth from many nodes and jobs at the same time
- Optimized for a moderate number of medium to large files
- Meant for active data (heavily used data with a relatively short lifetime)
- Has a quota
- Has NEITHER backups nor snapshots
Warning
The SCRATCH filesystems have NO BACKUPS. Their performance comes at the price of robustness, meaning they are more fragile than other systems. This means there is a non-negligible risk of data on them being completely lost if more than a few components/drives in the underlying storage fail at the same time.
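Because lost SCRATCH data cannot be recovered, copying important results to a backed-up location is the user's responsibility. A minimal sketch of that habit, using temporary directories as stand-ins for a SCRATCH directory and a backed-up target so it can run anywhere (the real paths depend on your project and are placeholders here):

```shell
#!/bin/sh
# SCRATCH has no backups, so results must be copied off by hand.
# Temporary directories stand in for the real locations; substitute
# your own paths (e.g. a results directory on SCRATCH as the source).
SRC=$(mktemp -d)   # stand-in for a results directory on SCRATCH
DST=$(mktemp -d)   # stand-in for a directory on a backed-up data store
echo "final numbers" > "$SRC/results.txt"
cp -a "$SRC/." "$DST/"   # -a preserves timestamps and permissions
cat "$DST/results.txt"
```

For large transfers, a tool that can resume interrupted copies (such as rsync) is usually a better fit than cp.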
There is one SCRATCH/WORK data store for the SCC and one for NHR, which are shown in the table below and detailed by Project/User kind in separate subsections.
| Project/User Kind | Name | Media | Capacity | Filesystem |
|---|---|---|---|---|
| SCC | SCRATCH SCC | HDD with metadata on SSD | 2.1 PiB | BeeGFS |
| NHR | SCRATCH RZG (formerly “SCRATCH Grete”) | SSD | 509 TiB | Lustre |
SCC
Projects get a SCRATCH SCC directory at /scratch-scc/projects/PROJECT, which has a Project Map symlink with the name dir.scratch-scc.
Users used to get a SCRATCH SCC directory at /scratch-scc/users/USER, but these are no longer issued (existing ones will be preserved until the filesystem is retired).
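The project path follows a fixed pattern, so a job script can assemble it from the project name. A small sketch (the project name myproject is a placeholder, and the directory is not created or checked here):

```shell
#!/bin/sh
# Build the per-project SCRATCH SCC path (PROJECT is a placeholder name;
# substitute your actual project).
PROJECT="myproject"
SCRATCH_DIR="/scratch-scc/projects/$PROJECT"
echo "$SCRATCH_DIR"
```

In practice, the dir.scratch-scc Project Map symlink reaches the same directory without hard-coding the path.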
NHR
In the past, projects used to be issued directories in each SCRATCH/WORK data store, which will not be removed for older projects that still have them:
- New projects in the HPC Project Portal get the directories marked “new”.
- Legacy NHR/HLRN projects started before 2024/Q2 have the directories marked “legacy”.
- Legacy NHR/HLRN projects that have been migrated to the HPC Project Portal keep the directories marked “legacy” and get the directories marked “new” (they get both).
See NHR/HLRN Project Migration for more information on migration.
| Project Data Store | Paths | Project Map symlink |
|---|---|---|
| SCRATCH RZG | /mnt/lustre-grete/projects/PROJECT (new)<br>/scratch-grete/projects/PROJECT (legacy) | dir.lustre-grete (new)<br>dir.scratch-grete (legacy) |
Before there were projects, users got dedicated directories on SCRATCH RZG, which will not be removed for older users that still have them.
They take the form /mnt/lustre-grete/SUBDIR/USER, with /mnt/lustre-grete/usr/USER holding the user’s files and /mnt/lustre-grete/tmp/USER holding temporary files (see Temporary Storage for more information).
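The split between the two per-user locations can be sketched as follows (the username jdoe is a placeholder; the snippet only assembles the path names and does not create or touch the directories):

```shell
#!/bin/sh
# Per-user SCRATCH RZG locations (USER_NAME is a placeholder username).
USER_NAME="jdoe"
FILES_DIR="/mnt/lustre-grete/usr/$USER_NAME"   # the user's own files
TMP_DIR="/mnt/lustre-grete/tmp/$USER_NAME"     # temporary files
echo "$FILES_DIR"
echo "$TMP_DIR"
```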
Info
SCRATCH RZG used to be known as “SCRATCH Grete” because, at the time, all of Emmy was in the MDC and all of Grete was in the RZG, which is no longer the case. This historical legacy can still be seen in the name of the mount point. The new Lustre filesystem in the MDC has carried the name lustre-mdc from the start (August 2025).