Ceph

The CephFS-based systems provide volume storage for NHR and SCC users.

Ceph is connected to Emmy P2, Emmy P3 and Grete with 200 Gbit/s of aggregate bandwidth each, so it can be used to transfer larger amounts of data between compute islands. Individual transfers are limited by the load from other users and by the individual network interfaces involved, and will typically be much slower.
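Because the Ceph directories are visible from all connected compute islands, transferring data between islands amounts to copying it into a shared Ceph directory from one island and reading it from another. The following is a minimal sketch of such a stage-out step in Python; the paths are placeholders, not the actual mount points of the systems, so substitute your own project directories.

```python
"""Minimal sketch: copy a results directory from island-local scratch
to a shared Ceph directory so other compute islands can read it.
All paths below are placeholders, not the actual mount points."""

import subprocess

SRC = "/scratch/myproject/run_output/"  # hypothetical island-local Lustre path
DST = "/ceph/myproject/run_output/"     # hypothetical shared Ceph path

# rsync -a preserves permissions and timestamps and can be re-run to
# resume an interrupted transfer without copying unchanged files again.
subprocess.run(["rsync", "-a", SRC, DST], check=True)
```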

The Ceph HDD system stores data on hard drives and metadata on SSDs and has a total capacity of 21 PiB. Data that is no longer actively used by compute jobs should be stored here.

The Ceph SSD system uses only SSDs and has a total capacity of 606 TiB.

Ceph should not be used for heavy parallel I/O from compute jobs; the storage systems integrated into the compute islands, such as the Lustre systems, are usually more suitable for this purpose. The exception is workloads that need to access storage from multiple compute islands (e.g. job chains that run on both phases of Emmy, or on both Emmy and Grete); for these, the Ceph SSD system can be used.
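One way to follow this recommendation in a cross-island job chain is to use the Ceph SSD system only for the hand-over between jobs, while each job does its heavy I/O on the island-local Lustre scratch. The sketch below illustrates this stage-in/stage-out pattern; all paths are placeholders and would need to be replaced with your actual project directories.

```python
"""Sketch of a stage-in/stage-out step in a cross-island job chain.
Paths are placeholders; the heavy parallel I/O of the compute phase
happens on the island-local Lustre scratch, while the Ceph SSD
directory only carries the hand-over between jobs."""

import subprocess
from pathlib import Path

CEPH_SSD = Path("/ceph-ssd/myproject/chain")  # hypothetical shared hand-over dir
SCRATCH = Path("/scratch/myproject/chain")    # hypothetical island-local Lustre dir

def stage(src: Path, dst: Path) -> None:
    """Copy a directory tree with rsync, resuming if partially done."""
    dst.mkdir(parents=True, exist_ok=True)
    subprocess.run(["rsync", "-a", f"{src}/", f"{dst}/"], check=True)

# Stage in: pull the previous job's output from Ceph SSD to local scratch.
stage(CEPH_SSD / "step_in", SCRATCH / "step_in")

# ... run the actual computation against SCRATCH here ...

# Stage out: push the results back so the next job in the chain, possibly
# running on a different compute island, can stage them in again.
stage(SCRATCH / "step_out", CEPH_SSD / "step_out")
```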