Julia

A high-level, high-performance, dynamic programming language

Description

Julia is a high-level, high-performance, dynamic programming language. While it is a general-purpose language and can be used to write any application, many of its features are well suited for numerical analysis and computational science. (Wikipedia)

Read more on the Julia home page

Licensing Terms and Conditions

Julia is distributed under the MIT license. Julia is free for everyone to use and all source code is publicly available on GitHub.

Julia at HLRN

Modules

Currently only version 1.7.2 is installed, so before starting Julia, load the corresponding module:

$ module load julia

Running Julia on the frontends

This is possible, but resources and runtime are limited. Be friendly to other users and work on the (shared) compute nodes!

Running Julia on the compute nodes

Allocate capacity in the batch system and log in to the allocated node:

$ salloc -N 1 -p large96:shared
$ squeue --job <jobID>

The output of salloc shows your job ID. With squeue you see the node you are going to use. Log in with X11 forwarding:

$ ssh -X <nodename>

Load a module file and work interactively as usual. When you are done, free the resources:

$ scancel <jobID>

You may also use srun:

$ srun -v -p large96:shared --pty --interactive bash

Do not forget to free the resources when you are done.
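Besides interactive use, Julia can also be started from a regular batch script. The following is a minimal sketch; the partition, time limit, and the script name my_script.jl are placeholders you need to adapt:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=large96:shared
#SBATCH --time=01:00:00

module load julia
julia my_script.jl

Submit the script with sbatch as usual.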

Julia packages

Packages can be installed via Julia’s package manager into your local depot. However, for some HPC-relevant packages the following points must be observed to make them run correctly on the HLRN-IV systems:

MPI.jl

impi/2018.5 (supported)

By default, MPI.jl will download and link against its own MPICH implementation. On the HLRN-IV systems, we advise using the Intel MPI implementation, as we have found serious problems with the Open MPI implementation in conjunction with multithreading.

Therefore the Julia module already sets some environment variables under the assumption that the impi/2018.5 module is used (both for MPI.jl versions 0.19 and earlier and for newer versions using the MPIPreferences system).

To add the MPI.jl package to your depot follow these steps:

$ module load impi/2018.5
$ module load julia 
$ julia -e 'using Pkg; Pkg.add("MPIPreferences"); using MPIPreferences; MPIPreferences.use_system_binary(); Pkg.add("MPI")'  

You can test that the correct version is used via

$ julia -e 'using MPI; println(MPI.MPI_LIBRARY_VERSION_STRING)'

The result should be “Intel(R) MPI Library 2018 Update 5 for Linux* OS”.
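To check that MPI.jl also works in parallel, you can run a small hello-world on an allocated node, for example with four processes (a minimal sketch; srun can be used instead of mpirun):

$ mpirun -np 4 julia -e 'using MPI; MPI.Init(); c = MPI.COMM_WORLD; println("rank ", MPI.Comm_rank(c), " of ", MPI.Comm_size(c))'

Each of the four ranks should print its rank number.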

As Intel MPI comes with its own pinning policy, please add "export SLURM_CPU_BIND=none" to your batch scripts.
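In a batch script this could look like the following minimal sketch (partition, task count, and the script name my_mpi_script.jl are placeholders to adapt):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --partition=large96:shared

export SLURM_CPU_BIND=none

module load impi/2018.5
module load julia

mpirun julia my_mpi_script.jl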

Other MPI implementations (unsupported)

There is no direct dependency on impi/2018.5 in Julia’s module file, so if needed you can adjust the environment to a different configuration before building and loading the MPI.jl package. Please check the MPI.jl documentation for details.
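As a rough sketch, with the MPIPreferences system this amounts to loading the corresponding MPI module and regenerating the preferences (the module name below is only a placeholder; for MPI.jl 0.19 and earlier a rebuild of MPI.jl is required instead, see the MPI.jl documentation):

$ module load openmpi    # placeholder, use the MPI module of your choice
$ julia -e 'using MPIPreferences; MPIPreferences.use_system_binary()'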

HDF5.jl

One of the HDF5 modules provided on the HLRN-IV systems should also be used for the HDF5.jl package. After loading an HDF5 module, copy the HDF5_ROOT environment variable to JULIA_HDF5_PATH:

$ export JULIA_HDF5_PATH=$HDF5_ROOT
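After that, HDF5.jl can be added and briefly tested, for example (a minimal sketch; the file name test.h5 is arbitrary):

$ julia -e 'using Pkg; Pkg.add("HDF5")'
$ julia -e 'using HDF5; h5write("test.h5", "demo/x", collect(1.0:4.0)); println(h5read("test.h5", "demo/x"))'

This writes a small dataset to test.h5 and reads it back.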