Robin-hpc

Revision as of 15:47, 23 February 2021


Hardware and network configuration

The robin-hpc is a shared resource for Robin's researchers and Master's students. The strength of the machine is its large number of CPU cores and amount of RAM. Unfortunately, there is no GPU available in this service.

Specs

              CPU                  RAM     OS
login node    2 cores/4 vCPU       16GB    CentOS
worker node   120 cores/240 vCPU   460GB   CentOS

Storage

The storage on the nodes consists of one 1 TB disk, of which 100 GB is reserved for software and 900 GB for the users of the node. However, each user has a soft limit of 20 GB and a hard limit of 100 GB, with a grace period of 14 days.
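On most Linux systems with quotas enabled, you can check your own usage against these limits from the command line; a sketch using the standard quota tool (availability and exact output depend on how the filesystem is configured):

```shell
# Show your disk usage and limits in human-readable units;
# compare the reported soft/hard limits with the 20 GB / 100 GB above
quota -s
```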

It's important to note that there is no backup of the disk, so do not use the robin-hpc as a cloud storage service. We suggest using rsync or scp to copy important data to your home area on login.ifi.uio.no.

E.g.

rsync --progress <file>.tar.gz <username>@login.ifi.uio.no:~/<path>/<to>/<wherever>

It is also possible to mount your UiO home directory on the robin-hpc.
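One common way to do this is sshfs, which mounts a remote directory over SSH; a sketch, assuming sshfs/FUSE is available on the node (the mount point name is arbitrary):

```shell
# Create a local mount point and mount the UiO home directory over SSH
mkdir -p ~/uio-home
sshfs <username>@login.ifi.uio.no: ~/uio-home

# ... work with the files as if they were local ...

# Unmount when finished
fusermount -u ~/uio-home
```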

Access

Apply for access using this link: https://nettskjema.no/a/robin-hpc

SLURM

Todo: emmaste

Software

MATLAB R2019b

Todo: sebastto

Setting up the SLURM job script

#!/bin/bash
#SBATCH --job-name=matlab_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

srun matlab -batch "addpath(genpath('/path/to/your/matlab/folder'));run('myScript.m')"
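Assuming the script above is saved as, say, matlab_job.sh (the filename is just an example), it can be submitted and monitored with the usual SLURM commands:

```shell
# Submit the job; SLURM prints the assigned job ID
sbatch matlab_job.sh

# List your pending and running jobs
squeue -u "$USER"

# By default, stdout/stderr end up in slurm-<jobid>.out
# in the directory you submitted from
cat slurm-<jobid>.out
```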

Running MATLAB in batch mode is the safest way to run MATLAB on an HPC. From the MathWorks documentation[1]:

-batch statement

  • Starts without the desktop

  • Does not display the splash screen

  • Executes statement

  • Disables changes to preferences

  • Disables toolbox caching

  • Logs text to stdout and stderr

  • Does not display modal dialog boxes

  • Exits automatically with exit code 0 if script executes successfully. Otherwise, MATLAB terminates with a non-zero exit code.

The addpath(genpath('/path/to/your/matlab/folder')) part adds all files in the specified directory and its subdirectories to the MATLAB search path. Afterwards, run('myScript.m') executes the main script of your program.

Utilizing parallel computing in your MATLAB Script

When the SLURM worker node sets up your job, a number of environment variables are set. We can use the environment variable SLURM_CPUS_ON_NODE to get the number of CPU cores available to our MATLAB script. In fact, we can use that variable to dynamically select the number of workers in the MATLAB parallel pool, so that your script works both on your own computer and on the HPC.

SLURM_CPUS_STR = getenv('SLURM_CPUS_ON_NODE');

% Delete parallel pool from earlier runs
delete(gcp('nocreate'));

if isempty(SLURM_CPUS_STR)
    % Not running under SLURM: assume a personal computer
    % (adjust the worker count to the number of cores in your CPU)
    parpool(6);
else
    % Running on the SLURM-scheduled HPC
    SLURM_CPUS_NUM = str2double(SLURM_CPUS_STR);
    parpool(SLURM_CPUS_NUM);
end

Anaconda

Todo: emmaste

Podman

alias docker=podman
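With that alias in a shell startup file (e.g. ~/.bashrc), familiar Docker commands are handled by Podman, which runs containers without a daemon and without root. For example (the image and command are just an illustration):

```shell
alias docker=podman

# Pull and run a container exactly as with Docker; note the fully
# qualified image name, which Podman prefers over short names
docker run --rm docker.io/library/alpine:latest echo "hello from robin-hpc"
```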