Robin-hpc
From Robin
Revision as of 18:57, 19 February 2021
Hardware
Todo: vegardds
CPU and RAM
Storage
Access
Todo: vegardds
SLURM
Todo: emmaste
Software
Matlab R2019b
Todo: sebastto
Setting up the SLURM job script
<pre class="brush: bash">
#!/bin/bash
#SBATCH --job-name=matlab_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16

srun matlab -batch "addpath(genpath('/path/to/your/matlab/folder'));run('myScript.m')"
</pre>
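The #SBATCH lines are ordinary bash comments: sbatch scans the leading comment block of the script for directives, while the shell itself ignores them, so the job script is also a valid bash script. A minimal sketch of that behavior (job.sh is a throwaway filename used here for illustration):

```shell
# Write a minimal job script; the #SBATCH directives are plain comments to bash
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=matlab_job
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
echo "script body runs here"
EOF

# Show the directives that sbatch would parse out of the comment block
grep '^#SBATCH' job.sh

# Running the script directly executes only the body; the directives are skipped
bash job.sh
```

On the cluster, the script would be submitted with sbatch job.sh rather than run directly.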
Running MATLAB in batch mode is the safest option for running MATLAB on an HPC cluster. From the MathWorks documentation [1]:
-batch statement

- Starts without the desktop
- Does not display the splash screen
- Executes statement
- Disables changes to preferences
- Disables toolbox caching
- Logs text to stdout and stderr
- Does not display modal dialog boxes
- Exits automatically with exit code 0 if the script executes successfully. Otherwise, MATLAB terminates with a non-zero exit code.
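Because -batch makes MATLAB exit with code 0 on success and a non-zero code on failure, the surrounding shell script can detect a failed run. A sketch of that check, using true and false as stand-ins for a succeeding and a failing matlab -batch call (MATLAB itself is not assumed to be installed here):

```shell
# Stand-in for a successful run of: matlab -batch "run('myScript.m')"
true
echo "exit code: $?"

# Stand-in for a failing run; the || branch fires on any non-zero exit code
false || echo "MATLAB run failed with code $?"
```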
The addpath(genpath('/path/to/your/matlab/folder')) part adds all files in the specified directory to the MATLAB search path. Afterwards, we run the main script of your program with run('myScript.m').
Utilizing parallel computing in your MATLAB Script
When the SLURM worker node sets up your job, a number of environment variables are set. We can use the environment variable SLURM_CPUS_ON_NODE to get the number of CPU cores available to our MATLAB script. In fact, we can use that variable to dynamically select the number of workers in the MATLAB parallel pool, so that your script works both on your own computer and on the HPC.
<pre class="brush: matlab">
% Read the core count that SLURM exposes to the job (empty outside SLURM)
SLURM_CPUS_STR = getenv('SLURM_CPUS_ON_NODE');

% Delete parallel pool from earlier runs
delete(gcp('nocreate'));

if isempty(SLURM_CPUS_STR)
    % Run on personal computer (adjust 6 to however many cores your CPU has)
    parpool(6);
else
    % Run on SLURM-scheduled HPC
    SLURM_CPUS_NUM = str2double(SLURM_CPUS_STR);
    parpool(SLURM_CPUS_NUM);
end
</pre>
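The fallback logic above can be exercised from the shell: SLURM_CPUS_ON_NODE is an ordinary environment variable, present inside a SLURM job and absent elsewhere. A sketch simulating both cases (the values 16 and 6 are chosen for illustration):

```shell
# Inside a SLURM job, the scheduler exports the core count as a string
export SLURM_CPUS_ON_NODE=16
echo "workers: ${SLURM_CPUS_ON_NODE}"

# Outside SLURM the variable is unset, so a default (6 here) is used instead,
# mirroring the isempty(...) branch of the MATLAB script
unset SLURM_CPUS_ON_NODE
echo "workers: ${SLURM_CPUS_ON_NODE:-6}"
```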
Anaconda
Todo: emmaste
Podman
alias docker=podman
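Note that an alias typically takes effect only in interactive shells; inside non-interactive scripts (such as SLURM job scripts) a shell function is the more reliable way to redirect docker invocations to podman. A sketch of that idea, echoing the command instead of calling it since podman is not assumed to be installed here:

```shell
# Function form: works in both interactive and non-interactive shells.
# In real use the body would be: command podman "$@"
docker() {
    echo "would run: podman $*"
}

docker run --rm alpine echo hi
```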