Slurm is the cluster management and job scheduling system on Holodeck. (Note that Holodeck uses a different scheduler than Atlas, which uses Condor.) Slurm is, or at least should be, the main way you run your code on the compute nodes of Holodeck. To do so, you submit the necessary commands to Slurm, which puts your so-called job into a queue. The job then stays in this queue until the requirements for running it (as specified by the user, e.g. a specific node or number of nodes) are satisfied. Once the requirements are fulfilled, Slurm reserves one or more nodes on the work cluster and runs the job there. As a side note, Slurm always reserves the entire node, so no other person can use a node while a job is occupying it.


By default, all output is redirected to a file called slurm-<job-ID>.out, where <job-ID> is the ID of the job that generated the output.

Basic commands

There are three main commands you will need in order to use Slurm:


squeue shows the state of Slurm's queue and which jobs are running on which nodes. It also shows further information, such as the job ID (which you need in order to cancel a job), the name of the script, the owner, and the runtime of the job.

This command is especially useful if you want to reserve a specific node or see how many nodes are currently free. (It gives an overview of all available work nodes.)


sbatch is the main submission command. You provide it with a submit script (see below) and it puts the job into the queue. Within the submit script you have many options to specify how your code should be run.



scancel takes a job ID and cancels the job belonging to that ID. If the job is already running it is terminated; if it is still pending it is removed from the queue. You can only cancel your own jobs, so don't be afraid of killing the wrong job.

Other commands can be found in Slurm's quick start guide.
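A typical session with these three commands might look like the following sketch. It only works on a machine with Slurm installed; submit.sh is a placeholder for your own submit script, and the real `--parsable` flag makes sbatch print just the job ID so it can be captured in a variable:

```shell
# Submit a job; --parsable makes sbatch print only the job ID
job_id=$(sbatch --parsable submit.sh)

# Inspect the queue, restricted to your own jobs
squeue -u "$USER"

# Cancel the job again if necessary
scancel "$job_id"
```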

The submit file

In the submit file you specify how your script should be run, on which nodes, and where input/output to your code is stored. The file is structured in the following way. The first line tells Slurm which shell to use; usually this will be bash. After that come lines beginning with #SBATCH, which are options you pass to Slurm itself. These must appear before the first executable command, because Slurm stops reading #SBATCH directives once it encounters the first ordinary command. All options used in the attached submit-file-template are explained below. To see a list of further options, refer to the manual page of sbatch by typing
man sbatch

when logged into Holodeck 1. All options specifying which hardware you request can be omitted, as all work nodes share the same hardware specifications.
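The structure described above can be sketched as a minimal submit file (job name, output file and time limit here are made-up placeholders, not part of the attached template):

```shell
#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --output=example_job.out
#SBATCH -t 00:10:00

# Executable commands follow after the #SBATCH block;
# any #SBATCH line placed below this point would be ignored.
echo "Running on $(hostname)"
```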

One thing to note before going into the submit file is Slurm's notion of a task. A task in Slurm terms is a process. Therefore a program that uses multi-processing (e.g. through MPI or Python's multiprocessing package) launches multiple tasks, whereas a program using multi-threading creates only a single task. Keep this in mind when setting the corresponding options.

Below you find an explanation of the commands used in the example file attached to this wiki page.


#!/bin/bash

Specify which shell should be used; usually this is bash, as shown above. (Just leave this line alone if you are unsure about its behavior.)

#SBATCH --export=ALL

Specify which environment variables are propagated to the work nodes. If you are running your code in a virtual environment you usually want to have this set to “ALL” so that the environment and all dependencies are passed along. (For more detail see the manual page of sbatch)

#SBATCH --input=<my_input.txt>

The file from which input to your program is read; it must be in a readable format. Replace <my_input.txt> by the path to your file. (Refer to the code example.)

#SBATCH --job-name=<your_job_name>

The human-readable name you give to your job. Replace <your_job_name> by the name you choose.

#SBATCH --mail-type=ALL

When this option is set, Slurm sends out e-mail notifications when certain events happen to your job. You can choose between "BEGIN", "END", "FAIL", "REQUEUE" and "ALL". You also need to set the option --mail-user.

#SBATCH --mail-user=<your_E-Mail-address>

This option specifies the e-mail address Slurm uses to send the notifications selected with --mail-type. Replace <your_E-Mail-address> by your own e-mail address.

#SBATCH --nodes <min>(-<max>)

Specifies the minimum number of nodes Slurm will request for your job. Replace <min> by any integer between 1 and 24 (the total number of available work nodes). You can optionally also give a maximum number of nodes by replacing <max> and removing the parentheses.

#SBATCH --nodelist=nr[01-24]

Restricts which nodes Slurm is allowed to use. There is no real reason to use this option, as all nodes share the same hardware. Indexing is supported: you can list specific nodes separated by commas, or a whole range separated by a dash. (In the example at hand, all work nodes are allowed to be used.)

#SBATCH --ntasks=<x>

Tells Slurm how many tasks your code will execute. Based on this information Slurm might exclude certain nodes. Replace <x> by the number of tasks your script will launch. If in doubt, use 40, as that is the maximum number of tasks allowed per CPU.

#SBATCH --ntasks-per-node=<x>

Specifies the number of tasks that should run per node. If you want the same number of tasks on all nodes, set this option to the number of tasks divided by the number of nodes. Replace <x> by the number of tasks per node.
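As an illustration of the rule above (the numbers are made up for this example): with 80 tasks spread evenly over 2 nodes, you would set --ntasks-per-node to 40:

```shell
ntasks=80
nodes=2
# integer division gives the value for --ntasks-per-node
echo $(( ntasks / nodes ))   # prints 40
```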

#SBATCH --output=<my_output.txt>

Specifies a name for the standard output file; Slurm redirects stdout to a file. If this option is not given, all output is directed to a file called "slurm-<job-ID>.out", where <job-ID> is the ID of your job. Replace <my_output.txt> by a name or path.

#SBATCH --overcommit

Using this option will overcommit resources. This means that even if your code needs more tasks than the nodes you allow can run simultaneously, the job will not fail; instead it runs on the maximal number of nodes with tasks executed one after the other. Hence a CPU may have to work on more than one task of your job. (With this flag set, you can also allocate only a single CPU per node.)

#SBATCH -t hh:mm:ss

Specifies the maximum wall time the job may run for. Replace hh:mm:ss by hours, minutes and seconds respectively.

cd ~/<your_path>

Change to the directory where your executable is stored.


Place your executable here.
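Putting the options above together, a submit file roughly takes the following shape. This is a sketch assembled from the options explained on this page, not the attached template itself; all bracketed values are placeholders you must replace, and the node count, task counts and time limit are example values:

```shell
#!/bin/bash
#SBATCH --export=ALL
#SBATCH --input=<my_input.txt>
#SBATCH --job-name=<your_job_name>
#SBATCH --mail-type=ALL
#SBATCH --mail-user=<your_E-Mail-address>
#SBATCH --nodes 1
#SBATCH --nodelist=nr[01-24]
#SBATCH --ntasks=40
#SBATCH --ntasks-per-node=40
#SBATCH --output=<my_output.txt>
#SBATCH --overcommit
#SBATCH -t 01:00:00

# change to the directory where your executable is stored
cd ~/<your_path>

# place your executable here
./<your_executable>
```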

See also

Simple code example · Useful commands
Topic revision: r2 - 24 Jun 2019, MarlinSchFer