Option(s) define multiple jobs in a co\-scheduled heterogeneous job.
For more details about heterogeneous jobs see the document
.br
https://slurm.schedmd.com/heterogeneous_jobs.html

.SH "DESCRIPTION"
sbatch submits a batch script to Slurm. The batch script may be given to
sbatch through a file name on the command line, or if no file name is specified,
sbatch will read in a script from standard input.

The batch script may contain one or more lines beginning with "#SBATCH" followed
by any of the CLI options documented on this page. #SBATCH directives are read
directly by Slurm, so shell\-specific syntax including variable names will be
read as literal text. Once the first non\-comment, non\-whitespace line has been
reached in the script, no more #SBATCH directives will be processed. See example
below.
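.IP
For example (a minimal sketch), the \-\-ntasks directive below is ignored
because it appears after the first command (hostname):
.IP
.nf
#!/bin/bash
#SBATCH \-\-time=10
hostname
#SBATCH \-\-ntasks=2
.fi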

sbatch exits immediately after the script is successfully transferred to the
Slurm controller and assigned a Slurm job ID. The batch script is not
necessarily granted resources immediately, it may sit in the queue of pending
jobs for some time before its required resources become available.

By default both standard output and standard error are directed to a file of
the name "slurm\-%j.out", where the "%j" is replaced with the job allocation
number. The file will be generated on the first node of the job allocation.
Other than the batch script itself, Slurm does no movement of user files.

When the job allocation is finally granted for the batch script, Slurm
runs a single copy of the batch script on the first node in the set of
allocated nodes.

The following document describes the influence of various options on the
allocation of cpus to jobs and tasks.
.br
https://slurm.schedmd.com/cpu_management.html

.SH "RETURN VALUE"
sbatch will return 0 on success or an error code on failure.

.SH "SCRIPT PATH RESOLUTION"

The batch script is resolved in the following order:
.br

1. If script starts with ".", then path is constructed as:
current working directory / script
.br
2. If script starts with a "/", then path is considered absolute.
.br
3. If script is in current working directory.
.br
4. If script can be resolved through PATH. See \fBpath_resolution\fR(7).

.SH "OPTIONS"

.TP
\fB\-A\fR, \fB\-\-account\fR=<\fIaccount\fR>
Charge resources used by this job to specified account.
The \fIaccount\fR is an arbitrary string. The account name may
be changed after job submission using the \fBscontrol\fR
command.
.IP

.TP
\fB\-\-acctg\-freq\fR=<\fIdatatype\fR>=<\fIinterval\fR>[,<\fIdatatype\fR>=<\fIinterval\fR>...]
Define the job accounting and profiling sampling intervals in seconds.
This can be used to override the \fIJobAcctGatherFrequency\fR parameter in
the slurm.conf file. <\fIdatatype\fR>=<\fIinterval\fR> specifies the task
sampling interval for the jobacct_gather plugin or a
sampling interval for a profiling type by the
acct_gather_profile plugin. Multiple
comma\-separated <\fIdatatype\fR>=<\fIinterval\fR> pairs
may be specified. Supported \fIdatatype\fR values are:
.IP
.RS
.TP 12
\fBtask\fR
Sampling interval for the jobacct_gather plugins and for task
profiling by the acct_gather_profile plugin.
.br
\fBNOTE\fR: This frequency is used to monitor memory usage. If memory limits
are enforced, the highest frequency a user can request is what is configured
in the slurm.conf file. It can not be disabled.
.IP

.TP
\fBenergy\fR
Sampling interval for energy profiling using the
acct_gather_energy plugin.
.IP

.TP
\fBnetwork\fR
Sampling interval for infiniband profiling using the
acct_gather_interconnect plugin.
.IP

.TP
\fBfilesystem\fR
Sampling interval for filesystem profiling using the
acct_gather_filesystem plugin.
.IP

.RE
.LP
The default value for the task sampling interval is 30 seconds.
The default value for all other intervals is 0.
An interval of 0 disables sampling of the specified type.
If the task sampling interval is 0, accounting
information is collected only at job termination (reducing Slurm
interference with the job).
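.IP
For example, to sample task statistics every 15 seconds and energy use every
minute (a sketch; the script name is hypothetical):
.IP
.nf
$ sbatch \-\-acctg\-freq=task=15,energy=60 my_analysis.sh
.fi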
.IP

.TP
\fB\-a\fR, \fB\-\-array\fR=<\fIindexes\fR>
Submit a job array, multiple jobs to be executed with identical parameters.
The \fIindexes\fR specification identifies what array index values should be
used. Multiple values may be specified using a comma separated list and/or a
range of values with a "\-" separator. A step function can also be specified
with a suffix containing a colon and
number. For example, "\-\-array=0\-15:4" is equivalent to "\-\-array=0,4,8,12".
A maximum number of simultaneously running tasks from the job array may be
specified using a "%" separator.
For example "\-\-array=0\-15%4" will limit the number of simultaneously
running tasks from this job array to 4.
The minimum index value is 0.
The maximum value is one less than the configuration parameter MaxArraySize.
\fBNOTE\fR: Currently, federated job arrays only run on the local cluster.
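.br
For example, to submit a 16\-element array while allowing at most four
elements to run at once (script name hypothetical):
.IP
.nf
$ sbatch \-\-array=0\-15%4 array_job.sh
.fi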
.IP

.TP
\fB\-\-batch\fR=<\fIlist\fR>
Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator.
Users can specify which of these \fBfeatures\fR are required by their batch
script using this option.
For example a job's allocation may include both Intel Haswell and KNL nodes
with features "haswell" and "knl" respectively.
On such a configuration the batch script would normally benefit by executing
on a faster Haswell node.
This would be specified using the option "\-\-batch=haswell".
The specification can include AND and OR operators using the ampersand and
vertical bar separators. For example:
"\-\-batch=haswell|broadwell" or "\-\-batch=haswell|big_memory".
The \-\-batch argument must be a subset of the job's
\fB\-\-constraint\fR=<\fIlist\fR> argument (i.e. the job can not request only
KNL nodes, but require the script to execute on a Haswell node).
If the request can not be satisfied from the resources allocated to the job,
the batch script will execute on the first node of the job allocation.
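.br
For example, a script (a sketch) that can run on either node type but prefers
to execute its batch step on a Haswell node could include:
.IP
.nf
#SBATCH \-\-constraint="haswell|knl"
#SBATCH \-\-batch=haswell
.fi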
.IP

.TP
\fB\-\-bb\fR=<\fIspec\fR>
Burst buffer specification. The form of the specification is system dependent.
Also see \fB\-\-bbf\fR.
When the \fB\-\-bb\fR option is used, Slurm parses this option and creates a
temporary burst buffer script file that is used internally by the burst buffer
plugins. See Slurm's burst buffer guide for more information and examples:
.br
https://slurm.schedmd.com/burst_buffer.html
.IP

.TP
\fB\-\-bbf\fR=<\fIfile_name\fR>
Path of file containing burst buffer specification.
The form of the specification is system dependent.
These burst buffer directives will be inserted into the submitted batch script.
See Slurm's burst buffer guide for more information and examples:
.br
https://slurm.schedmd.com/burst_buffer.html
.IP

.TP
\fB\-b\fR, \fB\-\-begin\fR=<\fItime\fR>
Submit the batch script to the Slurm controller immediately, like normal, but
tell the controller to defer the allocation of the job until the specified
time.

Time may be of the form \fIHH:MM:SS\fR to run a job at a specific time of day
(seconds are optional). You may also specify a date of the form \fIMMDDYY\fR
or \fIMM/DD/YY\fR or \fIYYYY\-MM\-DD\fR, or combine a date and time as
\fIYYYY\-MM\-DD[THH:MM[:SS]]\fR. You can also
give times like \fInow + count time\-units\fR, where the time\-units
can be \fIseconds\fR (default), \fIminutes\fR, \fIhours\fR,
\fIdays\fR, or \fIweeks\fR and you can tell Slurm to run
the job today with the keyword \fItoday\fR and to run the
job tomorrow with the keyword \fItomorrow\fR.
The value may be changed after job submission using the
\fBscontrol\fR command.
For example:
.IP
.nf
   \-\-begin=16:00
   \-\-begin=now+1hour
   \-\-begin=now+60           (seconds by default)
   \-\-begin=2010\-01\-20T12:34:00
.fi

.RS
.PP
Notes on date/time specifications:
 \- Although the 'seconds' field of the HH:MM:SS time specification is
allowed by the code, note that the poll time of the Slurm scheduler
is not precise enough to guarantee dispatch of the job on the exact
second. The job will be eligible to start on the next poll
following the specified time. The exact poll interval depends on the
Slurm scheduler (e.g., 60 seconds with the default sched/builtin).
 \- If no time (HH:MM:SS) is specified, the default is (00:00:00).
 \- If a date is specified without a year (e.g., MM/DD) then the current
year is assumed, unless the combination of MM/DD and HH:MM:SS has
already passed for that year, in which case the next year is used.
.RE
.IP

.TP
\fB\-D\fR, \fB\-\-chdir\fR=<\fIdirectory\fR>
Set the working directory of the batch script to \fIdirectory\fR before
it is executed. The path can be specified as full path or relative path
to the directory where the command is executed.
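.br
For example (the directory path is hypothetical):
.IP
.nf
$ sbatch \-\-chdir=/scratch/myproject job.sh
.fi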
.IP

.TP
\fB\-\-cluster\-constraint\fR=[!]<\fIlist\fR>
Specifies features that a federated cluster must have to have a sibling job
submitted to it. Slurm will attempt to submit a sibling job to a cluster if it
has at least one of the specified features. If the "!" option is included, Slurm
will attempt to submit a sibling job to a cluster that has none of the specified
features.
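.br
For example, to submit sibling jobs only to clusters that advertise a
hypothetical "gpu" feature:
.IP
.nf
#SBATCH \-\-cluster\-constraint=gpu
.fi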
.IP

.TP
\fB\-M\fR, \fB\-\-clusters\fR=<\fIstring\fR>
Clusters to issue commands to. Multiple cluster names may be comma separated.
The job will be submitted to the one cluster providing the earliest expected
job initiation time. The default value is the current cluster.
\fBNOTE\fR: The SlurmDBD must be up for this option to work properly.
.IP

.TP
\fB\-\-consolidate\-segments\fR
Ensure that all segments from the allocation will be consolidated
into one higher-level aggregated block.

This option applies to job allocations.
\fBNOTE\fR: This option will only work with the \fBtopology/block\fR plugin.
.IP

.TP
\fB\-C\fR, \fB\-\-constraint\fR=<\fIlist\fR>
Nodes can have \fBfeatures\fR assigned to them by the Slurm administrator.
Users can specify which of these \fBfeatures\fR are required by their job
using the constraint option. If you are looking for 'soft' constraints please
see \fB\-\-prefer\fR for more information.
Only nodes having features matching the job constraints will be used to
satisfy the request.
Multiple constraints may be specified with AND, OR, matching OR,
resource counts, etc. (some operators are not supported on all system types).

\fBNOTE\fR: Changeable features are features defined by a NodeFeatures plugin.

Supported \fB\-\-constraint\fR options include:
.IP
.PD 1
.RS
.TP
\fBSingle Name\fR
Only nodes which have the specified feature will be used.
For example, \fB\-\-constraint="intel"\fR
.IP

.TP
\fBNode Count\fR
A request can specify the number of nodes needed with some feature
by appending an asterisk and count after the feature name.
For example, \fB\-\-nodes=16 \-\-constraint="graphics*4"\fR
indicates that the job requires 16 nodes and that at least four of those
nodes must have the feature "graphics."
If requesting more than one feature and using node counts, the request
must have square brackets surrounding it.

\fBNOTE\fR: This option is not supported by the helpers NodeFeatures plugin.
Heterogeneous jobs can be used instead.
.IP

.TP
\fBAND\fR
Only nodes with all of specified features will be used.
The ampersand is used for an AND operator.
For example, \fB\-\-constraint="intel&gpu"\fR
.IP
.TP
\fBOR\fR
Only nodes with at least one of the specified features will be used.
The vertical bar is used for an OR operator.
For example, \fB\-\-constraint="intel|amd"\fR.
When an OR of changeable features is requested (e.g.
\fB\-\-constraint="foo|(bar&baz)"\fR on a two\-node job), Slurm
will find the first set of node features that matches all nodes in the job
allocation; these features are set as active features on the node and passed to
RebootProgram (see \fBslurm.conf\fR(5)) and the helper script (see
\fBhelpers.conf\fR(5)). In this case, the helpers plugin uses the first of
"foo" or "bar,baz" that match the two nodes in the job allocation.
.IP

.TP
\fBMatching OR\fR
If only one of a set of possible options should be used for all allocated
nodes, then use the OR operator and enclose the options within square brackets.
For example, \fB\-\-constraint="[rack1|rack2|rack3|rack4]"\fR might
be used to specify that all nodes must be allocated on a single rack of
the cluster, but any of those four racks can be used.
.IP

.TP
\fBMultiple Counts\fR
Specific counts of multiple resources may be specified by using the AND
operator and enclosing the options within square brackets.
For example, \fB\-\-constraint="[rack1*2&rack2*4]"\fR might
be used to specify that two nodes must be allocated from nodes with the feature
of "rack1" and four nodes must be allocated from nodes with the feature
"rack2".

\fBNOTE\fR: This option is not supported by the helpers NodeFeatures plugin.

\fBNOTE\fR: Multiple Counts can cause jobs to be allocated with a non-optimal
network layout.
.IP

.TP
\fBBrackets\fR
Brackets can be used to indicate that you are looking for a set of nodes with
the different requirements contained within the brackets. For example,
\fB\-\-constraint="[(rack1|rack2)*1&(rack3)*2]"\fR will get you one node with
either the "rack1" or "rack2" features and two nodes with the "rack3" feature.
If requesting more than one feature and using node counts, the request
must have square brackets surrounding it.

\fBNOTE\fR: Brackets are only reserved for \fBMultiple Counts\fR and
\fBMatching OR\fR syntax.
AND operators require a count for each feature inside square brackets
(i.e. "[quad*2&hemi*1]"). Slurm will only allow a single set of bracketed
constraints per job.

\fBNOTE\fR: Square brackets are not supported by the helpers NodeFeatures
plugin. Matching OR can be requested without square brackets by using the
vertical bar character with at least one changeable feature.
.IP

.TP
\fBParenthesis\fR
Parenthesis can be used to group like node features together and to group
operations. For example,
\fB\-\-constraint="[(knl&snc4&flat)*4&haswell*1]"\fR might be used to specify
that four nodes with the features "knl", "snc4" and "flat" plus one node with
the feature "haswell" are required.
.br
\fBNOTE\fR: OR within parenthesis should not be used with a KNL
NodeFeatures plugin but is supported by the helpers NodeFeatures plugin.
.RE
.IP

.TP
\fB\-\-container\fR=<\fIpath_to_container\fR>
Absolute path to OCI container bundle.
.IP

.TP
\fB\-\-container-id\fR=<\fIcontainer_id\fR>
Unique name for OCI container.
.IP

.TP
\fB\-\-contiguous\fR
If set, then the allocated nodes must form a contiguous set.

\fBNOTE\fR: This option will only work with the \fBtopology/flat\fR plugin.
Other topology plugins modify the node ordering and prevent this option from
taking effect.
.IP

.TP
\fB\-S\fR, \fB\-\-core\-spec\fR=<\fInum\fR>
Count of Specialized Cores per node reserved by the job for system operations
and not used by the application.
If AllowSpecResourcesUsage is enabled a job can override the CoreSpecCount of
all its allocated nodes with this option.
The overridden Specialized Cores will still be reserved for system processes.
The job will get an implicit \fB--exclusive\fR allocation for the rest of
the Cores on the nodes, resulting in the job's processes being able to use (and
being charged for) all the Cores on the nodes except for the overridden
Specialized Cores.
This option can not be used with the \fB\-\-thread\-spec\fR option.

\fBNOTE\fR: Explicitly setting a job's specialized core value implicitly sets
the --exclusive option.
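.br
For example, to reserve two cores per node for system use (a sketch):
.IP
.nf
$ sbatch \-S 2 job.sh
.fi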
.IP

.TP
\fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
Restrict node selection to nodes with at least the specified number of
cores per socket. See additional information under \fB\-B\fR option
above when task/affinity plugin is enabled.
.br
\fBNOTE\fR: This option may implicitly set the number of tasks (if \fB\-n\fR
was not specified) as one task per requested thread.
.IP

.TP
\fB\-\-cpu\-freq\fR=<\fIp1\fR>[\-\fIp2\fR][:\fIp3\fR]
Request that the job step initiated by this sbatch script be run at some
requested frequency if possible, on the CPUs selected for the step on the
compute node(s).

\fBp1\fR can be [#### | low | medium | high | highm1] which will set the
frequency scaling_speed to the corresponding value, and set the frequency
scaling_governor to UserSpace. See below for definition of the values.

\fBp1\fR can be [Conservative | OnDemand | Performance | PowerSave] which
will set the scaling_governor to the corresponding value.

When \fBp2\fR is present, \fBp1\fR will be the minimum scaling frequency and
\fBp2\fR will be the maximum scaling frequency. In that case the governor
\fBp3\fR or CpuFreqDef cannot be UserSpace since it doesn't support a range.

\fBp2\fR can be [#### | medium | high | highm1]. p2 must be greater than p1 and
is incompatible with UserSpace governor.

\fBp3\fR can be [Conservative | OnDemand | Performance | PowerSave | SchedUtil |
UserSpace]
which will set the governor to the corresponding value.

If \fBp3\fR is UserSpace, the frequency scaling_speed, scaling_max_freq and
scaling_min_freq will be statically set to the value defined by \fBp1\fR.

Any requested frequency below the minimum available frequency will be rounded
to the minimum available frequency. In the same way, any requested frequency
above the maximum available frequency will be rounded to the maximum available
frequency.

The \fBCpuFreqDef\fR parameter in slurm.conf will be used to set the governor
in absence of \fBp3\fR. If there's no \fBCpuFreqDef\fR, the default governor
will be to use the system current governor set in each cpu. Specifying a
range without \fBCpuFreqDef\fR or a specific governor is therefore not allowed.
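
For example (a sketch; attainable frequencies depend on the hardware):
.IP
.nf
   \-\-cpu\-freq=Performance
   \-\-cpu\-freq=low\-high:OnDemand
   \-\-cpu\-freq=2400000
.fi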

Acceptable values at present include:
.IP
.RS
.TP 14
\fB####\fR
frequency in kilohertz
.IP

.TP
\fBLow\fR
the lowest available frequency
.IP

.TP
\fBHigh\fR
the highest available frequency
.IP

.TP
\fBHighM1\fR
(high minus one) will select the next highest available frequency
.IP

.TP
\fBMedium\fR
attempts to set a frequency in the middle of the available range
.IP

.TP
\fBPowerSave\fR
attempts to use the PowerSave CPU governor
.IP

.TP
\fBUserSpace\fR
attempts to use the UserSpace CPU governor
.IP
.RE

The following informational environment variable is set in the job
step when \fB\-\-cpu\-freq\fR option is requested.
.nf
        SLURM_CPU_FREQ_REQ
.fi

This environment variable can also be used to supply the value for the
CPU frequency request if it is set when the 'srun' command is issued.
The \fB\-\-cpu\-freq\fR on the command line will override the
environment variable value. The form of the environment variable is
the same as the command line.
See the \fBENVIRONMENT VARIABLES\fR
section for a description of the SLURM_CPU_FREQ_REQ variable.

\fBNOTE\fR: This parameter is treated as a request, not a requirement.
If the job step's node does not support setting the CPU frequency, or
the requested value is outside the bounds of the legal frequencies, an
error is logged, but the job step is allowed to continue.

\fBNOTE\fR: Setting the frequency for just the CPUs of the job step
implies that the tasks are confined to those CPUs. If task
confinement (i.e. the task/affinity TaskPlugin is enabled, or the task/cgroup
TaskPlugin is enabled with "ConstrainCores=yes" set in cgroup.conf) is not
configured, this parameter is ignored.

\fBNOTE\fR: When the step completes, the frequency and governor of each
selected CPU is reset to the previous values.

\fBNOTE\fR: Submitting jobs with the \fB\-\-cpu\-freq\fR option
with linuxproc as the ProctrackType can cause jobs to run too quickly, before
accounting is able to poll for job information. As a result, not all
accounting information will be present.
.RE
.IP

.TP
\fB\-\-cpus\-per\-gpu\fR=<\fIncpus\fR>
Request that \fIncpus\fR processors be allocated per allocated GPU.
Steps inheriting this value will imply \-\-exact.
Not compatible with the \fB\-\-cpus\-per\-task\fR option.
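.br
For example, to request two GPUs with four CPUs allocated per GPU (script
name hypothetical):
.IP
.nf
$ sbatch \-\-gpus=2 \-\-cpus\-per\-gpu=4 job.sh
.fi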
.IP

.TP
\fB\-\-deadline\fR=<\fIOPT\fR>
Remove the job if no ending is possible before
this deadline (start > (deadline \- time[\-min])).
Default is no deadline. Note that if neither \fBDefaultTime\fR nor
\fBMaxTime\fR are configured on the partition the job is in, the job will
need to specify some form of time limit (\-\-time[\-min]) if a deadline
is to be used.

Valid time formats are:
.br
HH:MM[:SS] [AM|PM]
.br
MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
.br
MM/DD[/YY]\-HH:MM[:SS]
.br
YYYY\-MM\-DD[THH:MM[:SS]]
.br
now[+\fIcount\fR[seconds(default)|minutes|hours|days|weeks]]
.br
midnight, elevenses (11 AM), noon, fika (3 PM), teatime (4 PM), or tomorrow

One or more time strings may be specified (e.g., 'tomorrow18:00'). If there is
a conflict between them, the last one will silently take precedence.
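.br
For example, to remove the job if it cannot complete by tomorrow given a
one\-hour time limit (a sketch):
.IP
.nf
$ sbatch \-\-deadline=tomorrow \-\-time=01:00:00 job.sh
.fi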
.IP

.TP
\fB\-\-delay\-boot\fR=<\fIminutes\fR>
Do not reboot nodes in order to satisfy this job's feature specification if
the job has been eligible to run for less than this time period.
If the job has waited for less than the specified period, it will use only
nodes which already have the specified features.
The argument is in units of minutes.
A default value may be set by a system administrator using the \fBdelay_boot\fR
option of the \fBSchedulerParameters\fR configuration parameter in the
slurm.conf file, otherwise the default value is zero (no delay).
.IP

.TP
\fB\-d\fR, \fB\-\-dependency\fR=<\fIdependency_list\fR>
Defer the start of this job until the specified dependencies have been
satisfied. Once a dependency is satisfied, it is removed from the job.
<\fIdependency_list\fR> is of the form
<\fItype:job_id[:job_id][,type:job_id[:job_id]]\fR> or
<\fItype:job_id[:job_id][?type:job_id[:job_id]]\fR>.
All dependencies must be satisfied if the "," separator is used.
Any dependency may be satisfied if the "?" separator is used.
Only one separator may be used. For instance:
.nf
\-d afterok:20:21,afterany:23
.fi
.IP
Many jobs can share the same dependency and these jobs may even belong to
different users. The value may be changed after job submission using the
\fBscontrol\fR command.
Dependencies on remote jobs are allowed in a federation.
Once a job dependency fails due to the termination state of a preceding job,
the dependent job will never be run, even if the preceding job is requeued and
has a different termination state in a subsequent execution.
.IP
.PD
.RS
.TP
\fBafter:job_id[[+time][:jobid[+time]...]]\fR
This job can begin execution after the specified jobs start or are cancelled
and 'time' minutes have elapsed from their start or cancellation.
If no 'time' is given then there is no delay after
start or cancellation.
.IP

.TP
\fBafterany:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated.
This is the default dependency type.
.IP

.TP
\fBafterburstbuffer:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated and
any associated burst buffer stage out operations have completed.
.IP

.TP
\fBaftercorr:job_id[:jobid...]\fR
A task of this job array can begin execution after the corresponding task ID
in the specified job has completed successfully (ran to completion with an
exit code of zero). If the specified job is not an array, this is treated the
same as afterok.
.IP

.TP
\fBafternotok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have terminated
in some failed state (non\-zero exit code, node failure, timed out, etc).
This job must be submitted while the specified job is still active or within
\fBMinJobAge\fR seconds after the specified job has ended.
If the dependent job id is not found and is on the same cluster as the job
submission, the job is rejected. If the dependent job id is not found and is on
a different cluster from the job submission, the dependency is marked as
failed.
.IP

.TP
\fBafterok:job_id[:jobid...]\fR
This job can begin execution after the specified jobs have successfully
executed (ran to completion with an exit code of zero).
This job must be submitted while the specified job is still active or within
\fBMinJobAge\fR seconds after the specified job has ended.
If the dependent job id is not found and is on the same cluster as the job
submission, the job is rejected. If the dependent job id is not found and is on
a different cluster from the job submission, the dependency is marked as
failed.
.IP
.RE
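.IP
For example, a two\-stage pipeline (script names hypothetical) may be
submitted as:
.IP
.nf
$ jobid=$(sbatch \-\-parsable stage1.sh)
$ sbatch \-\-dependency=afterok:$jobid stage2.sh
.fi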

.TP
\fB\-m\fR, \fB\-\-distribution\fR={*|block|cyclic|arbitrary|plane=<\fIsize\fR>}[:{*|block|cyclic|fcyclic}[:{*|block|cyclic|fcyclic}]][,{Pack|NoPack}]

Specify alternate distribution methods for remote processes.
For job allocation, this sets environment variables that will be used by
subsequent srun requests and also affects which cores will be selected for
job allocation.

This option controls the distribution of tasks to the nodes on which
resources have been allocated, and the distribution of those resources
to tasks for binding (task affinity). The first distribution
method (before the first ":") controls the distribution of tasks to nodes.
The second distribution method (after the first ":")
controls the distribution of allocated CPUs across sockets for binding
to tasks. The third distribution method (after the second ":") controls
the distribution of allocated CPUs across cores for binding to tasks.
The second and third distributions apply only if task affinity is enabled.
The third distribution is supported only if the task/cgroup plugin is
configured. The default value for each distribution type is specified by *.
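
For example, to distribute tasks cyclically across nodes while binding CPUs
in blocks within sockets (a sketch):
.IP
.nf
   \-\-distribution=cyclic:block
.fi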

Note that with select/cons_tres, the number of CPUs
allocated to each socket and node may be different. Refer to
https://slurm.schedmd.com/mc_support.html
for more information on resource allocation, distribution of tasks to
nodes, and binding of tasks to CPUs.
.RS
First distribution method (distribution of tasks across nodes):

.TP
.B *
Use the default method for distributing tasks to nodes (block).
.IP

.TP
.B block
The block distribution method will distribute tasks to a node such
that consecutive tasks share a node. For example, consider an
allocation of three nodes each with two cpus. A four\-task block
distribution request will distribute those tasks to the nodes with
tasks one and two on the first node, task three on the second node,
and task four on the third node. Block distribution is the default
behavior if the number of tasks exceeds the number of allocated nodes.
.IP

.TP
.B cyclic
The cyclic distribution method will distribute tasks to a node such
that consecutive tasks are distributed over consecutive nodes (in a
round\-robin fashion). For example, consider an allocation of three
nodes each with two cpus. A four\-task cyclic distribution request
will distribute those tasks to the nodes with tasks one and four on the
first node, task two on the second node, and task three on the third node.
Cyclic distribution is the default behavior if the number of tasks is no
larger than the number of allocated nodes.
.IP

.TP
.B plane
The tasks are distributed in blocks of size <\fIsize\fR>. The number of tasks
distributed to each node is the same as for cyclic distribution, but the
taskids assigned to each node depend on the plane size. Additional distribution
specifications cannot be combined with this option.
For more details (including examples and diagrams), please see
https://slurm.schedmd.com/mc_support.html and
https://slurm.schedmd.com/dist_plane.html
.IP

.TP
.B arbitrary
The arbitrary method of distribution will allocate processes in\-order
as listed in the file designated by the environment variable
SLURM_HOSTFILE. If this variable is set it will override any
other method specified. If not set the method will default to block.
The hostfile must contain at minimum the number of hosts
requested, one per line or comma separated. If specifying a
task count (\fB\-n\fR, \fB\-\-ntasks\fR=<\fInumber\fR>), your tasks
will be laid out on the nodes in the order of the file.
.br
\fBNOTE\fR: The arbitrary distribution option on a job allocation only
controls the nodes to be allocated to the job and not the allocation of
CPUs on those nodes. This option is meant primarily to control a job step's
task layout in an existing job allocation for the srun command.
.br
\fBNOTE\fR: If the number of tasks is given and a list of requested nodes is
also given, the number of nodes used from that list will be reduced to match
that of the number of tasks if the number of nodes in the list is greater than
the number of tasks.
.IP

.LP
Second distribution method (distribution of CPUs across sockets for binding):

.TP
.B *
Use the default method for distributing CPUs across sockets (cyclic).
.IP

.TP
.B block
The block distribution method will distribute allocated CPUs
consecutively from the same socket for binding to tasks, before using
the next consecutive socket.
.IP

.TP
.B cyclic
The cyclic distribution method will distribute allocated CPUs for
binding to a given task consecutively from the same socket, and
from the next consecutive socket for the next task, in a
round\-robin fashion across sockets.
Tasks requiring more than one CPU will have all of those CPUs allocated on a
single socket if possible.
.IP

.LP
Third distribution method (distribution of CPUs across cores for binding):

.TP
.B *
Use the default method for distributing CPUs across cores
(inherited from second distribution method).
.IP

.TP
.B block
The block distribution method will distribute allocated CPUs
consecutively from the same core for binding to tasks, before using
the next consecutive core.
.IP

.TP
.B cyclic
The cyclic distribution method will distribute allocated CPUs for
binding to a given task consecutively from the same core, and
from the next consecutive core for the next task, in a
round\-robin fashion across cores.
.IP

.TP
.B fcyclic
The fcyclic distribution method will distribute allocated CPUs
for binding to tasks from consecutive cores in a
round\-robin fashion across the cores.
.IP

.LP
Optional control for task distribution over nodes:

.TP
.B Pack
Rather than distributing a job step's tasks evenly across its allocated
nodes, pack them as tightly as possible on the nodes.
This only applies when the "block" task distribution method is used.
.IP

.TP
.B NoPack
Rather than packing a job step's tasks as tightly as possible on the nodes,
distribute them evenly.
This user option will supersede the SelectTypeParameters CR_Pack_Nodes
configuration parameter.
.RE
.IP

.TP
\fB\-x\fR, \fB\-\-exclude\fR=<\fInode_name_list\fR>
Explicitly exclude certain nodes from the resources granted to the job.
.IP

.TP
\fB\-\-exclusive\fR[={user|mcs|topo}]
The job allocation can not share nodes (or a topology segment, with "=topo")
with other running jobs (or just other users with the "=user" option, or
other MCS groups with the "=mcs" option).
If user/mcs/topo are not specified (i.e. the job allocation can not share nodes with
other running jobs), the job is allocated all CPUs and GRES on all nodes in the
allocation, but is only allocated as much memory as it requested. This is by
design to support gang scheduling, because suspended jobs still reside in
memory. To request all the memory on a node, use \fB\-\-mem=0\fR.
The default shared/exclusive behavior depends on system configuration and the
partition's \fBOverSubscribe\fR option takes precedence over the job's option.
\fBNOTE\fR: Since shared GRES (MPS) cannot be allocated at the same time as a
sharing GRES (GPU) this option only allocates all sharing GRES and no underlying
shared GRES.

\fBNOTE\fR: This option is mutually exclusive with \fB\-\-oversubscribe\fR.
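.br
For example, to request whole nodes together with all of their memory
(a sketch):
.IP
.nf
#SBATCH \-\-exclusive
#SBATCH \-\-mem=0
.fi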
.IP

.TP
\fB\-\-export\fR={[ALL,]<\fIenvironment_variables\fR>|ALL|NIL|NONE}
Identify which environment variables from the submission environment are
propagated to the launched application. Note that SLURM_* variables are
always propagated.
.IP
.RS
.TP 10
\fB\-\-export\fR=ALL
Default mode if \fB\-\-export\fR is not specified. All of the user's environment
will be loaded (either from the caller's environment or from a clean environment
if \fI\-\-get\-user\-env\fR is specified).
.IP

.TP
\fB\-\-export\fR=NIL
Only SLURM_* and SPANK option variables from the user environment will be
defined. User must use absolute path to the binary to be executed that will
define the environment.
User can not specify explicit environment variables with "NIL".

Unlike NONE, NIL will not automatically create a user's environment using the
\fI\-\-get\-user\-env\fR mechanism.
.IP

.TP
\fB\-\-export\fR=NONE
Only SLURM_* and SPANK option variables from the user environment will be
defined. User must use absolute path to the binary to be executed that will
define the environment.
User can not specify explicit environment variables with "NONE".
However, Slurm will then implicitly attempt to load the user's environment on
the node where the script is being executed, as if \fB\-\-get\-user\-env\fR
was specified.
.IP

.TP
\fB\-\-export\fR=[\fIALL\fR,]<\fIenvironment_variables\fR>
Exports all SLURM_* and SPANK option environment variables along with explicitly
defined variables. Multiple environment variable names should be comma
separated.
Environment variable names may be specified to propagate the current
value (e.g. "\-\-export=EDITOR") or specific values may be exported
(e.g. "\-\-export=EDITOR=/bin/emacs"). If "ALL" is specified, then all user
environment variables will be loaded and will take precedence over any
explicitly given environment variables.
.IP
.RS 5
.TP 5
Example: \fB\-\-export\fR=EDITOR,ARG1=test
In this example, the propagated environment will only contain the
variable \fIEDITOR\fR from the user's environment, \fISLURM_*\fR environment
variables, and \fIARG1\fR=test.
.IP

.TP
Example: \fB\-\-export\fR=ALL,EDITOR=/bin/emacs
There are two possible outcomes for this example. If the caller has the
\fIEDITOR\fR environment variable defined, then the job's environment will
inherit the variable from the caller's environment. If the caller doesn't
have an environment variable defined for \fIEDITOR\fR, then the job's
environment will use the value given by \fB\-\-export\fR.
.RE

\fBNOTE\fR: NONE and [\fIALL\fR,]<\fIenvironment_variables\fR> implicitly
work as if \fB--get-user-env\fR was defined. Please see the implications
of this in its respective section.

.RE
.IP

.TP
\fB\-\-export\-file\fR={<\fIfilename\fR>|<\fIfd\fR>}
If a number between 3 and OPEN_MAX is specified as the argument to
this option, a readable file descriptor will be assumed (STDIN and
STDOUT are not supported as valid arguments). Otherwise a filename is
assumed. Export environment variables defined in <\fIfilename\fR> or
read from <\fIfd\fR> to the job's execution environment. The
content is one or more environment variable definitions of the form
NAME=value, each separated by a null character. This allows the use
of special characters in environment definitions.
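.br
For example, using a null\-separated file (file and script names
hypothetical):
.IP
.nf
$ printf 'EDITOR=/bin/emacs\e0ARG1=test\e0' > env.list
$ sbatch \-\-export\-file=env.list job.sh
.fi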
.IP

.TP
\fB\-\-extra\fR=<\fIstring\fR>
An arbitrary string enclosed in single or double quotes if using spaces or some
special characters.

If \fBSchedulerParameters=extra_constraints\fR is enabled, this string is used
for node filtering based on the \fIExtra\fR field in each node.
.IP

.TP
\fB\-B\fR, \fB\-\-extra\-node\-info\fR=<\fIsockets\fR>[:\fIcores\fR[:\fIthreads\fR]]
Restrict node selection to nodes with at least the specified number of
sockets, cores per socket and threads per core. Each value can also be
specified individually using the options:
.nf
    \fB\-\-sockets\-per\-node\fR=<\fIsockets\fR>
    \fB\-\-cores\-per\-socket\fR=<\fIcores\fR>
    \fB\-\-threads\-per\-core\fR=<\fIthreads\fR>
.fi
If task/affinity plugin is enabled, then specifying an allocation in this
manner also results in subsequently launched tasks being bound to threads
if the \fB\-B\fR option specifies a thread count, otherwise an option of
\fIcores\fR if a core count is specified, otherwise an option of \fIsockets\fR.
If SelectType is configured to select/cons_tres, it must have a parameter of
CR_Core, CR_Core_Memory, CR_Socket, or CR_Socket_Memory for this option
to be honored.
If not specified, \fBscontrol show job\fR will display 'ReqS:C:T=*:*:*'. This
option applies to job allocations.
.br
\fBNOTE\fR: This option is mutually exclusive with \fB\-\-hint\fR,
\fB\-\-threads\-per\-core\fR and \fB\-\-ntasks\-per\-core\fR.
.br
\fBNOTE\fR: This option may implicitly set the number of tasks (if \fB\-n\fR
was not specified) as one task per requested thread.
.IP

.TP
\fB\-\-get\-user\-env\fR
This option will tell sbatch to retrieve the
login environment variables for the user specified in the \fB\-\-uid\fR option.
The environment variables are retrieved by running something of this sort
"su \- <username> \-c /usr/bin/env" and parsing the output.
Be aware that any environment variables already set in sbatch's environment
will take precedence over any environment variables in the user's
login environment. Clear any environment variables before calling sbatch
that you do not want propagated to the spawned program. If the user environment
retrieval fails or times out, the job will be aborted, requeued and held.

\fBNOTE\fR: The explicit or implicit use of \fB\-\-get\-user\-env\fR relies on
the ability to create PID and mount namespaces. It is strongly
advised to ensure that PID and mount namespace creation is available and
not limited (check that \fB/proc/sys/user/max_[pid|mnt]_namespaces\fR
is not 0). Although they are not strictly mandatory for \fB--get-user-env\fR
to work, they ensure that there are no orphan processes left after the
environment is retrieved.
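.br
For example, a quick sanity check on the host where the environment is
retrieved:
.IP
.nf
$ cat /proc/sys/user/max_pid_namespaces /proc/sys/user/max_mnt_namespaces
.fi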
.IP

.TP
\fB\-\-gid\fR=<\fIgroup\fR>
If \fBsbatch\fR is run as root, and the \fB\-\-gid\fR option is used,
submit the job with \fIgroup\fR's group access permissions. \fIgroup\fR
may be the group name or the numerical group ID.
.IP

.TP
\fB\-\-gpu\-bind\fR=[verbose,]<\fItype\fR>
Equivalent to \-\-tres\-bind=gres/gpu:[verbose,]<\fItype\fR>
.IP

.TP
\fB\-\-gpu\-freq\fR=[<\fItype\fR]=\fIvalue\fR>[,<\fItype\fR=\fIvalue\fR>][,verbose]
Request that GPUs allocated to the job are configured with specific frequency
values. This option can be used to independently configure the GPU and its
memory frequencies.
The \fIvalue\fR field can either be "low", "medium", "high", "highm1" or
a numeric value in megahertz (MHz).
If the specified numeric value is not possible, a value as close as
possible will be used. See below for definition of the values.
The \fIverbose\fR option causes current GPU frequency information to be logged.
Examples of use include "\-\-gpu\-freq=medium,memory=high" and
"\-\-gpu\-freq=450".

Supported \fIvalue\fR definitions:
.IP
.RS
.TP 10
\fBlow\fR
the lowest available frequency.
.IP

.TP
\fBmedium\fR
attempts to set a frequency in the middle of the available range.
.IP

.TP
\fBhigh\fR
the highest available frequency.
.IP

.TP
\fBhighm1\fR
(high minus one) will select the next highest available frequency.
.RE
.IP

.TP
\fB\-G\fR, \fB\-\-gpus\fR=[\fItype\fR:]<\fInumber\fR>
Specify the total number of GPUs required for the job.
An optional GPU type specification can be supplied.
For example "\-\-gpus=volta:3".
See also the \fB\-\-gpus\-per\-node\fR, \fB\-\-gpus\-per\-socket\fR and
\fB\-\-gpus\-per\-task\fR options.
.br
\fBNOTE\fR: The allocation has to contain at least one GPU per node, or one of
each GPU type per node if types are used. Use heterogeneous jobs if different
nodes need different GPU types.
.IP

.TP
\fB\-\-gpus\-per\-node\fR=[\fItype\fR:]<\fInumber\fR>
Specify the number of GPUs required for the job on each node included in
the job's resource allocation.
An optional GPU type specification can be supplied.
For example "\-\-gpus\-per\-node=volta:3".
Multiple options can be requested in a comma separated list, for example:
"\-\-gpus\-per\-node=volta:3,kepler:1".
See also the \fB\-\-gpus\fR, \fB\-\-gpus\-per\-socket\fR and
\fB\-\-gpus\-per\-task\fR options.
.IP

.TP
\fB\-\-gpus\-per\-task\fR=[\fItype\fR:]<\fInumber\fR>
Specify the number of GPUs required for the job on each task to be spawned
in the job's resource allocation.
An optional GPU type specification can be supplied.
For example "\-\-gpus\-per\-task=volta:1". Multiple options can be
requested in a comma separated list, for example:
"\-\-gpus\-per\-task=volta:3,kepler:1". See also the \fB\-\-gpus\fR,
\fB\-\-gpus\-per\-socket\fR and \fB\-\-gpus\-per\-node\fR options.
This option requires an explicit task count, e.g. \-n, \-\-ntasks or "\-\-gpus=X
\-\-gpus\-per\-task=Y" rather than an ambiguous range of nodes with \-N, \-\-nodes.
This option will implicitly set \-\-tres\-bind=gres/gpu:per_task:<gpus_per_task>,
or if multiple gpu types are specified
\-\-tres\-bind=gres/gpu:per_task:<gpus_per_task_type_sum>. However, that can be
overridden with an explicit \-\-tres\-bind=gres/gpu specification.
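.br
For example, four tasks each bound to one GPU (a sketch):
.IP
.nf
$ sbatch \-n4 \-\-gpus\-per\-task=1 job.sh
.fi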
.br
.IP

.TP
\fB\-\-gres\fR=<\fIlist\fR>
Specifies a comma\-delimited list of generic consumable resources requested per
node.
The format for each entry in the list is "name[[:type]:count]".
The \fIname\fR is the type of consumable resource (e.g. gpu).
The \fItype\fR is an optional classification for the resource (e.g. a100).
The \fIcount\fR is the number of those resources with a default value of 1.
The count can have a suffix of
"k" or "K" (multiple of 1024),
"m" or "M" (multiple of 1024 x 1024),
"g" or "G" (multiple of 1024 x 1024 x 1024),
"t" or "T" (multiple of 1024 x 1024 x 1024 x 1024),
"p" or "P" (multiple of 1024 x 1024 x 1024 x 1024 x 1024).
The specified resources will be allocated to the job on each node.
The available generic consumable resources is configurable by the system
administrator.
A list of available generic consumable resources will be printed and the
command will exit if the option argument is "help".
Examples of use include "\-\-gres=gpu:2", "\-\-gres=gpu:kepler:2", and
"\-\-gres=help".
.IP

.TP
\fB\-\-gres\-flags\fR=<\fItype\fR>
Specify generic resource task binding options.
.IP
.RS

.TP
.B enforce\-binding
The only CPUs available to the job will be those bound to the selected
GRES (i.e. the CPUs identified in the gres.conf file will be strictly
enforced). This option may result in delayed initiation of a job.
For example a job requiring two GPUs and one CPU will be delayed until both
GPUs on a single socket are available rather than using GPUs bound to separate
sockets, however, the application performance may be improved due to improved
communication speed.
Requires the node to be configured with more than one socket and resource
filtering will be performed on a per\-socket basis.
.br
\fBNOTE\fR: This option can be set by default in \fBSelectTypeParameters\fR.
.br
\fBNOTE\fR: This option is specific to \fBSelectType=cons_tres\fR.
.br
\fBNOTE\fR: This option can give undefined results if attempting to enforce
binding on multiple gres on multiple sockets.
.IP

.TP
.B one\-task\-per\-sharing
Do not allow different tasks to be allocated shared gres from the same
sharing gres.
.br
\fBNOTE\fR: This flag is only enforced if shared gres are requested with
\-\-tres\-per\-task.
.br
\fBNOTE\fR: This option can be set by default with
\fBSelectTypeParameters=ONE_TASK_PER_SHARING_GRES\fR.
.br
\fBNOTE\fR: This option is specific to
\fBSelectTypeParameters=MULTIPLE_SHARING_GRES_PJ\fR
.RE
.IP

.TP
\fB\-h\fR, \fB\-\-help\fR
Display help information and exit.
.IP

.TP
\fB\-\-hint\fR=<\fItype\fR>
Bind tasks according to application hints.
.br
\fBNOTE\fR: This option implies specific values for certain related options,
which prevents its use with any user\-specified values for
\fB\-\-ntasks\-per\-core\fR, \fB\-\-cores\-per\-socket\fR,
\fB\-\-sockets\-per\-node\fR, \fB\-\-threads\-per\-core\fR or \fB\-B\fR.
These conflicting options will override \fB\-\-hint\fR when specified as
command line arguments. If a conflicting option is specified as an environment
variable, \-\-hint as a command line argument will take precedence.
.IP
.RS
.TP
.B multithread
Use extra threads with in\-core multi\-threading,
which can benefit communication intensive applications.
Only supported with the task/affinity plugin.
.IP

.TP
.B nomultithread
Don't use extra threads with in\-core multi\-threading;
restricts tasks to one thread per core.
Only supported with the task/affinity plugin.
.IP

.TP
.B help
show this help message
.RE
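.IP
For example, to restrict tasks to one thread per core (a sketch):
.IP
.nf
#SBATCH \-\-hint=nomultithread
.fi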
.IP

.TP
\fB\-H, \-\-hold\fR
Specify the job is to be submitted in a held state (priority of zero).
A held job can now be released using scontrol to reset its priority
(e.g. "\fIscontrol release <job_id>\fR").
.IP

.TP
\fB\-\-ignore\-pbs\fR
Ignore all "#PBS" and "#BSUB" options specified in the batch script.
.IP

.TP
\fB\-i\fR, \fB\-\-input\fR=<\fIfilename_pattern\fR>
Instruct Slurm to connect the batch script's standard input
directly to the file name specified in the "\fIfilename pattern\fR".

By default, "/dev/null" is open on the batch script's standard input and both
standard output and standard error are directed to a file of the name
"slurm\-%j.out", where the "%j" is replaced with the job allocation number, as
described below in the \fBfilename pattern\fR section.
.IP

.TP
\fB\-J\fR, \fB\-\-job\-name\fR=<\fIjobname\fR>
Specify a name for the job allocation. The specified name will appear along with
the job id number when querying running jobs on the system. The default
is the name of the batch script, or just "sbatch" if the script is
read on sbatch's standard input.
.IP

.TP
\fB\-\-kill\-on\-invalid\-dep\fR=<yes|no>
If a job has an invalid dependency that can never be satisfied, this parameter
tells Slurm whether or not to terminate it. A terminated job state will be
JOB_CANCELLED.
If this option is not specified, the system wide behavior applies.
.IP

.TP
\fB\-L\fR, \fB\-\-licenses\fR=<\fIlicense\fR>[@\fIdb\fR][:\fIcount\fR][,\fIlicense\fR[@\fIdb\fR][:\fIcount\fR]...]
Specification of licenses (or other resources available on all nodes of the
cluster) which must be allocated to this job. License names can be followed by
a colon and count (the default count is one). Multiple license names should be
comma separated (e.g. "\-\-licenses=foo:4,bar"). If multiple licenses separated
by a "|" are requested,
then only one of the license requests are required for the job. For example,
"\-\-licenses=foo:4|bar". AND and OR cannot both be used.
To submit jobs using remote licenses, those served by the slurmdbd, specify
the name of the server providing the licenses.
For example "\-\-license=nastran@slurmdb:12".

\fBNOTE\fR: When submitting heterogeneous jobs, license requests
may only be made on the first component job.
For example "sbatch \-L ansys:2 : script.sh".

\fBNOTE\fR: If licenses are tracked in AccountingStorageTres and OR is used,
ReqTRES will display all requested tres separated by commas. AllocTRES will
display only the license that was allocated to the job.

\fBNOTE\fR: When a job requests OR'd licenses, Slurm will attempt to allocate
the licenses in the order in which they are requested. This specified order
will take precedence even if the rest of requested licenses could be satisfied
on a requested reservation. This also applies to backfill planning when
\fBSchedulerParameters=bf_licenses\fR is configured.
.IP

.TP
\fB\-\-mail\-type\fR=<\fItype\fR>
Notify user by email when certain event types occur.
Valid \fItype\fR values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to
BEGIN, END, FAIL, INVALID_DEPEND, REQUEUE, and STAGE_OUT), INVALID_DEPEND
(dependency never satisfied), STAGE_OUT (burst buffer stage out and teardown
completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent of time limit),
TIME_LIMIT_80 (reached 80 percent of time limit), TIME_LIMIT_50 (reached 50
percent of time limit) and ARRAY_TASKS (send emails for each array task).
Multiple \fItype\fR values may be specified in a comma separated list.
NONE will suppress all event notifications, ignoring any other values specified.
By default no email notifications are sent.
The user to be notified is indicated with \fB\-\-mail\-user\fR.

Unless the ARRAY_TASKS option is specified, mail notifications on job BEGIN,
END, FAIL and REQUEUE apply to a job array as a whole rather than generating
individual email messages for each task in the job array.
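.br
For example (address hypothetical):
.IP
.nf
#SBATCH \-\-mail\-type=END,FAIL
#SBATCH \-\-mail\-user=user@example.com
.fi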
.IP

.TP
\fB\-\-mail\-user\fR=<\fIuser\fR>
User to receive email notification of state changes as defined by
\fB\-\-mail\-type\fR. This may be a full email address or a username. If a
username is specified, the value from \fBMailDomain\fR in slurm.conf will be
appended to create an email address.
The default value is the submitting user.
.IP

.TP
\fB\-\-mcs\-label\fR=<\fImcs\fR>
Used only when a compatible \fBMCSPlugin\fR is enabled. This parameter is a
string defining the MCS label to be assigned to the job.
.IP

.TP
\fB\-\-mem\fR=<\fIsize\fR>[\fIunits\fR]
Specify the real memory required per node. Default units are megabytes.
Different units can be specified using the suffix [K|M|G|T].
This parameter would generally be used if whole nodes
are allocated to jobs (\fBSelectType=select/linear\fR).
Also see \fB\-\-mem\-per\-cpu\fR and \fB\-\-mem\-per\-gpu\fR.
The \fB\-\-mem\fR, \fB\-\-mem\-per\-cpu\fR and \fB\-\-mem\-per\-gpu\fR
options are mutually exclusive. If \fB\-\-mem\fR, \fB\-\-mem\-per\-cpu\fR or
\fB\-\-mem\-per\-gpu\fR are specified as command line arguments, then they will
take precedence over the environment.

\fBNOTE\fR: A memory size specification of zero is treated as a special case and
grants the job access to all of the memory on each node.

\fBNOTE\fR: The memory used by each slurmstepd process is included in the job's
total memory usage. It typically consumes between 20MiB and 200MiB, though this
can vary depending on system configuration and any loaded plugins.

\fBNOTE\fR: Memory requests will not be strictly enforced unless Slurm is
configured to use an enforcement mechanism. See \fBConstrainRAMSpace\fR in
the \fBcgroup.conf\fR(5) man page and \fBOverMemoryKill\fR in the
\fBslurm.conf\fR(5) man page for more details.
.IP

.TP
\fB\-\-mem\-bind\fR=[{quiet|verbose},]<\fItype\fR>
Bind tasks to memory. Used only when the task/affinity plugin is enabled
and the NUMA memory functions are available.
\fBNote that the resolution of CPU and memory binding
may differ on some architectures.\fR For example, CPU binding may be performed
at the level of the cores within a processor while memory binding will
be performed at the level of nodes, where the definition of "nodes"
may differ from system to system.
By default no memory binding is performed; any task using any CPU can use
any memory. This option is typically used to ensure that each task is bound to
the memory closest to its assigned CPU. \fBThe use of any type other than
"none" or "local" is not recommended.\fR

\fBNOTE\fR: To have Slurm always report on the selected memory binding for
all commands executed in a shell, you can enable verbose mode by
setting the SLURM_MEM_BIND environment variable value to "verbose".

The following informational environment variables are set when
\fB\-\-mem\-bind\fR is in use:
.IP
.nf
   SLURM_MEM_BIND_LIST
   SLURM_MEM_BIND_PREFER
   SLURM_MEM_BIND_TYPE
   SLURM_MEM_BIND_VERBOSE
.fi

See the \fBENVIRONMENT VARIABLES\fR section for a more detailed description
of the individual SLURM_MEM_BIND* variables.

Supported options include:
Bind by setting memory masks on tasks (or ranks) as specified where <list> is
fR(5) man page for a full list of flags. The environment
variable takes precedence over the setting in the slurm.conf.
.IP

.TP
\fBSLURM_EXIT_ERROR\fR
Specifies the exit code generated when a Slurm error occurs
(e.g. invalid options).
This can be used by a script to distinguish application exit codes from
various Slurm error conditions.
.IP

.TP
\fBSLURM_STEP_KILLED_MSG_NODE_ID\fR=ID
If set, only the specified node will log when the job or step are killed
by a signal.
.IP

.TP
\fBSLURM_UMASK\fR
If defined, Slurm will use the defined \fIumask\fR to set permissions when
creating the output/error files for the job.
.IP

.SH "OUTPUT ENVIRONMENT VARIABLES"
.TP
\fBSBATCH_MEM_BIND_PREFER\fR
Set to "prefer" if the \fB\-\-mem\-bind\fR option includes the prefer option.
.IP

.TP
\fBSBATCH_MEM_BIND_TYPE\fR
Set to the memory binding type specified with the \fB\-\-mem\-bind\fR option.
Possible values are "none", "rank", "map_mem:", "mask_mem:" and "local".
.IP

.TP
\fBSBATCH_MEM_BIND_VERBOSE\fR
Set to "verbose" if the \fB\-\-mem\-bind\fR option includes the verbose option.
Set to "quiet" otherwise.
.IP

.TP
\fBSLURM_*_HET_GROUP_#\fR
For a heterogeneous job allocation, the environment variables are set separately
for each component.
.IP

.TP
\fBSLURM_ARRAY_JOB_ID\fR
Job array's master job ID number.
.IP

.TP
\fBSLURM_ARRAY_TASK_COUNT\fR
Total number of tasks in a job array.
.IP

.TP
\fBSLURM_ARRAY_TASK_ID\fR
Job array ID (index) number.
.IP

.TP
\fBSLURM_ARRAY_TASK_MAX\fR
Job array's maximum ID (index) number.
.IP

.TP
\fBSLURM_ARRAY_TASK_MIN\fR
Job array's minimum ID (index) number.
.IP

.TP
\fBSLURM_ARRAY_TASK_STEP\fR
Job array's index step size.
.IP

.TP
\fBSLURM_CPUS_PER_GPU\fR
Number of CPUs requested per allocated GPU.
Only set if the \fB\-\-cpus\-per\-gpu\fR option is specified.
.IP

.TP
\fBSLURM_CPUS_PER_TASK\fR
Number of cpus requested per task.
Only set if either the \fB\-\-cpus\-per\-task\fR option or the
\fB\-\-tres\-per\-task=cpu=#\fR option is specified.
.IP

.TP
\fBSLURM_CONTAINER\fR
OCI Bundle for job.
Only set if \fB\-\-container\fR is specified.
.IP

.TP
\fBSLURM_CONTAINER_ID\fR
OCI id for job.
Only set if \fB\-\-container-id\fR is specified.
.IP

.TP
\fBSLURM_DIST_PLANESIZE\fR
Plane distribution size. Only set for plane distributions.
See \fB\-m, \-\-distribution\fR.
.IP

.TP
\fBSLURM_DISTRIBUTION\fR
Same as \fB\-m, \-\-distribution\fR
.IP

.TP
\fBSLURM_EXPORT_ENV\fR
Same as \fB\-\-export\fR.
.IP

.TP
\fBSLURM_GPU_BIND\fR
Requested binding of tasks to GPU.
Only set if the \fB\-\-gpu\-bind\fR option is specified.
.IP

.TP
\fBSLURM_GPU_FREQ\fR
Requested GPU frequency.
Only set if the \fB\-\-gpu\-freq\fR option is specified.
.IP

.TP
\fBSLURM_GPUS_PER_NODE\fR
Requested GPU count per allocated node.
Only set if the \fB\-\-gpus\-per\-node\fR option is specified.
.IP

.TP
\fBSLURM_GPUS_PER_SOCKET\fR
Requested GPU count per allocated socket.
Only set if the \fB\-\-gpus\-per\-socket\fR option is specified.
.IP

.TP
\fBSLURM_GTIDS\fR
Global task IDs running on this node. Zero origin and comma separated.
It is read internally by pmi if Slurm was built with pmi support. Leaving
the variable set may cause problems when using external packages from
within the job (Abaqus and Ansys have been known to have problems when
it is set \- consult the appropriate documentation for 3rd party software).
.IP

.TP
\fBSLURM_HET_SIZE\fR
Set to count of components in heterogeneous job.
.IP

.TP
\fBSLURM_JOB_ACCOUNT\fR
Account name associated with the job allocation.
.IP

.TP
\fBSLURM_JOB_CPUS_PER_NODE\fR
Count of CPUs available to the job on the nodes in the allocation, using the
format \fICPU_count\fR[(x\fInumber_of_nodes\fR)][,\fICPU_count\fR
[(x\fInumber_of_nodes\fR)] ...].
For example: SLURM_JOB_CPUS_PER_NODE='72(x2),36' indicates that on the
first and second nodes (as listed by SLURM_JOB_NODELIST) the allocation
has 72 CPUs, while the third node has 36 CPUs.
\fBNOTE\fR: The \fBselect/linear\fR plugin allocates entire nodes to jobs, so
the value indicates the total count of CPUs on allocated nodes. The
\fBselect/cons_tres\fR plugin allocates individual
CPUs to jobs, so this number indicates the number of CPUs allocated to the job.
.IP

.TP
\fBSLURM_JOB_DEPENDENCY\fR
Set to value of the \fB\-\-dependency\fR option.
.IP

.TP
\fBSLURM_JOB_END_TIME\fR
The UNIX timestamp for a job's projected end time.
.IP

.TP
\fBSLURM_JOB_LICENSES\fR
Name and count of any license(s) requested.
.IP

.TP
\fBSLURM_JOB_NAME\fR
Name of the job.
.IP

.TP
\fBSLURM_JOB_NODELIST\fR
List of nodes allocated to the job.
.IP

.TP
\fBSLURM_JOB_NUM_NODES\fR
Total number of nodes in the job's resource allocation.
.IP

.TP
\fBSLURM_JOB_PARTITION\fR
Name of the partition in which the job is running.
.IP

.TP
\fBSLURM_JOB_QOS\fR
Quality Of Service (QOS) of the job allocation.
.IP

.TP
\fBSLURM_JOB_RESERVATION\fR
Advanced reservation containing the job allocation, if any.
.IP

.TP
\fBSLURM_JOB_SEGMENT_SIZE\fR
The segment size used to create the job allocation.
Only set if \fB\-\-segment\fR is specified.
.IP

.TP
\fBSLURM_JOB_START_TIME\fR
The UNIX timestamp for a job's start time.
.IP

.TP
\fBSLURM_JOBID\fR
The ID of the job allocation. See \fBSLURM_JOB_ID\fR. Included for backwards
compatibility.
.IP

.TP
\fBSLURM_MEM_PER_NODE\fR
Same as \fB\-\-mem\fR
.IP

.TP
\fBSLURM_NETWORK\fR
Set to the value of the \fB\-\-network\fR option, if specified.
.IP

.TP
\fBSLURM_NNODES\fR
Total number of nodes in the job's resource allocation. See
\fBSLURM_JOB_NUM_NODES\fR. Included for backwards compatibility.
.IP

.TP
\fBSLURM_NODEID\fR
ID of the nodes allocated.
.IP

.TP
\fBSLURM_NODELIST\fR
List of nodes allocated to the job. See \fBSLURM_JOB_NODELIST\fR. Included
for backwards compatibility.
.IP

.TP
\fBSLURM_NPROCS\fR
Same as \fBSLURM_NTASKS\fR. Included for backwards compatibility.
.IP

.TP
\fBSLURM_NTASKS\fR
Set to value of the \fB\-\-ntasks\fR option, if specified. Or, if any of the
\fB\-\-ntasks\-per\-*\fR options are specified, set to the number of tasks in
the job.

\fBNOTE\fR: This is also an input variable for srun, so if set it will
effectively set the \fB\-\-ntasks\fR option for srun when called from the batch
script.
.IP

.TP
\fBSLURM_NTASKS_PER_CORE\fR
Number of tasks requested per core.
Only set if the \fB\-\-ntasks\-per\-core\fR option is specified.

.IP

.TP
\fBSLURM_NTASKS_PER_SOCKET\fR
Number of tasks requested per socket.
Only set if the \fB\-\-ntasks\-per\-socket\fR option is specified.
.IP

.TP
\fBSLURM_OOMKILLSTEP\fR
Same as \fB\-\-oom\-kill\-step\fR
.IP

.TP
\fBSLURM_OVERCOMMIT\fR
Set to \fB1\fR if \fB\-\-overcommit\fR was specified.
.IP

.TP
\fBSLURM_PRIO_PROCESS\fR
The scheduling priority (nice value) at the time of job submission.
This value is propagated to the spawned processes.
.IP

.TP
\fBSLURM_PROCID\fR
The MPI rank (or relative process ID) of the current process.
.IP

.TP
\fBSLURM_PROFILE\fR
Same as \fB\-\-profile\fR
.IP

.TP
\fBSLURM_RESTART_COUNT\fR
If the job has been restarted due to system failure or has been
explicitly requeued, this will be set to the number of times
the job has been restarted.
.IP

.TP
\fBSLURM_SHARDS_ON_NODE\fR
Number of GPU Shards available to the step on this node.
.IP

.TP
\fBSLURM_SUBMIT_DIR\fR
The directory from which \fBsbatch\fR was invoked.
.IP

.TP
\fBSLURM_SUBMIT_HOST\fR
The hostname of the computer from which \fBsbatch\fR was invoked.
.IP

.TP
\fBSLURM_THREADS_PER_CORE\fR
This is only set if \fB\-\-threads\-per\-core\fR or
\fBSBATCH_THREADS_PER_CORE\fR were specified. The value will be set to the
value specified by \fB\-\-threads\-per\-core\fR or
\fBSBATCH_THREADS_PER_CORE\fR. This is used by subsequent srun calls within the
job allocation.
.IP

.TP
\fBSLURM_TOPOLOGY_ADDR\fR
This is set only if the system has the topology/tree plugin
configured. The value will be set to the names of the network switches
which may be involved in the job's communications, from the
system's top level switch down to the leaf switch, ending with
the node name. A period is used to separate each hardware component name.
.IP

.TP
\fBSLURM_TOPOLOGY_ADDR_PATTERN\fR
This is set only if the system has the topology/tree plugin
configured. The value will be set to the component types listed in
SLURM_TOPOLOGY_ADDR. Each component will be identified as
either "switch" or "node". A period is used to separate each
hardware component type.
.IP

.TP
\fBSLURM_TRES_PER_TASK\fR
Set to the value of \fB\-\-tres\-per\-task\fR. If \fB\-\-cpus\-per\-task\fR or
\fB\-\-gpus\-per\-task\fR is specified, it is also set in
\fBSLURM_TRES_PER_TASK\fR as if it were specified in \fB\-\-tres\-per\-task\fR.
.IP

.TP
\fBSLURMD_NODENAME\fR
Name of the node running the job script.
.IP

.SH "EXAMPLES"

.TP
Specify a batch script by filename on the command line. \
The batch script specifies a 1 minute time limit for the job.
.IP
.nf
$ cat myscript
#!/bin/sh
#SBATCH \-\-time=1
srun hostname |sort

$ sbatch myscript
Submitted batch job 65540
.fi

.TP
Pass a batch script to sbatch on standard input:
.IP
.nf
$ sbatch \-N4 <<EOF
> #!/bin/sh
> srun hostname |sort
> EOF
sbatch: Submitted batch job 65541

$ cat slurm\-65541.out
host1
host2
host3
host4
.fi

.TP
To create a heterogeneous job with 3 components, each allocating a unique set \
of nodes:
.IP
.nf
$ sbatch \-w node[2\-3] : \-w node4 : \-w node[5\-7] work.bash
Submitted batch job 34987
.fi

.SH "COPYING"
Copyright (C) 2006\-2007 The Regents of the University of California.
Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
.br
Copyright (C) 2008\-2010 Lawrence Livermore National Security.
.br
Copyright (C) 2010\-2022 SchedMD LLC.
.LP
This file is part of Slurm, a resource management program.
For details, see <https://slurm.schedmd.com/>.
.LP
Slurm is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.
.LP
Slurm is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.

.SH "SEE ALSO"
.LP
\fBsinfo\fR(1), \fBsattach\fR(1), \fBsalloc\fR(1), \fBsqueue\fR(1), \fBscancel\fR(1), \fBscontrol\fR(1),
\fBslurm.conf\fR(5), \fBsched_setaffinity\fR (2), \fBnuma\fR (3)