HP XC System 3.x Software User Manual
Page 50

The SLURM srun command is required to run jobs on an LSF-HPC node allocation: srun is the user job that the LSF bsub command launches. SLURM runs the jobname in parallel on the reserved cores in the lsf partition.
The jobname parameter is the name of an executable file or command to be run in parallel.
Example 5-5 illustrates a non-MPI parallel job submission. The job output shows that the job “srun hostname” was launched from the LSF execution host lsfhost.localdomain, and that it ran on 4 cores from the compute nodes n1 and n2.
Example 5-5 Submitting a Non-MPI Parallel Job
$ bsub -n4 -I srun hostname
Job <21> is submitted to default queue.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n1
n2
n2
You can use the LSF-SLURM external scheduler to specify additional SLURM options on the command line. As shown in Example 5-6, it can be used to submit a job that runs one task per compute node (on 4 nodes):
Example 5-6 Submitting a Non-MPI Parallel Job to Run One Task per Node
$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <22> is submitted to default queue.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4
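Other external-scheduler options can be combined in the same way. The following is a sketch only: the nodelist option and the node range n[1-4] are assumptions used for illustration, and the command is wrapped in echo because it can only be submitted on an HP XC system.

```shell
# Sketch: pin the allocation to specific nodes with an assumed "nodelist"
# external-scheduler option; remove the echo to submit on a real system.
echo 'bsub -n4 -ext "SLURM[nodelist=n[1-4]]" -I srun hostname'
```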
5.3.2 Submitting a Parallel Job That Uses the HP-MPI Message Passing Interface
Use the following format of the LSF bsub command to submit a parallel job that makes use of HP-MPI:
bsub -n num-procs [bsub-options] mpijob
The bsub command submits the job to LSF-HPC.
The -n num-procs parameter, which is required for parallel jobs, specifies the number of cores requested
for the job.
The mpijob argument has the following format:
mpirun [mpirun-options] [-srun] [srun-options] [mpi-jobname] [job-options]
See the mpirun(1) manpage for more information on this command.
The mpirun command's -srun option is required if the MPI_USESRUN environment variable is not set
or if you want to use additional srun options to execute your job.
The srun command, used by the mpirun command to launch the
tasks in parallel in the lsf partition,
determines the number of tasks to launch from the SLURM_NPROCS environment variable that was set by
LSF-HPC; this environment variable is equivalent to the number provided by the -n option of the bsub
command.
Any additional SLURM srun options are job-specific, not allocation-specific.
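To make the SLURM_NPROCS relationship concrete, the following sketch sets the variable by hand to mimic what LSF-HPC would export for a bsub -n4 submission (an assumption for illustration; inside a real allocation LSF-HPC sets it for you):

```shell
# Sketch only: LSF-HPC exports SLURM_NPROCS with the value given to bsub -n;
# srun reads it to decide how many tasks to launch. Here we set it by hand
# to mimic "bsub -n4" outside a real allocation.
SLURM_NPROCS=4
export SLURM_NPROCS
sh -c 'echo "tasks to launch: $SLURM_NPROCS"'
```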
The mpi-jobname is the executable file to be run. The mpi-jobname must be compiled with the
appropriate HP-MPI compilation utility. For more information, see the section titled Compiling applications
in the HP-MPI User's Guide.
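For instance, a complete build-and-submit workflow following this format might look as follows. This is a sketch: hello_world.c, the mpicc wrapper name, and the 4-core request are illustrative, and the commands are echoed because they require HP-MPI and LSF-HPC to execute.

```shell
# Sketch of a build-and-submit sequence (remove the echos on a real system):
echo 'mpicc -o hello_world hello_world.c'      # compile with an HP-MPI wrapper
echo 'bsub -n4 -I mpirun -srun ./hello_world'  # request 4 cores through LSF-HPC
```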
Example 5-7 shows an MPI job that runs a hello world program on 4 cores on 2 compute nodes.
Submitting Jobs