HP XC System 2.x Software User Manual
Example 2-3: Submitting a Non-MPI Parallel Job to Run One Task per Node
$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <22> is submitted to default queue
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4
2.3.5.3 Submitting an MPI Job
Submitting MPI jobs is discussed in detail in Section 7.4.5. The bsub command format to submit a job to HP-MPI by means of the mpirun command is:
bsub -n num-procs [bsub-options] mpirun [mpirun-options] [-srun [srun-options]] mpi-jobname [job-options]
The -srun option is required by the mpirun command to run jobs in the LSF partition. The -n num-procs parameter specifies the number of processors the job requests; it is required for parallel jobs. Any SLURM srun options that are included are job-specific, not allocation-specific.
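To make the division of labor concrete, the following sketch composes (but does not submit) a command line of this form. The values and the ./hello_world name are illustrative assumptions; --label is an example of a job-specific srun option, which prefixes each line of output with its task number.

```shell
# Sketch only: builds and prints a bsub command line of the form above.
# All values and the ./hello_world name are hypothetical.
nprocs=4              # allocation size, passed to bsub -n (required for parallel jobs)
srun_opts="--label"   # job-specific srun option (applies to this job, not the allocation)
mpi_job=./hello_world # hypothetical MPI executable

cmd="bsub -n $nprocs -I mpirun -srun $srun_opts $mpi_job"
echo "$cmd"
```

Running the sketch prints the command line that would be submitted: bsub -n 4 -I mpirun -srun --label ./hello_world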
Using SLURM Options in MPI Jobs with the LSF External Scheduler
An important option when submitting HP-MPI jobs is LSF’s external scheduler option. The LSF external scheduler provides additional capabilities at the job level and queue level by allowing several SLURM options to be included on the LSF command line. For example, it can be used to submit a job to run one task per node, or to run on specific nodes. This option is discussed in detail in Section 7.4.2; an example of its use is provided in this section.
Consider an HP XC configuration in which lsfhost.localdomain is the LSF execution host and nodes n[1-10] are compute nodes in the LSF partition. All nodes contain two processors, providing 20 processors for use by LSF jobs.
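The 20-processor figure follows directly from the node and processor counts; as a quick check (shell arithmetic only, nothing is submitted):

```shell
# 10 compute nodes (n[1-10]) with 2 processors each give LSF
# 20 processors to allocate -- the upper bound for a bsub -n request.
nodes=10
cpus_per_node=2
echo "processors available to LSF: $((nodes * cpus_per_node))"
```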
Example 2-4: Running an MPI Job with LSF
$ bsub -n4 -I mpirun -srun ./hello_world
Job <24> is submitted to default queue
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world! I’m 1 of 4 on host1
Hello world! I’m 3 of 4 on host2
Hello world! I’m 0 of 4 on host1
Hello world! I’m 2 of 4 on host2
Example 2-5: Running an MPI Job with LSF Using the External Scheduler Option
$ bsub -n4 -ext "SLURM[nodes=4]" -I mpirun -srun ./hello_world
Job <27> is submitted to default queue
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world! I’m 1 of 4 on host1
2-10 Using the System