Launching an Interactive MPI Job

To find the SLURM JOBID associated with this allocation, examine the bjobs output for the LSF job:

$ bjobs -l 124 | grep slurm
date and time stamp: slurm_id=150;ncpus=8;slurm_alloc=n[1-4];

LSF allocated nodes n[1-4] for this job. The SLURM JOBID is 150 for this allocation.

Begin your work in another terminal. Use ssh to log in to one of the compute nodes. If you want
to run tasks in parallel, use the srun command with the --jobid option to specify the SLURM
JOBID. For example, to run the hostname command on all nodes in the allocation:

$ srun --jobid=150 hostname
n1
n2
n3
n4

You can simplify this by first setting the SLURM_JOBID environment variable to the SLURM
JOBID, as follows:

$ export SLURM_JOBID=150
$ srun hostname
n1
n2
n3
n4

Note:

Be sure to unset the SLURM_JOBID environment variable when you are finished with the
allocation, so that a stale SLURM JOBID does not interfere with future jobs:

$ unset SLURM_JOBID

The following examples illustrate launching interactive MPI jobs. They use the hellompi job
script introduced in Section 5.3.2 (page 56).
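
The hellompi source is not reproduced on this page. As a rough sketch only, an MPI program in C
that produces output of the kind shown in Example 10-9 might look like the following; the actual
program introduced in Section 5.3.2 may differ, and the file name hellompi.c and the use of an
MPI compiler wrapper such as mpicc are assumptions here:

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int rank, size;
    char host[256];

    /* Initialize MPI and determine this process's rank and the total
       number of ranks in the job. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Report which node this rank is running on. */
    gethostname(host, sizeof(host));
    printf("Hello! I'm rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}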

Example 10-9 Launching an Interactive MPI Job

$ mpirun -srun --jobid=150 hellompi
Hello! I'm rank 0 of 4 on n1
Hello! I'm rank 1 of 4 on n2
Hello! I'm rank 2 of 4 on n3
Hello! I'm rank 3 of 4 on n4

Example 10-10 uses the -n 8 option to launch on all cores in the allocation.
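
As an illustrative sketch, assuming the same allocation (SLURM JOBID 150 with eight cores
across n[1-4]), such a command might take the following form:

$ mpirun -srun -n 8 --jobid=150 hellompi

Each of the eight ranks then prints its greeting, typically two ranks per node in this allocation.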
