
View the finished jobs:

$ bhist -l 1008
Job <1008>, User smith, Project ,
Interactive pseudo-terminal mode, Command

date and time stamp: Submitted from host n16, to Queue ,
CWD <$HOME/tar_drop1/test>, 8 Processors Requested;
date and time stamp: Dispatched to 8 Hosts/Processors
<8*lsfhost.localdomain>;
date and time stamp: slurm_id=74;ncpus=8;slurm_alloc=n16,n14,n13,n15;
date and time stamp: Starting (Pid 26446);
date and time stamp: Done successfully. The CPU time used is 0.1 seconds;
date and time stamp: Post job process done successfully;

Summary of time in seconds spent in various states by date and time
PEND PSUSP RUN USUSP SSUSP UNKWN TOTAL
12 0 93 0 0 0 105
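
A job that has not yet finished can be examined in the same way with the bjobs command (a minimal sketch, reusing job ID 1008 for illustration):

$ bjobs -l 1008

The bjobs -l output reports the same submission and dispatch details while the job is still in the RUN state, before it appears in the bhist history as finished.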

View the node state:

$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
lsf up infinite 4 idle n[13-16]
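
The sinfo command can also produce a node-oriented report with its standard -N option (a sketch; the output shown is illustrative and corresponds to the four idle nodes above):

$ sinfo -N
NODELIST NODES PARTITION STATE
n[13-16] 4 lsf idle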

A.7 Submitting an HP-MPI Job with LSF-HPC

This example shows how to run an MPI job with the bsub command.

Show the environment:

$ lsid
Platform LSF HPC version number for SLURM, date and time stamp
Copyright 1992-2006 Platform Computing Corporation

My cluster name is penguin
My master name is lsfhost.localdomain

$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
lsf up infinite 4 alloc n[13-16]

$ lshosts
HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES
lsfhost.loc SLINUX6 DEFAULT 1.0 8 1M - Yes (slurm)

$ bhosts
HOST_NAME STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV
lsfhost.localdomai ok - 8 0 0 0 0 0
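
Queue status can be verified at this point as well (a sketch; bqueues is a standard LSF command whose output lists each queue with its priority, status, and job counts):

$ bqueues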

Run the job:

$ bsub -I -n6 -ext "SLURM[nodes=3]" mpirun -srun /usr/share/hello
Job <1009> is submitted to default queue .
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
I'm process 0! from ( n13 pid 27222)
Greetings from process 1! from ( n13 pid 27223)
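
The same job can be run in batch mode by omitting the -I option and capturing output with the standard LSF -o option (a sketch, assuming the same hello binary and node layout):

$ bsub -n6 -ext "SLURM[nodes=3]" -o hello.out mpirun -srun /usr/share/hello

When the job finishes, the greeting lines shown above are written to hello.out instead of the terminal.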
