Example 10-2 Examples of Launching LSF-HPC Jobs Without the srun Command
The following bsub command line invokes the bash shell to run the hostname command with
the pdsh command:
[lsfadmin@n16 ~]$ bsub -n4 -I -ext "SLURM[nodes=4]" /bin/bash -c 'pdsh -w "$LSB_HOSTS" hostname'
Job <118> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n15: n15
n14: n14
n16: n16
n13: n13
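If you want to check the host list that the pdsh command consumes, a minimal variation of the same
submission is to echo the LSB_HOSTS variable from within the allocation (LSB_HOSTS is a standard LSF
environment variable; its value, omitted here, is the space-separated list of allocated hosts):
[lsfadmin@n16 ~]$ bsub -n4 -I -ext "SLURM[nodes=4]" /bin/bash -c 'echo "$LSB_HOSTS"'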
The following command line uses the mpirun command to launch the hello_world example
program on one core on each of four nodes:
[lsfadmin@n16 ~]$ bsub -n4 -I -ext "SLURM[nodes=4]" mpirun -lsb_hosts ./hello_world
Job <119> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world! I'm 0 of 4 on n13
Hello world! I'm 1 of 4 on n14
Hello world! I'm 2 of 4 on n15
Hello world! I'm 3 of 4 on n16
[lsfadmin@n16 ~]$"
The differences described in the HP XC System Software documentation take precedence over
descriptions in the LSF documentation from Platform Computing Corporation. See “Differences
Between LSF-HPC and LSF-HPC Integrated with SLURM” and the lsf_diff(1) manpage for more
information on the subtle differences between LSF-HPC and LSF-HPC integrated with SLURM.
10.3 Differences Between LSF-HPC and LSF-HPC Integrated with SLURM
LSF-HPC integrated with SLURM for the HP XC environment supports all the standard features
and functions that LSF-HPC supports, except for those items described in this section, in
“LSF-HPC Integrated with SLURM in the HP XC Environment”, and in the HP XC release notes
for LSF-HPC.
•	By LSF-HPC standards, the HP XC system is a single host. Therefore, all LSF-HPC “per-host”
configuration and “per-host” options apply to the entire HP XC system.
LSF-HPC integrated with SLURM knows about the HP XC compute nodes only through SLURM,
so any preference or resource request that is intended for the HP XC compute nodes must go
through LSF-HPC's external SLURM scheduler; see the section on the LSF-SLURM external
scheduler, and the example after this list, for more details.
•	LSF-HPC requires LSF daemons on every node. These daemons allow LSF-HPC to extract
detailed information from each node, which LSF-HPC displays and uses for scheduling. This
information includes CPU load, number of users, free memory, and so on.
When LSF-HPC is integrated with SLURM, it runs daemons only on one node in the HP XC
system. Therefore, it relies on SLURM for static resource information (that is, number of
CPUs, total physical memory, and any assigned SLURM “features”), and bases its scheduling
on that static information.
•	LSF-HPC integrated with SLURM does not collect the following information from each node
in the HP XC system:
— tmp
— swp
— mem
— r15m
— ut
— pg
— io
— maxswap
— ndisks
— r15s
— r1m
The lshosts and lsload commands display “-” for each of these items.
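For example, a node-level preference such as a SLURM feature must be expressed through the external
SLURM scheduler option rather than through an LSF resource requirement string. The following sketch
assumes that your release supports the constraint option in the SLURM[] syntax and that the
administrator has assigned a SLURM feature named dualcore to some compute nodes; see the LSF-SLURM
external scheduler documentation for the options available in your release:
[lsfadmin@n16 ~]$ bsub -n4 -I -ext "SLURM[nodes=4;constraint=dualcore]" ./hello_world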