HP XC System 3.x Software User Manual, Page 85

10.3 Differences Between LSF-HPC and LSF-HPC Integrated with SLURM

LSF-HPC integrated with SLURM for the HP XC environment supports all the standard features and
functions that LSF-HPC supports, except for the items described in this section, in "Using LSF-HPC
Integrated with SLURM in the HP XC Environment", and in the HP XC release notes for LSF-HPC.

By LSF-HPC standards, the HP XC system is a single host. Therefore, all LSF-HPC “per-host”
configuration and “per-host” options apply to the entire HP XC system.

LSF-HPC integrated with SLURM knows about the HP XC compute nodes only through SLURM, so
any preference or resource request that is intended for the HP XC compute nodes must go through
LSF-HPC's external SLURM scheduler. See "LSF-SLURM External Scheduler" (page 88) for more
details.
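For example, an allocation request aimed at the compute nodes is attached to the job at submission time through the external-scheduler option. The sketch below only assembles and prints such a command line rather than submitting it; the application name my_app, the node count, and the CPU count are placeholders, and the -ext option with the SLURM[...] request syntax is assumed to match your LSF-HPC installation.

```shell
# Build (but do not submit) a bsub command line that routes a
# node-count request through the external SLURM scheduler.
# my_app and the counts are illustrative placeholders.
EXTSCHED='SLURM[nodes=4]'
CMD="bsub -n 8 -ext \"$EXTSCHED\" srun ./my_app"
echo "$CMD"
# prints: bsub -n 8 -ext "SLURM[nodes=4]" srun ./my_app
```

On a live HP XC system, the printed command would be run as-is to submit an 8-CPU job constrained to 4 compute nodes.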

LSF-HPC requires LSF daemons on every node. These daemons allow LSF-HPC to extract detailed
information from each node, which it displays and uses for scheduling. This information includes
CPU load, number of users, free memory, and so on.

When LSF-HPC is integrated with SLURM, it runs daemons only on one node in the HP XC system.
Therefore, it relies on SLURM for static resource information (that is, number of CPUs, total physical
memory, and any assigned SLURM “features”), and bases its scheduling on that static information.
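Because LSF-HPC takes its static picture of the machine from SLURM, the same data can be inspected by querying SLURM directly. The sketch below prints a sinfo invocation using standard SLURM format specifiers (%N node names, %c CPUs, %m memory in megabytes, %f assigned features); the command is only printed here, since actually running it requires a live SLURM installation.

```shell
# Print the sinfo invocation that reports the static data LSF-HPC
# schedules against: node names, CPU counts, memory, and features.
SINFO_FMT='%N %c %m %f'
echo "sinfo -o '$SINFO_FMT'"
# prints: sinfo -o '%N %c %m %f'
```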

LSF-HPC integrated with SLURM does not collect the following information from each node in the
HP XC system:

— tmp
— swp
— mem
— r15m
— ut
— pg
— io
— maxswap
— ndisks
— r15s
— r1m

The lshosts and lsload commands display “-” for each of these items.

LSF-HPC integrated with SLURM runs daemons on only one node within the HP XC system. This
node hosts an HP XC LSF Alias, which is an IP address and corresponding host name established
specifically for LSF-HPC integrated with SLURM on HP XC to use. The HP XC system is known by
this HP XC LSF Alias within LSF.

Various LSF-HPC commands, such as lsid, lshosts, and bhosts, display the HP XC LSF Alias in
their output. The default value of the HP XC LSF Alias, lsfhost.localdomain, is shown in the
following examples:

$ lsid

Platform LSF HPC version number for SLURM, date stamp

Copyright 1992-2005 Platform Computing Corporation

My cluster name is hptclsf

My master name is lsfhost.localdomain

$ lshosts

HOST_NAME type model cpuf ncpus maxmem maxswp server RESOURCES

lsfhost.loc SLINUX6 Opteron8 60.0 8 2007M - Yes (slurm)

$ bhosts

HOST_NAME STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV

lsfhost.localdomai ok - 8 0 0 0 0 0

All HP XC nodes are dynamically configured as "LSF Floating Client Hosts" so that you can execute
LSF-HPC commands from any HP XC node. When you execute an LSF-HPC command from an
HP XC node, an entry in the output of the lshosts command acknowledges that the node is licensed
to run LSF-HPC commands.

In the following example, node n15 is configured as an LSF Client Host, not the LSF execution host.
This is shown in the output when the lshosts command is run on that node: the values for
type and model are UNKNOWN, and the value for server is No.
