
All HP XC nodes are dynamically configured as “LSF Floating Client Hosts” so that you can execute
LSF commands from any HP XC node. When you execute an LSF command from an HP XC node,
an entry in the output of the lshosts command acknowledges that the node is licensed to run LSF commands.

In the following example, node n15 is configured as an LSF Client Host, not the LSF execution host.
This is shown in the output when the lshosts command is run on that node: the values for type
and model are UNKNOWN, and the value for server is No.

$ lshosts
HOST_NAME     type     model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
lsfhost.loc   SLINUX6  Opteron8  60.0      8   2007M       -     Yes  (slurm)

$ ssh n15 lshosts
HOST_NAME     type     model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
lsfhost.loc   SLINUX6  Opteron8  60.0      8   2007M       -     Yes  (slurm)
n15           UNKNOWN  UNKNOWN_   1.0      -       -       -      No  ()

LSF-HPC-enforced job-level run-time limits are not supported.

LSF-HPC does not support parallel or SLURM-based interactive jobs in PTY mode (bsub -Is and
bsub -Ip). However, after LSF dispatches a user job on the HP XC system, you can use the srun or ssh
command to access the job's allocated resources directly. For more information, see
"Working Interactively Within an LSF-HPC Allocation".
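For example, in the following hypothetical session (the SLURM job ID 53 and node name n5 are
illustrative assumptions, and the srun --jobid option is assumed to be available in your SLURM
version), srun and ssh run a command on the resources already allocated to a dispatched job:

$ srun --jobid=53 hostname
$ ssh n5 hostname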

LSF-HPC does not support user-account mapping and system-account mapping.

LSF-HPC does not support chunk jobs. If a job is submitted to a chunk queue, SLURM lets the job pend.

LSF-HPC does not support topology-aware advanced reservation scheduling.

Job Terminology

The following terms are used to describe jobs submitted to LSF-HPC:

Batch job

A job submitted to LSF or SLURM that runs without any I/O connection
back to the terminal from which the job was submitted. This job may run
immediately, or it may run sometime in the future, depending on resource
availability and batch system scheduling policies.

Batch job submissions typically provide instructions on I/O management,
such as files from which to read input and filenames to collect output.

By default, LSF jobs are batch jobs. The output is e-mailed to the user,
which requires that e-mail be set up properly. SLURM batch jobs are
submitted with the srun -b command. By default, the output is written
to $CWD/slurm-SLURMjobID.out from the node on which the batch
job was launched.

Use Ctrl-C at any time to terminate the job.
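For example (a minimal sketch; the script name myjob.sh and the output file name myjob.out
are placeholders), a batch job can be submitted to LSF with bsub or to SLURM with srun -b:

$ bsub -o myjob.out ./myjob.sh
$ srun -b ./myjob.sh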

Interactive batch job

A job submitted to LSF or SLURM that maintains an I/O connection with the
terminal from which the job was submitted. The job is also subject to
resource availability and scheduling policies, so it may pend before
starting. Once the job is running, its output displays on the terminal and
the user can provide input if the job allows it.

By default, SLURM jobs are interactive. Interactive LSF jobs are submitted
with the bsub -I command.

Use Ctrl-C at any time to terminate the job.
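For example (a minimal sketch; the program name myprogram is a placeholder), an interactive
job can be submitted to LSF with bsub -I, while a plain srun invocation submits an interactive
SLURM job:

$ bsub -I ./myprogram
$ srun hostname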

Serial job

A job that requests only one slot and does not specify any of the following
constraints: mem, tmp, mincpus, or nodes.
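For example (a minimal sketch; the program name myprogram is a placeholder), the following
submission requests a single slot with no additional constraints and is therefore a serial job:

$ bsub ./myprogram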
