HP XC System 3.x Software User Manual

Example 5-9 Using the External Scheduler to Submit a Job to Run on Specific Nodes

$ bsub -n4 -ext "SLURM[nodelist=n6,n8]" -I srun hostname
Job <70> is submitted to default queue.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n6
n6
n8
n8

In the previous example, the job output shows that the job was launched from the LSF execution
host lsfhost.localdomain, and it ran on four cores on the specified nodes, n6 and n8.
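If the same node set is used for repeated submissions, the -ext string can be built once in a shell variable. This is a small convenience sketch, not taken from the manual; the variable names are illustrative:

```shell
# Build the external-scheduler option string once and reuse it.
nodes="n6,n8"                   # target nodes (illustrative)
ext="SLURM[nodelist=${nodes}]"
echo "$ext"                     # -> SLURM[nodelist=n6,n8]
# Reuse it for submission on a system with LSF and SLURM installed:
#   bsub -n4 -ext "$ext" -I srun hostname
```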

Example 5-10 shows one way to submit a parallel job to run one task per node.

Example 5-10 Using the External Scheduler to Submit a Job to Run One Task per Node

$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <71> is submitted to default queue.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4

In the previous example, the job output shows that the job was launched from the LSF execution
host lsfhost.localdomain, and it ran on four cores on four different nodes (one task per
node).
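Because one task per node means the node count equals the task count, the two values can be kept in step with a single shell variable. This is a sketch only; the bsub invocation itself is shown in a comment, since it requires a live LSF/SLURM cluster:

```shell
# Keep -n and SLURM[nodes=...] in step for one-task-per-node jobs.
ntasks=4
opts="-n ${ntasks} -ext \"SLURM[nodes=${ntasks}]\" -I"
echo "bsub ${opts} srun hostname"
# On an HP XC system this would be submitted as:
#   bsub -n 4 -ext "SLURM[nodes=4]" -I srun hostname
```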

Example 5-11 shows one way to submit a parallel job to avoid running on a particular node.

Example 5-11 Using the External Scheduler to Submit a Job That Excludes One or More Nodes

$ bsub -n4 -ext "SLURM[nodes=4; exclude=n3]" -I srun hostname
Job <72> is submitted to default queue.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n4
n5

This example runs the job exactly the same as in Example 5-10 “Using the External Scheduler to Submit a Job to Run One Task per Node”, but additionally requests that node n3 not be used to run the job. Note that this command could have been written to exclude additional nodes.
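Excluding additional nodes follows the same pattern. The sketch below assumes the exclude option accepts a comma-separated node list, which is not confirmed on this page:

```shell
# Exclude more than one node (comma-separated list is an assumption).
excluded="n3,n5"
ext="SLURM[nodes=4; exclude=${excluded}]"
echo "$ext"                     # -> SLURM[nodes=4; exclude=n3,n5]
# Submit on a live cluster with:
#   bsub -n4 -ext "$ext" -I srun hostname
```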

Example 5-12 launches the hostname command once on nodes n1 through n10 (n[1-10]):

Example 5-12 Using the External Scheduler to Launch a Command in Parallel on Ten Nodes

$ bsub -n 10 -ext "SLURM[nodelist=n[1-10]]" srun hostname

Example 5-13 launches the hostname command on 10 cores on nodes with the dualcore SLURM feature assigned to them:
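The command for Example 5-13 is cut off at the bottom of this page. Based on the pattern of the earlier examples, a submission selecting nodes by SLURM feature would plausibly look like the sketch below; the constraint option name is an assumption, not taken from this page:

```shell
# Hypothetical reconstruction of the truncated Example 5-13 command;
# printed rather than run, since it requires an LSF/SLURM cluster.
echo 'bsub -n 10 -ext "SLURM[constraint=dualcore]" srun hostname'
```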

5.3 Submitting a Parallel Job