
HP XC System 3.x Software User Manual


The following is the C source code for this program; the file name is hw_hostname.c.

#include <unistd.h>
#include <stdio.h>

int main()
{
char name[100];
gethostname(name, sizeof(name));
printf("%s says Hello!\n", name);
return 0;
}

The following is the command line used to compile this program:

$ cc hw_hostname.c -o hw_hostname

NOTE: The following invocations of the sample hw_hostname program are run on a SLURM
non-root default partition, which is not the default SLURM partition for the HP XC system
software.

When run on the login node, it shows the name of the login node, n16 in this case:

$ ./hw_hostname
n16 says Hello!

When you use the srun command to submit this program, it runs on one of the compute nodes.
In this instance, it runs on node n13:

$ srun ./hw_hostname
n13 says Hello!

Resubmitting the program with the srun command may run it on yet another node, as
shown here:

$ srun ./hw_hostname
n12 says Hello!

The srun command can also be used to replicate the program on several cores. Although this
is not generally useful in itself, it illustrates the point. Here, the same program is run on four
cores across two nodes.

$ srun -n4 ./hw_hostname
n13 says Hello!
n13 says Hello!
n14 says Hello!
n14 says Hello!

The output for this command could equally have come from one core on each of four compute
nodes in the SLURM allocation.

5.3 Submitting a Parallel Job

When submitting a parallel job, you can specify the use of HP-MPI. You can also opt to schedule
the job by using SLURM. Depending on which submission method you choose, read the
appropriate sections, as follows:

“Submitting a Non-MPI Parallel Job” (page 55)

“Submitting a Parallel Job That Uses the HP-MPI Message Passing Interface” (page 56)

“Submitting a Parallel Job Using the SLURM External Scheduler” (page 57)

5.3.1 Submitting a Non-MPI Parallel Job

Use the following format of the LSF bsub command to submit a parallel job that does not make
use of HP-MPI to an LSF-HPC node allocation (compute nodes). An LSF-HPC node allocation
is created by the -n num-procs parameter, which specifies the minimum number of cores the
job requests.
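Assuming the standard LSF-HPC submission pattern described in the sections that follow, the
general form is (bracketed parts are optional):

$ bsub -n num-procs [bsub-options] srun [srun-options] jobname [job-options]

Here bsub reserves the node allocation and srun launches the job's tasks on the allocated
compute nodes.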
