
HP XC System 4.x Software User Manual

11.5 I/O Performance Considerations

Before building and running your parallel application, consider I/O performance issues on the
HP XC cluster.

The I/O control system provides two basic types of standard file system views to the application:
shared and private.

11.5.1 Shared File View

Although a file opened by multiple processes of an application is shared, each core maintains a
private file pointer and file position. This means that if a certain order of input or output from
multiple cores is desired, the application must synchronize its I/O requests or position its file
pointer such that it acts on the desired file location.
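The file-positioning approach can be sketched in C. The record size, file name, and helper name below are illustrative assumptions, not taken from the manual: each process (identified here by a rank) seeks its private file pointer to its own fixed-size slot before writing, so output from multiple cores lands at distinct, predictable offsets without further synchronization.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative fixed record size; real applications choose their own. */
#define RECORD_SIZE 16

/*
 * Write one fixed-size record into the slot belonging to this rank.
 * Because each process positions its private file pointer before
 * writing, records from different cores never overlap.
 */
int write_rank_record(FILE *fp, int rank, const char *text)
{
    char record[RECORD_SIZE];

    memset(record, ' ', sizeof record);   /* pad the slot            */
    record[RECORD_SIZE - 1] = '\n';       /* one record per line     */

    size_t len = strlen(text);
    if (len > RECORD_SIZE - 1)
        len = RECORD_SIZE - 1;            /* truncate to fit slot    */
    memcpy(record, text, len);

    if (fseek(fp, (long)rank * RECORD_SIZE, SEEK_SET) != 0)
        return -1;
    return fwrite(record, 1, RECORD_SIZE, fp) == RECORD_SIZE ? 0 : -1;
}
```

Here rank stands in for whatever process identifier the application uses, such as an MPI rank.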

Output requests to standard output and standard error are line-buffered, which provides
sufficient output ordering in many cases. A similar effect for other files can be achieved by
opening the file in append mode with the fopen library call:

fp = fopen ("myfile", "a+");
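The same effect can be wrapped in a small helper (the function name is an illustrative assumption, not from the manual): open the shared file in append mode and, if desired, switch the stream to line buffering to mirror the behavior of standard output and standard error.

```c
#include <stdio.h>

/*
 * Open a file shared by multiple processes in append mode, so every
 * write is added at the current end of file, and line-buffer the
 * stream so each completed line is flushed as a unit.
 */
FILE *open_shared_append(const char *path)
{
    FILE *fp = fopen(path, "a+");
    if (fp != NULL)
        setvbuf(fp, NULL, _IOLBF, BUFSIZ);  /* must precede any I/O on fp */
    return fp;
}
```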

11.5.2 Private File View

Although the shared file approach improves ease of use for most applications, some applications,
especially those written for shared-nothing clusters, can require the use of file systems private
to each node. To accommodate these applications, the system must be configured with local disk.

For example, assume /tmp and /tmp1 have been configured on each compute node.

Each process can then open a file named /tmp/myscratch or /tmp1/myotherscratch, and
each sees a unique file private to its node. If these file systems do not exist locally on the node,
an error results.

Use these private file systems for temporary storage only, and ensure that the application
deletes its scratch files when it finishes.

C example: fd = open ("/tmp/myscratch", flags)

Fortran example: open (unit=9, file="/tmp1/myotherscratch" )
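A node-private scratch file can be used and cleaned up as sketched below. The helper name is an illustrative assumption, and the scratch file is deleted as soon as the process is done with it, as recommended above.

```c
#include <fcntl.h>
#include <unistd.h>

/*
 * Create a private scratch file on the node-local file system,
 * write intermediate data to it, and delete it when finished.
 */
int use_private_scratch(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0)
        return -1;

    const char data[] = "intermediate results\n";
    ssize_t written = write(fd, data, sizeof data - 1);
    close(fd);

    /* Remove the scratch file so temporary storage is not leaked. */
    unlink(path);

    return written == (ssize_t)(sizeof data - 1) ? 0 : -1;
}
```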

11.6 Communication Between Nodes

On the HP XC system, processes in an MPI application run on compute nodes and use the
system interconnect for communication between the nodes. By default, intranode communication
is done using shared memory between MPI processes. For information about selecting and
overriding the default system interconnect, see the HP-MPI documentation.

11.7 Using MPICH on the HP XC System

MPICH is a freely available, portable implementation of MPI. For additional information on
MPICH, see the following URL:

http://www-unix.mcs.anl.gov/mpi/mpich1/

Verify with your system administrator that MPICH has been installed on your system. The
HP XC System Software Administration Guide provides procedures for setting up MPICH.

MPICH jobs must not run on nodes allocated to other tasks. HP strongly recommends that all
MPICH jobs request node allocation through either SLURM or LSF and that MPICH jobs restrict
themselves to using only those resources in the allocation.

Launch MPICH jobs using a wrapper script, such as the one shown in Figure 11-1. The following
subsections describe how to launch MPICH jobs from a wrapper script with SLURM or LSF,
