HP StorageWorks Scalable File Share User Manual


Verifying, diagnosing, and maintaining the system


The command tests the speed at which each client node can read from and write to a single OST service in the file system, by creating a single-stripe file on a single OST service and using the IOR utility to read and write data to a file for each client node in parallel.
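For reference, the single-OST, single-stripe setup that the command automates can be approximated by hand. The following is a minimal sketch, not the script's actual internals: the stripe options shown assume a current lfs setstripe syntax, and the IOR block and transfer sizes are arbitrary illustrative values.

# lfs setstripe -c 1 -i 7 /mnt/lustre/ost7_testfile
# IOR -w -r -o /mnt/lustre/ost7_testfile -b 256m -t 1m

The first command pins a one-stripe file to the OST with index 7; the second times writes (-w) and reads (-r) against that file.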

If a dual Gigabit Ethernet interconnect is used on the servers in the HP SFS system and a single Gigabit Ethernet interconnect is used on client nodes, the order in which the client nodes are specified to the ost_perf_check.bash command is important, as it affects the performance that will be achieved in the test. You must ensure that the client nodes are ordered in such a way that each Object Storage Server uses both of the Gigabit Ethernet interconnect links, as follows:

Determine the order of the OST services by entering the show filesystem filesystem_name command, as shown in the following example:

sfs> show filesystem data
.
.
.
OST Information:

Name  LUN Array Controller Size(GB) Used Service State Running on
----- --- ----- ---------- -------- ---- ------------- ----------
ost7    7     3 scsi-1/2       2048   5% running       south3
ost8   11     5 scsi-2/2       2048   5% running       south3
ost9    5     2 scsi-1/1       2048   5% running       south4
ost10   9     4 scsi-2/1       2048   5% running       south4
.
.
.

The output in this example shows that the file system has four OST devices, labelled ost7, ost8, ost9, and ost10. The OST service numbering reflects the order in which the client nodes will write to the Object Storage Servers. In this test, a single-stripe file is created on each OST service in the same order, so that the first specified client node will write to ost7, the second client node will write to ost8, and so on.

When you enter the ost_perf_check.bash command, you must assign a client node from each subnet to each OST service.

In this example, the write order is ost7, ost8, ost9, ost10; the corresponding servers for these services (as seen in the Running on field) are south3, south3, south4, and south4. This means that the client ordering needs to be alternated by subnet. Assuming that client nodes delta1 and delta2 are on one subnet, and client nodes delta3 and delta4 are on a second subnet, the command line would be as follows:

# /usr/opt/hpls/diags/bin/ost_perf_check.bash --parallel --mount-point /mnt/lustre \
  --remote-shell ssh --clients "delta1 delta3 delta2 delta4"
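For longer client lists, the alternating order can be generated rather than typed by hand. The following is a minimal bash sketch, assuming the nodes of each subnet are held in two arrays (the variable names are illustrative, not part of the tool):

# Interleave the two per-subnet lists into the order the test expects.
SUBNET_A=(delta1 delta2)
SUBNET_B=(delta3 delta4)
CLIENTS=""
for i in "${!SUBNET_A[@]}"; do
    CLIENTS="$CLIENTS ${SUBNET_A[$i]} ${SUBNET_B[$i]}"
done
# CLIENTS now expands to "delta1 delta3 delta2 delta4".
/usr/opt/hpls/diags/bin/ost_perf_check.bash --parallel --mount-point /mnt/lustre \
    --remote-shell ssh --clients "${CLIENTS# }"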

When the test finishes, the speeds at which each client node could read from and write to its particular OST service are displayed, as shown in this example:

[root@client1 root]# /usr/opt/hpls/diags/bin/ost_perf_check.bash --parallel \
  --mount-point /mnt/lustre --remote-shell ssh --clients "delta1 delta2"

== Testing write on all OSTs ==
Maximum wall clock deviation: 0.03 sec
Max Write: 105.58 MiB/sec (110.71 MB/sec)
== Testing read on all OSTs ==
Maximum wall clock deviation: 0.02 sec
Max Read: 189.83 MiB/sec (199.05 MB/sec)
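The tool reports throughput in both binary and decimal units; the relationship is MB/sec = MiB/sec × 1,048,576 ÷ 1,000,000. For example, the write figure above can be checked with bc (the command line below is illustrative):

# printf "%.2f\n" $(echo "105.58 * 1048576 / 1000000" | bc -l)
110.71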

Compare the results of these tests with the expected performance details provided in Appendix B.

If the test fails with output similar to the following, ensure that the client nodes are not running a firewall, then retry the test:

[root@client1 root]# /usr/opt/hpls/diags/bin/ost_perf_check.bash --parallel \
  --mount-point /mnt/lustre --remote-shell ssh --clients "delta1 delta2"

== Testing write on all OSTs ==
Killed by signal 2.
== Testing read on all OSTs ==
Killed by signal 3.
[root@client1 root]#
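One way to rule out a firewall is to query the iptables service on each client node over ssh before retrying. The commands below are a sketch and assume RHEL-style init scripts on the client nodes:

# for node in delta1 delta2; do ssh $node /sbin/service iptables status; done
# for node in delta1 delta2; do ssh $node /sbin/service iptables stop; done

The first loop reports whether packet filtering is active on each node; the second stops it for the duration of the test.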