B.1 I/O performance

This section provides expected I/O performance figures for a single server in the HP SFS system. These figures are based on tests carried out by HP.

Note the following points regarding these figures:

•  The raw_lun_check.bash script was used to obtain the raw performance figures. This script tests the speed of reading and writing 4GB of data. The devices are unmounted and remounted between the write and read tests. (A simplified sketch of this type of measurement is shown after this list.)

•  Figures similar to those shown for single-client Lustre performance can be obtained using the ost_perf_check.bash script. This script reads and writes 4GB of data for each OST device tested. The tests on which these figures are based were run on client nodes with less than 4GB of memory. (A sketch of a comparable single-OST measurement also follows this list.)

•  Figures similar to those shown for multi-client Lustre performance can be obtained using the IOR sequential read and write benchmark on eight client nodes. Each client process reads or writes 2GB of data to individual files in parallel. (An example IOR invocation is shown after this list as well.)

•  All results are based on tests over a Quadrics interconnect; however, given the raw speeds of the Myrinet 2XP and Voltaire InfiniBand interconnects, there should be no significant difference when these interconnects are used.

•  All results are based on tests using new, empty file systems.
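The raw_lun_check.bash script is supplied with HP SFS; the fragment below is only a minimal sketch of the kind of raw throughput measurement it performs, not the actual script. The device path is a placeholder, and direct I/O is used here in place of the unmount/remount step so that the read figures are not served from cached data.

    #!/bin/bash
    # Minimal sketch of a raw LUN throughput check; this is NOT the
    # raw_lun_check.bash script supplied with HP SFS. The device path is a
    # placeholder, and writing to it destroys any data on that LUN.
    DEV=/dev/sdc        # hypothetical raw LUN device
    COUNT=4096          # 4096 x 1MB blocks = 4GB, as in the documented tests

    # Sequential write; dd reports bytes copied, elapsed time, and throughput
    # when it completes.
    dd if=/dev/zero of=${DEV} bs=1M count=${COUNT} oflag=direct

    # Sequential read. Direct I/O stands in for the unmount/remount step used
    # by the documented script, keeping cached data out of the result.
    dd if=${DEV} of=/dev/null bs=1M count=${COUNT} iflag=direct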
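Similarly, ost_perf_check.bash is supplied with HP SFS; the sketch below only illustrates the general approach of driving a 4GB sequential write and read from one client against a single OST. The mount point, OST index, file name, and the use of lfs setstripe to pin the file to one OST are assumptions made for illustration, not the script's actual contents.

    #!/bin/bash
    # Sketch of a single-client, single-OST throughput measurement; this is
    # NOT the ost_perf_check.bash script. Mount point, OST index, and file
    # name are placeholders.
    MNT=/mnt/lustre                 # hypothetical Lustre client mount point
    FILE=${MNT}/ost0_testfile
    COUNT=4096                      # 4096 x 1MB blocks = 4GB

    # Pin the test file to a single OST (stripe count 1, starting at OST 0).
    lfs setstripe -c 1 -i 0 ${FILE}

    # Sequential write.
    dd if=/dev/zero of=${FILE} bs=1M count=${COUNT}

    # Remount the client file system so the read is not served from the
    # client's cache (assumes an /etc/fstab entry for the mount point).
    umount ${MNT} && mount ${MNT}

    # Sequential read, then clean up.
    dd if=${FILE} of=/dev/null bs=1M count=${COUNT}
    rm -f ${FILE}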
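The exact IOR command line behind the multi-client figures is not reproduced here; the invocation below is one plausible way to run the test as described (eight client processes, one per node, 2GB per process, one file per process). The MPI launcher, machine file, and mount point are assumptions.

    # One IOR process on each of eight client nodes; each process writes and
    # then reads 2GB to its own file under the Lustre mount point.
    # The machine file and mount point are placeholders.
    mpirun -np 8 -machinefile ./client_nodes \
        IOR -a POSIX -w -r -F -b 2g -t 1m -o /mnt/lustre/ior_testfile

IOR reports aggregate write and read bandwidth at the end of the run, which is the kind of figure shown in the multi-client rows of Table B-1.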

EVA4000 storage

Table B-1 provides expected performance figures for one Object Storage Server in systems using EVA4000 storage. These figures are based on the following configuration:

•  Dual-connected QLogic QLA2342 host bus adapters (HP Part Number FCA2214 DC HBA adapters).

•  EVA4000 controller firmware version XCS v5.100.

•  Two disk groups, each containing fourteen 146GB 10K RPM disks.

•  One LUN per disk group.

In Table B-1, the first set of figures (1 LUN) for each entry provides data for a scenario where the first disk group is accessed; the second set of figures (2 LUNs) provides data for a scenario where both disk groups are accessed simultaneously.

Table B-1  Performance figures — EVA4000 storage

    Test                                     Number of LUNs   Write (MB/sec)   Read (MB/sec)
    Raw Performance                          1                170              185
                                             2                280              350
    Single-client Lustre Performance         1                180              185
    (one client, ost_perf_check command)     2                280              290
    Multi-client Lustre Performance          1                180              185
    (eight clients, IOR utility)             2                280              350