
NOTE: To ensure that an accurate test is performed where a dual Gigabit Ethernet interconnect is used, order the client nodes so that each is matched with a server IP address that it can communicate with. Where there is only a single Gigabit Ethernet link on the client nodes, this means that each client must be matched to a server address on the same subnet as itself. Where there is a dual Gigabit Ethernet link on the client nodes, it is possible to specify each client twice, once for each link, matching each entry to a server on a different subnet.

To ensure that an accurate test is performed where a bonded Gigabit Ethernet interconnect is used, the number of clients must be an even multiple of the number of servers. (If it is not, there is an imbalance in the system, and this skews the results.) The likelihood of imbalance in the system decreases as the ratio of clients to servers increases. As a general rule, it is best
to test with four or more clients per server.
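
For example, in a dual-link configuration where clients blue0 and blue1 each have one link on the 10.128.0.x subnet and one on the 10.129.0.x subnet, the invocation might look like the following. The addresses and the positional client-to-server pairing shown here are illustrative assumptions, based on the single-link example later in this section:

/usr/opt/hpls/diags/bin/net_test.bash --incremental --net tcp --server
"10.128.0.1 10.129.0.1 10.128.0.2 10.129.0.2" --client "blue0 blue0 blue1 blue1"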

/usr/opt/hpls/diags/bin/net_test.bash --incremental --net tcp --server
"server_address1 [server_address2 ...]" --client "client_name1 [client_name2 ...]"

The command uses the netperf tool to determine the speed at which each link is running, incrementing the number of links tested in each phase of the test. The speed of each link and the aggregate speed for each incremental test are displayed in the output, as shown in the following example:

# ./net_test.bash --incremental --net tcp --server "10.128.0.1 10.128.0.2" --client
"blue0 blue1"

== Testing 1 servers ==
10.128.0.1 Throughput MBytes/sec 96.71
Total throughput: 96.71
== Testing 2 servers ==
10.128.0.1 Throughput MBytes/sec 91.34
10.128.0.2 Throughput MBytes/sec 94.33
Total throughput: 185.67
== Test Finished ==
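
The flow of each test phase resembles the following sketch, which uses netperf's standard TCP_STREAM test to measure one link at a time. This is an illustration of the approach only, not the contents of the net_test.bash script, and it assumes that netserver is already running on each server address:

#!/bin/bash
# Illustrative sketch only: measure bulk TCP throughput to each
# server address in turn, in the style of one incremental test phase.
SERVERS="10.128.0.1 10.128.0.2"        # example addresses only
for addr in $SERVERS; do
    # TCP_STREAM is netperf's bulk-throughput test; -f M reports the
    # result in MBytes/sec and -l 10 runs each test for 10 seconds.
    netperf -H "$addr" -t TCP_STREAM -l 10 -f M
done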

Compare the results of these tests with the expected performance details provided in Appendix B.
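
If you want to check the aggregate figure mechanically, the final "Total throughput" line can be extracted from the output and compared against the expected value. The expected_mb value below is a placeholder; substitute the figure for your configuration from Appendix B:

expected_mb=180    # placeholder; use the value from Appendix B
actual_mb=$(./net_test.bash --incremental --net tcp \
    --server "10.128.0.1 10.128.0.2" --client "blue0 blue1" |
    awk '/^Total throughput:/ {last=$3} END {print last}')
# Flag a shortfall; awk handles the floating-point comparison.
awk -v a="$actual_mb" -v e="$expected_mb" 'BEGIN {exit !(a < e)}' \
    && echo "Throughput $actual_mb MBytes/sec is below expected $expected_mb"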

6.1.8.2 Examining the Myrinet interconnect

This section is organized as follows:

• Examining the Myrinet adapter and interconnect link (Section 6.1.8.2.1)
• Testing Myrinet interconnect performance using the net_test.bash command (Section 6.1.8.2.2)
• Testing Myrinet interconnect performance using the gm_allsize command (Section 6.1.8.2.3)

6.1.8.2.1 Examining the Myrinet adapter and interconnect link

This section describes how to identify and isolate problems on the Myrinet interconnect. The procedures described in this section apply only to the portion of the interconnect that connects the servers in the HP SFS system to the switch infrastructure of the Myrinet interconnect. Specifically, diagnosing problems with links within or between switches is not covered in this guide. It is assumed that switch monitoring and use of the Mute graphical diagnostic tool (from Myricom, Inc.) are performed by and on the client system to which the HP SFS system is connected.

Read this section in conjunction with the diagnostics manual of the client system (if one is provided). For example, for an HP XC system, the diagnostics are described in the HP XC System Software Administration Guide. In addition, you can read the Myrinet Installation and Troubleshooting Guide provided by Myricom, Inc.

To verify the presence and correct operation of the Myrinet adapter and interconnect link on a server, perform the following procedure:

1. Log in to the server on the HP SFS system, as shown in the following example:

   # ssh south3
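
Once you are logged in, a quick way to confirm that the operating system can see the Myrinet adapter is to check the PCI bus, as in the following example. This check is a general suggestion rather than a step taken from this procedure, and the exact device string reported varies with the adapter model:

   # lspci | grep -i myri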