
1.5 Overview of Cluster Setup, Planned Failovers, and Firmware Updates

Chapter 2 explains how to set up HA-DAS clustering on a Syncro CS 9361-8i configuration or on a Syncro CS 9380-8e configuration after you configure the hardware and install the operating system.

Chapter 3 explains how to perform system administration tasks, such as planned failovers and updates of the Syncro CS 9361-8i and Syncro CS 9380-8e controller firmware.

Chapter 4 has information about troubleshooting a Syncro CS system.

Refer to the Syncro CS 9361-8i and Syncro CS 9380-8e Controllers User Guide on the Syncro CS Resource CD for
instructions on how to install the Syncro CS controllers and connect them by cable to the CiB enclosure.

1.6 Performance Considerations

SAS technology offers throughput-intensive data transfers and low latency. Throughput is crucial during failover periods, when the system must process reconfiguration activity quickly and efficiently. SAS offers a throughput rate of 12 Gb/s over a single lane. SAS controllers and enclosures typically aggregate four lanes into an x4 wide link, giving an available bandwidth of 48 Gb/s across a single connector, which makes SAS ideal for HA environments.
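As a quick sanity check on those figures, the aggregation is simple arithmetic: four 12 Gb/s lanes yield 48 Gb/s per x4 connector. The short Python sketch below is illustrative only; the payload estimate assumes SAS-3's 8b/10b encoding.

    # Back-of-the-envelope check of the x4 wide-link figure quoted above.
    lanes = 4
    per_lane_gbps = 12.0                    # SAS-3 line rate per lane
    wide_link_gbps = lanes * per_lane_gbps  # 4 x 12 = 48 Gb/s per connector

    # SAS-3 uses 8b/10b encoding, so usable payload is ~80% of line rate.
    payload_mbps_per_lane = per_lane_gbps * 0.8 / 8 * 1000   # ~1200 MB/s

    print(f"x{lanes} wide link: {wide_link_gbps:.0f} Gb/s line rate")
    print(f"~{payload_mbps_per_lane:.0f} MB/s payload per lane")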

Syncro CS controllers work together across a shared SAS fabric, using a set of protocols to achieve storage sharing, cache coherency, heartbeat monitoring, and redundancy. At any point in time, a particular VD is accessed, or owned, by a single controller; this owned VD is termed a local VD. The second controller is aware of the VD on the first controller, but it has only indirect access to it; the VD is a remote VD for the second controller. In a configuration with multiple VDs, the workload is typically balanced across the controllers to provide a higher degree of efficiency.

When a controller requires access to a remote VD, the I/Os are shipped to the remote controller, which processes them locally. I/O requests handled by local VDs complete much faster than those that must be shipped to remote VDs.
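The ownership and shipping behavior can be pictured with a small sketch. The Python below is a toy model, not the Syncro CS firmware or any Avago API; the controller names and the handle_io routine are invented. It shows only the routing decision: a controller serves its local VDs directly and ships I/O for remote VDs to the owning peer.

    # Illustrative model of the VD-ownership routing decision described
    # above. All names here are hypothetical, not Avago firmware APIs.

    class Controller:
        def __init__(self, name, owned_vds):
            self.name = name
            self.owned_vds = set(owned_vds)  # "local" VDs for this node
            self.peer = None                 # the other Syncro CS controller

        def handle_io(self, vd, request):
            if vd in self.owned_vds:
                return self.process_locally(vd, request)   # fast path
            # Remote VD: ship the I/O across the shared SAS fabric to the
            # owning controller, which processes it and returns the result.
            return self.peer.process_locally(vd, request)  # slower path

        def process_locally(self, vd, request):
            return f"{self.name} served {request} on VD{vd}"

    # Two controllers sharing four VDs, ownership split for balance.
    a, b = Controller("ctrl-A", [0, 1]), Controller("ctrl-B", [2, 3])
    a.peer, b.peer = b, a
    print(a.handle_io(0, "read"))   # local: served directly by ctrl-A
    print(a.handle_io(2, "read"))   # remote: shipped to ctrl-B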

The preferred configuration is for the controller to own the VD that hosts the clustered resource (the MegaRAID Storage Manager™ utility shows which controller owns this VD). If the controller does not own this VD, it must issue a request to the peer controller to ship the data to it, which affects performance. This situation can occur if resource ownership is configured incorrectly or if the system is in a failover situation.

NOTE
Performance tip: You can reduce the impact of I/O shipping by locating the VD or drive groups with the server node that is primarily driving the I/O load. Avoid drive group configurations with multiple VDs whose I/O load is split between the server nodes.

MSM has no visibility into remote VDs, so all VD management operations must be performed locally. A controller that has no direct access to a VD must use I/O shipping to access the data if it receives a client data request. Accessing a remote VD affects performance because of the I/O shipping overhead.

Performance tip: Use the MSM utility to verify correct resource ownership and load balancing. Load balancing is a method of spreading work across two or more computers, network links, CPUs, drives, or other resources, and it is used to maximize resource use, throughput, or response time. Load balancing is the key to ensuring that client requests are handled in a timely, efficient manner.
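As a sketch of what balanced ownership looks like, the snippet below greedily assigns VDs to two controller nodes by expected I/O load, so that each node mostly owns the VDs it is driving. The node names and load figures are invented for illustration; in practice you would use the MSM utility to view and adjust ownership.

    # Hypothetical sketch: split VDs across two controller nodes so that
    # the expected I/O load (invented numbers, arbitrary units) stays even.
    vd_load = {"VD0": 900, "VD1": 300, "VD2": 650, "VD3": 250}

    nodes = {"node-A": [], "node-B": []}
    totals = {"node-A": 0, "node-B": 0}

    # Greedy: place the busiest VD on the currently lighter node.
    for vd, load in sorted(vd_load.items(), key=lambda kv: -kv[1]):
        target = min(totals, key=totals.get)
        nodes[target].append(vd)
        totals[target] += load

    print(nodes)   # {'node-A': ['VD0', 'VD3'], 'node-B': ['VD2', 'VD1']}
    print(totals)  # roughly even load per node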
