21 Using Peer Persistence for Non-disruptive Failover in 1-to-1 Remote Copy in Geocluster Environments
HP 3PAR Peer Persistence software allows you to federate your HP 3PAR StoreServ Storage systems
across geographically separated data centers. This inter-site federation of storage lets you use
your data centers more effectively by moving applications from one site to another according to
your business needs, without any application downtime.
NOTE:
Peer Persistence functionality requires the HP 3PAR Peer Persistence Software license.
Contact your local HP representative for details.
Non-disruptive failover is a high-availability configuration between two sites in which the hosts
are set up in a geocluster configuration with access to storage arrays at both sites. Storage volumes
created on the primary storage array are replicated to the secondary array using synchronous
remote copy, ensuring that the volumes are in sync at all times. In a failover scenario, host traffic
to the failed (primary) storage array is redirected to the secondary storage array without major
impact to the hosts.
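As an illustration only, the following HP 3PAR CLI sketch shows how such a synchronous
remote-copy relationship might be set up. The group name pp_group1, the target system name
SystemB, and the volume name vv_app1 are hypothetical, and your configuration may require
additional options:

    # Create a remote-copy group in synchronous mode to the secondary
    # array (pp_group1 and SystemB are placeholder names).
    creatercopygroup pp_group1 SystemB:sync
    # Add a virtual volume to the group, paired with its secondary copy.
    admitrcopyvv vv_app1 pp_group1 SystemB:vv_app1
    # Start synchronous replication for the group.
    startrcopygroup pp_group1
    # Verify that the group is started and the volumes are synced.
    showrcopy groups pp_group1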
HP 3PAR Peer Persistence software provides support for non-disruptive failover with the following
requirements:
•   The sites must be set up in a 1-to-1 remote-copy configuration in synchronous mode.
•   This non-disruptive failover configuration is supported only with VMware ESXi 5.0 and ESXi
    5.1 hosts.
•   All volumes exported on the primary and secondary arrays must share the same volume WWN.
•   All associated hosts must be connected to both the primary and secondary arrays.
•   The path_management policy must be enabled for the volume groups (see the CLI sketch after
    this list).
•   The ESX hosts that these volumes are exported to must be configured with host persona 11
    (VMware), which supports the ALUA host capability. ALUA enables the reversal of replication
    direction and the redirection of host traffic from one array to another.
In this peer-persistence configuration, the software running on the storage arrays manages
the ALUA states of the exported volumes by blocking I/O as appropriate and notifying the
ESX host when ALUA states change. The ESX host then re-evaluates the ALUA states and
directs I/O to the currently active LUNs.
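The following HP 3PAR CLI sketch illustrates the path_management and host persona
requirements above. The names pp_group1, esx-host1, and vv_app1, the WWN, and the LUN
number are all hypothetical:

    # Enable the path_management policy on the remote-copy group
    # (pp_group1 is a placeholder name).
    setrcopygroup pol path_management pp_group1
    # Define the ESX host with persona 11 (VMware), which enables ALUA;
    # the host name and WWN are placeholders.
    createhost -persona 11 esx-host1 10000000C9AABB01
    # Export the volume to the host; LUN 0 is an example value. A
    # matching export must also exist on the secondary array, with the
    # same volume WWN.
    createvlun vv_app1 0 esx-host1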
When storage volumes in a peer-persistence configuration are exported, unexported, or
re-exported, the ESX host must rescan the storage arrays to ensure it has an up-to-date record
of the LUNs and their ALUA states. If you fail to perform a rescan, the ESX host will not identify
recent changes in export status and will make I/O path management decisions based upon
stale export and ALUA status.
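As an example, a rescan can be triggered from the ESXi 5.x command line; the device identifier
shown is a placeholder:

    # Rescan all storage adapters so the host picks up changes in
    # export status and ALUA states.
    esxcli storage core adapter rescan --all
    # Inspect the ALUA path states (active/standby) for one device;
    # the naa identifier is a placeholder.
    esxcli storage nmp path list -d naa.60002ac0000000000000000000000001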
For information about performing a rescan on an ESX/ESXi host, see the following VMware
Knowledge Base article: