2. Start the file system manually and test for proper operation before configuring Heartbeat to start the file system. Mount the MGS mount-point on the MGS node (the /etc/fstab entries these mount commands rely on are sketched after this procedure):
# mount /mnt/mgs
3. Mount the MDT on the MDS node:
# mount /mnt/mds
4. Mount the OSTs served from each OSS node. For example:
# mount /mnt/ost0
# mount /mnt/ost1
# mount /mnt/ost2
# mount /mnt/ost3
5. Mount the file system on a client node according to the client mounting instructions:
# mount /testfs
6. Verify proper file system behavior as described in “Testing Your Configuration” (page 50).
7. After the behavior is verified, unmount the file system on the client:
# umount /testfs
8. Unmount the file system components from each of the servers, starting with the OSS nodes:
# umount /mnt/ost0
# umount /mnt/ost1
# umount /mnt/ost2
# umount /mnt/ost3
9. Unmount the MDT on the MDS node:
# umount /mnt/mds
10. Unmount the MGS on the MGS node:
# umount /mnt/mgs
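The mount commands in this procedure rely on /etc/fstab entries for each Lustre target. The lines below are a minimal sketch of such entries; the device paths (/dev/mapper/mgs and so on) are hypothetical placeholders for your actual storage devices. The noauto option keeps the targets from mounting at boot, so that Heartbeat, configured in the next section, controls when each Lustre server starts.

On the MGS and MDS nodes:

/dev/mapper/mgs    /mnt/mgs    lustre    noauto    0 0
/dev/mapper/mds    /mnt/mds    lustre    noauto    0 0

On an OSS node:

/dev/mapper/ost0   /mnt/ost0   lustre    noauto    0 0
/dev/mapper/ost1   /mnt/ost1   lustre    noauto    0 0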
5.2 Configuring Heartbeat
HP SFS G3.1-0 uses Heartbeat V2.1.3 for failover. Heartbeat is open source software, and the Heartbeat RPMs are included in the HP SFS G3.1-0 kit. More information and documentation are available from the Heartbeat project website.
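Because the Heartbeat RPMs ship with the kit, a quick way to confirm that they are installed at the expected version is to query the RPM database:

# rpm -qa | grep -i heartbeat

This should list the installed Heartbeat packages at version 2.1.3.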
IMPORTANT: This section assumes you are familiar with the concepts in the Failover chapter of the Lustre 1.6 Operations Manual.
HP SFS G3.1-0 uses Heartbeat to group nodes into failover pairs, or clusters. A Heartbeat failover pair is responsible for a set of resources. Heartbeat resources are Lustre servers: the MDS, the MGS, and the OSTs. Lustre servers are implemented as locally mounted file systems, for example, /mnt/ost13. Mounting the file system starts the Lustre server. Each node in a failover pair is responsible for half the servers and the corresponding mount-points. If one node fails, the other node in the failover pair mounts the file systems that belong to the failed node, causing the corresponding Lustre servers to run on that node. When a failed node returns, the mount-points can be transferred back to that node either automatically or manually, depending on how Heartbeat is configured. Manual failback can prevent system oscillation if, for example, a bad node reboots continuously.
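In Heartbeat V2, each such locally mounted file system is typically described in the CIB (cluster information base) as an OCF Filesystem resource. The fragment below is a minimal sketch for a single OST, not the exact resource definition shipped with HP SFS; the resource IDs and the device path are hypothetical:

<primitive id="ost0" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="ost0_inst_attrs">
    <attributes>
      <!-- device path is a hypothetical placeholder -->
      <nvpair id="ost0_device" name="device" value="/dev/mapper/ost0"/>
      <nvpair id="ost0_directory" name="directory" value="/mnt/ost0"/>
      <nvpair id="ost0_fstype" name="fstype" value="lustre"/>
    </attributes>
  </instance_attributes>
</primitive>

Starting this resource mounts /mnt/ost0, which starts the corresponding Lustre server; stopping it unmounts the file system. Manual failback corresponds to configuring resource stickiness in the CIB so that resources stay where they are running until the administrator moves them.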
Heartbeat nodes send messages over the network interfaces to exchange status information and
determine whether the other member of the failover pair is alive. The HP SFS G3.1-0