4 Installing and Configuring HP SFS Software on Client Nodes
This chapter provides information about installing and configuring HP SFS G3.0-0 Software on
client nodes running RHEL5U2, SLES10 SP2, and HP XC V4.0.
4.1 Installation Requirements
HP SFS G3.0-0 Software supports file system clients running RHEL5U2 and SLES10 SP2, as well as HP XC V4.0 cluster clients. The HP SFS G3.0-0 Software tarball contains the latest supported Lustre client RPMs for these systems. Use the RPMs that match your system type.
The installation assumes that the client systems have already been installed, are running a supported version of Linux and the OFED InfiniBand software, and have functioning InfiniBand ib0 interfaces connected to the same InfiniBand fabric as the HP SFS G3.0-0 file system server cluster.
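For example, the following commands provide a quick check of the client InfiniBand interface before you begin. This is only a sketch of a prerequisite check, assuming the standard OFED diagnostic utilities are installed; the ib0 address must be on the same network as the file system servers, and the port state reported by ibstat should be Active:
# ifconfig ib0
# ibstat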
4.2 Installation Instructions
The following installation instructions are for a RHEL5U2 system. The procedure for the other systems is similar; use the Lustre client RPMs for your system type from the /opt/hp/sfs/lustre/client directory of the HP SFS G3.0-0 Software tarball.
The Lustre client RPMs provided with HP SFS G3.0-0 are for use with RHEL5U2 kernel version 2.6.18-92.1.10.el5. If your client is not running this kernel, you must either update the client to this kernel or rebuild the Lustre RPMs to match your kernel, using the instructions in "RHEL5U2 Custom Client Build Procedure" (page 28). You can determine which kernel you are running by using the uname -r command.
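For example, on a client that is already running the supported kernel, the output is similar to the following:
# uname -r
2.6.18-92.1.10.el5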
1. Install the required Lustre RPMs for kernel version 2.6.18-92.1.10.el5. Enter the following command on one line:
# rpm -Uvh lustre-client-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm \
lustre-client-modules-1.6.6-2.6.18_92.1.10.el5_lustre.1.6.6smp.x86_64.rpm
For custom-built client RPMs, the RPM names are slightly different. In this case, enter the
following command on one line:
# rpm -Uvh lustre-1.6.6-2.6.18_53.el5_200808041123.x86_64.rpm \
lustre-modules-1.6.6-2.6.18_53.el5_200808041123.x86_64.rpm \
lustre-tests-1.6.6-2.6.18_53.el5_200808041123.x86_64.rpm
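After the RPMs are installed, you can optionally verify the installation by listing the Lustre packages. This check is not part of the documented procedure, but it confirms that the expected client and module packages are present:
# rpm -qa | grep -i lustre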
2. Run the depmod command to update the module dependency information so that the new Lustre modules can be found and loaded.
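For example (the -a option processes all modules for the currently running kernel):
# depmod -a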
3. Add the following line to /etc/modprobe.conf:
options lnet networks=o2ib0
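Optionally, you can bring up LNET and confirm that it is using the InfiniBand interface. This is a sanity check rather than a required step; the lctl list_nids output should show the client's ib0 address followed by @o2ib:
# modprobe lnet
# lctl network up
# lctl list_nids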
4. Create the mount-point to use for the file system. The following example uses a Lustre file system called testfs, as defined in "Creating a Lustre File System" (page 31), and a client mount-point called /testfs. For example:
# mkdir /testfs
NOTE:
The file system cannot be mounted by the clients until the file system has been created and started on the servers. For more information, see "Creating a Lustre File System" (page 31).
5. To mount the Lustre file system automatically after a reboot, add the following line to /etc/fstab:
172.31.80.1@o2ib0:/testfs /testfs lustre _netdev,rw,flock 0 0
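Once the file system has been created and started on the servers, you can mount it on the client and confirm access. The following is a minimal check, assuming the /etc/fstab entry above and the testfs file system used in the examples in this chapter:
# mount /testfs
# df -h /testfs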