7 Known Issues and Workarounds
The following sections describe known issues and their workarounds.
7.1 Server Reboot
After the server reboots, it checks the file system and reboots again, logging the following message:
/boot: check forced
This message can be safely ignored.
7.2 Errors from install2
You might receive the following errors when running install2. They can be ignored.
error: package cpq_cciss is not installed
error: package bnx2 is not installed
error: package nx_nic is not installed
error: package nx_lsa is not installed
error: package hponcfg is not installed
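If you want to confirm that these packages are merely absent, rather than partially installed, you can query the RPM database directly. The following command is an illustration only; the package list matches the errors shown above:
rpm -q cpq_cciss bnx2 nx_nic nx_lsa hponcfg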
7.3 Application File Locking
Applications using fcntl for file locking will fail unless HP SFS is mounted on the clients with the flock option. See “Installation Instructions” (page 27) for an example of how to use the flock option.
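For illustration only, a client mount using the flock option might look like the following; the file system name testfs and the mount point /mnt/sfs are placeholders, and the MGS address must match your configuration:
mount -t lustre -o flock 172.31.207.1@o2ib:/testfs /mnt/sfs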
7.4 MDS Is Unresponsive
When processes on multiple client nodes simultaneously change entries in the same directory, the MDS can appear to be hung. Watchdog timeout messages appear in /var/log/messages on the MDS. The workaround is to reboot the MDS node.
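To confirm this condition before rebooting, you can search the MDS log for the watchdog messages (a minimal check, assuming the default syslog location):
grep -i watchdog /var/log/messages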
7.5 Changing group_upcall Value to Disable Group Validation
By default, the SFS G3.0-0 group_upcall value on the MDS server is set to /usr/sbin/l_getgroups. This causes all user and group IDs to be validated on the SFS server. Therefore, the server must have full knowledge of all user accounts via /etc/passwd and /etc/group or some other equivalent mechanism. Users who are unknown to the server will not have access to the Lustre file systems.
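A quick way to check whether the server can resolve a given account is getent, which consults /etc/passwd, /etc/group, and any equivalent mechanism such as NIS or LDAP. The user and group names below are placeholders:
getent passwd alice
getent group staff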
This function can be disabled by setting group_upcall to NONE using the following procedure:
1. All clients must unmount the SFS file system (see the example command after this procedure).
2. All SFS servers must unmount the SFS file system.
IMPORTANT: SFS must not be mounted on any client or server. Otherwise, the file system configuration data will become corrupted.
3. Perform the following two steps on the MDS node only:
a. Run a dry run to capture the current parameter settings:
tunefs.lustre --dryrun --erase-params --param="mdt.group_upcall=NONE" --writeconf /dev/mapper/mpath?
Capture all of the param settings from the dry-run output. They must be supplied again in the final command because the --erase-params option removes them.
NOTE: Use the appropriate device in place of /dev/mapper/mpath?.
b. Insert the required params into the final command:
tunefs.lustre --erase-params --param="mgsnode=172.31.207.1@o2ib failover.node=172.31.207.1@o2ib mdt.group_upcall=NONE" --writeconf /dev/mapper/mpath?
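For steps 1 and 2, the unmount is an ordinary umount of the Lustre mount point on each node. For example, assuming the placeholder mount point /mnt/sfs:
umount /mnt/sfs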