HP StorageWorks Scalable File Share User Manual
13 UP osc hpcsfsc-OST0004-osc hpcsfsc-mdtlov_UUID 5
14 UP osc hpcsfsc-OST0006-osc hpcsfsc-mdtlov_UUID 5
15 UP osc hpcsfsc-OST0007-osc hpcsfsc-mdtlov_UUID 5
16 UP osc hpcsfsc-OST0001-osc hpcsfsc-mdtlov_UUID 5
17 UP osc hpcsfsc-OST0002-osc hpcsfsc-mdtlov_UUID 5
18 UP osc hpcsfsc-OST0000-osc hpcsfsc-mdtlov_UUID 5
19 UP osc hpcsfsc-OST0003-osc hpcsfsc-mdtlov_UUID 5
Check the recovery status on an MDS or OSS server as follows:
# cat /proc/fs/lustre/*/*/recovery_status
INACTIVE
This displays INACTIVE if no recovery is in progress. If a recovery is in progress or has completed,
information similar to the following is displayed:
status: RECOVERING
recovery_start: 1226084743
time_remaining: 74
connected_clients: 1/2
completed_clients: 1/2
replayed_requests: 0/??
queued_requests: 0 next_transno: 442
status: COMPLETE
recovery_start: 1226084768
recovery_duration: 300
completed_clients: 1/2
replayed_requests: 0
last_transno: 0
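The fields above can also be checked programmatically. The following is a minimal sketch, assuming the /proc layout shown above; the recovery_summary function name is illustrative and is not part of the Lustre tools:

```shell
#!/bin/sh
# Sketch: summarize one recovery_status file.
# Prints the status and, while recovering, the remaining time.
recovery_summary() {
    file=$1
    status=$(awk '/^status:/ {print $2}' "$file")
    if [ "$status" = "RECOVERING" ]; then
        left=$(awk '/^time_remaining:/ {print $2}' "$file")
        echo "RECOVERING (${left}s remaining)"
    else
        # An inactive target's file contains only the word INACTIVE,
        # with no "status:" line, so fall back to the raw contents.
        echo "${status:-$(cat "$file")}"
    fi
}

# Typical use on an MDS or OSS server:
# for f in /proc/fs/lustre/*/*/recovery_status; do
#     echo "$f: $(recovery_summary "$f")"
# done
```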
The combination of the debugfs and llog_reader commands can be used to examine file
system configuration data as follows:
# debugfs -c -R 'dump CONFIGS/testfs-client /tmp/testfs-client' /dev/mapper/mpath0
debugfs 1.40.7.sun3 (28-Feb-2008)
/dev/mapper/mpath0: catastrophic mode - not reading inode or group bitmaps
# llog_reader /tmp/testfs-client
Header size : 8192
Time : Fri Oct 31 16:50:52 2008
Number of records: 20
Target uuid : config_uuid
-----------------------
#01 (224)marker 3 (flags=0x01, v1.6.6.0) testfs-clilov 'lov setup' Fri Oct 31 16:50:52 2008-
#02 (120)attach 0:testfs-clilov 1:lov 2:testfs-clilov_UUID
#03 (168)lov_setup 0:testfs-clilov 1:(struct lov_desc) uuid=testfs-clilov_UUID stripe:cnt=1 size=1048576 offset=0 pattern=0x1
#04 (224)marker 3 (flags=0x02, v1.6.6.0) testfs-clilov 'lov setup' Fri Oct 31 16:50:52 2008-
#05 (224)marker 4 (flags=0x01, v1.6.6.0) testfs-MDT0000 'add mdc' Fri Oct 31 16:50:52 2008-
#06 (088)add_uuid nid=172.31.97.1@o2ib(0x50000ac1f6101) 0: 1:172.31.97.1@o2ib
#07 (128)attach 0:testfs-MDT0000-mdc 1:mdc 2:testfs-MDT0000-mdc_UUID
#08 (144)setup 0:testfs-MDT0000-mdc 1:testfs-MDT0000_UUID 2:172.31.97.1@o2ib
#09 (088)add_uuid nid=172.31.97.2@o2ib(0x50000ac1f6102) 0: 1:172.31.97.2@o2ib
#10 (112)add_conn 0:testfs-MDT0000-mdc 1:172.31.97.2@o2ib
#11 (128)mount_option 0: 1:testfs-client 2:testfs-clilov 3:testfs-MDT0000-mdc
#12 (224)marker 4 (flags=0x02, v1.6.6.0) testfs-MDT0000 'add mdc' Fri Oct 31 16:50:52 2008-
#13 (224)marker 8 (flags=0x01, v1.6.6.0) testfs-OST0000 'add osc' Fri Oct 31 16:51:29 2008-
#14 (088)add_uuid nid=172.31.97.2@o2ib(0x50000ac1f6102) 0: 1:172.31.97.2@o2ib
#15 (128)attach 0:testfs-OST0000-osc 1:osc 2:testfs-clilov_UUID
#16 (144)setup 0:testfs-OST0000-osc 1:testfs-OST0000_UUID 2:172.31.97.2@o2ib
#17 (088)add_uuid nid=172.31.97.1@o2ib(0x50000ac1f6101) 0: 1:172.31.97.1@o2ib
#18 (112)add_conn 0:testfs-OST0000-osc 1:172.31.97.1@o2ib
#19 (128)lov_modify_tgts add 0:testfs-clilov 1:testfs-OST0000_UUID 2:0 3:1
#20 (224)marker 8 (flags=0x02, v1.6.6.0) testfs-OST0000 'add osc' Fri Oct 31 16:51:29 2008-
#
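Because the add_uuid records carry the server NIDs, a quick sanity check on a dumped log is to list the unique NIDs it references. The following is a minimal sketch; the list_nids function name is illustrative:

```shell
#!/bin/sh
# Sketch: list the unique server NIDs recorded by add_uuid entries.
# Reads llog_reader output on stdin.
list_nids() {
    grep -o 'nid=[^ (]*' | cut -d= -f2 | sort -u
}

# Typical use, with the dump file from the example above:
# llog_reader /tmp/testfs-client | list_nids
```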
Sometimes a client will not connect to one or more components of the file system, even though the
file system appears healthy, because of erroneous information in these configuration logs.
Frequently, this situation can be corrected with the writeconf procedure described in section 4.2.3.2
of the Lustre Manual. An overview of the procedure follows:
1. umount all clients.
2. Stop the file system on the servers as described in section 5.4.
3. For every server disk, run:
# tunefs.lustre --writeconf /dev/mapper/mpathN
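Step 3 can be scripted across the server disks. The following is a hedged sketch: the device names are examples, and writeconf_all and the TUNEFS variable are illustrative conveniences (setting TUNEFS=echo previews the commands without touching any disk):

```shell
#!/bin/sh
# Sketch of step 3: rewrite the configuration logs on each server disk.
TUNEFS=${TUNEFS:-tunefs.lustre}
writeconf_all() {
    for dev in "$@"; do
        $TUNEFS --writeconf "$dev"
    done
}

# Typical use (device names are examples):
# writeconf_all /dev/mapper/mpath0 /dev/mapper/mpath1
```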
Using HP SFS Software