
Sun StorEdge T3 and T3+ Array Configuration Guide • August 2001

If the volume has a hot spare configured and that drive is available, the data from
the disabled drive is reconstructed on the hot spare. When this operation
completes, the volume is again operating with full redundancy protection, so another
drive in the volume can fail without loss of data.

After a failed drive has been replaced, the original data is automatically
reconstructed on the new drive. If no hot spare was used, the data is regenerated
from the RAID redundancy data in the volume. If the data was reconstructed onto a
hot spare, a copy-back operation begins when that reconstruction completes: the
data on the hot spare is copied to the newly replaced drive.

You can also configure the rate at which data is reconstructed, so that rebuilds do
not interfere with application performance. The reconstruction rate can be set to
low, medium, or high:

Low is the slowest and has the lowest impact on performance

Medium is the default

High is the fastest and has the highest impact on performance

Note – Reconstruction rates can be changed while a reconstruction operation is in
progress. However, the change does not take effect until the current reconstruction
has completed.
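As an illustrative sketch, the reconstruction rate is reported and set through the sys command on the array's telnet CLI. The hostname prompt and exact output format below are assumptions and may differ by firmware revision:

```
t3:/:<1> sys list                # current settings, including recon_rate
t3:/:<2> sys recon_rate low      # reduce rebuild impact on host I/O
t3:/:<3> sys recon_rate med      # restore the default rate
```

A rate change entered mid-rebuild is accepted, but as the note above explains, it applies only to the next reconstruction.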

Using RAID Levels to Configure
Redundancy

The RAID level determines how the controller reads and writes data and parity on
the drives. The Sun StorEdge T3 and T3+ arrays can be configured with RAID level
0, RAID level 1 (1+0), or RAID level 5. The factory-configured LUN is a RAID 5 LUN.
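As a sketch of how a RAID level is chosen at volume creation, the vol command on the array's CLI takes the RAID level and an optional hot spare. The unit and drive numbering (u1d1-8, u1d9) is an assumption for illustration, not a prescription:

```
t3:/:<1> vol add v0 data u1d1-8 raid 5 standby u1d9
t3:/:<2> vol init v0 data        # initialize the new volume
t3:/:<3> vol mount v0            # make the volume available to the host
```

This builds the same shape of configuration as the factory default: a 7+1 RAID 5 LUN with one hot spare.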

Note – The default RAID level (5) can result in very large volumes; for example,
128 Gbytes in a configuration of a single 7+1 RAID 5 LUN plus hot spare, with
18-Gbyte drives. Some applications cannot use such large volumes effectively. The
following two solutions can be used separately or in combination:

First, use the partitioning utility available on the data host's operating system. In
the Solaris environment, use the format utility, which can create up to seven
distinct partitions per volume. Note that in the configuration described
above, equal-sized partitions would each be 18 Gbytes, which may still be too
large for legacy applications to use efficiently.
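The capacity arithmetic behind this example can be checked directly. The figures are taken from the configuration described above (7 data drives of 18 Gbytes each, up to seven format partitions); this decimal arithmetic gives 126 Gbytes, which the guide rounds to 128:

```shell
# Capacity math for a 7+1 RAID 5 LUN built from 18-Gbyte drives.
data_drives=7          # parity consumes the eighth drive's worth of space
drive_gb=18
vol_gb=$((data_drives * drive_gb))   # usable capacity of the volume
max_parts=7                          # format can create up to 7 partitions
part_gb=$((vol_gb / max_parts))      # size of each equal partition
echo "volume=${vol_gb}GB partition=${part_gb}GB"
```

Even split seven ways, each partition is still 18 Gbytes, which is exactly the point made above about legacy applications.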