CHAPTER 5
First-Time Configuration
This chapter summarizes the most common procedures used for first-time configuration and includes the following topics:
The following functions describe redundant controller operation.
The two controllers continuously monitor each other. When a controller detects that the other controller is not responding, the working controller immediately takes over and disables the failed controller.
Note - Logical volumes, an alternative to logical drives, do not work in some modern configurations such as Sun Cluster environments. Use logical drives instead. For more information, see First-Time Controller Configuration.
The active-to-active configuration engages all array resources to actively maximize performance. Users might also assign all logical configurations to one controller and let the other act as a standby.
In a single-controller configuration, the controller must always be the primary controller, and all logical drives must be assigned to it; otherwise the controller cannot operate. The primary controller controls all logical drive and firmware operations.
The secondary controller is used only in dual-controller configurations for redistributed I/O and for failover.
The Redundant Controller setting ("view and edit Peripheral devices → Set Peripheral Device Entry") must remain enabled for single-controller configurations. This preserves the default primary controller assignment of the single controller. The controller status shows "scanning," which indicates that the firmware is scanning for primary and secondary controller status and that redundancy is enabled even though it is not used. There is no performance impact.
The battery LED (on far right side of the controller module) is an amber LED if the battery is bad or missing. The LED is blinking green if the battery is charging and is solid green when the battery is fully charged.
The initial firmware screen displays the battery status at the top of the screen, where the BAT: status ranges from BAD, to ----- (charging), to +++++ (fully charged).
For maximum life, lithium ion batteries are not recharged until the charge level is very low, indicated by a status of -----. Automatic recharging at this point takes very little time.
A battery module whose status shows one or more + signs can support cache memory for 72 hours. As long as one or more + signs are displayed, your battery is performing correctly.
Your lithium ion battery should be changed every two years if the unit is continuously operated at 25 degrees C. If the unit is continuously operated at 35 degrees C or higher, it should be changed every year. The shelf life of your battery is three years.
For information on the date of manufacture and how to replace the battery module, refer to the Sun StorEdge 3000 Family FRU Installation Guide.
For the acceptable operating and nonoperating temperature ranges for your array, refer to the specifications for your array.
Unfinished writes are cached in memory in write-back mode. If power to the array is discontinued, data stored in the cache memory is not lost. Battery modules can support cache memory for several days.
Write cache is not automatically disabled when the battery goes offline due to battery failure or disconnection. You can enable or disable the write-back cache capabilities of the RAID controller. To ensure data integrity, you can disable the Write Back cache option and switch to the Write Through cache option through the firmware application (go to "view and edit Configuration parameters" and select "Caching Parameters"). The risk of data loss is remote.
You can manage the array through one of the following three methods:
Sun StorEdge 3310 SCSI arrays are preconfigured and require minimal configuration. TABLE 5-2 summarizes the typical series of procedures for completing a first-time RAID controller configuration. All other procedures can be performed by using either the COM port or the Ethernet port connection to a management console.
Set up a serial port connection. See Configuring a COM Port to Connect to a RAID Array for more information.
Reset the controller. The IDs assigned to controllers take effect only after the controller is reset.
Remove default logical drive(s) and create new logical drives (optional).
Assign logical drives to the secondary controller (optional).
Map each logical drive partition to a LUN on a host channel.
* Reset the controller after each step or at the end of the configuration process.
The initial controller screen is displayed when you first access the RAID controller firmware and whenever the RAID controller is powered on.
1. Use the up and down arrow keys to select the VT100 terminal emulation mode, and then press Return to enter the Main Menu.
See Introducing Key Screens and Commands for detailed information about understanding and using the initial firmware screen.
2. Use the following keys to navigate within the application:
Press the Esc key to return to the previous menu without performing the selected menu option.
Press a letter as a keyboard shortcut for commands that have a boldface capital letter.
The firmware procedures use the term "Choose" as a shortcut description. Quotation marks are used to indicate a specific menu option or a series of menu options.
3. Proceed to configure the array using options from the Main Menu as described in the rest of this chapter.
All Sun StorEdge 3310 SCSI RAID arrays are preconfigured when they arrive from the factory. Default channel settings and rules are specified as follows:
The most common reason to change a host channel to a drive channel is to attach an expansion unit to a RAID array when you need only one host channel.
To change the use of a SCSI channel, reconfigure the channel according to the following procedure.
1. Choose "view and edit Scsi channels" from the Main Menu.
The communications path for the controllers is displayed as "RCCOM (Redundant Controller Communications)."
2. Select the channel that you want to modify.
3. Choose "channel Mode" from the displayed menu.
4. Select Yes to change the host or drive assignment.
All RAID arrays are preconfigured when they arrive from the factory.
Each host channel can have two editable ID numbers:
Each ID number must be a unique number within the host channel. You can:
Note - To map 128 partitions into 128 LUNs, you must add additional host IDs. A minimum of four host IDs are required; a maximum of six host IDs are possible. For details on mapping 128 LUNs, refer to Planning for 128 LUNs (Optional). |
To select a unique ID number for a host channel:
1. Choose "view and edit Scsi channels."
2. Select the host channel on which you want to edit the Primary/Secondary ID.
3. Choose "view and edit scsi Id."
If a host ID has already been assigned to that channel, it is displayed.
4. Select the existing host ID and press Return.
5. Choose "Add Channel SCSI ID."
6. Select the controller on which you want to add a host ID.
7. Choose an ID number for that controller.
A confirmation dialog is displayed.
8. Choose Yes to confirm.
9. From the Main Menu, choose "system Functions → Reset controller" and then choose Yes to confirm.
The configuration change takes effect only after the controller is reset.
Before creating or modifying logical drives, you should select the optimization mode for all logical drives you create. The optimization mode determines the block size used when writing data to the drives in an array.
The type of application the array is working with determines whether Random or Sequential I/O should be applied. Video/imaging application I/O size can be 128, 256, 512 Kbyte, or up to 1 Mbyte, so the application reads and writes data to and from the drive as large-block, sequential files. Database/transaction-processing applications read and write data from the drive as small-block, randomly-accessed files.
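The trade-off can be sketched with a simple count of drive operations per transfer; the transfer and block sizes below are illustrative examples, not values mandated by the firmware:

```python
# Illustrative sketch (not a firmware calculation): how block size affects
# the number of drive operations needed to service one transfer.

def ops_per_transfer(transfer_kb: int, block_kb: int) -> int:
    """Number of block-sized operations needed to move one transfer."""
    return -(-transfer_kb // block_kb)  # ceiling division

# A large sequential video write benefits from large blocks:
print(ops_per_transfer(1024, 128))  # 1 Mbyte transfer, 128 Kbyte blocks -> 8 ops
print(ops_per_transfer(1024, 32))   # same transfer, 32 Kbyte blocks -> 32 ops

# A small random database I/O still costs one full block, however large:
print(ops_per_transfer(4, 128))  # 4 Kbyte transfer -> 1 op
```

This is why sequential workloads favor large block sizes while random, small-block workloads do not benefit from them.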
There are two limitations that apply to the optimization modes:
For more information about optimization modes, including how to change your optimization, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for your array.
Your choice of Random or Sequential optimization affects the maximum number of disks you can include in an array and the maximum usable capacity of a logical drive. The following tables contain the maximum number of disks per logical drive and the maximum usable capacity of a logical drive.
Note - You can have a maximum of eight logical drives and 36 disks, using one array and two expansion units. |
Note - You might not be able to use all disks for data when using 36 146-Gbyte disks. Any remaining disks can be used as spares. |
A logical drive is a set of drives grouped together to operate under a given RAID level. Each controller is capable of supporting as many as eight logical drives. The logical drives can have the same or different RAID levels.
For a 12-drive array, the RAID array is preconfigured as follows:
For a 5-drive array, the RAID array is preconfigured as follows:
The following table highlights the RAID levels available.
For more information about logical drives, spares, and RAID levels, refer to Chapter 1, Basic Concepts, in the Sun StorEdge 3000 Family RAID Firmware User's Guide.
The RAID array is already configured with one or two RAID 5 logical drives and one global spare. Each logical drive consists of a single partition by default.
This procedure is used to modify the RAID level and to add more logical drives, if necessary. In this procedure, you configure a logical drive to contain one or more hard drives based on the desired RAID level, and partition the logical drive into additional partitions.
Note - If you want to assign 128 partitions to 128 LUNs in an array, you need to have a minimum of four logical drives (each with 32 partitions). |
For redundancy across separate channels, you can also create a logical drive containing drives distributed over separate channels. You can then partition the logical unit into one or several partitions.
A logical drive consists of a group of SCSI drives. Each logical drive can be configured with a different RAID level.
A drive can be assigned as the local spare drive to one specified logical drive, or as a global spare drive that is available to all logical drives on the RAID array. Spares can be part of automatic array rebuild. A spare is not available for logical drives with no data redundancy (RAID 0).
View the connected drives. Before configuring disk drives into a logical drive, it is necessary to understand the status of physical drives in your enclosure.
1. Choose "view and edit Scsi drives."
This displays information about all of the installed physical drives.
2. Use the arrow keys to scroll through the table. Check that all installed drives are listed here.
If a drive is installed but is not listed, it might be defective or might not be installed correctly. Contact your RAID supplier.
When the power is on, the controller scans all hard drives that are connected through the drive channels. If a hard drive was connected after the controller completed initialization, use the "Scan SCSI Drive" function to let the controller recognize the newly added hard drive and configure it as a member of a logical drive.
Caution - Scanning an existing drive removes its assignment to any logical drive. All data on that drive is lost. |
After you have determined the status of your disk drives, create a logical drive by performing the following steps.
1. Choose "view and edit Logical drive" from the Main Menu.
2. Select the first available unassigned logical drive (LG) to proceed.
You can create as many as eight logical drives from drives on any SCSI bus.
3. When prompted to "Create Logical Drive?" select Yes.
A pull-down list of supported RAID levels is displayed.
4. Select a RAID level for this logical drive.
RAID 5 is used in the following example screens.
For brief descriptions of RAID levels, refer to Reviewing Default Logical Drives and RAID Levels. For more information about RAID levels, refer to Chapter 1 in the Sun StorEdge 3000 Family RAID Firmware User's Guide.
5. Select your member drive(s) from the list of available physical drives.
The drives can be tagged for inclusion by highlighting the drive and then pressing Return. An asterisk (*) mark is displayed on the selected physical drive(s).
To deselect the drive, press Return again on the selected drive. The "*" mark disappears.
Note - You must select at least the minimum number of drives required per RAID level. |
a. Use the up and down arrow keys and press Return to select more drives.
b. After all physical drives have been selected for the logical drive, press the Esc key to continue to the next option.
After member physical drives are selected, a list of selections is displayed.
6. (Optional) Set Maximum Physical Drive Capacity.
a. (Optional) Choose "Maximum Drive Capacity" from the menu.
The Maximum Available Drive Capacity and Maximum Drive Capacity are displayed.
Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.
b. (Optional) To reduce the drive capacity, type the desired capacity and press Return.
As a rule, a logical drive should be composed of physical drives with the same capacity. A logical drive can only use the capacity of each drive up to the maximum capacity of the smallest drive.
7. (Optional) Add a local spare drive from the list of unused physical drives.
Note - A global spare cannot be created while creating a logical drive. |
The spare chosen here is a local spare and automatically replaces any failed disk drive in this logical drive. The local spare is not available for any other logical drive.
a. (Optional) Choose "Assign Spare Drives" from the menu.
A list of available drives is displayed.
b. (Optional) Select a drive from the displayed list and press Return.
An asterisk is displayed in the Slot column of the selected drive.
Note - A logical drive created in a RAID level which has no data redundancy (RAID 0) does not support spare drive rebuilding. |
c. (Optional) Press the Esc key to continue.
8. (Optional) Assign this logical drive to the secondary controller.
By default, all logical drives are automatically assigned to the primary controller.
If you use two controllers for the redundant configuration, a logical drive can be assigned to either of the controllers to balance workload. Logical drive assignment can be changed any time later.
a. Choose "Logical Drive Assignments."
A confirmation dialog is displayed.
b. Choose Yes to confirm, or press the Esc key or choose No to exit from this window without changing the controller assignment.
9. Press the Esc key to continue.
A confirmation box is displayed on the screen.
10. Verify all information in the box before choosing Yes to continue.
A message indicates that the logical drive initialization has begun.
11. Press the Esc key to cancel the "Notification" prompt.
12. After the logical drive initialization is completed, use the Esc key to return to the Main Menu.
13. Choose "view and edit Logical drives" to view details of the created logical drive.
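The maximum-capacity rule noted in Step 6 (a logical drive can use each member drive only up to the capacity of its smallest member) can be sketched as follows; the drive sizes here are hypothetical examples:

```python
# Sketch of the capacity rule for logical drives: usable capacity per member
# is capped at the smallest member, and RAID 5 spends one drive's worth of
# capacity on parity. Not a firmware interface; drive sizes are hypothetical.

def raid5_usable_gb(drive_sizes_gb):
    """Usable capacity of a RAID 5 logical drive built from mixed-size drives."""
    smallest = min(drive_sizes_gb)
    return smallest * (len(drive_sizes_gb) - 1)

# Mixing one 36 GB drive with three 73 GB drives wastes most of the
# larger drives' capacity:
print(raid5_usable_gb([36, 73, 73, 73]))  # -> 108, not 219
```

This is why, as a rule, a logical drive should be composed of drives with the same capacity.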
The Solaris operating environment requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating environment for logical drives larger than 253 Gbyte, change the default settings to "< 65536 Cylinders" and "255 Heads" to cover all logical drives over 253 GB and under the maximum limit. The controller automatically adjusts the sector count, and then the operating environment can read the correct drive capacity.
For Solaris operating environment configurations, use the values in the following table.
Note - Earlier versions of the Solaris operating environment do not support drive capacities larger than 1 terabyte. |
* These settings are also valid for all logical drives under 253 GBytes. After settings are changed, they apply to all logical drives in the chassis.
To revise the Cylinder and Head settings, perform the following steps.
1. Choose "view and edit Configuration parameters → Host-Side SCSI Parameters → Host Cylinder/Head/Sector Mapping Configuration → Sector Ranges - Variable → 255 Sectors."
2. Choose "Head Ranges - Variable → 255 Heads."
3. Choose "Cylinder Ranges - Variable → < 65536 Cylinders."
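As a rough check on these settings, the capacity presented to the host is the product of cylinders, heads, sectors, and bytes per sector. This sketch (an arithmetic illustration, not a firmware calculation) shows that the geometry above addresses roughly 2 terabytes, comfortably covering logical drives over 253 GB:

```python
# Sketch of the drive-geometry arithmetic behind the settings above.
# Capacity seen by the host = cylinders x heads x sectors x bytes/sector.

def geometry_capacity_gb(cylinders, heads, sectors, bytes_per_sector=512):
    return cylinders * heads * sectors * bytes_per_sector / 2**30

# With cylinders capped below 65536, 255 heads, and 255 sectors:
print(round(geometry_capacity_gb(65535, 255, 255)))  # about 2032 GB
```

The controller adjusts the actual cylinder count downward for each logical drive so that the geometry matches the drive's real capacity.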
Refer to Sun StorEdge 3000 Family RAID Firmware User's Guide for more information about firmware commands used with logical drives.
By default, logical drives are automatically assigned to the primary controller. If you assign half the drives to the secondary controller, maximum speed and performance are somewhat improved due to the redistribution of traffic.
To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).
After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see Mapping Logical Drive Partitions to Host LUNs).
To change a logical drive controller assignment:
1. Choose "view and edit Logical drives."
2. Select the logical drive you want to reassign.
3. Choose "logical drive Assignments."
A confirmation dialog is displayed.
4. Choose Yes to confirm.
The reassignment is evident from the "view and edit Logical drives" screen.
A "P" in front of the LG number means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to a secondary controller.
For example, "S1" indicates that logical drive 1 is assigned to the secondary controller.
A confirmation message is displayed:
NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now? |
5. Choose Yes to reset the controller.
You must reset the controller for the changes to take effect.
You can divide a logical drive into several partitions, or use the entire logical drive as a single partition. You can configure up to 32 partitions for each logical drive.
For guidelines on setting up 128 LUNs, refer to Planning for 128 LUNs (Optional).
Caution - If you modify the size of a partition or logical drive, you lose all data on those drives. |
To partition a logical drive, perform the following steps.
1. Choose "view and edit Logical drives" from the Main Menu.
2. Select the logical drive you want to partition.
3. Choose "Partition logical drive" from the menu.
If the logical drive has not already been partitioned, a message asks you to confirm partitioning the logical drive.
4. Choose Yes to continue.
A list of the partitions for this logical drive is displayed. If the logical drive has not yet been partitioned, all of the logical drive capacity is listed as "partition 0."
5. Select a partition from the list and press Return.
6. Type the desired size for the selected partition and press Return.
A warning prompt is displayed:
Caution - Make sure any data on this partition that you want to save has been backed up before you partition the logical drive. |
7. Choose Yes to confirm.
The remaining capacity of the logical drive is automatically allotted to the next partition. For example, if a partition size of 3000 MB is entered for a 30000-MB logical drive, the remaining 27000 MB is allocated to the partition that follows the newly created partition.
8. Repeat the preceding steps to partition the remaining capacity of your logical drive.
You can create up to 32 partitions per logical drive, with a total of no more than 128 partitions/LUNs per RAID array.
Note - When you modify a partition or logical drive size, you must reconfigure all host LUN mappings. All the host LUN mappings are removed with any change to partition capacity. See Mapping Logical Drive Partitions to Host LUNs. |
Note - When a partition of logical drive/logical volume is deleted, the capacity of the deleted partition is added to the partition above the deleted partition. |
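The partition arithmetic described in the steps and notes above can be sketched as follows, using the 3000 MB example from the text; the helper function is hypothetical, not a firmware interface:

```python
# Sketch of how partition capacity is redistributed when a partition is
# resized: the freed capacity moves to the next partition, as described
# in the procedure above. Hypothetical helper, not a firmware interface.

def carve_partition(partitions, index, new_size_mb):
    """Shrink partitions[index] to new_size_mb; the freed capacity is
    allotted to the next partition (created if none exists)."""
    freed = partitions[index] - new_size_mb
    partitions[index] = new_size_mb
    if index + 1 < len(partitions):
        partitions[index + 1] += freed
    else:
        partitions.append(freed)
    return partitions

# Initially all capacity is "partition 0"; carving 3000 MB from a
# 30000 MB logical drive leaves 27000 MB in the following partition:
print(carve_partition([30000], 0, 3000))  # -> [3000, 27000]
```

Deleting a partition works in reverse: its capacity is added to the partition above it, per the note above.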
If you want to create 128 LUNs, which is the maximum number of storage partitions that can be mapped for a RAID array, set up one of the following configurations:
For details on how to add host IDs, refer to Creating Additional Host IDs (Optional).
Note - For an overview of how partitions, LUNs, and host IDs work, refer to Mapping Logical Drive Partitions to Host LUNs. |
To set up 128 LUNs, the following steps are required.
1. Create a minimum of four host IDs.
By default, you have two host IDs: Channel 1 ID 0 (primary controller) and Channel 3 ID 1 (secondary controller). You can have a total of two IDs per channel, one for the primary controller and one for the secondary controller.
For the detailed procedure, refer to Creating Additional Host IDs (Optional).
2. Confirm that the allowed number of LUNs per host ID is 32.
Choose "view and edit Configuration parameters → host-side SCSI Parameters."
If the "LUNs per Host SCSI ID" is not 32, highlight the line, press Return and select the number 32. Then choose Yes to confirm.
3. Create at least four logical drives.
For the detailed procedure, refer to Creating Logical Drive(s) (optional).
4. For each logical drive, create a number of partitions per logical drive until you reach a total of 128 partitions, then map those partitions to the host IDs.
For the detailed procedures, refer to Partitioning a Logical Drive (optional) and Mapping Logical Drive Partitions to Host LUNs.
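The arithmetic behind these steps can be checked with a short sketch:

```python
# Sketch of the 128-LUN arithmetic from the steps above: each host ID can
# present up to 32 LUNs, so four host IDs are the minimum needed for 128.
# Likewise, at 32 partitions per logical drive, at least four logical
# drives are needed to supply 128 partitions.

LUNS_PER_HOST_ID = 32
PARTITIONS_PER_LOGICAL_DRIVE = 32
MAX_PARTITIONS = 128  # maximum partitions/LUNs per RAID array

print(MAX_PARTITIONS // LUNS_PER_HOST_ID)            # -> 4 host IDs
print(MAX_PARTITIONS // PARTITIONS_PER_LOGICAL_DRIVE)  # -> 4 logical drives
```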
The next step is to map each storage partition as one system drive (host ID/LUN). The host SCSI adapter recognizes the system drives after re-initializing the host bus.
A SCSI channel (SCSI bus) can connect up to 15 devices (excluding the controller itself) when the Wide function is enabled (16-bit SCSI). Each device has one unique ID.
The following figure illustrates the mapping of a system drive to a host ID/LUN combination.
Each SCSI ID/LUN looks like a storage device to the host computer.
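A minimal model of this mapping (the names here are hypothetical, not part of the firmware) shows why each mapped partition appears to the host as an independent disk:

```python
# Illustrative model of host LUN mapping: each mapped partition is
# addressed by a unique (channel, SCSI ID, LUN) triple, and each triple
# looks like a separate storage device to the host. Hypothetical names.

lun_map = {}  # (channel, scsi_id, lun) -> (logical_drive, partition)

def map_partition(channel, scsi_id, lun, logical_drive, partition):
    key = (channel, scsi_id, lun)
    if key in lun_map:
        raise ValueError(f"LUN already mapped: {key}")
    lun_map[key] = (logical_drive, partition)

# Map partition 0 of logical drive 0 to channel 1, ID 0, LUN 0:
map_partition(1, 0, 0, 0, 0)
print(lun_map[(1, 0, 0)])  # -> (0, 0)
```

Note that the real firmware, unlike this sketch, does allow the same partition to be mapped to multiple LUNs on multiple host channels for clustered and redundant-path environments.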
To map a logical drive partition to a LUN, perform the following steps.
1. Choose "view and edit Host luns" from the Main Menu.
2. Select a specific host-channel ID and press Return. Select a logical drive if prompted.
3. Select a LUN number, and press Return.
A list of available logical drives is displayed.
4. Select a logical drive and press Return.
A list of available partitions is displayed.
5. Select a partition and press Return.
6. Review the displayed mapping information and press Return.
A confirmation dialog is displayed.
7. Choose Yes to confirm the mapping.
The same partition might be mapped to multiple LUNs on multiple host channels. This feature is necessary for clustered environments and redundant path environments.
8. Press the Esc key to return to the Main Menu.
9. Repeat Step 1 through Step 8 for each partition until all partitions are mapped to a LUN.
10. Choose "system Functions → Reset controller" and then choose Yes to reset the controller and implement the new configuration settings.
11. Each operating system or environment has its own method for recognizing storage devices and LUNs, and might require specific commands or file edits. See the information for your operating system or environment to ensure that you have performed the necessary commands or file edits.
For information about the different operating environments and operating systems, see:
You can back up your controller-dependent configuration information. We recommend using this function to save configuration information whenever a configuration change is made.
The logical configuration information is stored within the logical drive.
Note - A logical drive must exist for the controller to write NVRAM content onto it. |
1. Choose "system Functions → controller maintenance → save NVRAM to disks."
A confirmation dialog is displayed.
2. Choose Yes to confirm.
A prompt confirms that the NVRAM information has been successfully saved.
To restore the configuration, refer to Restoring Your Configuration (NVRAM) From a File.
The following software tools are available on the Sun StorEdge 3000 Family Professional Storage Manager CD, provided with your array:
The Sun StorEdge 3000 Family Documentation CD provides the related user guides with detailed installation and configuration procedures for these tools.
For other supported software, see the release notes for your array.
To enable VERITAS Dynamic Multi-Pathing (DMP) support on VERITAS Volume Manager Version 3.2 for a RAID array, perform the following steps.
1. Configure at least two SCSI channels as host channels (channels 1 and 3 by default) and add additional SCSI host IDs if needed.
2. Connect host cables to the I/O host ports configured in step 1.
3. Map each LUN to two host channels to provide dual-pathed LUNs.
4. Add the correct SCSI string to vxddladm so VxVM can manage the LUNs as a multi-pathed JBOD.
5. Reboot the hosts. A system reboot is required for these changes to take effect.
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.