C H A P T E R  5

First-Time Configuration

This chapter summarizes the most common procedures used for first-time configuration and includes the following topics:


5.1 Controller Defaults and Limitations

The following points describe redundant controller operation.

The two controllers continuously monitor each other. When a controller detects that the other controller is not responding, the working controller immediately takes over and disables the failed controller.



Note - Logical volumes, an alternative to logical drives, are unsuited to some modern configurations such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information see First-Time Controller Configuration.



An active-to-active configuration engages all array resources to maximize performance. You can also assign all logical configurations to one controller and let the other act as a standby.


5.2 Single-Controller Considerations

In a single-controller configuration, keep the controller as the primary controller at all times and assign all logical drives to it. The primary controller controls all logical drive and firmware operations; in a single-controller configuration, the controller must be the primary controller or it cannot operate.

The secondary controller is used only in dual-controller configurations for redistributed I/O and for failover.




Caution - Do not disable the Redundant Controller setting and do not set the controller as a secondary controller. If you disable the Redundant Controller Function and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and will need to be replaced.



The Redundant Controller setting ("View and Edit Peripheral Devices right arrow Set Peripheral Device Entry") must remain enabled for single-controller configurations. This preserves the default primary controller assignment of the single controller. The controller status shows "scanning," which indicates that the firmware is scanning for primary and secondary controller status and redundancy is enabled even though it is not used. There is no performance impact.


5.3 Battery Operation

The battery LED (on the far right side of the controller module) is amber if the battery is bad or missing, blinking green if the battery is charging, and solid green when the battery is fully charged.

5.3.1 Battery Status

The initial firmware screen displays the battery status at the top of the screen, where the BAT: status ranges from BAD, through ----- (discharged; charging), to +++++ (fully charged).

For maximum life, lithium ion batteries are not recharged until the charge level is very low, indicated by a status of -----. Automatic recharging at this point takes very little time.

A battery module whose status shows one or more + signs can support cache memory for 72 hours. As long as one or more + signs are displayed, your battery is performing correctly.

TABLE 5-1 Battery Status Indicators

Battery Display   Description
-----             Discharged; the battery is automatically recharged when it reaches this state.
+----             Adequately charged to maintain cache memory for 72 hours or more in case of power loss. Automatic recharging occurs when the battery status drops below this level.
++---             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
+++--             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
++++-             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
+++++             Fully charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
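The BAT: display string can be interpreted mechanically. The following sketch is a hypothetical helper that maps a status string to the descriptions in TABLE 5-1; it is not part of the firmware:

```python
def battery_status(display: str) -> str:
    """Map a BAT: status string from the initial firmware screen to a
    description. Hypothetical helper; the mapping follows TABLE 5-1."""
    if display == "BAD":
        return "Battery bad or missing"
    plus_count = display.count("+")
    if plus_count == 0:
        return "Discharged; automatic recharging begins at this level"
    if plus_count == 5:
        return "Fully charged; maintains cache memory for 72 hours or more"
    # Any partial charge (one to four + signs) still covers 72 hours.
    return "Charged; maintains cache memory for 72 hours or more"

assert battery_status("-----").startswith("Discharged")
assert battery_status("+++--").startswith("Charged")
```

As long as the string contains at least one + sign, the battery is performing correctly.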


Your lithium ion battery should be changed every two years if the unit is continuously operated at 25 degrees C. If the unit is continuously operated at 35 degrees C or higher, it should be changed every year. The shelf life of your battery is three years.



Note - The RAID controller has a temperature sensor which shuts off battery charging above 54 degrees C. When this happens, the battery status may be reported as BAD, but no alarm is written to the event log since no actual battery failure has occurred. This behavior is normal. As soon as the temperature returns to the normal range, battery charging resumes and the battery status is reported correctly. It is not necessary to replace or otherwise interfere with the battery in this situation.



For information on the date of manufacture and how to replace the battery module, refer to the Sun StorEdge 3000 Family FRU Installation Guide.

For the acceptable operating and non-operating temperature ranges for your array, see the specifications for your array.


5.4 Write-Back Versus Write-Through Cache Options

In write-back mode, unfinished writes are cached in memory. If power to the array is interrupted, data stored in cache memory is not lost; battery modules can support cache memory for several days.

Write cache is not automatically disabled when the battery goes offline due to battery failure or a disconnected battery. You can enable or disable the write-back cache capability of the RAID controller. To ensure data integrity, you can disable the Write Back cache option and switch to the Write Through cache option through the firmware application (go to "view and edit Configuration parameters" and select "Caching Parameters"). The risk of data loss with write-back caching is remote.
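The behavioral difference between the two cache options can be modeled abstractly. The following sketch illustrates the general write-back versus write-through trade-off; it is not the controller's implementation:

```python
class CacheSim:
    """Illustrative model of write-back vs. write-through caching."""

    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = []   # unfinished writes held in cache memory
        self.disk = []    # data committed to disk

    def write(self, block):
        if self.write_back:
            # Write-back: acknowledge immediately; the data sits in
            # (battery-backed) cache until it is flushed.
            self.cache.append(block)
        else:
            # Write-through: data goes to disk before the write completes.
            self.disk.append(block)

    def flush(self):
        # Write-back mode commits cached data to disk later.
        self.disk.extend(self.cache)
        self.cache.clear()

wb = CacheSim(write_back=True)
wb.write("A")                          # acknowledged while only in cache
assert wb.cache == ["A"] and wb.disk == []
wb.flush()
assert wb.disk == ["A"]
```

This is why the battery matters in write-back mode: acknowledged data may exist only in cache until the flush completes.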


5.5 Accessing the Management Tools

You can manage the array through one of the following three methods:


5.6 First-Time Controller Configuration

Sun StorEdge 3310 SCSI arrays are preconfigured and require minimal configuration. TABLE 5-2 summarizes the typical series of procedures for completing a first-time RAID controller configuration. All other procedures can be performed by using either the COM port or the Ethernet port connection to a management console.

TABLE 5-2 Summary of First-Time Controller Configuration Steps

Bold = Required minimum configuration

1.  Cabling from the RAID array to host(s) must be complete.
2.  Set up a serial port connection. See Configuring a COM Port to Connect to a RAID Array for more information.
3.  Configure SCSI channels as host or drive (optional).
4.  Create primary ID and secondary ID on host channel(s).
5.  Reset the controller. The IDs assigned to controllers take effect only after the controller is reset.
6.  Remove default logical drive(s) and create new logical drives (optional).
7.  Assign logical drives to the secondary controller (optional).
8.  Repartition the logical drive(s) (optional).
9.  Map each logical drive partition to a LUN on a host channel.
10. Reset the controller.*

Configuration is complete.

11. Save the configuration to disk.

* Reset the controller after each step or at the end of the configuration process.




Note - Resetting the controller can result in occasional host-side error messages such as parity error and synchronous error messages. No action is required and the condition corrects itself as soon as reinitialization of the controller is complete.





Note - For security reasons, only a single monitoring and management session is allowed for a firmware connection to an array. If the firmware is accessed more than once at the same time, whether in-band and out-of-band or through multiple out-of-band sessions, the screens are synchronized and the users interfere with each other's operations.



5.6.1 Viewing the Initial Firmware Windows

The initial controller screen is displayed when you first access the RAID controller firmware or when the RAID controller is powered on.

1. Use the up and down arrow keys to select the VT100 terminal emulation mode, and then press Return to enter the Main Menu.

See Introducing Key Screens and Commands for detailed information about understanding and using the initial firmware screen.

2. Use the following keys to navigate within the application:

Key                                    Function
left, right, up, and down arrow keys   Select options.
Return or Enter                        Perform the selected menu option or display a submenu.
Esc                                    Return to the previous menu without performing the selected menu option.
Ctrl-L (Ctrl key and L key together)   Refresh the screen information.
Boldface capital letter of a command   Access that Main Menu command quickly (keyboard shortcut).


The firmware procedures use the term "Choose" as a shortcut description. Quotation marks are used to indicate a specific menu option or a series of menu options.

Procedure                       Meaning
Choose "menu option"            Highlight the menu option and press Return, or press the key that corresponds to the capitalized letter in the menu option, if one is available.
Choose "menu option 1 right arrow menu option 2 right arrow menu option 3"
                                A series of nested menu options selected with the arrow keys. Press Return after each selection to access the next menu item and complete the series.

3. Proceed to configure the array using options from the Main Menu as described in the rest of this chapter.

 FIGURE 5-1 Firmware Main Menu

Firmware Main Menu with eleven commands listed.

5.6.2 Configuring SCSI Channels as Host or Drive (Optional)

All Sun StorEdge 3310 SCSI RAID arrays are preconfigured when they arrive from the factory. Default channel settings and rules are specified as follows:

The most common reason to change a host channel to a drive channel is to attach an expansion unit to a RAID array when you need only one host channel.

To change the use of a SCSI channel, reconfigure the channel according to the following procedure.

1. Choose "view and edit Scsi channels" from the Main Menu.

 Screen capture shows the "view and edit Scsi channels" selected and its status table displaying the SCSI channel information.

The communications path for the controllers is displayed as "RCCOM (Redundant Controller Communications)."

2. Select the channel that you want to modify.

3. Choose "channel Mode" from the displayed menu.

4. Select Yes to change the host or drive assignment.

 Screen capture shows the "channel mode" command selected in the "view and edit Scsi channels" window.



Caution - The channels of redundant controllers must be the same. For example, if the primary controller uses channel 2 to connect to a group of drives, the secondary controller must also use channel 2 to connect to the same group of drives. Changes to the primary controller are automatically made to the secondary controller.



5.6.3 Creating Additional Host IDs (Optional)

All RAID arrays are preconfigured when they arrive from the factory.

Default host channel IDs are:

Each host channel can have two editable ID numbers:

Each ID number must be a unique number within the host channel. You can:



Note - To map 128 partitions into 128 LUNs, you must add additional host IDs. A minimum of four host IDs are required; a maximum of six host IDs are possible. For details on mapping 128 LUNs, refer to Planning for 128 LUNs (Optional).



To select a unique ID number for a host channel:

1. Choose "view and edit Scsi channels."

2. Select the host channel on which you want to edit the Primary/Secondary ID.

3. Choose "view and edit scsi Id."

If a host ID has already been assigned to that channel, it is displayed.

4. Select the existing host ID and press Return.

5. Choose "Add Channel SCSI ID."

6. Select the controller on which you want to add a host ID.

 Screen capture shows the "Primary controller scsi id" command selected in the "view and edit Scsi channels" window.

7. Choose an ID number for that controller.

A confirmation dialog is displayed.



Note - To create a total of 128 LUNs, you must have a minimum of four host IDs (two each for Channels 1 and 3) and can have a maximum of six host IDs (two each for Channels 1, 2, and 3). Each host ID can have up to 32 partitions, which are then mapped to LUNs to create a total not to exceed 128.



8. Choose Yes to confirm.

9. From the Main Menu, choose "system Functions right arrow Reset controller" and then choose Yes to confirm.

The configuration change takes effect only after the controller is reset.

5.6.4 Selecting Sequential or Random Optimization

Before creating or modifying logical drives, you should select the optimization mode for all logical drives you create. The optimization mode determines the block size used when writing data to the drives in an array.



Note - Your array is preconfigured for Sequential Optimization. If Random Optimization is most appropriate for your use, you will need to delete all of the preconfigured logical drives, change the optimization mode, and then create new logical drives.



The type of application the array is working with determines whether Random or Sequential I/O should be applied. Video/imaging application I/O size can be 128, 256, 512 Kbyte, or up to 1 Mbyte, so the application reads and writes data to and from the drive as large-block, sequential files. Database/transaction-processing applications read and write data from the drive as small-block, randomly-accessed files.
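As a rule of thumb, the workload's typical I/O size points to the mode. The following hypothetical helper (not a firmware function) sketches the decision described above; the 128-Kbyte threshold is an assumption for illustration:

```python
def suggest_optimization(typical_io_kbytes: float) -> str:
    """Rule-of-thumb sketch, not a firmware setting: large-block,
    sequential workloads (video/imaging, 128 Kbyte and up) favor
    Sequential optimization; small-block, randomly accessed workloads
    (database/transaction processing) favor Random optimization."""
    return "Sequential" if typical_io_kbytes >= 128 else "Random"

assert suggest_optimization(512) == "Sequential"  # video/imaging I/O
assert suggest_optimization(8) == "Random"        # transaction processing
```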

There are two limitations that apply to the optimization modes:



Note - The maximum allowable size of a logical drive optimized for Sequential I/O is 2 Tbyte. The maximum allowable size of a logical drive optimized for Random I/O is 512 Gbyte. When creating a logical drive that is greater than these limits, an error message is displayed.



For more information about optimization modes, including how to change your optimization, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for your array.

5.6.4.1 Maximum Number of Disks and Maximum Usable Capacity for Random and Sequential Optimization

Your choice of Random or Sequential optimization affects the maximum number of disks you can include in an array and the maximum usable capacity of a logical drive. The following tables contain the maximum number of disks per logical drive and the maximum usable capacity of a logical drive.



Note - You can have a maximum of eight logical drives and 36 disks, using one array and two expansion units.



TABLE 5-3 Maximum Number of Disks per Logical Drive for a 2U Array

Disk Capacity  RAID 5          RAID 3          RAID 1          RAID 0
(GB)           Random  Seq.    Random  Seq.    Random  Seq.    Random  Seq.
36.2           14      31      14      31      28      36      14      36
73.4           7       28      7       28      12      30      6       27
146.8          4       14      4       14      6       26      3       13


TABLE 5-4 Maximum Usable Capacity (Gbyte) per Logical Drive for a 2U Array

Disk Capacity  RAID 5          RAID 3          RAID 1          RAID 0
(GB)           Random  Seq.    Random  Seq.    Random  Seq.    Random  Seq.
36.2           471     1086    471     1086    507     543     507     1122
73.4           440     1982    440     1982    440     1101    440     1982
146.8          440     1908    440     1908    440     1908    440     1908
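The Random-optimization entries above follow from the standard RAID capacity formulas applied to the disk counts in TABLE 5-3. A sketch (the Sequential entries mostly follow the same formulas, though a few are further reduced by firmware limits):

```python
def usable_gb(raid: int, disks: int, disk_gb: float) -> int:
    """Usable capacity of a logical drive by RAID level, using the
    standard formulas: RAID 0 uses all disks, RAID 1 mirrors (half the
    capacity), RAID 3/5 dedicate one disk's worth of capacity to parity."""
    if raid == 0:
        data_disks = disks
    elif raid == 1:
        data_disks = disks / 2
    elif raid in (3, 5):
        data_disks = disks - 1
    else:
        raise ValueError("unsupported RAID level")
    return round(data_disks * disk_gb)

# Cross-check against the Random columns of TABLE 5-3 / TABLE 5-4:
assert usable_gb(5, 14, 36.2) == 471   # RAID 5, 14 x 36.2-GB disks
assert usable_gb(1, 28, 36.2) == 507   # RAID 1, 28 x 36.2-GB disks
assert usable_gb(0, 3, 146.8) == 440   # RAID 0, 3 x 146.8-GB disks
```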




Note - You might not be able to use all disks for data when using 36 146-Gbyte disks. Any remaining disks can be used as spares.



5.6.5 Reviewing Default Logical Drives and RAID Levels

A logical drive is a set of drives grouped together to operate under a given RAID level. Each controller is capable of supporting as many as eight logical drives. The logical drives can have the same or different RAID levels.

For a 12-drive array, the RAID array is preconfigured as follows:

For a 5-drive array, the RAID array is preconfigured as follows:



Note - While the ability to create and manage logical volumes remains a feature of Sun StorEdge 3000 Family FC and SCSI RAID arrays for legacy reasons, the size and performance of physical and logical drives have made the use of logical volumes obsolete. Logical volumes are unsuited to some modern configurations such as Sun Cluster environments, and do not work in those configurations. Avoid using them and use logical drives instead. For more information about logical drives, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.



The following table highlights the RAID levels available.

TABLE 5-5 RAID Level Definitions

RAID Level   Description

RAID 0       Striping without data redundancy; provides maximum performance.

RAID 1       Mirrored or duplexed disks; for each disk in the system, a duplicate disk is maintained for data redundancy. Requires 50% of total disk capacity for overhead.

RAID 3       Striping with dedicated parity. Parity is dedicated to one drive; data is divided into blocks and striped across the remaining drives.

RAID 5       Striping with distributed parity; the best-suited RAID level for multitasking or transaction processing. Data and parity are striped across each drive in the logical drive, so that each drive contains a combination of data and parity blocks.

NRAID        Non-RAID. The NRAID option in the firmware application is no longer used and is not recommended.

RAID 1+0     Combines RAID 1 and RAID 0: mirroring and disk striping. RAID 1+0 tolerates multiple drive failures because of the full redundancy of the hard disk drives. If four or more hard disk drives are chosen for a RAID 1 logical drive, RAID 1+0 is performed automatically.

RAID (3+0)   A logical volume with several RAID 3 member logical drives.

RAID (5+0)   A logical volume with several RAID 5 member logical drives.


For more information about logical drives, spares, and RAID levels, refer to Chapter 1, Basic Concepts, in the Sun StorEdge 3000 Family RAID Firmware User's Guide.

5.6.6 Completing Basic Configuration

5.6.7 Creating Logical Drive(s) (optional)

The RAID array is already configured with one or two RAID 5 logical drives and one global spare. Each logical drive consists of a single partition by default.

This procedure is used to modify the RAID level and to add more logical drives, if necessary. In this procedure, you configure a logical drive to contain one or more hard drives based on the desired RAID level, and partition the logical drive into additional partitions.



Note - If you want to assign 128 partitions to 128 LUNs in an array, you need to have a minimum of four logical drives (each with 32 partitions).



For redundancy across separate channels, you can also create a logical drive containing drives distributed over separate channels. You can then partition the logical unit into one or several partitions.

A logical drive consists of a group of SCSI drives. Each logical drive can be configured with a different RAID level.

A drive can be assigned as the local spare drive to one specified logical drive, or as a global spare drive that is available to all logical drives on the RAID array. Spares can be part of automatic array rebuild. A spare is not available for logical drives with no data redundancy (RAID 0).

 FIGURE 5-2 Example Allocation of Local and Spare Drives in Logical Configurations

Diagram showing an example allocation of local and spare drives in logical configurations.

View the connected drives. Before configuring disk drives into a logical drive, it is necessary to understand the status of physical drives in your enclosure.

1. Choose "view and edit Scsi drives."

This displays information of all the physical drives that are installed.

 Screen capture shows the physical drives status window accessed with the "view and edit Scsi drives" command.

2. Use the arrow keys to scroll through the table. Check that all installed drives are listed here.

If a drive is installed but is not listed, it might be defective or incorrectly installed; contact your RAID supplier.

When the power is on, the controller scans all hard drives that are connected through the drive channels. If a hard drive was connected after the controller completed initialization, use the "Scan SCSI Drive" function to let the controller recognize the newly added hard drive and configure it as a member of a logical drive.




Caution - Scanning an existing drive removes its assignment to any logical drive. All data on that drive is lost.



After you have determined the status of your disk drives, create a logical drive with the following steps.

1. Choose "view and edit Logical drive" from the Main Menu.

 Screen capture shows the empty logical drive status window accessed with the "view and edit Logical drive" command when there are no preconfigured logical drives.

2. Select the first available unassigned logical drive (LG) to proceed.

You can create as many as eight logical drives from drives on any SCSI bus.

3. When prompted with "Create Logical Drive?", select Yes.

A pull-down list of supported RAID levels is displayed.

4. Select a RAID level for this logical drive.

RAID 5 is used in the following example screens.

 Screen capture shows RAID levels window with "RAID 5" selected.

For brief descriptions of RAID levels, refer to Reviewing Default Logical Drives and RAID Levels. For more information about RAID levels, refer to Chapter 1 in the Sun StorEdge 3000 Family RAID Firmware User's Guide.

5. Select your member drive(s) from the list of available physical drives.

The drives can be tagged for inclusion by highlighting the drive and then pressing Return. An asterisk (*) mark is displayed on the selected physical drive(s).

To deselect the drive, press Return again on the selected drive. The "*" mark disappears.



Note - You must select at least the minimum number of drives required per RAID level.



a. Use the up and down arrow keys and press Return to select more drives.

 Screen capture shows a list of available physical drives for logical drive 0, and member drive 0 is selected.

b. After all physical drives have been selected for the logical drive, press the Esc key to continue to the next option.

After member physical drives are selected, a list of selections is displayed.

 Screen capture showing Maximum Drive Capacity selected.

6. (Optional) Set Maximum Physical Drive Capacity.

a. (Optional) Choose "Maximum Drive Capacity" from the menu.

The Maximum Available Drive Capacity and Maximum Drive Capacity are displayed.



Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.



 Screen capture showing Maximum Available Drive Capacity and Maximum Drive Capacity parameters.

b. (Optional) Type a desired drive capacity and press Return if you want to reduce drive capacity.

As a rule, a logical drive should be composed of physical drives with the same capacity. A logical drive can only use the capacity of each drive up to the maximum capacity of the smallest drive.

7. (Optional) Add a local spare drive from the list of unused physical drives.



Note - A global spare cannot be created while creating a logical drive.



The spare chosen here is a local spare and automatically replaces any failed disk drive in this logical drive. The local spare is not available for any other logical drive.

a. (Optional) Choose "Assign Spare Drives" from the menu.

A list of available drives is displayed.

 Screen capture shows the "Assign Spare Drives" command selected.

b. (Optional) Select a drive from the displayed list and press Return.

An asterisk is displayed in the Slot column of the selected drive.

 Screen capture shows a list of unused drives with their properties, namely slot, channel, ID, size, speed, LG_DRV, status, and vendor and product ID.


Note - A logical drive created in a RAID level which has no data redundancy (RAID 0) does not support spare drive rebuilding.



c. (Optional) Press the Esc key to continue.

8. (Optional) Assign this logical drive to the secondary controller.

By default, all logical drives are automatically assigned to the primary controller.

If you use two controllers for the redundant configuration, a logical drive can be assigned to either of the controllers to balance workload. Logical drive assignment can be changed any time later.

a. Choose "Logical Drive Assignments."

A confirmation dialog is displayed.

 Screen capture shows the "Redundant Controller Logical Drive Assign to Secondary Controller?" prompt displayed.

b. Choose Yes to confirm, or press the Esc key or choose No to exit from this window without changing the controller assignment.

9. Press the Esc key to continue.

A confirmation box is displayed on the screen.

10. Verify all information in the box before choosing Yes to continue.

 Screen capture shows a prompt "Create Logical Drive?" which includes the new logical drive information.

A message indicates that the logical drive initialization has begun.

11. Press Esc key to cancel the "Notification" prompt.

12. After the logical drive initialization is completed, use the Esc key to return to the Main Menu.

13. Choose "view and edit Logical drives" to view details of the created logical drive.

5.6.8 Preparing for Logical Drives Larger Than 253 Gbytes on Solaris Systems

The Solaris operating environment requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating environment for logical drives larger than 253 Gbyte, change the default settings to "< 65536 Cylinders" and "255 Heads"; these settings cover all logical drives over 253 Gbyte and under the maximum limit. The controller automatically adjusts the sector count so that the operating environment can read the correct drive capacity.

For Solaris operating environment configurations, use the values in the following table.

TABLE 5-6 Cylinder and Head Mapping for the Solaris Operating Environment

Logical Drive Capacity   Cylinder             Head                 Sector
< 253 GB                 variable (default)   variable (default)   variable (default)
253 GB - 1 TB            < 65536 Cylinders    255                  variable (default)
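The cylinder limit can be checked arithmetically. With 512-byte sectors, the reported capacity is cylinders x heads x sectors per track x 512. The following sketch is illustrative only; the per-track sector count is an example value, since the controller chooses it automatically ("variable (default)"):

```python
SECTOR_BYTES = 512

def cylinders_needed(capacity_bytes: int, heads: int,
                     sectors_per_track: int) -> int:
    """Cylinders the controller must report for a given capacity.
    Illustrative arithmetic only; 128 sectors/track below is an
    example, not a controller setting."""
    bytes_per_cylinder = heads * sectors_per_track * SECTOR_BYTES
    return -(-capacity_bytes // bytes_per_cylinder)  # ceiling division

# With 255 heads, even a 1-Tbyte logical drive fits under the
# 65536-cylinder limit once the sector count per track is large enough:
one_tb = 10**12
assert cylinders_needed(one_tb, 255, 128) < 65536
```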




Note - Earlier versions of the Solaris operating environment do not support drive capacities larger than 1 terabyte.



These settings are also valid for all logical drives under 253 Gbyte. After the settings are changed, they apply to all logical drives in the chassis.

To revise the Cylinder and Head settings, perform the following steps.

1. Choose "view and edit Configuration parameters right arrow Host-Side SCSI Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges - Variable right arrow 255 Sectors right arrow Head Ranges - Variable."

2. Specify "255 Heads."

3. Choose "Cylinder Ranges - Variable right arrow < 65536 Cylinders."

 Screen capture shows "< 65536 Cylinders" selected.

Refer to Sun StorEdge 3000 Family RAID Firmware User's Guide for more information about firmware commands used with logical drives.

5.6.9 Changing a Logical Drive Controller Assignment (Optional)

By default, logical drives are automatically assigned to the primary controller. If you assign half the drives to the secondary controller, speed and performance are somewhat improved due to the redistribution of traffic.

To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).

After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see Mapping Logical Drive Partitions to Host LUNs).

To change a logical drive controller assignment:

1. Choose "view and edit Logical drives."

2. Select a logical drive.

3. Choose "logical drive Assignments."

A confirmation dialog is displayed.

4. Choose Yes to confirm.

 Screen capture shows the "logical drive Assignments" command displayed, then the "Redundant Controller Logical Drive Assign to Secondary Controller?" prompt.

The reassignment is evident from the "view and edit Logical drives" screen.

A "P" in front of the LG number means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to the secondary controller.

For example, "S1" indicates that logical drive 1 is assigned to the secondary controller.



Note - The editable logical drive NAME is used only in RAID firmware administration and monitoring, and does not appear anywhere on the host. You can create a logical drive NAME after the logical drive is created: select the logical drive in the previous screen and press Return. Then select "logical drive name," type the desired name, and press Return.



A confirmation message is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?


5. Choose Yes to reset the controller.

You must reset the controller for the changes to take effect.

5.6.10 Partitioning a Logical Drive (optional)

You can divide a logical drive into several partitions or use the entire logical drive as a single partition. You can configure up to 32 partitions for each logical drive.

For guidelines on setting up 128 LUNs, refer to Planning for 128 LUNs (Optional).




Caution - If you modify the size of a partition or logical drive, you lose all data on those drives.



 FIGURE 5-3 Partitions in Logical Configurations

Diagram shows logical drive 0 with three partitions and logical drive 1 with three partitions.

To partition a logical drive, perform the following steps.

1. Choose "view and edit Logical drives" from the Main Menu.

2. Select the logical drive you want to partition.

 Screen capture shows a logical drive selected in the "view and edit Logical drives" window.

3. Choose "Partition logical drive" from the menu.

 Screen capture shows the "Partition logical drive" command selected in the "view and edit Logical drives" window.

If the logical drive has not already been partitioned, this message is displayed:

Partitioning the Logical Drive will make it no longer eligible for membership in a logical volume.

Continue Partition Logical Drive?


4. Choose Yes.

5. Select a partition from the list of defined partitions.

A list of the partitions for this logical drive appears. If the logical drive has not yet been partitioned, all the logical drive capacity is listed as "partition 0."

6. Type the desired size for the selected partition and press Return.

 Screen capture shows a selected partition and Partition Size <MB>: 3000.

A warning prompt is displayed:

This operation will result in the loss of all data on the partition.
Partition Logical Drive?





Caution - Make sure any data on this partition that you want to save has been backed up before you partition the logical drive.



7. Choose Yes to confirm.

The remaining capacity of the logical drive is automatically allotted to the next partition. In the following figure, a partition size of 3000 MB was entered; the remaining storage of 27000 MB is allocated to the partition below the partition created.

 Screen capture shows the partition allocation with the 3000MB partition and the remaining 27000 MB storage allocated to the partition below.

8. Repeat the preceding steps to partition the remaining capacity of your logical drive.

You can create up to 32 partitions per logical drive, with a total number of partitions not to exceed 128 partitions/LUNs per RAID array.
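The capacity arithmetic in the example above (3000 Mbyte carved from a 30000-Mbyte logical drive, leaving 27000 Mbyte) can be sketched as follows. `partition_sizes` is a hypothetical helper for illustration, not a firmware function:

```python
def partition_sizes(total_mb: int, requested_mb: list[int]) -> list[int]:
    """Illustrative arithmetic for firmware partitioning: each size you
    type is carved off, and the remaining capacity is automatically
    allotted to the next partition."""
    sizes, remaining = [], total_mb
    for requested in requested_mb:
        sizes.append(requested)
        remaining -= requested
    sizes.append(remaining)  # leftover capacity becomes the last partition
    return sizes

# Entering 3000 MB on a 30000-MB logical drive leaves 27000 MB
# allocated to the following partition:
assert partition_sizes(30000, [3000]) == [3000, 27000]
```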



Note - When you modify a partition or logical drive size, you must reconfigure all host LUN mappings. All the host LUN mappings are removed with any change to partition capacity. See Mapping Logical Drive Partitions to Host LUNs.





Note - When a partition of a logical drive or logical volume is deleted, the capacity of the deleted partition is added to the partition above it.



5.6.11 Planning for 128 LUNs (Optional)

If you want to create 128 LUNs, the maximum number of storage partitions that can be mapped for a RAID array, set up one of the following configurations:

or

For details on how to add host IDs, refer to Creating Additional Host IDs (Optional).



Note - For an overview of how partitions, LUNs, and host IDs work, refer to Mapping Logical Drive Partitions to Host LUNs.



To set up 128 LUNs, the following steps are required.

1. Create a minimum of four host IDs.

By default, you have two host IDs: Channel 1 ID 0 (primary controller) and Channel 3 ID 1 (secondary controller). You can have a total of two IDs per channel, one for the primary controller and one for the secondary controller.

For the detailed procedure, refer to Creating Additional Host IDs (Optional).

2. Confirm that the allowed number of LUNs per host ID is 32.

Choose "view and edit Configuration parameters right arrow host-side SCSI Parameters."

If the "LUNs per Host SCSI ID" is not 32, highlight the line, press Return and select the number 32. Then choose Yes to confirm.

 Screen capture shows "LUNs per Host SCSI ID" parameter selected through the "view and edit configuration parameters," then "Host-side SCSI Parameters."

3. Create at least four logical drives.

For the detailed procedure, refer to Creating Logical Drive(s) (optional).

4. For each logical drive, create partitions until you reach a total of 128 partitions, then map those partitions to the host IDs.

For the detailed procedures, refer to Partitioning a Logical Drive (optional) and Mapping Logical Drive Partitions to Host LUNs.
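The planning arithmetic behind these four steps can be checked with a short sketch. The numbers below come from this chapter; nothing is queried from an array.

```python
# 128-LUN planning arithmetic from the steps above.
host_ids = 4                # Step 1: minimum number of host IDs
luns_per_host_id = 32       # Step 2: "LUNs per Host SCSI ID" setting
logical_drives = 4          # Step 3: minimum number of logical drives
max_partitions_per_ld = 32  # firmware limit per logical drive

# Both products must reach the 128-partition/LUN array maximum.
assert host_ids * luns_per_host_id == 128
assert logical_drives * max_partitions_per_ld >= 128
```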

5.6.12 Mapping Logical Drive Partitions to Host LUNs

The next step is to map each storage partition as one system drive (host ID/LUN). The host SCSI adapter recognizes the system drives after re-initializing the host bus.

A SCSI channel (SCSI bus) can connect up to 15 devices (excluding the controller itself) when the Wide function is enabled (16-bit SCSI). Each device has one unique ID.

The following figure illustrates the idea of mapping a system drive to a host ID/LUN combination.

 Figure shows the SCSI ID as a file cabinet and its LUNs as file drawers.

Each SCSI ID/LUN looks like a storage device to the host computer.

 FIGURE 5-4 Mapping Partitions to Host ID/LUNs

Figure shows LUN partitions mapped to ID 0 on Channel 1 and to ID 1 on Channel 3.

To map a logical drive partition to a LUN, perform the following steps.

1. Choose "view and edit Host luns" from the Main Menu.

2. Select a specific host-channel ID and press Return. Select a logical drive if prompted.

 Screen capture with "view and edit Host luns" selected, then "CHL 0 ID 0 (Primary Controller)" and "Logical Drive" selected.

3. Select a LUN number, and press Return.

A list of available logical drives is displayed.

4. Select a logical drive.

A list of available partitions is displayed.

5. Select a partition.

 Screen capture showing a LUN number and partition selected.

6. Choose "Map Host LUN."

A confirmation dialog is displayed.

 Screen capture showing "Map Host LUN" selected.

7. Choose Yes to confirm.

 Screen capture showing "Mapping Scheme" prompt.

The same partition can be mapped to multiple LUNs on multiple host channels. This capability is necessary for clustered and redundant-path environments.

8. Press the Esc key to return to the Main Menu.

9. Repeat Step 1 through Step 8 for each partition until all partitions are mapped to a LUN.

10. Choose "system Functions right arrow Reset controller" and then choose Yes to reset the controller and implement the new configuration settings.

11. Each operating system or environment has its own method for recognizing storage devices and LUNs, and might require specific commands or the modification of specific files. Check the information for your operating system or environment to ensure that you have performed the necessary commands or file edits.

For information about the different operating environments and operating systems, see:
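The mapping built up in Steps 1 through 9 can be sketched as a table keyed by (channel, host ID, LUN). This is a hypothetical model invented for illustration, not the firmware's data structure; it shows one partition mapped through two host channels, as in a redundant-path configuration.

```python
# Hypothetical model of the host-LUN map: each (channel, host ID, LUN)
# key points at one (logical drive, partition) pair.
def map_host_lun(lun_map, channel, host_id, lun, ld, partition):
    key = (channel, host_id, lun)
    if key in lun_map:
        raise ValueError(f"LUN {key} is already mapped")
    lun_map[key] = (ld, partition)

lun_map = {}
# Map logical drive 0, partition 0 through both default host IDs:
# Channel 1 ID 0 (primary) and Channel 3 ID 1 (secondary).
map_host_lun(lun_map, channel=1, host_id=0, lun=0, ld=0, partition=0)
map_host_lun(lun_map, channel=3, host_id=1, lun=0, ld=0, partition=0)
```

The host then sees the same partition as two system drives, one per path, which is the basis of the dual-pathed LUNs required for VERITAS DMP later in this chapter.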

5.6.13 Saving Configuration (NVRAM) to a Disk

You can back up your controller-dependent configuration information. Use this function to save configuration information whenever a configuration change is made.

The logical configuration information is stored within the logical drive.



Note - A logical drive must exist for the controller to write NVRAM content onto it.



1. Choose "system Functions right arrow controller maintenance right arrow save NVRAM to disks."

A confirmation dialog is displayed.

2. Choose Yes to confirm.

A prompt confirms that the NVRAM information has been successfully saved.

 Screen capture shows the "Save nvram to disks" command accessed through the "system Functions" command and "Configuration Parameters" command.

To restore the configuration, refer to Restoring Your Configuration (NVRAM) From a File.


5.7 Installing Software

The following software tools are available on the Sun StorEdge 3000 Family Professional Storage Manager CD, provided with your array:

The Sun StorEdge 3000 Family Documentation CD provides the related user guides with detailed installation and configuration procedures for these tools.

5.7.1 Other Supported Software

For other supported software, see the release notes for your array.

5.7.2 Enabling VERITAS DMP

To enable VERITAS Dynamic Multi-Pathing (DMP) support on VERITAS Volume Manager Version 3.2 for a RAID array, perform the following steps.

1. Configure at least two SCSI channels as host channels (channels 1 and 3 by default) and add additional SCSI host IDs if needed.

2. Connect host cables to the I/O host ports configured in Step 1.

3. Map each LUN to two host channels to provide dual-pathed LUNs.

4. Use vxddladm to add the correct SCSI string so that VxVM can manage the LUNs as a multipathed JBOD.

# vxddladm addjbod vid=SUN pid="StorEdge 3310"
# vxddladm listjbod
VID      PID            Opcode  Page Code  Page Offset  SNO length
==================================================================
SEAGATE  ALL PIDs       18      -1         36           12
SUN      StorEdge 3310  18      -1         36           12

5. Reboot the hosts. A system reboot is required to implement these changes.