C H A P T E R  5

First-Time Configuration

This chapter summarizes the most common procedures used for first-time configuration.

This chapter covers the following topics:

- Controller Defaults and Limitations
- Battery Operation
- Software Management Tools
- Configuration Overview
- Initial Configuration Steps
- Mapping Logical Drive Partitions to Host LUNs

Subsequent chapters in this manual describe further procedures used to complete the installation and configuration of FC arrays. The flexible architecture of the Sun StorEdge 3510 FC Array makes many configurations possible.


5.1 Controller Defaults and Limitations

This section describes default configurations and certain controller limitations.

5.1.1 Planning for Reliability, Availability, and Serviceability

The entry-level configuration for an FC array uses only one controller. You can mirror two single-controller arrays using volume manager software on attached servers to ensure high reliability, availability, and serviceability (RAS).

You can also use dual-controller arrays to avoid a single point of failure. A dual-controller FC array features a default active-to-active controller configuration. This configuration provides high reliability and high availability because, in the unlikely event of a controller failure, the array automatically fails over to a second controller, resulting in no interruption of data flow.

Other dual-controller configurations can be used as well. For instance, at a site where maximum throughput or connecting to the largest possible number of servers is of primary importance, you could use a high-performance configuration. Refer to the Sun StorEdge 3000 Family Best Practices Manual for the Sun StorEdge 3510 FC Array for information about array configurations.

Be aware, however, that departing from a high-availability configuration can significantly decrease the mean time between data interruptions. System downtime is not affected as severely: the time required to replace a controller, if one is available, is only about five minutes.

Regardless of configuration, customers requiring high availability should stock field-replaceable units (FRUs), such as disk drives and controllers, on site. Your FC array has been designed to make replacing these FRUs easy and fast.

5.1.2 Dual-Controller Considerations

The following controller functions characterize redundant controller operation.

The two controllers continuously monitor each other. When a controller detects that the other controller is not responding, the working controller immediately takes over and disables the failed controller.

An active-to-standby configuration is also available but is not usually selected. By assigning all the logical configurations of drives to one controller, the other controller stays idle and becomes active only if its counterpart fails.

5.1.3 Single-Controller Considerations

In a single-controller configuration, keep the controller as the primary controller at all times and assign all logical drives to it. The primary controller controls all logical drive and firmware operations; a controller that is not set as the primary controller cannot operate.

The secondary controller is used only in dual-controller configurations for redistributed I/O and for failover.




Caution - Do not disable the Redundant Controller setting and do not set the controller as a secondary controller. If you disable the Redundant Controller setting and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and will need to be replaced.



The Redundant Controller setting ("View and Edit Peripheral Devices → Set Peripheral Device Entry") must remain enabled for single-controller configurations. This preserves the default primary controller assignment of the single controller. The controller status shows "scanning," which indicates that the firmware is scanning for primary and secondary controller status; redundancy is enabled even though it is not used. There is no performance impact.


5.2 Battery Operation

The battery LED (on the far right side of the controller module) is amber if the battery is bad or missing, blinks green while the battery is charging, and is solid green when the battery is fully charged.

5.2.1 Battery Status

The initial firmware screen displays the battery status at the top of the screen. The BAT: status ranges from BAD, through ----- (charging), to +++++ (fully charged).

For maximum life, lithium ion batteries are not recharged until the charge level is very low, as indicated by a status of -----. Automatic recharging at this point takes very little time.

A battery module whose status shows one or more + signs can support cache memory for 72 hours. As long as one or more + signs are displayed, your battery is performing correctly.

TABLE 5-1 Battery Status Indicators

Battery Display   Description

-----             Discharged; the battery is automatically recharged when it reaches this state.
+----             Adequately charged to maintain cache memory for 72 hours or more in case of power loss. Automatic recharging occurs when the battery status drops below this level.
++---             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
+++--             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
++++-             Over 90% charged; adequate to maintain cache memory for 72 hours or more in case of power loss.
+++++             Fully charged; adequate to maintain cache memory for 72 hours or more in case of power loss.


Your lithium ion battery should be changed every two years if the unit is continuously operated at 25 degrees C. If the unit is continuously operated at 35 degrees C or higher, it should be changed every year. The shelf life of your battery is three years.



Note - The RAID controller has a temperature sensor which shuts off battery charging above 54 degrees C. When this happens, the battery status may be reported as BAD, but no alarm is written to the event log since no actual battery failure has occurred. This behavior is normal. As soon as the temperature returns to the normal range, battery charging resumes and the battery status is reported correctly. It is not necessary to replace or otherwise interfere with the battery in this situation.



For more information, see Environmental Requirements for the acceptable operating and nonoperating temperature ranges for your array.

For information on the date of manufacture and how to replace the battery module, refer to the Sun StorEdge 3000 Family FRU Installation Guide.

5.2.2 Write-Back and Write-Through Cache Options

In write-back mode, unfinished writes are cached in memory. If power to the array is interrupted, data stored in the cache memory is not lost; battery modules can support cache memory for 72 hours.

Write cache is not automatically disabled when the battery goes offline due to battery failure or a disconnected battery. You can enable or disable the write-back cache capability of the RAID controller. To ensure data integrity, you can disable the write-back cache option and switch to the write-through cache option through the firmware application by choosing "view and edit Configuration parameters" and then choosing "Caching Parameters."


5.3 Software Management Tools

You can manage your array through an out-of-band or in-band connection.

5.3.1 Out-of-Band Connection




Caution - If you assign an IP address to an array to manage it out-of-band, for security reasons it is advisable to put the IP address on a private network rather than on a publicly routable network.




5.4 Configuration Overview

The Sun StorEdge 3510 FC Array is preconfigured and requires minimal configuration. All procedures can be performed by using the COM port. Alternatively, you can perform all procedures except the assignment of an IP address through an Ethernet port connection to a management console.

The typical sequence of steps for completing a first-time configuration of the array is to:

1. Make sure that the mounting of the array on a rack, cabinet, desk, or table is complete.

2. Set up the serial port connection. See Configuring a COM Port to Connect to a RAID Array.

3. Set up an IP address for the controller. See Setting an IP Address.

4. Check available physical drives. See Checking Available Physical Drives.

5. Determine whether sequential or random optimization is more appropriate for your applications and configure your array accordingly. See Selecting Sequential or Random Optimization.

6. (Optional) Configure host channels as drive channels. See Configuring FC Channels as Host or Drive (Optional).

7. Confirm or change the Fibre Connection Option (point-to-point or loop). See Choosing Loop or Point-to-Point Fibre Connection.

8. Revise or add host IDs on host channels. See Editing and Creating Additional Host IDs (Optional).

The IDs assigned to controllers take effect only after the controller is reset.

9. (Optional) Remove default logical drives and create new logical drives. See Creating Logical Drives (Optional).

10. (Optional) In dual-controller configurations only, assign logical drives to the secondary controller to load-balance the two controllers. See Changing a Logical Drive Controller Assignment (Optional).




Caution - In single-controller configurations, do not disable the Redundant Controller Function and do not set the controller as a secondary controller. The primary controller controls all firmware operations and must be the assignment of the single controller. If you disable the Redundant Controller Function and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and will need to be replaced.





Note - While the ability to create and manage logical volumes remains a feature of Sun StorEdge 3000 family FC and SCSI RAID arrays for legacy reasons, the size and performance of physical and logical drives have made the use of logical volumes obsolete. Logical volumes are unsuited to some modern configurations, such as Sun Cluster environments, and do not work in those configurations. Avoid using them and use logical drives instead. For more information about logical drives, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.



11. (Optional) Partition the logical drives. See Partitioning a Logical Drive (Optional).

12. Map each logical drive partition to an ID on a host channel, or apply a host LUN filter to the logical drives. See First Steps in Mapping a Partition to a LUN for more information.



Note - Each operating environment or operating system has a method for recognizing storage devices and LUNs and might require the use of specific commands or the modification of specific files. Be sure to check the information for your operating environment to ensure that you have performed the necessary procedures.



For information about different operating environment procedures, refer to:

13. Reset the controller.

Configuration is complete.



Note - Resetting the controller can result in occasional host-side error messages such as parity error and synchronous error messages. No action is required and the condition corrects itself as soon as reinitialization of the controller is complete.



14. Save the configuration to a disk. See Saving Configuration (NVRAM) to a Disk.

15. Make sure that the cabling from the RAID array to the hosts is complete.



Note - You can reset the controller after each step or at the end of the configuration process.






Caution - Avoid using in-band and out-of-band connections at the same time to manage the array. Otherwise, conflicts between multiple operations can cause unexpected results.



5.4.1 Point-to-Point Configuration Guidelines

Remember the following guidelines when implementing point-to-point configurations in your array and connecting to Fabric switches:




Caution - If you keep the default loop mode and connect to a Fabric switch, the array automatically shifts to public loop mode. As a result, communication between the array and the switched Fabric runs in half duplex (send or receive) instead of providing the full duplex (send and receive) performance of point-to-point mode.



The controller displays a warning if you are in point-to-point mode and try to add an ID to the same channel but on the other controller. The warning is displayed because you can disable the internal connection between the channels on the primary and secondary controllers with the set inter-controller link CLI command; with that link disabled, having one ID on the primary controller and another ID on the secondary controller is a legal operation.

However, if you ignore this warning and add an ID to the other controller, the RAID controller does not allow a login as a Fabric-Loop (FL) port since this would be illegal in a point-to-point configuration.




Caution - In point-to-point mode or in public loop mode, only one switch port is allowed per channel. Connecting more than one port per channel to a switch can violate the point-to-point topology of the channel, force two switch ports to "fight" over an AL_PA (arbitrated loop physical address) value of 0 (which is reserved for loop to Fabric attachment), or both.



For example, to provide redundancy, map half of the LUNs across Channel 0 (PID 40) and Channel 4 (PID 42), and then map the other half of your LUNs across Channel 1 (SID 41) and Channel 5 (SID 43).



Note - When in loop mode and connected to a Fabric switch, each host ID is displayed as a loop device on the switch so that, if all 16 IDs are active on a given channel, the array looks like a loop with 16 nodes attached to a single switch FL port.

In public loop mode, the array can have a maximum of 1024 LUNs, where 512 LUNs are dual-mapped across two channels, primary and secondary controller respectively.



5.4.2 A SAN Point-to-Point Configuration Example

A point-to-point configuration has the following characteristics:

In a dual-controller array, one controller automatically takes over all operation of a second failed controller in all circumstances. However, when an I/O controller module needs to be replaced and a cable to an I/O port is removed, the I/O path is broken unless multipathing software has established a separate path from the host to the operational controller. Supporting hot-swap servicing of a failed controller requires the use of multipathing software, such as Sun StorEdge Traffic Manager software, on the connected servers.



Note - Multipathing for the Sun StorEdge 3510 FC Array is provided by Sun StorEdge Traffic Manager software. Refer to Sun StorEdge 3510 FC Array Release Notes for information about which versions of Sun StorEdge Traffic Manager software are supported on which platforms.



Important rules to remember are:

FIGURE 5-1 shows the channel numbers (0, 1, 4, and 5) of each host port and the host ID for each channel. N/A means that the port does not have a second ID assignment. The primary controller is in the top I/O controller module, and the secondary controller is in the bottom I/O controller module.

The dashed lines between two ports indicate a port bypass circuit that functions as a mini-hub and has the following advantages:

In FIGURE 5-1, with multipathing software to reroute the data paths, each logical drive remains fully operational when the following conditions occur:

 

 FIGURE 5-1 A Point-to-Point Configuration With a Dual-Controller Array and Two Switches

Figure shows a point-to-point configuration with two servers connecting to the array through two switches.


Note - This illustration shows the default controller locations; however, the primary controller and secondary controller locations can occur on either slot and depend on controller resets and controller replacement operations.



TABLE 5-2 summarizes the primary and secondary host IDs assigned to logical drives 0 and 1, based on FIGURE 5-1.

TABLE 5-2 Example Point-to-Point Configuration With Two Logical Drives in a Dual-Controller Array

Task                                        Logical Drive   LUN IDs   Channel Number   Primary ID Number   Secondary ID Number

Map 32 partitions of LG0 to CH0             LG 0            0-31      0                40                  N/A
Duplicate-map 32 partitions of LG0 to CH1   LG 0            0-31      1                41                  N/A
Map 32 partitions of LG1 to CH4             LG 1            0-31      4                N/A                 50
Duplicate-map 32 partitions of LG1 to CH5   LG 1            0-31      5                N/A                 51
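
Before entering this mapping through the firmware menus, it can help to restate TABLE 5-2 as a simple planning structure and check it. The following Python lines are only a planning illustration; the field names are assumptions and have no meaning to the firmware.

# Planning aid that restates TABLE 5-2; the field names are illustrative only.
mapping_plan = [
    {"logical_drive": "LG 0", "luns": range(32), "channel": 0, "controller": "primary",   "id": 40},
    {"logical_drive": "LG 0", "luns": range(32), "channel": 1, "controller": "primary",   "id": 41},
    {"logical_drive": "LG 1", "luns": range(32), "channel": 4, "controller": "secondary", "id": 50},
    {"logical_drive": "LG 1", "luns": range(32), "channel": 5, "controller": "secondary", "id": 51},
]

# Each logical drive is presented on two channels for redundancy.
for lg in ("LG 0", "LG 1"):
    channels = [entry["channel"] for entry in mapping_plan if entry["logical_drive"] == lg]
    assert len(channels) == 2, lg + " must be mapped to exactly two channels"

# 2 logical drives x 32 LUNs each, dual-mapped, gives 64 working LUNs.
print(sum(len(entry["luns"]) for entry in mapping_plan) // 2)   # 64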


Perform the following steps, which are described in more detail later in this manual, to set up a typical point-to-point SAN configuration based on FIGURE 5-1.

1. Check the position of installed SFPs. Move them as necessary to support the connections needed.

2. Connect expansion units, if needed.

3. Create at least two logical drives (logical drive 0 and logical drive 1) and configure spare disks.

Keep half of the logical drives assigned to the primary controller (the default), and assign the other half of the logical drives to the secondary controller to load-balance the I/O.

4. Create up to 32 partitions (LUNs) in each logical drive, for each server.

5. Change the Fibre Connection Option to "Point to point only."

6. For ease of use in configuring LUNs, change the host IDs on the four channels to be the following assignments:

Channel 0: PID 40 (assigned to the primary controller)

Channel 1: PID 41 (assigned to the primary controller)

Channel 4: SID 50 (assigned to the secondary controller)

Channel 5: SID 51 (assigned to the secondary controller)




Caution - Do not use the command, "Point to point preferred, otherwise loop." This command is reserved for special use and should be used only if directed by technical support.



7. Map logical drive 0 to channels 0 and 1 of the primary controller.

Map LUN numbers 0 through 31 to the single ID on each host channel.

8. Map logical drive 1 to channels 4 and 5 of the secondary controller.

Map LUN numbers 0 through 31 to the single ID on each host channel. Since each set of LUNs is assigned to two channels for redundancy, the total working maximum is 64 LUNs.



Note - The LUN ID numbers and the number of LUNs available per logical drive can vary according to the number of logical drives and the ID assignments you want on each channel.



9. Connect the first switch to ports 0 and 4 of the upper controller.

10. Connect the second switch to ports 1 and 5 of the lower controller.

11. Connect each server to each switch.

12. Install and enable multipathing software on each connected server.

The multipathing software protects against path failure but does not alter the controller redundancy through which one controller automatically takes over all functions of a second, failed controller.

5.4.3 A DAS Loop Configuration Example

The typical direct attached storage (DAS) configuration shown in FIGURE 5-2 includes four servers, a dual-controller array, and two expansion units. Expansion units are optional.

 FIGURE 5-2 A DAS Configuration With Four Servers, a Dual-Controller Array, and Two Expansion Units

Figure shows a loop direct-attached storage configuration with four servers connected to a dual-controller array attached to two expansion units.

Establishing complete redundancy and maintaining high availability requires the use of multipathing software such as Sun StorEdge Traffic Manager software. To configure for multipathing:

1. Establish two connections between a server and a Sun StorEdge 3510 FC Array.

2. Install and enable multipathing software on the server.

3. Map the logical drive the server is using to the controller channels the server is connected to.

DAS configurations are typically implemented using a Fabric loop (FL_port) mode. A loop configuration example is described under A DAS Loop Configuration Example.

Fabric loop (FL_port) connections between a Sun StorEdge 3510 FC Array and multiple servers allow up to 1024 LUNs to be presented to servers.

For guidelines on how to create 1024 LUNs, see Planning for 1024 LUNs (Optional, Loop Mode Only).

Perform the following steps, which are described in more detail later in this manual, to set up a DAS loop configuration based on FIGURE 5-2.

1. Check the position of installed SFPs. Move them as necessary to support the connections needed.

You need to add SFP connectors to support more than four connections between servers and a Sun StorEdge 3510 FC Array. For example, add two SFP connectors to support six connections and add four SFP connectors to support eight connections.

2. Connect expansion units, if needed.

3. Create at least one logical drive per server, and configure spare disks as needed.

4. Create one or more logical drive partitions for each server.

5. Confirm that the Fibre Connection Option is set to "Loop only."

Do not use the "Loop preferred, otherwise point to point" option; it is not supported for this product.




Caution - Do not use the command, "Loop preferred, otherwise point to point." This command is reserved for special use and should be used only if directed by technical support.



6. Set up to eight IDs on each channel, if needed.

TABLE 5-3 Example Primary and Secondary ID Numbers in a Loop Configuration With Two IDs per Channel

Channel Number   Primary ID Number   Secondary ID Number

0                40                  41
1                43                  42
4                44                  45
5                47                  46
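
Before assigning these IDs through the firmware menus, you can write the plan down and verify that no ID is repeated within a channel. The short Python check below is only an illustration of that bookkeeping, based on TABLE 5-3.

# ID plan from TABLE 5-3: channel -> (primary ID, secondary ID).
id_plan = {
    0: (40, 41),
    1: (43, 42),
    4: (44, 45),
    5: (47, 46),
}

for channel, ids in id_plan.items():
    # Each ID number must be unique within its host channel.
    assert len(set(ids)) == len(ids), "duplicate ID on channel " + str(channel)
print("ID plan is consistent")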


7. Map logical drive 0 to channels 0 and 5 of the primary controller.

8. Map logical drive 1 to channels 1 and 4 of the secondary controller.

9. Map logical drive 2 to channels 0 and 5 of the primary controller.

10. Map logical drive 3 to channels 1 and 4 of the secondary controller.

11. Connect the first server to port 0 of the upper controller and port 5 of the lower controller.

12. Connect the second server to port 1 of the lower controller and port 4 of the upper controller.

13. Connect the third server to port 0 of the lower controller and port 5 of the upper controller.

14. Connect the fourth server to port 1 of the upper controller and port 4 of the lower controller.

15. Install and enable multipathing software on each connected server.

5.4.4 Larger Configurations

Larger configurations are possible using additional expansion units. Up to eight expansion units are supported when connected to a Sun StorEdge 3510 FC RAID array. Several such configurations are possible. For more detailed information, and for suggestions about the most appropriate configurations for your applications and environment, refer to the Sun StorEdge 3000 Family Best Practices Manual for the Sun StorEdge 3510 FC array.


5.5 Initial Configuration Steps

The topics in this section present both required and commonly used optional procedures, which apply to both the point-to-point and loop mode configurations in most cases.



Note - If you want to create logical volumes, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide. Logical volumes are not widely used since they do not allow partitions and limit the number of available LUNs.



Most of the configuration you do involves using firmware menus to change settings on the array. However, each host platform involves some initial configuration as well. Refer to the appendix for your host platform for instructions on how to connect your host to an array, for host-specific instructions on recognizing and formatting LUNs including modifying host configuration files, and for other platform-specific details.

5.5.1 Viewing the Initial Firmware Windows

The initial controller screen is displayed when the RAID controller is powered on and when you first access the RAID controller firmware.



Note - Since Fibre Channel and SCSI arrays share the same controller firmware, most menu options are the same. Parameter values might vary according to the product.



1. Use the up and down arrow keys to select the VT100 terminal emulation mode, and then press Return to enter the Main Menu.

See Introducing Key Screens and Commands for detailed information about understanding and using the initial firmware screen.

2. Use the following keys to navigate within the application:

← → ↑ ↓ (arrow keys)                     To select options

Return or Enter                          To perform the selected menu option or display a submenu

Esc                                      To return to the previous menu without performing the selected menu option

Ctrl-L (Ctrl key and L key together)     To refresh the screen information

Boldface capital letter in a command     To access that Main Menu command quickly (keyboard shortcut)


The firmware procedures use the term "Choose" as a shortcut description. Quotation marks are used to indicate a specific menu option or a series of menu options.

Choose "menu option."
    Highlight the menu option and press Return, or press the key that corresponds to the capitalized letter in the menu option, if one is available.

Choose "menu option 1 → menu option 2 → menu option 3."
    This represents a series of nested menu options that you select with the arrow keys. Press Return after each selection to access the next menu item and to complete the series.


3. Proceed to configure the array using options from the Main Menu as described in the rest of this chapter.

 FIGURE 5-3 Firmware Main Menu

Firmware Main Menu with eleven commands listed.

5.5.2 Checking Available Physical Drives

Before configuring disk drives into a logical drive, you must know the status of physical drives in your enclosure.

1. Choose "view and edit scsi Drives" from the Main Menu.

A list of all the installed physical drives is displayed.

 Screen capture shows the physical drives status window accessed with the "view and edit Scsi drives" command.

2. Use the arrow keys to scroll through the table. Check that all installed drives are listed here.



Note - If a drive is installed but is not listed, it might be defective or installed incorrectly.



When the power is on, the controller scans all hard drives that are connected through the drive channels. If a hard drive was connected after the controller completed initialization, use the "Scan scsi drive" menu option to let the controller recognize the newly added hard drive and configure it.



Caution - Using the "Scan scsi drive" menu option to recognize and configure a drive removes its assignment to any logical drive. All data on that drive is lost.



3. To review more information about a drive, highlight it and press Return. Then choose "View drive information" and press Return to view details about that drive.

 Screen capture shows "View drive information" selected.

Additional information is displayed about the drive you selected.

 Screen capture shows drive selected with additional information displayed.

5.5.3 Configuring FC Channels as Host or Drive (Optional)

Sun StorEdge 3510 FC RAID arrays are preconfigured when they arrive from the factory. Default channel settings and rules are specified as follows:

To change a host channel to a drive channel, reconfigure the channel according to the following procedure:

1. Choose "view and edit Scsi channels" from the Main Menu.

Channel information is displayed.

 Screen capture shows the "view and edit Scsi channels" menu option selected and its status table displaying the channel information.


Note - The Mode column for at least one channel must include the RCC abbreviation for Redundant Controller Communications.



2. Highlight the channel that you want to modify, and press Return.

3. Choose "channel Mode."

A menu of channel mode options is displayed.

 Screen capture shows "channel Mode" menu options selected.

4. Modify the channel to suit your requirements.

Refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for more information about modifying channels.

5. Choose Yes to confirm that you want to change the host or drive assignment.

A confirmation message is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?


6. Choose Yes to reset the controller.

5.5.4 Choosing Loop or Point-to-Point Fibre Connection

To confirm or change the fibre connection for the array, perform the following steps:

1. Choose "view and edit Configuration parameters right arrow Host-side SCSI Parameters right arrow Fibre Connection Option."

 Screen capture shows "Host-side SCSI Parameters" selected from the Main Menu and the Maximum Queued I/O Count set at 256 and selected.

2. If you want to view or change the Fibre Connection Option, choose either "Loop only" or "Point to point only" and press Return.

A confirmation dialog is displayed.

3. Choose Yes to confirm.

 Screen capture shows the "Fibre Connection Option" menu option selected and "Point to point only" selected.



Caution - Do not use the bottom command, "Loop preferred, otherwise point to point." This command is reserved for special use and should be used only if directed by technical support.



A confirmation message is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?


4. Choose Yes to reset the controller.

5.5.5 Editing and Creating Additional Host IDs (Optional)

All RAID arrays are preconfigured when they arrive from the factory.

Default host channel IDs are displayed in TABLE 5-4.

TABLE 5-4 Default Host Channel IDs

Channel     Primary Controller ID (PID)   Secondary Controller ID (SID)

Channel 0   40                            N/A
Channel 1   N/A                           42
Channel 4   44                            N/A
Channel 5   N/A                           46


The number of host IDs you can assign depends on the configuration mode (loop or point-to-point).

Typically host IDs are distributed between the primary and secondary controllers to load-balance the I/O in the most effective manner for the network.

Each ID number must be unique within the host channel. You can edit the default host IDs and add host IDs to a channel.



Note - To map 1024 partitions in loop mode, you must add additional host IDs so that 32 IDs are mapped to the array's channels. Several configurations are possible, such as eight IDs mapped to each of the four host channels or sixteen IDs mapped to two channels and none to the other two. For more information, see Planning for 1024 LUNs (Optional, Loop Mode Only).
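
The arithmetic behind the 1024-LUN requirement can be summarized in a few lines. The following Python sketch is only an illustration of the counting; it assumes the 32-LUNs-per-ID and 128-partitions-per-logical-drive maximums used elsewhere in this chapter.

# Counting check for the 1024-LUN loop-mode configuration described in the note above.
host_channels = 4
ids_per_channel = 8      # one possible distribution; 16 IDs on each of two channels also works
luns_per_id = 32         # maximum number of LUNs presented per host ID

total_ids = host_channels * ids_per_channel    # 32 IDs across the array's host channels
total_luns = total_ids * luns_per_id           # 1024 LUNs

# On the logical drive side, 8 logical drives x 128 partitions also yields 1024 partitions.
assert total_luns == 8 * 128 == 1024
print(total_luns)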



To add a unique host ID number to a host channel, perform the following steps.

1. Choose "view and edit Scsi channels" from the Main Menu.

 Screen capture shows the "view and edit Scsi channels" menu option selected and its status table displaying the channel information.

2. Select the host channel whose Primary/Secondary ID you want to edit and press Return.

3. Choose "view and edit scsi Id."

A dialog is displayed that says "No SCSI ID Assignment - Add Channel SCSI ID?"

4. Choose Yes to confirm.

"Primary Controller" and "Secondary Controller" are displayed in a menu.

5. Select the primary or secondary controller to which you want to add a host ID.

By default, channel 0 has a primary ID (PID) and no secondary ID (SID), while channel 1 has a SID but no PID.

6. Select an ID number for that controller and press Return.

7. Confirm your selection by choosing Yes and pressing Return.

 Screen capture shows a SCSI ID selected and the "Yes" option highlighted underneath the "Add Primary Controller SCSI ID" confirmation message.

A confirmation message is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?


8. Choose Yes to reset the controller.

The configuration change takes effect only after the controller is reset.

5.5.6 Selecting Sequential or Random Optimization

Before creating or modifying logical drives, select the optimization mode for all logical drives you create. The optimization mode determines the block size used when writing data to the drives in an array.



Note - Your array is preconfigured for Sequential Optimization. If Random Optimization is most appropriate for your use, you will need to delete all of the preconfigured logical drives, change the optimization mode, and then create new logical drives.



The type of application the array works with determines whether random or sequential I/O should be applied. The I/O size for video and imaging applications can be 128, 256, or 512 Kbyte, or up to 1 Mbyte, so these applications read and write data to and from the drive as large-block, sequential files. Database and transaction-processing applications read and write data from the drive as small-block, randomly accessed files.

There are two limitations that apply to the optimization modes:



Note - The maximum allowable size of a logical drive optimized for sequential I/O is 2 Tbyte. The maximum allowable size of a logical drive optimized for random I/O is 512 Gbyte. If you try to create a logical drive that exceeds these limits, an error message is displayed.
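
A planned logical drive can be checked against these limits before you create it in the firmware. The following Python sketch is only an illustration; the function and constant names are assumptions, and 2 Tbyte is expressed here as 2048 Gbyte.

# Per-logical-drive size limits from the note above; names are illustrative only.
MAX_LOGICAL_DRIVE_GB = {"random": 512, "sequential": 2048}   # 2 Tbyte written as 2048 Gbyte

def fits_optimization_mode(optimization, size_gb):
    """Return True if a logical drive of size_gb is allowed under the given optimization mode."""
    return size_gb <= MAX_LOGICAL_DRIVE_GB[optimization]

print(fits_optimization_mode("random", 471))        # True: within the 512 Gbyte random limit
print(fits_optimization_mode("random", 1086))       # False: requires sequential optimization
print(fits_optimization_mode("sequential", 1982))   # True: within the 2 Tbyte sequential limit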



For more information about optimization modes, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for your array.

5.5.6.1 Maximum Number of Disks and Maximum Usable Capacity for Random and Sequential Optimization

Your choice of random or sequential optimization affects the maximum number of disks you can include in an array and the maximum usable capacity of a logical drive. The following tables contain the maximum number of disks per logical drive and the maximum usable capacity of a logical drive.



Note - You can have a maximum of eight logical drives and 36 disks, using one array and two expansion units.



TABLE 5-5 Maximum Number of Disks per Logical Drive for a 2U Array

Disk Capacity (GB)   RAID 5 Random   RAID 5 Sequential   RAID 3 Random   RAID 3 Sequential   RAID 1 Random   RAID 1 Sequential   RAID 0 Random   RAID 0 Sequential

36.2                 14              31                  14              31                  28              36                  14              36
73.4                 7               28                  7               28                  12              30                  6               27
146.8                4               14                  4               14                  6               26                  3               13


TABLE 5-6 Maximum Usable Capacity (Gbyte) per Logical Drive for a 2U Array

Disk Capacity (GB)   RAID 5 Random   RAID 5 Sequential   RAID 3 Random   RAID 3 Sequential   RAID 1 Random   RAID 1 Sequential   RAID 0 Random   RAID 0 Sequential

36.2                 471             1086                471             1086                507             543                 507             1122
73.4                 440             1982                440             1982                440             1101                440             1982
146.8                440             1908                440             1908                440             1908                440             1908
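
For the RAID 5 (and RAID 3) columns, the usable capacities in TABLE 5-6 follow directly from the disk counts in TABLE 5-5: one disk's worth of capacity holds parity, and the optimization-mode size limits cap the number of member disks. The Python lines below reproduce the RAID 5 figures as a worked example; the function name is illustrative, and the other RAID levels follow different rules that are not modeled here.

# Worked example for the RAID 5 rows of TABLE 5-5 and TABLE 5-6:
# usable capacity = (member disks - 1) x disk capacity, since one disk's worth of space holds parity.
def raid5_usable_gb(disk_gb, member_disks):
    return (member_disks - 1) * disk_gb

# (disk capacity, max disks random, max disks sequential) taken from TABLE 5-5
raid5_rows = [(36.2, 14, 31), (73.4, 7, 28), (146.8, 4, 14)]

for disk_gb, n_random, n_sequential in raid5_rows:
    print(disk_gb,
          round(raid5_usable_gb(disk_gb, n_random)),       # 471, 440, 440 (random column of TABLE 5-6)
          round(raid5_usable_gb(disk_gb, n_sequential)))   # 1086, 1982, 1908 (sequential column)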




Note - You might not be able to use all disks for data when using 36 146-Gbyte disks. Any remaining disks can be used as spares.



5.5.7 Reviewing Default Logical Drives and RAID Levels

A logical drive is a set of drives grouped together to operate under a particular RAID level. Each RAID array is capable of supporting as many as eight logical drives.

A drive can be assigned as the local spare drive to one specified logical drive, or as a global spare drive that is available to all logical drives on the RAID array.

Spares can be part of an automatic array rebuild.



Note - A spare is not available for logical drives with no data redundancy (RAID 0).



The logical drives can have the same or different RAID levels.

For a 12-drive array, the RAID array is preconfigured as follows:

For a 5-drive array, the RAID array is preconfigured as follows:

The following table highlights the RAID levels available.

TABLE 5-7 RAID Level Definitions

RAID Level   Description

RAID 0       Striping without data redundancy; provides maximum performance.

RAID 1       Mirrored or duplexed disks; for each disk in the system, a duplicate disk is maintained for data redundancy. It requires 50% of total disk capacity for overhead.

RAID 3       Striping with dedicated parity. Parity is dedicated to one drive. Data is divided into blocks and striped across the remaining drives.

RAID 5       Striping with distributed parity; this is the best-suited RAID level for multitasking or transaction processing. The data and parity are striped across each drive in the logical drive, so that each drive contains a combination of data and parity blocks.

NRAID        NRAID is an older configuration that is rarely used and is not recommended.

RAID 1+0     Combines RAID 1 and RAID 0: mirroring and disk striping. RAID 1+0 allows multiple drive failures because of the full redundancy of the hard disk drives. If four or more hard disk drives are chosen for a RAID 1 logical drive, RAID 1+0 is performed automatically.

RAID (3+0)   A logical volume with several RAID 3 member logical drives.

RAID (5+0)   A logical volume with several RAID 5 member logical drives.




Note - Logical volumes are unsuited to some modern configurations, such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information see Configuration Overview.



For more information about logical drives, spares, and RAID levels, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide for your array.

5.5.8 Completing Basic Configuration

In a point-to-point configuration, the last required step is mapping the logical drives to host LUNs.

In loop mode, you have additional options to pursue, if needed, in addition to the mapping requirement:

For the procedure on the required mapping to LUNs, see First Steps in Mapping a Partition to a LUN.



Note - Alternatively, you can use the graphical user interface described in the Sun StorEdge 3000 Family Configuration Service User's Guide to map the logical drives to host LUNs.



5.5.9 Creating Logical Drives (Optional)

The RAID array is already configured with one or two RAID 5 logical drives and one or two global spares. Each logical drive consists of a single partition by default.

If you prefer a different configuration, use the procedure described in this section to modify the RAID level or to add more logical drives. In this procedure, you configure a logical drive to contain one or more hard drives based on the desired RAID level, and partition the logical drive into additional partitions.



Note - Logical volumes are unsuited to some modern configurations, such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information see Configuration Overview.





Note - If you want to create 1024 LUNs in loop mode, you need eight logical drives, each with 128 partitions.



For redundancy across separate channels, you can also create a logical drive containing drives distributed over separate channels. You can then divide the logical drive into one or several partitions.

 FIGURE 5-4 Example of an Allocation of Local and Spare Drives in Logical Configurations

Diagram shows an example allocation of local and spare drives in logical configurations.


Note - To reassign drives and add additional local or global spares on your preconfigured array, you must first unmap and then delete the existing logical drives, and then create new logical drives. For more information on deleting a logical drive, see Deleting Logical Drives.



Create a logical drive with the following steps:

1. Choose "view and edit Logical drives" from the Main Menu.

 Screen capture shows the "view and edit Logical drives" menu option selected on the Main Menu.

2. Select the first available unassigned logical drive (LG) and press Return to proceed.

 Screen capture shows an empty logical drive status window displayed when there are no preconfigured logical drives.

You can create as many as eight logical drives from drives on any loop.

3. When prompted to "Create Logical Drive?" choose Yes.

 Screen capture shows "Create Logical Drive?" prompt with "Yes" selected.

A pull-down list of supported RAID levels is displayed.

4. Select a RAID level for this logical drive.



Note - RAID 5 is used as an example in the following steps.



 Screen capture shows RAID levels window with "RAID 5" selected.

For brief descriptions of RAID levels, see Reviewing Default Logical Drives and RAID Levels. For more information about RAID levels, refer to the Sun StorEdge 3000 Family RAID Firmware User's Guide.

5. Select your member drives from the list of available physical drives and press Return.

 Screen capture shows a list of available physical drives for logical drive 0, and member drive 0 is selected.

Tag drives for inclusion by highlighting them and pressing Return. An asterisk (*) is displayed next to each selected physical drive.

To deselect a drive, press Return again on the selected drive. The asterisk disappears.



Note - You must select at least the minimum number of drives required per RAID level.



a. Use the up and down arrow keys to select more drives.

 Screen capture shows a list of available physical drives with three drives marked with an asterisk.

b. After all physical drives have been selected for the logical drive, press the Esc key to continue to the next series of options.

A list of selections is displayed.

 Screen capture shows Maximum Drive Capacity selected.

6. (Optional) Set the maximum physical drive capacity and assign spares.

a. (Optional) Choose "Maximum Drive Capacity" from the menu, and press Return.



Note - Changing the maximum drive capacity reduces the size of the logical drive and leaves some disk space unused.



 

 

Screen capture shows Maximum Available Drive Capacity and Maximum Drive Capacity parameters.

A logical drive should be composed of physical drives with the same capacity. A logical drive can use the capacity of each drive only up to the capacity of the smallest member drive.

b. (Optional) Add a local spare drive from the list of unused physical drives.



Note - A global spare cannot be created while creating a logical drive.



The spare chosen here is a local spare and automatically replaces any failed disk drive in this logical drive. The local spare is not available for any other logical drive.

 Screen capture shows the "Assign Spare Drives" command selected.

 Screen capture shows a list of unused drives with their properties, namely slot, channel, ID, size, speed, LG_DRV, status, and Vendor and Product ID.


Note - A logical drive created in RAID Level 0, which has no data redundancy, does not support spare drive rebuilding.



7. (Optional) For dual-controller configurations only, choose "Logical Drive Assignments" to assign this logical drive to the secondary controller.

By default, all logical drives are automatically assigned to the primary controller.




Caution - Do not assign logical drives to secondary controllers in single-controller arrays. Only the primary controller works in these arrays.



 Screen capture shows the "Redundant Controller Logical Drive Assign to Secondary Controller?" confirmation window displayed with "Yes" selected.

If you use two controllers for the redundant configuration, a logical drive can be assigned to either of the controllers to balance the workload. Logical drive assignments can be changed later, but require a controller reset to take effect.

a. Press the Esc key or choose No and press Return to exit from this window without changing the controller assignment.

b. Choose Yes, press Return to confirm, and then press the Esc key to continue when all the preferences have been set.

A confirmation window is displayed on the screen.

 Screen capture shows the confirmation window with "Redundant Controller Logical Drive Assign to Secondary Controller?" displayed and "Yes" selected.

c. Verify all information in the window and then choose Yes to proceed.

A message indicates that the logical drive initialization has begun. A processing bar displays the progress of the initialization as it occurs.



Note - You can press the Esc key to remove the initialization progress bar and continue working with menu options to begin creating additional logical drives. The percentage of completion for each initialization in progress is displayed in the upper left corner of the window.



The following message appears when the initialization is completed:

 Screen capture shows a notification that initialization of the logical drive is complete.

d. Press Esc to dismiss the notification.

e. After the logical drive initialization is completed, press the Esc key to return to the Main Menu.

8. Choose "view and edit Logical drives" to view the first created logical drive (P0) on the first line of the status window.

 Screen capture shows status window with first created logical drive (P0) selected.

5.5.10 Preparing for Logical Drives Larger Than 253 Gbyte

The Solaris operating environment requires drive geometry for various operations, including newfs. For the appropriate drive geometry to be presented to the Solaris operating environment for logical drives larger than 253 Gbyte, change the default settings to "< 65536 Cylinders" and "255 Heads" to cover all logical drives over 253 GB and under the maximum limit. The controller automatically adjusts the sector count, and then the operating environment can read the correct drive capacity.
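
As a worked example of the geometry arithmetic: the capacity presented to the operating environment is cylinders x heads x sectors per track x 512 bytes, so once the cylinder and head values are fixed, the controller must raise the sector count to represent a large logical drive. The Python sketch below only illustrates that calculation; the function name and the 1-Tbyte example are assumptions, and capacity is counted in binary gigabytes.

# Illustrative arithmetic only: capacity = cylinders x heads x sectors per track x 512 bytes.
BYTES_PER_SECTOR = 512

def sectors_per_track(capacity_gb, cylinders=65535, heads=255):
    """Sectors per track needed to present capacity_gb with the given cylinder and head counts."""
    total_bytes = capacity_gb * 1024**3
    return total_bytes / (cylinders * heads * BYTES_PER_SECTOR)

# A 1-Tbyte logical drive presented with < 65536 cylinders and 255 heads needs roughly
# 128 sectors per track; the controller adjusts the sector count automatically.
print(round(sectors_per_track(1024)))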

For Solaris operating environment configurations, use the values in the following table.

TABLE 5-8 Cylinder and Head Mapping for the Solaris Operating Environment

Logical Drive Capacity   Cylinder             Head                 Sector

< 253 GB                 variable (default)   variable (default)   variable (default)
253 GB - 1 TB            < 65536 Cylinders    255                  variable (default)




Note - Earlier versions of the Solaris operating environment do not support drive capacities larger than 1 terabyte.



These cylinder and head settings are also valid for all logical drives under 253 Gbyte. After the settings are changed, they apply to all logical drives in the chassis.

To revise the Cylinder and Head settings, perform the following steps.

1. Choose "view and edit Configuration parameters right arrow Host-Side SCSI Parameters right arrow Host Cylinder/Head/Sector Mapping Configuration right arrow Sector Ranges - Variable right arrow 255 Sectors right arrow Head Ranges - Variable."

2. Specify "255 Heads."

3. Choose "Cylinder Ranges - Variable right arrow < 65536 Cylinders."

 Screen capture shows "< 65536 Cylinders" selected.

Refer to Sun StorEdge 3000 Family RAID Firmware User's Guide for more information about firmware commands used with logical drives.

5.5.11 Changing a Logical Drive Controller Assignment (Optional)

By default, logical drives are automatically assigned to the primary controller. If you assign half of the logical drives to the secondary controller, maximum speed and performance improve somewhat because the traffic is redistributed.

To balance the workload between both controllers, you can distribute your logical drives between the primary controller (displayed as the Primary ID or PID) and the secondary controller (displayed as the Secondary ID or SID).




Caution - In single-controller configurations, do not disable the Redundant Controller Function and do not set the controller as a secondary controller. The primary controller controls all firmware operations and must be the assignment of the single controller. If you disable the Redundant Controller Function and reconfigure the controller with the Autoconfigure option or as a secondary controller, the controller module becomes inoperable and will need to be replaced in a single-controller configuration.



After a logical drive has been created, it can be assigned to the secondary controller. Then the host computer associated with the logical drive can be mapped to the secondary controller (see First Steps in Mapping a Partition to a LUN).

To change a logical drive controller assignment:

1. Choose "view and edit Logical drives" from Main Menu.

 Screen capture shows "view and edit Logical drives" selected on the Main Menu.

2. Select the drive you want to reassign and press Return.

3. Choose "logical drive Assignments" and press Return.

 Screen capture shows the "logical drive Assignments" selected.

 

The reassignment is evident from the "view and edit Logical drives" screen.

A "P" in front of the LG number means that the logical drive is assigned to the primary controller. An "S" in front of the LG number means that the logical drive is assigned to a secondary controller.

For example, "S1" indicates that logical drive 1 is assigned to the secondary controller.

4. Reassign the controller by choosing Yes.

A confirmation message is displayed:

NOTICE: Change made to this setting will NOT take effect until the controller is RESET. Prior to resetting the controller, operation may not proceed normally. Do you want to reset the controller now?


5. Choose Yes to reset the controller.

5.5.12 Creating or Changing a Logical Drive Name (Optional)

You can create a name for a logical drive. This logical drive name is used only in RAID firmware administration and monitoring and does not appear anywhere on the host. You can also edit this drive name.

You can create a logical drive name after the logical drive is created:

1. Choose "view and edit Logical drives" from the Main Menu.

2. Select the logical drive and press Return.

3. Choose "logical drive Name."

 Screen capture shows "Current Logical Drive name:" prompt displayed.

4. Type the name you want to give the logical drive and press Return to save the name.

5.5.13 Partitioning a Logical Drive (Optional)

You can divide a logical drive into several partitions, or use the entire logical drive as a single partition. You can configure up to 128 partitions for each logical drive.

For guidelines on setting up 1024 LUNs, see Planning for 1024 LUNs (Optional, Loop Mode Only).




Caution - If you modify the size of a partition or logical drive, you lose all data on those drives.





Note - If you plan to map hundreds of LUNs, the process is easier if you use the Sun StorEdge Configuration Service program.



 FIGURE 5-5 Partitions in Logical Configurations

Diagram shows logical drive 0 with three partitions and logical drive 1 with three partitions.

To partition a logical drive, perform the following steps.

1. Choose "view and edit Logical drives" from the Main Menu.

 Screen capture shows "view and edit Logical drives" selected on the Main Menu.

2. Select the logical drive you want to partition, and press Return.

 Screen capture shows a logical drive selected in the "view and edit Logical drives" window.

3. Choose "Partition logical drive."

 Screen capture shows the "Partition logical drive" menu option selected in the "view and edit Logical drives" window.

This message is displayed:

Partitioning the Logical Drive will make it no longer eligible for membership in a logical volume.

Continue Partition Logical Drive?




Note - Logical volumes are unsuited to some modern configurations, such as Sun Cluster environments, and do not work in those configurations. Use logical drives instead. For more information see Configuration Overview.



4. Choose Yes and press Return to confirm that you want to partition the logical drive if you do not want to include it in a logical volume.

 Screen capture shows a Warning notice with "Continue Partition Logical Drive?" prompt displayed and "No" selected.

A list of the partitions for this logical drive is displayed. If the logical drive has not yet been partitioned, all the logical drive capacity is listed as "partition 0."

5. Select from the list of undefined partitions and press Return.

6. Type the desired size for the selected partition and press Return.

 Screen capture shows a selected partition and Partition Size <MB>: 3000.

A warning prompt is displayed:

This operation will result in the loss of all data on the partition.
Partition Logical Drive?





Caution - Make sure any data that you want to save on this partition has been backed up before you partition the logical drive.



7. Choose Yes to confirm.

The remaining capacity of the logical drive is automatically allotted to the next partition. In the following figure, a partition size of 3000 MB was entered; the remaining storage of 27000 MB is allocated to the partition below the partition created.

 Screen capture shows the partition allocation with the 3000MB partition and the remaining 27000 MB storage allocated to the partition below.

8. Repeat the preceding steps to partition the remaining capacity of your logical drive.

You can create up to 128 partitions per logical drive, with a total number of partitions not to exceed 1024 partitions per RAID array.
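
The remaining-capacity behavior can be illustrated with a short sketch: each size you enter is carved off the logical drive, and whatever is left is allotted to the final partition. The Python function below is only an illustration of that bookkeeping; it uses a 30000-Mbyte drive to mirror the example in the figure above, and the function name is an assumption.

# Illustration of how partition capacity is allotted; the function name is an assumption.
def plan_partitions(total_mb, requested_sizes_mb):
    """Return partition sizes, with the remaining capacity allotted to the last partition."""
    remaining = total_mb - sum(requested_sizes_mb)
    assert remaining >= 0, "requested partitions exceed the logical drive capacity"
    return requested_sizes_mb + [remaining]

print(plan_partitions(30000, [3000]))         # [3000, 27000], as in the figure above
print(plan_partitions(30000, [3000, 9000]))   # [3000, 9000, 18000]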



Note - When you modify a partition or logical drive size, you must reconfigure all host LUN mappings. All the host LUN mappings are removed with any change to partition capacity. See First Steps in Mapping a Partition to a LUN.





Note - When a partition of a logical drive or logical volume is deleted, the capacity of the deleted partition is added to the partition above the deleted partition.




5.6 Mapping Logical Drive Partitions to Host LUNs

The next step is to map each storage partition as one system drive (host ID/LUN). The host adapter recognizes the system drives after reinitializing the host bus.



Note - The UNIX and Solaris format and Solaris probe-scsi-all commands do not display all mapped LUNs if there is not a logical drive mapped to LUN 0.



An FC channel can connect up to 16 IDs in loop mode.

The following figure illustrates the idea of mapping a system drive to a host ID/LUN combination.

 Figure shows the FC ID as a file cabinet and its LUNs as file drawers.

Each ID/LUN looks like a storage device to the host computer.

 FIGURE 5-6 Mapping Partitions to Host ID/LUNs

Figure shows LUN partitions mapped to ID 0 on Channel 1 and to ID 1 on Channel 3.

5.6.1 Planning for 1024 LUNs (Optional, Loop Mode Only)

If you want to create 1024 LUNs, which is the maximum number of storage partitions that can be mapped for a RAID array, you must map 32 IDs to the array's channels. There are several ways you can meet this requirement. For example, you can set up the following configuration:

5.6.2 First Steps in Mapping a Partition to a LUN

To map a logical drive partition to a LUN, perform the following steps.

1. Choose "view and edit Host luns" from the Main Menu.

 Screen capture shows "view and edit Host luns" selected on the Main Menu.

A list of available channels and their associated controllers is displayed.

2. Choose the desired channel and ID to which the logical drive will be mapped, and press Return.

 Screen capture shows channel and ID selected.

3. If you see Logical Drive and Logical Volume menu options, choose "Logical Drive."

 Screen capture shows "Logical Drive" selected in the Logical Drive and Logical Volume menu.

The LUN table is displayed.

 Screen capture shows the LUN table displayed.

4. Using the arrow keys, select the desired LUN (for example, CHL 0 ID 40) and press Return.

A list of available logical drives is displayed.

 Screen capture shows information about the selected LUN.


Note - A device must be mapped to LUN 0 as a minimum.



5. Select the desired logical drive (LD) and press Return.

The partition table is displayed.

 Screen capture shows the partition table selected.

6. Select the desired partition and press Return.

 Screen capture shows two mapping options with "Map Host LUN" selected.

7. Select the mapping option that is appropriate for your network and proceed with one of the following procedures.

5.6.3 Using the Map Host LUN Option

Each partition must be mapped to a host LUN. The "Map Host LUN" menu option is used when multiple hosts are not on the same loop.

If multiple hosts will share the same loop on the array, use the host filter command and see Setting Up Host Filter Entries.



Note - If you plan to map hundreds of LUNs, the process is easier if you use the Sun StorEdge Configuration Service program. Refer to Sun StorEdge 3000 Family Configuration Service User's Guide for more information.



1. After the completion of the steps in First Steps in Mapping a Partition to a LUN, choose "Map Host LUN" and press Return.

 Screen capture shows "Map Host LUN" selected.

2. Choose Yes to confirm the mapping scheme.

 Screen capture shows "Mapping Scheme" prompt.

The partition is now mapped to a LUN.

 Screen capture shows the partition mapped to a LUN.

3. Press the Esc key to return to the Main Menu.

4. Repeat Step 1 through Step 3 for each partition until all partitions are mapped to LUNs.

5. On the Main Menu, choose "system Functions → Reset controller," and then choose Yes to confirm and implement the new configuration settings.

6. To verify unique mapping of each LUN (unique LUN number, unique DRV number, or unique Partition number), choose the "view and edit Host luns" command and press Return.

7. Select the appropriate controller and ID and press Return to review the LUN information.



Note - If you are using host-based multipathing software, map each partition to two or more host IDs to make multiple paths to the same partition available to the host.



5.6.4 Setting Up Host Filter Entries

For multiple servers connected to the same array, LUN filtering organizes how the array devices are accessed and viewed from host devices. LUN filtering is used to provide exclusive access from a server to a logical drive and exclude other servers from seeing or accessing the same logical drive.

LUN filtering also enables mapping of multiple logical drives or partitions to the same LUN number, allowing different servers to have their own LUN 0. LUN filtering is also valuable for clarifying the mapping because, when an array is viewed through a hub, each HBA typically sees twice the number of logical drives.

 FIGURE 5-7 Example of LUN Filtering

Diagram shows multiple hosts with access to the same LUNs where LUN filtering creates exclusive paths from a server to a specific LUN.

An advantage of LUN filtering is that it allows many hosts to attach to an array through a common Fibre Channel port while still maintaining LUN security.

Each Fibre Channel device is assigned a unique identifier called a world wide name (WWN). A WWN is assigned by the IEEE and stays with the device for its lifetime. LUN filtering uses the WWN to specify which server is to have exclusive use of a specific logical drive.



Note - It is possible to see somewhat different information when a Fabric switch queries the WWN of the Sun StorEdge 3510 FC Array. When the RAID controller performs a Fibre Channel Fabric login to a switch, the switch obtains the WWN of the RAID controller during the login process. This WWN is a Dot Hill Systems Corporation WWN, so the switch displays that company name. When the switch issues an inquiry command to a mapped LUN on the array, however, the switch obtains the company name from the LUN's inquiry data; in this case the switch displays Sun StorEdge 3510, which is the inquiry data returned by the RAID controller.



As shown in FIGURE 5-7, when you map LUN 01 to host channel 0 and select WWN1, server A has a proprietary path to that logical drive. All servers continue to see and access LUN 02 and LUN 03 unless filters are created on them.

Prior to using the LUN Filter feature, identify which array is connected to which HBA card, and the WWN assigned to each card. This procedure varies according to the HBA you are using. Refer to the appendix for your host for instructions on identifying the WWN for your host.
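
As an example only, on many Solaris hosts with Sun-supplied HBAs you can display the port and node WWN properties directly from the device tree. The exact method depends on the HBA and driver, so use the procedure in the appendix for your host if this command does not apply.

# prtconf -vp | grep -i wwn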

5.6.4.1 Creating Host Filter Entries

The "Create Host Filter Entry" command is used when multiple hosts share the same loop, can view all the drives, and need to be filtered so that a host sees only the logical drives that are exclusive to its use.

"Map Host LUN" is used when multiple hosts are not on the same loop. To use this option, see Using the Map Host LUN Option.



Note - You can create a maximum of 128 host filters.





Note - If you plan to map hundreds of LUNs, the process is easier if you use the Sun StorEdge Configuration Service program.



1. After the completion of the steps in the procedure First Steps in Mapping a Partition to a LUN, choose "Create Host Filter Entry right arrow Add from current device list."

 Screen capture shows "Add from current device list" selected.

This step automatically performs a discovery of the attached HBAs. Alternatively, you can add them manually.

2. From the device list, choose the server WWN number for which you are creating a filter and press Return.

 Screen capture shows the server WWN number selected.

3. At the confirmation screen, choose Yes and press Return.

 Screen capture shows the confirmation screen with "Yes" selected.

4. Review the filter configuration screen. Make any necessary changes by using the arrow keys to choose an item and pressing Return.

 Screen capture shows "Logical Drive 0 Partition 0" selected in the filter configuration screen.

5. To edit the WWN, use the arrow keys to choose "Host-ID/WWN" and press Return.

 Screen capture shows "Host-ID/WWN."

6. Make the desired changes, and press Return.



caution icon

Caution - Be sure that you edit the WWN correctly. If the WWN is incorrect, the host will be unable to recognize the LUN.



7. To edit the WWN Mask, use the arrow keys to choose "Host-ID/WWN Mask" and press Return.

 Screen capture shows "Host-ID/WWN Mask."

8. To change the filter setting, use the arrow keys to choose "Filter Type -" and press Return.

9. At the confirmation screen, choose Yes to exclude or No to include the Host-ID/WWN selection, and press Return.

 Screen capture shows the confirmation screen with the "Set Filter Type to Exclude?" prompt displayed and "Yes" selected.

10. To change the access mode that assigns Read-Only or Read/Write privileges, use the arrow keys to choose "Access mode -" and press Return.

11. At the confirmation screen, choose Yes and press Return.

 Screen capture shows the confirmation screen with the "Set Access Mode to Read-Only?" prompt displayed and "Yes" selected.

12. To set a name, use the arrow keys to choose "Name -" and press Return.

 Screen capture shows "Name:mars" displayed.

13. Type the name you want to use and press Return.

14. Verify all settings and press Esc to continue.

 Screen capture shows settings displayed and "Name - mars" is selected.


Note - Unlike most firmware operations where you must complete each entry individually and repeat the procedure if you want to perform a similar operation, you can add multiple WWNs to your list before you actually create the host filter entry in Step 15. Be sure to follow the instructions carefully.



15. At the confirmation screen, choose Yes and press Return.

 Screen capture shows a confirmation screen with "Add Host Filter Entry?" displayed and "Yes" selected.

16. At the server list, repeat the previous steps to create additional filters or press Esc to continue.

 Screen capture shows the server WWN number selected.

17. At the confirmation screen, verify the settings, choose Yes and press Return to complete the host LUN filter entry.

 Screen capture shows settings listed in the confirmation screen with "Yes" selected.

A mapped LUN displays a number and a filtered LUN displays an "M" for masked LUN in the host LUN partition window.

 Screen capture shows a masked filtered LUN displayed and marked with an "M."

5.6.5 Creating Device Files for the Solaris Operating Environment

1. To create device files for the newly mapped LUNs on the host in the Solaris 8 or Solaris 9 operating environment, type:

# /usr/sbin/devfsadm -v 

2. To display the new LUNs, type:

# format

3. If the format command does not recognize the newly mapped LUNs, reboot the host:

# reboot -- -r
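
If you prefer to confirm that the new device nodes exist without entering the format menu, you can also list the raw device directory. The controller number c2 shown here is an example only and depends on your host configuration.

# ls /dev/rdsk/c2t*d*s2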

5.6.6 Saving Configuration (NVRAM) to a Disk

You can back up your controller-dependent configuration information. Use this function to save configuration information whenever a configuration change is made.

The logical configuration information is stored within the logical drive.



Note - A logical drive must exist for the controller to write NVRAM content onto it.



1. Choose "system Functions right arrow Controller maintenance right arrow Save nvram to disks."

 Screen capture shows "Save nvram to disks" selected.

2. Choose Yes to confirm.

 Screen capture shows the "Save nvram to disks" command accessed through the "system Functions" command and "Configuration Parameters" command.

A prompt confirms that the NVRAM information has been successfully saved.

To restore the configuration, see Restoring Your Configuration (NVRAM) From a File.


5.7 Using Software to Monitor and Manage the Sun StorEdge 3510 FC Array

This section describes the software management tools available for monitoring and managing the Sun StorEdge 3510 FC array with in-band connections.

The software management tools provided on the Sun StorEdge 3000 Family Professional Storage Manager CD, which is shipped with your array, are Sun StorEdge Configuration Service, Sun StorEdge Diagnostic Reporter, and the Sun StorEdge CLI software.

For details on how to install Sun StorEdge Configuration Service, Sun StorEdge Diagnostic Reporter, or the Sun StorEdge CLI software, refer to the Sun StorEdge 3000 Family Software Installation Manual.

The Sun StorEdge 3000 Family Documentation CD provides the related user guides, with detailed installation and configuration procedures for Sun StorEdge Configuration Service and Sun StorEdge Diagnostic Reporter.

5.7.1 Other Supported Software

Multipathing for the Sun StorEdge 3510 FC Array is provided by Sun StorEdge Traffic Manager software. Multipathing software is required when you have multiple connections from a server to an array (directly or through a switch), want to avoid a single point of failure, and are setting up a configuration with redundant pathing. Multipathing software establishes multiple paths between the server and the storage system and provides full services on each path for path failover.
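
As a sketch only, on Solaris hosts Sun StorEdge Traffic Manager multipathing is typically enabled through the scsi_vhci driver configuration file followed by a reconfiguration reboot. Refer to the Sun StorEdge Traffic Manager documentation for the supported procedure on your platform.

# grep mpxio-disable /kernel/drv/scsi_vhci.conf
mpxio-disable="no";
# reboot -- -r

The mpxio-disable entry must be set to "no" for multipathing to be active; edit the file before rebooting if necessary.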

Refer to the appendix for your host and to the Sun StorEdge 3510 FC Array Release Notes for information about which versions of Sun StorEdge Traffic Manager software are supported on your platform.

For information about additional supported or provided software, also refer to the Sun StorEdge 3510 FC Array Release Notes.

5.7.2 Enabling VERITAS DMP

To enable VERITAS Dynamic Multi-Pathing (DMP) support on VERITAS Volume Manager in the Solaris operating environment, perform the following steps.



Note - To see instructions for enabling VERITAS DMP on other supported platforms, refer to your VERITAS user documentation.



1. Configure at least two channels as host channels (channels 1 and 3 by default) and add additional host IDs, if needed.

2. Connect host cables to the I/O host ports configured in Step 1.

3. Map each LUN to two host channels to provide dual-pathed LUNs.

4. Add the correct string to vxddladm so VxVM can manage the LUNs as a multipathed JBOD.

# vxddladm addjbod vid=SUN pid="StorEdge 3510"
# vxddladm listjbod
VID      PID            Opcode  Page Code  Page Offset  SNO length
===================================================================
SEAGATE  ALL PIDs       18      -1         36           12
SUN      StorEdge 3510  18      -1         36           12

5. Reboot the hosts. A system reboot is required to implement these changes.
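
After the hosts reboot, one way to confirm that the LUNs are under DMP control is to rescan the devices and list them. This is a sketch only, and the output varies with your configuration.

# vxdctl enable
# vxdisk list
# vxdmpadm listctlr all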



Note - JBOD arrays require a license from VERITAS in order to enable any of the advanced features of VERITAS Volume Manager. Refer to the VERITAS Volume Manager Release Notes or contact VERITAS Software Corporation for licensing terms and information.



5.7.3 The VERITAS Volume Manager ASL

VERITAS provides an Array Support Library (ASL) that must be installed on the same host system as the Volume Manager 3.2 or 3.5 software to enable the software to recognize the Sun StorEdge 3510 FC array. For the procedure to download the ASL and the accompanying installation guide for the Sun StorEdge 3510 FC array from Sun's Download Center, refer to the Sun StorEdge 3510 FC Array Release Notes.
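
After installing the ASL package, one way to confirm that it is present on the host is to list the installed VERITAS packages. This is a generic check only; the specific package name is given in the installation guide that accompanies the ASL download.

# pkginfo | grep -i vrts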