CHAPTER 1

Product and Architecture Overview

This chapter provides a brief overview of your Sun StorEdge 3510 FC (Fibre Channel) Array. Topics covered in this chapter are:

  • Section 1.1, Introduction
  • Section 1.2, Field-Replaceable Units (FRUs)
  • Section 1.3, Interoperability
  • Section 1.4, Fibre Channel Technology Overview
  • Section 1.5, Fibre Channel Architecture
  • Section 1.6, Additional Software Tools


1.1 Introduction

The Sun StorEdge 3510 FC Array is a rack-mountable, Network Equipment Building System (NEBS) Level 3-compliant, Fibre Channel mass storage subsystem. NEBS Level 3 is the highest level of NEBS criteria used to assure maximum operability of networking equipment in mission-critical environments such as telecommunications central offices. The array is designed for high availability, high performance, and high capacity.

FIGURE 1-1 Sun StorEdge 3510 FC Array Front View

[Photograph of the array's front bezel.]

The Sun StorEdge 3510 FC Array models include:

  • RAID arrays, containing one or two internal RAID controllers
  • Expansion units, which are disk arrays with no controller that connect to a RAID array
  • JBOD arrays, which are disk arrays with no controller that connect directly to a host computer

See Using Standalone JBOD Arrays for detailed information about using JBOD arrays.



Note - A label on the bottom lip of an array chassis, underneath the front bezel, indicates whether it is a JBOD array or a RAID array. For instance, "3510 AC JBOD" refers to an alternating-current version of a JBOD array, "3510 DC JBOD" refers to a direct-current version of a JBOD array, and "3510 AC RAID" refers to an alternating-current version of a RAID array. Similarly, a command such as the OpenBoot PROM probe-scsi-all command provides this information, using an "A" designator for RAID arrays and a "D" designator for disks in a JBOD array. For example, "StorEdge 3510F D1000" identifies a JBOD array with SES firmware version 1000.
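For example, running probe-scsi-all from the OpenBoot PROM ok prompt might produce output along the following lines (an abbreviated, illustrative sketch; the device path, target numbers, and firmware version depend on your configuration):

  ok probe-scsi-all
  /pci@1f,4000/SUNW,qlc@4
  ...
  Target 0
    Unit 0   Disk   SUN StorEdge 3510F D1000

Here the leading "D" in "D1000" identifies the device as a disk in a JBOD array.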



TABLE 1-1 shows the configuration options for the Sun StorEdge 3510 FC Array.

TABLE 1-1 Sun StorEdge 3510 FC Array Configuration Options

Internal RAID controllers: Up to 2, with a minimum of 1

2-Gbit/sec or 1-Gbit/sec Fibre Channel disks: Up to 12 per array or per expansion unit, with a minimum of 4 plus 1 spare

Fibre Channel expansion units[1]: Up to 8

Fibre Channel JBOD arrays[2]: 1

Connection options:
  • Serial port
  • Ethernet
  • Fibre Channel Small Form-Factor Pluggable (SFP)

Supported RAID levels: 0, 1, 3, 5, 1+0, 3+0, and 5+0

Redundant field-replaceable units (FRUs):
  • Power supply and fan modules
  • I/O controller modules
  • I/O expansion modules
  • Battery board module
  • Disk drive modules

Configuration management and enclosure event reporting options[3]:
  • In-band Fibre Channel
  • Out-of-band 10/100BASE-T Ethernet
  • RS-232 connectivity
  • Enclosure monitoring by SCSI Enclosure Services (SES)


For a list of supported racks and cabinets, refer to the release notes for the Sun StorEdge 3510 FC Array. You can find these release notes at:

http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/3510

Reliability, availability, and serviceability (RAS) are supported by:

  • Redundant components
  • Notification of failed components
  • Components that are replaceable while the unit is online

For information about specifications and agency approvals, see Sun StorEdge 3510 FC Array Specifications.


1.2 Field-Replaceable Units (FRUs)

This section describes the FRUs contained in the Sun StorEdge 3510 FC Array.

1.2.1 RAID I/O Controller Modules

A dual-controller configuration offers increased reliability and availability because it eliminates a single point of failure, the controller. In a dual-controller configuration, if the primary controller fails, the array automatically fails over to the second controller without an interruption of data flow.

The Sun StorEdge 3510 FC Array I/O controller modules are hot-swappable. Each RAID controller module provides six Fibre Channel ports that support 2-gigabit (Gbit) or 1-Gbit data rates. Single- and dual-controller models are available, with the dual-controller version supporting active/passive and active/active configurations. Each 3510 RAID controller is configured with 1 gigabyte (Gbyte) of cache.

In the unlikely event of a controller failure, the redundant RAID controller immediately begins servicing all I/O requests. The failure does not affect application programs.

Each RAID controller module can support up to 1 Gbyte of Synchronous Dynamic Random Access Memory (SDRAM) with error-correcting code (ECC). In addition, each controller supports 64 megabytes (Mbyte) of on-board memory. Two Application-Specific Integrated Circuit (ASIC) controller chips handle the interconnection between the controller bus, DRAM, and Peripheral Component Interconnect (PCI) internal buses. They also handle the interface between the on-board 2-Mbyte flash memory, the 32-Kbyte nonvolatile random access memory (NVRAM), the RS-232 port chip, and the 10/100BASE-T Ethernet chip.

The RAID controller module is a multifunction board that provides six Small Form-Factor Pluggable (SFP) ports, the SES logic, and the RAID controller. The SES logic monitors various temperature thresholds, the speed of each fan, the voltage status of each power supply, and the FRU ID.

Each RAID controller module incorporates SES direct-attached Fibre Channel capability to monitor and maintain enclosure environmental information. The SES controller chip monitors all internal +12 V and +5 V voltages and the various temperature sensors located throughout the chassis, as well as each fan, and it controls the front- and back-panel LEDs and the audible alarm. Both the RAID chassis and the expansion chassis support dual SES failover capabilities for fully redundant event monitoring.

1.2.2 I/O Expansion Modules

The hot-serviceable I/O expansion modules provide four SFP ports each, but have no battery modules or RAID controllers. They are used only in expansion units.

1.2.3 Disk Drives

Each disk drive is mounted in its own sled assembly. Each sled assembly has EMI shielding, an insertion and locking mechanism, and a compression spring for maximum shock and vibration protection.

The drives can be ordered in 36-GB, 73-GB, and 146-GB sizes. The 36-GB drives have a rotation speed of 15,000 RPM, while the 73-GB and 146-GB drives have a rotation speed of 10,000 RPM. Each disk drive is slot-independent: once a RAID set has been initialized, the system can be shut down and the drives can be removed and replaced in any order. In addition, disk drives are field-upgradeable to larger drives without interruption of service to user applications. The drive firmware is also field-upgradeable, but the firmware upgrade procedure requires an interruption of service.

In the event of a single disk drive failure, with the exception of RAID 0, the system continues to service all I/O requests. Either the mirrored data or the parity data is used to rebuild the failed drive's data to a spare disk drive, assuming one is assigned. If a spare is not assigned, you must rebuild the array manually.

In the unlikely event that multiple disk drive failures occur within the same RAID set, data that has not been replicated or backed up might be lost. This is an inherent limitation of all RAID subsystems and could affect application programs.

An air management sled FRU is available for use when you remove a disk drive and do not replace it. You can insert the air management sled into the empty slot to maintain optimum airflow through the chassis.

1.2.4 Battery Module

The battery module is designed to power the system cache for 72 hours in the event of a power failure. When power is restored, the cached data is written to disk. The battery module is a hot-swappable FRU that is mounted on the I/O board with guide rails and a transition board. It also contains the EIA-232 and DB-9 serial interface (COM) ports.

1.2.5 Power and Fan Modules

Each array contains two redundant power and fan modules. Each module contains a 420-watt power supply with autoranging capability from 90 volts alternating current (VAC) to 264 VAC, and two radial 52-cubic-feet-per-minute (CFM) fans. A single power and fan module can sustain an array.


1.3 Interoperability

The array is designed for heterogeneous operation and supports operating environments such as the Solaris operating environment, Microsoft Windows, Red Hat Linux, HP-UX, and IBM AIX.



Note - For information about supported versions of these operating environments, refer to the release notes for your array.



The array does not require any host-based software for configuration, management, and monitoring, which are handled through the built-in firmware application. The console window can be accessed via the DB-9 communications (COM) port using the tip command, or via the Ethernet port using the telnet command.
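For example, on a Solaris host you might open the console as follows (an illustrative sketch: the serial device name, the 38,400-baud rate, and the IP address shown are assumptions that depend on your host and array settings):

  # tip -38400 /dev/ttyb

Or, once the array's Ethernet port has been assigned an IP address (192.168.0.10 is used here only as an example):

  # telnet 192.168.0.10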


1.4 Fibre Channel Technology Overview

As a device protocol capable of high data transfer rates, Fibre Channel simplifies data bus sharing and supports not only greater speed than SCSI, but also more devices on the same bus. Fibre Channel can be used over both copper wire and optical cable. It can be used for concurrent communications among multiple workstations, servers, storage systems, and other peripherals using SCSI and IP protocols. When a Fibre Channel hub or Fabric switch is employed, it provides flexible topologies for interconnections.

1.4.1 FC Protocols

Two common protocols are used to connect Fibre Channel (FC) nodes: point-to-point and arbitrated loop.

The point-to-point protocol is straightforward, doing little more than establishing a permanent communication link between two ports.

The arbitrated loop protocol creates a simple network featuring distributed (arbitrated) management between two or more ports using a circular (loop) data path. Arbitrated loops can support more nodes than point-to-point connections.

The Sun StorEdge 3510 FC Array supports point-to-point and arbitrated loop protocols. You select the protocol you prefer by setting the desired FC connection option in the configuration parameters of the firmware application.
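To give a sense of where this option lives, the setting is reached through a firmware menu path similar to the following (a sketch only; exact menu labels and available choices vary by firmware version):

  view and edit Configuration parameters -> Host-side SCSI Parameters -> Fibre Connection Option
    Choices such as: Loop only / Point to point only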

1.4.2 FC Topologies

The presence or lack of switches establishes the topology of an FC environment. In a direct attached storage (DAS) topology, servers connect straight to arrays without switches. In a storage area network (SAN) topology, servers and arrays connect to an FC network created and managed by switches.

Refer to the Sun StorEdge 3000 Family Best Practices Manual for your array for information about configurations that are optimal for your site requirements.

1.4.3 Fibre Hubs and Switches

A storage network built on a Fibre Channel architecture might employ several of the following components: Fibre Channel host adapters, hubs, Fabric switches, and fibre-to-SCSI bridges.

A loop configuration allows the devices in the loop to be configured in a token-ring style. With a fibre hub, a fibre loop can be rearranged in a star-like configuration because the hub contains port bypass circuitry that forms an internal loop. Bypass circuits can automatically reconfigure the loop when a device is removed or added, without disrupting the physical connection to other devices.

1.4.4 Data Availability

Data availability is one of the major requirements for today's mission-critical applications. The highest availability is achieved through functionality such as hot-swappable, redundant FRUs and redundant loop configurations that provide alternate data paths.

1.4.5 Scalability

The Fibre Channel architecture brings scalability and easier upgrades to storage. Storage expansion can be as easy as cascading another expansion unit to a configured RAID array without powering down the running system. Up to two expansion units can be daisy-chained and connected to a single RAID array.

Up to 125 devices can be configured in a single FC loop. By default, the array provides two drive loops and four host/drive loops, and operates in Fibre Channel-Arbitrated Loop (FC-AL) and Fabric topologies.


1.5 Fibre Channel Architecture

Each RAID array has six Fibre Channel channels with the following defaults:

  • Channels 0, 1, 4, and 5 are host channels by default, but can be reassigned as drive channels to connect expansion units.
  • Channels 2 and 3 are dedicated drive channels.

The expansion unit has a total of four FC-AL ports.



Note - Throughout this manual, Fibre Channel-Arbitrated Loops are referred to simply as loops.



1.5.1 Host and Drive FC Architecture

RAID controller channels 0, 1, 4, and 5 are designated for host connections or for connections to expansion chassis drives, depending on controller configuration settings.

In a dual RAID controller configuration, both RAID controllers have the same channel designators, due to the architecture of the loops within the chassis. Each host and drive channel of the top RAID controller shares a loop with the matching channel on the bottom RAID controller. For example, Channel 0 of the top RAID controller shares the same loop as channel 0 of the bottom RAID controller. This provides four separate loops for direct connectivity to hosts, expansion chassis, or hub and switch devices.

Each host and drive loop includes multiple components. Each I/O board contains FC-AL loops and port bypass circuits that are associated with three components: a channel on the RAID controllers, the SFP port residing on the I/O board, and a connection to the opposite I/O board. This architecture provides a data path for both RAID controllers to either the top or bottom SFP port on a given channel. Plugging an SFP transceiver into a port enables an external connection to that port; the optical connector used is the low-profile LC connector. SFP transceivers are simple to plug in and remove.

A single RAID controller configuration is slightly different: the connection to the lower I/O board does not exist. However, the same number of loops is available.

1.5.2 Disk Drive FC Architecture

Channels 2 and 3 are disk drive channels only and cannot be used as host channels. In the disk drive FC architecture, channel 2 from both controllers appears on the upper I/O board and channel 3 from both controllers appears on the lower I/O board.

The components residing on the disk drive loops are:

  • Drive channels 2 and 3 of both RAID controllers
  • The dual-ported FC-AL disk drives
  • The SES logic on each I/O board



Note - RAID array channels 0, 1, 4, and 5 can be host or drive ports; by default, they are host ports. The CH2 and CH3 ports are dedicated drive ports and cannot be used as host ports.



1.5.3 Redundant Configuration Considerations

This section provides information about setting up redundant configurations for increased reliability.

1.5.3.1 Host Bus Adapters

Fibre Channel is widely applied to storage configurations with topologies that aim to avoid loss of data caused by component failure. As a rule, the connections between source and target should be configured in redundant pairs.

The recommended host-side connection consists of two or more host bus adapters (HBAs). Each HBA is used to configure a Fibre Channel loop between the host computer and the array. In active-to-active redundant controller mode, the primary loop serves the I/O traffic directed to the primary controller, and its pair loop serves the I/O traffic to the secondary controller. The host-side management software directs I/O traffic to the pair loop if one of the redundant loops fails.

1.5.3.2 Active-to-Active Redundant Controller

Since each fibre interface supports only a single loop ID, two HBAs are necessary for the active-to-active redundant controller operation. Using two HBAs in each server ensures continued operation even when a data path fails.

In active-to-active mode, the connection to each host adapter should be considered a data path connecting the host to either the primary or the secondary controller. One adapter should be configured to serve the primary controller and the other adapter to serve the secondary controller. Each target ID on the host channels should be assigned either a primary ID or a secondary ID. If one controller fails, the existing controller can inherit the ID from its counterpart and activate the one standby channel to serve host I/O.
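For illustration, a hypothetical ID assignment in active-to-active mode might look like the following (the channel roles match this array; the ID values themselves are examples only):

  Channel 0 (host): primary ID (PID) 40, no secondary ID
  Channel 1 (host): no primary ID, secondary ID (SID) 42

A host adapter cabled to channel 0 then carries I/O for logical drives assigned to the primary controller, while an adapter cabled to channel 1 carries I/O for logical drives assigned to the secondary controller.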

1.5.3.3 Host Redundant Paths

The controller passively supports redundant fibre loops on the host side, provided that the host has implemented software support for this feature.

In the unlikely event of a controller failure, the standby channels on the surviving controller become an I/O route serving the host I/O originally directed to the active channel on its partner controller. In addition, application failover software should be running on the host computer to control the transfer of I/O from one HBA to another in case either data path fails.


1.6 Additional Software Tools

The following additional software tools are available on the Sun StorEdge 3000 Family Professional Storage Manager CD, provided with your array:

  • Sun StorEdge Configuration Service, a management and monitoring program
  • Sun StorEdge Diagnostic Reporter, a monitoring utility
  • Sun StorEdge Command-Line Interface (CLI), a command-line utility for managing the array

Refer to the Sun StorEdge 3000 Family Software Installation Guide for information about installing these tools.

The Sun StorEdge 3000 Family Documentation CD provides the related user guides with configuration procedures for these tools.

For other supported software tools, refer to the release notes, located at:

http://www.sun.com/products-n-solutions/hardware/docs/Network_Storage_Solutions/Workgroup/3510


[1] A disk array with no controller. Each expansion unit has two Fibre Channel loops that can provide redundant data paths back to the RAID array.
[2] A disk array with no controller that is connected directly to a host computer, with no RAID array in the loop.
[3] The host-based Sun StorEdge Configuration Service software provides a graphical user interface (GUI) and additional event-reporting capabilities.