CHAPTER 1
System Requirements and Preinstallation Tasks
This chapter explains the system requirements for installing the Sun StorEdge QFS and Sun StorEdge SAM-FS software. This chapter starts with an overview. The remaining sections describe the requirements you must meet or the actions you must take before you begin to install and configure your software. These requirements are as follows:
The Sun StorEdge QFS and Sun StorEdge SAM-FS file systems are similar, but this manual notes differences when necessary. The following subsections describe these software products and introduce additional file system features that you can enable:
The following sections contain file system descriptions and installation checklists that you can use when configuring the file systems.
The Sun StorEdge QFS file system shares many features with the file system included in the Sun StorEdge SAM-FS product. The Sun StorEdge QFS file system, however, is designed for high performance and contains more features than are supported within the Sun StorEdge SAM-FS file system.
You can use TABLE 1-1 as a checklist when configuring a Sun StorEdge QFS file system.
Defining the Sun StorEdge QFS Configuration By Creating the mcf File
The Sun StorEdge SAM-FS environment includes a general-purpose file system along with the storage and archive manager, SAM. The Sun StorEdge SAM-FS environment's file system allows data to be archived to automated libraries at device-rated speeds. In addition, data can also be archived to files in another file system through a process known as disk archiving. The file system in the Sun StorEdge SAM-FS environment is a complete file system. The user is presented with a standard file system interface and can read and write files as though they were all on primary disk storage.
You can use TABLE 1-2 as a checklist when configuring a Sun StorEdge SAM-FS file system.
(Optional) Verifying and Updating the st.conf and samst.conf Files
Defining the Sun StorEdge SAM-FS Configuration By Creating the mcf File
(Optional) Creating Parameters Files for Network-Attached Automated Libraries
If you purchase licenses for both Sun StorEdge QFS and Sun StorEdge SAM-FS software, you can run the Sun StorEdge QFS file system with the storage and archive manager found in the Sun StorEdge SAM-FS software. Such a system is referred to as Sun SAM-QFS.
This manual does not call out the Sun SAM-QFS configuration unless it is necessary for clarity. In this manual, you can assume that references to Sun StorEdge SAM-FS software also apply to Sun SAM-QFS configurations when describing storage and archive management. Likewise, you can assume that references to Sun StorEdge QFS also apply to Sun SAM-QFS configurations when describing file system design and capabilities.
For a depiction of a Sun SAM-QFS configuration, see FIGURE 1-2.
You can use TABLE 1-3 as a checklist when configuring a Sun SAM-QFS environment. To create a Sun SAM-QFS environment, follow the instructions for creating a Sun StorEdge SAM-FS file system, but when you define your file system in the mcf file, use the Sun StorEdge QFS instructions for defining file system devices.
(Optional) Verifying and Updating the st.conf and samst.conf Files
Defining the Sun StorEdge QFS Configuration By Creating the mcf File
Use the information in this section for configuring the file systems in your Sun SAM-QFS environment.
Defining the Sun StorEdge SAM-FS Configuration By Creating the mcf File
Use the information in this section for configuring the removable media devices in your Sun SAM-QFS environment.
(Optional) Creating Parameters Files for Network-Attached Automated Libraries
A Sun StorEdge QFS or Sun SAM-QFS shared file system is a distributed, multihost file system that you can mount on multiple Solaris operating system (OS) hosts. One Solaris OS host acts as the metadata server, and the others are clients. If you want the ability to change the metadata server, you must designate one or more clients as potential metadata servers.
You can use TABLE 1-4 as a checklist when configuring a Sun StorEdge QFS shared file system on Solaris OS hosts. If you are configuring a Sun StorEdge QFS shared file system on Sun Cluster hosts, see Sun StorEdge QFS File Systems in a Sun Cluster Environment for a checklist.
Defining the Sun StorEdge QFS Configuration By Creating the mcf File
The following sections describe the types of Sun StorEdge QFS file systems you can configure in a Sun Cluster environment.
A Sun StorEdge QFS local file system is local to one host. This manual contains all the instructions you need to configure this type of file system. A local file system is one that is configured on disks that are accessible only to the host system upon which the Sun StorEdge QFS software is installed. In a Sun Cluster environment, local file systems are accessible only to the node upon which they are created.
For a checklist to use when configuring a Sun StorEdge QFS file system as a local file system in a Sun Cluster environment, see TABLE 1-1.
A Sun StorEdge QFS highly available file system is a multihost file system resource that the Sun Cluster software can move to another node in the event of a Sun Cluster host failure. This file system uses the SUNW.HAStoragePlus resource type, and it can automatically fail over to other nodes.
You can use TABLE 1-5 as a checklist when configuring a Sun StorEdge QFS highly available file system on Sun Cluster hosts.
Defining the Sun StorEdge QFS Configuration By Creating the mcf File
A Sun StorEdge QFS shared file system is a scalable, multihost file system on Sun Cluster nodes. If you configure a Sun StorEdge QFS shared file system on Sun Cluster nodes, the Sun Cluster software moves this file system's metadata server operations to another node if the Sun Cluster node that is the metadata server fails. This file system uses the SUNW.qfs(5) resource type.
Unlike a Sun StorEdge QFS shared file system on Solaris OS hosts, all Sun Cluster nodes configured in the Sun StorEdge QFS shared file system are potential metadata servers.
If the metadata server for the Sun StorEdge QFS shared file system is a node in a Sun Cluster, all hosts of the file system must also be cluster nodes. No hosts can reside outside the cluster.
This manual describes how to install the software and select the devices to use.
TABLE 1-6 shows the tasks you must perform and the documentation you need to consult in order to configure a Sun StorEdge QFS shared file system.
You can use TABLE 1-7 as a checklist when configuring a Sun StorEdge QFS shared file system on Sun Cluster hosts.
Defining the Sun StorEdge QFS Configuration By Creating the mcf File
The Sun SAM-Remote client and Sun SAM-Remote server storage management system allows you to share libraries and other removable media devices in a Sun StorEdge SAM-FS or Sun SAM-QFS environment. All host systems included in a Sun SAM-Remote environment must have the same Sun StorEdge SAM-FS software release level installed and operational.
If you want to configure SAM-Remote, follow the procedures in this manual to create a Sun StorEdge SAM-FS file system. After the Sun StorEdge SAM-FS file system is tested and is known to be configured properly, you can use the instructions in the Sun SAM-Remote Administration Guide to enable remote storage and archive management.
The Sun StorEdge QFS and Sun StorEdge SAM-FS software must be installed on a Sun server based on UltraSPARC® technology.
For example, the following uname(1M) command retrieves information for ontheball:
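The uname(1M) example output itself is not reproduced in this excerpt. As a sketch, the following commands confirm the hardware platform before installation; the expected values in the comments assume an UltraSPARC system such as the example host ontheball:

```shell
# Confirm the processor type and machine hardware class.
# On an UltraSPARC host, these typically report "sparc" and "sun4u".
uname -p    # processor type
uname -m    # machine hardware class
```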
If you plan to install the SAM-QFS Manager graphical user interface tool, there are additional requirements for the server that you want to use as the web server host. For more information about these requirements, see (Optional) Verifying Requirements for the SAM-QFS Manager.
Sun StorEdge QFS and Sun StorEdge SAM-FS software packages run on many Sun workstations and servers. Before installation, you should verify the applicability of the hardware, the level of the Solaris Operating System (OS), and the patch release installed. To install the Sun StorEdge QFS or Sun StorEdge SAM-FS software, you also must ensure that you have root-level access to your system.
Repeat these steps for each host on which you want to install the Sun StorEdge QFS or Sun StorEdge SAM-FS software.
1. Verify that your system has a CD-ROM drive or that it can access the release package at the Sun Download Center.
The Sun Download Center is at the following URL:
http://www.sun.com/software/downloads
2. Log in to your system as root.
You must have superuser access to install the software.
3. Verify your system's Solaris OS level.
The software relies on properly configured Solaris software at one of the following minimum release levels:
For example, the following command retrieves operating system and release level information for ontheball:
ontheball% cat /etc/release
                       Solaris 9 4/04 s9s_u6wos_08a SPARC
           Copyright 2004 Sun Microsystems, Inc. All Rights Reserved.
                        Use is subject to license terms.
                             Assembled 22 March 2004
ontheball%
Sun Microsystems provides Solaris OS patches to customers with a maintenance contract by means of CD-ROM, anonymous FTP, and the Sun Microsystems SunSolveSM web site (http://sunsolve.sun.com).
To install a patch after you install the Sun StorEdge QFS or Sun StorEdge SAM-FS release packages, load the CD-ROM or transfer the patch software to your system. Follow the instructions outlined in the Patch Installation Instructions and Special Install Instructions in the README file included in the patch or jumbo patch cluster.
If you plan to install Sun StorEdge QFS or Sun StorEdge SAM-FS software in a multihost environment, for example in a Sun SAM-Remote configuration or in a Sun StorEdge QFS shared file system configuration, make sure that you install the same release level and patch collection on all hosts that you want to include in the configuration. All host systems included in a multihost environment must have the same Sun StorEdge QFS or Sun StorEdge SAM-FS software release level installed and operational.
The Sun StorEdge QFS and Sun StorEdge SAM-FS software packages require a certain amount of disk cache (file system devices) in order for them to create and manage data files and directories.
The disk devices or partitions do not require any special formatting. You might see better performance if you configure multiple devices across multiple interfaces (HBAs) and disk controllers.
The disks must be connected to the server through a Fibre Channel or SCSI controller. You can specify individual disk partitions for a disk, or you can use the entire disk as a disk cache. The software supports disk arrays, including those under the control of volume management software, such as Solstice DiskSuiteTM, Solaris Volume Manager, and other volume management software products.
Familiarize yourself with Sun StorEdge QFS and Sun StorEdge SAM-FS file system layout possibilities.
Describing all the aspects of Sun StorEdge QFS and Sun StorEdge SAM-FS file systems is beyond the scope of this manual. For information on volume management, file system layout, and other aspects of file system design, see the Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
1. Estimate the minimum disk cache requirements for Sun StorEdge QFS software (ma file systems).
2. Estimate the minimum disk cache requirements for Sun StorEdge SAM-FS software.
3. Estimate the minimum disk cache requirements for Sun SAM-QFS software (ma file systems plus the storage and archive manager).
You can create a Sun SAM-QFS file system when you install both the SUNWsamfsr and SUNWsamfsu packages and you are licensed for both Sun StorEdge QFS and Sun StorEdge SAM-FS software. You install the Sun StorEdge SAM-FS software package, and the license key enables the faster Sun StorEdge QFS file system. Use the following guidelines if you are creating Sun SAM-QFS file systems:
4. Enter the format(1M) command to verify that you have sufficient disk cache space.
Use the format(1M) command if you are installing a Sun StorEdge QFS or Sun StorEdge SAM-FS file system on a single server or if you are installing a Sun StorEdge QFS file system as a local file system on a Sun Cluster node.
Remember to use Ctrl-d to exit the format(1M) command.
CODE EXAMPLE 1-1 shows six disks attached to a server. There are two internal disks connected by means of controller 0 on targets 10 and 11 (c0t10d0 and c0t11d0). The other disks are external.
For the sake of clarity, the format(1M) command output in CODE EXAMPLE 1-1 has been edited.
CODE EXAMPLE 1-2 shows four disks attached to a server. There are two internal disks connected by means of controller 0 on targets 0 (c0t0d0) and 1 (c0t1d0). There are two external disks connected by means of controller 3 on targets 0 (c3t0d0) and 2 (c3t2d0).
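The referenced code examples are not reproduced in this excerpt. Output from format(1M) generally resembles the following sketch, which follows the four-disk layout described for CODE EXAMPLE 1-2; the disk geometry, models, and device paths shown are illustrative only:

```
# format < /dev/null
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,4000/scsi@3/sd@0,0
       1. c0t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,4000/scsi@3/sd@1,0
       2. c3t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,4000/scsi@4/sd@0,0
       3. c3t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,4000/scsi@4/sd@2,0
Specify disk (enter its number):
```

Redirecting /dev/null into format(1M) lists the available disks and exits without entering the interactive menu.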
The software requires a disk cache consisting of RAID devices, JBOD devices, or both. It also requires a certain amount of disk space in the / (root), /opt, and /var directories. The actual amount needed varies depending on the packages you install. TABLE 1-8 shows the minimum amount of disk space required in these various directories.
Note that the archiver data directory, the archiver queue files, and the log files are written to the /var directory, so the sizes shown in TABLE 1-8 should be considered a minimum amount for the /var directory.
The following procedure shows how to verify whether there is enough disk space on your system to accommodate the SUNWsamfsu and SUNWsamfsr packages.
1. Issue the df(1M) command to report the amount of free disk space.
CODE EXAMPLE 1-3 shows this command and its output.
2. Verify that there are at least 2,000 kilobytes available in the avail column for the / directory.
3. Verify that there are at least 21,000 kilobytes in the avail column for the /opt directory.
4. Verify that there are at least 6,000 kilobytes available in the /var directory.
A quantity of 30,000 kilobytes or more is recommended to allow for the growth of log files and other system files.
5. If there is not enough room for the software under each directory, repartition the disk to make more space available to each file system.
To repartition a disk, see your Sun Solaris system administration documentation.
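The checks in steps 2 through 4 can be scripted. The following sketch is not from the manual: the helper function name is our own, and it simply reads the avail column from df(1M) output and compares it against the minimums listed above:

```shell
# check_avail: verify that a mount point has at least the required
# number of kilobytes free, per the minimums in TABLE 1-8.
check_avail() {
    mountpoint=$1
    need_kb=$2
    # The "avail" figure is the third-from-last column of the last
    # line of df output.
    avail=$(df -k "$mountpoint" | tail -1 | awk '{print $(NF-2)}')
    if [ "$avail" -ge "$need_kb" ]; then
        echo "$mountpoint: OK ($avail KB available, $need_kb KB required)"
    else
        echo "$mountpoint: INSUFFICIENT ($avail KB available, $need_kb KB required)"
    fi
}

check_avail /    2000      # root (/) minimum
check_avail /opt 21000     # /opt minimum
check_avail /var 30000     # /var: 6,000 KB minimum, 30,000 KB recommended
```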
Perform this verification if you plan to use the Sun StorEdge SAM-FS software.
If you plan to archive to disk space in another file system, which is called disk archiving, verify the following:
If you plan to archive to removable media devices, your environment must include the following:
The Sun StorEdge SAM-FS environment supports a wide variety of removable media devices. You can obtain a list of currently supported drives and libraries from your Sun Microsystems sales or support staff. To make sure that your devices are attached and enumerated in an easily retrieved list, perform one or both of the following procedures:
This section explains how to attach removable media devices to a server. These are general guidelines for attaching removable media hardware to a server. For explicit instructions on connecting these peripherals to a server, refer to the hardware installation guide supplied by the vendor with the automated library and drives.
1. Ensure that you are on a console connection to the server.
2. Power off the server before connecting devices.
Typically, you power off central components first and then the peripheral equipment. So, use the init(1M) command to power off the server, as follows:
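The init(1M) invocation itself is not shown in this excerpt; on Solaris, run level 0 brings the system down to the PROM level:

```shell
# Bring the Solaris host down to the PROM (ok) prompt.  Run this only
# from a console connection on the server you intend to power off.
init 0
```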
This command brings down the system to the PROM level. At this point it is safe to power off the server and peripherals. For specific instructions regarding your equipment, see the documentation from the hardware vendor for proper power-on and power-off sequences.
3. Ensure that the removable media devices and the disk(s) to be used for the Sun StorEdge SAM-FS file system are connected and properly addressed.
4. (Optional) Ensure that the SCSI target IDs are unique for each SCSI initiator (host adapter).
Perform this step if you have libraries attached to the host system through a SCSI interface.
Avoid setting SCSI target IDs for peripherals to ID 7 because this ID is typically reserved for the initiator. For example, if you are using a SCSI host adapter with a previously attached disk drive set to use a target ID of 3, any additional peripheral connected to this bus must not have an ID of 3. Typically, the internal disk drive ID is 3 for SPARC® systems and 0 for UltraSPARC systems.
5. Power on the peripherals according to the manufacturer's recommended sequence.
Typically, you power on the outermost peripherals first, working toward more central components in sequence.
6. At the >ok prompt, enter the following command to disable autobooting:
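The command itself is not shown in this excerpt. At the OpenBoot PROM prompt, autobooting is controlled by the auto-boot? variable:

```
ok setenv auto-boot? false
auto-boot? =          false
```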
7. Type reset at the next prompt.
Reenabling autobooting is described later in this procedure.
8. (Optional) Conduct an inventory of target IDs and LUNs for each device connected to the host system through a SCSI interface.
Perform this step if you have libraries attached to the host system through a SCSI interface.
CODE EXAMPLE 1-4 shows the PROM >ok prompt and the output from the probe-scsi-all command.
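CODE EXAMPLE 1-4 is not reproduced in this excerpt. Output from probe-scsi-all generally resembles the following sketch; the controller path, target IDs, and device models are illustrative only:

```
{0} ok probe-scsi-all
/pci@1f,4000/scsi@3
Target 0
  Unit 0   Disk       SEAGATE ST318404LSUN18G
Target 5
  Unit 0   Removable  Device type 8  STK 9730 library
Target 6
  Unit 0   Removable  Tape  QUANTUM DLT7000
```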
9. (Optional) Save the output from the previous step.
If you performed the previous step, save the output. You use the information in this output for the next procedure, To Create a List of Devices.
10. (Optional) Conduct an inventory of target IDs and LUNs for each device connected to the host system through a Fibre Channel interface.
Perform this step if you have libraries or tape drives attached to the host system through a Fibre Channel interface.
CODE EXAMPLE 1-5 shows the commands to use to locate the host adapter directory, to select an item, and to display the Fibre Channel host bus adapter (HBA) devices.
If the server does not acknowledge all the known devices (disk drives, tape or optical drives, the automated library, and so on), you should check the cabling. Cabling is often the problem when devices and controllers are not communicating. Do not proceed until all devices appear when probed.
11. (Optional) Save the output from the previous step.
If you performed the previous step, save the output. You use the information in this output for the next procedure, To Create a List of Devices.
At the >ok prompt, enter the following command to enable autobooting:
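The command itself is not shown in this excerpt; at the OpenBoot PROM prompt, reenable autobooting by resetting the auto-boot? variable:

```
ok setenv auto-boot? true
auto-boot? =          true
```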
Due to special driver requirements, no device information appears in /var/adm/messages for magneto-optical devices or libraries until after you install the Sun StorEdge SAM-FS software packages.
15. Disable autocleaning and autoloading.
If your automated library supports autocleaning or autoloading, disable those features when using that library with the Sun StorEdge SAM-FS software. Consult the documentation from your library's manufacturer for information on disabling autocleaning and autoloading.
16. Go to Creating a List of Devices.
The device(s) that you intend to use must be attached and recognized by the server upon which you intend to install the Sun StorEdge SAM-FS software. To configure the Sun StorEdge SAM-FS software, you need to know the following about your devices:
For SCSI-attached drives, you need to know each drive's SCSI target ID and LUN.
For Fibre Channel-attached drives, you need to know each drive's LUN and worldwide node name.
Libraries that use SCSI or Fibre Channel attachments are called direct-attached libraries. For SCSI-attached libraries, you need to know each library's SCSI target ID and LUN. For Fibre Channel-attached libraries, you need to know each library's LUN and worldwide node name.
Libraries that use a network attachment are called network-attached libraries. You cannot configure network-attached libraries in the existing system configuration files. You need to create a parameters file for each network-attached library; this is explained later in the installation process.
This procedure shows you how to gather device information.
1. Make an inventory list of your devices.
Fill in TABLE 1-9 to include the name, manufacturer, model, and connection types for each device that you want to include in your Sun StorEdge SAM-FS environment.
2. Retain TABLE 1-9 for use again later in the configuration procedure.
Make sure that you have a software license key for the Sun StorEdge QFS or Sun StorEdge SAM-FS release that you are installing.
If you do not have a Sun Microsystems license key for the release level that you are installing, contact your authorized service provider (ASP) or Sun. When you contact Sun for a license, you will be asked to provide information regarding your environment.
For a Sun StorEdge QFS license, you will need to provide information such as the following:
For a Sun StorEdge SAM-FS license, you will need to provide information such as the following:
The license keys for the Sun StorEdge QFS and Sun StorEdge SAM-FS packages allow the system to run indefinitely unless one of the following conditions is present:
If your license expires, you can mount the file systems, but you cannot archive or stage files in a Sun StorEdge SAM-FS environment.
After your initial installation, if you upgrade your software or if you change your environment's configuration, you might need to change your software license. Changes to the environment that might necessitate upgrading your license include adding a library or changing a host system. If you have questions regarding your existing license, you can enter the samcmd(1M) l command (lowercase l for license). If you need to upgrade your license, contact your Sun sales representative.
Note - If you are upgrading from a Sun StorEdge QFS or Sun StorEdge SAM-FS 4.0 or 4.1 release, you might need to upgrade your license depending on other changes in your environment.
Make sure that you have a copy of the release software. You can obtain the Sun StorEdge QFS and Sun StorEdge SAM-FS software from the Sun Download Center or on a CD-ROM. Contact your authorized service provider (ASP) or your Sun sales representative if you have questions on obtaining the software.
After the release, upgrade patches are available from the Sun Microsystems SunSolve web site (http://sunsolve.sun.com).
1. Enter the following URL in your browser:
http://www.sun.com/software/download/sys_admin.html
2. Click on the Sun StorEdge QFS or Sun StorEdge SAM-FS software package you want to receive.
3. Follow the instructions at the web site for downloading the software.
1. Log in as root on your Sun StorEdge QFS or Sun StorEdge SAM-FS server.
The Sun StorEdge QFS and Sun StorEdge SAM-FS software uses the Sun Solaris operating system (OS) packaging utilities for adding and removing software. You must be logged in as superuser (root) to make changes to software packages. The pkgadd(1M) utility prompts you to confirm various actions necessary to install the packages.
2. Insert the CD into the CD-ROM drive.
The system should automatically detect the CD's presence. If it does not, issue the commands shown in CODE EXAMPLE 1-6 to stop and start the Sun Solaris Volume Manager and to change to the directory that contains the Sun StorEdge QFS and Sun StorEdge SAM-FS software packages.
# /etc/init.d/volmgt stop
# /etc/init.d/volmgt start
# volcheck
# cd /cdrom/cdrom0
On the CD, the packages reside in the /cdrom/cdrom0 directory organized by Sun Solaris version.
If you need to remove the 4.2 software packages in the future, perform the following steps.
1. (Optional) Remove the SAM-QFS Manager software from the management station and from the Sun StorEdge QFS and Sun StorEdge SAM-FS server.
If you have installed the SAM-QFS Manager software, perform the procedure described in Removing the SAM-QFS Manager Software.
2. Use the pkginfo(1) command to determine which Sun StorEdge QFS and Sun StorEdge SAM-FS software packages are installed on your system.
To find the Sun StorEdge QFS 4.2 packages, enter the following command:
To find the Sun StorEdge SAM-FS 4.2 packages, enter the following command:
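The exact commands were elided from this excerpt. A common way to list the installed packages, assumed here, is to filter pkginfo(1) output by package prefix:

```shell
# List installed Sun StorEdge QFS packages (SUNWqfs* prefix):
pkginfo | grep SUNWqfs

# List installed Sun StorEdge SAM-FS packages (SUNWsamfs* prefix):
pkginfo | grep SUNWsamfs
```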
3. Use the pkgrm(1M) command to remove the existing software.
If you are using any optional packages, make sure you remove them before removing the main SUNWqfsr/SUNWqfsu or SUNWsamfsr/SUNWsamfsu packages. In addition, make sure that you remove the SUNWqfsu and SUNWsamfsu packages before removing the SUNWqfsr and SUNWsamfsr packages.
Example 1. To remove all possible Sun StorEdge QFS packages, enter the following command:
SUNWqfsr must be the last package removed.
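A sketch of the removal command, assuming only the base packages are installed (remove any optional packages first, and keep SUNWqfsr last as noted above):

```shell
# Remove the QFS packages; SUNWqfsu first, SUNWqfsr last.
pkgrm SUNWqfsu SUNWqfsr
```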
Example 2. To remove all possible Sun StorEdge SAM-FS packages, enter the following command:
SUNWsamfsr must be the last package removed.
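A sketch of the removal command, assuming only the base packages are installed (remove any optional packages first, and keep SUNWsamfsr last as noted above):

```shell
# Remove the SAM-FS packages; SUNWsamfsu first, SUNWsamfsr last.
pkgrm SUNWsamfsu SUNWsamfsr
```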
The Sun StorEdge QFS and Sun StorEdge SAM-FS software interoperates with many different hardware and software products from third-party vendors. Depending on your environment, you might need to upgrade other software or firmware before installing or upgrading the Sun StorEdge QFS or Sun StorEdge SAM-FS packages. Consult the Sun StorEdge QFS and Sun StorEdge SAM-FS 4.2 Release Notes for information pertaining to library model numbers, firmware levels, and other compatibility information.
Perform this verification if you plan to configure a Sun StorEdge QFS shared file system.
The following sections describe the system requirements that must be met in order for you to install a Sun StorEdge QFS shared file system.
There must be at least one Solaris metadata server. If you want to be able to change the metadata server, there must be at least one other host that can become the metadata server; these other host systems are known as potential metadata servers. On a Sun Cluster, all nodes included in a Sun StorEdge QFS shared file system are potential metadata servers.
The following are configuration recommendations with regard to metadata:
Ensure that your configuration meets the following operating system and hardware requirements:
Ensure that your configuration meets the following Sun StorEdge QFS requirements:
The system writes the preceding message to the metadata server's /var/adm/messages file.
If you want to be able to change the metadata server in a Sun SAM-QFS environment, the following requirements must be met:
Perform this verification if you want to install a Sun StorEdge QFS file system in a Sun Cluster environment.
You can configure both a Sun StorEdge QFS file system and a Sun StorEdge QFS shared file system in a Sun Cluster environment, as follows:
Also make sure that your environment meets the requirements listed in (Optional) Verifying Sun StorEdge QFS Shared File System Requirements.
If you plan to configure a Sun StorEdge QFS shared file system in a Sun Cluster environment, verify the following:
1. Ensure that you have between two and eight UltraSPARC hosts to use as a cluster.
2. Ensure that you have the following minimum software levels installed on each cluster node:
Each node must have the same Sun Cluster software level and Sun Cluster patch collection. You must install Sun StorEdge QFS software packages on each node in the cluster that will host a Sun StorEdge QFS file system.
3. Ensure that you are familiar with how disks are used in a Sun Cluster.
In a Sun Cluster, the disk cache space must be configured on storage that is highly available and redundant. Ensure that you have a good understanding of the concepts in the Sun Cluster System Administration Guide for Solaris OS.
You should also be familiar with Sun Cluster operations. For information on Sun Cluster operations, see the following manuals:
4. Verify your disk space according to the instructions in Verifying Disk Space.
Verifying Disk Space explains how much disk space to allow for the various directories that the file systems need.
5. Verify that you have the correct kinds of disk devices.
For the file system to be highly available, it must be constructed from highly available devices. The types of disk devices you can use depend on the kind of file system you are configuring and whether you are using a volume manager, as follows:
When you specify these devices in your mcf file, you use the /dev/did devices from the scdidadm(1M) output. For more information about this, see Defining the Sun StorEdge QFS Configuration By Creating the mcf File.
Caution - Do not use a volume manager if you are going to configure a Sun StorEdge QFS shared file system on a Sun Cluster. Data corruption can result.
If you want to configure from raw devices, use Sun Cluster global devices. Use the output from the scdidadm(1M) command to determine the names of the global devices and substitute global for did when specifying the devices in the mcf(1) file. Global devices are accessible from all nodes in a Sun Cluster, even if these devices are not physically attached to all nodes. If all nodes that have a hardware connection to the disk crash or lose their connection, then the remaining nodes cannot access the disk. File systems created on global devices are not necessarily highly available.
If you want to use a volume manager, use one of the following:
Use scsetup(1M) to register volume-managed devices with the Sun Cluster framework prior to configuring your file system.
If you are unsure about your devices, issue the scdidadm(1M) command with its -L option to determine which devices in your Sun Cluster are highly available. This command lists the paths of the devices in the DID configuration file. In the output from the scdidadm(1M) command, look for devices that have two or more DID devices listed with the exact same DID device number. Such devices are highly available in a Sun Cluster and can also be configured as global devices for a file system, even if they directly connect only to a single node.
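The duplicate-matching step above can be automated. The following sketch counts how many node paths map to each DID device in scdidadm -L output; a count of two or more marks a highly available candidate. The node names and device paths in the sample are invented for illustration:

```shell
# Sample scdidadm -L output: instance number, node:device path, DID path.
# (Hypothetical two-node cluster; not from a real configuration.)
scdidadm_output='1   ash:/dev/rdsk/c0t0d0                  /dev/did/rdsk/d1
4   ash:/dev/rdsk/c6t50020F2300004921d0   /dev/did/rdsk/d4
4   elm:/dev/rdsk/c6t50020F2300004921d0   /dev/did/rdsk/d4
3   elm:/dev/rdsk/c1t0d0                  /dev/did/rdsk/d3'

# Count node paths per DID device; two or more means the device is
# visible from multiple nodes.
ha_devices=$(printf '%s\n' "$scdidadm_output" |
    awk '{count[$3]++} END {for (d in count) if (count[d] >= 2) print d}')
echo "$ha_devices"
```

In this sample, only /dev/did/rdsk/d4 appears on both nodes and so is the highly available candidate.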
I/O requests issued to global devices from a node other than the direct-attached node are issued over the Sun Cluster interconnect. These single-node global devices cease to be available when all nodes that have direct access to the device are unavailable.
After the set of highly available devices has been determined, check for device redundancy. All devices must employ mirroring (RAID-1) or striping (RAID-5) to ensure continued operation in the event of a failure, as follows:
For more information about volume sizing and redundancy configurations, see the Solaris Volume Manager Administration Guide or your VERITAS Volume Manager documentation.
To find suitable devices, first determine which devices are highly available, and then determine which devices are redundant.
CODE EXAMPLE 1-7 shows the scdidadm(1M) Sun Cluster command. This example uses the -L option for this command to list paths of the devices in the DID configuration file for all nodes. In the output from the scdidadm(1M) command, look for output that shows a device that is visible from two or more nodes and that bears the same worldwide name. These are global devices.
CODE EXAMPLE 1-7 uses Sun StorEdge T3 arrays in a RAID-5 configuration. The command output on your disk devices might differ depending on the equipment you use.
CODE EXAMPLE 1-7 shows that you can use devices 4 through 9 for configuring the disk cache for a file system.
There are two types of redundancy to consider in a Sun Cluster environment: RAID-based redundancy and data path redundancy. The implications of these redundancies are as follows:
To determine redundancy, consult the hardware documentation for your disk controllers and disk devices. You need to know (or need to investigate) whether the disk controller or disk devices that are reported by scdidadm(1M) are on redundant storage. For information, see the storage controller vendor's documentation set and view the current controller configuration.
The scdidadm(1M) command in this example lists device /dev/rdsk/c6t50020F2300004921d0, which is DID device /dev/did/rdsk/d5 or global device /dev/global/rdsk/d5. This device has two partitions (0 and 1), each of which yields 212152320 blocks for use by a Sun StorEdge QFS highly available file system as /dev/global/rdsk/d5s0 and /dev/global/rdsk/d5s1.
You need to issue the scdidadm(1M) and format(1M) commands for all devices to be configured for use by the Sun StorEdge QFS highly available file system.
You cannot use a volume manager to construct redundant devices to support a Sun StorEdge QFS shared file system.
For more information about configuring devices that are on redundant storage, see your Sun Cluster software installation documentation.
For optimal file system performance, the metadata and file data should be accessible through multiple interconnects and multiple disk controllers. In addition, plan to write file data to separate, redundant, highly available disk devices.
Plan to write your file system's metadata to RAID-1 disks. You can write file data to either RAID-1 or RAID-5 disks.
If you are configuring a Sun StorEdge QFS highly available file system and you are using a volume manager, the best performance is realized when the file system stripes data over all controllers and disks, rather than having the volume manager perform the striping. You should use a volume manager only to provide redundancy.
Perform this verification if you want to use SAM-QFS Manager to configure, control, monitor, or reconfigure a Sun StorEdge QFS or Sun StorEdge SAM-FS environment through a web server.
You can install the SAM-QFS Manager in one of the following configurations:
After the SAM-QFS Manager software is installed, you can invoke the SAM-QFS Manager from any machine on the network that is allowed access to its web server.
If you plan to use SAM-QFS Manager, the host upon which you are configuring the SAM-QFS Manager software must meet the requirements described in the following sections:
You must install the SAM-QFS Manager on a SPARC server. Additional minimum hardware requirements are as follows:
Ensure that your installation meets the following browser requirements:
Make sure that one of the following minimum Solaris levels is installed on the web server:
The SAM-QFS Manager installation packages include revisions of the following software at the minimum levels indicated:
During the installation procedure, you will be asked to answer questions. Based on your answers, the installation software can install the correct revisions for you if the compatible revisions of these software packages are not present.
Perform this verification if you want to monitor your configuration through Simple Network Management Protocol (SNMP) software.
You can configure the Sun StorEdge QFS and Sun StorEdge SAM-FS software to notify you when potential problems occur in its environment. The SNMP software manages information exchange between network devices such as servers, automated libraries, and drives. When the Sun StorEdge QFS and Sun StorEdge SAM-FS software detects potential problems in its environment, it sends information to a management station, which allows you to monitor the system remotely.
The management stations you can use include the following:
If you want to enable SNMP traps, make sure that the management station software is installed and operating correctly before installing the Sun StorEdge QFS and Sun StorEdge SAM-FS software. Refer to the documentation that came with your management station software for information on installation and use.
The types of problems, or events, that the Sun StorEdge QFS and Sun StorEdge SAM-FS software can detect are defined in the Sun StorEdge QFS and Sun StorEdge SAM-FS Management Information Base (MIB). The events include errors in configuration, tapealert(1M) events, and other atypical system activity. For complete information on the MIB, see /opt/SUNWsamfs/mibs/SUN-SAM-MIB.mib after the packages are installed.
The Sun StorEdge QFS and Sun StorEdge SAM-FS software supports the TRAP SNMP (V2c) protocol. The software does not support GET-REQUEST, GETNEXT-REQUEST, and SET-REQUEST.
Copyright © 2004, Sun Microsystems, Inc. All rights reserved.