C H A P T E R  8

Sun StorEdge QFS in a Sun Cluster Environment

This chapter describes how the Sun StorEdge QFS software works in a Sun Cluster environment. It also provides configuration examples for a Sun StorEdge QFS shared file system in a Sun Cluster environment and for an unshared Sun StorEdge QFS file system in a Sun Cluster environment.

This chapter contains the following sections:


Before You Begin

With version 4.2 of the Sun StorEdge QFS software, you can install a Sun StorEdge QFS file system in a Sun Cluster environment and can configure the file system for high availability. The configuration method you use varies, depending on whether your file system is shared or unshared.

This chapter assumes that you are an experienced user of both the Sun StorEdge QFS software and the Sun Cluster environment. It also assumes you have performed either or both of the following:

It is recommended that you read the following documentation before continuing with this chapter:

Local Disks

Global Devices

Device ID (DID)

Disk Device Groups

Disk Device Group Failover

Local and Global Namespaces

Cluster File Systems

HAStoragePlus Resource Type

Volume Managers



Note - All references in this document to "Oracle Real Application Clusters" apply also to "Oracle Parallel Server" unless otherwise specified.




Restrictions

The following restrictions apply to the Sun StorEdge QFS software in a Sun Cluster environment:


How the Sun Cluster and the Sun StorEdge QFS Software Interact

The shared file system uses Sun Cluster Disk ID (DID) support to enable data access by the Sun Cluster data service for Oracle Real Application Clusters. The unshared file system uses global device volume support and volume manager-controlled volume support to enable data access by failover applications supported by Sun Cluster.

Data Access With a Shared File System

With DID support, each device that is under the control of the Sun Cluster system, whether it is multipathed or not, is assigned a unique disk ID. For every unique DID device, there is a corresponding global device. The Sun StorEdge QFS shared file system can be configured on redundant storage that consists only of DID devices (/dev/did/*), where DID devices are accessible only on nodes that have a direct connection to the device through a host bus adapter (HBA).

Configuring the Sun StorEdge QFS shared file system on DID devices and configuring the SUNW.qfs resource type for use with the file system makes the file system's shared metadata server highly available. The Sun Cluster data service for Oracle Real Application Clusters can then access data from within the file system. Additionally, the Sun StorEdge QFS Sun Cluster agent can then automatically relocate the metadata server for the file system as necessary.

Data Access With an Unshared File System

A global device is Sun Cluster's mechanism for accessing an underlying DID device from any node within the Sun Cluster, assuming that the nodes hosting the DID device are available. Global devices and volume manager-controlled volumes can be made accessible from every node in the Sun Cluster. The unshared Sun StorEdge QFS file system can be configured on redundant storage that consists of either raw global devices (/dev/global/*) or volume manager-controlled volumes.

Configuring the unshared file system on these global devices or volume manager-controlled devices and configuring the HAStoragePlus resource type for use with the file system makes the file system highly available with the ability to fail over to other nodes.


About Configuration Examples

This chapter provides configuration examples for the Sun StorEdge QFS shared file system on a Sun Cluster and for the unshared Sun StorEdge QFS file system on a Sun Cluster. All configuration examples are based on a platform consisting of the following:

All configurations in this chapter are also based on CODE EXAMPLE 8-1. In this code example, the scdidadm(1M) command displays the disk identifier (DID) devices, and the -L option lists the DID device paths, including those on all nodes in the Sun Cluster system.

CODE EXAMPLE 8-1 Command That Lists the DID Devices and Their DID Device Paths
# scdidadm -L
1   scnode-A:/dev/dsk/c0t0d0    /dev/did/dsk/d1
2   scnode-A:/dev/dsk/c0t1d0    /dev/did/dsk/d2
3   scnode-A:/dev/dsk/c0t6d0    /dev/did/dsk/d3
4   scnode-A:/dev/dsk/c6t1d0    /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t1d0    /dev/did/dsk/d4
5   scnode-A:/dev/dsk/c6t2d0    /dev/did/dsk/d5
5   scnode-B:/dev/dsk/c7t2d0    /dev/did/dsk/d5
6   scnode-A:/dev/dsk/c6t3d0    /dev/did/dsk/d6
6   scnode-B:/dev/dsk/c7t3d0    /dev/did/dsk/d6
7   scnode-A:/dev/dsk/c6t4d0    /dev/did/dsk/d7
7   scnode-B:/dev/dsk/c7t4d0    /dev/did/dsk/d7
8   scnode-A:/dev/dsk/c6t5d0    /dev/did/dsk/d8    
8   scnode-B:/dev/dsk/c7t5d0    /dev/did/dsk/d8
9   scnode-B:/dev/dsk/c0t6d0    /dev/did/dsk/d9    
10  scnode-B:/dev/dsk/c1t0d0    /dev/did/dsk/d10
11  scnode-B:/dev/dsk/c1t1d0    /dev/did/dsk/d11

CODE EXAMPLE 8-1 shows that DID devices d4 through d8 are accessible from both Sun Cluster systems (scnode-A and scnode-B). With the Sun StorEdge QFS file system sizing requirements and with knowledge of your intended application and configuration, you can decide on the most appropriate apportioning of devices to file systems. By using the Solaris format(1M) command, you can determine the sizing and partition layout of each DID device and resize the partitions on each DID device, if needed. Given the available DID devices, you can also configure multiple devices and their associated partitions to contain the file systems, according to your sizing requirements.
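
If you only need to inspect a device's current label and partition sizes before deciding how to apportion it, the read-only prtvtoc(1M) command is a quick alternative to running format(1M) interactively. A sketch, using device d4 from CODE EXAMPLE 8-1:

# prtvtoc /dev/did/rdsk/d4s2

The output lists each slice's starting sector and sector count, which you can convert to sizes when planning the file systems.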


Configuring a Sun StorEdge QFS Shared File System on a Sun Cluster

When you install a Sun StorEdge QFS shared file system on a Sun Cluster, you configure the file system's metadata server under the SUNW.qfs resource type. This makes the metadata server highly available and enables the Sun StorEdge QFS shared file system to be globally accessible on all configured nodes in the Sun Cluster.

A Sun StorEdge QFS shared file system is typically associated with a scalable application. The Sun StorEdge QFS shared file system is mounted on, and the scalable application is active on, one or more Sun Cluster nodes.

If a node in the Sun Cluster system fails, or if you switch over the resource group, the metadata server resource (Sun StorEdge QFS Sun Cluster agent) automatically relocates the file system's metadata server as necessary. This ensures that the other nodes' access to the shared file system is not affected.



Note - To manually relocate the metadata server for a Sun StorEdge QFS shared file system that is under Sun Cluster control, you must use the Sun Cluster administrative commands. For more information about these commands, see the Sun Cluster documentation.
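
For example, after the configuration described later in this chapter is in place (resource group qfs-rg under the SUNW.qfs resource type), a manual relocation of the metadata server amounts to switching that resource group to another node. This is a sketch only; see the Sun Cluster documentation for the authoritative command syntax.

# scswitch -z -g qfs-rg -h scnode-B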



Metadata Server Resource Considerations

When the Sun Cluster boots, the metadata server resource ensures that the file system is mounted on all nodes that are part of the resource group. However, the file system mount on those nodes is not monitored. Therefore, in certain failure cases, the file system might be unavailable on certain nodes, even if the metadata server resource is in the online state.
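
Because the mounts themselves are not monitored, you might want to confirm directly on each node that the file system is mounted. A minimal check, assuming the /global/qfs1 mount point used in the examples that follow:

# mount | grep /global/qfs1

If the command produces no output on a node, the file system is not mounted there.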

If you use Sun Cluster administrative commands to bring the metadata server resource group offline, the file system under the metadata server resource remains mounted on the nodes. To unmount the file system (with the exception of a node that is shut down), you must bring the metadata server resource group into the unmanaged state by using the appropriate Sun Cluster administrative command.

To remount the file system at a later time, you must bring the resource group into a managed state and then into an online state.
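
The following sketch shows one possible sequence, assuming the qfs-rg resource group and qfs-res resource names that are used later in this chapter: disable the resource, take the group offline, move it to the unmanaged state (which unmounts the file system), and then reverse the process to remount. Verify the exact option requirements in the scswitch(1M) man page.

# scswitch -n -j qfs-res
# scswitch -F -g qfs-rg
# scswitch -u -g qfs-rg
# scswitch -o -g qfs-rg
# scswitch -e -j qfs-res
# scswitch -Z -g qfs-rg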

Example Configuration

This section shows an example of the Sun StorEdge QFS shared file system installed on raw DID devices with the Sun Cluster data service for Oracle Real Application Clusters. For detailed information on how to use the Sun StorEdge QFS shared file system with the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

As shown in CODE EXAMPLE 8-1, DID devices d4 through d8 are highly available and are contained on controller-based storage. For you to configure a Sun StorEdge QFS shared file system on a Sun Cluster, the controller-based storage must support device redundancy by using RAID-1 or RAID-5.

For simplicity in this example, two file systems are created:

Additionally, device d4 is used for Sun StorEdge QFS metadata. This device has two 50 GB slices. The remaining devices, d5 through d8, are used for Sun StorEdge QFS file data.

This configuration involves five main steps, as detailed in the following subsections:

1. Preparing to create Sun StorEdge QFS file systems.

2. Creating the file systems and configuring the Sun Cluster nodes.

3. Validating the configuration.

4. Configuring the network name service.

5. Configuring the Sun Cluster data service for Oracle Real Application Clusters.


procedure icon  To Prepare to Create Sun StorEdge QFS Shared File Systems

Steps 1 through 3 in this procedure must be performed from one node in the Sun Cluster system. In this example, the steps are performed from node scnode-A.

1. From one node in the Sun Cluster system, use the format(1M) utility to lay out partitions on /dev/did/dsk/d4.

CODE EXAMPLE 8-2 Laying Out Partitions on /dev/did/dsk/d4
# format /dev/did/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (unnamed):
Total disk cylinders available: 12800 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       1 -  6400       50.00GB    (6400/0/0)  104857600
  1        usr    wm    6401 - 12800       50.00GB    (6400/0/0)  104857600
  2     backup    wu       0 - 12800      100.00GB    (6400/0/0)  209715200
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 50 GB partition. Partition 1 is configured to be the same size as partition 0.

2. Use the format(1M) utility to lay out partitions on /dev/did/dsk/d5.

CODE EXAMPLE 8-3 Laying Out Partitions on /dev/did/dsk/d5
# format /dev/did/rdsk/d5s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (unnamed):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       1 - 34529      269.77GB    (34529/0/0)  565723136
  1        usr    wm       0 - 0            0         (0/0/0)      
  2     backup    wu       0 - 34529      269.77GB    (34530/0/0)  565739520
  3 unassigned    wu       0                0         (0/0/0)              0
  4 unassigned    wu       0                0         (0/0/0)              0
  5 unassigned    wu       0                0         (0/0/0)              0
  6 unassigned    wu       0                0         (0/0/0)              0
  7 unassigned    wu       0                0         (0/0/0)              0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

3. Replicate the device d5 partitioning to devices d6 through d8.

This example shows the command for device d6.

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d6s2
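
The same command, with only the target device changed, replicates the partitioning to the remaining devices. For example:

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d7s2
# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d8s2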

4. On all nodes that are potential hosts of the file systems, perform the following:

a. Configure the six partitions into two Sun StorEdge QFS shared file systems by adding two new configuration entries (qfs1 and qfs2) to the mcf file.

CODE EXAMPLE 8-4 Adding Configuration Entries to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
#
# Sun StorEdge QFS file system configurations
#
# Equipment           Equipment  Equipment  Family  Device  Additional
# Identifier          Ordinal    Type       Set     State   Parameters
# ------------------  ---------  ---------  ------  ------  ----------
qfs1                  100        ma         qfs1    -       shared
/dev/did/dsk/d4s0     101        mm         qfs1    -
/dev/did/dsk/d5s0     102        mr         qfs1    -
/dev/did/dsk/d6s0     103        mr         qfs1    -
 
qfs2                  200        ma         qfs2    -       shared
/dev/did/dsk/d4s1     201        mm         qfs2    -
/dev/did/dsk/d7s0     202        mr         qfs2    -
/dev/did/dsk/d8s0     203        mr         qfs2    -
 
EOF

For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

b. Edit the /etc/opt/SUNWsamfs/samfs.cmd file to add the mount options that are required for the Sun Cluster data service for Oracle Real Application Clusters.

CODE EXAMPLE 8-5 Example samfs.cmd File
fs = qfs2
   stripe = 1
   sync_meta = 1
   mh_write
   qwrite
   forcedirectio
   nstreams = 1024
   rdlease = 600

For more information about the mount options that are required by the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

c. Validate that the configuration is correct.

Be sure to perform this validation after you have configured the mcf file and the samfs.cmd file on each node.

# /opt/SUNWsamfs/sbin/sam-fsd 


procedure icon  To Create the Sun StorEdge QFS Shared File System and Configure Sun Cluster Nodes

Perform this procedure for each file system you are creating. This example describes how to create the qfs1 file system.

1. Obtain the Sun Cluster private interconnect names by using the following command.

CODE EXAMPLE 8-6 Obtaining the Sun Cluster Private Interconnect Names
# /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:"
Cluster node name:                                 scnode-A
  Node private hostname:                           clusternode1-priv
Cluster node name:                                 scnode-B
  Node private hostname:                           clusternode2-priv

2. On all nodes that are potential hosts of the file system, perform the following:

a. Use the samd(1M) config command, which signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.

# samd config

b. Create the Sun StorEdge QFS shared hosts file for the file system (/etc/opt/SUNWsamfs/hosts.family-set-name), based on the Sun Cluster's private interconnect names that you obtained in Step 1.

3. Edit each Sun StorEdge QFS shared file system's hosts configuration file so that it uses the Sun Cluster private interconnect names.

For Sun Cluster failover and fencing operations, the Sun StorEdge QFS shared file system must use the same interconnect names as the Sun Cluster system.

CODE EXAMPLE 8-7 Editing Each File System's Host Configuration File
# cat > hosts.qfs1 <<EOF
# File  /etc/opt/SUNWsamfs/hosts.qfs1
# Host          Host IP                                 Server   Not  Server
# Name          Addresses                               Priority Used Host
# ------------- --------------------------------------- -------- ---- ----
scnode-A        clusternode1-priv                         1        -    server
scnode-B        clusternode2-priv                         2        -
 
EOF

4. From one node in the Sun Cluster, use the sammkfs(1M) -S command to create the Sun StorEdge QFS shared file system.

# sammkfs -S qfs1 < /dev/null

5. On all nodes that are potential hosts of the file system, perform the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 permissions so that other users can read and traverse it.

CODE EXAMPLE 8-8 Creating a Global Mount Point for the qfs1 File System
# mkdir /global/qfs1
# chmod 755 /global/qfs1
# chown root:other /global/qfs1

b. Add the Sun StorEdge QFS shared file system entry to the /etc/vfstab file.

CODE EXAMPLE 8-9 Adding the Shared File System Entry to the /etc/vfstab File
# cat >> /etc/vfstab <<EOF
# device       device       mount      FS      fsck    mount      mount
# to mount     to fsck      point      type    pass    at boot    options
#
qfs1             -     /global/qfs1    samfs    -       no        shared
EOF


procedure icon  To Validate the Configuration

Perform this procedure for each file system you create. This example describes how to validate the configuration for file system qfs1.

1. If you do not know which node is acting as the metadata server for the file system, use the samsharefs(1M) -R command.

CODE EXAMPLE 8-10 Determining Which Node is the Metadata Server
# samsharefs -R qfs1
# Host file for family set 'qfs1'
#
# Version: 4    Generation: 1    Count: 2
# Server = host 1/scnode-A, length = 165
#
scnode-A clusternode1-priv 1 - server
scnode-B clusternode2-priv 2 -

The example shows that the metadata server for qfs1 is scnode-A.

2. Use the mount(1M) command to mount the file system first on the metadata server and then on each node in the Sun Cluster system.

It is very important that you mount the file system on the metadata server first.

CODE EXAMPLE 8-11 Mounting File System, qfs1, on a Sun Cluster Node
# mount qfs1
# ls /global/qfs1
lost+found/

3. Validate voluntary failover by issuing the samsharefs(1M) -s command, which moves the metadata server for the Sun StorEdge QFS shared file system between nodes.

CODE EXAMPLE 8-12 Switching Over File System qfs1 to Validate Voluntary Failover
# samsharefs -s scnode-B qfs1
# ls /global/qfs1
lost+found/
# samsharefs -s scnode-A qfs1
# ls /global/qfs1
lost+found

4. Validate that the required Sun Cluster resource type is added to the resource configuration.

# scrgadm -p | egrep "SUNW.qfs"

5. If you cannot find the Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the resource configuration.

# scrgadm -a -t SUNW.qfs

6. Register and configure the SUNW.qfs resource type.

CODE EXAMPLE 8-13 Configuring the SUNW.qfs Resource
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
	   -x QFSFileSystem=/global/qfs1,/global/qfs2

7. Use the scswitch(1M) -Z -g command to bring the resource group online.

# scswitch -Z -g qfs-rg

8. Ensure that the resource group is functional on all configured nodes.

CODE EXAMPLE 8-14 Testing the Resource Group on Configured Nodes
# scswitch -z -g qfs-rg -h scnode-B
# scswitch -z -g qfs-rg -h scnode-A


procedure icon  To Configure the Sun Cluster Data Service for Oracle Real Application Clusters

This section provides an example of how to configure the data service for Oracle Real Application Clusters for use with Sun StorEdge QFS shared file systems. For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

1. Install the data service as described in the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

2. Mount the Sun StorEdge QFS shared file systems.

3. Set the correct ownership and permissions on the file systems so that the Oracle database operations are successful.

CODE EXAMPLE 8-15 Setting Ownership and Permissions on the File Systems qfs1 and qfs2
# chown oracle:dba /global/qfs1 /global/qfs2
# chmod 755 /global/qfs1 /global/qfs2

4. As the oracle user, create the subdirectories that are required for the Oracle Real Application Clusters installation and database files.

CODE EXAMPLE 8-16 Creating Subdirectories Within File Systems qfs1 and qfs2
$ id
uid=120(oracle) gid=520(dba)
$ mkdir /global/qfs1/oracle_install
$ mkdir /global/qfs2/oracle_db

The Oracle Real Application Clusters installation uses the /global/qfs1/oracle_install directory path as the value for the ORACLE_HOME environment variable that is used in Oracle operations. The Oracle Real Application Clusters database files' path is prefixed with the /global/qfs2/oracle_db directory path.
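
For example, the oracle user's environment might export this path as ORACLE_HOME before the installer is run. This is a sketch; adapt it to your shell and site conventions:

$ ORACLE_HOME=/global/qfs1/oracle_install
$ export ORACLE_HOME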

5. Install the Oracle Real Application Clusters software.

During the installation, provide the path for the installation as defined in Step 4 (/global/qfs1/oracle_install).

6. Create the Oracle Real Application Clusters database.

During database creation, specify that you want the database files located in the qfs2 shared file system.

7. If you are automating the startup and shutdown of Oracle Real Application Clusters database instances, ensure that the required dependencies for resource groups and resources are set.

For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
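
As a sketch only, using a hypothetical resource name: if the Oracle Real Application Clusters server resource were named rac-server-res, a dependency on the qfs-res metadata server resource configured earlier in this chapter could be expressed as follows. The resources and properties that are actually required are described in the data service guide.

# scrgadm -c -j rac-server-res -y Resource_dependencies=qfs-res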



Note - If you plan to automate the startup and shutdown of Oracle Real Application Clusters database instances, you must use Sun Cluster 3.1 9/04 or a compatible version.




Configuring an Unshared File System on a Sun Cluster

When you install the unshared Sun StorEdge QFS file system on a Sun Cluster system, you configure the file system for high availability (HA) under the Sun Cluster HAStoragePlus resource type. An unshared Sun StorEdge QFS file system on a Sun Cluster is typically associated with one or more failover applications, such as HA-NFS, HA-ORACLE, and so on. Both the unshared Sun StorEdge QFS file system and the failover applications are active in a single resource group; the resource group is active on one Sun Cluster node at a time.

An unshared Sun StorEdge QFS file system is mounted on a single node at any given time. If the Sun Cluster fault monitor detects an error, or if you switch over the resource group, the unshared Sun StorEdge QFS file system and its associated HA applications fail over to another node, depending on how the resource group has been previously configured.

Any file system contained on a Sun Cluster global device group (/dev/global/*) can be used with the HAStoragePlus resource type. When a file system is configured with the HAStoragePlus resource type, it becomes part of a Sun Cluster resource group, and the Sun Cluster Resource Group Manager (RGM) mounts the file system locally on the node where the resource group is active. When the RGM switches or fails the resource group over to another configured Sun Cluster node, the unshared Sun StorEdge QFS file system is unmounted from the current node and remounted on the new node.

Each unshared Sun StorEdge QFS file system requires a minimum of two raw disk partitions or volume manager-controlled volumes (Solstice DiskSuite/Solaris Volume Manager or VERITAS Clustered Volume Manager), one for Sun StorEdge QFS metadata (inodes) and one for Sun StorEdge QFS file data. Configuring multiple partitions or volumes across multiple disks through multiple data paths increases unshared Sun StorEdge QFS file system performance. For information about sizing metadata and file data partitions, see Design Basics.
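
As an illustration of this minimum layout only, the following hypothetical mcf entry defines an unshared file system from one metadata (mm) partition and one file data (mr) partition. The family set name qfsha, the equipment ordinals, and the device /dev/global/dsk/d100 are placeholders, not devices from the examples in this chapter.

qfsha                    50    ma    qfsha    on
/dev/global/dsk/d100s0   51    mm    qfsha
/dev/global/dsk/d100s1   52    mr    qfsha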

This section provides three examples of Sun Cluster configurations using the unshared Sun StorEdge QFS file system. In these examples, a file system is configured in combination with an HA-NFS file mount point on the following:

For simplicity in all of these configurations, ten percent of each file system is used for Sun StorEdge QFS metadata and the remaining space is used for Sun StorEdge QFS file data. For information about sizing and disk layout considerations, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

Example 1

This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on raw global devices. For this configuration, the raw global devices must be contained on controller-based storage. This controller-based storage must support device redundancy by using RAID-1 or RAID-5.

As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. The HAStoragePlus resource type requires the use of global devices, so each DID device (/dev/did/dsk/dx) is also accessible as a global device through the path /dev/global/dsk/dx.

The main steps in this example are as follows:

1. Prepare to create an unshared file system.

2. Create the file system and configure the Sun Cluster nodes.

3. Configure the network name service and the IPMP validation testing.

4. Configure HA-NFS and configure the file system for high availability.


procedure icon  To Prepare to Create an Unshared Sun StorEdge QFS File System

1. Use the format(1M) utility to lay out the partitions on /dev/global/dsk/d4.

CODE EXAMPLE 8-17 Command That Lays Out Partitions on /dev/global/dsk/d4
# format /dev/global/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (original):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
Part      Tag     Flag      Cylinders         Size            Blocks
 0   unassigned    wm       1 -  3543        20.76GB    (3543/0/0)   43536384
 1   unassigned    wm    3544 - 34529       181.56GB    (30986/0/0) 380755968
 2   backup        wu       0 - 34529       202.32GB    (34530/0/0) 424304640
 3   unassigned    wu       0                 0         (0/0/0)             0
 4   unassigned    wu       0                 0         (0/0/0)             0
 5   unassigned    wu       0                 0         (0/0/0)             0
 6   unassigned    wu       0                 0         (0/0/0)             0
 7   unassigned    wu       0                 0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20 GB partition. The remaining space is configured into partition 1.

2. Replicate the global device d4 partitioning to global devices d5 through d7.

This example shows the command for global device d5.

# prtvtoc /dev/global/rdsk/d4s2 | fmthard \
-s - /dev/global/rdsk/d5s2

3. On all nodes that are potential hosts of the file system, perform the following:

a. Configure the eight partitions (four global devices, with two partitions each) into a Sun StorEdge QFS file system by adding a new file system entry to the mcf file.

CODE EXAMPLE 8-18 Adding the New File System to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
 
#
# Sun StorEdge QFS file system configurations
#
# Equipment             Equipment  Equipment  Family   Device  Additional
# Identifier            Ordinal    Type       Set      State   Parameters
# --------------------  ---------  ---------  -------  ------  ----------
qfsnfs1                 100        ma         qfsnfs1  on
/dev/global/dsk/d4s0    101        mm         qfsnfs1
/dev/global/dsk/d5s0    102        mm         qfsnfs1
/dev/global/dsk/d6s0    103        mm         qfsnfs1
/dev/global/dsk/d7s0    104        mm         qfsnfs1
/dev/global/dsk/d4s1    105        mr         qfsnfs1
/dev/global/dsk/d5s1    106        mr         qfsnfs1
/dev/global/dsk/d6s1    107        mr         qfsnfs1
/dev/global/dsk/d7s1    108        mr         qfsnfs1
EOF

For information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

b. Validate that the configuration information you added to the mcf file is correct.

It is important to complete this step before you configure the Sun StorEdge QFS file system under the HAStoragePlus resource type.

# /opt/SUNWsamfs/sbin/sam-fsd


procedure icon  To Create the Sun StorEdge QFS File System and Configure the Sun Cluster Nodes

1. On all nodes that are potential hosts of the file system, use the samd(1M) config command, which signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.

# samd config

2. From one node in the Sun Cluster, use the sammkfs(1M) command to create the file system.

# sammkfs qfsnfs1 < /dev/null

3. On all nodes that are potential hosts of the file system, perform the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 permissions so that other users can read and traverse it.

CODE EXAMPLE 8-19 Creating a Global Mount Point for File System qfsnfs1
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.

CODE EXAMPLE 8-20 Adding the File System Entry to the /etc/vfstab File
# cat >> /etc/vfstab <<EOF
 
# device      device         mount       FS       fsck     mount      mount
# to mount    to fsck        point       type     pass     at boot    options
#
qfsnfs1         -       /global/qfsnfs1    samfs       2         no       sync_meta=1
EOF

c. Validate the configuration by mounting and unmounting the file system.

CODE EXAMPLE 8-21 Validating the Configuration
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1

4. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration.

CODE EXAMPLE 8-22 Searching for the Required Sun Cluster Resource Types
# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

5. If you cannot find a required Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the configuration.

CODE EXAMPLE 8-23 Adding the Required Sun Cluster Resource Types

# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs



procedure icon  To Configure the Network Name Service and the IPMP Validation Testing

This section provides an example of how to configure the network name service and the IPMP Validation Testing for your Sun Cluster nodes. For more information, see the Sun Cluster Software Installation Guide for Solaris OS.

1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it looks in the Sun Cluster and files for node names.

Perform this step before you configure the NIS server.

CODE EXAMPLE 8-24 Editing the /etc/nsswitch.conf File to Look in the Sun Cluster and Files for Node Names
# cat /etc/nsswitch.conf 
#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf; it 
# uses NIS (YP) in conjunction with files.
#
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:    files nis
group:     files nis
 
# Cluster s/w and local /etc/hosts file take precedence over NIS
hosts:    cluster files nis [NOTFOUND=return]
ipnodes:  files
# Uncomment the following line and comment out the above to resolve
# both IPv4 and IPv6 addresses from the ipnodes databases. Note that
# IPv4 addresses are searched in all of the ipnodes databases before 
# searching the hosts databases. Before turning this option on, consult
# the Network Administration Guide for more details on using IPv6.
# ipnodes: nis [NOTFOUND=return] files
 
networks: nis [NOTFOUND=return] files
protocols: nis [NOTFOUND=return] files
rpc: nis [NOTFOUND=return] files
ethers: nis [NOTFOUND=return] files
netmasks: nis [NOTFOUND=return] files
bootparams: nis [NOTFOUND=return] files
publickey: nis [NOTFOUND=return] files
 
netgroup: nis
 
automount: files nis
aliases: files nis
[remainder of file content not shown]

2. Verify that the changes you made to the /etc/nsswitch.conf are correct.

CODE EXAMPLE 8-25 Verifying the /etc/nsswitch.conf File Changes
# grep '^hosts:' /etc/nsswitch.conf
hosts:    cluster files nis [NOTFOUND=return]
#

3. Set up IPMP validation testing by using available network adapters.

The adapters qfe2 and qfe3 are used as examples.

a. Statically configure the IPMP test address for each adapter.

CODE EXAMPLE 8-26 Statically Configuring the IPMP Test Address for Adapters qfe2 and qfe3

# cat >> /etc/hosts << EOF
#
# Test addresses for scnode-A
#
192.168.2.2      `uname -n`-qfe2
192.168.2.3      `uname -n`-qfe2-test
192.168.3.2      `uname -n`-qfe3
192.168.3.3      `uname -n`-qfe3-test
 
#
# Test addresses for scnode-B
#
192.168.2.4      `uname -n`-qfe2
192.168.2.5      `uname -n`-qfe2-test
192.168.3.4      `uname -n`-qfe3
192.168.3.5      `uname -n`-qfe3-test
EOF

b. Dynamically configure the IPMP adapters.

CODE EXAMPLE 8-27 Dynamically Configuring the IPMP Adapters, qfe2 and qfe3
# ifconfig qfe2 plumb `uname -n`-qfe2-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up 
# ifconfig qfe2 addif `uname -n`-qfe2 up
# ifconfig qfe3 plumb `uname -n`-qfe3-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up
# ifconfig qfe3 addif `uname -n`-qfe3 up

c. Verify the configuration.

CODE EXAMPLE 8-28 Verifying the Configuration of the IPMP Adapters, qfe2 and qfe3
# cat > /etc/hostname.qfe2 << EOF
`uname -n`-qfe2-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe2 up
EOF
 
# cat > /etc/hostname.qfe3 << EOF
`uname -n`-qfe3-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe3 up
EOF


procedure icon  To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability

This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.

1. Create the NFS share point for the Sun StorEdge QFS file system.

Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.

CODE EXAMPLE 8-29 Creating the NFS Share Point for the File System
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \ /global/nfs/SUNW.nfs/dfstab.nfs1-res

2. Create the NFS resource group.

# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs

3. Add the NFS logical host to the /etc/hosts table, using the address for your site.

CODE EXAMPLE 8-30 Adding the NFS Logical Host to the /etc/hosts Table
# cat >> /etc/hosts << EOF
#
# IP Addresses for LogicalHostnames
#
192.168.2.10     lh-nfs1
EOF

4. Use the scrgadm(1M) -a -L -g command to add the logical host to the NFS resource group.

# scrgadm -a -L -g nfs-rg -l lh-nfs1

5. Use the scrgadm(1M) -c -g command to configure the HAStoragePlus resource type.

CODE EXAMPLE 8-31 Configuring the HAStoragePlus Resource Type
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B 
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
	-x FilesystemMountPoints=/global/qfsnfs1 \
	-x FilesystemCheckCommand=/bin/true

6. Bring the resource group online.

# scswitch -Z -g nfs-rg

7. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.

CODE EXAMPLE 8-32 Configuring the NFS Resource Type to Depend on the HAStoragePlus Resource
# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
	-y Resource_dependencies=qfsnfs1-res

8. Bring the NFS resource online.

# scswitch -e -j nfs1-res

The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and is also highly available.

9. Before announcing the availability of the highly available NFS file system on the Sun StorEdge QFS file system, ensure that the resource group can be switched between all configured nodes without errors and can be taken online and offline.

CODE EXAMPLE 8-33 Testing the Resource Groups
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg

Example 2

This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on volumes controlled by Solstice DiskSuite/Solaris Volume Manager software. With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5 volumes. Typically, Solaris Volume Manager is used only when the underlying controller-based storage is not redundant.

As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. Solaris Volume Manager requires that DID devices be used to populate the raw devices from which Solaris Volume Manager can configure volumes. Solaris Volume Manager creates globally accessible disk groups, which can then be used by the HAStoragePlus resource type for creating Sun StorEdge QFS file systems.

This example follows these steps:

1. Prepare the Solstice DiskSuite/Solaris Volume Manager software.

2. Prepare to create an unshared file system.

3. Create the file system and configure the Sun Cluster nodes.

4. Configure the network name service and the IPMP validation testing.

5. Configure HA-NFS and configure the file system for high availability.


procedure icon  To Prepare the Solstice DiskSuite/Solaris Volume Manager Software

1. Determine whether a Solaris Volume Manager metadatabase (metadb) is already configured on each node that is a potential host of the Sun StorEdge QFS file system.

CODE EXAMPLE 8-34 Determining Whether a Solaris Volume Manager Metadatabase is Already Configured
# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        16              8192            /dev/dsk/c2t0d0s7

If the metadb(1M) command does not return a metadatabase configuration, then on each node, create three or more database replicas on one or more local disks. Each replica must be at least 16 MB in size. For more information about creating the metadatabase configuration, see the Sun Cluster Software Installation Guide for Solaris OS.
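
A sketch of creating the replicas, modeled on the sample output above: it assumes that slices c0t0d0s7, c1t0d0s7, and c2t0d0s7 are unused local slices on each node; substitute slices that are actually available on your systems.

# metadb -a -f c0t0d0s7 c1t0d0s7 c2t0d0s7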

2. Create an HA-NFS disk group to contain all Solaris Volume Manager volumes for this Sun StorEdge QFS file system.

# metaset -s nfsdg -a -h scnode-A scnode-B

3. Add DID devices d4 through d7 to the pool of raw devices from which Solaris Volume Manager can create volumes.

CODE EXAMPLE 8-35 Adding DID Devices d4 Through d7 to the Pool of Raw Devices
# metaset -s nfsdg -a /dev/did/dsk/d4 /dev/did/dsk/d5 \
	/dev/did/dsk/d6 /dev/did/dsk/d7 


procedure icon  To Prepare to Create a Sun StorEdge QFS File System

1. Use the format(1M) utility to lay out partitions on /dev/global/dsk/d4.

CODE EXAMPLE 8-36 Command That Lays Out Partitions on /dev/global/dsk/d4
# format /dev/global/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (original):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
Part      Tag     Flag      Cylinders         Size            Blocks
 0   unassigned    wm       1 -  3543        20.76GB    (3543/0/0)   43536384
 1   unassigned    wm    3544 - 34529       181.56GB    (30986/0/0) 380755968
 2   backup        wu       0 - 34529       202.32GB    (34530/0/0) 424304640
 3   unassigned    wu       0                 0         (0/0/0)             0
 4   unassigned    wu       0                 0         (0/0/0)             0
 5   unassigned    wu       0                 0         (0/0/0)             0
 6   unassigned    wu       0                 0         (0/0/0)             0
 7   unassigned    wu       0                 0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

CODE EXAMPLE 8-36 shows that partition or slice 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20 GB partition. The remaining space is configured into partition 1.

2. Replicate the partitioning of DID device d4 to DID devices d5 through d7.

This example shows the command for device d5.

# prtvtoc /dev/global/rdsk/d4s2 | fmthard \
-s - /dev/global/rdsk/d5s2

3. Configure the eight partitions (four DID devices, two partitions each) into two RAID-1 (mirrored) Sun StorEdge QFS metadata volumes and one RAID-5 (parity-striped) Sun StorEdge QFS file data volume.

Combine partition (slice) 0 of these four drives into two RAID-1 sets.

CODE EXAMPLE 8-37 Combining Partition 0 of the Four Drives Into Two RAID-1 Metadata Volumes
# metainit -s nfsdg -f d1 1 1 /dev/did/dsk/d4s0
# metainit -s nfsdg -f d2 1 1  /dev/did/dsk/d5s0
# metainit -s nfsdg d10 -m d1 d2
# metainit -s nfsdg -f d3 1 1 /dev/did/dsk/d6s0
# metainit -s nfsdg -f d4 1 1  /dev/did/dsk/d7s0
# metainit -s nfsdg d11 -m d3 d4

4. Combine partition 1 of these four drives into a RAID-5 set.

CODE EXAMPLE 8-38 Combining Partition 1 of the Four Drives Into a RAID-5 Set
# metainit -s nfsdg d20 -p /dev/did/dsk/d4s1 205848574b
# metainit -s nfsdg d21 -p /dev/did/dsk/d5s1 205848574b
# metainit -s nfsdg d22 -p /dev/did/dsk/d6s1 205848574b
# metainit -s nfsdg d23 -p /dev/did/dsk/d7s1 205848574b
# metainit -s nfsdg d30 -r d20 d21 d22 d23

5. On each node that is a potential host of the file system, add the Sun StorEdge QFS file system entry to the mcf file.

CODE EXAMPLE 8-39 Adding the Sun StorEdge QFS File System Entry to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
 
# Sun StorEdge QFS file system configurations
#
# Equipment              Equipment  Equipment  Family   Device  Additional
# Identifier             Ordinal    Type       Set      State   Parameters
# ---------------------  ---------  ---------  -------  ------  ----------
qfsnfs1                  100        ma         qfsnfs1  on
/dev/md/nfsdg/dsk/d10    101        mm         qfsnfs1
/dev/md/nfsdg/dsk/d11    102        mm         qfsnfs1
/dev/md/nfsdg/dsk/d30    103        mr         qfsnfs1
EOF

For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

6. Validate that the mcf configuration is correct on each node.

# /opt/SUNWsamfs/sbin/sam-fsd


procedure icon  To Create the Sun StorEdge QFS File System and Configure Sun Cluster Nodes

1. On each node that is a potential host of the file system, use the samd(1M) config command.

This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.

# samd config

2. Enable Solaris Volume Manager mediator detection for the disk group, which assists the Sun Cluster system in the detection of drive errors.

CODE EXAMPLE 8-40 Enabling Solaris Volume Manager Mediator Detection for the Disk Group
# metaset -s nfsdg -a -m scnode-A
# metaset -s nfsdg -a -m scnode-B

3. On each node that is a potential host of the file system, verify that the node can take ownership of the NFS disk group.

# metaset -s nfsdg -t

4. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the Sun StorEdge QFS file system.

# sammkfs qfsnfs1 < /dev/null

5. On each node that is a potential host of the file system, perform the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 permissions so that other users can read and traverse it.

CODE EXAMPLE 8-41 Creation of a Global Mount Point for the qfsnfs1 File System
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.

CODE EXAMPLE 8-42 Editing the /etc/vfstab File to Add the File System Entry
# cat >> /etc/vfstab << EOF
# device       device       mount      FS      fsck    mount      mount
# to mount     to fsck      point      type    pass    at boot    options
#
qfsnfs1         -    /global/qfsnfs1   samfs    2       no      sync_meta=1
EOF

c. Ensure that the nodes are configured correctly by mounting and unmounting the file system.

Perform this step one node at a time. In this example, the qfsnfs1 file system is being mounted and unmounted on one node.

CODE EXAMPLE 8-43 Validating the Configuration
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1



Note - When testing the mount point, use the metaset(1M) -r (release) and -t (take) options to move the nfsdg disk group between Sun Cluster nodes. Then use the samd(1M) config command to alert the daemon to the configuration changes.
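
For example, a sketch using the nfsdg disk group and the qfsnfs1 file system from this procedure. On the node that currently owns the disk group:

# umount qfsnfs1
# metaset -s nfsdg -r

Then, on the node that is to host the file system next:

# metaset -s nfsdg -t
# samd config
# mount qfsnfs1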



6. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration.

# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands.

CODE EXAMPLE 8-44 Adding the Resource Types to the Resource Configuration
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs


procedure icon  To Configure the Network Name Service and the IPMP Validation Testing

This section provides an example of how to configure the network name service and IPMP validation testing for use with the Sun StorEdge QFS software. For more information, see the System Administration Guide: IP Services and the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).

1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it looks in the Sun Cluster and files for node names.

Perform this step before you configure the NIS server.

CODE EXAMPLE 8-45 Editing the /etc/nsswitch.conf File to Look in the Sun Cluster and Files for Node Names
# cat /etc/nsswitch.conf 
#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf; it 
# uses NIS (YP) in conjunction with files.
#
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:    files nis
group:     files nis
 
# Cluster s/w and local /etc/hosts file take precedence over NIS
hosts:    cluster files nis [NOTFOUND=return]
ipnodes:  files
# Uncomment the following line and comment out the above to resolve
# both IPv4 and IPv6 addresses from the ipnodes databases. Note that
# IPv4 addresses are searched in all of the ipnodes databases before 
# searching the hosts databases. Before turning this option on, consult
# the Network Administration Guide for more details on using IPv6.
# ipnodes: nis [NOTFOUND=return] files
 
networks: nis [NOTFOUND=return] files
protocols: nis [NOTFOUND=return] files
rpc: nis [NOTFOUND=return] files
ethers: nis [NOTFOUND=return] files
netmasks: nis [NOTFOUND=return] files
bootparams: nis [NOTFOUND=return] files
publickey: nis [NOTFOUND=return] files
 
netgroup: nis
 
automount: files nis
aliases: files nis
[remainder of file content not shown]

2. Verify that the changes you made to the /etc/nsswitch.conf are correct.

CODE EXAMPLE 8-46 Verifying the /etc/nsswitch.conf File Changes
# grep '^hosts:' /etc/nsswitch.conf
hosts:    cluster files nis [NOTFOUND=return]
#

3. Set up IPMP validation testing using available network adapters.

The adapters qfe2 and qfe3 are used in the examples.

a. Statically configure the IPMP test address for each adapter.

CODE EXAMPLE 8-47 Statically Configuring the IPMP Test Address for Each Adapter
# cat >> /etc/hosts << EOF
#
# Test addresses for scnode-A
#
192.168.2.2      `uname -n`-qfe2
192.168.2.3      `uname -n`-qfe2-test
192.168.3.2      `uname -n`-qfe3
192.168.3.3      `uname -n`-qfe3-test
#
# Test addresses for scnode-B
#
192.168.2.4      `uname -n`-qfe2
192.168.2.5      `uname -n`-qfe2-test
192.168.3.4      `uname -n`-qfe3
192.168.3.5      `uname -n`-qfe3-test
#
# IP Addresses for LogicalHostnames
#
192.168.2.10     lh-nfs1
 
EOF

b. Dynamically configure the IPMP adapters.

CODE EXAMPLE 8-48 Dynamically Configuring the IPMP Adapters
# ifconfig qfe2 plumb `uname -n`-qfe2-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up 
# ifconfig qfe2 addif `uname -n`-qfe2 up
# ifconfig qfe3 plumb `uname -n`-qfe3-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up
# ifconfig qfe3 addif `uname -n`-qfe3 up

c. Validate the configuration.

CODE EXAMPLE 8-49 Validating the Configuration of the IPMP Adapters
# cat > /etc/hostname.qfe2 << EOF
`uname -n`-qfe2-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe2 up
EOF
# cat > /etc/hostname.qfe3 << EOF
`uname -n`-qfe3-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe3 up
EOF


procedure icon  To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability

This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.

1. Create the NFS share point for the Sun StorEdge QFS file system.

Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.

CODE EXAMPLE 8-50 Creating the NFS Share Point for the File System
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \ /global/nfs/SUNW.nfs/dfstab.nfs1-res

2. Create the NFS resource group.

# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs

3. Add a logical host to the NFS resource group.

# scrgadm -a -L -g nfs-rg -l lh-nfs1

4. Configure the HAStoragePlus resource type.

CODE EXAMPLE 8-51 Configuring the HAStoragePlus Resource Type
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B 
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
	-x FilesystemMountPoints=/global/qfsnfs1 \
	-x FilesystemCheckCommand=/bin/true

5. Bring the resource group online.

# scswitch -Z -g nfs-rg

6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.

CODE EXAMPLE 8-52 Configuring the NFS Resource Type
# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
	-y Resource_dependencies=qfsnfs1-res

7. Use the scswitch(1M) -e -j command to bring the NFS resource online.

# scswitch -e -j nfs1-res

The NFS resource /net/lh-nfs1/global/qfsnfs1 is fully configured and highly available.

8. Before you announce the availability of the highly available NFS file system on the Sun StorEdge QFS file system, ensure that the resource group can be switched between all configured nodes without errors and can be taken online and offline.

CODE EXAMPLE 8-53 Testing the Resource Group
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg

Example 3

This example shows how to configure the unshared Sun StorEdge QFS file system with HA-NFS on VERITAS Clustered Volume Manager-controlled volumes (VxVM volumes). With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5. Typically, VxVM is used only when the underlying storage is not redundant.

As shown in CODE EXAMPLE 8-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. VxVM requires that shared DID devices be used to populate the raw devices from which VxVM configures volumes. VxVM creates highly available disk groups by registering the disk groups as Sun Cluster device groups. These disk groups are not globally accessible, but can be failed over, making them accessible to at least one node. The disk groups can be used by the HAStoragePlus resource type.



Note - The VxVM packages are separate, additional packages that must be installed, patched, and licensed. For information about installing VxVM, see the VxVM Volume Manager documentation.



To use Sun StorEdge QFS software with VxVM, you must install the following VxVM packages:

This example follows these steps:

1. Configure the VxVM software.

2. Prepare to create an unshared file system.

3. Create the file system and configure the Sun Cluster nodes.

4. Validate the configuration.

5. Configure the network name service and the IPMP validation testing.

6. Configure HA-NFS and configure the file system for high availability.


procedure icon  To Configure the VxVM Software

This section provides an example of how to configure the VxVM software for use with the Sun StorEdge QFS software. For more detailed information about the VxVM software, see the VxVM documentation.

1. Determine the status of DMP (dynamic multipathing) for VERITAS.

# vxdmpadm listctlr all

2. Use the scdidadm(1M) utility to determine the HBA controller number of the physical devices to be used by VxVM.

As shown in the following example, the multi-node accessible storage is available from scnode-A using HBA controller c6, and from node scnode-B using controller c7.

CODE EXAMPLE 8-54 Determining the HBA Controller Number of the Physical Devices
# scdidadm -L
[ some output deleted]
4   scnode-A:/dev/dsk/c6t60020F20000037D13E26595500062F06d0 /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t60020F20000037D13E26595500062F06d0 /dev/did/dsk/d4

3. Use VxVM to configure all available storage as seen through controller c6.

# vxdmpadm getsubpaths ctlr=c6

4. Place all of this controller's devices under VxVM control.

# vxdiskadd fabric_

5. Create a disk group and then ensure that the new disk group is imported and active on this system.

# /usr/sbin/vxdg init nfsdg nfsdg00=disk0 \
nfsdg01=disk1 nfsdg02=disk2 nfsdg03=disk3

CODE EXAMPLE 8-55 Validating that the Disk Group is Active on This System
# vxdg import nfsdg
# vxdg free

 

6. Configure two mirrored volumes for Sun StorEdge QFS metadata and two mirrored volumes for Sun StorEdge QFS file data.

These mirroring operations are performed as background processes, given the length of time they take to complete.

CODE EXAMPLE 8-56 Configure Metadata and Data Volumes
# vxassist -g nfsdg make m1 10607001b
# vxassist -g nfsdg mirror m1&
# vxassist -g nfsdg make m2 10607001b
# vxassist -g nfsdg mirror m2&
# vxassist -g nfsdg make m10 201529000b
# vxassist -g nfsdg mirror m10&
# vxassist -g nfsdg make m11 201529000b
# vxassist -g nfsdg mirror m11&

7. Configure the previously created VxVM disk group as a Sun Cluster-controlled disk group.

# scconf -a -D type=vxvm,name=nfsdg,nodelist=scnode-A:scnode-B


procedure icon  To Prepare to Create a Sun StorEdge QFS File System

Perform this procedure on each node that is a potential host of the file system.

1. Add the Sun StorEdge QFS file system entry to the mcf file.

CODE EXAMPLE 8-57 Adding the File System to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf   <<EOF
# Sun StorEdge QFS file system configurations
#
# Equipment              Equipment  Equipment  Family   Device  Additional
# Identifier             Ordinal    Type       Set      State   Parameters
# ---------------------  ---------  ---------  -------  ------  ----------
qfsnfs1                  100        ma         qfsnfs1  on
/dev/vx/dsk/nfsdg/m1     101        mm         qfsnfs1
/dev/vx/dsk/nfsdg/m2     102        mm         qfsnfs1
/dev/vx/dsk/nfsdg/m10    103        mr         qfsnfs1
/dev/vx/dsk/nfsdg/m11    104        mr         qfsnfs1
EOF

For more information about the mcf file, see the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

2. Validate that the mcf configuration is correct.

# /opt/SUNWsamfs/sbin/sam-fsd


procedure icon  To Create the Sun StorEdge QFS File System and Configure Sun Cluster Nodes

1. On each node that is a potential host of the file system, use the samd(1M) config command.

This command signals to the Sun StorEdge QFS daemon that a new Sun StorEdge QFS configuration is available.

# samd config

2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the Sun StorEdge QFS file system.

# sammkfs qfsnfs1 < /dev/null

3. On each node that is a potential host of the file system, perform the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 permissions so that other users can read and traverse it.

CODE EXAMPLE 8-58 Creating a Global Mount Point for the qfsnfs1 File System
# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorEdge QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.

CODE EXAMPLE 8-59 Adding the File System Entry to the /etc/vfstab File
# cat >> /etc/vfstab << EOF
# device       device       mount       FS      fsck    mount      mount
# to mount     to fsck      point       type    pass    at boot    options
# 
qfsnfs1           -    /global/qfsnfs1  samfs    2        no      sync_meta=1
EOF


procedure icon  To Validate the Configuration

1. Validate that all nodes that are potential hosts of the file system are configured correctly.

To do this, move the disk group that you created in To Configure the VxVM Software to the node, and then mount and unmount the file system. Perform this validation one node at a time.

CODE EXAMPLE 8-60 Validating the Configuration
# scswitch -z -D nfsdg -h scnode-B
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1

2. Ensure that the required Sun Cluster resource types have been added to the resource configuration.

# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

If a required Sun Cluster resource type is missing, add it with one or more of the following commands.

CODE EXAMPLE 8-61 Adding Sun Cluster Resources to the Resource Configuration
# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs


procedure icon  To Configure the Network Name Service and the IPMP Validation Testing

This section provides an example of how to configure the network name service and the IPMP validation testing. For more information, see the Sun Cluster Software Installation Guide for Solaris OS.

1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that node names are looked up through the Sun Cluster software and the local files before NIS.

Perform this step before you configure the NIS server.

CODE EXAMPLE 8-62 Editing the /etc/nsswitch.conf File to Look Up Node Names Through the Sun Cluster Software and Local Files
# cat /etc/nsswitch.conf 
#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf; it 
# uses NIS (YP) in conjunction with files.
#
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:    files nis
group:     files nis
 
# Cluster s/w and local /etc/hosts file take precedence over NIS
hosts:    cluster files nis [NOTFOUND=return]
ipnodes:  files
# Uncomment the following line and comment out the above to resolve
# both IPv4 and IPv6 addresses from the ipnodes databases. Note that
# IPv4 addresses are searched in all of the ipnodes databases before 
# searching the hosts databases. Before turning this option on, consult
# the Network Administration Guide for more details on using IPv6.
# ipnodes: nis [NOTFOUND=return] files
 
networks: nis [NOTFOUND=return] files
protocols: nis [NOTFOUND=return] files
rpc: nis [NOTFOUND=return] files
ethers: nis [NOTFOUND=return] files
netmasks: nis [NOTFOUND=return] files
bootparams: nis [NOTFOUND=return] files
publickey: nis [NOTFOUND=return] files
 
netgroup: nis
 
automount: files nis
aliases: files nis
[remainder of file content not shown]

2. Verify that the changes you made to the /etc/nsswitch.conf file are correct.

CODE EXAMPLE 8-63 Verifying the /etc/nsswitch.conf File Changes
# grep '^hosts:' /etc/nsswitch.conf
hosts:    cluster files nis [NOTFOUND=return]
#

3. Set up IPMP validation testing using available network adapters.

The adapters qfe2 and qfe3 are used as examples.

a. Statically configure IPMP test addresses for each adapter.

CODE EXAMPLE 8-64 Statically Configuring the IPMP Test Address for Each Adapter
# cat >> /etc/hosts << EOF
#
# Test addresses for scnode-A
#
192.168.2.2      `uname -n`-qfe2
192.168.2.3      `uname -n`-qfe2-test
192.168.3.2      `uname -n`-qfe3
192.168.3.3      `uname -n`-qfe3-test
#
# Test addresses for scnode-B
#
192.168.2.4      `uname -n`-qfe2
192.168.2.5      `uname -n`-qfe2-test
192.168.3.4      `uname -n`-qfe3
192.168.3.5      `uname -n`-qfe3-test
#
# IP Addresses for LogicalHostnames
#
192.168.2.10     lh-qfs1
EOF

b. Dynamically configure IPMP adapters.

CODE EXAMPLE 8-65 Dynamically Configuring the IPMP Adapters
# ifconfig qfe2 plumb `uname -n`-qfe2-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up 
# ifconfig qfe2 addif `uname -n`-qfe2 up
# ifconfig qfe3 plumb `uname -n`-qfe3-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up
# ifconfig qfe3 addif `uname -n`-qfe3 up

c. Create the /etc/hostname.qfe2 and /etc/hostname.qfe3 files so that the IPMP configuration persists across system reboots.

CODE EXAMPLE 8-66 Configuring the /etc/hostname Files for the IPMP Adapters
# cat > /etc/hostname.qfe2 << EOF
`uname -n`-qfe2-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe2 up
EOF
 
# cat > /etc/hostname.qfe3 << EOF
`uname -n`-qfe3-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe3 up
EOF


procedure icon  To Configure HA-NFS and the Sun StorEdge QFS File System for High Availability

This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.

1. On each node that is a potential host of the file system, create the NFS share point for the Sun StorEdge QFS file system.

Note that the share point is contained within the /global file system, not within the Sun StorEdge QFS file system.

CODE EXAMPLE 8-67 Creating the NFS Share Point for the File System
# mkdir -p /global/qfsnfs1/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
	/global/qfsnfs1/SUNW.nfs/dfstab.nfs1-res

2. From one node in the Sun Cluster system, create the NFS resource group.

# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs

3. Add a logical host to the NFS resource group.

# scrgadm -a -L -g nfs-rg -l lh-nfs1

4. Configure the HAStoragePlus resource type.

CODE EXAMPLE 8-68 Configuring the HAStoragePlus Resource Type
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B 
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
	-x FilesystemMountPoints=/global/qfsnfs1 \
	-x FilesystemCheckCommand=/bin/true

5. Bring the resource group online.

# scswitch -Z -g nfs-rg

6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource.

# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y \
Resource_dependencies=qfsnfs1-res

7. Bring the NFS resource online.

# scswitch -e -j nfs1-res

The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and highly available.
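As an optional sanity check before the switchover tests in the next step, you can confirm the resource status from a cluster node and, from any host that can reach the logical host lh-nfs1, mount the share. The client host prompt and the /mnt mount point below are placeholders.

# scstat -g
client# mount -F nfs lh-nfs1:/global/qfsnfs1 /mnt
client# ls /mnt
client# umount /mnt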

8. Before you announce the availability of the highly available NFS file system on the Sun StorEdge QFS file system, validate that the resource group can be switched between all configured nodes without errors and can be taken offline and brought back online.

CODE EXAMPLE 8-69 Testing the Resource Group
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg


Changing the Sun StorEdge QFS Configuration

This section demonstrates how to change, disable, or remove the Sun StorEdge QFS shared or unshared file system configuration.


procedure icon  To Change the Shared File System Configuration

This procedure is based on the example in Example Configuration.

1. Log in to each node as the oracle user, shut down the database instance, and stop the listener.

CODE EXAMPLE 8-70 Shutting Down the Database Instance and Listener
$ sqlplus "/as sysdba"
SQL > shutdown immediate
SQL > exit
$ lsnrctl stop listener 
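If you want to confirm that the database instance is down before continuing, one simple check is to look for remaining Oracle background processes; the ora_ process-name prefix assumes the default Oracle naming convention.

$ ps -ef | grep ora_ | grep -v grep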

2. Log into the metadata server as superuser and bring the metadata server resource group into the unmanaged state.

CODE EXAMPLE 8-71 Bringing the Resource Group Into an Unmanaged State
# scswitch -F -g qfs-rg
# scswitch -u -g qfs-rg

At this point, the shared file systems are unmounted on all nodes. You can now apply any changes to the file systems' configuration, mount options, and so on. You can also re-create the file systems, if necessary. To use the file systems again after recreating them, follow the steps in Example Configuration.
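For example, before editing the configuration you might confirm that no Sun StorEdge QFS file system is still mounted on the node, and then adjust a mount option in the samfs.cmd file. This is only a sketch: the file system name qfs1 and the stripe=2 option are placeholders for whatever change you actually need to make.

# mount -p | grep samfs
# cat >> /etc/opt/SUNWsamfs/samfs.cmd << EOF
fs = qfs1
  stripe = 2
EOF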

If you want to make changes to the metadata server resource group configuration or to the Sun StorEdge QFS software (for example, to upgrade to new packages), continue to Step 3.

3. As superuser, remove the resource, the resource group, and the resource type, and verify that everything is removed.

CODE EXAMPLE 8-72 Removing the Resource, Resource Group, and Resource Type
# scswitch -n -j qfs-res
# scswitch -r -j qfs-res
# scrgadm -r -g qfs-rg
# scrgadm -r -t SUNW.qfs
# scstat

At this point, you can re-create the resource group to define different names, node lists, and so on. You can also remove or upgrade the Sun StorEdge QFS shared software, if necessary. After the new software is installed, the metadata resource group and the resource can be re-created and brought online, as sketched below.
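The following sketch shows one way the metadata server resource group might be re-created and brought online after an upgrade. It reuses the names from earlier in this chapter (qfs-rg, qfs-res, scnode-A, and scnode-B), assumes the SUNW.qfs resource type's QFSFileSystem extension property, and uses /global/qfs1 as a placeholder mount point; substitute the values from your own configuration.

# scrgadm -a -t SUNW.qfs
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
	-x QFSFileSystem=/global/qfs1
# scswitch -Z -g qfs-rg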


procedure icon  To Disable HA-NFS on a File System That Uses Raw Global Devices

Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using raw global devices. This example procedure is based on Example 1.

1. Use the scswitch(1M) -F -g command to take the resource group offline.

# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.

CODE EXAMPLE 8-73 Disabling the Resources
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources.

CODE EXAMPLE 8-74 Removing the Resources
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group.

# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories.

# rm -fr /global/nfs

6. Remove the resource types that were previously added, if they are no longer needed.

CODE EXAMPLE 8-75 Removing the Resource Types That Are No Longer Needed
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs


procedure icon  To Disable HA-NFS on a File System That Uses Solaris Volume Manager-Controlled Volumes

Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using Solstice DiskSuite/Solaris Volume Manager-controlled volumes. This example procedure is based on Example 2.

1. Take the resource group offline.

# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.

CODE EXAMPLE 8-76 Disabling the Resources
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources.

CODE EXAMPLE 8-77 Removing the Previously Configured Resources
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group.

# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories.

# rm -fr /global/nfs

6. Remove the resource types that were previously added, if they are no longer needed.

CODE EXAMPLE 8-78 Removing the Resource Types That Are No Longer Needed
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs

7. Delete RAID-5 and RAID-1 sets.

CODE EXAMPLE 8-79 Deleting the RAID-5 and RAID-1 Sets
# metaclear -s nfsdg -f d30 d20 d21 d22 d23 d11 d1 d2 d3 d4

8. Remove the mediator hosts that were configured for drive-error mediation.

CODE EXAMPLE 8-80 Removing the Mediator Hosts
# metaset -s nfsdg -d -m scnode-A
# metaset -s nfsdg -d -m scnode-B

9. Remove the shared DID devices from the nfsdg disk group.

# metaset -s nfsdg -d -f /dev/did/dsk/d4 /dev/did/dsk/d5 \
	/dev/did/dsk/d6 /dev/did/dsk/d7

10. Remove the configuration of disk group nfsdg across nodes in the Sun Cluster system.

# metaset -s nfsdg -d -f -h scnode-A scnode-B

11. Delete the metadevice state database replicas, if they are no longer needed.

CODE EXAMPLE 8-81 Deleting the Metadevice State Database Replicas
# metadb -d -f /dev/dsk/c0t0d0s7
# metadb -d -f /dev/dsk/c1t0d0s7
# metadb -d -f /dev/dsk/c2t0d0s7


procedure icon  To Disable HA-NFS on a Sun StorEdge QFS File System That Uses VxVM-Controlled Volumes

Use this procedure to disable HA-NFS on an unshared Sun StorEdge QFS file system that is using VxVM-controlled volumes. This example procedure is based on Example 3.

1. Take the resource group offline.

# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorEdge QFS, and LogicalHost resources.

CODE EXAMPLE 8-82 Disabling the Resources
# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources.

CODE EXAMPLE 8-83 Removing the Resources
# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group.

# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories.

# rm -fr /global/nfs

6. Remove the resource types that were previously added, if they are no longer needed.

CODE EXAMPLE 8-84 Removing the Resource Types That Are No Longer Needed
# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs

7. Destroy the disk group.

# vxdg destroy nfsdg

8. Remove the VxVM devices.

# vxdisk rm fabric_0 fabric_1 fabric_2 fabric_3 fabric_4
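As an optional check, you can verify that the disks are no longer under VxVM control while remaining visible to the Sun Cluster framework.

# vxdisk list
# scdidadm -L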