
Testing Disks Using vxfentsthdw

Use the vxfentsthdw utility to verify that the shared storage arrays support SCSI-3 persistent reservations and I/O fencing. Make sure to test the disks that will serve as coordinator disks; see Setting Up Coordinator Disks.

Reviewing Guidelines on vxfentsthdw

  • Verify that the shared storage for data is connected to two of the nodes on which you installed Storage Foundation for Oracle RAC.

  • Caution   The tests overwrite and destroy data on the disks unless you use the -r option.
  • The two nodes must have remsh permission set to ensure each node has root user access to the other. Temporarily modify the /.rhosts file to enable cluster communications for the vxfentsthdw utility, placing "+ +" in the first line of the file. You can also limit remote access to specific systems. Refer to the manual page for the /.rhosts file for more information. See Removing remsh Permissions and Restoring Public Network Connections after completing the testing process.
  • To ensure both nodes are connected to the same disk during the testing, use the vxfenadm -i diskpath command to verify the disk serial number, as shown in the example after this list. See Verifying the Nodes See the Same Disk.
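
For example, to confirm that two nodes see the same physical disk, run vxfenadm -i against the path each node uses and compare the serial numbers reported in the output. The node names and device path below are the ones used in the procedure that follows; substitute your own:

On slpas06:

# vxfenadm -i /dev/rdsk/c4t8d0

On slpas07:

# vxfenadm -i /dev/rdsk/c4t8d0

If the reported serial numbers match, both paths refer to the same disk.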

Running vxfentsthdw

This procedure uses the /dev/rdsk/c4t8d0 disk in the steps.

  1. Make sure system-to-system communication is functioning properly. See System-to-System Communication.
  2. From one node, start the utility:
      # /opt/VRTSvcs/vxfen/bin/vxfentsthdw
  3. After reviewing the overview and the warning that the tests overwrite data on the disks, confirm that you want to continue, and enter the node names.
    ******** WARNING!!!!!!!! ********
      THIS UTILITY WILL DESTROY THE DATA ON THE DISK!!
      
      Do you still want to continue : [y/n] (default: n) 
      y
      Enter the first node of the cluster: 
      slpas06
      Enter the second node of the cluster: 
      slpas07
  4. Enter the names of the disks you are checking. For each node, the disk may be known by the same name, as in this example:
    Enter the disk name to be checked for SCSI-3 PGR on node slpas06
    in the format: /dev/rdsk/cxtxdx

       /dev/rdsk/c4t8d0

      Enter the disk name to be checked for SCSI-3 PGR on node slpas07
      in the format: /dev/rdsk/cxtxdx
      Make sure it's the same disk as seen by nodes slpas06 and slpas07
      /dev/rdsk/c4t8d0

    The disk names, whether or not they are identical, must refer to the same physical disk for the testing to be valid.

  5. Review the output as the utility performs the checks and reports its activities:
    Evaluating the disk before testing 0 Pre-existing keys........
    ......................................................Passed.

      Registering keys on disk /dev/rdsk/c4t8d0 from node
       slpas06 ............................................. Passed.
      Verifying registrations for disk /dev/rdsk/c4t8d0 on node
       slpas06 ............................................. Passed. 
      Reads from disk /dev/rdsk/c4t8d0 on node slpas06 ...... Passed. 
      Writes to disk /dev/rdsk/c4t8d0 from node slpas06 ..... Passed. 
      Reads from disk /dev/rdsk/c4t8d0 on node slpas07 ...... Passed. 
      Writes to disk /dev/rdsk/c4t8d0 from node slpas07 ..... Passed. 
      Reservations to disk /dev/rdsk/c4t8d0 from node slpas06 .......
       ...................................................... Passed. 
      Verifying reservation for disk /dev/rdsk/c4t8d0 on node
       slpas06 .............................................. Passed.
      .

    If a disk is ready for I/O fencing on each node, the utility reports success:


      ALL tests on the disk /dev/rdsk/c4t8d0 have PASSED.
      The disk is now ready to be configured for I/O Fencing on node
       slpas06.

      ALL tests on the disk /dev/rdsk/c4t8d0 have PASSED.
      The disk is now ready to be configured for I/O Fencing on node
       slpas07.
     
      Removing test keys and temporary files, if any ...
      .
  6. Run the vxfentsthdw utility for each disk you intend to verify.
    Note   The vxfentsthdw utility has additional options suitable for testing many disks. The options for testing disk groups (-g) and disks listed in a file (-f) are described in detail in vxfentsthdw Options. You can also test disks without destroying data by using the -r option; see the example after this procedure.
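
For example, a non-destructive run of the utility on a disk that already contains data can be started as shown below; the -r option is the one described in the note above, and the utility prompts for the node and disk names just as in the procedure:

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r

A run with -r does not overwrite data on the disk, which makes it suitable for disks that are already in use.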

Setting Up Coordinator Disks

I/O fencing requires coordinator disks that are configured in a disk group and accessible to each node in the cluster. These disks enable the vxfen driver to resolve potential split-brain conditions and prevent data corruption. See I/O Fencing for a description of I/O fencing and the role of coordinator disks.

Because coordinator disks are not used for data storage, configure each disk as the smallest possible LUN on a disk array to avoid wasting space. Make sure you have already added and initialized the disks for use as coordinator disks, as described in the requirements below.

However, to use the vxfentsthdw utility to verify SCSI-3 persistent reservation support, use disks of at least 1 MB. Disks smaller than 1 MB can be tested manually; contact VERITAS support (http://support.veritas.com) for the procedure.

Requirements for Coordinator Disks

    • You must have at least three coordinator disks, and the total number of coordinator disks must be an odd number. This ensures that one subcluster can always gain a majority of the coordinator disks.

    • Each coordinator disk must use a physically separate disk or LUN.

    • Each coordinator disk should exist on a different disk array, if possible.

    • You must initialize each disk as a VxVM disk. VERITAS recommends the default (CDS) format; a sketch of disk initialization follows these requirements.

    • The coordinator disks must support SCSI-3 persistent reservations. See Requirements for Testing the Coordinator Disk Group.

    • The coordinator disks must exist in a disk group (for example, vxfencoorddg). See Creating the vxfencoorddg Disk Group.

VERITAS recommends using hardware-based mirroring for coordinator disks.
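
As an illustration of the initialization requirement above, a disk is typically brought under VxVM control in its default (CDS) format with the vxdisksetup utility. This is only a sketch; the device name is a placeholder, and the VERITAS Volume Manager Administrator's Guide remains the reference for the options appropriate to your array:

# /etc/vx/bin/vxdisksetup -i c1t1d0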

Creating the vxfencoorddg Disk Group

From one node, create a disk group named vxfencoorddg. This group must contain an odd number of disks or LUNs and a minimum of three disks. For example, assume the disks have the device names c1t1d0, c2t1d0, and c3t1d0.

  1. On any node, create the disk group by specifying the device name of the disks:
      # vxdg init vxfencoorddg c1t1d0 c2t1d0 c3t1d0

    Refer to the VERITAS Volume Manager Administrator's Guide for details on creating disk groups.
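
    To confirm that the disk group exists and contains the intended disks, you can optionally list it with standard VxVM commands; this check is not part of the documented procedure:

      # vxdg list vxfencoorddg
      # vxdisk -o alldgs list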

Requirements for Testing the Coordinator Disk Group

  • The utility requires that the coordinator disk group, vxfencoorddg, be accessible from two nodes.
  • The two nodes must have remsh permission set such that each node has root user access to the other. Temporarily modify the /.rhosts file to enable cluster communications for the vxfentsthdw utility, placing "+ +" in the first line of the file. You can also limit remote access to specific systems. Refer to the manual page for the /.rhosts file for more information. See Removing remsh Permissions and Restoring Public Network Connections when you complete the testing process.
  • To ensure both nodes are connected to the same disks during the testing process, use the vxfenadm -i diskpath command to verify the serial number. See Verifying the Nodes See the Same Disk.

Using vxfentsthdw -c to Test the Coordinator Disk Group

Use the vxfentsthdw utility to verify disks are configured to support I/O fencing. In this procedure, the vxfentsthdw utility tests the three disks one disk at a time from each node. From the node slpas06, the disks are /dev/rdsk/c1t1d0, /dev/rdsk/c2t1d0, and /dev/rdsk/c3t1d0. From the node slpas07, the same disks are seen as /dev/rdsk/c4t1d0, /dev/rdsk/c5t1d0, and /dev/rdsk/c6t1d0.

  1. Use the vxfentsthdw command with the -c option. For example:
      # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg
  2. Enter the nodes you are using to test the coordinator disks:
      Enter the first node of the cluster:
      slpas06
      Enter the second node of the cluster:
      slpas07
  3. Review the output of the testing process:
     Testing slpas06 /dev/rdsk/c1t1d0 slpas07 /dev/rdsk/c4t1d0
     Evaluating the disk before testing 0 Pre-existing keys.........
      ........................................................Passed.
     Registering keys on disk /dev/rdsk/c1t1d0 from node
      slpas06 ............................................... Passed.
    Verifying registrations for disk /dev/rdsk/c1t1d0 on node
    slpas06 ............................................... Passed.

     Registering keys on disk /dev/rdsk/c4t1d0 from node
      slpas07 ............................................... Passed.
    Verifying registrations for disk /dev/rdsk/c1t1d0 on node
    slpas06 .............................................. Passed.

    Verifying registrations for disk /dev/rdsk/c4t1d0 on node
    slpas07 .............................................. Passed.

    Preempt and aborting key KeyA using key KeyB on node
    slpas07 .............................................. Passed.

    Verifying registrations for disk /dev/rdsk/c1t1d0 on node
    slpas06 .............................................. Passed.

    Verifying registrations for disk /dev/rdsk/c4t1d0 on node
    slpas07 .............................................. Passed.

     Removing key KeyB on node slpas07 ..................... Passed.
    Check to verify there are no keys from node slpas06 ..........
    ...................................................... Passed.

     ALL tests on the disk /dev/rdsk/c1t1d0 have PASSED.
     The disk is now ready to be configured for I/O Fencing on node
      slpas06 as a COORDINATOR DISK.
     ALL tests on the disk /dev/rdsk/c4t1d0 have PASSED.
     The disk is now ready to be configured for I/O Fencing on node
     slpas07 as a COORDINATOR DISK.
     ********************************************
     Testing slpas06 /dev/rdsk/c2t1d0 slpas07 /dev/rdsk/c5t1d0
     .
     .

    After you test all disks in the disk group, the vxfencoorddg disk group is ready for use.


Removing and Replacing a Failed Disk

If a disk in the coordinator disk group fails verification, remove the failed disk or LUN from the vxfencoorddg disk group, replace it with another, and retest the disk group.
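
As a sketch only, removing the failed disk from the group and adding an initialized replacement might look like the following; the disk media name coord01 and the device name c5t2d0 are hypothetical placeholders for your own names:

# vxdg -g vxfencoorddg rmdisk coord01
# vxdg -g vxfencoorddg adddisk coord01=c5t2d0

After replacing the disk, rerun vxfentsthdw -c against the disk group as described above.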

Configuring /etc/vxfendg Disk Group for I/O Fencing

After setting up and testing the coordinator disk group, configure it for use.

  1. Deport the disk group:
      # vxdg deport vxfencoorddg
  2. Import the disk group with the -t option to avoid automatically importing it when the nodes restart:
      # vxdg -t import vxfencoorddg
  3. Deport the disk group. Deporting the disk group prevents the coordinator disks from serving other purposes:
      # vxdg deport vxfencoorddg
  4. On all nodes, type:
      # echo "vxfencoorddg" > /etc/vxfendg

    No spaces should appear between the quotes in the "vxfencoorddg" text.

    This command creates the /etc/vxfendg file, which includes the name of the coordinator disk group.

    Based on the contents of the /etc/vxfendg file, the rc script creates the /etc/vxfentab file for use by the vxfen driver when the system starts. The rc script also invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. /etc/vxfentab is a generated file; do not modify it.
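
    As a quick check on each node (optional, and not part of the documented procedure), you can display the file to confirm that it contains only the disk group name:

      # cat /etc/vxfendg
      vxfencoorddg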


Example /etc/vxfentab File

The list of coordinator disks is in the /etc/vxfentab file on each node. The same disks may appear under different names on each node. For example, a device could appear as c0t1d2 on one node and c4t1d2 on another node. In this case, use the command vxfenadm -i /dev/rdsk/c0t1d2 on the first node and vxfenadm -i /dev/rdsk/c4t1d2 on the second node to verify that the serial numbers are the same.

An example of the /etc/vxfentab file on one node resembles:


/dev/rdsk/c1t1d0 
/dev/rdsk/c2t1d0  
/dev/rdsk/c3t1d0 

If you must remove disks from or add disks to an existing coordinator disk group, refer to Removing or Adding Coordinator Disks.

Starting I/O Fencing

On each node, start the I/O fencing driver:

# /sbin/init.d/vxfen start
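
Optionally, you can confirm on each node that the generated /etc/vxfentab file lists the coordinator disks and that GAB shows membership for the fencing driver; in the gabconfig output, port b corresponds to I/O fencing. These checks are suggestions, not part of the documented procedure:

# cat /etc/vxfentab
# /sbin/gabconfig -a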

Removing remsh Permissions and Restoring Public Network Connections

After completing the installation of VERITAS Storage Foundation for Oracle RAC and verification of disk support for I/O fencing, remove the temporary remsh access permissions you set for the nodes and restore the connections to the public network.


Note   If the nodes use ssh for secure communications, and you temporarily removed the connections to the public network, restore the connections at this time.

Editing the UseFence Attribute in VCS Configuration

After adding coordinator disks and configuring I/O fencing, add the UseFence = SCSI3 cluster attribute to the VCS configuration file, /etc/VRTSvcs/conf/config/main.cf. This entry gives the user the flexibility to disable or enable I/O fencing by modifying the UseFence attribute.

  1. Save the existing configuration:
      # haconf -dump -makero
  2. Stop VCS on all nodes:
      # hastop -all
  3. Make a backup copy of the main.cf file:
      # cd /etc/VRTSvcs/conf/config 
      # cp main.cf main.orig
  4. On one node, use vi or another text editor to edit the main.cf file. Modify the list of cluster attributes by adding the UseFence attribute and assigning it the value SCSI3.
      cluster rac_cluster1 (
            UserNames = { admin = "cDRpdxPmHpzS." }
            Administrators = { admin }
            HacliUserLevel = COMMANDROOT
            CounterInterval = 5
            UseFence = SCSI3
            )
  5. Save and close the file.
  6. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
      # hacf -verify /etc/VRTSvcs/conf/config
  7. Using rcp or another utility, copy the VCS configuration file from a node (for example, slpas06) to the remaining cluster nodes. On each remaining node, enter:
      # rcp slpas06:/etc/VRTSvcs/conf/config/main.cf  
        /etc/VRTSvcs/conf/config

Starting VCS, CVM, and CFS on All Nodes

With the configuration file in place on each system, start VCS, CVM, and CFS:


# hastart

Make sure to run this command from each node.
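
To verify that the cluster has come up, you can optionally check the cluster summary from any node:

# hastatus -summary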
