Product: Storage Foundation Cluster File System Guides   
Manual: Cluster File System 4.1 Installation and Administration Guide   

Using the vxfentsthdw Utility

Use the vxfentsthdw utility to verify that the shared storage arrays support SCSI-3 persistent reservations and I/O fencing. VERITAS recommends testing several disks in each array. Only supported storage devices can be used with I/O fencing of shared storage.

The vxfentsthdw utility, which can be run from a single system in the cluster, tests storage by setting SCSI-3 registrations on the specified disk, verifying the registrations, and removing them from the disk.


Caution    The vxfentsthdw utility overwrites and destroys existing data on the disks. Use the -r option to ensure non-destructive testing.
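For example, to test disks without destroying their data, start the utility with the -r option from the same directory used in the procedure below:

      # cd /opt/VRTSvcs/vxfen/bin
      # ./vxfentsthdw -r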

In the following example, assume you are checking the shared device that both systems recognize. (It is also possible that each system would use a different name for the same physical device.)

  To verify support for SCSI-3 persistent reservations

  1. Start the utility on a single system:
      # cd /opt/VRTSvcs/vxfen/bin
      # ./vxfentsthdw

    The utility provides an overview of its function and behavior. It warns you that the tests it performs overwrite any data on the disks you check. The output resembles:


      ******** WARNING!!!!!!!! ******** THIS UTILITY WILL DESTROY THE
      DATA ON THE DISK!!
      Do you still want to continue : [y/n] (default: n)
  2. Enter y to continue.
  3. When prompted, enter the name of the first cluster node. In this example, "star33."
  4. When prompted, enter the name of the second cluster node. In this example, "star34."
  5. When prompted, enter the names of the first and second disks (see above). Verify that star33 and star34 recognize the same disk. The disk names, whether or not they are identical, must refer to the same physical disk; if they do not, testing terminates. The utility then begins its check and reports its activities.
  6. Run the vxfentsthdw utility on each disk to be verified.
  7. Continue to the next section to configure coordinator disks.

Configuring Coordinator Disks

  1. Create a disk group vxfencoorddg on any cluster node.
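     One way to create the disk group, assuming three disks that are already initialized for Volume Manager use (the device names shown here are placeholders; substitute the disks you verified with vxfentsthdw):

       # vxdg init vxfencoorddg c1t1d0 c2t1d0 c3t1d0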
  2. Deport the disk group:
      # vxdg deport vxfencoorddg
  3. Import the disk group with the -t option to avoid automatically importing it when the systems are rebooted:
      # vxdg -t import vxfencoorddg
  4. Deport the disk group again. Deporting it prevents the coordinator disks from being used for other purposes.
      # vxdg deport vxfencoorddg
  5. On each system, enter the command:
      # echo "vxfencoorddg" > /etc/vxfendg

    No spaces should appear between the quotes in the "vxfencoorddg" text.

    This command creates the file /etc/vxfendg, which includes the name of the coordinator disk group. Based on the /etc/vxfendg file, the rc script creates the file /etc/vxfentab for use by the vxfen driver. The /etc/vxfentab file is a generated file and should not be modified.
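    To confirm that the file contains exactly the disk group name, with no extra characters, you can display it on each system:

      # cat /etc/vxfendg
      vxfencoorddg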

  6. Issue the following command on each system to start I/O fencing:
      # /etc/init.d/vxfen start

    The rc startup script automatically creates /etc/vxfentab by reading the disks contained in the disk group listed in /etc/vxfendg. The script then invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab. The same disks may be listed using different names on each system.
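    If the fencing package provides the vxfenadm utility on your systems (an assumption; check your installed packages), you can display the current fencing configuration and membership as a quick check that the driver started:

      # vxfenadm -d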

VERITAS Software Corporation
www.veritas.com