
Setting Up Coordinator Disks

I/O Fencing requires coordinator disks configured in a disk group accessible to each system in the cluster. The use of coordinator disks enables the vxfen driver to resolve potential split brain conditions and prevent data corruption. See the topic I/O Fencing for a discussion of I/O fencing and the role of coordinator disks. See also How I/O Fencing Works in Different Event Scenarios for additional description of how coordinator disks function to protect data in different split brain scenarios.

A coordinator disk is not used for data storage, and so may be configured as the smallest possible LUN on a disk array to avoid wasting space.

The following sections describe the procedures for setting up coordinator disks.

Requirements for Coordinator Disks

Coordinator disks must meet the following requirements:

  • There must be at least three coordinator disks, and the total number of coordinator disks must be an odd number. An odd number of disks ensures that one subcluster can always gain a majority of the coordinator disks.

  • Each coordinator disk must use a physically separate disk or LUN.

  • Each coordinator disk should be on a different disk array, if possible.

  • Each disk must be initialized as a VxVM disk.

  • The coordinator disks must be included in a disk group with the recommended name vxfencoorddg. See Setting Up the Disk Group for Coordinator Disks.

  • The coordinator disks must support SCSI-3 persistent group reservations. See Requirements for Testing the Coordinator Disk Group.

It is recommended that coordinator disks use hardware-based mirroring.

Setting Up the Disk Group for Coordinator Disks

If you have already added and initialized disks you intend to use as coordinator disks, you can begin the following procedure at step 4.

  1. Physically add the three disks you intend to use as coordinator disks, making them physically shared by all cluster systems. It is recommended that you use the smallest available disks or LUNs, so that space for data is not wasted.
  2. If necessary, use the vxdisk scandisks command to scan the disk drives and their attributes. This command updates the VxVM device list and reconfigures DMP with the new devices. For example:
      # vxdisk scandisks
  3. You can use the vxdisksetup command to initialize a disk as a VxVM disk. The example command that follows specifies the CDS format:
        vxdisksetup -i device_name format=cdsdisk

      For example:


        # vxdisksetup -i EMC0_17 format=cdsdisk

    Repeat this command for each disk you intend to use as a coordinator disk.

  4. From one system, create a disk group for the coordinator disks (for example, vxfencoorddg). This group must contain an odd number of disks/LUNs and a minimum of three disks.

    For example, assume the disks have the device names EMC0_12, EMC0_16, and EMC0_17.

    1. On any node, create the disk group by specifying the device name of one of the disks:

          # vxdg init vxfencoorddg EMC0_12

    2. Add the other two disks to the disk group:

          # vxdg -g vxfencoorddg adddisk EMC0_16
          # vxdg -g vxfencoorddg adddisk EMC0_17

      Refer to the VERITAS Volume Manager Administrator's Guide for more information about creating disk groups.
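
After all three coordinator disks are in the disk group, you can optionally confirm its contents before testing it. The following is a minimal verification sketch using standard VxVM listing commands, with the example disk group and device names assumed above:

      # vxdg list vxfencoorddg
      # vxdisk -g vxfencoorddg list

The first command should report the disk group on the node where it is currently imported; the second should list the three coordinator disks (EMC0_12, EMC0_16, and EMC0_17 in this example).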

Requirements for Testing the Coordinator Disk Group

  • The vxfentsthdw utility requires that the coordinator disk group, vxfencoorddg, be accessible from two systems. For example, if you have a four-system cluster, select any two systems for the test.
  • The two systems must have rsh permission set so that each node has root user access to the other. Temporarily modify the /.rhosts file to enable cluster communications for the vxfentsthdw utility, placing a "+" character in the first line of the file. You can also limit the remote access to specific systems. Refer to the manual page for the /.rhosts file for more information. See Removing rsh Permissions and Restoring Public Network Connections when you complete testing.
  • To ensure both systems are connected to the same disks during the testing, you can use the vxfenadm -i diskpath command to verify a disk's serial number. See Verifying that Systems See the Same Disk.
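
For example, the following sketch compares disk serial numbers from the two test systems with vxfenadm -i; the device paths are the example paths used in the test procedure below, and the exact output format may vary by array and release:

      On the node north:
      # vxfenadm -i /dev/rdsk/c1t12d0

      On the node south:
      # vxfenadm -i /dev/rdsk/c1t13d0

If both commands report the same vendor, product ID, and serial number, the two paths refer to the same physical disk. See Verifying that Systems See the Same Disk for the full procedure.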

Using the vxfentsthdw -c Option to Test the Coordinator Disk Group

In the example that follows, the three disks are tested by the vxfentsthdw utility, one disk at a time from each node. From the node north, the disks are /dev/rdsk/c1t12d0, /dev/rdsk/c1t14d0, and /dev/rdsk/c1t16d0. From the node south, the same disks are seen as /dev/rdsk/c1t13d0, /dev/rdsk/c1t15d0, and /dev/rdsk/c1t17d0.

  1. Use the vxfentsthdw command with the -c option. For example:
      # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -c vxfencoorddg
  2. The script prompts you for the names of the systems you are using to test the coordinator disks.
      Enter the first node of the cluster:
      north
      Enter the second node of the cluster:
      south
    ********************************************
    Testing north /dev/rdsk/c1t12d0 south /dev/rdsk/c1t13d0
    Evaluating the disk before testing .......... pre-existing keys.
    Registering keys on disk /dev/rdsk/c1t12d0 from node north........
    Passed
    Verifying registrations for disk /dev/rdsk/c1t12d0 on node north .
    Passed.
    Registering keys on disk /dev/rdsk/c1t13d0 from node south........
    Passed.
    Verifying registrations for disk /dev/rdsk/c1t12d0 on node north .
    Passed.
    Verifying registrations for disk /dev/rdsk/c1t13d0 on node south .
    Passed.
    Preempt and aborting key KeyA using key KeyB on node south.....
    Passed.
    Verifying registrations for disk /dev/rdsk/c1t12d0 on node north .
    Passed.
    Verifying registrations for disk /dev/rdsk/c1t13d0 on node south .
    Passed.
    Removing key KeyB on node south................................
    Passed.
    Check to verify there are no keys from node north .............
    Passed.
    ALL tests on the disk /dev/rdsk/c1t12d0 have PASSED.
    The disk is now ready to be configured for I/O Fencing on node north as a COORDINATOR DISK.
    ALL tests on the disk /dev/rdsk/c1t13d0 have PASSED.
    The disk is now ready to be configured for I/O Fencing on node south as a COORDINATOR DISK.
    ********************************************
    Testing north /dev/rdsk/c1t14d0 south /dev/rdsk/c1t15d0
    .
    .

    The preceding shows the output of the test utility as it tests one disk. The disk group, vxfencoorddg, is ready for use when all disks in the disk group are successfully tested.


Removing and Adding a Failed Disk

If a disk in the coordinator disk group fails verification, remove the failed disk or LUN from the vxfencoorddg disk group, replace it with another, and retest the disk group.
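
For example, the following sketch removes a failed disk from the group and substitutes a replacement. The disk media name failed_disk and the replacement device EMC0_18 are hypothetical; use the names from your own configuration:

      # vxdg -g vxfencoorddg rmdisk failed_disk
      # vxdisksetup -i EMC0_18 format=cdsdisk
      # vxdg -g vxfencoorddg adddisk EMC0_18

After replacing the disk, retest the disk group with the vxfentsthdw -c option as described above.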

Creating /etc/vxfendg to Configure the Disk Group for Fencing

After you have set up and tested the coordinator disk group, configure it for use.

  1. Deport the disk group:
      # vxdg deport vxfencoorddg
  2. Import the disk group with the -t option so that it is not automatically imported when the systems are restarted:
      # vxdg -t import vxfencoorddg
  3. Deport the disk group again. Deporting the disk group prevents the coordinator disks from being used for other purposes.
      # vxdg deport vxfencoorddg
  4. On all systems, enter the command:
      # echo "vxfencoorddg" > /etc/vxfendg

    No spaces should appear between the quotes in the "vxfencoorddg" text.

    This command creates the file /etc/vxfendg, which includes the name of the coordinator disk group.

    Based on the contents of the /etc/vxfendg file, the rc script creates the file /etc/vxfentab for use by the vxfen driver when the system starts. The /etc/vxfentab file is a generated file and should not be modified.

  5. Go to Editing VCS Configuration to Add the UseFence Attribute to edit the main.cf file and add the UseFence = SCSI3 attribute to the VCS configuration.
    Note: Do not shut down the system at this time. Stop and restart the system after you have edited the main.cf file to add the UseFence = SCSI3 attribute.
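
As a quick check after step 4, you can confirm on each system that the file contains only the disk group name. For example:

      # cat /etc/vxfendg
      vxfencoorddg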

An Example /etc/vxfentab File

On each system, the coordinator disks are listed in the file /etc/vxfentab. The same disks may be listed using different names on each system. An example /etc/vxfentab file on one system resembles:


/dev/rdsk/c1t12d0
/dev/rdsk/c1t13d0
/dev/rdsk/c1t14d0

When the system starts, the rc startup script automatically creates /etc/vxfentab and then invokes the vxfenconfig command, which configures the vxfen driver to start and use the coordinator disks listed in /etc/vxfentab.
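
If you want to confirm after a restart that the driver was configured from the generated file, a sketch such as the following can be used; the exact vxfenadm output varies by release:

      # cat /etc/vxfentab
      # vxfenadm -d

The first command lists the coordinator disk paths passed to the vxfen driver; the second reports I/O fencing status for the cluster once the driver is running.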

If you must remove disks from or add disks to an existing coordinator disk group, please see Adding or Removing Coordinator Disks.

Removing rsh Permissions and Restoring Public Network Connections

When you have completed setting up I/O fencing, remove the temporary rsh access permissions you have set for the systems in the cluster and restore the connections of the cluster systems to the public network.
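
For example, if you enabled rsh for the tests by placing a "+" character in the first line of /.rhosts, edit the file on each system and delete that line; if the file was created solely for the fencing tests, you can remove it instead:

      # vi /.rhosts          (delete the "+" line)

or

      # rm /.rhosts

The exact cleanup depends on what each system's /.rhosts contained before testing.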


Note: If your cluster systems use ssh for secure communications, and you temporarily removed the connections to the public network, restore them at this time.