Product: Volume Replicator Guides   
Manual: Volume Replicator 4.1 Cluster Server Agents Configuration Guide   

Example---Setting Up VVR in a VCS Environment

Configuring VVR with VCS requires the completion of several tasks, each of which must be performed in the order presented below.

Before setting up the VVR configuration, verify whether all the nodes in the cluster that have VVR installed use the same port number for replication. To verify and change the port numbers, use the vrport command. For instructions on using the vrport command, see the VERITAS Volume Replicator Administrator's Guide. If the port number is the same on all nodes, add the VVR agents to the VCS configuration.
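The port comparison can be scripted once you have collected each node's port number. The following is a sketch only: the sample port values stand in for real output from the vrport command (see the VERITAS Volume Replicator Administrator's Guide for its exact syntax).

```shell
# Sketch: succeed only if every node reports the same replication port.
# The port values below are placeholders for output gathered with the
# vrport command on each node; they are not real cluster data.
same_port() {
    first=$1
    for port in "$@"; do
        [ "$port" = "$first" ] || return 1
    done
    return 0
}

# Example with placeholder values from two nodes:
if same_port 4145 4145; then
    echo "replication ports match"
else
    echo "replication ports differ -- fix with vrport before continuing"
fi
```

If the check fails, change the port number on the mismatched nodes with vrport before adding the VVR agents to the VCS configuration.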

Setting Up the VVR Configuration

The example in this section refers to the sample configuration shown in Example VVR Configuration in a VCS Environment. The VVR configuration set up in this example applies to the RVG agent; that is, it uses the names from the sample configuration file of the RVG agent.

The procedure to configure VVR is the same for all the VVR agents. Use the sample configuration files located in the /etc/VRTSvcs/conf/sample_vvr/RVG directory to configure the other agents. For more information on configuring VVR, refer to the VERITAS Volume Replicator Administrator's Guide. The example uses the names listed in the following table.

Name of Cluster: Seattle

  Disk group                   hrdg
  Primary RVG                  hr_rvg
  Primary RLINK to london1     rlk_london_hr_rvg
  Primary data volume #1       hr_dv01
  Primary data volume #2       hr_dv02
  Primary SRL for hr_rvg       hr_srl
  Cluster IP                   10.216.144.160

Name of Cluster: London

  Disk group                   hrdg
  Secondary RVG                hr_rvg
  Secondary RLINK to seattle   rlk_seattle_hr_rvg
  Secondary data volume #1     hr_dv01
  Secondary data volume #2     hr_dv02
  Secondary SRL for hr_rvg     hr_srl
  Cluster IP                   10.216.144.162

This example assumes that each of the hosts seattle1 and london1 has a disk group named hrdg with enough free space to create the VVR objects mentioned in the example. Set up the VVR configuration on seattle1 and london1 to include the objects used in the sample configuration files, main.cf.seattle and main.cf.london, located in the /etc/VRTSvcs/conf/sample_vvr/RVG directory.
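As a rough sanity check of "enough free space," the consumption of the example objects can be totaled from the sizes used below (two 100M data volumes and a 200M SRL, each with mirror=2). This sketch ignores DCM log overhead, so treat the result as a lower bound:

```shell
# Rough minimum-space estimate for the example VVR objects, in MB.
# mirror=2 means each volume consumes twice its usable size; DCM log
# overhead is not counted, so the true requirement is slightly higher.
data_vols=$((100 + 100))   # hr_dv01 + hr_dv02
srl=200                    # hr_srl
required=$(( (data_vols + srl) * 2 ))
echo "hrdg needs at least ${required}M free on each host"
```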

  1. On london1:
     a. Create the Secondary data volumes:

          # vxassist -g hrdg make hr_dv01 100M \
            layout=mirror logtype=dcm mirror=2
          # vxassist -g hrdg make hr_dv02 100M \
            layout=mirror logtype=dcm mirror=2

     b. Create the Secondary SRL:

          # vxassist -g hrdg make hr_srl 200M mirror=2

  2. On seattle1:
     a. Create the Primary data volumes:

          # vxassist -g hrdg make hr_dv01 100M \
            layout=mirror logtype=dcm mirror=2
          # vxassist -g hrdg make hr_dv02 100M \
            layout=mirror logtype=dcm mirror=2

     b. Create the Primary SRL:

          # vxassist -g hrdg make hr_srl 200M mirror=2

     c. Create the Primary RVG:

          # vradmin -g hrdg createpri hr_rvg \
            hr_dv01,hr_dv02 hr_srl

     d. Determine the virtual IP address to be used for replication, and verify that the device interface for this IP is plumbed. If it is not, plumb the device, then bring the IP up using the OS-specific command. This IP address must be configured as the IP resource for the RVG service group.
     e. Create the Secondary RVG:

          # vradmin -g hrdg addsec hr_rvg \
            10.216.144.160 10.216.144.162 prlink=rlk_london_hr_rvg \
            srlink=rlk_seattle_hr_rvg

        Note: The RLINKs must point to the virtual IP addresses for failovers to succeed. The virtual IP address 10.216.144.160 must be able to ping the virtual IP address 10.216.144.162 and vice versa.
     f. Start replication:

          # vradmin -g hrdg -f startrep hr_rvg

  3. Create the following directories on seattle1 and seattle2. These directories will be used as mount points for the volumes hr_dv01 and hr_dv02 on the seattle site.

       # mkdir /hr_mount01
       # mkdir /hr_mount02

  4. On seattle1 and seattle2, create file systems on the volumes hr_dv01 and hr_dv02.
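The exact file system creation command in step 4 depends on the platform. A sketch using the VxFS form of mkfs found on HP-UX and Solaris (the raw-device paths assume this example's hrdg disk group) might look like:

```shell
# Sketch, not verbatim from the guide: create a VxFS file system on
# each data volume. The -F vxfs form is HP-UX/Solaris syntax, and the
# /dev/vx/rdsk/hrdg/... paths assume this example's disk group name.
for vol in hr_dv01 hr_dv02; do
    cmd="mkfs -F vxfs /dev/vx/rdsk/hrdg/$vol"
    echo "$cmd"    # echoed here for safety; run the command itself on a live system
done
```

Consult your platform's mkfs documentation before running the commands.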

Verifying the VVR Replication State

Test the replication state between seattle1 and london1 to verify that VVR is configured correctly. Type the following command on each node:


  # vxprint -g hrdg hr_rvg

    • Verify that the state of the RVG is ENABLED/ACTIVE.

    • Verify that the state of the RLINK is CONNECT/ACTIVE.
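The two checks can also be scripted. In this sketch, check_state simply greps for the expected state string; the sample text is illustrative only, not actual vxprint output.

```shell
# Sketch: grep vxprint output for the expected state strings. On a
# live node, replace the illustrative sample text with real output:
#   sample=$(vxprint -g hrdg hr_rvg)
check_state() {
    printf '%s\n' "$1" | grep -q "$2"
}

sample="rv hr_rvg              ENABLED/ACTIVE
rl rlk_london_hr_rvg   CONNECT/ACTIVE"

check_state "$sample" "ENABLED/ACTIVE" && echo "RVG state OK"
check_state "$sample" "CONNECT/ACTIVE" && echo "RLINK state OK"
```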

Configuring the Agents

This section explains how to configure the VVR agents.

Configuration Tasks

This section gives instructions on how to configure the RVG agent and RVGPrimary agent when VCS is stopped and when VCS is running. Sample configuration files, main.cf.seattle and main.cf.london, are located in the /etc/VRTSvcs/conf/sample_vvr/RVG and /etc/VRTSvcs/conf/sample_vvr/RVGPrimary directories respectively, and can be used for reference.

You can add the RVG resource to your existing VCS configuration using either of the following procedures:

  • Configuring the Agents When VCS is Running

  • Configuring the Agents When VCS is Stopped

Configuring the Agents When VCS is Running

The example in this section explains how to configure the RVG and RVGPrimary agents when VCS is running. For details about the example configuration, see Example Configuration for a Failover Application.


Note: Use this example as a reference when creating or changing your resources and attributes.

Perform the following steps on the system seattle1 in the Primary cluster Seattle:

  1. Log in as root.
  2. Set the VCS configuration mode to read/write by issuing the following command:
      # haconf -makerw
  3. Create the replication service group, VVRGrp. This group contains all the storage and replication resources.
     a. Add a service group, VVRGrp, to the cluster Seattle and modify the SystemList and AutoStartList attributes of the service group:

          # hagrp -add VVRGrp
          # hagrp -modify VVRGrp SystemList seattle1 0 seattle2 1
          # hagrp -modify VVRGrp AutoStartList seattle1 seattle2

     b. Add the DiskGroup resource Hr_Dg to the service group VVRGrp and modify the attributes of the resource:

          # hares -add Hr_Dg DiskGroup VVRGrp
          # hares -modify Hr_Dg DiskGroup hrdg

     c. Add the RVG resource Hr_Rvg to the service group VVRGrp and modify the attributes of the resource:

          # hares -add Hr_Rvg RVG VVRGrp
          # hares -modify Hr_Rvg RVG hr_rvg
          # hares -modify Hr_Rvg DiskGroup hrdg

     d. Add a NIC resource vvrnic to the service group VVRGrp and modify the attributes of the resource:

          # hares -add vvrnic NIC VVRGrp
          # hares -modify vvrnic Device lan3

     e. Add the IP resource vvrip to the service group VVRGrp and modify the attributes of the resource:

          # hares -add vvrip IP VVRGrp
          # hares -modify vvrip Device lan3
          # hares -modify vvrip Address 192.2.40.20
          # hares -modify vvrip NetMask "255.255.248.0"

     f. Specify resource dependencies for the resources you added in the previous steps:

          # hares -link Hr_Rvg vvrip
          # hares -link Hr_Rvg Hr_Dg
          # hares -link vvrip vvrnic

     g. Enable all resources in VVRGrp:

          # hagrp -enableresources VVRGrp
  4. Create the application service group, ORAGrp. This group contains all the application-specific resources.
     a. Add a service group, ORAGrp, to the cluster Seattle and populate the SystemList, AutoStartList, and ClusterList attributes of the service group:

          # hagrp -add ORAGrp
          # hagrp -modify ORAGrp SystemList seattle1 0 seattle2 1
          # hagrp -modify ORAGrp AutoStartList seattle1 seattle2
          # hagrp -modify ORAGrp ClusterList Seattle 0 London 1

     b. Add a NIC resource oranic to the service group ORAGrp and modify the attributes of the resource:

          # hares -add oranic NIC ORAGrp
          # hares -modify oranic Device lan0

     c. Add an IP resource oraip to the service group ORAGrp and modify the attributes of the resource:

          # hares -add oraip IP ORAGrp
          # hares -modify oraip Device lan0
          # hares -modify oraip Address 192.2.40.1
          # hares -modify oraip NetMask "255.255.248.0"

     d. Add the Mount resource Hr_Mount01 to mount the volume hr_dv01 in the RVG resource Hr_Rvg:

          # hares -add Hr_Mount01 Mount ORAGrp
          # hares -modify Hr_Mount01 MountPoint /hr_mount01
          # hares -modify Hr_Mount01 BlockDevice \
            /dev/vx/dsk/hrdg/hr_dv01
          # hares -modify Hr_Mount01 FSType vxfs
          # hares -modify Hr_Mount01 FsckOpt %-n
          # hares -modify Hr_Mount01 MountOpt rw

     e. Add the Mount resource Hr_Mount02 to mount the volume hr_dv02 in the RVG resource Hr_Rvg:

          # hares -add Hr_Mount02 Mount ORAGrp
          # hares -modify Hr_Mount02 MountPoint /hr_mount02
          # hares -modify Hr_Mount02 BlockDevice \
            /dev/vx/dsk/hrdg/hr_dv02
          # hares -modify Hr_Mount02 FSType vxfs
          # hares -modify Hr_Mount02 FsckOpt %-n
          # hares -modify Hr_Mount02 MountOpt rw

     f. Add the Oracle resource Hr_Oracle:

          # hares -add Hr_Oracle Oracle ORAGrp
          # hares -modify Hr_Oracle Sid hr1
          # hares -modify Hr_Oracle Owner oracle
          # hares -modify Hr_Oracle Home "/hr_mount01/OraHome1"
          # hares -modify Hr_Oracle Pfile "inithr1.ora"
          # hares -modify Hr_Oracle User dbtest
          # hares -modify Hr_Oracle Pword dbtest
          # hares -modify Hr_Oracle Table oratest
          # hares -modify Hr_Oracle MonScript "./bin/Oracle/SqlTest.pl"
          # hares -modify Hr_Oracle StartUpOpt STARTUP
          # hares -modify Hr_Oracle ShutDownOpt IMMEDIATE
          # hares -modify Hr_Oracle AutoEndBkup 1

     g. Add the Oracle listener resource LISTENER:

          # hares -add LISTENER Netlsnr ORAGrp
          # hares -modify LISTENER Owner oracle
          # hares -modify LISTENER Home "/hr_mount01/OraHome1"
          # hares -modify LISTENER Listener LISTENER
          # hares -modify LISTENER EnvFile "/oracle/.profile"
          # hares -modify LISTENER MonScript "./bin/Netlsnr/LsnrTest.pl"

     h. Add the RVGPrimary resource Hr_RvgPri:

          # hares -add Hr_RvgPri RVGPrimary ORAGrp
          # hares -modify Hr_RvgPri RvgResourceName Hr_Rvg

     i. Specify resource dependencies for the resources you added in the previous steps:

          # hares -link LISTENER Hr_Oracle
          # hares -link LISTENER oraip
          # hares -link Hr_Oracle Hr_Mount01
          # hares -link Hr_Oracle Hr_Mount02
          # hares -link Hr_Mount01 Hr_RvgPri
          # hares -link Hr_Mount02 Hr_RvgPri
          # hares -link oraip oranic

     j. Specify an online local hard group dependency between ORAGrp and VVRGrp:

          # hagrp -link ORAGrp VVRGrp online local hard

     k. Enable all resources in ORAGrp:

          # hagrp -enableresources ORAGrp

     l. Save and close the VCS configuration:

          # haconf -dump -makero
  5. Repeat steps 1 to 4 on the system london1 in the Secondary cluster London, with the changes described below:
     a. Repeat steps 1 and 2.
     b. At step 3a, replace seattle1 and seattle2 with london1 and london2, as follows:

        Add a service group, VVRGrp, to the cluster London and modify the SystemList and AutoStartList attributes of the service group:

          # hagrp -add VVRGrp
          # hagrp -modify VVRGrp SystemList london1 0 london2 1
          # hagrp -modify VVRGrp AutoStartList london1 london2

     c. Repeat steps 3b, 3c, and 3d.
     d. At step 3e, modify the Address attribute for the IP resource appropriately:

        Add the IP resource vvrip to the service group VVRGrp and modify the attributes of the resource:

          # hares -add vvrip IP VVRGrp
          # hares -modify vvrip Device lan3
          # hares -modify vvrip Address 192.2.40.21
          # hares -modify vvrip NetMask "255.255.248.0"

     e. Repeat steps 3f and 3g.
     f. At step 4a, replace seattle1 and seattle2 with london1 and london2, as follows:

        Add a service group, ORAGrp, to the cluster London and populate the SystemList, AutoStartList, and ClusterList attributes of the service group:

          # hagrp -add ORAGrp
          # hagrp -modify ORAGrp SystemList london1 0 london2 1
          # hagrp -modify ORAGrp AutoStartList london1 london2
          # hagrp -modify ORAGrp ClusterList Seattle 0 London 1

     g. Repeat step 4b.
     h. At step 4c, modify the Address attribute for the IP resource appropriately:

        Add the IP resource oraip to the service group ORAGrp and modify the attributes of the resource:

          # hares -add oraip IP ORAGrp
          # hares -modify oraip Device lan0
          # hares -modify oraip Address 192.2.40.1
          # hares -modify oraip NetMask "255.255.248.0"

     i. Repeat steps 4d through 4l.
  6. Bring the service groups online, if not already online.
      # hagrp -online VVRGrp -sys seattle1
      # hagrp -online ORAGrp -sys seattle1
  7. Verify that the service group ORAGrp is ONLINE on the system seattle1 by issuing the following command:
      # hagrp -state ORAGrp

Configuring the Agents When VCS is Stopped

Perform the following steps to configure the RVG agent using the sample configuration file on the first node in the Primary cluster and Secondary cluster. In the example in this guide, seattle1 is the first Primary node and london1 is the first Secondary node.

  1. Log in as root.
  2. Ensure that all changes to the existing configuration have been saved and that further changes are prevented while you modify main.cf:

    If the VCS cluster is currently writeable, run the following command:


      # haconf -dump -makero

    If the VCS cluster is already read only, run the following command:


      # haconf -dump 
  3. Do not edit the configuration files while VCS is started. The following command stops the had daemon on all systems and leaves the resources available:
      # hastop -all -force
  4. Make a backup copy of the main.cf file:
      # cd /etc/VRTSvcs/conf/config
      # cp main.cf main.cf.orig
  5. Edit the main.cf files for the Primary and Secondary clusters. The files main.cf.seattle and main.cf.london located in the /etc/VRTSvcs/conf/sample_vvr/RVGPrimary directory can be used for reference for the primary cluster and the secondary cluster respectively.
  6. Save and close the file.
  7. Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
      # cd /etc/VRTSvcs/conf/config/
      # hacf -verify .
  8. Start the VCS engine:
      # hastart
  9. Go to Administering the Service Groups.
VERITAS Software Corporation
www.veritas.com