Oracle® Clusterware Administration and Deployment Guide
11g Release 1 (11.1)

Part Number B28255-01

3 Cloning Oracle Clusterware

This chapter describes how to clone an existing Oracle Clusterware home and use it to create a new cluster or to extend Oracle Clusterware to new nodes on the same cluster. You implement cloning through the use of scripts in silent mode.

The cloning procedures described in this chapter are applicable to Linux, UNIX, and Windows systems. Although the examples in this chapter use Linux and UNIX commands, the cloning concepts and procedures apply to all platforms. For the Windows platform, you need to adjust the examples or commands to be Windows specific.

This chapter contains the following topics:

Introduction to Cloning Oracle Clusterware

Preparing the Oracle Clusterware Home for Cloning

Cloning Oracle Clusterware to Create a New Cluster

Cloning to Extend Oracle Clusterware to More Nodes in the Same Cluster

Cloning Script Variables Reference

Locating and Viewing Log Files Generated During Cloning

Introduction to Cloning Oracle Clusterware

Cloning is the process of copying an existing Oracle installation to a different location and then updating the copied installation to work in the new environment. The changes made by one-off patches applied on the source Oracle home are also present after the clone operation. During cloning, you run a script that replays the actions that installed the Oracle Clusterware home.

Cloning requires that you start with a successfully installed Oracle Clusterware home that you use as the basis for implementing a script that extends the Oracle Clusterware home to either create a new cluster or to extend the Oracle Clusterware environment to more nodes in the same cluster. Manually creating the cloning script can be prone to errors, because you must prepare the script without the benefit of any interactive checks to validate your input. Despite this, the initial effort is worthwhile for scenarios where you run a single script to install tens or even hundreds of clusters. If you have only one cluster to install, then you should use the traditional automated and interactive installation methods, such as Oracle Universal Installer (OUI) or the Provisioning Pack feature of Oracle Enterprise Manager.

Note:

Cloning is not a replacement for the Oracle Enterprise Manager cloning that is part of the Provisioning Pack. During Enterprise Manager cloning, the provisioning process simplifies cloning by interactively prompting you for details about the Oracle home, such as the location to which you want to deploy the clone, the name of the Oracle Database home, and a list of the nodes in the cluster.

The Provisioning Pack feature of Oracle Grid Control provides a framework that automates the provisioning of new nodes and clusters. For data centers with many clusters, the investment in creating a cloning procedure to provision new clusters and new nodes to existing clusters is worth the effort.

The following list describes some situations in which cloning is useful:

The cloned installation acts the same as the source installation. For example, you can remove the cloned Oracle Clusterware home using OUI or patch it using OPatch. You can also use the cloned Oracle Clusterware home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most cases. However, you can also customize some aspects of the cloning process, for example, to specify custom port assignments or to preserve custom settings.

The cloning process works by copying all of the files from the source Oracle Clusterware home to the destination Oracle Clusterware home. Thus, any files used by the source instance that are located outside the source Oracle Clusterware home's directory structure are not copied to the destination location.

The size of the binary files at the source and the destination may differ because they are relinked as part of the cloning operation, and the operating system patch levels may also differ between the two locations. In addition, the number of files in the cloned home can increase because several files that are copied from the source, specifically those being instantiated, are backed up as part of the clone operation.

Preparing the Oracle Clusterware Home for Cloning

To prepare the source Oracle Clusterware home to be cloned, you create a copy of an installed Oracle Clusterware home that you then use to perform the cloning procedure on one or more nodes.

Use the following step-by-step procedure to prepare a copy of the Oracle Clusterware home.


Step 1   Install Oracle Clusterware.

Use the detailed instructions in your platform-specific Oracle Clusterware installation guide to perform the following steps on the source node:

  1. Install the Oracle Clusterware 11g release.

  2. Install any patches that are required (for example, 11.1.0.n).

  3. Apply one-off patches, if necessary.

Step 2   Shut down Oracle Clusterware.

Before copying the source Oracle Clusterware home, shut down Oracle Clusterware using the crsctl stop crs command. The following example shows the command and the messages that display during the shutdown:

[root@node1 root]# crsctl stop crs
Stopping resources.
This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.

Note that you copy the Oracle Clusterware home from only one of the nodes.

Step 3   Make a copy of the Oracle Clusterware home.

To keep the installed Oracle Clusterware home as a working home, you should make a full copy of the source Oracle Clusterware home and remove the unnecessary files from the copy. For example, as the root user on Linux systems, you could issue the following cp command:

cp -prf CRS_HOME location_for_the_copy_of_crs_home

Step 4   Remove unnecessary files from the copy of the Oracle Clusterware home.

The Oracle Clusterware home contains files that are relevant only to the source node, so you should remove the unnecessary files from the copy. Exclude files in the log, crs/init, racg/dump, srvm/log, and cdata directories.

Use one of the following methods to exclude files from your backup file:
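
For example, one method is to delete these directories from the copy before you archive it. The following is a minimal sketch for a Linux system; it assumes the copy from the previous step was made in /opt/oracle/product/11g/crs_copy, which is a hypothetical location:

[root@node1 root]# cd /opt/oracle/product/11g/crs_copy
[root@node1 crs_copy]# rm -rf log/* crs/init/* racg/dump/* srvm/log/* cdata/*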

Step 5   Create a copy of the source Oracle Clusterware home.

On the source node, create a copy of the Oracle Clusterware home using WinZip on Windows systems and tar or gzip on Linux and UNIX systems. Make sure that the tool that you use preserves the permissions and file timestamps.

When creating the copy, the best practice is to include the release number in the name of the file. For example, on a Linux system you can use the cd command to change to the Oracle Clusterware home location and then use the tar command to create a compressed copy named crs11101.tgz.

The following examples describe how to archive and compress the source Oracle Clusterware home on various platforms:
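
For example, on a Linux or UNIX system, commands similar to the following archive and compress the home into /pathname/crs11101.tgz (a sketch; adjust the paths to the location of the copy you prepared in the previous steps):

[root@node1 root]# cd /opt/oracle/product/11g/crs
[root@node1 crs]# tar -zcvf /pathname/crs11101.tgz .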

Note:

Do not use the jar utility to copy and compress the Oracle Clusterware home.

Cloning Oracle Clusterware to Create a New Cluster

This section explains how to create a new cluster by cloning a successfully installed Oracle Clusterware environment and copying it to the nodes on the destination cluster. The procedures in this section describe how to use cloning for Linux, UNIX, and Windows systems.

For example, you can use cloning to quickly duplicate a successfully installed Oracle Clusterware environment to create a new cluster. Figure 3-1 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 and Node 3 on Cluster 2, making Cluster 2 a new two-node cluster.

Figure 3-1 Cloning to Create a New Oracle Clusterware Environment


At a high level, the steps to create a new cluster through cloning are as follows:

  1. Prepare the new cluster nodes

  2. Deploy Oracle Clusterware on the destination nodes

  3. Run the clone.pl script on each destination node

  4. Run the orainstRoot.sh script on each node

  5. Run the CRS_home/root.sh script

  6. Run the configuration assistants and the Oracle Cluster Verify utility


Step 1   Prepare the new cluster nodes

On each destination node, perform the following preinstallation steps:

See your platform-specific Oracle Clusterware installation guide for the complete preinstallation checklist.

Note:

Unlike traditional methods of installation, the cloning process does not validate your input during the preparation phase. (By comparison, during the traditional method of installation using the OUI, various checks take place during the interview phase.) Thus, if you make any mistakes during the hardware setup or in the preparation phase, then the cloned installation will fail.
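
One way to reduce the risk of such mistakes (a suggestion, not a required part of this procedure) is to run the Cluster Verification Utility (CVU) in preinstallation mode against the destination nodes from a node on which CVU is available. For example:

CRS_HOME/bin/cluvfy stage -pre crsinst -n node1,node2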

Step 2   Deploy Oracle Clusterware on the destination nodes

Before you begin the cloning procedure described in this section, ensure that you have completed the prerequisite tasks to create a copy of the Oracle Clusterware home, as described in the "Preparing the Oracle Clusterware Home for Cloning" section.

On each destination node, deploy the copy of the Oracle Clusterware home by performing the following steps:

  1. If you do not have a shared Oracle Clusterware home, then restore the copy of the Oracle Clusterware home on each node in the destination cluster in the equivalent directory structure as the directory structure in which the Oracle Clusterware home resided on the source node. Skip this step if you have a shared Oracle Clusterware home.

    For example:

    • On Linux or UNIX systems, issue commands similar to the following:

      [root@node1 root]# mkdir -p /opt/oracle/product/11g/crs
      [root@node1 root]# cd /opt/oracle/product/11g/crs
      [root@node1 crs]# tar -zxvf /pathname/crs11101.tgz
      

      In the example, the pathname variable represents the directory structure in which you want to install the Oracle Clusterware home.

    • On Windows systems, unzip the Oracle Clusterware home on the destination node in the equivalent directory structure as the directory structure in which the Oracle Clusterware home resided on the source node.

  2. Change the ownership of all files to the oracle user and the oinstall group, and create a directory for the Oracle Inventory. For example, the following commands are for a Linux system:

    [root@node1 crs]# chown -R oracle:oinstall /opt/oracle/product/11g/crs
    [root@node1 crs]# mkdir -p /opt/oracle/oraInventory
    [root@node1 crs]# chown oracle:oinstall /opt/oracle/oraInventory
    

    Note:

    You can perform this step at the same time that you perform Steps 3 and 4, in which you run the clone.pl and orainstRoot.sh scripts on each cluster node.
  3. Run the preupgrade.sh script from the CRS_home/install directory on each target node, as follows:

    preupgrade.sh -crshome target_crs_oh -crsuser user_who_runs_cloning -noshutdown
    
    

Step 3   Run the clone.pl script on each destination node

To set up the new Oracle Clusterware environment, the clone.pl script requires you to provide a number of setup values. You can supply these values either on the command line when you run the clone.pl script, or in a file that assigns values to the cloning variables. The following discussions describe these options.
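
For example, a command line similar to the following supplies the values directly; this is only a sketch modeled on the node-addition example later in this chapter, and the paths, node names, and Oracle home name are placeholders:

perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=/opt/oracle/product/11g/crs \
  ORACLE_HOME_NAME=Oracle_home_name \
  "sl_tableList={node1:node1-priv:node1-vip:N:Y,node2:node2-priv:node2-vip:N:Y}" \
  INVENTORY_LOCATION=/opt/oracle/oraInventory -noConfig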

Step 4   Run the orainstRoot.sh script on each node

In the Central Inventory directory on each destination node, run the orainstRoot.sh script as the operating system user that installed Oracle Clusterware. This script populates the /etc/oraInst.loc file with the location of the central inventory.

Note that you can run the script on each node simultaneously. For example:

[root@node1 root]# /opt/oracle/oraInventory/orainstRoot.sh

Ensure the orainstRoot.sh script has completed on each destination node before proceeding to the next step.

Step 5   Run the CRS_home/root.sh script

On each destination node, run the CRS_home/root.sh script. You can run the script on only one node at a time. The following example is for a Linux or UNIX system:

  1. On the first node, issue the following command:

    [root@node1 root]# /opt/oracle/product/11g/crs/root.sh
    

    Ensure the CRS_home/root.sh script has completed on the first node before running it on the second node.

  2. On each subsequent node, issue the following command:

    [root@node2 root]# /opt/oracle/product/11g/crs/root.sh
    

The root.sh script automatically sets up the node applications: Global Services Daemon (GSD), Oracle Notification Service (ONS), and Virtual IP (VIP) resources in the OCR.
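
After the root.sh script completes on all of the destination nodes, you can optionally confirm that these resources are registered (a suggested check; the output depends on your configuration). For example:

CRS_home/bin/crs_stat -t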

Step 6   Run the configuration assistants and the Oracle Cluster Verify utility

At the end of the Oracle Clusterware installation on each new node, run the configuration assistants and CVU using the commands in the CRS_home/cfgtoollogs/configToolAllCommands file.
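
For example, the CVU postinstallation check for the new cluster resembles the following, where node1 and node2 are placeholders for the names of the destination nodes:

CRS_home/bin/cluvfy stage -post crsinst -n node1,node2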

Cloning to Extend Oracle Clusterware to More Nodes in the Same Cluster

You can use cloning to quickly extend a successfully installed Oracle Clusterware environment to more nodes in the same cluster. Figure 3-2 shows the end result of a cloning procedure in which the Oracle Clusterware home on Node 1 has been cloned to Node 2 in the same cluster, making it a two-node cluster.

Figure 3-2 Cloning to Extend the Oracle Clusterware Environment to Another Node


At a high level, the steps to extend Oracle Clusterware to more nodes are nearly identical to the steps described in the "Cloning Oracle Clusterware to Create a New Cluster" section.

The following list describes the steps you perform to extend Oracle Clusterware to additional nodes in the cluster:

  1. Prepare the new cluster nodes.

  2. Deploy Oracle Clusterware on the destination nodes.

  3. Run the clone.pl script on each destination node. The following example is for Linux or UNIX systems:

    perl clone.pl ORACLE_BASE=/opt/oracle ORACLE_HOME=$ORACLE_HOME \
      ORACLE_HOME_NAME=Oracle_home_name \
      "sl_tableList={node2:node2-priv:node2-vip}" \
      INVENTORY_LOCATION=central_inventory_location -noConfig
    
  4. Run the orainstRoot.sh script on each destination node.

  5. Run the addNode script on the source node.

    Run the following command on the source node, where new_node is the name of the new node, new_node-priv is the private node name for the new node, and new_node-vip is the virtual host name for the new node:

    $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES=(new_node)" \
      "CLUSTER_NEW_PRIVATE_NODE_NAMES=(new_node-priv)" \
      "CLUSTER_NEW_VIRTUAL_HOSTNAMES=(new_node-vip)" -noCopy
    

    Note:

    Because the clone.pl script has already been run on the new node, this step only updates the inventories on the nodes and instantiates scripts on the local node.
  6. On the source node, run a script to instantiate the node:

    • On Linux and UNIX systems, run the rootaddnode.sh script from the CRS_HOME/install directory as the root user.

    • On Windows systems, run the crssetup.add.bat script from the %CRS_HOME%\install directory.

  7. Run the CRS_home/root.sh script on each destination node in Linux and UNIX environments.

  8. Run the configuration assistants and the CLUVFY utility.

    As the user that owns the clusterware on the source node of the cluster, run the configuration assistants as described in the following steps:

    1. On Linux or UNIX systems, issue the following onsconfig command:

      onsconfig add_config node2:remote_port node3:remote_port
      

      You can obtain the remote port by issuing the cat ons.config command from the opmn/conf directory in the Oracle Clusterware home.

    2. On Windows systems, issue the racgons command:

      racgons add_config node2:remote_port node3:remote_port
      
    3. On Linux, UNIX, or Windows systems, run the CLUVFY utility in postinstallation verification mode to confirm that the installation of Oracle Clusterware was successful. For example:

      CRS_HOME/bin/cluvfy stage -post crsinst -n node1,node2
      

Cloning Script Variables Reference

Table 3-2 describes the variables that can be passed to the clone.pl script when you include the -O option on the command.
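
For example, a variable from this table can be passed on the clone.pl command line in a form similar to the following. This is only a sketch: the exact quoting that the -O option requires can vary by platform and shell, and the cluster name shown is a placeholder:

perl clone.pl ORACLE_HOME=/opt/oracle/product/11g/crs ORACLE_HOME_NAME=Oracle_home_name \
  '-O"s_clustername=crs_cluster"'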

Table 3-2 Variables for the clone.pl Script with the -O option

Variable Datatype Description

s_clustername

String

Set the value of this variable to the unique name of the cluster that you are creating from a cloning operation. Use a maximum of 15 characters. Valid characters for the cluster name are any combination of lowercase and uppercase alphabetic characters A to Z, numerals 0 through 9, hyphens (-), pound signs (#), and underscores (_).

INVENTORY_LOCATION

String

The location of the inventory. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

sl_tableList

String List

A list of the nodes that make up the cluster. The format is a comma-delimited list of public_name:private_name:vip_name:N:Y.

Set the value of this variable to the information in the cluster configuration information table. The value is a comma-delimited list in which the first field designates the public node name, the second field designates the private node name, and the third field designates the virtual host name. The fourth and fifth fields are used only by OUI and should default to N:Y. OUI parses these values and assigns the s_publicname and s_privatename variables accordingly. For example:

{"node1:node1-priv:node1-vip:N:Y","node2:node2-priv:node2-vip:N:Y"}.

ret_PrivIntrList

String List

This is the return value from the Private Interconnect Enforcement table. This variable has values in the format {Interface Name, Subnet, Interface Type}. The value for Interface Type can be one of the following:

  • 1 to denote public

  • 2 to denote private

  • 3 to denote Do Not Use

For example:

{"eth0:10.87.24.0:2","eth1:140.87.24.0:1","eth3:140.74.30.0:3"}

You can run the ifconfig command (or the ipconfig command on Windows systems) to identify the initial values from which you can determine the entries for ret_PrivIntrList.

n_storageTypeVDSK

Integer

If you are using:

  • A single voting disk, set this parameter to 1 (not redundant).

  • Multiple voting disks, set this parameter to 2 (redundant).

n_storageTypeOCR

Integer

If you are using:

  • A single OCR disk, set this parameter to 1 (not redundant).

  • Multiple OCR disks, set this parameter to 2 (redundant).

s_clustername

String

This variable contains user-entered cluster name information; allow a maximum of 15 characters.

VdskMirrorNotReqd

String

This variable is not required in the Oracle Cluster Registry (OCR) dialog.

CLUSTER_CONFIGURATION_FILE

String

This variable is used to pass the cluster configuration file, which is the same file as that specified during installation. You can use this file instead of sl_tableList. The file contains the public node name, private node name, and virtual host name for each node of the cluster as white space-delimited values. For example:

node1    node1-priv    node1-vip
node2    node2-priv    node2-vip

Note that if you are cloning from an existing installation, then you should use sl_tableList. Do not specify this variable for a clone installation.

s_votingdisklocation

String

Set the value of this variable to be the location of the voting disk. For example:

/oradbshare/oradata/vdisk

If you are using:

  • A single voting disk, only specify the voting disk location with the s_votingdisklocation parameter.

  • Multiple voting disks, set the s_votingdisklocation, s_OcrVdskMirror1RetVal, and the s_VdskMirror2RetVal parameters.

s_OcrVdskMirror1RetVal

String

Set the value of this variable to the location of the first additional voting disk. You must set this variable if you set the n_storageTypeVDSK variable to 2 (redundant). For example:

/oradbshare/oradata/vdiskmirror1

s_ocrpartitionlocation

String

Set the value of this variable to the OCR location. Oracle places this value in the ocr.loc file when you run the root.sh script. For example:

/oradbshare/oradata/ocr

If you are using:

  • A single OCR disk, only set the s_ocrpartitionlocation parameter to specify the location of the OCR partition.

  • Multiple OCR disks, set the s_ocrpartitionlocation parameter and the s_ocrMirrorLocation parameter.

s_ocrMirrorLocation

String

Set the value of this variable to the OCR mirror location. Oracle places this value in the ocr.loc file when you run the root.sh script. You must set this variable if you set the n_storageTypeOCR variable to 2 (redundant). For example:

/oradbshare/oradata/ocrmirror

s_VdskMirror2RetVal

String

Set the value of this variable to the location of the second additional voting disk. You must set this variable if you set the n_storageTypeVDSK variable to 2 (redundant). For example:

/oradbshare/oradata/vdiskmirror2

CLUSTER_NODES

String List

The value of this variable represents the cluster node names that you selected for installation. For example, if you selected node1:

CLUSTER_NODES = {"node1"}

b_Response

Boolean

Only set this variable when performing a silent installation with a response file. The valid values are true or false.

sl_OHPartitionsAndSpace_valueFromDlg

String List

Set the value for this variable using the following format:

1 = disk number

2 = partition number

3 = partition size

4 = format type, 0 for raw and 1 for cluster file system

5 = drive letter (this value is not applicable if you use raw devices; use an available drive letter if you are using a cluster file system)

6 = usage type, where the values are:

  • 0 = Data or software use only

  • 1 = Primary OCR only

  • 2 = Voting disk only

  • 3 = Primary OCR and voting disk on the same cluster file system partition

  • 4 = OCR mirror only

  • 5 = OCR mirror and voting disk on the same cluster file system partition

For example, to configure the OCR and voting disk on raw devices and to not use a cluster file system for either data or software, set sl_OHPartitionsAndSpace_valueFromDlg to list only the partitions that you intend to use for an Oracle Clusterware installation using the following format:

sl_OHPartitionsAndSpace_valueFromDlg =
 {Disk,Partition,partition size,0,N/A,1,
  Disk,Partition,partition size,0,N/A,2,.....}

Locating and Viewing Log Files Generated During Cloning

The cloning script runs multiple tools, each of which may generate its own log files. After the clone.pl script finishes running, you can view log files to obtain more information about the cloning process.

The following log files that are generated during cloning are the key log files of interest for diagnostic purposes:

Table 3-3 describes how to find the location of the Oracle inventory directory.

Table 3-3 Finding the Location of the Oracle Inventory Directory

Type of System    Location of the Oracle Inventory Directory

All UNIX computers except Linux and IBM AIX

/var/opt/oracle/oraInst.loc

IBM AIX and Linux

/etc/oraInst.loc

Windows

Obtain the location from the Windows Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\INST_LOC
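
For example, on a Linux system you could locate the central inventory and then list the log files that the cloning process generated (a sketch; the inventory location shown is an assumption based on the examples earlier in this chapter):

[root@node1 root]# cat /etc/oraInst.loc
inventory_loc=/opt/oracle/oraInventory
inst_group=oinstall
[root@node1 root]# ls /opt/oracle/oraInventory/logs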