Oracle® Clusterware Administration and Deployment Guide
11g Release 1 (11.1)

Part Number B28255-01

4 Adding and Deleting Oracle Clusterware Homes

This chapter describes how to use the addNode.sh script to extend an existing Oracle Clusterware home to other nodes, and how to use the rootdeletenode.sh script to delete nodes from a cluster. This chapter provides instructions for Linux, UNIX, and Windows systems.

You should use the add node procedures described in this chapter to add or delete Oracle Clusterware on nodes in the cluster. If your goal is to create new clusters or to extend Oracle Clusterware to more nodes in the same cluster, then use the cloning procedures that are described in Chapter 3.

The topics in this chapter include the following:

  • Prerequisite Steps for Adding Oracle Clusterware

  • Adding and Deleting Oracle Clusterware Homes on Linux and UNIX Systems

  • Adding and Deleting Oracle Clusterware Homes on Windows Systems

Prerequisite Steps for Adding Oracle Clusterware

The following steps assume that you already have an operational Linux or UNIX environment.

Complete the following steps to prepare the new nodes in the cluster:

  1. Make physical connections

    Connect the new nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step.

  2. Install the operating system

    Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. See your hardware vendor documentation for details about this process.

  3. Create Oracle users

    As the root user, create the Oracle users and groups using the same user IDs and group IDs as on the existing nodes.
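
    For example, on many Linux systems you might create the groups and the software owner with commands similar to the following; the group names, user name, and numeric IDs shown here are illustrative and must match the values already in use on the existing nodes (which you can confirm with the id command):

      /usr/sbin/groupadd -g 1000 oinstall
      /usr/sbin/groupadd -g 1001 dba
      /usr/sbin/useradd -u 1100 -g oinstall -G dba oracle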

  4. Verify the installation with the Cluster Verification Utility (CVU) using the following steps:

    1. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to verify your installation at the post-hardware installation stage, as shown in the following example, where node_list is a comma-delimited list of the nodes that you want in your cluster:

      cluvfy stage -post hwos -n node_list|all [-verbose]
      

      This command causes CVU to verify your hardware and operating system environment at the post-hardware setup stage. After you have configured the hardware and operating systems on the new nodes, you can use this command to verify, for example, that the new nodes are reachable from the local node. You can also use this command to verify user equivalence from the local node to all of the given nodes, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

      Note:

      You can use the all option with the -n argument only if you have set the CV_NODELIST variable to represent the list of nodes on which you want to perform the CVU operation.
    2. From the /bin directory in the CRS_home on the existing nodes, run the CVU command to obtain a detailed comparison of the properties of the reference node with those of all of the other nodes that are part of your current cluster environment. In the following syntax, ref_node is a node in your existing cluster against which you want CVU to compare the newly added nodes that you specify with the comma-delimited list in node_list for the -n option, orainventory_group is the name of the Oracle Inventory group, and osdba_group is the name of the OSDBA group:

      cluvfy comp peer [ -refnode ref_node ] -n node_list 
      [ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
      

    Note:

    For the reference node, select a node from your existing cluster against which you want CVU to compare the newly added nodes that you specify with the -n option.
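
    For example, assuming an existing node named node1, a new node named node2, and the oinstall and dba groups (all of these names are illustrative), the two checks might be run as follows:

      cluvfy stage -post hwos -n node1,node2 -verbose
      cluvfy comp peer -refnode node1 -n node2 -orainv oinstall -osdba dba -verbose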
  5. Check the installation

    To verify that your installation is configured correctly, perform the following steps:

    1. Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures described in this chapter.

    2. If you are not using a cluster file system, then determine the location where your cluster software was installed on the existing nodes. Make sure that you have at least 250 MB of free space in the same location on each of the new nodes to install Oracle Clusterware. In addition, ensure that you have enough free space on each new node to install the Oracle binaries.

    3. Ensure that the Oracle Cluster Registry (OCR) and the voting disk are accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes.

    4. Verify user equivalence to and from an existing node to the new nodes using rsh or ssh on Linux and UNIX systems. On Windows systems, make sure that you can run the following command from all of the existing nodes of your cluster, where hostname is the public network name of the new node:

      NET USE \\hostname\C$
      

      You have the required administrative privileges on each node if the operating system responds with:

      Command completed successfully.
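
      On Linux and UNIX systems, a minimal equivalent check, run from an existing node (node2 here is an illustrative new node name), is to confirm that ssh can run a remote command without prompting for a password:

      ssh node2 date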
      

    After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to Oracle Clusterware.

    Note:

    Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

Adding and Deleting Oracle Clusterware Homes on Linux and UNIX Systems

This section explains Oracle Clusterware home addition and deletion on Linux and UNIX systems. It assumes that you have already performed the steps in the "Prerequisite Steps for Adding Oracle Clusterware" section.

For node addition, ensure that you install the required operating system patches and updates on the new nodes. Then configure the new nodes to be part of your cluster at the network level. Use the instructions in this section to extend the Oracle Clusterware home from an existing Oracle Clusterware home to the new nodes.

Finally, you can optionally extend the Oracle database software with Oracle RAC components to the new nodes and make the new nodes members of the existing Oracle RAC database. See the node addition procedures described in Oracle Real Application Clusters Administration and Deployment Guide.

This section includes the following topics:

  • Adding an Oracle Clusterware Home to a New Node On Linux or UNIX Systems

  • Deleting an Oracle Clusterware Home from a Linux or UNIX System

Adding an Oracle Clusterware Home to a New Node On Linux or UNIX Systems

This section describes how to use Oracle Universal Installer (OUI) to add an Oracle Clusterware home to a node in your cluster. This documentation assumes:

  • There is an existing cluster that has a node named node1

  • You are adding Oracle Clusterware to a node named node2

  • You have already successfully installed Oracle Clusterware on node1 in a nonshared home, where CRS_home represents the successfully installed home

You can use either of the following procedures to add an Oracle Clusterware home to a node:

  • Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode

  • Adding an Oracle Clusterware Home to a New Node Using OUI in Silent Mode

Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode

This procedure assumes that you have performed the tasks outlined in "Prerequisite Steps for Adding Oracle Clusterware". OUI requires access to the private interconnect that you verified as part of those prerequisite installation checks. If OUI cannot make the required connections, then you will not be able to complete the following steps to add Oracle Clusterware to other nodes.

Note:

Instead of performing the first six steps of this procedure, you can alternatively run the addNode.sh script in silent mode, as described in "Adding an Oracle Clusterware Home to a New Node Using OUI in Silent Mode".
  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. Also, for these procedures to complete successfully, you must ensure that CRS_home identifies your successfully installed Oracle Clusterware home.

  2. Start OUI:

    Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes. The OUI runs in add node mode and the OUI Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page displays.

  3. OUI displays the Node Selection Page on which you should select the node or nodes that you want to add and click Next.

    The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.

  4. Verify the entries that OUI displays on the Summary Page and click Next.

    If any verifications fail, then OUI redisplays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all the checks succeed, then OUI displays the Node Addition Summary page.

  5. The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:

    • The source for the add node process, which in this case is the Oracle Clusterware home

    • The private node names that you entered for the new nodes

    • The new nodes that you entered

    • The required and available space on the new nodes

    • The installed products listing the products that are already installed on the existing Oracle Clusterware home

    Click Next and OUI displays the Cluster Node Addition Progress page.

  6. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the four phases of the node addition process and the phases' statuses as follows:

    • Instantiate Root Scripts—Instantiates rootaddNode.sh with the public node names, private node names, and virtual hostnames that you entered on the Specify Cluster Nodes for Node Addition page.

    • Copy the Oracle Clusterware home to the New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on a cluster file system.

    • Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.

    • Run rootaddNode.sh and root.sh—Displays a dialog prompting you to run the rootaddNode.sh script (see Footnote 1) from the local node (the node on which you are running OUI) and to run the root.sh script (see Footnote 2) on the new nodes. If OUI detects that the new nodes do not have an inventory location, then OUI instructs you to run the orainstRoot.sh script (see Footnote 3) on those nodes. The central inventory location is the same as that of the local node. The addNodeActionstimestamp.log file, where timestamp shows the session start date and time, contains information about which scripts you need to run and on which nodes you need to run them.

    The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. After OUI displays the End of Node Addition page, click Exit to end the OUI session.

  7. Run the configuration assistants and CVU using the commands in the CRS_home/cfgtoollogs/configToolAllCommands file, replacing the existing node name with the new node name for each command.

Adding an Oracle Clusterware Home to a New Node Using OUI in Silent Mode

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, the CRS_home must identify your successfully installed Oracle Clusterware home.

  2. Go to CRS_home/oui/bin and run the addNode.sh script using the following syntax where node2 is the name of the new node that you are adding, node2-priv is the private node name for the new node, and node2-vip is the VIP name for the new node:

    ./addNode.sh -silent "CLUSTER_NEW_NODES={node2}" 
    "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}" 
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}" 
    

    Alternatively, you can specify the variable=value entries in a response file and run the addNode script as follows:

    addNode.sh -responseFile filename
    

    See Also:

    Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

    Note:

    Command-line values always override response file values.
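
    A minimal sketch of such a response file, using the illustrative node2 names from the command-line example above, contains the same variable=value entries (see Oracle Universal Installer and OPatch User's Guide for the complete response file format):

      CLUSTER_NEW_NODES={node2}
      CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}
      CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}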
  3. Perform step 7 in the "Adding an Oracle Clusterware Home to a New Node Using OUI in Interactive Mode" section.

Deleting an Oracle Clusterware Home from a Linux or UNIX System

The procedures for deleting an Oracle Clusterware home assume that you have successfully installed Oracle Clusterware on the node from which you want to delete the Oracle Clusterware home. You can use either of the following procedures to delete an Oracle Clusterware home from a node:

  • Deleting an Oracle Clusterware Home Using OUI in Interactive Mode

  • Deleting an Oracle Clusterware Home Using OUI in Silent Mode

Note:

Oracle recommends that you back up your voting disk and OCR files after you complete the node deletion process.

Deleting an Oracle Clusterware Home Using OUI in Interactive Mode

Use the following steps to remove Oracle Clusterware from a cluster node.


Step 1   Verify the location of the Oracle Clusterware home

Ensure that CRS_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where CRS_home is the location of the installed Oracle Clusterware software.

Step 2   Remove the stored network configuration

If you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global option during the installation, then skip this step.

Otherwise, from a node that is going to remain in your cluster, in the CRS_home/bin directory, run the following command where node2 is the name of the node that you are deleting:

./oifcfg delif -node node2

Step 3   Obtain the remote port number

Obtain the remote port number, which you will use in the next step. To do this, issue the following command from the CRS_home/opmn/conf directory:

cat ons.config
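
Representative contents of ons.config look similar to the following; the port numbers shown here are illustrative, and the remoteport value is the remote port number that you need:

localport=6113
remoteport=6200
loglevel=3
useocr=on

For example, if remoteport is 6200, then the command in Step 4 would be ./racgons remove_config node2:6200.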

Step 4   Remove the ONS daemon configuration

From CRS_home/bin on a node that is going to remain in the cluster, run the Oracle Notification Service Utility (RACGONS). In the following example, the remote_port variable represents the ONS remote port number that you obtained in step 3 and node2 is the name of the node that you are deleting:

./racgons remove_config node2:remote_port

Step 5   Disable the Oracle Clusterware applications

On the node to be deleted, run the rootdelete.sh script as the root user from the CRS_home/install directory to disable the Oracle Clusterware applications and daemons running on the node. If you are deleting Oracle Clusterware from more than one node, then perform this step on each node that you are deleting.

Step 6   Delete the node and update the cluster registry

From any node that you are not deleting, issue the following command from the CRS_home/install directory as the root user to delete the node from the cluster and to update the Oracle Cluster Registry (OCR). In the following command, the variable node2,node2-number represents the node and the node number that you want to delete:

./rootdeletenode.sh node2,node2-number

If necessary, identify the node number using the following command on the node that you are deleting:

CRS_home/bin/olsnodes -n
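
For example, output similar to the following (the node names and numbers are illustrative) shows that node2 is node number 2, so the rootdeletenode.sh command in this step would be ./rootdeletenode.sh node2,2:

node1   1
node2   2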

Step 7   Remove the node from the node list

On the node that is to be deleted, run the following command from the CRS_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

./runInstaller -updateNodeList ORACLE_HOME=CRS_home 
"CLUSTER_NODES={node_to_be_deleted}" 
CRS=TRUE -local

Step 8   Detach or deinstall the Oracle Clusterware software

On the node that you are deleting, run OUI using the runInstaller command from the CRS_home/oui/bin directory. Depending on whether you have a shared or nonshared Oracle home, complete one of the following procedures:

  • If you have a shared home, then on any node other than the node to be deleted, run the following command from the CRS_home/oui/bin directory:

    ./runInstaller -detachHome  ORACLE_HOME=CRS_home
    
  • For a nonshared home, deinstall the Oracle Clusterware home from the node that you are deleting by issuing the following command from the CRS_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:

    ./runInstaller -deinstall "REMOVE_HOMES={CRS_home}"
    

Step 9   Update the node list on the remaining nodes

On any node other than the node you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

./runInstaller -updateNodeList ORACLE_HOME=CRS_home 
"CLUSTER_NODES={remaining_nodes_list}" 
CRS=TRUE
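
For example, if node1 is the only node remaining in the cluster and the Oracle Clusterware home is /u01/app/crs (both values are illustrative), then the command is:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/crs "CLUSTER_NODES={node1}" CRS=TRUE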

Deleting an Oracle Clusterware Home Using OUI in Silent Mode

Use the following steps to remove Oracle Clusterware from a cluster node.

  1. Ensure that CRS_home correctly identifies the Oracle Clusterware home on each node.

  2. Detach or deinstall Oracle Clusterware.

    Depending on whether you have a shared or nonshared Oracle Clusterware home, complete one of the following two procedures:

    • For shared homes, do not perform a deinstallation operation. Instead, perform a detach home operation on the node that you are deleting. To do this, run the following command from CRS_home/oui/bin:

      ./runInstaller -detachHome ORACLE_HOME=CRS_home
      
    • For a nonshared home, deinstall the Oracle Clusterware home from the node that you are deleting by issuing the following command from the CRS_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:

      ./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
      
  3. Update the node list.

    On any node other than the node you are deleting, run the following command from the CRS_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

    ./runInstaller -updateNodeList ORACLE_HOME=CRS_home 
    "CLUSTER_NODES={remaining_nodes_list}" 
    CRS=TRUE
    

Adding and Deleting Oracle Clusterware Homes on Windows Systems

For node additions, ensure that you install the required operating system patches and updates on the new nodes. Then configure the new nodes to be part of your cluster at the network level. Use the instructions in the following sections to extend the Oracle Clusterware home from an existing Oracle Clusterware home to the new nodes.

Finally, you can optionally extend the Oracle database software with Oracle RAC components to the new nodes and make the new nodes members of the existing Oracle RAC database. See the node addition procedures described in Oracle Real Application Clusters Administration and Deployment Guide.

This section includes the following topics:

  • Adding an Oracle Clusterware Home to a Windows System

  • Deleting an Oracle Clusterware Home from a Windows System

Adding an Oracle Clusterware Home to a Windows System

This section describes how to add new nodes to Oracle Clusterware using OUI. The OUI requires access to the private interconnect that you checked in the "Prerequisite Steps for Adding Oracle Clusterware" section.

Perform the following steps:

  1. On one of the existing nodes, go to the CRS_home\oui\bin directory and run the addnode.bat script to start OUI.

  2. The OUI runs in the add node mode and the OUI Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page appears.

  3. The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes, the private node names, and the virtual IP (VIP) addresses that are associated with Oracle Clusterware. Use the lower table to enter the public node names, private node names, and virtual hostnames of the new nodes.

  4. Click Next and OUI verifies connectivity on the existing nodes and on the new nodes. The verifications that OUI performs include determining whether:

    • The nodes are up

    • The nodes are accessible by way of the network

      Note:

      If any of the existing nodes are down, then you can proceed with the procedure. However, once the nodes are up, you must run the following command on each of those nodes:
      setup.exe -updateNodeList -local 
      "CLUSTER_NODES={available_node_list}"
      ORACLE_HOME=CRS_home
      

      Run this command from the CRS_home\oui\bin directory, where the available_node_list value is a comma-delimited list of all of the nodes currently in the cluster and CRS_home is the Oracle Clusterware home directory.

    • The virtual hostnames are not already in use on the network

    • The user has write permission to create the Oracle Clusterware home on the new nodes

    • The user has write permission to the OUI inventory in the C:\Program Files\Oracle\Inventory directory

  5. If OUI detects that the new nodes do not have an inventory location, then OUI automatically updates the inventory location in the Registry key.

    If any verifications fail, then OUI redisplays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your cluster before you can proceed with node addition. If all of the checks succeed, then OUI displays the Node Addition Summary page.

  6. The Node Addition Summary page displays the following information showing the products that are installed in the Oracle Clusterware home that you are extending to the new nodes:

    • The source for the add node process, which in this case is the Oracle Clusterware home

    • The private node names that you entered for the new nodes

    • The new nodes that you entered

    • The required and available space on the new nodes

    • The installed products listing the products that are already installed in the existing Oracle Clusterware home

    Click Next and OUI displays the Cluster Node Addition Progress page.

  7. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phases of the node addition process and each phase's status.

    This page shows the following three OUI phases:

    • Copy the Oracle Clusterware Home to New Nodes—Copies the Oracle Clusterware home to the new nodes unless the Oracle Clusterware home is on the Oracle Cluster File System.

    • Perform Oracle Home Setup—Updates the Registry entries for the new nodes, creates the services, and creates folder entries.

    • Save Cluster Inventory—Updates the node list associated with the Oracle Clusterware home and its inventory.

    The Cluster Node Addition Progress page's Status column displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. After OUI displays the End of Node Addition page, click Exit to end the OUI session.

  8. From CRS_home\install on node1, run the crssetup.add.bat script.

  9. Use the following command to perform an integrated validation of the Oracle Clusterware setup on all of the configured nodes, both the preexisting nodes and the nodes that you have added:

    cluvfy stage -post crsinst -n all [-verbose]
    

The CVU -post crsinst stage check verifies the integrity of the Oracle Clusterware components. After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended the Oracle Clusterware home from your existing Oracle Clusterware home to the new nodes. Proceed to Step 3 to prepare the storage for Oracle RAC on the new nodes.

You can optionally run addnode.bat in silent mode, replacing steps 1 through 6 as follows, where nodeI, nodeI+1, and so on are the new nodes that you are adding:

addnode.bat -silent "CLUSTER_NEW_NODES={nodeI,nodeI+1,…nodeI+n}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node-privI,node-privI+1,…node-privI+n}" 
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node-vipI,node-vipI+1,…,node-vipI+n}"

You can alternatively specify the variable=value entries in a response file and run addnode as follows:

addnode.bat  -responseFile filename

Command-line values always override response file values.
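
For example, to add a single new node from the command line in silent mode, using the illustrative node2 names from the Linux and UNIX example earlier in this chapter:

addnode.bat -silent "CLUSTER_NEW_NODES={node2}" 
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}" 
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"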

See Also:

Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

Run crssetup.add.bat on the local node, that is, on the node from which you are performing this procedure.

After you have completed the procedures in this section for adding nodes at the Oracle Clusterware layer, you have successfully extended the Oracle Clusterware home from your existing Oracle Clusterware home to the additional nodes.

Deleting an Oracle Clusterware Home from a Windows System

This section describes how to delete Oracle Clusterware from Windows systems. These procedures assume that an Oracle Clusterware home is installed on node1 and node2, and that you want to delete node2 from the cluster.

This section contains the following topics:

  • Deleting an Oracle Clusterware Home Using OUI in Interactive Mode

  • Deleting an Oracle Clusterware Home Using OUI in Silent Mode

Note:

Oracle recommends that you back up your voting disk and Oracle Cluster Registry files after you complete any node addition or deletion procedures.

Deleting an Oracle Clusterware Home Using OUI in Interactive Mode

Perform the following procedure to use OUI to delete the Oracle Clusterware home from a node:

  1. Perform the delete node operation for database homes as described in Oracle Real Application Clusters Administration and Deployment Guide.

  2. If you did not run the oifcfg command with the -global option, then from node1 run the following command:

    oifcfg delif -node node2
    
  3. From node1, in the CRS_home\bin directory, run the following command, where remote_port is the ONS remote port number:

    racgons remove_config node2:remote_port
    

    You can determine the remote port by viewing the contents of the file, CRS_home\opmn\conf\ons.config.

  4. Run srvctl to stop and remove the nodeapps from node2. From CRS_home\bin, run the following commands:

    srvctl stop nodeapps -n node2
    srvctl remove nodeapps -n node2
    
  5. On node1, or on any node that is not being deleted, run the following command from CRS_home\bin where node_name is the node to be deleted and node_number is the node's number as obtained from the output from the olsnodes -n command:

    crssetup del -nn node_name,node_number
    
  6. On each node that you want to delete (node2 in this case), run the following command from CRS_home\oui\bin:

    setup.exe -updateNodeList ORACLE_HOME=CRS_home 
    "CLUSTER_NODES={node2}" CRS=TRUE -local
    
  7. On node2, using OUI setup.exe from CRS_home\oui\bin:

    • If you do not have a shared home, then deinstall the Oracle Clusterware installation by running the setup.exe script from CRS_home\oui\bin.

    • If you have a shared home, then do not perform a deinstallation. Instead, perform the following steps on the node that you want to delete:

      • Run the following command from CRS_home\oui\bin to detach the Oracle Clusterware home on node2:

        setup.exe -detachHome -silent ORACLE_HOME=CRS_home
        
      • On node2, stop and delete any services that are associated with this Oracle Clusterware home. In addition, delete any Registry entries and path entries that are associated with this Oracle Clusterware home. Also, delete all of the Start menu items associated with this Oracle Clusterware home. Delete the central inventory and the Oracle files (not Oracle home files) under C:\WINDOWS\system32\drivers.

  8. On node1, or in the case of a multiple node installation, on any node other than the one to be deleted, run the following command from CRS_home\oui\bin where node_list is a comma-delimited list of nodes that are to remain part of the Oracle Clusterware:

    setup.exe -updateNodeList ORACLE_HOME=CRS_home
    "CLUSTER_NODES={node_list}" CRS=TRUE
    

Deleting an Oracle Clusterware Home Using OUI in Silent Mode

Use the following procedure to delete an Oracle Clusterware home by using OUI in silent mode:

  1. Perform steps 1 through 6 from the previous section, "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode," to delete nodes at the Oracle Clusterware layer.

  2. On node2, using OUI setup.exe from CRS_home\oui\bin:

    • If you have a nonshared home, then deinstall the Oracle Clusterware home as follows:

      setup.exe -silent -deinstall "REMOVE_HOMES={CRS_home}"
      
    • If you have a shared home, then do not perform a deinstallation. Instead, perform the following steps on the node that you want to delete:

      • Run the following command from CRS_home\oui\bin to detach the Oracle Clusterware home on node2:

        setup.exe -detachHome -silent ORACLE_HOME=CRS_home
        
      • On node2, stop and delete any services that are associated with this Oracle Clusterware home. In addition, delete any Registry entries and path entries that are associated with this Oracle Clusterware home. Also, delete all of the Start menu items associated with this Oracle Clusterware home. Delete the central inventory and the Oracle files (not including the CRS home) under C:\WINDOWS\system32\drivers.

  3. Perform step 8 from the previous OUI procedure to update the node list.



Footnote Legend

Footnote 1: Run the rootaddNode.sh script from the CRS_home/install/ directory on the node from which you are running OUI.
Footnote 2: Run the root.sh script on the new node from the Oracle Clusterware home to start Oracle Clusterware on the new node.
Footnote 3: Run the orainstRoot.sh script on the new node if OUI prompts you to do so.