Oracle® Database 2 Day + Real Application Clusters Guide
11g Release 1 (11.1)

Part Number B28252-01

9 Adding and Deleting Nodes and Instances

This chapter describes how to add nodes and instances to Oracle Real Application Clusters (Oracle RAC) environments, and how to delete an instance from a cluster database. You can use these methods when configuring a new Oracle RAC cluster, or when scaling up an existing Oracle RAC cluster.

This chapter includes the following sections:

    About Preparing Access to the New Node
    Extending the Oracle Clusterware Home Directory
    Extending the Automatic Storage Management Home Directory
    Extending the Oracle RAC Home Directory
    Adding an Instance to the Cluster Database
    Deleting an Instance From the Cluster Database

Note:

For this chapter, it is very important that you perform each step in the order shown.

About Preparing Access to the New Node

To prepare the new node prior to installing the Oracle software, see Chapter 2, "Preparing Your Cluster".

It is critical that you follow the configuration steps in order for the following procedures to work. These steps include, but are not limited to, preparing the operating system on the new node, configuring the public and private network interfaces and SSH user equivalence for the new node, and configuring the new node's access to the shared storage used by the cluster.
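
For example, one quick check (shown here for illustration; your node names and operating system user may differ) is to confirm that the oracle user on an existing node can run a command on the new node over SSH without being prompted for a password:

[docrac1:oracle]$ ssh docrac3 date

If user equivalence is configured correctly, this command returns the date on docrac3 without prompting for a password or passphrase.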

Extending the Oracle Clusterware Home Directory

Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to add a CRS home to the node being added to your Oracle RAC cluster. This section assumes that you are adding a node named docrac3 and that you have already successfully installed Oracle Clusterware on docrac1 in a nonshared home, where CRS_home represents the successfully installed Oracle Clusterware home. Adding a new node to an Oracle RAC cluster is sometimes referred to as cloning.
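
Before you start OUI, you can optionally use the Cluster Verification Utility (CVU) to confirm that the new node meets the prerequisites for an Oracle Clusterware installation. The following is a sketch of such a check, run from the existing node docrac1 and assuming that /crs is the CRS home, as in the examples that follow:

[docrac1:oracle]$ /crs/bin/cluvfy stage -pre crsinst -n docrac3 -verbose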

To extend the Oracle Clusterware installation to include the new node:

  1. Verify that the ORACLE_HOME environment variable on docrac1 points to the successfully installed CRS home on that node.

  2. Go to CRS_home/oui/bin and run the addNode.sh script.

    cd /crs/oui/bin
    ./addNode.sh
    

    OUI starts and first displays the Welcome window.

  3. Click Next.

    The Specify Cluster Nodes to Add to Installation window appears.

  4. Select the node or nodes that you want to add, for example, docrac3. Make sure the public, private and VIP names are configured correctly for the node you are adding. Click Next.

  5. Verify the entries that OUI displays on the Summary window and click Next.

    The Cluster Node Addition Progress window appears. During the installation process, you will be prompted to run scripts to complete the configuration.

  6. Run the rootaddNode.sh script from the CRS_home/install/ directory on docrac1 as the root user when prompted to do so. For example:

    [docrac1:oracle]$ su root
    [docrac1:root]# cd /crs/install
    [docrac1:root]# ./rootaddNode.sh
    

    This script adds the node applications of the new node to the Oracle Cluster Registry (OCR) configuration.

  7. Run the orainstRoot.sh script on the node docrac3 if OUI prompts you to do so. When finished, click OK in the OUI window to continue with the installation.

    Another window appears, prompting you to run the root.sh script.

  8. Run the CRS_home/root.sh script as the root user on the node docrac3 to start Oracle Clusterware on the new node.

    [docrac3:oracle]$ su root
    [docrac3:root]# cd /crs
    [docrac3:root]# ./root.sh
    
  9. Return to the OUI window after the script runs successfully, then click OK.

    OUI displays the End of Installation window.

  10. Exit the installer.

  11. Obtain the Oracle Notification Services (ONS) port identifier used by the new node, which you need to know for the next step, by viewing the ons.config file in the CRS_home/opmn/conf directory on the docrac1 node, as shown in the following example:

    [docrac1:oracle]$ cd /crs/opmn/conf
    [docrac1:oracle]$ cat ons.config
    

    After you locate the ONS port identifier for the new node, you must make sure that the ONS on docrac1 can communicate with the ONS on the new node, docrac3. (A sample ons.config file is shown after this procedure.)

  12. Add the new node's ONS configuration information to the shared OCR. From the CRS_home/bin directory on the node docrac1, run the ONS configuration utility as shown in the following example, where remote_port is the port identifier from Step 11, and docrac3 is the name of the node that you are adding:

    [docrac1:oracle]$ ./racgons add_config docrac3:remote_port
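
The ons.config file contains entries similar to the following. This is an illustrative sample only, and the port values on your system will differ:

localport=6113
remoteport=6200
loglevel=3
useocr=on

The remoteport value is the port identifier that you supply to the racgons utility. Using the sample values shown here, the command in Step 12 would be ./racgons add_config docrac3:6200.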
    

You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the oracle user on the newly configured node, docrac3:

[docrac3:oracle]$ /opt/oracle/crs/bin/cluvfy stage -post crsinst -n docrac3 -verbose
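
You can also confirm that the Oracle Clusterware daemons are running on the new node. For example, assuming the same Oracle Clusterware home path as in the previous command, run the following as the root user on docrac3:

[docrac3:root]# /opt/oracle/crs/bin/crsctl check crs

The output should indicate that Cluster Synchronization Services, Cluster Ready Services, and Event Manager are all healthy.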

Extending the Automatic Storage Management Home Directory

To extend an existing Oracle RAC database to a new node, you must configure the shared storage for the new database instances that will be created on the new node. You must configure access to the same shared storage that is already used by the existing database instances in the cluster. For example, the sales cluster database in this guide uses Automatic Storage Management (ASM) for the database shared storage, so you must configure ASM on the node being added to the cluster.

Because you installed ASM in its own home directory, you must configure an ASM home on the new node using OUI. The procedure for adding an ASM home to the new node is very similar to the procedure you just completed for extending Oracle Clusterware to the new node.

Note:

If the ASM home directory is the same as the Oracle home directory in your installation, then you do not need to complete the steps in this section.

To extend the ASM installation to include the new node:

  1. Ensure that you have successfully installed the ASM software on at least one node in your cluster environment. In the following steps, ASM_home refers to the location of the successfully installed ASM software.

  2. Go to the ASM_home/oui/bin directory on docrac1 and run the addNode.sh script. (Example commands appear after this procedure.)

  3. When OUI displays the Node Selection window, select the node to be added (docrac3), and then click Next.

  4. Verify the entries that OUI displays on the Summary window, and then click Next.

  5. Run the root.sh script on the new node, docrac3, from the ASM home directory on that node when OUI prompts you to do so.

You now have a copy of the ASM software on the new node.
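
For reference, the commands for Step 2 look similar to the following. The ASM home path shown here (/opt/oracle/asm) is a hypothetical example; substitute the actual ASM_home on your system:

[docrac1:oracle]$ cd /opt/oracle/asm/oui/bin
[docrac1:oracle]$ ./addNode.sh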

Extending the Oracle RAC Home Directory

Now that you have extended the CRS home and ASM home to the new node, you must extend the Oracle home on docrac1 to docrac3. The following steps assume that you have already completed the previous tasks described in this section, and that docrac3 is already a member node of the cluster to which docrac1 belongs.

The procedure for adding an Oracle home to the new node is very similar to the procedure you just completed for extending ASM to the new node.

To extend the Oracle RAC installation to include the new node:

  1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, replace Oracle_home with the location of your installed Oracle home directory.

  2. Go to the Oracle_home/oui/bin directory on docrac1 and run the addNode.sh script. (Example commands appear after this procedure.)

  3. When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added (docrac3), and then click Next.

  4. Verify the entries that OUI displays in the Cluster Node Addition Summary window, and then click Next.

    The Cluster Node Addition Progress window appears.

  5. When prompted to do so, run the root.sh script as the root user on the new node, docrac3, from the Oracle home directory on that node.

  6. Return to the OUI window and click OK. The End of Installation window appears.

  7. Exit the installer.

After completing these steps, you should have an installed Oracle home on the new node.
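
For reference, the commands for Step 2 look similar to the following. The Oracle home path shown here (/opt/oracle/11gR1/db_1) is a hypothetical example; substitute the actual Oracle_home on your system:

[docrac1:oracle]$ cd /opt/oracle/11gR1/db_1/oui/bin
[docrac1:oracle]$ ./addNode.sh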

Adding an Instance to the Cluster Database

You can use Enterprise Manager to add an instance to your cluster database. Before you do so, you must have configured the new node to be a part of the cluster and installed the software on the new node.

To add an instance to the cluster database:

  1. From the Cluster Database Home page, click Server.

  2. Under the heading Change Database, click Add Instance.

    The Add Instance: Cluster Credentials page appears.

  3. Enter the host credentials and ASM credentials, then click Next.

    The Add Instance: Host page appears.

  4. Select the node on which you want to create the new instance, verify that the new instance name is correct, and then click Next.

    After the selected host has been validated, the Add Instance: Review page appears.

  5. Review the information, then click Submit Job to proceed.

    A confirmation page appears.

  6. Click View Job to check on the status of the submitted job.

    The Job Run detail page appears.

  7. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.

    If the job shows a status of Failed, you can click the name of the step that failed to view the reason for the failure.

  8. Click the Database tab to return to the Cluster Database Home page.

    The number of instances available in the cluster database is increased by one.
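
After the job succeeds, you can optionally confirm the new instance from the command line by using the srvctl utility. The following is an illustrative example that assumes the cluster database is named sales, as elsewhere in this guide, and that the instance and node names match those used in this chapter; your output will differ accordingly:

[docrac1:oracle]$ srvctl status database -d sales
Instance sales1 is running on node docrac1
Instance sales2 is running on node docrac2
Instance sales3 is running on node docrac3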

Deleting an Instance From the Cluster Database

To delete an instance from the cluster database:

  1. From the Cluster Database Home page, click Server.

  2. On the Server subpage, under the heading Change Database, click Delete Instance.

    The Delete Instance: Cluster Credentials page appears.

  3. Enter your cluster credentials and ASM credentials, then click Next.

    The Delete Instance: Database Instance page appears.

  4. Select the instance you want to delete, then click Next.

    The Delete Instance: Review page appears.

  5. Review the information, and if correct, click Submit Job to continue. Otherwise, click Back and correct the information.

    A Confirmation page appears.

  6. Click View Job to view the status of the node deletion job.

    A Job Run detail page appears.

  7. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.

    If the job shows a status of Failed, you can click the name of the step that failed to view the reason for the failure.

  8. Click the Database tab to return to the Cluster Database Home page.

    The number of instances available in the cluster database is reduced by one.
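
As a final check, you can optionally verify from the command line that the deleted instance no longer appears in the database configuration. The following illustrative command assumes the cluster database is named sales, as elsewhere in this guide:

[docrac1:oracle]$ srvctl config database -d sales

The output lists the nodes, instances, and Oracle homes that remain in the cluster database configuration; the deleted instance should no longer appear.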