Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide
10g Release 2 (10.2) for Microsoft Windows

Part Number B14207-02

1 Introduction to Installing Oracle Clusterware and Oracle Real Application Clusters

This chapter provides an overview of the Oracle Clusterware and Oracle Real Application Clusters (RAC) installation and configuration procedures.

1.1 Oracle Clusterware and Oracle Real Application Clusters Documentation Overview

This section describes the Oracle Clusterware and RAC documentation set.

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide for Microsoft Windows (this document) contains the pre-installation, installation, and post-installation information for Microsoft Windows. Additional information for this release may be available in the Oracle Database 10g README or Release Notes. The platform-specific Oracle Database 10g media contains a copy of this book in both HTML and PDF formats.

The Server Documentation media contains Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes how to administer Oracle Clusterware components, such as the voting disks and the Oracle Cluster Registry (OCR) devices. It also explains how to administer storage and how to use the RAC scalability features to add and delete instances and nodes, and it discusses how to use Recovery Manager (RMAN) and how to perform backup and recovery in RAC.

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes RAC deployment topics such as services, high availability, and workload management. The book describes how the Automatic Workload Repository (AWR) tracks and reports service levels and how you can use service level thresholds and alerts to balance complex workloads in your RAC environment. The book also describes how to make your applications highly available using the Oracle Clusterware.

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide also provides information about how to monitor and tune performance in RAC environments by using Oracle Enterprise Manager and by using information in AWR and Oracle performance views. This book also highlights some application-specific deployment techniques for online transaction processing and data warehousing environments.

1.2 General System Installation Requirements for Oracle Real Application Clusters

Each node that will be part of your Oracle Clusterware and RAC installation must meet the hardware and software requirements described in this section. Part II of this book provides step-by-step tasks that you can follow to prepare your hardware and software to meet these requirements, and you can verify that you have met them with the Cluster Verification Utility.

Before using this manual, however, you should read the Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide to inform yourself about concepts such as services, setting up storage, and other information relevant to configuring your cluster.

1.2.1 Cluster Verification Utility

Cluster Verification Utility (CVU) is provided with Oracle Database 10g Release 2 (10.2) with Real Application Clusters. CVU enables you or your hardware vendors to verify, during setup and configuration, that all components required for a successful installation of a RAC database are installed and configured correctly, and it provides ongoing assistance whenever you need to make changes to your RAC cluster.

There are two types of CVU commands:

  • Stage Commands are CVU commands used to test system setup and readiness for successful software installation, database creation, or configuration change steps. These commands are also used to validate successful completion of specific cluster configuration steps.

  • Component Commands are CVU commands used to check individual cluster components, and determine their state.

This guide provides stage and component CVU commands where appropriate to assist you with cluster verification.


See Also:

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for detailed information about the Cluster Verification Utility.

1.2.2 Hardware Requirements for Oracle Database 10g Real Application Clusters

Each node in a cluster requires the following hardware:

  • External shared disks for storing the Oracle Clusterware and database files.

    The disk configuration options available to you are described in Chapter 3, "Storage Pre-Installation Tasks". Review these options before you decide which storage option to use in your RAC environment. However, note that when Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area which must be shared. The database files and recovery files do not necessarily have to be located on the same type of storage.

  • One private internet protocol (IP) address for each node to serve as the private interconnect. The following must be true for each private IP address:

    • It must be separate from the public network

    • It must be accessible on the same network interface on each node

    • It must have a unique address on each node

    The private interconnect is used for inter-node communication by both Oracle Clusterware and RAC. If the private address is available from a network name server (DNS), then you can use that name. Otherwise, the private IP address must be available in each node's C:\WINNT\system32\drivers\etc\hosts file.

    During Oracle Clusterware installation, the information you enter as the private IP address determines which private interconnects are used by RAC database instances. If you define more than one interconnect, then they must all be in an up state, just as if their IP addresses were specified in the initialization parameter CLUSTER_INTERCONNECTS. RAC does not fail over between cluster interconnects; if an interconnect is down, then the instances that use it will not start.

    Oracle recommends that you use a logical IP address that is available across all of the networks, and that you take advantage of any available operating system-based failover mechanism by configuring it according to your third-party vendor's instructions for using their product to support failover.

  • One public IP address for each node, to be used as the Virtual IP (VIP) address for client connections and for connection failover. The name associated with the VIP must be different from the default host name.

    This VIP must be associated with the same interface name on every node that is part of your cluster. In addition, the IP addresses that you use for all of the nodes that are part of a cluster must be from the same subnet. If you have a domain name server (DNS), then register the host names for the VIP with DNS. The Virtual IP address should not be in use at the time of the installation, because this is a Virtual IP address that Oracle manages.

  • One public fixed host name address for each node, typically assigned by the system administrator during operating system installation. If you have a DNS, then register both the fixed IP and the VIP address with the DNS. If you do not have a DNS, then you must make sure that the public IP and VIP addresses for all nodes are in each node's hosts file.
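The address requirements in the preceding list can be sanity-checked before you begin. The following sketch is illustrative only (it is not an Oracle tool, and the node names, addresses, and subnet are hypothetical), but it encodes the rules described above: public IPs and VIPs on one subnet, and private interconnect addresses unique and separate from the public network:

```python
# Illustrative address-plan check; not an Oracle utility.
# All node names and addresses below are hypothetical.
import ipaddress

nodes = {
    "rac1": {"public": "10.0.0.11", "vip": "10.0.0.21", "private": "192.168.0.11"},
    "rac2": {"public": "10.0.0.12", "vip": "10.0.0.22", "private": "192.168.0.12"},
}

def check_addresses(nodes, public_subnet="10.0.0.0/24"):
    """Return a list of problems found in the planned RAC addresses."""
    problems = []
    subnet = ipaddress.ip_network(public_subnet)
    seen_private = set()
    for name, addrs in nodes.items():
        # The public IP and the VIP must be on the same (public) subnet.
        for kind in ("public", "vip"):
            if ipaddress.ip_address(addrs[kind]) not in subnet:
                problems.append(f"{name}: {kind} address not in {public_subnet}")
        # The private address must be separate from the public network
        # and unique on each node.
        if ipaddress.ip_address(addrs["private"]) in subnet:
            problems.append(f"{name}: private address is on the public network")
        if addrs["private"] in seen_private:
            problems.append(f"{name}: duplicate private address")
        seen_private.add(addrs["private"])
    return problems

print(check_addresses(nodes))  # an empty list means the plan is consistent
```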


Note:

In addition to these requirements, Oracle recommends the following:
  • While installing and using Real Application Clusters software, keep the system clocks on all of your cluster nodes synchronized as closely as possible.

  • Use redundant switches as a standard configuration for all cluster sizes.


1.2.3 Software Requirements for Oracle Database 10g Real Application Clusters

Each node in a cluster requires a supported interconnect software protocol to support Cache Fusion, and to support Oracle Clusterware polling. Your interconnect must be certified by Oracle for your platform. You should also have a Web browser, both to enable Oracle Enterprise Manager, and to view online documentation.

RAC databases on the same cluster must all be 64-bit or all 32-bit. A mix of 32-bit RAC databases and 64-bit RAC databases on the same cluster is not supported.


See Also:

Oracle Database Platform Guide for Microsoft Windows for additional information about the OSDBA and OSOPER groups, and the SYSDBA and SYSOPER privileges.

1.3 Cluster Setup and Pre-Installation Configuration Tasks for Real Application Clusters

Before installing RAC, perform the following procedures:

  1. Ensure that you have a certified combination of operating system and Oracle software version by referring to the OracleMetaLink certification information, which is located at the following Web site:

    http://metalink.oracle.com
    
    

    Click Certify & Availability, and select 1.View Certifications by Product.


    Note:

    The layout of the OracleMetaLink site and the site's certification policies are subject to change.

  2. Configure a high-speed interconnect that uses a private network. Some platforms support automatic failover to an additional interconnect.

  3. Determine the storage option for your system and configure the shared disk. Oracle recommends that you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a cluster file system. If you use ASM or a cluster file system, then you can also take advantage of OMF and other Oracle Database 10g storage features. If you use RAC on Oracle Database 10g Standard Edition, then you must use ASM.

    If you intend to use multiple voting disks, then you need at least three voting disks to provide sufficient voting disk redundancy, and you should ensure that each voting disk is located on physically independent storage. When you start the Oracle Universal Installer (OUI) to install Oracle Clusterware, you are asked to provide the paths for each voting disk you want to configure: one disk, if you have existing redundancy support for the voting disk, or three disks to provide redundant voting disks managed by Oracle.

    In addition, if you select multiple voting disks managed by Oracle, then you should ensure that all voting disks are located on a secure network protected from external security threats, and you should ensure that all voting disks are on regularly maintained systems. If a voting disk fails, then you need to fix the physical hardware and bring it back online. The Cluster Synchronization Services (CSS) component of Oracle Clusterware continues to use the other voting disks, and automatically makes use of the restored drive when it is brought online again.


    Note:

    If you use ASM, then Oracle recommends that you install ASM in a separate home from the Oracle Clusterware home and the Oracle home. You should particularly follow this recommendation if the ASM instance is to manage storage for more than one RAC database. Following this recommendation reduces downtime when upgrading or de-installing different versions of the software.

  4. Install the operating system patches that are listed in the pre-installation chapter in this book in Part II.

  5. Use the Cluster Verification Utility (CVU) to help you to verify that your system meets requirements for installing the Oracle Database with RAC.
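The voting disk guidance in step 3 follows from majority arithmetic: Oracle Clusterware requires a strict majority of the configured voting disks to be online, so three disks are the minimum that tolerates the loss of one. A small illustrative calculation (not an Oracle utility):

```python
# Majority arithmetic behind the voting disk recommendation (illustrative).
def tolerable_voting_disk_failures(total_disks):
    """Number of voting disks that can fail while a strict majority
    of the configured disks remains online."""
    majority = total_disks // 2 + 1
    return total_disks - majority

for n in (1, 3, 5):
    print(n, "voting disk(s):", tolerable_voting_disk_failures(n), "failure(s) tolerated")
```

With a single voting disk, any redundancy must come from the storage itself, which is why OUI asks for either one externally protected disk or three Oracle-managed disks.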

1.4 Pre-Installation, Installation, and Post-Installation Overview

The following describes the installation procedures that are covered in Part II and Part III of this book.

1.4.1 Pre-Installation Overview for Oracle Clusterware and Oracle Real Application Clusters

The pre-installation procedures in Part II explain how to verify user equivalence, how to perform network connectivity tests, and how to set directory and file permissions. Complete all of the pre-installation procedures and verify that your system meets all of the pre-installation requirements before proceeding to the installation phase.

1.4.2 Installation Overview for Oracle Clusterware and Oracle Real Application Clusters

Oracle Database 10g Real Application Clusters installation is a two-phase installation. In phase one, use the Oracle Universal Installer (OUI) to install Oracle Clusterware, as described in Chapter 4, "Installing Oracle Clusterware on Windows-Based Systems". The Oracle Clusterware installation starts the Oracle Clusterware processes in preparation for installing Oracle Database 10g. In phase two, use OUI to install the Oracle Database software, either for single-instance databases (refer to the Microsoft Windows installation guides) or for RAC databases, as described in Chapter 5, "Installing Oracle Database 10g with Real Application Clusters". Note that the Oracle home that you use in phase one is a home for the Oracle Clusterware software, and it must be different from the Oracle home that you use in phase two.

If OUI detects a previous version of the Oracle Database, then OUI starts the Database Upgrade Assistant (DBUA) to upgrade your database to Oracle Database 10g Release 2 (10.2). In addition, the DBUA displays a Service Configuration page for configuring services in your RAC database.


See Also:

Oracle Database Upgrade Guide for additional information about preparing for upgrades

After the database software installation completes, OUI starts the Oracle assistants, such as the Database Configuration Assistant (DBCA), to configure your environment and create your database. For a RAC database, you can later use the DBCA Instance Management feature to add or modify services and instances, as described in Chapter 6, "Creating RAC Databases with the Database Configuration Assistant".

1.4.3 Post-Installation Overview for Oracle Database 10g Real Application Clusters

After you create your database, download and install the most recent patch sets for your Oracle Database 10g version as described in the single-instance installation manual or in Chapter 7, "Real Application Clusters Post-Installation Procedures". If you are using other Oracle products with your RAC database, then you must also configure them.

You must also perform several post-installation configuration tasks to use certain Oracle Database 10g products such as the Sample Schema, Oracle Net Services, or Oracle Messaging Gateway. You must also configure Oracle pre-compilers for your operating system and if desired, configure Oracle Advanced Security.

Use the Companion media to install additional Oracle Database 10g software that may improve performance or extend database capabilities, for example, Oracle JVM, Oracle interMedia or Oracle Text.


See Also:

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about using the RAC scalability features to add and delete nodes and instances in RAC databases

1.5 The Oracle Universal Installer and Real Application Clusters

The Oracle Universal Installer (OUI) facilitates the installation of Oracle Clusterware and Oracle Database 10g software. In most cases, you use the graphical user interface (GUI) provided by OUI to install the software. However, you can also use OUI to complete non-interactive (or "silent") installations, without using the GUI. See Appendix B for information about non-interactive installations.

The Oracle Inventory maintains records of Oracle software versions and patches. Each installation has a central inventory, in which the Oracle home is registered. Oracle software installations also have a local inventory directory, whose path is recorded in the central inventory along with the Oracle home. The local inventory directory for each Oracle software installation contains a list of the components and applied interim patches associated with that software. Because your Oracle software installation can be corrupted by faulty inventory information, OUI must perform all read and write operations on Oracle inventories. The Oracle Inventory is installed in the path systemdrive:\program files\oracle.

When you install Oracle Clusterware or RAC, OUI copies the Oracle software onto the node from which you are running it. If your Oracle home is not on a cluster file system, then OUI propagates the software onto the other nodes that you have selected to be part of your OUI installation session. The Oracle Inventory maintains a list of each node that is a member of the RAC database, and lists the paths to each node's Oracle home. This is used to maintain patches and updates for each member node of the RAC database.

If you create your RAC database using OUI, or if you create it later using DBCA, then Oracle Enterprise Manager Database Control is configured for your cluster database. Database Control can manage your cluster database and, for a RAC database, all of its instances.

You can also configure Enterprise Manager Grid Control to manage multiple databases and application servers from a single console. To manage RAC databases in Grid Control, you must install a Grid Control agent on each node of your cluster. The Agent installation is designed to recognize a cluster environment and install across all cluster nodes; you need to perform the installation on only one of the cluster nodes to install the Grid Control agent on all of them.

When OUI installs the Oracle software, Oracle recommends that you select a preconfigured database, or use the Database Configuration Assistant (DBCA) interactively to create your cluster database. You can also manually create your database as described in procedures posted on the Oracle Technology Network (OTN), which is at the following URL:

http://www.oracle.com/technology/index.html

Oracle recommends that you use Automatic Storage Management (ASM). If you are not using ASM, or if you are not using a cluster file system, then configure shared raw devices before you create your database.


See Also:

  • Oracle Universal Installer and OPatch User's Guide for more details about OUI

  • Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for information about using Enterprise Manager to administer RAC environments

  • The Grid Technology Center on the Oracle Technology Network (OTN), which is available at the following URL:

    http://www.oracle.com/technology/tech/index.html


1.6 Storage Considerations for Installing Oracle Database 10g Real Application Clusters

This section discusses storage configuration options that you should consider before installing Oracle Database 10g Release 2 (10.2) with Real Application Clusters.

1.6.1 Overview of Automatic Storage Management

Oracle recommends using Automatic Storage Management (ASM) or a cluster file system with Oracle Managed Files (OMF) for database storage. This section provides an overview of ASM.

Note that RAC installations using Oracle Database Standard Edition must use ASM for database file storage.

You can use ASM to simplify the administration of Oracle database files. Instead of managing potentially thousands of database files, with ASM you need to manage only a small number of disk groups. A disk group is a set of disk devices that ASM manages as a single logical unit. You can define a particular disk group as the default disk group for a database, and Oracle automatically allocates storage for, creates, and deletes the files associated with the appropriate database object. When administering the database, you need only refer to database objects by name, rather than by file name.

When using ASM with a single Oracle home for database instances on a node, the ASM instance can run from that same home. If you are using ASM with Oracle database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. Following this recommendation prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home.

Benefits of Automatic Storage Management

ASM provides many of the same benefits as storage technologies such as a redundant array of independent disks (RAID) or logical volume managers (LVMs). Like these technologies, ASM enables you to create a single disk group from a collection of individual disk devices. It balances input and output (I/O) loads to the disk group across all of the devices in the disk group. It also implements striping and mirroring to improve I/O performance and data reliability.

However, unlike RAID or LVMs, ASM implements striping and mirroring at the file level. This implementation enables you to specify different storage attributes for individual files in the same disk group.

Disk Groups and Failure Groups

A disk group can include up to 10,000 disk devices. Each disk device can be an individual physical disk, a multiple disk device such as a RAID storage array or logical volume, or even a partition on a physical disk. However, in most cases, disk groups consist of one or more individual physical disks. To enable ASM to balance I/O and storage appropriately within the disk group, all devices in the disk group should have similar, if not identical, storage capacity and performance.


Note:

Do not assign more than one partition on a single physical disk to the same disk group. ASM expects each disk group device to be on a separate physical disk.

Although you can specify a logical volume as a device in an ASM disk group, Oracle does not recommend doing so. Because logical volume managers can hide the physical disk architecture, ASM may not operate effectively when logical volumes are specified as disk group devices.
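The similar-capacity guideline above can be checked mechanically. The function below is an illustrative sketch (not an Oracle utility; the device names and sizes are hypothetical) that flags candidate devices whose capacity strays too far from the group median:

```python
# Illustrative check that candidate disk group devices have similar
# capacity; not an Oracle utility.
import statistics

def dissimilar_devices(sizes_gb, tolerance=0.10):
    """Return device names more than `tolerance` (as a fraction) away
    from the median device size."""
    median = statistics.median(sizes_gb.values())
    return [name for name, size in sizes_gb.items()
            if abs(size - median) / median > tolerance]

# disk3 is much larger than the others, so it would unbalance the group:
print(dissimilar_devices({"disk1": 100, "disk2": 98, "disk3": 250}))
```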


When you add a device to a disk group, you can specify a failure group for that device. Failure groups define ASM disks that share a common potential failure mechanism. An example of a failure group is a set of SCSI disks sharing the same SCSI controller. Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if two-way mirroring is specified for a file, ASM automatically stores redundant copies of file extents in separate failure groups.
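The failure-group behavior described above can be pictured with a toy placement function: the mirrored copy of an extent must land on a disk in a different failure group from the primary copy. This is an illustrative sketch only (real ASM extent allocation is far more sophisticated, and the disk and controller names are hypothetical):

```python
# Toy mirror placement that honors failure groups; illustrative only.
disks = [
    {"name": "disk1", "failgroup": "controller1"},
    {"name": "disk2", "failgroup": "controller1"},
    {"name": "disk3", "failgroup": "controller2"},
    {"name": "disk4", "failgroup": "controller2"},
]

def mirror_candidates(primary, disks):
    """Disks eligible to hold the mirrored copy of an extent whose
    primary copy is on `primary`: any disk in a different failure group."""
    fg = next(d["failgroup"] for d in disks if d["name"] == primary)
    return [d["name"] for d in disks if d["failgroup"] != fg]

# disk2 shares a controller with disk1, so it is never a mirror candidate:
print(mirror_candidates("disk1", disks))  # ['disk3', 'disk4']
```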

Redundancy Levels

ASM provides three levels of mirroring, called redundancy levels, that you can specify when creating a disk group. The redundancy levels are:

  • External redundancy

    In disk groups created with external redundancy, the contents of the disk group are not mirrored by ASM. You might choose this redundancy level when:

    • The disk group contains devices, such as RAID devices, that provide their own data protection

    • Your use of the database does not require uninterrupted access to data, for example, in a development environment where you have a suitable back-up strategy

  • Normal redundancy

    In disk groups created with normal redundancy, the contents of the disk group are two-way mirrored by default, except the control file, which is three-way mirrored. However, you can choose to create certain files that are not mirrored or that are three-way mirrored in a disk group with normal redundancy. To create a disk group with normal redundancy, you must specify at least two failure groups (a minimum of two devices).

    The effective disk space of a disk group that uses normal redundancy is half the total disk space of all of its devices.

  • High redundancy

    In disk groups created with high redundancy, the contents of the disk group are three-way mirrored. To create a disk group with high redundancy, you must specify at least three failure groups (a minimum of three devices).

    The effective disk space of a disk group that uses high redundancy is one-third of the total disk space of all of its devices.

ASM and Installation Types

The type and number of disk groups that you can create when installing Oracle software depends on the type of database you choose to create during the installation, as follows:

  • Preconfigured database

    If you choose to create the default preconfigured database that uses ASM, then OUI prompts you for the disk device names it will use to create a disk group with the default name of DATA.

  • Advanced database

    If you choose to create an advanced database that uses ASM, then you can create one or more disk groups. These disk groups can use one or more devices. For each disk group, you can specify the redundancy level that suits your requirements.

The following table lists the total disk space required in all disk group devices for a typical preconfigured database, depending on the redundancy level you choose to use for the disk group:

Redundancy Level    Total Disk Space Required
External            1 GB
Normal              2 GB (on a minimum of two devices)
High                3 GB (on a minimum of three devices)
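The figures in the table follow from the mirroring factor of each redundancy level: usable space is raw space divided by the number of copies, so a 1 GB preconfigured database needs 1, 2, or 3 GB of raw disk. An illustrative calculation (not an Oracle utility):

```python
# Effective ASM disk group capacity by redundancy level; illustrative.
MIRROR_COPIES = {"external": 1, "normal": 2, "high": 3}

def effective_gb(raw_gb, redundancy):
    """Usable space in a disk group with the given total raw space."""
    return raw_gb / MIRROR_COPIES[redundancy]

def raw_needed_gb(database_gb, redundancy):
    """Raw disk space needed to store `database_gb` of database files."""
    return database_gb * MIRROR_COPIES[redundancy]

for level in ("external", "normal", "high"):
    print(level, raw_needed_gb(1, level), "GB required for a 1 GB database")
```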

You can also run OUI to install only ASM, without the database and RAC software.

1.6.2 Shared Storage for Database Recovery Area

When you configure a database recovery area in a RAC environment, the database recovery area must be on shared storage. When the Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area that must be shared.

If the database files are stored on a cluster file system, then the recovery area can also be shared through the cluster file system.

If the database files are stored on an Automatic Storage Management (ASM) disk group, then the recovery area can also be shared through ASM.

If the database files are stored on raw devices, then you must use either a cluster file system or ASM for the recovery area.


Note:

ASM disk groups are always valid recovery areas, as are cluster file systems. Recovery area files do not have to be in the same location where datafiles are stored. For instance, you can store datafiles on raw devices, but use ASM for the recovery area.
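The placement rules in this section reduce to a simple invariant: whatever holds the datafiles, the shared recovery area must be on ASM or a cluster file system, never on raw devices. An illustrative summary (the storage labels are hypothetical shorthand, not Oracle syntax):

```python
# Illustrative summary of the recovery-area placement rules above.
# "asm", "cfs", and "raw" are shorthand labels, not Oracle syntax.
VALID_RECOVERY_STORAGE = {"asm", "cfs"}  # raw devices are never valid

def recovery_area_choices(datafile_storage):
    """Shared storage types usable for the recovery area, given where
    the datafiles live. The answer does not depend on the datafile
    location: even raw-device datafiles need an ASM or cluster file
    system recovery area."""
    if datafile_storage not in {"asm", "cfs", "raw"}:
        raise ValueError("unknown storage type: " + datafile_storage)
    return VALID_RECOVERY_STORAGE

print(sorted(recovery_area_choices("raw")))  # ['asm', 'cfs']
```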

1.7 Additional Considerations for Using Oracle Database 10g Features in RAC

Oracle recommends that you use Oracle Database 10g features to simplify RAC database management.

1.8 Oracle Database 10g and Real Application Clusters Components

Oracle Database 10g provides single-instance database software and the additional components required to operate RAC databases. Some of the RAC-specific components are described in the following sections.

1.8.1 Oracle Clusterware

You must provide OUI with the names of the nodes on which you want to install Oracle Clusterware. The Oracle Clusterware home can be either shared by all nodes, or private to each node, depending on your responses when you run OUI. The home that you select for Oracle Clusterware must be different from the RAC-enabled Oracle home.

When third-party vendor clusterware is present, Oracle Clusterware may interact with the third-party vendor clusterware. For Oracle Database 10g on Windows, Oracle Clusterware coexists with but does not interact with previous Oracle clusterware versions.


Note:

The Oracle Database cluster manager on database versions previous to 10g Release 1 was referred to as "Cluster Manager." In Oracle Database 10g, the cluster manager role is performed by Cluster Synchronization Services (CSS), a component of Oracle Clusterware, on all platforms. The Cluster Synchronization Service Daemon (OCSSD) performs this function.

1.8.2 The Installed Real Application Clusters Components

All instances in RAC environments share the control file, server parameter file, and all datafiles, and each instance also has its own set of redo log files. These files reside on a shared cluster file system or on shared disks, where they can be accessed by all of the cluster database instances. During failures, shared access to the redo log files enables surviving instances to perform recovery.

1.9 Oracle Database 10g Real Application Clusters Version Compatibility

You can install and operate different versions of Oracle cluster database software on the same computer.

1.10 Cloning Oracle Clusterware and RAC in Grid Environments

This section summarizes the procedure for deploying RAC in grid environments with large numbers of nodes by using cloned Oracle Clusterware and RAC images.


See Also:

For detailed information about cloning RAC and Oracle Clusterware images, refer to the following documents:

Cloning, and adding and deleting nodes:

Oracle Universal Installer and OPatch User's Guide

Additional information about adding and deleting nodes:

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide



1.10.1 Cloning Oracle Clusterware Homes

This section outlines the procedure required to clone an existing Oracle Clusterware home from one node (the source node) to one or more other nodes (the target nodes). The procedure consists of the following tasks:

  1. Ensure that the Oracle Clusterware software is installed successfully on the source node. You can use the CVU for this task.

  2. As a Windows administrative user, create a zip file of the Oracle Clusterware home directory, selecting the "Save full path info" option.

  3. On a selected target node, create an Oracle Clusterware home directory, and copy the Oracle Clusterware zip file from the source node to the target node's Oracle Clusterware home.

  4. As a Windows administrative user, extract the zip file contents, selecting the "Use folder names" option.

  5. Repeat steps 3 and 4 on each of the other target nodes, unless the Oracle Clusterware home is on a shared storage device.

  6. On each of the target nodes, run OUI in clone mode as described in Oracle Universal Installer and OPatch User's Guide.

  7. Complete the post-cloning installation instructions as described in Oracle Universal Installer and OPatch User's Guide.
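Steps 2 through 4 above (archive the home preserving full path information, copy the archive, and extract it re-creating the folder names) can be sketched in scriptable form. The sketch below is illustrative only; it uses Python's zipfile module rather than a Windows zip utility, the paths on a real cluster would be your actual Oracle Clusterware home directories, and real cloning still requires the OUI clone-mode and post-cloning steps described above:

```python
# Illustrative sketch of cloning steps 2-4: archive a home with
# relative paths preserved, then extract it into a target home.
# Not an Oracle tool.
import os
import zipfile

def zip_home(home_dir, archive):
    """Archive home_dir, storing paths relative to the home
    (the "Save full path info" behavior)."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(home_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, home_dir))

def unzip_home(archive, target_home):
    """Extract the archive into the target home, re-creating the
    folder structure (the "Use folder names" behavior)."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target_home)
```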

1.10.2 Cloning Real Application Clusters Homes

Complete the following tasks to clone a RAC database image on multiple nodes:

  1. Ensure that the Oracle Database with RAC software is installed successfully on the source node.

  2. Create a zip file of the Oracle home directory, selecting the "Save full path info" option.

  3. On a selected target node, create an Oracle home directory, and copy the Oracle home zip file from the source node to the target node's Oracle home.

  4. Extract the zip file contents, selecting the "Use folder names" option.

  5. Repeat steps 3 and 4 on each of the other target nodes, unless the Oracle home is on a shared storage device.

  6. On each of the target nodes, run OUI in clone mode as described in Oracle Universal Installer and OPatch User's Guide.

  7. Complete the post-cloning installation instructions as described in Oracle Universal Installer and OPatch User's Guide.

  8. Run the Net Configuration Assistant (NetCA) on a local node of the cluster and, when prompted, provide a list of all of the nodes that are part of the cluster.

  9. Run the configuration assistant DBCA to create the database.