Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide, 10g Release 2 (10.2) for Linux, Part Number B14203-02
This chapter provides an overview of Oracle Clusterware and Oracle Real Application Clusters (RAC) installation and configuration procedures. It includes the following topics:
Oracle Clusterware and Oracle Real Application Clusters Documentation Overview
Configuration Tasks for Oracle Clusterware and Oracle Real Application Clusters
Storage Considerations for Installing Oracle Database 10g Real Application Clusters
Additional Considerations for Using Oracle Database 10g Features in RAC
Oracle Database 10g and Real Application Clusters Components
Oracle Database 10g Real Application Clusters Release Compatibility
This section describes the Oracle Clusterware and RAC documentation set.
This book contains the information required to complete pre-installation tasks, to complete installation, and to complete post-installation tasks for Linux. Additional information for this release may be available in the Oracle Database 10g README or Release Notes. The platform-specific Oracle Database 10g installation media contains a copy of this book in both HTML and PDF formats.
The Server Documentation directory on the installation media contains Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes how to administer Oracle Clusterware components such as the voting disks and Oracle Cluster Registry (OCR) devices. This book also explains how to administer storage, how to use RAC scalability features to add and delete instances and nodes, how to use Recovery Manager (RMAN), and how to perform backup and recovery in RAC.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide describes RAC deployment topics such as services, high availability, and workload management. The book describes how the Automatic Workload Repository (AWR) tracks and reports service levels, and how you can use service level thresholds and alerts to improve high availability in your RAC environment. The book also describes how to make your applications highly available using Oracle Clusterware.
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide also provides information about how to monitor and tune performance in RAC environments by using Oracle Enterprise Manager, and by using information in AWR and Oracle Database performance views. This book also provides some application-specific deployment techniques for online transaction processing and data warehousing environments.
Each node that you want to make part of your Oracle Clusterware or Oracle Clusterware and RAC installation must meet the hardware and software requirements specified in Part II of this book. You can use the new Cluster Verification Utility to assist you with verification of requirements.
If you are uncertain about concepts related to setting up and configuring a RAC database, then read Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide to inform yourself about concepts such as services, setting up storage, and other information relevant to configuring your cluster.
Cluster Verification Utility (CVU) is provided with Oracle Clusterware and Oracle Database 10g Release 2 (10.2) with Real Application Clusters. CVU enables you or your hardware vendors to verify, during setup and configuration, that all components required for a successful installation of Oracle Clusterware, or of Oracle Clusterware and a RAC database, are installed and configured correctly. It also provides ongoing assistance whenever you need to make changes to your RAC cluster. This guide provides commands for using the CVU to verify completion of its tasks.
There are two types of CVU commands:
Stage Commands are CVU commands used to test system setup and readiness for successful software installation, database creation, or configuration change steps. These commands are also used to validate successful completion of specific cluster configuration steps.
Component Commands are CVU commands used to check individual cluster components, and determine their state.
This guide provides stage and component CVU commands where appropriate to assist you with cluster verification.
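For illustration, a stage command and a component command might look like the following (a sketch, assuming the cluvfy tool is in your path; the node names node1 and node2 are placeholders):

```shell
# Stage command: verify pre-installation readiness for Oracle Clusterware
# on the nodes node1 and node2
$ cluvfy stage -pre crsinst -n node1,node2 -verbose

# Component command: check node connectivity across all cluster nodes
$ cluvfy comp nodecon -n all -verbose
```

The exact options available depend on your CVU release; run cluvfy with no arguments to display the full command syntax.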
See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for detailed information about Cluster Verification Utility |
Oracle Universal Installer (OUI) is a graphical user interface (GUI) tool that assists you with installing and configuring Oracle Database. It can be run with different command options to perform installation pre-checks, specialized installation processes, and other tasks. To see an overview of OUI options, navigate to the oui/bin directory in the Oracle home directory, and enter the following command:
$ ./runInstaller -help
See Also: Oracle Universal Installer and OPatch User's Guide for more detailed information about OUI options |
The path that you must take to upgrade to the new Oracle Database 10g release depends on the release number of your current database. It may not be possible to upgrade directly from your current release of Oracle Database to the latest release. Depending on your current release, you may need to upgrade through one or more intermediate releases to upgrade to the new Oracle Database 10g release. For example, if the current database is running release 8.1.6, then first upgrade to release 8.1.7 using the instructions in Oracle8i Migration for release 8.1.7. The release 8.1.7 database can then be upgraded to the new Oracle Database 10g release.
An Oracle9i database can coexist with Oracle Database 10g Release 2 (10.2). However, if you want separate releases of the database to coexist on the same system, then you must install Oracle Database 10g on a system where Oracle9i is already installed. Do not install Oracle9i after installing Oracle Database 10g.
See Also: Oracle Database Upgrade Guide for more information about upgrading |
Oracle Cluster File System 2 (OCFS2) permits the use of shared Oracle homes. The original version, Oracle Cluster File System (OCFS), does not. Refer to "Identifying Software Requirements" in Chapter 2 to determine which OCFS version is appropriate for your Linux distribution, and to decide how to configure your system storage.
Each node in a cluster requires the following hardware:
External shared disks for storing the Oracle Clusterware (Oracle Cluster Registry and voting disk) files, and database files.
Chapter 3 describes the storage disk configuration options that are available. Review these options before you decide which storage option to use in your RAC environment. However, note that when Database Configuration Assistant (DBCA) configures automatic disk backup, it uses a database recovery area that must be shared.
Note: Oracle Clusterware software can be installed on Oracle Cluster File System 2 (OCFS2). However, Oracle Clusterware software cannot be installed on Oracle Cluster File System (OCFS). Oracle Clusterware software can be installed on network-attached storage (NAS). |
One private internet protocol (IP) address for each node to serve as the private interconnect. The following must be true for each private IP address:
It must be separate from the public network
It must be accessible on the same network interface on each node
It must have a unique address on each node
The private interconnect is used for internode communication by both Oracle Clusterware and RAC. The private IP address must be available in each node's /etc/hosts file.
During Oracle Clusterware installation, the information you enter as the private IP address determines which private interconnects Oracle Clusterware uses for its own communication. They must all be available, and capable of responding to a ping command.
Oracle recommends that you use a logical Internet Protocol (IP) address that is available across all private networks, and that you take advantage of any available operating system-based failover mechanism by configuring it according to your third-party vendor's instructions for using their product to support failover.
One public IP address for each node, to be used as the Virtual IP address for client connections and for connection failover.
During installation this public virtual IP address (VIP) is associated with the same interface name on every node that is part of your cluster. The IP addresses that you use for all of the nodes that are part of a cluster must be from the same subnet. If you have a domain name server (DNS), then register the host names for the VIP with the DNS. The VIP should not be in use at the time of the installation, because this is a VIP that Oracle Clusterware manages.
One public fixed host name address for each node, typically assigned by the system administrator during operating system installation. If you have a domain name server (DNS), then you can register both the fixed IP and the VIP address with the DNS. If you do not have a DNS, then you must make sure that both public IP addresses are in the /etc/hosts file on all cluster nodes, and in the /etc/hosts file of any client system that requires access to the database.
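As an illustration, the /etc/hosts file for a two-node cluster might contain entries like the following (all host names and addresses here are hypothetical; substitute your own):

```
# Public host names (fixed IP addresses)
192.0.2.10   rac1.example.com       rac1
192.0.2.11   rac2.example.com       rac2

# Virtual IP addresses (VIPs), on the same subnet as the public addresses
192.0.2.20   rac1-vip.example.com   rac1-vip
192.0.2.21   rac2-vip.example.com   rac2-vip

# Private interconnect addresses, on a separate network
10.0.0.1     rac1-priv
10.0.0.2     rac2-priv
```

Note that the VIP entries resolve addresses that are not yet in use; Oracle Clusterware brings them online during and after installation.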
Note: In addition to these requirements, Oracle recommends the following:
|
Each node in a cluster requires a supported interconnect software protocol to support Cache Fusion, and to support Oracle Clusterware polling. Your interconnect must be certified by Oracle for your platform. You should also have a Web browser, both to enable Oracle Enterprise Manager, and to view online documentation.
For Oracle Database 10g requirements, Oracle Clusterware provides the same functions as third-party vendor clusterware. Using Oracle Clusterware also reduces installation and support complications. However, you may require third-party vendor clusterware if you use a non-ethernet interconnect, or if you have deployed clusterware-dependent applications on the same cluster where you deploy RAC.
Before installing RAC, perform the following procedures:
Ensure that you have a certified combination of the operating system and an Oracle Database software release by referring to the OracleMetaLink certification information, which is located at the following Web site:
http://metalink.oracle.com
Click Certify & Availability, and select 1. View Certifications by Product.
Note: The layout of the OracleMetaLink site and the site's certification policies are subject to change. |
Configure a high-speed interconnect that uses a private network. Some platforms support automatic failover to an additional interconnect.
Determine the storage option for your system and configure the shared disk. Oracle recommends that you use Automatic Storage Management (ASM) and Oracle Managed Files (OMF), or a cluster file system. If you use ASM or a cluster file system, then you can also take advantage of OMF and other Oracle Database 10g storage features. If you use RAC on Oracle Database 10g Standard Edition, then you must use ASM.
When you start Oracle Universal Installer (OUI) to install Oracle Clusterware, you are asked to provide the paths for the voting disks and for the Oracle Cluster Registry (OCR).
For voting disks: Configure one disk, if you have existing redundancy support for the voting disk. If you intend to use multiple voting disks managed by Oracle Clusterware, then you must have at least three disks to provide sufficient redundancy, and you must ensure that each voting disk is located on physically independent storage.
In addition, if you select multiple voting disks managed by Oracle Clusterware, then you should ensure that all voting disks are located on a secure network protected from external security threats, and you should ensure that all voting disks are on regularly maintained systems. If a voting disk fails, then you need to fix the physical hardware and bring it back online. The Cluster Synchronization Services (CSS) component of Oracle Clusterware continues to use the other voting disks, and automatically makes use of the restored drive when it is brought online again.
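For illustration, after Oracle Clusterware is installed you can inspect and manage voting disks with the crsctl utility (a sketch; the device path shown is a placeholder, and the exact options depend on your release):

```shell
# List the voting disks currently in use
$ crsctl query css votedisk

# Add another voting disk on physically independent storage
# (run as root; the path is a placeholder)
# crsctl add css votedisk /dev/raw/raw3
```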
For OCR: Configure one disk if you have existing redundancy support. If you intend to use OCR mirroring managed by Oracle Clusterware, then you must have two OCR locations, and you must ensure that each OCR is located on physically independent storage.
In addition, if you select mirrored OCRs managed by Oracle Clusterware, then you should ensure that all OCRs are located on a secure network protected from external security threats, and that all OCRs are on regularly maintained systems. If an OCR copy fails or becomes inaccessible, then you can use the ocrconfig tool to replace the OCR.
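If an OCR mirror must be replaced, the sequence might look like the following (a sketch; the device path is a placeholder, and the exact ocrconfig options depend on your release):

```shell
# Check the state of the OCR and its mirror
$ ocrcheck

# Replace a failed OCR mirror with a new location
# (run as root; the path shown is a placeholder)
# ocrconfig -replace ocrmirror /dev/raw/raw4
```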
Install the operating system patch updates that are listed in the pre-installation chapter in this book in Part II.
Use the Cluster Verification Utility (CVU) to help you to verify that your system meets requirements for installing Oracle Database with RAC.
The following describes the installation procedures that are covered in Part II and Part III of this book.
The pre-installation procedures in Part II explain how to verify user equivalence, how to perform network connectivity tests, how to set directory and file permissions, and how to complete other required pre-installation tasks. Complete all pre-installation tasks and verify that your system meets all pre-installation requirements before proceeding to the installation phase.
Oracle Database 10g Real Application Clusters installation is a two-phase installation. In phase one, use Oracle Universal Installer (OUI) to install Oracle Clusterware as described in Chapter 4, "Installing Oracle Clusterware". Note that the Oracle home in phase one is a home for the Oracle Clusterware software, which must be different from the Oracle home that you use in phase two for the installation of the Oracle database software with RAC components. The Oracle Clusterware installation starts the Oracle Clusterware processes in preparation for installing Oracle Database 10g with RAC, as described in Chapter 5, "Installing Oracle Database 10g with Oracle Real Application Clusters". Use OUI in this phase to install the RAC software.
You must install Oracle Clusterware and Oracle Database in separate home directories. If you will use multiple Oracle Database homes with ASM, then you should install a separate Oracle Database home for ASM. You should create the listener in the Oracle Database home.
If OUI detects a previous release of Oracle Clusterware (previously known as Oracle Cluster Ready Services), then you are prompted to select either a rolling upgrade, or a full upgrade.
If OUI detects a previous release of the Oracle database, then OUI provides you with the option to start Database Upgrade Assistant (DBUA) to upgrade your database to Oracle Database 10g Release 2 (10.2). In addition, DBUA displays a Service Configuration page for configuring services in your RAC database.
See Also: Oracle Database Upgrade Guide for additional information about preparing for upgrades |
After the installation completes, OUI starts the Oracle Database assistants, such as Database Configuration Assistant (DBCA), to configure your environment and create your RAC database. You can later use the DBCA Instance Management feature to add or modify services and instances as described in Chapter 6, "Creating Oracle Real Application Clusters Databases with Database Configuration Assistant".
After you create your database, download and install the most recent patch set for your Oracle Database 10g release, as described in Chapter 7, "Oracle Real Application Clusters Post-Installation Procedures". If you are using other Oracle products with your RAC database, then you must also configure them.
You must also perform several post-installation configuration tasks to use certain Oracle Database 10g features.
On the installation media, you can select additional Oracle Database 10g software that may improve performance or extend database capabilities. Examples: Oracle JAccelerator, Oracle interMedia, and Oracle Text.
See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide, and Oracle Universal Installer and OPatch User's Guide for more information about using the RAC scalability features of adding and deleting nodes and instances from RAC databases |
Oracle Universal Installer (OUI) facilitates the installation of Oracle Clusterware and Oracle Database 10g software. In most cases, you use the graphical user interface (GUI) provided by OUI to install the software. However, you can also use OUI to complete non-interactive (or silent) installations, without using the GUI. Refer to Appendix B for information about non-interactive installations.
The Oracle Inventory maintains records of Oracle software releases and patches. Each installation has a central inventory where the Oracle home is registered. Oracle software installations have a local inventory directory, whose path location is recorded in the central inventory Oracle home. The local inventory directory for each Oracle software installation contains a list of components and applied interim patches associated with that software. Because your Oracle software installation can be corrupted by faulty inventory information, OUI must perform all read and write operations on Oracle inventories.
When you install Oracle Clusterware or RAC, OUI copies this Oracle software onto the node from which you are running it. If your Oracle home is not on a shared file system, then OUI propagates the software onto the other nodes that you have selected to be part of your OUI installation session. The Oracle Inventory maintains a list of each node that is a member of the RAC database, and lists the paths to each node's Oracle home. This is used to maintain software patches and updates for each member node of the RAC database.
If you create your RAC database using OUI, or if you create it later using DBCA, then Oracle Enterprise Manager Database Control is configured for your RAC database. Database Control can manage your RAC database, all its instances, and the hosts where instances are configured.
You can also configure Enterprise Manager Grid Control to manage multiple databases and application servers from a single console. To manage RAC databases in Grid Control, you must install a Grid Control agent on each of the nodes of your cluster. The Agent installation is designed to recognize a cluster environment and install across all cluster nodes; you need to perform the installation on only one of the cluster nodes to install Grid Control agent on all cluster nodes.
When OUI installs the Oracle Clusterware or Oracle Database software, Oracle recommends that you select a preconfigured database, or use Database Configuration Assistant (DBCA) interactively to create your RAC database. You can also manually create your database as described in procedures posted on the Oracle Technical Network, which is at the following URL:
http://www.oracle.com/technology/index.html
Oracle recommends that you use Automatic Storage Management (ASM). If you are not using ASM, or if you are not using a cluster file system or an NFS system, then configure shared raw devices before you create your database.
This section discusses storage configuration options that you should consider before installing Oracle Database 10g Release 2 (10.2) with Real Application Clusters.
Oracle recommends using Automatic Storage Management (ASM) or a cluster file system with Oracle Managed Files (OMF) for database storage. This section provides an overview of ASM.
Note that RAC installations using Oracle Database Standard Edition must use ASM for database file storage.
You can use ASM to simplify the administration of Oracle database files. Instead of having to manage potentially thousands of database files, using ASM, you need to manage only a small number of disk groups. A disk group is a set of disk devices that ASM manages as a single logical unit. You can define a particular disk group as the default disk group for a database, and Oracle Database will automatically allocate storage for, create, or delete, the files associated with the appropriate database object. When administering the database, you need to refer to database objects only by name, rather than by file name.
When using ASM with a single Oracle home for database instances on a node, the ASM instance can run from that same home. If you are using ASM with Oracle Database instances from multiple database homes on the same node, then Oracle recommends that you run the ASM instance from an Oracle home that is distinct from the database homes. In addition, the ASM home should be installed on every cluster node. Following this recommendation prevents the accidental removal of ASM instances that are in use by databases from other homes during the de-installation of a database's Oracle home.
Benefits of Automatic Storage Management
ASM provides many of the same benefits as storage technologies such as a redundant array of independent disks (RAID) or a logical volume manager (LVM). Like these technologies, ASM lets you create a single disk group from a collection of individual disk devices. It balances input and output (I/O) loads to the disk group across all of the devices in the disk group. It also implements striping and mirroring to improve I/O performance and data reliability.
However, unlike RAID or LVMs, ASM implements striping and mirroring at the file level. This implementation lets you specify different storage attributes for individual files in the same disk group.
Disk Groups and Failure Groups
A disk group can contain from 1 to 10,000 disk devices. Each disk device can be an individual physical disk, a multiple disk device such as a RAID storage array or logical volume, or even a partition on a physical disk. However, in most cases, disk groups consist of one or more individual physical disks. To enable ASM to balance I/O and storage appropriately within the disk group, all devices in the disk group should have similar, if not identical, storage capacity and performance.
Note: Do not put more than one partition of a single disk into the same disk group. You can put separate partitions of a single disk into separate disk groups. Logical volume managers are not supported on Linux. |
When you add a device to a disk group, you can specify a failure group for that device. Failure groups define ASM disks that share a common potential failure mechanism. An example of a failure group is a set of SCSI disks sharing the same SCSI controller. Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if two-way mirroring is specified for a file, then ASM automatically stores redundant copies of file extents in separate failure groups. Failure groups apply only to normal and high redundancy disk groups. You define the failure groups in a disk group when you create or alter the disk group.
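As a sketch (the disk group name, failure group names, and device paths are all hypothetical), the following SQL*Plus session creates a normal-redundancy disk group with two failure groups, one per SCSI controller, so that mirrored extents never share a controller:

```shell
$ sqlplus / as sysdba
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2    FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  3    FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
```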
Redundancy Levels
ASM provides three levels of mirroring, called redundancy levels, that you can specify when creating a disk group. The redundancy levels are:
External redundancy
In disk groups created with external redundancy, the contents of the disk group are not mirrored by ASM. Choose this redundancy level when:
The disk group contains devices, such as RAID devices, that provide their own data protection
Your use of the database does not require uninterrupted access to data. For example: a development environment where you have a suitable backup strategy
Normal redundancy
In disk groups created with normal redundancy, the contents of the disk group are two-way mirrored by default. However, you can choose to create certain files that are three-way mirrored, or that are not mirrored. To create a disk group with normal redundancy, you must specify at least two failure groups (a minimum of two devices).
The effective disk space of a disk group that uses normal redundancy is half the total disk space of all of its devices.
High redundancy
In disk groups created with high redundancy, the contents of the disk group are three-way mirrored by default. To create a disk group with high redundancy, you must specify at least three failure groups (a minimum of three devices).
The effective disk space of a disk group that uses high redundancy is one-third of the total disk space of all of its devices.
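For example, with six 100 GB devices in a disk group, the effective (usable) space at each redundancy level can be computed as follows (shell arithmetic, for illustration; the disk count and size are hypothetical):

```shell
disks=6; size_gb=100
raw=$((disks * size_gb))          # total raw space: 600 GB
echo "external: $raw GB"          # no ASM mirroring: 600 GB usable
echo "normal:   $((raw / 2)) GB"  # two-way mirror:   300 GB usable
echo "high:     $((raw / 3)) GB"  # three-way mirror: 200 GB usable
```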
See Also: Oracle Database Administrator's Guide for additional information about ASM and redundancy |
ASM and Installation Types
The type and number of disk groups that you can create when installing Oracle Database software depends on the type of database you choose to create during the installation, as follows:
Preconfigured database
If you choose to create the default preconfigured database that uses ASM, then OUI prompts you to specify one or more disk device names and a redundancy level. By default, OUI creates a disk group named DATA, with normal redundancy.
Advanced database
If you choose to create an advanced database that uses ASM, then you can create one or more disk groups. These disk groups can use one or more devices. For each disk group, you can specify the redundancy level that suits your requirements.
Configure Automatic Storage Management
If you choose to create an ASM instance only, then OUI prompts you to create a disk group. If OUI finds a Grid Control service on the system, then OUI prompts you to indicate whether the ASM instance should be managed by Grid Control. The Management Service box lists the available Oracle Management Services.
When you configure a database recovery area in a RAC environment, it must be on shared storage. When Database Configuration Assistant (DBCA) configures automatic disk backup, it uses this shared database recovery area.
If the database files are stored on a cluster file system, then the recovery area can also be shared through the cluster file system.
If the database files are stored on an Automatic Storage Management (ASM) disk group, then the recovery area can also be shared through ASM.
Note: ASM disk groups are always valid recovery areas, as are cluster file systems. Recovery area files do not have to be in the same location where data files are stored. For instance, you can store data files on raw devices, but use ASM for the recovery area. |
Oracle recommends that you use the following Oracle Database 10g features to simplify RAC database management:
Oracle Enterprise Manager—Use Enterprise Manager to administer your entire processing environment, not just the RAC database. Enterprise Manager lets you manage a RAC database with its instance targets, listener targets, host targets, and a cluster target, as well as ASM targets if you are using ASM storage for your database.
Automatic undo management—This feature automatically manages undo processing.
Automatic segment-space management—This feature automatically manages segment freelists and freelist groups.
Locally managed tablespaces—This feature enhances space management performance.
See Also: Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about these features in RAC environments |
Oracle Database 10g provides single-instance database software and the additional components to operate RAC databases. Some of the RAC-specific components include the following:
Oracle Clusterware
A RAC-enabled Oracle home
OUI installs Oracle Clusterware on each node of the cluster. If third-party vendor clusterware is not present, then you must use OUI to enter the nodes on which you want Oracle Clusterware to be installed. The Oracle Clusterware home can either be shared by all nodes, or private to each node, depending on your responses when you run OUI. The home that you select for Oracle Clusterware must be different from the RAC-enabled Oracle home.
When third-party vendor clusterware is present, Oracle Clusterware may interact with the third-party vendor clusterware. For Oracle Database 10g on Linux and Windows, Oracle Clusterware coexists with but does not interact with previous Oracle clusterware releases. In using third-party vendor clusterware, note the following:
Oracle Clusterware can integrate with third-party vendor clusterware for all operating systems except Linux and Windows.
All instances in RAC environments share the control file, server parameter file, and all data files. These files reside on a shared cluster file system or on shared disks, and are accessed by all of the cluster database instances. Each instance also has its own set of redo log files, which must likewise reside on shared storage; during failures, shared access to redo log files enables surviving instances to perform recovery.
You can install and operate different releases of Oracle Database software on the same computer:
With Oracle Database 10g Release 2 (10.2), if you have an existing Oracle home, then you must install the database into the existing Oracle home. You should install Oracle Clusterware in a separate Oracle Clusterware home. Each node can have only one Oracle Clusterware home.
During installation, Oracle Universal Installer (OUI) prompts you to install additional Oracle Database 10g components if you have not already installed all of them.
OUI lets you de-install and re-install Oracle Database 10g Real Application Clusters if needed.
If you want to install Oracle9i and Oracle Database 10g Release 2 (10.2) on the same system, then you must install Oracle9i first. You cannot install Oracle9i on a system with Oracle Database 10g.
If OUI detects an earlier database release, then OUI asks you about your upgrade preferences. You have the option to upgrade one of the previous release databases with DBUA or to create a new database using DBCA. The information collected during this dialog is passed to DBUA or DBCA after the software is installed.
Note: Do not move Oracle binaries from the Oracle home to another location. Doing so can cause dynamic link failures. |
You can run different releases of Oracle Database and Automatic Storage Management (ASM). If the Oracle Database release and the ASM release are the same, then both can run from the same Oracle home. If they are different releases, then Oracle Database and ASM must be installed in separate Oracle homes. For example, you can install an ASM 10g Release 2 (10.2) instance and use it with an Oracle Database 10g Release 1 (10.1) database, or you can install an Oracle Database 10g Release 2 (10.2) database and use it with an ASM 10g Release 1 (10.1) instance.
Note: When using different release ASM and Oracle Database releases, the functionality of each is dependent on the functionality of the earlier software release. For example, an Oracle Database 10g release 10.1.0.2 using an ASM 10.1.0.3 instance will not be able to use new features available for ASM in the 10.1.0.3 release, but instead only ASM 10.1.0.2 features. Conversely, an Oracle Database 10g release 10.1.0.3 using an ASM instance release 10.1.0.2 will function like a release 10.1.0.2 database. |
Depending on whether this is the first time that you are installing Oracle server software on your system, you may need to create several groups and a user account to own Oracle software, as described later in the pre-installation procedures. The required groups and user are:
The Oracle Inventory group (oinstall)
You must create this group the first time you install Oracle software on the system. The usual name for this group is oinstall. Members of this group own the Oracle inventory, which is a catalog of all of the Oracle software installed on the system. Membership in the oinstall group is also required to perform some tasks involving Oracle Cluster Registry (OCR) keys that are created during Oracle Clusterware installation.
The OSDBA group (dba)
You must create the OSDBA group the first time you install Oracle software on the system.
The OSDBA group provides operating system verification of users that have database administrative privileges (the SYSDBA and SYSOPER privileges). The default name for this group is dba. If you want to specify a group name other than the default, then you are prompted for the name of the OSDBA group during installation.
You must create a new OSDBA group if you have an existing OSDBA group, but you want to give a different group of users database administrative privileges in a new Oracle server installation.
The OSOPER group (oper)
The OSOPER group is optional. Create this group if you want a separate group of users to have a limited set of database administrative privileges (the SYSOPER privilege), to perform operations such as backing up, recovering, starting up, and shutting down the database. The default name for this group is oper. To use this group, choose the Custom installation type when you install the software. You must create an OSOPER group in the following circumstances:
If an OSOPER group does not exist (for example, if this is the first installation of Oracle server software on the system)
If an OSOPER group exists, but you want to give a different group of users database operator privileges in a new Oracle server installation
The Oracle Software Owner user (oracle)
You must create the oracle user account the first time you install Oracle software on the system. The oracle user account owns all of the software installed during the installation. The usual name for this account is oracle. The oracle user must have the Oracle Inventory group as its primary group and the OSDBA group as a secondary group. It must also have the OSOPER group as a secondary group if you choose to create that group.
If an Oracle software owner user already exists, but you want to use a different user with different group membership for a new Oracle server installation, then you must give database administrative privileges to the groups to which that user belongs.
A single Oracle Inventory group is required for all installations of Oracle software on the system. However, you can create different Oracle software owner users, OSDBA groups, and OSOPER groups (other than oracle, dba, and oper) for separate installations. In addition, you can create a separate owner for Oracle Clusterware. Using different groups lets you grant DBA privileges to a particular operating system user on one database that the user does not have on another database on the same system.
See Also: Oracle Database Administrator's Reference, 10g Release 2 (10.2) for UNIX Systems and Oracle Database 10g Administrator's Guide for Linux for additional information about the OSDBA and OSOPER groups, and the SYSDBA and SYSOPER privileges
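On Linux, the groups and user described above are typically created with groupadd and useradd. The block below only prints the commands rather than executing them, because they must be run by root on every cluster node. The numeric IDs shown (501-503 and 1100) are arbitrary examples; whatever values you choose must be identical on all nodes so that file ownership matches across the cluster.

```shell
# Sketch only: printed rather than executed, because these commands
# require root privileges. The IDs 501-503 and 1100 are arbitrary
# examples; use the same IDs on every node in the cluster.
setup_cmds='groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
useradd -u 1100 -g oinstall -G dba,oper oracle
id oracle'
echo "$setup_cmds"
```

The final `id oracle` command verifies that the new account has oinstall as its primary group and dba (and optionally oper) as secondary groups.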
The following section provides a summary of the procedure for deploying RAC in grid environments with large numbers of nodes, using cloned Oracle Clusterware and RAC images.
See Also: For detailed information about cloning RAC and Oracle Clusterware images, refer to the following documents: Oracle Universal Installer and OPatch User's Guide (cloning, and adding and deleting nodes), and Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide (additional information about adding and deleting nodes)
This section contains the following:
Complete the following tasks to clone an Oracle Clusterware home on multiple nodes:
On the source node, install Oracle Clusterware software. All required root scripts must run successfully.
As root, create a tar file of the Oracle Clusterware home.
On the target node, create an Oracle Clusterware home, and copy the Oracle Clusterware tar file from the source node to the Oracle Clusterware home on the target node.
As root, uncompress the tar file.
Run OUI in clone mode, as described in Oracle Universal Installer and OPatch User's Guide.
Run root scripts.
Repeat steps 1 through 6 on each node that you want to add to the cluster. On the last node that you install, run the oifcfg tool to configure the network interfaces.
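The tar-based copy in steps 2 through 4 can be sketched as follows, demonstrated here on a scratch directory so the commands can run without a real cluster. The crs_home path, file names, and the scp target are assumptions; on a real cluster, run the tar creation and extraction as root so that file ownership and permissions are preserved.

```shell
# Sketch only: demonstrates the archive/copy/extract pattern on a
# scratch directory. Paths and host names are hypothetical examples.
SRC_PARENT=$(mktemp -d)                 # stand-in for /u01 on the source node
DEST=$(mktemp -d)                       # stand-in for /u01 on the target node
mkdir -p "$SRC_PARENT/crs_home/bin"     # stand-in for the Clusterware home
echo demo > "$SRC_PARENT/crs_home/bin/crsctl"

TARBALL="$SRC_PARENT/crs_home.tar"
tar -cpf "$TARBALL" -C "$SRC_PARENT" crs_home   # -p preserves permissions
# On a real cluster, copy the archive to the target node, for example:
#   scp "$TARBALL" target-node:/tmp/

tar -xpf "$TARBALL" -C "$DEST"          # extract into the target home's parent
ls "$DEST/crs_home/bin"
```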
Complete the following tasks to clone a RAC database image on multiple nodes:
On the source node, install a RAC database Oracle home. All required root scripts must run successfully. Do not create a database, and do not run any configuration tools.
As root, create a tar file of the RAC database Oracle home.
On the target node, create an Oracle home directory for the RAC database, and copy the RAC database tar file from the source node to the Oracle home on the target node.
Create the required Oracle users and groups, ensuring that you use the same names, user ID numbers, and group ID numbers as those on the source node.
As root, uncompress the tar file.
Run OUI in clone mode, as described in Oracle Universal Installer and OPatch User's Guide.
Run root scripts.
Repeat steps 1 through 7 on each node that you want to add to the cluster.
Run the NetCA configuration assistant on a local node of the cluster and, when prompted, provide a list of all nodes that are part of the cluster. This procedure creates the listener.
Run the configuration assistant DBCA to create the database.
Follow post-cloning phase instructions as provided in Oracle Universal Installer and OPatch User's Guide.
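Step 4 above requires that the user and group ID numbers match the source node. A pre-clone sanity check can be sketched as follows; the user and node names are assumptions, and the demonstration at the end uses the current user so the block can run anywhere.

```shell
# Sketch of a pre-clone check for step 4: the oracle user and its groups
# must have identical numeric IDs on the source and target nodes, or the
# extracted files will have the wrong ownership on the target.
uid_of() { id -u "$1" 2>/dev/null || echo "absent"; }
gid_of() { id -g "$1" 2>/dev/null || echo "absent"; }

# On a real cluster, compare the values reported by each node, e.g.:
#   ssh source-node 'id -u oracle; id -g oracle'
#   ssh target-node 'id -u oracle; id -g oracle'
# and require matching numbers before extracting the tar file.

# Demo with the current user, which always exists:
uid_of "$(id -un)"
```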