Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 10g Release 2 (10.2) Part Number B14197-02
This chapter introduces Oracle Clusterware and Oracle Real Application Clusters (RAC) by describing these products and explaining how to install, administer, and deploy them. It describes the Oracle Clusterware and RAC architectures, the software and hardware components of both products, and briefly covers workload management, services, and high availability for both single-instance Oracle databases and RAC environments. This chapter includes the following topics:
The Oracle Clusterware Architecture and Oracle Clusterware Processing
The Real Application Clusters Architecture and Real Application Clusters Processing
Introduction to Installing the Oracle Clusterware and Real Application Clusters
Additional Considerations and Features for Real Application Clusters
A cluster comprises multiple interconnected computers or servers that appear as if they are one server to end users and applications. Oracle Database 10g Real Application Clusters (RAC) enables the clustering of the Oracle Database. RAC uses the Oracle Clusterware for the infrastructure to bind multiple servers so that they operate as a single system.
Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle database. The Oracle Clusterware is also a required component for using RAC. In addition, the Oracle Clusterware enables both single-instance Oracle databases and RAC databases to use the Oracle high availability infrastructure. The Oracle Clusterware enables you to create a clustered pool of storage to be used by any combination of single-instance and RAC databases.
Oracle Clusterware is the only clusterware that you need for most platforms on which RAC operates. You can also use clusterware from other vendors if the clusterware is certified for RAC.
Single-instance Oracle databases have a one-to-one relationship between the Oracle database and the instance. RAC environments, however, have a one-to-many relationship between the database and instances. In RAC environments, the cluster database instances access one database. The combined processing power of the multiple servers can provide greater throughput and scalability than is available from a single server. RAC is the Oracle Database option that provides a single system image for multiple servers to access one Oracle database. In RAC, each Oracle instance usually runs on a separate server.
RAC is a unique technology that provides high availability and scalability for all application types. The RAC infrastructure is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple instances access a single database prevents the server from being a single point of failure. RAC enables you to combine smaller commodity servers into a cluster to create scalable environments that support mission critical business applications. Applications that you deploy on RAC databases can operate without code changes.
The Oracle Clusterware is software that when installed on servers running the same operating system, enables the servers to be bound together to operate as if they were one server. The Oracle Clusterware requires two clusterware components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information. The voting disk and the OCR must reside on shared storage. The Oracle Clusterware requires that each node be connected to a private network by way of a private interconnect.
The private interconnect that the Oracle Clusterware requires is a separate network that you configure between the cluster nodes. This interconnect, which is required by RAC, can be the same network that the clusterware uses, but the interconnect should not be accessible by nodes that are not part of the cluster.
Oracle recommends that you configure a redundant interconnect to prevent the interconnect from being a single point of failure. Oracle also recommends that you use User Datagram Protocol (UDP) on a Gigabit Ethernet for your cluster interconnect. Crossover cables are not supported for use with the Oracle Clusterware or RAC databases.
The Oracle Clusterware manages node membership and prevents split brain syndrome in which two or more instances attempt to control the database. This can occur in cases where there is a break in communication between nodes through the interconnect.
The Oracle Clusterware architecture supports high availability by automatically restarting stopped components. In a RAC environment, all Oracle processes are under the control of the Oracle clusterware. The Oracle Clusterware also provides an application programming interface (API) that enables you to control other Oracle processes with the Oracle Clusterware.
The Oracle Clusterware comprises several background processes that facilitate cluster operations. The Cluster Synchronization Service (CSS), Event Management (EVM), and Oracle Cluster components communicate with other cluster component layers in the other instances within the same cluster database environment. These components are also the main communication links between the Oracle Clusterware high availability components and the Oracle Database. In addition, these components monitor and manage database operations.
See Also: Chapter 14, "Making Applications Highly Available Using the Oracle Clusterware" for more detailed information about the Oracle Clusterware API
The following list describes the functions of some of the major Oracle Clusterware components. These components run as processes on UNIX and Linux operating systems and as services on Windows.
Note: On Windows-based operating systems, many of the components are threads of the Oracle process instead of separate processes.
Cluster Synchronization Services (CSS)—Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using third-party clusterware, then the css process interfaces with your clusterware to manage node membership information.
Cluster Ready Services (CRS)—The primary program for managing high availability operations within a cluster. Anything that the crs process manages is known as a cluster resource, which could be a database, an instance, a service, a Listener, a virtual IP (VIP) address, an application process, and so on. The crs process manages cluster resources based on the resource's configuration information that is stored in the OCR. This includes start, stop, monitor, and failover operations. The crs process generates events when a resource status changes. When you have installed RAC, crs monitors the Oracle instance, Listener, and so on, and automatically restarts these components when a failure occurs. By default, the crs process makes five attempts to restart a resource and then does not make further restart attempts if the resource does not restart.
Event Management (EVM)—A background process that publishes events that crs creates.
Oracle Notification Service (ONS)—A publish and subscribe service for communicating Fast Application Notification (FAN) events.
RACG—Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur.
Process Monitor Daemon (OPROCD)—This process is locked in memory to monitor the cluster and provide I/O fencing. OPROCD performs its check and then sleeps; if a wake up occurs later than expected, then OPROCD resets the processor and reboots the node. An OPROCD failure results in the Oracle Clusterware restarting the node. On Linux platforms, the hangcheck-timer kernel module provides this functionality.
In the following table, if a process has an (r) beside it, then the process runs as the root user. Otherwise the process runs as the oracle user.
Table 1-1 List of Processes and Windows Services associated with Oracle Clusterware

Oracle Clusterware Component | Linux/UNIX Process | Windows Service
---|---|---
Process Monitor Daemon | oprocd (r) | OraFenceService
RACG | racgmain, racgimon | —
Oracle Notification Service (ONS) | ons | —
Event Manager | evmd | OracleEVMService
Cluster Ready Services | crsd (r) | OracleCRService
Cluster Synchronization Services | ocssd | OracleCSService
When the Oracle Clusterware operates, several platform-specific processes or services will also be running on each node in the cluster to support the Oracle Clusterware. The Oracle Clusterware platform-specific UNIX-based processes and Windows-based services are described under the following headings:
The Oracle Clusterware processes on UNIX-based systems are:
crsd—Performs high availability recovery and management operations, such as maintaining the OCR and managing application resources. This process runs as the root user, or as a user in the admin group on Mac OS X-based systems, and restarts automatically upon failure.
evmd—Event manager daemon. This process also starts the racgevt process to manage FAN server callouts.
ocssd—Manages cluster node membership and runs as the oracle user; failure of this process results in cluster restart.
oprocd—Process monitor for the cluster. Note that this process only appears on platforms that do not use vendor clusterware with the Oracle Clusterware.
Note: RAC on Linux platforms can have multiple threads that appear as separate processes with separate process identifiers.
The Oracle Clusterware services on Windows-based systems are:
OracleCRService—Performs high availability recovery and management operations, such as maintaining the OCR and managing application resources. This service restarts automatically upon failure.
OracleCSService—Manages cluster node membership; failure of this service results in cluster restart.
OracleEVMService—Event manager service. This service also starts the racgevt process to manage FAN server callouts.
OraFenceService—Process monitor for the cluster. Note that this service only appears on platforms that do not use vendor clusterware with the Oracle Clusterware.
A RAC database is a logically or physically shared everything database. All datafiles, control files, SPFILEs, and redo log files in RAC environments must reside on cluster-aware shared disks so that all of the cluster database instances can access them. All of the instances must also share the same interconnect. In addition, RAC databases can share the same interconnect that the Oracle Clusterware uses.
Because a RAC database uses a shared everything architecture, RAC requires cluster-aware storage for all database files. How you configure your storage is your choice, but you must use a supported cluster-aware storage solution. Oracle Database 10g provides Automatic Storage Management (ASM), which is the recommended solution to manage your disk. However, you can also use a cluster-aware volume manager or a cluster file system, although neither is required. In RAC, the Oracle Database software manages disk access and the Oracle software is certified for use on a variety of storage architectures. A RAC database can have up to 100 instances. Depending on your platform, you can use the following file storage options for RAC:
ASM, which Oracle recommends
Oracle Cluster File System (OCFS), which is available for Linux and Windows platforms, or a third-party cluster file system that is certified for RAC
RAC databases differ architecturally from single-instance Oracle databases in that each RAC database instance also has:
At least one additional thread of redo for each instance
An instance-specific undo tablespace
All nodes in a RAC environment must connect to a Local Area Network (LAN) to enable users and applications to access the database. Applications should use the Oracle Database services feature to connect to an Oracle database. Services enable you to define rules and characteristics to control how users and applications connect to database instances. These characteristics include a unique name, workload balancing and failover options, and high availability characteristics. Oracle Net Services enables the load balancing of application connections across all of the instances in a RAC database.
Users can access a RAC database using a client-server configuration or through one or more middle tiers, with or without connection pooling. Users can be DBAs, developers, application users, power users, such as data miners who create their own searches, and so on.
Most public networks typically use TCP/IP, but you can use any supported hardware and software combination. RAC database instances can be accessed through a database's defined, default IP address and through VIP addresses.
Note: Do not use the interconnect or the private network for user communication because Cache Fusion uses the private interconnect for inter-instance communications.
In addition to the node's host name and IP address, you must also assign a virtual host name and an IP address to each node. The virtual host name or VIP should be used to connect to the database instance. For example, you might enter the virtual host name CRM in the address list of the tnsnames.ora file.
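As an illustration only (the network service name, host names, and port here are hypothetical), a tnsnames.ora entry that connects through the VIPs of a two-node cluster might look like the following:

```
crm =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = crm)
    )
  )
```

Because the address list names the VIPs rather than the standard public addresses, connections to a failed node are refused quickly and the client can try the next address in the list.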
A virtual IP address is an alternate public address that client connections use instead of the standard public IP address. To configure VIP addresses, you need to reserve a spare IP address for each node that uses the same subnet as the public network.
If a node fails, then the node's VIP fails over to another node, where the VIP does not accept connections. Clients that attempt to connect to the VIP therefore receive a rapid connection refused error instead of waiting for TCP connect timeout messages. You configure VIP addresses in the address list for your database connection definition to enable connectivity. The following section describes the RAC software components in more detail.
RAC databases have two or more database instances that each contain memory structures and background processes. A RAC database has the same processes and memory structures as a single-instance Oracle database as well as additional process and memory structures that are specific to RAC. Any one instance's database view is nearly identical to any other instance's view within the same RAC database; the view is a single system image of the environment.
Each instance has a buffer cache in its System Global Area (SGA). Using Cache Fusion, RAC environments logically combine each instance's buffer cache to enable the instances to process data as if the data resided on a logically combined, single cache.
To ensure that each RAC database instance obtains the block that it needs to satisfy a query or transaction, RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances, which effectively increases the size of the System Global Area for a RAC instance.
After one instance caches data, any other instance within the same cluster database can acquire a block image from another instance in the same database faster than by reading the block from disk. Therefore, Cache Fusion moves current blocks between instances rather than re-reading the blocks from disk. When a consistent block is needed or a changed block is required on another instance, Cache Fusion transfers the block image directly between the affected instances. RAC uses the private interconnect for inter-instance communication and block transfers. The Global Enqueue Service Monitor and the Instance Enqueue Process manage access to Cache Fusion resources as well as enqueue recovery processing.
These RAC-specific processes and the GRD collaborate to enable Cache Fusion. The RAC-specific processes and their identifiers are as follows:
LMON—Global Enqueue Service Monitor
LMD—Global Enqueue Service Daemon
LMS—Global Cache Service Process
LCK0—Instance Enqueue Process
If you use Network Attached Storage (NAS), then you are required to configure a second private network. Access to this network is typically controlled by the vendor's software. The private network uses static IP addresses.
Note: Many of the Oracle components that this section describes are in addition to the components that are described for single-instance Oracle databases in Oracle Database Concepts.
When you combine the Oracle Clusterware and RAC, you can achieve excellent scalability and high availability. The Oracle Clusterware achieves this using the components that this section describes under the following topics:
The Oracle Clusterware Voting Disk and Oracle Cluster Registry
Oracle Clusterware High Availability and the Application Programming Interface
The Oracle Clusterware requires the following two critical files:
Voting Disk—Manages cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. RAC uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have multiple voting disks. The Oracle Clusterware enables multiple voting disks but you must have an odd number of voting disks, such as three, five, and so on. If you define a single voting disk, then you should use external mirroring to provide redundancy.
Oracle Cluster Registry (OCR)—Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR also manages information about processes that the Oracle Clusterware controls. The OCR stores configuration information in a series of key-value pairs within a directory tree structure. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster. The Oracle Clusterware can multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).
Note: Both the voting disks and the OCRs must reside on either cluster file system files or on shared raw devices that you configure before you install the Oracle Clusterware and RAC.
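On a 10g Release 2 system, you can check the locations and integrity of these files with commands along the following lines (output varies by installation):

```
# List the configured voting disks
crsctl query css votedisk

# Report the location, version, and integrity of the OCR
ocrcheck
```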
Oracle Clusterware provides a high availability application programming interface (API) that you can use to enable the Oracle Clusterware to manage applications or processes that run on a cluster. This enables you to provide high availability for all of your applications. The Oracle Clusterware with ASM enables you to create a consolidated pool of storage to support both the single-instance Oracle databases and the RAC databases that are running on your cluster.
To maintain high availability, the Oracle Clusterware components can respond to status changes to restart applications and processes according to defined high availability rules. In addition, you can use the Oracle Clusterware high availability framework by registering your applications with the Oracle Clusterware and configuring the clusterware to start, stop, or relocate your application processes. That is, you can make custom applications highly available by using the Oracle Clusterware to create profiles that monitor, relocate, and restart your applications. The Oracle Clusterware responds to FAN events that are created by a RAC database. Oracle broadcasts FAN events when cluster servers may become unreachable and network interfaces are slow or non-functional.
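As a sketch of how a custom application is placed under clusterware control (the resource name and action script path are hypothetical, and the exact crs_profile options depend on your release), the sequence might look like this:

```
# Create a profile for a custom application resource, register it
# in the OCR, and bring it online under Oracle Clusterware control
crs_profile -create myapp -t application -a /opt/myapp/myapp_action.scr
crs_register myapp
crs_start myapp
```

Chapter 14 describes the profile attributes and the high availability framework commands in detail.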
See Also: Chapter 14, "Making Applications Highly Available Using the Oracle Clusterware" for more detailed information about the Oracle Clusterware API
Workload Management enables you to manage the distribution of workloads to provide optimal performance for users and applications. This includes providing the highest availability for database connections, rapid failure recovery, and balancing workloads optimally across the active configuration. Oracle Database 10g with RAC includes many features that can enhance workload management such as connection load balancing, fast connection failover (FCF), the load balancing advisory, and runtime connection load balancing. Workload management provides the greatest benefits to RAC environments. You can, however, take advantage of workload management by using Oracle services in single-instance Oracle Databases, especially those that use Data Guard or Streams. Workload management comprises the following components:
High Availability Framework—The RAC high availability framework enables the Oracle Database to maintain components in a running state at all times. Oracle high availability implies that the Oracle Clusterware monitors and restarts critical components if they stop, unless you override the restart processing. The Oracle Clusterware and RAC also provide alerts to clients when configurations change. This enables clients to immediately react to the changes, enabling application developers to hide outages and reconfigurations from end users. The scope of Oracle high availability spans from the restarting of stopped Oracle processes in an Oracle database instance to failing over the processing of an entire instance to other available instances.
Load Balancing Advisory—This is the ability of the database to provide information to applications about the current service levels being provided by the database and its instances. Applications can take advantage of this information to direct connection requests to the instance that will provide the application request with the best service quality to complete the application's processing. Oracle has integrated its Java Database Connectivity (JDBC) and Oracle Data Provider for .NET (ODP.NET) connection pools to work with the load balancing information. Applications can use the integrated connection pools without programmatic changes.
Services—Oracle Database 10g introduces a powerful automatic workload management facility, called services, to enable the enterprise grid vision. Services are entities that you can define in RAC databases. Services enable you to group database workloads and route the work to the optimal instances that are assigned to process the service. Furthermore, you can use services to define the resources that Oracle assigns to process workloads and to monitor workload resources. Applications that you assign to services transparently acquire the defined workload management characteristics, including high availability and load balancing rules. Many Oracle database features are integrated with services, such as Resource Manager, which enables you to restrict the resources that a service can use within an instance. Some database features are also integrated with Oracle Streams, Advanced Queuing, to achieve queue location transparency, and the Oracle Scheduler, to map services to specific job classes.
In RAC databases, the service performance rules that you configure control the amount of work that Oracle allocates to each available instance for that service. As you extend your database by adding nodes, applications, components of applications, and so on, you can add more services.
Connection Load Balancing—Oracle Net Services provides connection load balancing for database connections. Connection load balancing occurs when the connection is created. Connections for a given service are balanced across all of the running instances that offer the service. You should define how you want connections to be balanced in the service definition. However, you must still configure Oracle Net Services. When you enable the load balancing advisory, the Listener uses the load balancing advisory for connection load balancing.
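For example (the database, service, and instance names here are hypothetical), you could define a service with preferred and available instances by using the Server Control Utility:

```
# Create service oltp on database crm: preferred on instances crm1
# and crm2, with crm3 available for failover
srvctl add service -d crm -s oltp -r crm1,crm2 -a crm3

# Start the service and confirm where it is running
srvctl start service -d crm -s oltp
srvctl status service -d crm -s oltp
```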
See Also: Chapter 6, "Introduction to Workload Management" for more information about workload management and services
This section introduces the storage options for RAC and the installation processes for both the Oracle Clusterware and RAC under the following topics:
Real Application Clusters Installation and Database Creation Process Description
Cloning Oracle Clusterware and RAC Software in Grid Environments
The Oracle Clusterware is distributed on the Oracle Database 10g installation media. The Oracle Universal Installer (OUI) installs the Oracle Clusterware into a directory structure, which can be referred to as CRS_home, that is separate from other Oracle software running on the machine. Because the Oracle Clusterware works closely with the operating system, system administrator access is required for some of the installation tasks. In addition, some of the Oracle Clusterware processes must run as the system administrator, which is generally the root user on UNIX and Linux systems and the Administrator user on Windows systems.
Before you install the Oracle Clusterware, Oracle recommends that you run the Cluster Verification Utility (CVU) to ensure that your environment meets the Oracle Clusterware installation requirements. The OUI also automatically runs CVU at the end of the clusterware installation to verify various clusterware components. The CVU simplifies the installation, configuration, and overall management of the Oracle Clusterware installation process by identifying problems in cluster environments.
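For example (the node names are placeholders), the following command runs the CVU preinstallation check for an Oracle Clusterware installation across two nodes:

```
# Verify that node1 and node2 meet the Oracle Clusterware
# preinstallation requirements
cluvfy stage -pre crsinst -n node1,node2 -verbose
```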
During the Oracle Clusterware installation, you must identify three IP addresses for each node that is going to be part of your installation: one for the private interconnect, one for the public network, and a third, virtual IP address that clients use to connect to each instance.
The Oracle Clusterware installation process creates the voting disk and OCR on cluster-aware storage. If you select the option for normal redundant copies during the installation process, then the Oracle Clusterware automatically maintains redundant copies of these files to prevent the files from becoming single points of failure. The normal redundancy feature also eliminates the need for third party storage redundancy solutions. When you use normal redundancy, the Oracle Clusterware automatically maintains two copies of the Oracle Cluster Registry (OCR) file and three copies of the Voting Disk file.
Note: If you choose external redundancy for the OCR and voting disk, then to enable redundancy, your disk subsystem must be configurable for RAID mirroring. Otherwise, your system may be vulnerable because the OCR and voting disk are single points of failure.
The RAC software is distributed as part of the Oracle Database 10g installation media. By default, the standard Oracle Database 10g software installation process installs the RAC option when it recognizes that you are performing the installation on a cluster. The OUI installs RAC into a directory structure, which can be referred to as Oracle_home, that is separate from other Oracle software running on the machine. Because the OUI is cluster-aware, it installs the RAC software on all of the nodes that you defined to be part of the cluster. If you are using a certified cluster file system for the Oracle home, then select only the node that you are connected to for the installation.
You must first install the Oracle Clusterware before installing RAC. After the Oracle Clusterware is operational, you can use the OUI to install the Oracle database software with the RAC components. During the installation, the OUI runs the DBCA to create your RAC database according to the options that you select. The DBCA also runs the Net Configuration Assistant (NETCA) to configure the network for your RAC environment.
Oracle recommends that you select ASM during the installation to simplify storage management; ASM automatically manages the storage of all database files within disk groups. You can also configure services during installation, depending on your processing requirements. If you are using the Oracle Database 10g Standard Edition, then you must use ASM for storing all of the database files.
By default, the DBCA creates one service for your environment and this service is for the database. The default service is available on all instances in a RAC environment, unless the database is in restricted mode.
This section briefly summarizes the procedures for deploying RAC in grid environments that have large numbers of nodes using cloned images for Oracle Clusterware and RAC. Oracle cloning is the preferred method of extending your RAC environment by adding nodes and instances. To perform the cloning procedures that are summarized in this section, refer to the Oracle Universal Installer and OPatch User's Guide.
The cloning process assumes that you successfully installed an Oracle Clusterware home and an Oracle home with RAC on at least one node. In addition, all root scripts must have run successfully on the node from which you are extending your cluster database. To use Oracle cloning, first clone the Oracle Clusterware home and then clone the Oracle home with the RAC software.
To clone the Oracle Clusterware home, on UNIX-based systems create a tar file of the Oracle Clusterware home and copy the file to the new node's Oracle Clusterware home. On Windows-based systems you must create zip files. Then on UNIX-based systems create the required users and groups on the new nodes. On Windows-based systems, you do not need to create users and groups, but the user that performs the cloning should be the same user that performed the installation.
Extract the tar file, or unzip the zip file, and run the Oracle Universal Installer (OUI) in clone mode as described in the Oracle Universal Installer and OPatch User's Guide. Then run the installation scripts and repeat these steps on each node that you are adding. The process for cloning the Oracle home onto new nodes is similar to the process for cloning the Oracle Clusterware home. In addition, you must run the Oracle Net Configuration Assistant (NETCA) on each new node to create a Listener.
If you have not already created a database, then you can run the Database Configuration Assistant (DBCA) to create one. Finally, follow the post-cloning procedures to complete the extension of your RAC environment onto the new nodes.
See Also: Oracle Universal Installer and OPatch User's Guide for details about the Oracle cloning procedures
In addition to configuring services to manage your workloads, also consider using the following features when you deploy RAC:
Scaling Your RAC Database—As mentioned, you can add nodes and instances to your RAC environment using Oracle cloning. If you choose not to use cloning, then you can extend your database by using the manual procedures that are described in Chapter 10, "Adding and Deleting Nodes and Instances on UNIX-Based Systems" or Chapter 11, "Adding and Deleting Nodes and Instances on Windows-Based Systems".
Enterprise Manager—Use Enterprise Manager to administer your entire RAC environment, not just the RAC database. Use Enterprise Manager to create and modify services, and to start and stop the cluster database instances and the cluster database. Enterprise Manager has additional features as detailed in the section "Overview of Using Enterprise Manager with Real Application Clusters".
Recovery Manager (RMAN)—RMAN backs up, restores, and recovers datafiles, control files, server parameter files (SPFILEs) and archived redo logs. You can use RMAN with a media manager to back up files to external storage. You can also configure parallelism when backing up or recovering RAC databases. In RAC, RMAN channels can be dynamically allocated across all of the RAC instances. Channel failover enables failed operations on one node to continue on another node. You can use RMAN in RAC from the Oracle Enterprise Manager Backup Manager or from a command line.
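As an illustrative sketch (the connect strings and net service names are hypothetical), allocating one RMAN channel through each instance spreads the backup workload across the cluster:

```
# One channel per instance; each CONNECT string resolves to a
# different RAC instance
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK CONNECT 'sys/password@crm1';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK CONNECT 'sys/password@crm2';
  BACKUP DATABASE PLUS ARCHIVELOG;
}
```

With channel failover, a backup piece that fails on one node can continue on another.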
Automatic undo management—Automatically manages undo processing.
Automatic segment space management (ASSM)—Automatically manages segment freelists and freelist groups.
Locally managed tablespaces—Enhances space management performance.
Cluster Verification Utility (CVU)—Use CVU to verify the status of your clusterware if you experience problems or use it whenever you reconfigure your cluster.
Sequences—If you use sequence numbers, then always use CACHE with the NOORDER option for optimal sequence number generation performance. With the CACHE option, however, you may have gaps in the sequence numbers. If your environment cannot tolerate sequence number gaps, then use the NOCACHE option or consider pre-generating the sequence numbers. If your application requires sequence number ordering but can tolerate gaps, then use CACHE and ORDER to cache and order sequence numbers in RAC. If your application requires ordered sequence numbers without gaps, then use NOCACHE and ORDER. This combination has the most negative effect on performance compared to other caching and ordering combinations.
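For illustration, the combinations described above can be expressed as follows (the sequence names are hypothetical; each comment restates the trade-off from the text):

```sql
-- Best generation performance; gaps possible, ordering not guaranteed
CREATE SEQUENCE order_id_seq CACHE 100 NOORDER;

-- Ordered across instances; gaps still possible because of caching
CREATE SEQUENCE audit_seq CACHE 100 ORDER;

-- Ordered and gap-averse; the slowest combination in RAC
CREATE SEQUENCE invoice_seq NOCACHE ORDER;
```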
Indexes—If you use indexes, consider alternatives, such as reverse key indexes, to optimize index performance. Reverse key indexes are especially helpful if you have frequent inserts to one side of an index, such as indexes that are based on insert date.
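As a sketch, a reverse key index on a hypothetical insert-date column is created with the REVERSE keyword, which stores the key bytes in reverse order so that monotonically increasing values are spread across many leaf blocks instead of contending for the rightmost one:

```sql
-- Hypothetical table and column names
CREATE INDEX orders_date_ix ON orders (order_date) REVERSE;
```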
This section describes the following RAC environment management topics:
Administrative Tools for Real Application Clusters Environments
Evaluating Performance in Real Application Clusters Environments
Consider performing the following steps during the design and development of applications that you are deploying on a RAC database. Consider tuning:
The design and the application
The memory and I/O
Contention
The operating system
Note: If an application does not scale on an SMP machine, then moving the application to a RAC database cannot improve performance.
Consider using hash partitioning for insert-intensive online transaction processing (OLTP) applications. Hash partitioning:
Reduces contention on concurrent inserts into a single database structure
Affects sequence-based indexes when indexes are locally partitioned with a table and tables are partitioned on sequence-based keys
Is transparent to the application
If you use hash partitioning for tables and indexes in OLTP environments, then you can greatly improve performance in your RAC database. Note that you cannot use index range scans on a hash-partitioned index.
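The following sketch shows a hash-partitioned OLTP table with a local index that is partitioned along with it; the table, columns, and partition count are hypothetical and should be sized for your own workload:

```sql
-- Hash partitioning on a sequence-based key spreads concurrent
-- inserts across partitions, reducing block contention
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY HASH (order_id) PARTITIONS 16;

-- LOCAL makes the index equipartitioned with the table,
-- so each index partition covers exactly one table partition
CREATE INDEX orders_id_ix ON orders (order_id) LOCAL;
```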
If you are using sequence numbers, then always use the CACHE option. If you use sequence numbers with the CACHE option, then:
Your system may lose sequence numbers
There is no guarantee of the ordering of the sequence numbers
Note: If your environment cannot tolerate sequence number gaps, then consider pre-generating the sequence numbers or use the NOCACHE and ORDER options.
Oracle enables you to administer a cluster database as a single system image through Enterprise Manager, SQL*Plus, or through RAC command-line interfaces such as Server Control (SRVCTL). You can also use several tools and utilities to manage your RAC environment and its components as follows:
Enterprise Manager—Enterprise Manager has both the Database Control and Grid Control GUI interfaces for managing both single instance and RAC environments.
Cluster Verification Utility (CVU)—CVU is a command-line tool that you can use to verify a range of cluster and RAC-specific components such as shared storage devices, networking configurations, system requirements, and the Oracle Clusterware, as well as operating system groups and users. You can use CVU for pre-installation checks as well as for post-installation checks of your cluster environment. CVU is especially useful during pre-installation and during installation of the Oracle Clusterware and RAC components. The OUI runs CVU after the Oracle Clusterware and the Oracle installation to verify your environment.
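As an illustration, typical CVU invocations look like the following; the node names are hypothetical, and the available stages and components are listed by cluvfy itself:

```
$ cluvfy stage -pre crsinst -n node1,node2 -verbose   # pre-installation checks for Oracle Clusterware
$ cluvfy stage -post crsinst -n all                   # post-installation verification
$ cluvfy comp nodecon -n all                          # verify node connectivity after a reconfiguration
```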
Server Control (SRVCTL)—SRVCTL is a command-line interface that you can use to manage a RAC database from a single point. You can use SRVCTL to start and stop the database and instances and to delete or move instances and services. You can also use SRVCTL to manage configuration information.
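For example, common SRVCTL operations on a hypothetical database named sales with instances sales1 and sales2 might look like this:

```
$ srvctl start database -d sales             # start all instances of the sales database
$ srvctl status instance -d sales -i sales1  # check the status of one instance
$ srvctl stop instance -d sales -i sales2    # stop a single instance
$ srvctl config database -d sales            # display stored configuration information
```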
Cluster Ready Services Control (CRSCTL)—CRSCTL is a command-line tool that you can use to manage the Oracle Clusterware. You can use CRSCTL to start and stop the Oracle Clusterware. CRSCTL has many other options, such as enabling online debugging.
See Also: "Diagnosing the Oracle Clusterware High Availability Components" for more information about CRSCTL
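A brief sketch of basic CRSCTL usage follows; starting and stopping the Oracle Clusterware typically requires root (or Administrator) privileges:

```
# crsctl check crs   # verify the status of the Oracle Clusterware daemons
# crsctl stop crs    # stop the Oracle Clusterware on the local node
# crsctl start crs   # restart the Oracle Clusterware on the local node
```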
Oracle Interface Configuration Tool (OIFCFG)—OIFCFG is a command-line tool for both single-instance Oracle databases and RAC environments that you can use to allocate and de-allocate network interfaces to components. You can also use OIFCFG to direct components to use specific network interfaces and to retrieve component configuration information.
See Also: "Administering System and Network Interfaces with OIFCFG" for more information about OIFCFG
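For illustration, OIFCFG usage might look like the following; the interface name and subnet are hypothetical and must match your own network configuration:

```
$ oifcfg getif                                          # list the current interface configuration
$ oifcfg setif -global eth1/192.168.0.0:cluster_interconnect   # allocate an interface as the interconnect
$ oifcfg delif -global eth1                             # de-allocate the interface
```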
OCR Configuration Tool (OCRCONFIG)—OCRCONFIG is a command-line tool for OCR administration. You can also use the OCRCHECK and OCRDUMP utilities to troubleshoot configuration problems that affect the OCR.
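As a sketch, common OCR administration commands include the following; the export and dump file paths are hypothetical:

```
$ ocrcheck                         # verify the integrity and size of the OCR
$ ocrconfig -showbackup            # list the automatically generated OCR backups
$ ocrconfig -export /tmp/ocr.exp   # export the OCR contents to a file
$ ocrdump /tmp/ocr.dmp             # dump OCR contents for troubleshooting
```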
Web-based Enterprise Manager Database Control and Grid Control enable you to monitor a RAC database. The Enterprise Manager Console is a central point of control for the Oracle environment that you access by way of a graphical user interface (GUI). Use the Enterprise Manager Console to initiate cluster database management tasks. Use Enterprise Manager Grid Control to administer multiple RAC databases. Also note the following points about monitoring RAC environments:
The global views, or GV$ views, are based on V$ views. The catclustdb.sql script creates the GV$ views. Run this script if you do not create your database with the DBCA; otherwise, the DBCA runs this script for you.
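For example, a GV$ view adds an INST_ID column to its V$ counterpart, so a single query can report on every instance in the cluster:

```sql
SELECT inst_id, instance_name, status
FROM   gv$instance
ORDER  BY inst_id;
```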
Statspack is RAC-aware.
Note: Instead of using Statspack, Oracle recommends that you use the more sophisticated management and monitoring features of the Oracle Database 10g Diagnostic and Tuning packs, which include the Automatic Database Diagnostic Monitor (ADDM).
You do not need to perform special tuning for RAC; RAC scales without special configuration changes. If your application performed well on a single-instance Oracle database, then it will perform well in a RAC environment. Many of the tuning tasks that you would perform on a single-instance Oracle database can also improve RAC database performance. This is especially true if your environment requires scalability across a greater number of CPUs.
Some of the RAC-specific performance features are:
Oracle dynamically allocates Cache Fusion resources as needed
The dynamic mastering of resources improves performance by keeping resources local to data blocks
Cache Fusion Enables a Simplified Tuning Methodology
You do not have to tune any parameters for Cache Fusion
No application-level tuning is necessary
You can use a bottom-up tuning approach with virtually no effect on your existing applications
More Detailed Performance Statistics
More views for RAC performance monitoring
Enterprise Manager Database Control and Grid Control are Integrated with RAC