Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide 10g Release 2 (10.2) Part Number B14197-02
This chapter describes administrative tasks and options within Oracle tools that are specific to Oracle Real Application Clusters (RAC) and not discussed elsewhere in this book. In some cases you can choose among several tools to perform a task, while other tasks must be performed through a specific tool, such as Enterprise Manager or SRVCTL. In addition, this chapter describes how to quiesce a RAC database and how to administer network interfaces with the Oracle Interface Configuration Tool (OIFCFG). The topics in this chapter are:
Real Application Clusters Administration Procedures for Enterprise Manager
Additional Information About SQL*Plus in Real Application Clusters
Quiescing Real Application Clusters Databases
Administering Network Interfaces with the Oracle Interface Configuration Tool (OIFCFG)
See Also: Oracle Enterprise Manager Concepts and the Enterprise Manager online help for more information about Enterprise Manager
Within Enterprise Manager, RAC-specific administrative tasks generally focus on two levels: tasks that affect an entire cluster database and tasks that affect specific instances. For example, you can use Enterprise Manager to administer storage, the schema, and security at the cluster database level. Or you can perform instance-specific commands such as setting parameters or creating resource plans.
Because there is one Enterprise Manager Agent on each node of a RAC database, for Database Control you can use any URL for that database to administer it with Enterprise Manager. You can manage all of the following RAC components as targets in your Enterprise Manager framework:
Host cluster—Accessible from the Cluster Database Home Page.
Cluster database instances—Links to the instance pages appear on the Cluster Database Home Page.
Hosts and Listeners—Links to hosts and Listeners appear on the Cluster Database Instance Pages.
See Also: Oracle Enterprise Manager Concepts for information about creating administrator accounts, using privileges, defining roles, and so on, and Oracle Enterprise Manager Advanced Configuration for information about configuring Enterprise Manager Grid Control for multiple RAC databases
Discovering RAC database and instance targets in Enterprise Manager enables monitoring and administration from the console. Database Control does not require discovery because DBCA performs any necessary configuration while creating the database. For Grid Control, however, you can use the Enterprise Manager console to discover RAC database and instance targets. If the Grid Control agents are installed on a cluster that already has a RAC database, the RAC database targets are discovered at install time. You can use the console to discover targets if a database is created after the agents are installed or if a database is not automatically discovered at agent install time. To discover nodes and instances, use Enterprise Manager Grid Control as follows:
Log in to Enterprise Manager and click the Targets tab.
Click the Database tab to view all of the available targets. The column labeled Type shows the RAC databases using the entry "Cluster Database".
Add the database target by selecting the target name, then clicking Add. The Add Database Target: Specify Host page appears, which enables you to add databases, Listeners, and Automatic Storage Management (ASM) as monitored targets.
Click the flashlight icon to display the available host names, select a host, then click Continue. The Add Database: Specify Source page appears.
Request that Enterprise Manager discover only single-instance databases and Listeners, or discover all cluster databases, single-instance databases, and Listeners on the cluster, then click Continue.
Enterprise Manager performs discovery to locate and display the cluster database and its associated instances. The Targets Discovered on Cluster page appears. If this procedure did not discover your reconfigured cluster database and all of its instances, you can use this page to manually configure your cluster databases and single-instance databases.
This section describes the following Enterprise Manager pages for RAC:
The Databases Summary Page
The Cluster Database Home Page
The Cluster Database Instance Pages
The Databases Overview Page
The Cluster Home Page
The Databases Summary Page is a top-level page in Enterprise Manager Grid Control. The page shows both cluster and single-instance databases. If there are cluster databases in your environment, the Databases Summary page displays "Cluster Database" in the Type column. The page also indicates cluster database availability as well as the ratio of active instances to inactive instances. Click a cluster database link and Enterprise Manager displays the Cluster Database Home Page for that database, which is described under the following heading.
From the Cluster Database Home Page you can manage the cluster nodes and hosts as well as cluster subcomponents such as instances, Listeners, and interconnects. The Cluster Database Home Page is also a summary page for cluster database management that provides an overview of cluster database activity. Enterprise Manager uses a unique database name to identify the cluster database it represents. You can use the Administration tab on this page to perform many activities such as:
Create undo tablespaces and redo threads and assign them to specific instances, modify the server parameter file (SPFILE), and create backups
Start, stop, and relocate database services at the cluster database level
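The same service operations are available from the command line with SRVCTL. The following is a minimal sketch, assuming a hypothetical database named crm with a service named callcenter and instances crm1 and crm2:
srvctl start service -d crm -s callcenter
srvctl stop service -d crm -s callcenter
srvctl relocate service -d crm -s callcenter -i crm1 -t crm2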
You can use the Maintenance tab on this page to perform operations such as:
Create backup and recovery scenarios
Toggle the archive mode on and off
Administer recovery settings
Manage resource plans for the database and its instances
You can define and modify the resource plans for the cluster database and also activate and deactivate resource plans for specific instances. You can also use the Resource Plan Schedule to schedule resource plan activation.
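Outside of Enterprise Manager, you can activate a resource plan for the entire cluster database or for a single instance by setting the RESOURCE_MANAGER_PLAN parameter in SQL*Plus. The following is a minimal sketch, assuming a hypothetical plan named DAYTIME, an instance named orcl1, and a database that uses an SPFILE:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME' SID = '*';     -- activate on all instances
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME' SID = 'orcl1'; -- activate on one instance only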
You can use the Interconnects tab on this page to perform tasks such as:
Monitoring the interconnect interfaces
Determining the load added by individual instances and databases on the interconnect
Determining configuration issues
Identifying transfer rate-related issues, including excess traffic
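Outside the console, you can confirm which interconnect interface each instance is using by querying the GV$CLUSTER_INTERCONNECTS view from SQL*Plus, as in this minimal sketch:
SELECT inst_id, name, ip_address, is_public FROM GV$CLUSTER_INTERCONNECTS ORDER BY inst_id;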
The instance pages show instance-specific information similar to the information that you would see for a single-instance Oracle database. The RAC-specific contents of the instance pages are:
Configuration—You can view instance states, view and edit initialization parameters at the instance level and at the cluster database level, and view resource plan performance statistics. You can also view and modify the undo tablespaces assigned to an instance and the undo tablespace retention period.
Sessions—You can list the statuses of connected users, view the latest SQL for specific sessions, and terminate sessions.
Locks—You can view details for currently held User type and System type locks.
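The lock details shown on the Locks page can also be viewed across all instances from SQL*Plus by querying the global dynamic performance views. A minimal sketch joining GV$LOCK to GV$SESSION:
SELECT l.inst_id, s.username, l.type, l.lmode, l.request
  FROM GV$LOCK l, GV$SESSION s
 WHERE l.sid = s.sid AND l.inst_id = s.inst_id;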
The Databases Overview Page links to Cluster Home Pages and to the node or instance Home Pages.
The Cluster Home Page displays an overview of activities and detailed reports at both the cluster and instance levels. The Cluster Home Page has the following sections:
General Section—Provides a cluster status overview.
Configuration Section—Lists the hardware platform, operating system and version, and Oracle or vendor clusterware version.
Cluster Databases Table—Displays the cluster databases associated with a cluster, their availability, and any cluster database alerts. You can access the individual Cluster Database Home Pages from the Cluster Databases Table.
Alerts Table—Provides alert information such as severity rating.
Hosts Table—Displays information about the hosts or nodes in the cluster.
The Cluster Database Home page shows all of the instances in the RAC database and provides an aggregate collection of several RAC-specific statistics that are collected by the Automatic Workload Repository (AWR) for server manageability.
You do not need to navigate to an instance-specific page to see these details. However, on the Cluster Database Home page, if an instance that should be operating is down, or if an instance has a high number of alerts, then you can drill down to the instance-specific page for each alert.
To perform specific administrative tasks as described in the remainder of this section, log in to the target RAC database, navigate to the Cluster Database Home page, and click the Administration tab.
You can administer Enterprise Manager jobs at both the database and instance levels. For example, you can create a job at the cluster database level and the job will run on any active instance of the target RAC database. Or you can create a job at the instance level and the job will only run on the specific instance for which you created it. In the event of a failure, recurring jobs can run on a surviving instance.
Because you can create jobs at the instance level, cluster level, or cluster database level, jobs can run on any available host in the cluster database. This applies to scheduled jobs as well. Enterprise Manager also displays job activity in several categories, namely, Active, History, and Library.
Use the Jobs tab to submit operating system scripts and SQL scripts and to examine scheduled jobs. For example, to create a backup job for a specific RAC database:
Click Targets and click the database for which you want to create the job.
Log in to the target database.
When Enterprise Manager displays the Database Home page, click Maintenance.
Complete the Enterprise Manager Job Wizard panels to create the job.
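Enterprise Manager runs these jobs through its own framework, but a comparable recurring task can also be scheduled inside the database with the DBMS_SCHEDULER package. The following is a minimal sketch, assuming a hypothetical backup script at /u01/scripts/backup.sh:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BACKUP',          -- hypothetical job name
    job_type        => 'EXECUTABLE',
    job_action      => '/u01/scripts/backup.sh',  -- hypothetical script path
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',     -- run daily at 2:00 a.m.
    enabled         => TRUE);
END;
/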
You can use Enterprise Manager to configure RAC environment alerts. You can also configure special RAC database tests, such as global cache converts, consistent read requests, and so on.
Enterprise Manager distinguishes between database- and instance-level alerts in RAC environments. Alert thresholds for instance level alerts, such as archive log alerts, can be set at the instance target level. This enables you to receive alerts for the specific instance if performance exceeds your threshold. You can also configure alerts at the database level, such as setting alerts for tablespaces. This enables you to avoid receiving duplicate alerts at each instance.
See Also: OTN for an example of configuring alerts in RAC and the Oracle Database PL/SQL Packages and Types Reference for information about using packages to configure thresholds
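For example, a database-level tablespace alert such as the one described above can be configured with the DBMS_SERVER_ALERT package. The following is a minimal sketch that sets warning and critical space thresholds for a hypothetical USERS tablespace:
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '85',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '97',
    observation_period      => 1,    -- minutes
    consecutive_occurrences => 1,
    instance_name           => NULL, -- NULL sets the threshold at the database level
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'USERS');
END;
/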
You can define blackouts for all managed targets of a RAC database to prevent alerts from occurring while performing maintenance. You can define blackouts for an entire cluster database or for specific cluster database instances.
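Blackouts can also be controlled from the command line with the emctl utility. The following sketch is illustrative only; the blackout name nightly_maint and the target name racdb:rac_database are hypothetical and depend on your configuration:
emctl start blackout nightly_maint racdb:rac_database -d 02:00
emctl stop blackout nightly_maint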
The following sections describe the use of SQL*Plus in RAC environments:
How SQL*Plus Commands Affect Instances
Verifying That Instances are Running
Most SQL statements affect the current instance. You can use SQL*Plus to start and stop instances in the RAC database. You do not need to run SQL*Plus commands as root on UNIX-based systems or as Administrator on Windows-based systems. You need only the proper database account with the privileges that you normally use for a single-instance Oracle database. Some examples of how SQL*Plus commands affect instances are:
ALTER SYSTEM CHECKPOINT LOCAL affects only the instance to which you are currently connected, rather than the default instance or all instances.
ALTER SYSTEM CHECKPOINT or ALTER SYSTEM CHECKPOINT GLOBAL affects all instances in the cluster database.
ALTER SYSTEM SWITCH LOGFILE affects only the current instance.
Table 9-1 describes how SQL*Plus commands affect instances.
Table 9-1 How SQL*Plus Commands Affect Instances
ARCHIVE LOG—Always affects the current instance.
CONNECT—Affects the default instance if no instance is specified in the CONNECT command.
HOST—Affects the node running the SQL*Plus session, regardless of the location of the current and default instances.
RECOVER—Does not affect any particular instance, but rather the database.
SHOW INSTANCE—Displays information about the current instance.
SHOW PARAMETER and SHOW SGA—Display parameter and SGA information from the current instance.
STARTUP and SHUTDOWN—Affect the current instance.
To verify that instances are running, on any node from a SQL*Plus prompt enter:
CONNECT SYS/password AS SYSDBA
SELECT * FROM V$ACTIVE_INSTANCES;
Oracle returns output similar to the following:
INST_NUMBER INST_NAME
----------- -----------------
          1 db1-sun:db1
          2 db2-sun:db2
          3 db3-sun:db3
The output columns for this example are shown in Table 9-2.
Table 9-2 Descriptions of V$ACTIVE_INSTANCES Columns
INST_NUMBER—Identifies the instance number.
INST_NAME—Identifies the host name and instance name as host_name:instance_name.
The procedure for quiescing RAC databases is identical to quiescing a single-instance database. You use the ALTER SYSTEM QUIESCE RESTRICTED statement from one instance. You cannot open the database from any instance while the database is in the process of being quiesced. After all non-DBA sessions become inactive, the ALTER SYSTEM QUIESCE RESTRICTED statement completes and the database is considered to be in a quiesced state. In an Oracle RAC environment, this statement affects all instances, not just the one from which the statement is issued.
To successfully issue the ALTER SYSTEM QUIESCE RESTRICTED statement in a RAC environment, you must have the Database Resource Manager feature activated, and it must have been activated since instance startup for all instances in the cluster database. It is through the facilities of the Database Resource Manager that non-DBA sessions are prevented from becoming active. Also, while this statement is in effect, any attempt to change the current resource plan is queued until after the system is unquiesced.
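A minimal example of the sequence, issued from any one instance:
ALTER SYSTEM QUIESCE RESTRICTED;
-- perform the administrative work that required the quiesced state
ALTER SYSTEM UNQUIESCE;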
These conditions apply to RAC:
If you issued the ALTER SYSTEM QUIESCE RESTRICTED statement but Oracle has not finished processing it, you cannot open the database.
You cannot open the database if it is already in a quiesced state.
The ALTER SYSTEM QUIESCE RESTRICTED and ALTER SYSTEM UNQUIESCE statements affect all instances in a RAC environment, not just the instance that issues the command.
You cannot use the quiesced state to take a cold backup. This is because Oracle background processes may still perform updates for Oracle internal purposes even while the database is in the quiesced state. In addition, the file headers of online datafiles continue to look like they are being accessed. They do not look the same as if a clean shutdown were done. However, you can still take online backups while the database is in a quiesced state. Refer to the Oracle Database Administrator's Guide for details on the quiesce database feature and the Oracle Database Reference for more information about the ALTER SYSTEM QUIESCE RESTRICTED syntax.
This section describes the following Oracle Interface Configuration (OIFCFG) topics:
Defining Network Interfaces
Syntax and Commands for the OIFCFG Command-Line Tool
Use the OIFCFG command-line tool in single-instance Oracle databases and in RAC database environments to:
Allocate and de-allocate network interfaces to components
Direct components to use specific network interfaces
Retrieve component configuration information
The Oracle Universal Installer (OUI) also uses OIFCFG to identify and display the interfaces available on the system.
The specification for a network interface uniquely identifies it using the interface name, its associated subnet, and interface type. The interface type indicates the purpose for which the network is configured. The supported interface types are:
Public—An interface that can be used for communication with components external to RAC instances, such as Oracle Net and Virtual Internet Protocol (VIP) addresses
Cluster interconnect—A private interface used for the cluster interconnect to provide inter-instance or Cache Fusion communication
A network interface can be stored as a global interface or as a node-specific interface. An interface is stored as a global interface when all of the nodes of a RAC cluster have the same interface connected to the same subnet (recommended). It is stored as a node-specific interface only when there are some nodes in the cluster that have a different set of interfaces and subnets. If an interface is configured as both a global and a node-specific interface, the node-specific definition takes precedence over the global definition. A network interface specification is in the form of:
interface_name/subnet:interface_type
For example, the following identifies qfe0 as a cluster interconnect located at the address 204.152.65.32:
qfe0/204.152.65.32:cluster_interconnect
Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:
nodename—Name of the Oracle Clusterware node as listed in the output from the olsnodes command
if_name—Name by which the interface is configured in the system
subnet—Subnet address of the interface
if_type—Type of interface: public or cluster_interconnect
You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node by executing the iflist keyword, as shown in this example:
oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16
You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]
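For example, running getif with no options lists all of the stored interfaces. The output below is illustrative; the interface names, subnets, and types depend on your configuration:
oifcfg getif
hme0 139.185.141.0 global public
qfe0 204.152.65.16 global cluster_interconnect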
To store a new interface, use the setif keyword. For example, to store the interface hme0, with the subnet 139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster), you would use the command:
oifcfg setif -global hme0/139.185.141.0:cluster_interconnect
For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0 interface with the following commands, assuming 139.185.142.1 and 139.185.142.2 are the subnet addresses for the interconnect on rac1 and rac2 respectively:
oifcfg setif -node rac1 cms0/139.185.142.1:cluster_interconnect
oifcfg setif -node rac2 cms0/139.185.142.2:cluster_interconnect
Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command deletes the global interface named qfe0 for the subnet 204.152.65.16:
oifcfg delif -global qfe0/204.152.65.16
On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:
oifcfg delif -global
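A common use of delif and setif together is moving the interconnect to a different interface or subnet: delete the old definition, then store the new one. The following sketch assumes a hypothetical replacement interface qfe1 on subnet 10.0.0.0:
oifcfg delif -global qfe0/204.152.65.16
oifcfg setif -global qfe1/10.0.0.0:cluster_interconnect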
Use the following procedure to change a VIP address:
Stop all database and ASM instances.
Stop the Listeners and node applications using the srvctl stop nodeapps command.
Run the following command to verify node connectivity between all of the nodes for which your cluster is configured. This command discovers all of the network interfaces available on the cluster nodes and verifies the connectivity between all of the nodes by way of the discovered interfaces. This command also lists all of the interfaces available on the nodes which are suitable for use as VIPs.
cluvfy comp nodecon -n all [-verbose]
See Also: Appendix A, "Troubleshooting" for more information about enabling and using the Cluster Verification Utility (CVU)
Run the srvctl modify nodeapps command with the -A option, as described in Appendix E, to assign the new VIP address (see the example following this procedure). Use the crs_stat command to identify all active node applications.
Restart all of the instances and node applications that you stopped in Steps 1 and 2.
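For example, the srvctl modify nodeapps command in the preceding procedure might look like the following, where the node name racnode1, the VIP address, netmask, and interface name eth0 are illustrative and must match your own configuration:
srvctl modify nodeapps -n racnode1 -A 192.168.1.125/255.255.255.0/eth0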