Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2) Part Number B14239-01
This chapter describes how to configure redo transport services to transmit redo from a production database to one or more destinations. It contains the following topics:
Redo transport services control the automated transfer of redo data from the production database to one or more destinations. Redo transport services also manage the process of resolving any gaps in the archived redo log files due to a network failure.
Redo transport services can transmit redo data to local and remote destinations. Remote destinations can include any of the following types: physical and logical standby databases, archived redo log repositories, Oracle Change Data Capture staging databases, and Oracle Streams downstream capture databases.
Figure 5-1 shows a simple Data Guard configuration with redo transport services archiving redo data to a local destination on the primary database while also transmitting it to archived redo log files or standby redo log files on a remote standby database destination.
This section contains the following topics:
There are several types of destinations supported by redo transport services:
Oracle Data Guard standby databases
Standby database destinations can be either physical standby databases or logical standby databases. Section 1.1.2 discusses standby databases.
This type of destination allows off-site archiving of redo data. An archive log repository is created by using a physical standby control file, starting the instance, and mounting the database. This database contains no data files and cannot be used for switchover or failover. This alternative is useful as a way of holding archived redo log files for a short period of time, perhaps a day, after which the log files can then be deleted. This avoids most of the storage and processing expense of another fully configured standby database.
Oracle recommends using an archived redo log repository for temporary storage of archived redo log files. This can be accomplished by configuring the repository destination for archiver-based transport (using the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter) in a Data Guard configuration running in maximum performance mode. For a no data loss environment, you should use a fully configured standby database with the LGWR, SYNC, and AFFIRM transport settings in a Data Guard configuration running in either maximum protection mode or maximum availability mode.
Oracle Streams real-time downstream capture database
This destination type allows Oracle Streams to configure a capture process on a remote downstream database. The Streams downstream capture process uses redo transport services to transfer redo data to the downstream database where a Streams capture process captures changes in the archived redo log files on the remote destination. See Oracle Streams Concepts and Administration for more information.
Oracle Change Data Capture staging database
This destination type supports a Change Data Capture Asynchronous AutoLog configuration remotely at a staging database. Redo data is copied from the source database to the staging database using redo transport services. The Change Data Capture configuration captures changes from the redo data. See Oracle Database Data Warehousing Guide for more information.
For discussion purposes, this guide refers to the production database as a primary database and to archival destinations as standby databases (as defined in Section 1.1). If you are using Oracle Change Data Capture, substitute the terms source and staging database for primary and standby database, respectively. If you are using Oracle Streams, substitute the terms source and downstream capture database for primary and standby database, respectively.
The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten destinations (where n = 1, 2, 3, ... 10), each of which must specify either the LOCATION or the SERVICE attribute to identify where to archive the redo data.
The LOCATION and SERVICE attributes describe either a local disk location or an Oracle Net service name that represents a standby destination to which redo transport services will transmit redo data. Specifying remote destinations with the SERVICE attribute allows Data Guard to maintain a transactionally consistent remote copy of the primary database for disaster recovery.
For every LOG_ARCHIVE_DEST_n initialization parameter that you define, specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter. The LOG_ARCHIVE_DEST_STATE_n (where n is an integer from 1 to 10) initialization parameter specifies whether the corresponding destination is currently on (enabled) or off (disabled). Table 5-1 describes the LOG_ARCHIVE_DEST_STATE_n parameter attributes.
Table 5-1 LOG_ARCHIVE_DEST_STATE_n Initialization Parameter Attributes
Example 5-1 provides an example of one destination with the LOCATION attribute.
Example 5-1 Specifying a Local Archiving Destination
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
Figure 5-2 shows what this simple configuration, consisting of a single local destination, would look like. The log writer process writes redo data to the online redo log file. As each online redo log file is filled, a log switch occurs and an ARCn process archives the filled online redo log file to an archived redo log file. The filled online redo log file is now available for reuse.
Figure 5-2 Primary Database Archiving When There Is No Standby Database
It is important to note that the configuration shown in Figure 5-2 does not include a standby database and thus does not provide disaster-recovery protection. To make this simple configuration into a Data Guard configuration that provides disaster recovery, add a standby database at a remote destination by specifying the SERVICE attribute.
Example 5-2 shows the initialization parameters that enable redo transport services to archive the online redo log on the local destination chicago and transmit redo data to a remote standby database with the Oracle Net service name boston. The example takes the default values for all of the other LOG_ARCHIVE_DEST_n attributes:
Example 5-2 Specifying a Remote Archiving Destination
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=boston'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
These initialization parameters set up a Data Guard configuration that uses archiver (ARCn) processes to archive to both the local and remote destinations. This configuration provides the maximum performance level of data protection.
Although you can create a Data Guard configuration by specifying only the LOCATION or the SERVICE attribute on the LOG_ARCHIVE_DEST_n parameter, you can optionally specify more attributes to further define each destination's behavior. Chapter 14 provides reference information for all of the LOG_ARCHIVE_DEST_n parameter attributes.
You can dynamically update most of the attribute values of the LOG_ARCHIVE_DEST_n and the LOG_ARCHIVE_DEST_STATE_n parameters using the ALTER SYSTEM SET statement.
The modifications take effect after the next log switch on the primary database. For example, to defer redo transport services from transmitting redo data to the remote standby database named boston, issue the following statements on the primary database:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston
  2> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;
The Oracle database enables you to configure a disk area called a flash recovery area that is a directory or Automatic Storage Management (ASM) disk group that serves as the default storage area for files related to recovery.
To configure a flash recovery area, use the DB_RECOVERY_FILE_DEST initialization parameter. If you create a recovery area and do not set any other local archiving destinations, LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST (meaning that archived redo log files will be sent to the flash recovery area). (See Oracle Database Backup and Recovery Basics to configure the flash recovery area and Oracle Database Administrator's Guide for more information about Automatic Storage Management and Oracle Managed Files.)
Note: The filenames for archived redo log files stored in a flash recovery area are generated automatically by Oracle Managed Files (OMF); the filenames are not based on the format specified by the LOG_ARCHIVE_FORMAT initialization parameter.
This section contains the following topics:
Sharing a Flash Recovery Area Between Primary and Standby Databases
Note: A primary database cannot write redo data to the flash recovery area of a logical standby database.
See Oracle Database Backup and Recovery Basics to configure flash recovery areas and Section 10.3.4 for information about setting up a deletion policy for archived redo log files in flash recovery areas.
If a flash recovery area has been configured and no local destinations are defined, Data Guard implicitly uses the LOG_ARCHIVE_DEST_10 destination as the flash recovery area.
When the LOG_ARCHIVE_DEST_10 destination is used, Data Guard automatically uses the default values for all of the LOG_ARCHIVE_DEST_10 parameter attributes. To override the defaults, you can dynamically set the values for most of the attributes by explicitly specifying the LOG_ARCHIVE_DEST_10 parameter. For example, the following ALTER SYSTEM SET statement specifies several attributes on the LOG_ARCHIVE_DEST_10 initialization parameter:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST LGWR MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
When setting LOG_ARCHIVE_DEST_n attributes, the TEMPLATE attribute of a LOG_ARCHIVE_DEST_n parameter overrides all other specifications for the flash recovery area. If the TEMPLATE attribute is specified for a remote destination and that destination archives redo data to a flash recovery area, the archived redo log file will use the directory and file name specified by the TEMPLATE attribute.
You can explicitly set up one or more other LOG_ARCHIVE_DEST_n destinations to point to a flash recovery area. For example, you can optionally:
Configure destinations other than LOG_ARCHIVE_DEST_10
For example, an existing Data Guard configuration may have already used the LOG_ARCHIVE_DEST_10 destination for another purpose, or you may want to release the LOG_ARCHIVE_DEST_10 destination for other uses.
To configure another archival destination to point to the flash recovery area, you must specify the LOCATION=USE_DB_RECOVERY_FILE_DEST attribute to define the new destination. For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
The implicit setting (for LOG_ARCHIVE_DEST_10 to use the flash recovery area) will be cleared.
Configure destinations in addition to the LOG_ARCHIVE_DEST_10 destination for use after a role transition
For example, you can configure one destination to be valid for standby redo log archival when the database operates in the standby role and another destination to be valid for online redo log archival when the database operates in the primary role.
To configure a LOG_ARCHIVE_DEST_n destination in addition to LOG_ARCHIVE_DEST_10, you must explicitly specify both destinations:
LOG_ARCHIVE_DEST_9='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
On a physical standby database, you can define the STANDBY_ARCHIVE_DEST parameter to point to the flash recovery area. For example:
STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'
Note: Flash recovery area destinations pointed to by the STANDBY_ARCHIVE_DEST parameter on logical standby databases (SQL Apply) are ignored.
You can share a flash recovery area between databases provided each database that shares the flash recovery area has a unique database name, specified with the DB_UNIQUE_NAME initialization parameter.
The following examples show how to specify initialization parameters on the primary and standby databases that will share a flash recovery area in the /arch/oradata location. Although the DB_UNIQUE_NAME parameter is not specified in Example 5-3, it defaults to PAYROLL, which is the name specified for the DB_NAME initialization parameter.
Example 5-3 Primary Database Initialization Parameters for a Shared Recovery Area
DB_NAME=PAYROLL
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=20G
Example 5-4 Standby Database Initialization Parameters for a Shared Recovery Area
DB_NAME=PAYROLL
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
STANDBY_ARCHIVE_DEST='LOCATION=USE_DB_RECOVERY_FILE_DEST'
DB_RECOVERY_FILE_DEST='/arch/oradata'
DB_RECOVERY_FILE_DEST_SIZE=5G
See Oracle Database Backup and Recovery Advanced User's Guide for more information about sharing a flash recovery area among multiple databases.
On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. Although you cannot use both the archiver and log writer processes to send redo data to the same destination, you can choose to use the log writer process for some destinations while archiver processes send redo data to other destinations. This section contains the following topics:
Data Guard also uses the fetch archive log (FAL) client and server to send archived redo log files to standby destinations following a network outage, for automatic gap resolution, and resynchronization. The FAL process and gap resolution are discussed in Section 5.8.
By default, redo transport services use ARCn processes to archive the online redo log files on the primary database. ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. You must use the LGWR process to transmit redo data to standby locations that operate in other data protection modes. (See Section 5.6 for more information about the Data Guard data protection modes.)
The following sections discuss these topics:
The following descriptions tell how to use the LOG_ARCHIVE_DEST_n and the LOG_ARCHIVE_MAX_PROCESSES initialization parameters.
Enabling ARCn Processes to Archive to Local or Remote Destinations
You specify attributes on the LOG_ARCHIVE_DEST_n initialization parameter to control the automated transfer of redo data from the primary database to other destinations. Because ARCn archiver processing is the default archival behavior, specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional. However, you must specify either the LOCATION attribute to archive to a local destination or the SERVICE attribute for remote archival (as described in Section 5.2.2).
Specifying the Number of ARCn Processes to be Invoked
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the maximum number of ARCn processes. By default, 4 archiver processes are invoked when the primary database instance starts, and Oracle Database dynamically adjusts the number of processes to balance the archiver workload. Thus, the actual number of archiver processes may vary at any time.
If you anticipate a heavy workload for archiving, you can increase the maximum number of archiver processes to as many as 30 by setting the LOG_ARCHIVE_MAX_PROCESSES initialization parameter. This initialization parameter is dynamic and can be altered by the ALTER SYSTEM command to increase or decrease the maximum number of archiver processes. For example:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 20;
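Because the database adjusts the number of active archivers dynamically, a query along the following lines can be used to observe how many ARCn processes are currently running (a sketch; exact columns of the V$ARCHIVE_PROCESSES view may vary by release):

```sql
-- Show the state of each configured archiver process slot.
-- Rows with STATUS = 'ACTIVE' correspond to running ARCn processes.
SELECT process, status, log_sequence, state
  FROM v$archive_processes
 WHERE status = 'ACTIVE';
```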
Figure 5-3 shows an example of archival processing in a Data Guard configuration, with a primary database located in Chicago and one physical standby database located in Boston. (This is the configuration that was created in Chapter 3.)
Archiving happens when a log switch occurs on the primary database:
On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).
On the remote destination, the remote file server (RFS) process will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (MRP process) or SQL Apply (LSP process) to apply the redo to the standby database.
Because the online redo log files are archived locally first, the LGWR process reuses the online redo log files much earlier than would be possible if the ARCn processes archived to the standby database concurrently with the local destination. A benefit of the default ARCn archival behavior is that local archiving, and hence processing on the primary database, is not affected by archiving to nonmandatory remote destinations.
As shown in Figure 5-3, you need at least 2 ARCn processes to separate local archival from remote archival. This can be done by setting the LOG_ARCHIVE_MAX_PROCESSES initialization parameter (the default setting is 4, but the maximum value is 30).
Figure 5-3 Archiving to Local Destinations Before Archiving to Remote Destinations
Because the default ARCn archival processing disassociates local archiving from remote archiving, sites that have policies to delete archived redo log files on the primary database immediately after backing them up must make sure that the standby destinations receive the corresponding redo data before deleting the archived redo log files on the primary database. You can query the V$ARCHIVED_LOG view to verify the redo data was received on standby destinations.
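Such a verification query might look like the following (illustrative; adjust the DEST_ID value to match the destination number of your standby):

```sql
-- On the primary, list the archived logs shipped to destination 2
-- (the standby) and whether each has been applied there.
SELECT sequence#, first_time, next_time, applied
  FROM v$archived_log
 WHERE dest_id = 2
 ORDER BY sequence#;
```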
You can optionally enable redo transport services to use the LGWR process to transmit redo data to remote destinations.
Using the LGWR process differs from ARCn processing (described in Section 5.3.1): instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination will be either synchronous or asynchronous, based on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations. (See Section 5.6 for information about the Data Guard data protection modes.) This section contains the following topics:
The following sections describe the LGWR, SYNC, and ASYNC attributes.
Enabling Redo Transport Services to Use the LGWR Process
You must specify the LGWR and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter to enable redo transport services to use the LGWR process to transmit redo data to remote archival destinations.
Specifying Synchronous or Asynchronous Network Transmission
The LGWR process synchronously writes to the local online redo log files at the same time it transmits redo data to the remote destination:
The SYNC attribute performs all network I/O synchronously, in conjunction with each write operation to the online redo log file, and waits for the network I/O to complete. Section 5.3.2.2 shows an example of synchronous network transmission in a Data Guard configuration. This is the default network transmission setting.
The ASYNC attribute performs all network I/O asynchronously, and control is returned to the executing application or user immediately, without waiting for the network I/O to complete. Section 5.3.2.3 shows an example of asynchronous network transmission in a Data Guard configuration.
Note: If you configure a destination to use the LGWR process, but for some reason the LGWR process becomes unable to archive to the destination, then redo transport will revert to using the ARCn process to complete archival operations.
Example 5-5 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for synchronous network transmission.
Example 5-5 Initialization Parameters for LGWR Synchronous Archival
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC NET_TIMEOUT=30'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Specifying the SYNC attribute on the LOG_ARCHIVE_DEST_n parameter is optional, because this is the default for LGWR archival processing. The NET_TIMEOUT attribute is recommended, because it controls the amount of time that the LGWR process waits for status from the network server process before terminating the network connection. If there is no reply within NET_TIMEOUT seconds, the LGWR process returns an error message.
Figure 5-4 shows a Data Guard configuration that uses the LGWR process to synchronously transmit redo data to the standby system at the same time it is writing redo data to the online redo log file on the primary database:
On the primary database, the LGWR process submits the redo data to one or more network server (LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.
On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.
A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.
Figure 5-4 LGWR SYNC Archival to a Remote Destination with Standby Redo Log Files
Example 5-6 shows the primary role LOG_ARCHIVE_DEST_n parameters that configure the LGWR process for asynchronous network transmission.
Example 5-6 Initialization Parameters for LGWR Asynchronous Archiving
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
Figure 5-5 shows the LNSn process collecting redo data from the online redo log files and transmitting it over Oracle Net to the RFS process on the standby database.
When the LGWR and ASYNC attributes are specified, the log writer process writes to the local online redo log file, while the network server (LNSn) processes (one for each destination) asynchronously transmit the redo to remote destinations. The LGWR process continues processing the next request without waiting for the LNS network I/O to complete.
If redo transport services transmit redo data to multiple remote destinations, the LNSn processes (one for each destination) initiate the network I/O to all of the destinations in parallel.
When an online redo log file fills up, a log switch occurs and an archiver process archives the log file locally, as usual.
Figure 5-5 LGWR ASYNC Archival with Network Server (LNSn) Processes
Note: Do not use the LOG_ARCHIVE_DEST or the LOG_ARCHIVE_DUPLEX_DEST initialization parameters to specify a flash recovery area destination.
Data Guard provides a secure environment and prevents the possible tampering of redo data as it is being transferred to the standby database.
Redo transport services use authenticated network sessions to transfer redo data. These sessions are authenticated using the SYS user password contained in the password file. All databases in the Data Guard configuration must use a password file, and the SYS password contained in this password file must be identical on all systems. This authentication can be performed even if Oracle Advanced Security is not installed, and provides some level of security when shipping redo.
Note: To further protect redo (for example, to encrypt redo or compute an integrity checksum value for redo traffic over the network to disallow redo tampering on the network), Oracle recommends that you install and use Oracle Advanced Security. See the Oracle Database Advanced Security Administrator's Guide.
Perform the following steps on the primary database and each standby database:
Create a password file (using the orapwd utility) on the primary and all standby databases. For example:
ORAPWD FILE=orapw PASSWORD=mypassword ENTRIES=10
This example creates a password file with 10 entries, where the password for SYS is mypassword. For redo data transmission to succeed, ensure you set the password for the SYS user account identically for every primary and standby database.
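As a quick sanity check (not part of the original steps), you can confirm which privileged users the password file records by querying the V$PWFILE_USERS view:

```sql
-- Lists users with SYSDBA/SYSOPER privileges recorded in the password file.
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```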
Set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to EXCLUSIVE or SHARED to enable Oracle to check for a password file and to specify how many databases can use the password file. For example:
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
See the Oracle Database Reference for more information about this parameter.
This section contains the following topics:
The VALID_FOR attribute enables you to configure destination attributes for both the primary and standby database roles in one server parameter file (SPFILE), so that your Data Guard configuration operates properly after a role transition. This simplifies switchovers and failovers by removing the need to enable and disable the role-specific parameter files after a role transition.
When you specify the VALID_FOR attribute of the LOG_ARCHIVE_DEST_n parameter, it identifies when redo transport services can transmit redo data to destinations, based on the following factors:
Whether the database is currently running in the primary or the standby role
Whether archival of the online redo log file, standby redo log file, or both is required depending on the current role of the database
To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role). The redo_log_type keyword identifies the destination as valid for archiving one of the following: ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES. The database_role keyword identifies the role in which the current database must be for the destination to be valid: PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES.
If you do not specify the VALID_FOR attribute for a destination, by default, archiving the online redo log and standby redo log is enabled to the destination, regardless of the database role. This default behavior is equivalent to setting the (ALL_LOGFILES,ALL_ROLES) keyword pair on the VALID_FOR attribute. For example:
LOG_ARCHIVE_DEST_1='LOCATION=/ARCH1/CHICAGO/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
Although the (ALL_LOGFILES,ALL_ROLES) keyword pair is the default, it is not recommended for every destination. For example, logical standby databases, unlike physical standby databases, are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). In most cases, the online redo log files generated by the logical standby database are located in the same directory as the standby redo log files that are receiving redo from the primary database.
Therefore, it is recommended that you define a VALID_FOR attribute for each destination so that your Data Guard configuration operates properly, including after a role transition. See the scenarios in Section 12.1 for examples of VALID_FOR attribute settings for various Data Guard configurations.
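As a simple illustration of this recommendation (destination numbers and paths here are hypothetical, not from the scenarios in Section 12.1), the following sketch keeps online redo log archival and standby redo log archival in separate destinations so each remains valid only in the appropriate role:

```sql
-- Local archival of online redo log files, valid in any role
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)'
-- Local archival of standby redo log files, valid only while
-- this database runs in the standby role
LOG_ARCHIVE_DEST_3='LOCATION=/arch2/chicago/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)'
```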
If you choose not to use the VALID_FOR attribute to configure destinations, you must maintain two database server parameter files (SPFILEs) for each database: one for when the database is in the primary role and the other for the standby role. See Chapter 12 for more configuration examples.
The DB_UNIQUE_NAME attribute enables you to specify unique database names when you configure destinations. This makes it possible to dynamically add a standby database to a Data Guard configuration that contains a Real Application Clusters primary database, when that primary database is operating in either the maximum protection or the maximum availability level of protection. The DB_UNIQUE_NAME initialization parameter is required if the LOG_ARCHIVE_CONFIG parameter has been defined.
Note: If the standby database on a remote destination has not been identified using the DB_UNIQUE_NAME initialization parameter, the standby database must be accessible before the primary instance is started.
Together, the DB_UNIQUE_NAME attribute of the LOG_ARCHIVE_DEST_n parameter and the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG parameter specify the unique name of each database in the Data Guard configuration. The names you supply must match what was defined for each database with the DB_UNIQUE_NAME initialization parameter.
For example, the following initialization parameters show the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG definitions for the primary database (chicago) in the Data Guard configuration described in Chapter 3:
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago, boston)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
The DB_UNIQUE_NAME attribute is required for remote destinations specified with the SERVICE attribute. In the example, the LOG_ARCHIVE_DEST_2 parameter specifies DB_UNIQUE_NAME=boston for the remote destination; redo transport services validate this information at the remote destination. If the names do not match, the connection to that destination is refused.
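For the connection to be accepted, the standby side must define matching names. A corresponding standby-side sketch (an assumption consistent with the chicago example above, not a reproduction of the Chapter 3 configuration) might look like this:

```sql
DB_NAME=chicago
DB_UNIQUE_NAME=boston
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/
  VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
-- Used only if boston later assumes the primary role
LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
```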
The LOG_ARCHIVE_CONFIG parameter also has SEND, NOSEND, RECEIVE, and NORECEIVE attributes:
SEND enables a database to send redo data to remote destinations
RECEIVE enables the standby database to receive redo from another database
To disable these settings, use the NOSEND and NORECEIVE keywords.
For example, to ensure the primary database never accidentally receives any archived redo data, set the LOG_ARCHIVE_CONFIG initialization parameter to NORECEIVE on the primary database, as follows:
LOG_ARCHIVE_CONFIG='NORECEIVE,DG_CONFIG=(chicago,boston)'
However, keep in mind that specifying either the NOSEND or the NORECEIVE attribute may limit the database instance's capabilities after a role transition. For example, if a standby database with the NOSEND attribute set is transitioned to the primary role, it would not be able to transmit redo data to other standby databases until you reset the parameter value to SEND. Similarly, a database that has the NORECEIVE attribute specified cannot receive redo from the primary database.
By default, the LOG_ARCHIVE_CONFIG parameter allows the primary database to send redo data to the standby database and allows the standby database to receive redo from the primary database for archiving. This is equivalent to setting both the SEND and RECEIVE attributes on the LOG_ARCHIVE_CONFIG parameter.
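For example, the default behavior is equivalent to explicitly specifying both attributes (database names taken from the chapter's chicago/boston example; this explicit form is a sketch):

LOG_ARCHIVE_CONFIG='SEND,RECEIVE,DG_CONFIG=(chicago,boston)'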
Note: The LOG_ARCHIVE_CONFIG initialization parameter replaces the REMOTE_ARCHIVE_ENABLE initialization parameter, which is deprecated. Do not specify both parameters in the same SPFILE or text initialization parameter file.
To handle archiving failures, you can use the REOPEN, MAX_FAILURE, and ALTERNATE attributes of the LOG_ARCHIVE_DEST_n parameter to specify what actions are to be taken when archival processing to a destination fails. These actions include:
Retrying the archival operation to a failed destination after a specified period of time, up to a limited number of times
Using an alternate or substitute destination
Controlling the number of attempts to reestablish communication and resume sending redo data to a failed destination.
Use the REOPEN attribute to determine if and when the ARCn process or the LGWR process attempts to transmit redo data again to a failed destination following an error.
Use the REOPEN=seconds attribute to specify the minimum number of seconds that must elapse following an error before the archiving process tries again to access a failed destination. The default value is 300 seconds. The value set for the REOPEN attribute applies to all errors, not just connection failures. You can turn off the option by specifying REOPEN=0, which prevents the destination from being retried after a failure occurs.
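For example, the following destination setting (a sketch based on the chapter's boston destination) waits at least 60 seconds after an error before retrying:

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC REOPEN=60'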
If transmission to the alternate destination fails and the REOPEN attribute is set to zero (0), redo transport services will attempt to send redo data to the alternate destination the next time redo data is archived.
The ALTERNATE attribute defines an alternate archiving destination that can be used when the original archiving destination fails. If no alternate destination is specified, the destination does not automatically change to another destination upon failure.
Figure 5-6 shows a scenario where redo data is archived to a local disk device. If the original destination device becomes full or unavailable, the archival operation is automatically redirected to the alternate destination device.
Figure 5-6 Archival Operation to an Alternate Destination Device
The REOPEN attribute takes precedence over the ALTERNATE attribute. The alternate destination is used only if one of the following conditions is true:
A value of zero (0) is specified for the REOPEN attribute.
A nonzero value is specified for the REOPEN attribute and the number of failed attempts exceeds the MAX_FAILURE count.
The ALTERNATE attribute takes precedence over the MANDATORY attribute. This means that a destination fails over to a valid alternate destination even if the current destination is mandatory.
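As an illustration (the directory paths are hypothetical), the following parameters define a mandatory local destination with an alternate that takes over if the first disk fills or fails:

LOG_ARCHIVE_DEST_1='LOCATION=/disk1/arch MANDATORY ALTERNATE=LOG_ARCHIVE_DEST_2'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='LOCATION=/disk2/arch'
LOG_ARCHIVE_DEST_STATE_2=ALTERNATE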
Use the MAX_FAILURE attribute to specify the maximum number of consecutive times that redo transport services attempt to transmit redo data to a failed destination. To limit the number of consecutive attempts that will be made to reestablish communication with a failed destination, use the REOPEN attribute in conjunction with the MAX_FAILURE attribute. Once the specified number of consecutive attempts is exceeded, the destination is treated as if the REOPEN attribute was set to zero.
The REOPEN attribute is required when you use the MAX_FAILURE attribute. Example 5-7 shows how to set a retry time of 60 seconds and limit retries to 3 attempts.
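A sketch of such a destination (the location path is hypothetical; the attribute values are those described above), retrying every 60 seconds and giving up after 3 consecutive failures:

LOG_ARCHIVE_DEST_1='LOCATION=/arc_dest REOPEN=60 MAX_FAILURE=3'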
Data Guard provides three modes of data protection: maximum protection, maximum availability, and maximum performance. The level of data protection you choose controls what happens if the primary database loses its connection to the standby database. This section contains the following topics:
To determine the appropriate data protection mode to use, review the following descriptions of the data protection modes to help assess your business requirements for data availability against user demands for response time and performance. Also, see Section 5.6.2 for information about setting up the data protection mode.
This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to the standby redo log on at least one standby database before the transaction commits. To ensure data loss cannot occur, the primary database shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log. For multiple-instance RAC databases, Data Guard shuts down the primary database if it is unable to write the redo records to at least one properly configured database instance. The maximum protection mode requires that at least one standby instance has a standby redo log and that the LGWR, SYNC, and AFFIRM attributes be used on the LOG_ARCHIVE_DEST_n parameter for this destination.
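For example, a destination meeting these requirements might look like the following (service and unique name taken from the chapter's boston standby; the exact setting depends on your configuration):

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'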
This protection mode provides the highest level of data protection that is possible without compromising the availability of the primary database. Like maximum protection mode, a transaction will not commit until the redo needed to recover that transaction is written to the local online redo log and to at least one remote standby redo log. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Like maximum protection mode, the maximum availability mode requires that you:
Configure standby redo log files on at least one standby database.
Set the SYNC, LGWR, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one standby database.
This protection mode (the default) provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database's redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions that create the redo data.
When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance.
The maximum performance mode enables you to either set the LGWR and ASYNC attributes, or set the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the primary database fails, you can reduce the amount of data that is not received on the standby destination by setting the LGWR and ASYNC attributes.
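For example, either of the following destination settings (sketched with the chapter's boston standby) satisfies maximum performance mode; the first minimizes the amount of redo that could be lost:

LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2='SERVICE=boston ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'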
To set up redo transport services and specify a level of data protection for the Data Guard configuration, perform the following steps.
Step 1 Configure the LOG_ARCHIVE_DEST_n parameters on the primary database.
On the primary database, configure the LOG_ARCHIVE_DEST_n parameter attributes appropriately. Each of the Data Guard data protection modes requires that at least one standby database in the configuration meet the minimum set of requirements listed in Table 5-2.
Table 5-2 Minimum Requirements for Data Protection Modes
Note: Oracle recommends that a Data Guard configuration that is running in maximum protection mode contains at least two standby databases that meet the requirements listed in Table 5-2. That way, the primary database can continue processing if one of the standby databases cannot receive redo data from the primary database.
The following example shows how to configure the maximum availability mode:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston
  2> OPTIONAL LGWR SYNC AFFIRM
  3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  4> DB_UNIQUE_NAME=boston';
If they are not already specified in the SPFILE, you should also specify unique names with the DB_UNIQUE_NAME
initialization parameter and list all databases on the LOG_ARCHIVE_CONFIG
parameter with the DG_CONFIG
attribute. For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
This will enable the dynamic addition of a standby database to a Data Guard configuration that has a Real Application Clusters primary database running in either maximum protection or maximum availability mode.
Step 2 If you are upgrading the protection mode, perform this step.
Perform this step only if you are upgrading the protection mode (for example, from maximum performance to maximum availability mode). Otherwise, go to Step 3.
Assume this example is upgrading the Data Guard configuration from the maximum performance mode to the maximum availability mode. Shut down the primary database and restart it in mounted mode:
SQL> SHUTDOWN IMMEDIATE; SQL> STARTUP MOUNT;
For a Real Application Clusters database, shut down all of the primary instances but start and mount only one primary instance.
Step 3 Set the data protection mode.
To specify a data protection mode, issue the SQL ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {PROTECTION | AVAILABILITY | PERFORMANCE}
statement on the primary database. For example, the following statement specifies the maximum availability mode:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
Step 4 Open the primary database.
If you shut down and restarted the database to upgrade the protection mode, open the database:
SQL> ALTER DATABASE OPEN;
If you are downgrading the protection mode, the database will already be open.
Step 5 Configure the LOG_ARCHIVE_DEST_n parameters on standby databases.
On the standby databases, configure the LOG_ARCHIVE_DEST_n parameter attributes so the configuration can continue to operate in the new protection mode after a switchover.
For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago
  2> OPTIONAL LGWR SYNC AFFIRM
  3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  4> DB_UNIQUE_NAME=chicago';
Step 6 Confirm the configuration is operating in the new protection mode.
Query the V$DATABASE view to confirm the Data Guard configuration is operating in the new protection mode. For example:
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;

PROTECTION_MODE       PROTECTION_LEVEL
--------------------- ---------------------
MAXIMUM AVAILABILITY  MAXIMUM AVAILABILITY
See Chapter 15 and Oracle Database SQL Reference for information about SQL statements.
This section contains the following topics:
Specifying Alternate Directory Locations for Archived Redo Log Files
Sharing a Log File Destination Among Multiple Standby Databases
Typically, when redo data is received from the primary database, it is written to archived redo log files that are stored in the directory you specify with the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter. Alternatively, you can specify the STANDBY_ARCHIVE_DEST initialization parameter on the standby database to indicate an alternate directory where the archived redo log files are to be stored when received from the primary database.
If both parameters are specified, the STANDBY_ARCHIVE_DEST initialization parameter overrides the directory location specified with the LOG_ARCHIVE_DEST_n parameter.
The location where archived redo log files are stored on the standby database is determined by the following list of rules. When the database instance is started, the rules are evaluated in the order listed:
If the STANDBY_ARCHIVE_DEST initialization parameter is specified on the standby database, that location is used.
If the LOG_ARCHIVE_DEST_n parameter contains the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then the location specified for this destination is used.
If the COMPATIBLE parameter is set to 10.0 or greater and none of the LOG_ARCHIVE_DEST_n parameters contain the VALID_FOR=(STANDBY_LOGFILE,*) attribute, then an arbitrary LOG_ARCHIVE_DEST_n parameter that is valid for the destination is used.
If none of these initialization parameters have been specified, then archived redo log files are stored in the default location for the STANDBY_ARCHIVE_DEST initialization parameter.
To see the implicit default value of the STANDBY_ARCHIVE_DEST initialization parameter, query the V$ARCHIVE_DEST view:
SQL> SELECT DEST_NAME, DESTINATION FROM V$ARCHIVE_DEST
  2> WHERE DEST_NAME='STANDBY_ARCHIVE_DEST';

DEST_NAME
------------------------------
DESTINATION
------------------------------
STANDBY_ARCHIVE_DEST
/oracle/dbs/arch
Redo transport services use the value specified with the STANDBY_ARCHIVE_DEST initialization parameter in conjunction with the LOG_ARCHIVE_FORMAT parameter to generate the filenames for the archived redo log files on the standby site. For example:
STANDBY_ARCHIVE_DEST='/arc_dest/arls'
LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
In the example, %s corresponds to the sequence number, and %r corresponds to the resetlogs ID. Together, these ensure unique names are constructed for the archived redo log files across multiple incarnations of the database. The %t, which is required for Real Application Clusters configurations, corresponds to the thread number.
For a physical standby database, redo transport services store the fully qualified filenames in the standby database control file, and Redo Apply uses this information to perform recovery on the standby database.
Note: If you have specified the TEMPLATE attribute of the LOG_ARCHIVE_DEST_n parameter, it will override the filename generated with the STANDBY_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT parameters. See Chapter 14 for information about the TEMPLATE attribute.
To display the list of archived redo log files that are on the standby system, query the V$ARCHIVED_LOG view on the standby database:
SQL> SELECT NAME FROM V$ARCHIVED_LOG;

NAME
--------------------------------------------------------------------------------
/arc_dest/log_1_771.arc
/arc_dest/log_1_772.arc
/arc_dest/log_1_773.arc
/arc_dest/log_1_774.arc
/arc_dest/log_1_775.arc
You can specify a policy for reusing the online redo log file by setting the OPTIONAL or MANDATORY attribute of the LOG_ARCHIVE_DEST_n parameter. By default, remote destinations are set to OPTIONAL. The archival operation of an optional destination can fail, and the online redo log file can be reused even though transmitting the redo data and writing the log contents was not successful. If the archival operation of a mandatory destination fails, online redo log files cannot be overwritten until the failed archive is completed to the mandatory destination.
By default, one local destination is mandatory even if you designate all destinations to be optional.
Example 5-8 shows how to set a mandatory local archiving destination and enable that destination. When specifying the MANDATORY attribute, also consider specifying the REOPEN and MAX_FAILURE attributes as described in Section 5.5 to handle failure conditions.
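A sketch of such a configuration (the directory path is hypothetical), reconstructing the settings that Example 5-8 describes:

LOG_ARCHIVE_DEST_1='LOCATION=/arch/dest MANDATORY'
LOG_ARCHIVE_DEST_STATE_1=ENABLE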
This section contains the following topics:
The easiest way to verify the standby redo log has an appropriate number of log file groups is to examine the RFS process trace file and database alert log. If either log contains messages that indicate the RFS process frequently has to wait for a group because archiving did not complete, then add more log file groups to the standby redo log. The additional standby redo log file groups give the archival operation time to complete before the standby redo log file is reused by the RFS process.
In some cases, it might not be necessary to create a complete group of standby redo log files. A group could already exist, but may not be complete because one or more members were dropped (for example, because of disk failure). In this case, you can add new members to an existing group.
To add new members to a standby redo log file group, use the ALTER DATABASE statement with the ADD STANDBY LOGFILE MEMBER clause. The following statement adds a new member to standby redo log file group number 2:
SQL> ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/disk1/oracle/dbs/log2b.rdo'
  2> TO GROUP 2;
Use fully qualified filenames of new members to indicate where the file should be created. Otherwise, files will be created in either the default or current directory of the database, depending on your operating system.
As archived redo log files are generated and RMAN backups are made, Oracle adds new records to the reusable section of the control file. If no records are available for reuse (because all records are still within the number of days specified by CONTROL_FILE_RECORD_KEEP_TIME), then the control file is expanded and new records are added to the control file.
The maximum control file size is 20000 database blocks. If DB_BLOCK_SIZE equals 8192, then the maximum control file size is 156 MB. If the control files are stored in pre-created volumes, then the volumes that contain the primary and standby control files should be sized to accommodate a control file of maximum size. If the control file volume is too small and cannot be extended, then existing records in the control file will be overwritten before their intended reuse. This behavior is indicated by the following message in the alert log:
krcpwnc: following controlfile record written over:
The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter specifies the minimum number of days that must pass before a reusable record in the control file can be reused. Setting this parameter appropriately prevents redo transport services from overwriting a reusable record in the control file and ensures redo information remains available on the standby database:
Set CONTROL_FILE_RECORD_KEEP_TIME to a value that allows all on-disk backup information to be retained in the control file. CONTROL_FILE_RECORD_KEEP_TIME specifies the number of days that records are kept within the control file before becoming a candidate for reuse.
Set CONTROL_FILE_RECORD_KEEP_TIME to a value slightly longer than the age of the oldest backup file that you intend to keep on disk, as determined by the size of the backup area.
For example, if the backup area is sized to maintain two full backups that are taken every 7 days, as well as daily incremental backups and archived redo log files, then set CONTROL_FILE_RECORD_KEEP_TIME to a value of 21 or 30. Records older than this will be reused. However, the backup metadata will still be available in the RMAN recovery catalog.
Make sure you specify a large enough value if an apply delay is also set for the standby database (described in Section 6.2.2). The range of values for this parameter is 0 to 365 days. The default value is 7 days.
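For example, to keep reusable control file records for 30 days (a value chosen per the sizing discussion above):

SQL> ALTER SYSTEM SET CONTROL_FILE_RECORD_KEEP_TIME=30;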
See Oracle Database Reference for more details about the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter, and see Oracle Database Backup and Recovery Advanced User's Guide for more information about backup retention.
Use the DEPENDENCY attribute of the LOG_ARCHIVE_DEST_n initialization parameter to define one archival destination to receive redo data on behalf of several destinations, rather than transmitting redo data to each individual destination.
Figure 5-7 shows a Data Guard configuration in which the primary database transports redo data to one archiving destination that shares its archived redo log files with both a logical standby database and a physical standby database. These destinations are dependent on the successful completion of archival operations to the parent destination.
Figure 5-7 Data Guard Configuration with Dependent Destinations
Specifying a destination dependency can be useful in the following situations:
When you configure a physical standby database and a logical standby database on the same system.
When you configure the standby database and the primary database on the same system, so that the archived redo log files are implicitly accessible to the standby database.
When clustered file systems are used to provide remote standby databases with access to the primary database archived redo log files.
When operating system-specific network file systems are used, providing remote standby databases with access to the primary database archived redo log files.
For example, assume there are two standby databases stdby1 and stdby2 that reside on the same piece of hardware. There is no need to use network bandwidth and disk space to send the same redo data to both destinations. The databases can share the same archived redo log files if you use the DEPENDENCY attribute to designate one of the destinations as a dependent destination. That is, the primary database sends redo to be archived on the destination that is not defined as the dependent destination. If the redo data successfully arrives at that destination, the primary database considers it archived to both destinations. For example:
LOG_ARCHIVE_DEST_1='LOCATION=DISK1 MANDATORY'
LOG_ARCHIVE_DEST_2='SERVICE=stdby1 OPTIONAL'
LOG_ARCHIVE_DEST_3='SERVICE=stdby2 OPTIONAL DEPENDENCY=LOG_ARCHIVE_DEST_2'
With these parameter definitions, the primary database transmits redo data to stdby1 but not to stdby2. The stdby2 database instead recovers redo from the archived redo log files that are shipped to stdby1.
An archive gap can occur on the standby system when it has not received one or more archived redo log files generated by the primary database. The missing archived redo log files are the gap. If there is a gap, it is automatically detected and resolved by Data Guard by copying the missing sequence of log files to the standby destination. For example, an archive gap can occur when the network becomes unavailable and automatic archiving from the primary database to the standby database temporarily stops. When the network is available again, automatic transmission of the redo data from the primary database to the failed standby database resumes.
Data Guard requires no manual intervention by the DBA to detect and resolve such gaps. The following sections describe gap detection and resolution.
An archive gap can occur whenever the primary database archives a log locally, but the log is not received at the standby site. Every minute, the primary database polls its standby databases to see if there are gaps in the sequence of archived redo log files.
Gap recovery is handled through the polling mechanism. For physical and logical standby databases, Oracle Change Data Capture, and Oracle Streams, Data Guard performs gap detection and resolution by automatically retrieving missing archived redo log files from the primary database. No extra configuration settings are required to poll the standby databases, to detect any gaps, or to resolve the gaps.
The important consideration here is that automatic gap recovery is contingent on the availability of the primary database. If the primary database is not available and you have a configuration with multiple physical standby databases, you can set up additional initialization parameters so that the Redo Apply can resolve archive gaps from another standby database, as described in Section 5.8.3.
See Section 12.11 for a scenario that shows how to resolve a gap manually.
Note: Prior to Oracle Database 10g Release 1, the FAL client and server were used to resolve gaps from the primary database.
The fetch archive log (FAL) client and server resolve gaps detected in the range of archived redo log files generated at the primary database and received at the physical standby database.
The FAL client requests the transfer of archived redo log files automatically.
The FAL server services the FAL requests coming from the FAL client.
The FAL mechanism handles the following types of archive gaps and problems:
When creating a physical or logical standby database, the FAL mechanism can automatically retrieve any archived redo log files generated during a hot backup of the primary database.
When there are problems with archived redo log files that have already been received on the standby database, the FAL mechanism can automatically retrieve archived redo log files to resolve any of the following situations:
When the archived redo log file is deleted from disk before it is applied to the standby database.
When the archived redo log file cannot be applied because of a disk corruption.
When the archived redo log file is accidentally replaced by another file (for example, a text file) that is not an archived redo log file before the redo data has been applied to the standby database.
When you have multiple physical standby databases, the FAL mechanism can automatically retrieve missing archived redo log files from another physical standby database.
The FAL client and server are configured using the FAL_CLIENT and FAL_SERVER initialization parameters that are set on the standby database. Define the FAL_CLIENT and FAL_SERVER initialization parameters only for physical standby databases in the initialization parameter file, as shown in the following table:
Parameter | Function | Syntax |
---|---|---|
FAL_SERVER | Specifies the network service name that the standby database should use to connect to the FAL server. It can consist of multiple values in a list. | FAL_SERVER=net_service_name |
FAL_CLIENT | Specifies the network service name that the FAL server should use to connect to the standby database. | FAL_CLIENT=net_service_name |
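For example, on the boston physical standby from the chapter's chicago/boston configuration, the parameters might be set as follows (this sketch assumes Oracle Net service names that match the database unique names, which may not hold in your environment):

FAL_SERVER=chicago
FAL_CLIENT=boston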
In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
On a physical standby database:
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1             7             10
The output from the previous example indicates your physical standby database is currently missing log files from sequence 7 to sequence 10 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND
  2> SEQUENCE# BETWEEN 7 AND 10;

NAME
--------------------------------------------------------------------------------
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE
statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
After you register these log files on the physical standby database, you can restart Redo Apply.
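For example, Redo Apply can be restarted in the background with the following statement (one of several Redo Apply startup options; see the log apply services discussion elsewhere in this guide):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;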
Note: The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
On a logical standby database:
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
  2> WHERE NEXT_CHANGE# NOT IN
  3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
  4> ORDER BY THREAD#, SEQUENCE#;

   THREAD#  SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
         1          6 /disk1/oracle/dbs/log-1292880008_6.arc
         1         10 /disk1/oracle/dbs/log-1292880008_10.arc
Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database.
For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_7.arc';
After you register these log files on the logical standby database, you can restart SQL Apply.
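SQL Apply can then be restarted with, for example:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;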
Note: The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
This section contains the following topics:
This section describes using views to monitor redo log archival activity for the primary database. See Oracle Data Guard Broker and Oracle Enterprise Manager online help for more information about the graphical user interface that automates many of the tasks involved in monitoring a Data Guard environment.
Step 1 Determine the status of redo log files.
Enter the following query on the primary database to determine the status of all online redo log files:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
Step 2 Determine the most recent archived redo log file.
Enter the following query on the primary database to determine the most recently archived sequence number for each thread:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query on the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
  2> FROM V$ARCHIVE_DEST_STATUS
  3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION        STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID                 1           947
standby1           VALID                 1           947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
  3> LOCAL WHERE
  4> LOCAL.SEQUENCE# NOT IN
  5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  6> THREAD# = LOCAL.THREAD#);

  THREAD# SEQUENCE#
--------- ---------
        1        12
        1        13
        1        14
See Appendix A for details about monitoring the archiving status of the primary database.
Step 5 Trace the progression of transmitted redo on the standby site.
To see the progression of the transmission of redo data to the standby destination, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files. See Appendix G for complete details and examples.
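Although this step sets the parameter in the initialization parameter files, LOG_ARCHIVE_TRACE can also be changed on a running instance with ALTER SYSTEM. The trace level shown here is an assumption chosen only for illustration; the documented level values and their meanings are in Appendix G:

```sql
-- Illustrative only: enable tracing at an assumed level, then disable it.
-- Trace levels are additive; consult Appendix G for the documented values.
ALTER SYSTEM SET LOG_ARCHIVE_TRACE=1;
ALTER SYSTEM SET LOG_ARCHIVE_TRACE=0;
```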
This section describes the wait events that monitor the performance of the redo transport services that were specified on the primary database with the ARCH, LGWR, SYNC, and ASYNC attributes on the LOG_ARCHIVE_DEST_n initialization parameter.
The following sections describe the wait events and associated timing information that are displayed by the V$SYSTEM_EVENT view:
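As a starting point, a query of the following form (the event-name filters are illustrative, not prescribed by this guide) retrieves the cumulative counts and timing for the redo transport wait events listed in the tables that follow:

```sql
-- Illustrative: show cumulative wait counts and times for the
-- redo transport wait events described in Tables 5-3 through 5-5
SELECT EVENT, TOTAL_WAITS, TIME_WAITED, AVERAGE_WAIT
  FROM V$SYSTEM_EVENT
 WHERE EVENT LIKE 'ARCH wait%'
    OR EVENT LIKE 'LNS wait%'
    OR EVENT = 'LGWR wait on LNS';
```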
For ARCn archival processing, Table 5-3 shows several of the wait events that monitor the time it takes to spawn or delete RFS connections and to send the redo data to the standby database when using ARCH for the transport mode. See Section 5.3.1 for information about ARCn archival processing.
Table 5-3 Wait Events for Destinations Configured with the ARCH Attribute
Wait Event | Monitors the Amount of Time Spent By . . . |
---|---|
ARCH wait on ATTACH | All ARCn processes to spawn an RFS connection. |
ARCH wait on SENDREQ | All ARCn processes to write the received redo data to disk as well as open and close the remote archived redo log files. |
ARCH wait on DETACH | All ARCn processes to delete an RFS connection. |
For LGWR SYNC archival processing, Table 5-4 shows several of the wait events that monitor the time it takes for the LGWR process on the primary database to:
Complete writing to the online redo log files on the primary database
Transmit the redo data to the remote standby destination
Wait for the redo data to be written to the standby redo log files
Receive acknowledgment from the remote standby destination
See Section 5.3.2 for information about LGWR SYNC archival processing.
Table 5-4 Wait Events for Destinations Configured with the LGWR SYNC Attributes
Wait Event | Monitors the Amount of Time Spent By . . . |
---|---|
LGWR wait on LNS | The LGWR process waiting to receive messages from the LNSn process. |
LNS wait on ATTACH | All network servers to spawn an RFS connection. |
LNS wait on SENDREQ | All network servers to write the received redo data to disk as well as open and close the remote archived redo log files. |
LNS wait on DETACH | All network servers to delete an RFS connection. |
For LGWR ASYNC archival processing, Table 5-5 shows several of the wait events that monitor the time it takes to write the redo data to the online redo log files on the primary database. See Section 5.3.2 for information about LGWR ASYNC archival processing.
Table 5-5 Wait Events for Destinations Configured with the LGWR ASYNC Attributes
Wait Event | Monitors the Amount of Time Spent By . . . |
---|---|
LNS wait on DETACH | All network servers to delete an RFS connection. |
LNS wait on ATTACH | All network servers to spawn an RFS connection. |
LNS wait on SENDREQ | All network servers to write the received redo data to disk as well as open and close the remote archived redo log files. |
True ASYNC Control File TXN Wait | The LNSn process to get hold of the control file transaction during its lifetime. |
True ASYNC Wait for ARCH log | The LNSn process waiting to see the archived redo log (if the LNSn process is archiving a current log file and the log is switched out). |
Waiting for ASYNC dest activation | The LNSn process waiting for an inactive destination to become active. |
True ASYNC log-end-of-file wait | The LNSn process waiting for the next bit of redo after it has reached the logical end of file. |
Footnote Legend
Footnote 1: The managed recovery process (MRP) applies archived redo log files to the physical standby database, and automatically determines the optimal number of parallel recovery processes at the time it starts. The number of parallel recovery slaves spawned is based on the number of CPUs available on the standby server.