Oracle® Data Guard Concepts and Administration 10g Release 1 (10.1) Part Number B10823-01
This chapter steps you through the process of creating a logical standby database. It includes the following main topics:
The steps described in this chapter configure the standby database for maximum performance mode, which is the default data protection mode. Chapter 5 provides information about configuring the different data protection modes.
Before you create a standby database, you must first ensure the primary database is properly configured.
Table 4-1 provides a checklist of the tasks that you perform on the primary database to prepare for logical standby database creation. There is also a reference to the section that describes the task in more detail.
Task |
---|
Determine Support for Datatypes and Storage Attributes for Tables |
Ensure Table Rows in the Primary Database Can Be Uniquely Identified |
Before setting up a logical standby database, ensure the logical standby database can maintain the datatypes and tables in your primary database.
The following datatypes are supported in logical standby databases:

CHAR
NCHAR
VARCHAR2 and VARCHAR
NVARCHAR2
NUMBER
DATE
TIMESTAMP
TIMESTAMP WITH TIME ZONE
TIMESTAMP WITH LOCAL TIME ZONE
INTERVAL YEAR TO MONTH
INTERVAL DAY TO SECOND
RAW
CLOB (including both fixed-width and variable-width character sets)
NCLOB
BLOB
LONG
LONG RAW
BINARY_FLOAT
BINARY_DOUBLE

Changes to some schemas are automatically skipped by SQL Apply. To determine exactly which schemas will be skipped, query the DBA_LOGSTDBY_SKIP view.
To determine if the primary database contains unsupported objects, query the DBA_LOGSTDBY_UNSUPPORTED view. See Chapter 14, "Views Relevant to Oracle Data Guard" for more information about the DBA_LOGSTDBY_UNSUPPORTED view.
It is important to identify unsupported database objects on the primary database before you create a logical standby database. This is because changes made to unsupported datatypes, tables, sequences, or views on the primary database will not be propagated to the logical standby database. Moreover, no error message will be returned.
For example, use the following query on the primary database to list the schema and table names of primary database tables that are not supported by logical standby databases:
SQL> SELECT DISTINCT OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
  2> ORDER BY OWNER,TABLE_NAME;

OWNER       TABLE_NAME
----------- --------------------------
HR          COUNTRIES
OE          ORDERS
OE          CUSTOMERS
OE          WAREHOUSES
To view the column names and datatypes for one of the tables listed in the previous query, use a SELECT statement similar to the following:
SQL> SELECT COLUMN_NAME,DATA_TYPE FROM DBA_LOGSTDBY_UNSUPPORTED
  2> WHERE OWNER='OE' AND TABLE_NAME = 'CUSTOMERS';

COLUMN_NAME                     DATA_TYPE
------------------------------- -------------------
CUST_ADDRESS                    CUST_ADDRESS_TYP
PHONE_NUMBERS                   PHONE_LIST_TYP
CUST_GEO_LOCATION               SDO_GEOMETRY
If the primary database contains unsupported tables, log apply services automatically exclude these tables when applying redo data to the logical standby database.
Note: If you determine that the critical tables in your primary database will not be supported on a logical standby database, then you might want to consider using a physical standby database.
By default, all SQL statements except those in the following list are applied to a logical standby database if they are executed on a primary database:
ALTER DATABASE
ALTER SESSION
ALTER MATERIALIZED VIEW
ALTER MATERIALIZED VIEW LOG
ALTER SYSTEM
CREATE CONTROL FILE
CREATE DATABASE
CREATE DATABASE LINK
CREATE PFILE FROM SPFILE
CREATE SCHEMA AUTHORIZATION
CREATE MATERIALIZED VIEW
CREATE MATERIALIZED VIEW LOG
CREATE SPFILE FROM PFILE
DROP DATABASE LINK
DROP MATERIALIZED VIEW
DROP MATERIALIZED VIEW LOG
EXPLAIN
LOCK TABLE
SET CONSTRAINTS
SET ROLE
SET TRANSACTION
Oracle PL/SQL supplied packages that do not modify system metadata or user data leave no footprint in the archived redo log files, and hence are safe to use on the primary database. Examples of such packages include DBMS_OUTPUT, DBMS_RANDOM, DBMS_PIPE, DBMS_DESCRIBE, DBMS_OBFUSCATION_TOOLKIT, DBMS_TRACE, and DBMS_METADATA.

Oracle PL/SQL supplied packages that do not modify system metadata but may modify user data are supported by SQL Apply, as long as the modified user data is in the category of supported datatypes. Examples of such packages include DBMS_LOB, DBMS_SQL, and DBMS_TRANSACTION.

Oracle PL/SQL supplied packages that modify system metadata typically are not supported by SQL Apply, and therefore their effects are not visible on the logical standby database. Examples of such packages include DBMS_JAVA, DBMS_REGISTRY, DBMS_ALERT, DBMS_SPACE_ADMIN, DBMS_REFRESH, DBMS_SCHEDULER, and DBMS_AQ.
Specific support for DBMS_JOB has been provided. Job execution is suspended on a logical standby database and jobs cannot be scheduled directly on the standby database. However, jobs submitted on the primary database are replicated on the standby database. In the event of a switchover or failover, jobs scheduled on the original primary database will automatically begin running on the new primary database.
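As a quick sanity check after a switchover or failover, you can confirm that jobs submitted on the original primary database were replicated by querying the DBA_JOBS view on the new primary database. This is a sketch; the jobs returned depend entirely on your system:

SQL> SELECT JOB, WHAT, BROKEN FROM DBA_JOBS;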
See PL/SQL Packages and Types Reference for more information about all of the Oracle PL/SQL supplied packages.
Because the ROWIDs on a logical standby database might not be the same as the ROWIDs on the primary database, a different mechanism must be used to match the updated row on the primary database to its corresponding row on the logical standby database. SQL Apply matches rows using a primary key or a unique index on the table, supplemented where necessary by additional column data written through supplemental logging.

Oracle recommends that you add a primary key or a unique index to tables on the primary database, whenever appropriate and possible, to ensure SQL Apply can efficiently apply data updates to the logical standby database.
Perform the following steps to ensure SQL Apply can uniquely identify table rows.
Query the DBA_LOGSTDBY_NOT_UNIQUE view to identify tables in the primary database that do not have a primary key or unique index with NOT NULL columns. The following query displays a list of tables that SQL Apply might not be able to uniquely identify:
SQL> SELECT OWNER, TABLE_NAME, BAD_COLUMN FROM DBA_LOGSTDBY_NOT_UNIQUE
  2> WHERE TABLE_NAME NOT IN (SELECT TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED);
Some of the tables displayed in the DBA_LOGSTDBY_NOT_UNIQUE view can still be supported because supplemental logging (which you will enable in Section 4.2.2.1) adds information that uniquely identifies the row containing the redo data. The presence or absence of a primary key or unique index can affect supplemental logging as follows: if the table has a primary key or a unique index with NOT NULL columns, the amount of information added to the online redo log file during supplemental logging is minimal.

The value of the BAD_COLUMN column will be either Y or N, as described in the following list:
Y

Indicates a table column is defined using an unbounded datatype, such as CLOB or BLOB. SQL Apply attempts to maintain these tables, but you must ensure the application provides uniqueness in bounded columns only. Note that if two rows in the table match except for the data in their LOB columns, then the table cannot be maintained properly and SQL Apply will stop.

N

Indicates the table contains enough column information to maintain the table in a logical standby database.
If your application ensures the rows in a table are unique, you can create a disabled primary key RELY constraint on the table. This avoids the overhead of maintaining a primary key on the primary database. See Oracle Database SQL Reference for ALTER TABLE statement syntax and usage information.

To create a disabled RELY constraint on a primary database table, use the ALTER TABLE statement with a RELY DISABLE clause. The following example creates a disabled RELY constraint on a table named mytab where rows can be uniquely identified using the id and name columns:
SQL> ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;
The RELY constraint tells the system to assume the rows are unique. Be careful to select columns for the disabled RELY constraint that will uniquely identify a row. If the columns selected for the RELY constraint do not uniquely identify the row, SQL Apply fails to apply data from the archived redo log file or standby redo log file to the logical standby database.
To improve the performance of SQL Apply, add an index to the columns that uniquely identify the row on the logical standby database. Failure to do this results in full table scans.
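Because the database guard prevents ordinary sessions from modifying objects maintained on a logical standby database, you must bypass the guard for your session before creating such an index. The following sketch reuses the hypothetical mytab table from the earlier RELY example; the index name is illustrative:

SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE INDEX mytab_idx ON mytab (id, name);
SQL> ALTER SESSION ENABLE GUARD;

Re-enabling the guard immediately afterward ensures the standby remains protected against accidental modifications.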
See Oracle Database Reference for more information about the DBA_LOGSTDBY_NOT_UNIQUE view, Oracle Database SQL Reference for more information about creating RELY constraints, and Section 9.4 for information about RELY constraints and actions you can take to increase performance on a logical standby database.
This section describes the tasks you perform to create a logical standby database.
Table 4-2 provides a checklist of the tasks that you perform to create a logical standby database and specifies on which database or databases you perform each task. There is also a reference to the section that describes the task in more detail.
Task | Database |
---|---|
Create a Physical Standby Database | Primary |
Prepare the Primary Database to Support a Logical Standby Database | Primary |
Prepare the Physical Standby Database to Transition to a Logical Standby Database | Standby |
Start and Activate the Logical Standby Database | Standby |
Verify the Logical Standby Database Is Performing Properly | Standby |
You create a logical standby database by first creating a physical standby database and then transitioning it into a logical standby database, as follows:
Follow the instructions in Chapter 3 to create a physical standby database.
After you complete the steps in Section 3.2.6 to start the physical standby database and Redo Apply, allow recovery to continue until the physical standby database is consistent with the primary database, including all database structural changes (such as adding or dropping datafiles).
This section contains the following topics:
Supplemental logging must be enabled on the primary database to support a logical standby database. Because an Oracle database logs only the columns that were modified, the logged data is not always sufficient to uniquely identify the row that changed, so additional (supplemental) information must be put into the stream of redo data. The supplemental information that is added to the redo data helps SQL Apply to correctly identify and maintain tables in the logical standby database.
To determine if supplemental logging is enabled on the primary database, query the V$DATABASE fixed view. For example:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_PK AS PK_LOG,
  2> SUPPLEMENTAL_LOG_DATA_UI AS UI_LOG
  3> FROM V$DATABASE;

PK_LOG UI_LOG
------ ------
NO     NO
In this example, the NO values indicate that supplemental logging is not enabled on the primary database.
If supplemental logging is enabled, then go to Section 4.2.2.2. If supplemental logging is not enabled, then perform the following steps to enable supplemental logging.
On the primary database, issue the following statement to add primary key and unique index information to the archived redo log file:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
This SQL statement adds the information to uniquely identify the row that changed on the primary database so SQL Apply can correctly identify and maintain the same row on the standby database.
This statement may take a long time to finish on an open database because it waits for all ongoing transactions to finish. All redo that is generated after the completion of this statement is guaranteed to have supplemental logging information.
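One way to gauge how long the statement might wait is to count the transactions that are currently active before issuing it. This is a sketch; V$TRANSACTION lists all active transactions in the database:

SQL> SELECT COUNT(*) AS ACTIVE_TXNS FROM V$TRANSACTION;

A large count suggests issuing the statement during a period of lower activity.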
On the primary database, verify supplemental logging is enabled by issuing the same query used previously. For example:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_PK AS PK_LOG,
  2> SUPPLEMENTAL_LOG_DATA_UI AS UI_LOG
  3> FROM V$DATABASE;

PK_LOG UI_LOG
------ ------
YES    YES
In this example, the YES values indicate supplemental logging is enabled on the primary database. For all tables with a primary key (SUPPLEMENTAL_LOG_DATA_PK) or unique index (SUPPLEMENTAL_LOG_DATA_UI), all columns of the primary key and unique index are placed into the online redo log file whenever an update operation is performed.
See Chapter 14, "Views Relevant to Oracle Data Guard" for more information about the V$DATABASE view and the Oracle Database SQL Reference for more information about the ALTER DATABASE ADD SUPPLEMENTAL LOG DATA statements.
In Section 3.1.3, you set up several standby role initialization parameters to take effect when the primary database is transitioned to the physical standby role. If you plan to transition the primary database to the logical standby role, you must modify the parameters on the primary database, as shown in Example 4-1, so that no parameters need to change after a role transition.
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/chicago/
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_2=
 'SERVICE=boston
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_3=
 'LOCATION=/arch2/chicago/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
UNDO_RETENTION=3600
To dynamically set the LOG_ARCHIVE_DEST_n parameters, use the SQL ALTER SYSTEM SET statement and include the SCOPE=BOTH clause so that the change takes effect immediately and persists after the database is shut down and started up again. Also, set the UNDO_RETENTION parameter to 3600; this parameter specifies (in seconds) the amount of committed undo information to retain in the database. Setting the value to 3600 is recommended for best results when building a LogMiner dictionary for the logical standby database.
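For example, assuming the Chicago parameter values shown in Example 4-1, the dynamic statements might look like the following sketch:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3=
  2> 'LOCATION=/arch2/chicago/
  3> VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  4> DB_UNIQUE_NAME=chicago' SCOPE=BOTH;
SQL> ALTER SYSTEM SET UNDO_RETENTION=3600 SCOPE=BOTH;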
The following table describes the archival processing defined by the initialization parameters shown in Example 4-1.

Parameter | Archival processing |
---|---|
LOG_ARCHIVE_DEST_1 | Archives redo data generated by this database from its online redo log files to the local directory /arch1/chicago/, in both the primary and standby roles. |
LOG_ARCHIVE_DEST_2 | Transmits redo data to the remote boston destination when this database is running in the primary role. |
LOG_ARCHIVE_DEST_3 | Archives redo data received from the primary database to the local directory /arch2/chicago/ when this database is running in the standby role. |
This section describes how to prepare the physical standby database to transition to a logical standby database. It contains the following topics:
Enabling supplemental logging on the logical standby database now rather than later is beneficial to prepare the database for future role transitions. Use the steps described in Section 4.2.2.1, but perform them on the logical standby database instead of on the primary database.
Perform the following steps to create a standby initialization parameter file.
In the text initialization parameter file (PFILE) that you created in Section 3.2.3, you need to make some additional modifications to the LOG_ARCHIVE_DEST_n parameters and add the PARALLEL_MAX_SERVERS parameter.

You need to modify the LOG_ARCHIVE_DEST_n parameters because, unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:

Archived redo log files that store redo data generated by the logical standby database. In Example 4-2, this is the LOG_ARCHIVE_DEST_1=LOCATION=/arch1/boston destination.

Archived redo log files that store redo data received from the primary database. In Example 4-2, this is the LOG_ARCHIVE_DEST_3=LOCATION=/arch2/boston destination.

Example 4-2 shows the initialization parameter changes that were modified for the logical standby database. The parameters shown are valid for the Boston logical standby database when it is running in either the primary or standby database role.
LOG_ARCHIVE_DEST_1=
 'LOCATION=/arch1/boston/
  VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_2=
 'SERVICE=chicago
  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
  DB_UNIQUE_NAME=chicago'
LOG_ARCHIVE_DEST_3=
 'LOCATION=/arch2/boston/
  VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
  DB_UNIQUE_NAME=boston'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_DEST_STATE_3=ENABLE
PARALLEL_MAX_SERVERS=9
UNDO_RETENTION=3600
The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.

Parameter | Archival processing |
---|---|
LOG_ARCHIVE_DEST_1 | Archives redo data generated by the Boston database from its online redo log files to the local directory /arch1/boston/, in both the primary and standby roles. |
LOG_ARCHIVE_DEST_2 | Transmits redo data to the remote chicago destination when the Boston database is running in the primary role. |
LOG_ARCHIVE_DEST_3 | Archives redo data received from the primary database to the local directory /arch2/boston/ when the Boston database is running in the standby role. |
In Example 4-2, the PARALLEL_MAX_SERVERS initialization parameter was added to the parameter file to specify the maximum number of parallel servers working on the logical standby database. This parameter is required for logical standby databases. Do not set PARALLEL_MAX_SERVERS to a value less than 5; for best results, set it to a minimum of 9. See Section 9.4 for more details.
Caution: Review the initialization parameter file for additional parameters that may need to be modified. For example, you may need to modify the dump destination parameters.
To shut down the logical standby database, issue the following:
SQL> SHUTDOWN IMMEDIATE;
You will mount the logical standby database using the new initialization parameter file later, in Section 4.2.4.
Perform the following steps to create a control file for the standby database.
Issue the ALTER DATABASE CREATE LOGICAL STANDBY CONTROLFILE statement to create a control file for the standby database. You must include the LOGICAL keyword when creating a logical standby database, as shown in the following example:
SQL> ALTER DATABASE CREATE LOGICAL STANDBY CONTROLFILE AS '/tmp/boston.ctl';
On the primary system, use an operating system copy utility to copy the standby control file to the logical standby system. For example, the following example uses the UNIX cp command:
cp /tmp/boston.ctl /arch1/boston/control1.ctl
cp /tmp/boston.ctl /arch2/boston/control2.ctl
Perform the following steps to start, mount, and activate the logical standby database and SQL Apply.
On the logical standby database, issue the STARTUP MOUNT statement to start and mount the database. Do not open the database; it should remain closed to user access until later in the creation process. For example:
SQL> STARTUP MOUNT PFILE=initboston.ora;
On the logical standby database, issue the following statement to prepare it for SQL Apply:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
Issue the following statement to activate this database as a logical standby database:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
Run the Oracle DBNEWID (nid) utility to change the database name of the logical standby database. Changing the name prevents any interaction between this copy of the primary database and the original primary database.
Before you run the DBNEWID (nid) utility, you must shut down and mount the database:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT PFILE=initboston.ora;
Now, run the Oracle DBNEWID utility on the logical standby database to change the database name and shut it down:
nid TARGET=SYS/password@boston DBNAME=boston

Connected to database chicago (DBID=1456557175)

Control Files in database:
    /arch1/boston/control1.ctl

Change database ID and database name chicago to boston? (Y/[N]) => y

Proceeding with operation
Changing database ID from 1456557175 to 416458362
Changing database name from chicago to boston
    Control File /arch1/boston/control1.ctl - modified
    Datafile /arch1/boston/system01.dbf - dbid changed, wrote new name
    Datafile /arch1/boston/undotbs01.dbf - dbid changed, wrote new name
    .
    .
    .
    Control File /arch1/boston/control1.ctl - dbid changed, wrote new name

Database name changed to boston.
Modify parameter file and generate a new password file before restarting.
Database ID for database boston changed to 416458362.
All previous backups and archive logs for this database are unusable.
Database has been shut down, open database with RESETLOGS option.
Successfully changed database name and ID.
DBNEWID - Completed successfully.
You must re-create the password file after running the Oracle DBNEWID (nid) utility.
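For example, you might use the orapwd utility to re-create the password file. The file location and ENTRIES value shown here are illustrative and depend on your environment:

orapwd FILE=$ORACLE_HOME/dbs/orapwboston PASSWORD=password ENTRIES=10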
The output from the DBNEWID utility states that you must update the initialization parameter file. The following steps describe how to perform this task.

Set the DB_NAME initialization parameter in the text initialization parameter file from Section 4.2.3.2 to match the new name:

.
.
.
DB_NAME=boston
.
.
.
Connect to an idle instance of the logical standby database, and create a server parameter file for the standby database from the text initialization parameter file. For example:
SQL> CREATE SPFILE FROM PFILE=initboston.ora;
Start and open the logical standby database to user access, as follows:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE OPEN RESETLOGS;
Each database should have a unique global name. To change the global name of the standby database to boston, issue the following statement:
SQL> ALTER DATABASE RENAME GLOBAL_NAME TO boston;
Creating a new temporary file on the logical standby database now, rather than later, is beneficial to prepare the database for future role transitions.
To add temporary files to the logical standby database, perform the following tasks:

Identify the temporary tablespaces on the standby database. For example:

SQL> SELECT TABLESPACE_NAME FROM DBA_TABLESPACES
  2> WHERE CONTENTS = 'TEMPORARY';

TABLESPACE_NAME
--------------------------------
TEMP1
TEMP2
For each tablespace identified in the previous query, add a new temporary file to the standby database. The following example adds a new temporary file to the TEMP1 tablespace, with size and reuse characteristics that match the primary database temporary files:
SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE 2> '/arch1/boston/temp01.dbf' 3> SIZE 40M REUSE;
Issue the following statement to begin applying redo data to the logical standby database. For example:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
Continue with Section 4.2.5 to verify the logical standby database is performing properly. See Section 5.6.2 to configure standby redo log files, and see Section 6.4 for information about SQL Apply and real-time apply.
Once you create a logical standby database and set up log transport services and log apply services, verify redo data is being transmitted from the primary database and applied to the standby database. To check this, perform the following steps.
To verify the archived redo log files were registered on the logical standby system, connect to the logical standby database and query the DBA_LOGSTDBY_LOG view. For example:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN DICT_BEGIN FORMAT A10
SQL> COLUMN DICT_END FORMAT A8
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END
  2> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME          DIC DIC
---------- ------------------ ------------------ --- ---
        24 23-JUL-02 18:19:05 23-JUL-02 18:19:48 YES YES
        25 23-JUL-02 18:19:48 23-JUL-02 18:19:51 NO  NO
        26 23-JUL-02 18:19:51 23-JUL-02 18:19:54 NO  NO
        27 23-JUL-02 18:19:54 23-JUL-02 18:19:59 NO  NO
        28 23-JUL-02 18:19:59 23-JUL-02 18:20:03 NO  NO
        29 23-JUL-02 18:20:03 23-JUL-02 18:20:13 NO  NO
        30 23-JUL-02 18:20:13 23-JUL-02 18:20:18 NO  NO
        31 23-JUL-02 18:20:18 23-JUL-02 18:20:21 NO  NO
8 rows selected.
Connect to the primary database and enter the following statement to begin sending redo data to the standby database:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
System altered.
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
System altered.
Connect to the logical standby database and query the DBA_LOGSTDBY_LOG view again:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YY HH24:MI:SS';
Session altered.
SQL> COLUMN DICT_BEGIN FORMAT A10
SQL> COLUMN DICT_END FORMAT A8
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, DICT_BEGIN, DICT_END
  2> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

 SEQUENCE# FIRST_TIME         NEXT_TIME          DIC DIC
---------- ------------------ ------------------ --- ---
        24 23-JUL-02 18:19:05 23-JUL-02 18:19:48 YES YES
        25 23-JUL-02 18:19:48 23-JUL-02 18:19:51 NO  NO
        26 23-JUL-02 18:19:51 23-JUL-02 18:19:54 NO  NO
        27 23-JUL-02 18:19:54 23-JUL-02 18:19:59 NO  NO
        28 23-JUL-02 18:19:59 23-JUL-02 18:20:03 NO  NO
        29 23-JUL-02 18:20:03 23-JUL-02 18:20:13 NO  NO
        30 23-JUL-02 18:20:13 23-JUL-02 18:20:18 NO  NO
        31 23-JUL-02 18:20:18 23-JUL-02 18:20:21 NO  NO
        32 23-JUL-02 18:20:21 23-JUL-02 18:32:11 NO  NO
        33 23-JUL-02 18:32:11 23-JUL-02 18:32:19 NO  NO
10 rows selected.
By checking the files on the standby database, archiving a few log files, and then checking the standby database again, you can see that the new archived redo log files were registered. These log files are now available for log apply services to begin applying them.
On the logical standby database, query the V$LOGSTDBY_STATS view to verify redo data is being applied correctly. For example:
SQL> COLUMN NAME FORMAT A30
SQL> COLUMN VALUE FORMAT A30
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';

NAME                           VALUE
------------------------------ ------------------------------
coordinator state              INITIALIZING
In the example, the output from the V$LOGSTDBY_STATS view shows the coordinator process is in the initialization state. When the coordinator process is initializing, log apply services are preparing to begin SQL Apply, but data from the archived redo log files is not being applied to the logical standby database.
Knowing the state of the coordinator process is of particular importance because it is the LSP background process that instructs all of the other logical standby processes. Section 9.1.9 describes the LSP background processes in more detail.
On the logical standby database, query the V$LOGSTDBY view to see a current snapshot of SQL Apply activity. A text message describing the current activity of each process involved in reading and applying changes is displayed.
Example 4-3 shows typical output during the initialization phase.
SQL> COLUMN STATUS FORMAT A50
SQL> COLUMN TYPE FORMAT A12
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16115: loading Log Miner dictionary data
READER                  ORA-16127: stalled waiting for additional transact
                        ions to be applied
BUILDER                 ORA-16117: processing
PREPARER                ORA-16116: no work available

SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16126: loading table or sequence object number
READER                  ORA-16116: no work available
BUILDER                 ORA-16116: no work available
PREPARER                ORA-16116: no work available
Once the coordinator process begins applying redo data to the logical standby database, the coordinator state shown in the V$LOGSTDBY_STATS view changes to APPLYING.
Example 4-4 shows typical output during the applying phase. Notice that the values in the HIGH_SCN column continue to increment. The numbers in this column will continue to increase as long as changes are being applied. The HIGH_SCN column serves only as an indicator of progress.
SQL> COLUMN NAME FORMAT A30
SQL> COLUMN VALUE FORMAT A30
SQL> SELECT NAME, VALUE FROM V$LOGSTDBY_STATS WHERE NAME = 'coordinator state';

NAME                           VALUE
------------------------------ ------------------------------
coordinator state              APPLYING

SQL> COLUMN STATUS FORMAT A50
SQL> COLUMN TYPE FORMAT A12
SQL> SELECT TYPE, HIGH_SCN, STATUS FROM V$LOGSTDBY;

TYPE         HIGH_SCN   STATUS
------------ ---------- --------------------------------------------------
COORDINATOR             ORA-16117: processing
READER                  ORA-16127: stalled waiting for additional transact
                        ions to be applied
BUILDER          191896 ORA-16116: no work available
PREPARER         191902 ORA-16117: processing
ANALYZER         191820 ORA-16120: dependencies being computed for transac
                        tion at SCN 0x0000.0002ed4e
APPLIER          191209 ORA-16124: transaction 1 16 1598 is waiting on ano
                        ther transaction
APPLIER          191205 ORA-16116: no work available
APPLIER          191206 ORA-16124: transaction 1 5 1603 is waiting on anot
                        her transaction
APPLIER          191213 ORA-16117: processing
APPLIER          191212 ORA-16124: transaction 1 20 1601 is waiting on ano
                        ther transaction
APPLIER          191216 ORA-16124: transaction 1 4 1602 is waiting on anot
                        her transaction

11 rows selected.
To check the overall progress of SQL Apply, query the DBA_LOGSTDBY_PROGRESS view on the standby database. For example:
SQL> SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS;

APPLIED_SCN NEWEST_SCN
----------- ----------
     180702     180702
If standby redo log files are not configured, the numbers in the APPLIED_SCN and NEWEST_SCN columns are equal (as shown in the query example), indicating that all of the available data in the archived redo log file was applied. These values can be compared to the values in the FIRST_CHANGE# column in the DBA_LOGSTDBY_LOG view to see how much log file information has to be applied and how much remains. If standby redo log files are configured, the numbers in the APPLIED_SCN and NEWEST_SCN columns may be close, but they will seldom be equal.
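For example, the following sketch of such a comparison lists the registered log files that still contain redo beyond the applied SCN; adjust the column list to your needs:

SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM DBA_LOGSTDBY_LOG
  2> WHERE NEXT_CHANGE# > (SELECT APPLIED_SCN FROM DBA_LOGSTDBY_PROGRESS)
  3> ORDER BY SEQUENCE#;

An empty result set indicates that SQL Apply has worked through all of the registered log files.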
See Section 5.9.1, "Monitoring Log File Archival Information" and Section 6.4.4, "Monitoring Log Apply Services for Logical Standby Databases" for information about how to verify both log transport and log apply services are working correctly.
At this point, the logical standby database is running and can provide the maximum performance level of data protection. The following list describes additional preparations you can take on the logical standby database:
The Data Guard configuration is initially set up in the maximum performance mode (the default). See Section 5.6 for information about the data protection modes and how to upgrade or downgrade the current protection mode.
Standby redo logs are required for standby databases running in the maximum protection mode and maximum availability mode. However, configuring standby redo logs is recommended on all standby databases, because during a failover Data Guard can recover and apply more redo data from standby redo log files than from the archived redo log files alone. The standby redo logs should exist on both primary and standby databases and have the same size and names. See Section 5.6.2, "Configuring Standby Redo Log Files" for more information.
Flashback Database removes the need to re-create the primary database after a failover. Flashback Database is similar to conventional point-in-time recovery in its effects, enabling you to return a database to its state at a time in the recent past. Flashback database is faster than point-in-time recovery, because it does not require restoring datafiles from backup or the extensive application of redo data. You can enable Flashback Database on the primary database, the standby database, or both. See Oracle Database Backup and Recovery Advanced User's Guide for more information.