Oracle® Streams Replication Administrator's Guide 10g Release 1 (10.1) Part Number B10728-01
This chapter contains instructions for managing Streams capture processes, propagations, and Streams apply processes in a Streams replication environment. This chapter also includes instructions for managing Streams tags, and for performing database point-in-time recovery at a destination database in a Streams environment.
The following sections describe management tasks for a capture process in a Streams replication environment:
You also may need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing a capture process
A capture process typically starts the process of replicating a database change by capturing the change, converting the change into a logical change record (LCR), and enqueuing the LCR into a SYS.AnyData queue. From there, the LCR can be propagated to other databases and applied at these databases to complete the replication process.
You can create a capture process that captures changes to the local source database, or you can create a capture process that captures changes remotely at a downstream database. If a capture process runs on a downstream database, then archived redo log files from the source database are copied to the downstream database, and the capture process captures changes in these files at the downstream database.
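As a sketch, a downstream capture process might be created directly with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package. The queue name, capture process name, and parameter values below are illustrative assumptions, not a complete downstream configuration; log transport and instantiation steps are not shown.

```sql
-- Hypothetical sketch: create a downstream capture process at the
-- downstream database for changes originating at rep1.net.
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name        => 'strmadmin.strep01_queue',  -- assumed queue
    capture_name      => 'strep01_capture',          -- assumed name
    source_database   => 'rep1.net',
    use_database_link => true);
END;
/
```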
You can use any of the following procedures to create a local capture process:
DBMS_STREAMS_ADM.ADD_TABLE_RULES
DBMS_STREAMS_ADM.ADD_SUBSET_RULES
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES
DBMS_CAPTURE_ADM.CREATE_CAPTURE
The following is an example that runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create a local capture process:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'capture',
    streams_name       => 'strep01_capture',
    queue_name         => 'strep01_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => NULL,
    inclusion_rule     => true);
END;
/
Running this procedure performs the following actions:
- Creates a capture process named strep01_capture. The capture process is created only if it does not already exist. If a new capture process is created, then this procedure also sets the start SCN to the point in time of creation.
- Associates the capture process with an existing queue named strep01_queue.
- Creates a positive rule set and associates it with the capture process, if the capture process does not have a positive rule set, because the inclusion_rule parameter is set to true. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is specified by the system.
- Creates two rules. One rule evaluates to TRUE for DML changes to the hr schema and the database objects in the hr schema, and the other rule evaluates to TRUE for DDL changes to the hr schema and the database objects in the hr schema. The rule names are specified by the system.
- Adds the two rules to the positive rule set associated with the capture process, because the inclusion_rule parameter is set to true.
- Specifies that the capture process captures a change only if the change has a NULL tag, because the include_tagged_lcr parameter is set to false. This behavior is accomplished through the system-created rules for the capture process.
- Specifies that the source database for the captured changes is the local database, because the source_database parameter is set to NULL. For a local capture process, you also may specify the global name of the local database for this parameter.
- Prepares the hr schema, and all of the database objects added to the hr schema in the future, for instantiation.
See Also:
Oracle Streams Concepts and Administration for more information about creating a capture process, including information about creating a downstream capture process, and for more information about the first SCN and start SCN for a capture process
Supplemental logging must be specified for certain columns at a source database for changes to the columns to be applied successfully at a destination database. The following sections illustrate how to manage supplemental logging at a source database:
Note: LOB, LONG, LONG RAW, and user-defined type columns cannot be part of a supplemental log group.
The following sections describe creating an unconditional log group:
To specify an unconditional supplemental log group that only includes the primary key column(s) for a table, use an ALTER TABLE statement with the PRIMARY KEY option in the ADD SUPPLEMENTAL LOG DATA clause.
For example, the following statement adds the primary key column of the hr.regions table to an unconditional log group:
ALTER TABLE hr.regions ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
The log group has a system-generated name.
To specify an unconditional supplemental log group that includes all of the columns in a table, use an ALTER TABLE statement with the ALL option in the ADD SUPPLEMENTAL LOG DATA clause.
For example, the following statement adds all of the columns in the hr.departments table to an unconditional log group:
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
The log group has a system-generated name.
To specify an unconditional supplemental log group that contains columns that you select, use an ALTER TABLE statement with the ALWAYS specification for the ADD SUPPLEMENTAL LOG GROUP clause. These log groups may include key columns, if necessary.
For example, the following statement adds the department_id column and the manager_id column of the hr.departments table to an unconditional log group named log_group_dep_pk:
ALTER TABLE hr.departments ADD SUPPLEMENTAL LOG GROUP log_group_dep_pk (department_id, manager_id) ALWAYS;
The ALWAYS specification makes this log group an unconditional log group.
The following sections describe creating a conditional log group:
You can use the following options in the ADD SUPPLEMENTAL LOG DATA clause of an ALTER TABLE statement:
- The FOREIGN KEY option creates a conditional log group that includes the foreign key column(s) in the table.
- The UNIQUE option creates a conditional log group that includes the unique key column(s) in the table.
If you specify more than one option in a single ALTER TABLE statement, then a separate conditional log group is created for each option.
For example, the following statement creates two conditional log groups:
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (UNIQUE, FOREIGN KEY) COLUMNS;
One conditional log group includes the unique key columns for the table, and the other conditional log group includes the foreign key columns for the table. Both log groups have a system-generated name.
To specify a conditional supplemental log group that includes any columns you choose to add, you can use the ADD SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. To make the log group conditional, do not include the ALWAYS specification.
For example, suppose the min_salary and max_salary columns in the hr.jobs table are included in a column list for conflict resolution at a destination database. The following statement adds the min_salary and max_salary columns to a conditional log group named log_group_jobs_cr:
ALTER TABLE hr.jobs ADD SUPPLEMENTAL LOG GROUP log_group_jobs_cr (min_salary, max_salary);
To drop a conditional or unconditional supplemental log group, use the DROP SUPPLEMENTAL LOG GROUP clause in the ALTER TABLE statement. For example, to drop a supplemental log group named log_group_jobs_cr, run the following statement:
ALTER TABLE hr.jobs DROP SUPPLEMENTAL LOG GROUP log_group_jobs_cr;
You also have the option of specifying supplemental logging for all primary key, unique key, and foreign key columns in a source database. You may choose this option if you configure a capture process to capture changes to an entire database. To specify supplemental logging for all primary key, unique key, and foreign key columns in a source database, issue the following SQL statement:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
If your primary, unique, and foreign key columns are the same at all source and destination databases, then running this command at the source database provides the supplemental logging needed for primary, unique, and foreign key columns at all destination databases. When you specify the FOREIGN KEY option, all columns of a row's foreign key are placed in the redo log file if any column belonging to the foreign key is modified.
You may omit one or more of these options. For example, if you do not want to supplementally log all of the foreign key columns in the database, then you can omit the FOREIGN KEY option, as in the following example:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
In addition to PRIMARY KEY, UNIQUE, and FOREIGN KEY, you also can use the ALL option. The ALL option specifies that, when a row is changed, all the columns of that row (except for LOB, LONG, LONG RAW, and user-defined type columns) are placed in the redo log file.
Supplemental logging statements are cumulative. If you issue two consecutive ALTER DATABASE ADD SUPPLEMENTAL LOG DATA commands, each with a different identification key, then both keys are supplementally logged.
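For example, because these settings are cumulative, issuing the following two statements results in both the primary key columns and the unique key columns being supplementally logged:

```sql
-- After both statements run, primary key and unique key columns
-- are supplementally logged, because the settings are cumulative.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
```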
To drop supplemental logging for all primary key and unique key columns in a source database, issue the ALTER DATABASE DROP SUPPLEMENTAL LOG DATA statement. To drop database supplemental logging for all primary key, unique key, and foreign key columns, issue the following SQL statement:
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
Note: Dropping database supplemental logging of key columns does not affect any existing table-level supplemental log groups.
The following sections describe management tasks for LCR staging and propagation in a Streams replication environment:
You also may need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing event staging and propagation
In a Streams replication environment, SYS.AnyData queues stage LCRs that encapsulate captured changes. These queues may be used by capture processes, propagations, and apply processes as an LCR goes through a stream from a source database to a destination database.
You use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to create a SYS.AnyData queue. This procedure enables you to specify properties for the SYS.AnyData queue it creates, such as the queue name, the queue table for the queue, and a queue user that is granted ENQUEUE and DEQUEUE privileges on the queue. This procedure creates a queue that is both a secure queue and a transactional queue and starts the newly created queue.
For example, to create a SYS.AnyData queue named strep01_queue in the strmadmin schema with a queue table named strep01_queue_table, run the following procedure:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.strep01_queue_table',
    queue_name  => 'strmadmin.strep01_queue');
END;
/
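The SET_UP_QUEUE procedure also accepts a queue_user parameter. As a sketch, the following call (with a hypothetical hr queue user) configures that user as a secure queue user of the queue and grants it ENQUEUE and DEQUEUE privileges:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.strep01_queue_table',
    queue_name  => 'strmadmin.strep01_queue',
    queue_user  => 'hr');  -- hypothetical queue user
END;
/
```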
You also can use procedures in the DBMS_AQADM package to create a SYS.AnyData queue.
See Also:
Oracle Streams Concepts and Administration for information about managing SYS.AnyData queues
To replicate LCRs between databases, you must propagate the LCRs from the database where they were first staged in a queue to the database where they are applied. To accomplish this goal, you may use any number of separate propagations.
You can use any of the following procedures to create a propagation:
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES
DBMS_STREAMS_ADM.ADD_SUBSET_PROPAGATION_RULES
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES
DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES
DBMS_PROPAGATION_ADM.CREATE_PROPAGATION
The following tasks must be completed before you create a propagation:
The following is an example that runs the ADD_SCHEMA_PROPAGATION_RULES procedure in the DBMS_STREAMS_ADM package to create a propagation:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
    schema_name            => 'hr',
    streams_name           => 'strep01_propagation',
    source_queue_name      => 'strmadmin.strep01_queue',
    destination_queue_name => 'strmadmin.strep02_queue@rep2.net',
    include_dml            => true,
    include_ddl            => true,
    include_tagged_lcr     => false,
    source_database        => 'rep1.net',
    inclusion_rule         => true);
END;
/
Running this procedure performs the following actions:
- Creates a propagation named strep01_propagation. The propagation is created only if it does not already exist.
- Specifies that the propagation propagates LCRs from strep01_queue in the current database to strep02_queue in the rep2.net database.
- Specifies that the propagation uses the rep2.net database link to propagate the LCRs, because the destination_queue_name parameter contains @rep2.net.
- Creates a positive rule set and associates it with the propagation, if the propagation does not have a positive rule set, because the inclusion_rule parameter is set to true. The rule set uses the evaluation context SYS.STREAMS$_EVALUATION_CONTEXT. The rule set name is specified by the system.
- Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the tables in the hr schema, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr schema or to the database objects in the hr schema. The rule names are specified by the system.
- Adds the two rules to the positive rule set associated with the propagation, because the inclusion_rule parameter is set to true.
- Specifies that the propagation propagates an LCR only if it has a NULL tag, because the include_tagged_lcr parameter is set to false. This behavior is accomplished through the system-created rules for the propagation.
- Specifies that the source database for the LCRs to be propagated is rep1.net, which may or may not be the current database. This propagation does not propagate LCRs in the source queue that have a different source database.
See Also:
Oracle Streams Concepts and Administration for information about creating propagations
The following sections describe management tasks for an apply process in a Streams replication environment:
You also may need to perform other management tasks.
See Also:
Oracle Streams Concepts and Administration for more information about managing an apply process
When an apply process applies an LCR or sends an LCR to an apply handler that executes it, the replication process for the LCR is complete. That is, the database change that is encapsulated in the LCR is shared with the database where the LCR is applied.
You can use any of the following procedures to create an apply process that applies LCRs:
DBMS_STREAMS_ADM.ADD_TABLE_RULES
DBMS_STREAMS_ADM.ADD_SUBSET_RULES
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES
DBMS_STREAMS_ADM.ADD_GLOBAL_RULES
DBMS_APPLY_ADM.CREATE_APPLY
Before you create an apply process, create a SYS.AnyData queue to associate with the apply process, if one does not exist.
The following is an example that runs the ADD_SCHEMA_RULES procedure in the DBMS_STREAMS_ADM package to create an apply process:
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name        => 'hr',
    streams_type       => 'apply',
    streams_name       => 'strep01_apply',
    queue_name         => 'strep02_queue',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    source_database    => 'rep1.net',
    inclusion_rule     => true);
END;
/
Running this procedure performs the following actions:
- Creates an apply process named strep01_apply that applies captured LCRs to the local database. The apply process is created only if it does not already exist. To create an apply process that applies user-enqueued LCRs, you must use the CREATE_APPLY procedure in the DBMS_APPLY_ADM package.
- Associates the apply process with an existing queue named strep02_queue.
- Creates a positive rule set and associates it with the apply process, if the apply process does not have a positive rule set, because the inclusion_rule parameter is set to true. The rule set uses the SYS.STREAMS$_EVALUATION_CONTEXT evaluation context. The rule set name is specified by the system.
- Creates two rules. One rule evaluates to TRUE for row LCRs that contain the results of DML changes to the tables in the hr schema, and the other rule evaluates to TRUE for DDL LCRs that contain DDL changes to the hr schema or to the database objects in the hr schema. The rule names are specified by the system.
- Adds the two rules to the positive rule set associated with the apply process, because the inclusion_rule parameter is set to true.
- Sets the apply_tag for the apply process to a value that is the hexadecimal equivalent of '00' (double zero). Redo entries generated by the apply process have a tag with this value.
- Specifies that the apply process applies an LCR only if it has a NULL tag, because the include_tagged_lcr parameter is set to false. This behavior is accomplished through the system-created rules for the apply process.
- Specifies that the apply process applies only LCRs that originated at the rep1.net source database. The rules in the apply process rule sets determine which events are dequeued by the apply process. If the apply process dequeues an LCR with a source database that is different than rep1.net, then an error is raised.
Note: Depending on the configuration of the apply process you create, supplemental logging may be required at the source database on columns in the tables for which an apply process applies changes.
See Also:
Oracle Streams Concepts and Administration for information about creating apply processes
This section contains instructions for setting and removing the substitute key columns for a table.
When an apply process applies changes to a table, substitute key columns can either replace the primary key columns for a table that has a primary key or act as the primary key columns for a table that does not have a primary key. Set the substitute key columns for a table using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. This setting applies to all of the apply processes that apply local changes to the database.
For example, to set the substitute key columns for the hr.employees table to the first_name, last_name, and hire_date columns, replacing the employee_id column, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => 'first_name,last_name,hire_date');
END;
/
You remove the substitute key columns for a table by specifying NULL for the column_list or column_table parameter in the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package. If the table has a primary key, then the table's primary key is used by any apply process for local changes to the database after you remove the substitute primary key.
For example, to remove the substitute key columns for the hr.employees table, run the following procedure:
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name => 'hr.employees',
    column_list => NULL);
END;
/
This section contains instructions for creating, setting, and removing a DML handler.
A DML handler must have the following signature:
PROCEDURE user_procedure ( parameter_name IN SYS.AnyData);
Here, user_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is a SYS.AnyData encapsulation of a row LCR.
The following restrictions apply to the user procedure:
- Do not execute COMMIT or ROLLBACK statements. Doing so may endanger the consistency of the transaction that contains the LCR.
- If you are manipulating a row using the EXECUTE member procedure for the row LCR, then do not attempt to manipulate more than one row in a row operation. You must construct and execute manually any DML statements that manipulate more than one row.
- If the command type is UPDATE or DELETE, then row operations resubmitted using the EXECUTE member procedure for the LCR must include the entire key in the list of old values. The key is the primary key or the smallest unique index that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all non LOB, non LONG, and non LONG RAW columns.
- If the command type is INSERT, then row operations resubmitted using the EXECUTE member procedure for the LCR should include the entire key in the list of new values. Otherwise, duplicate rows are possible. The key is the primary key or the smallest unique index that has at least one NOT NULL column, unless a substitute key has been specified by the SET_KEY_COLUMNS procedure. If there is no specified key, then the key consists of all non LOB, non LONG, and non LONG RAW columns.
A DML handler can be used for any customized processing of row LCRs. For example, the handler may modify an LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a row LCR in a DML handler, the apply process applies the LCR without calling the DML handler again.
You may also use a DML handler for recording the history of DML changes. For example, a DML handler may insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure. To create such a DML handler, first create a table to hold the history information:
CREATE TABLE strmadmin.history_row_lcrs(
  timestamp            DATE,
  source_database_name VARCHAR2(128),
  command_type         VARCHAR2(30),
  object_owner         VARCHAR2(32),
  object_name          VARCHAR2(32),
  tag                  RAW(10),
  transaction_id       VARCHAR2(10),
  scn                  NUMBER,
  commit_scn           NUMBER,
  old_values           SYS.LCR$_ROW_LIST,
  new_values           SYS.LCR$_ROW_LIST)
  NESTED TABLE old_values STORE AS old_values_ntab
  NESTED TABLE new_values STORE AS new_values_ntab;
Create the procedure that inserts the information about the row LCR into the history_row_lcrs table and executes the row LCR:
CREATE OR REPLACE PROCEDURE history_dml(in_any IN SYS.ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  -- Insert information about the LCR into the history_row_lcrs table
  INSERT INTO strmadmin.history_row_lcrs VALUES
    (SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
     lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_TAG(),
     lcr.GET_TRANSACTION_ID(), lcr.GET_SCN(), lcr.GET_COMMIT_SCN(),
     lcr.GET_VALUES('old'), lcr.GET_VALUES('new', 'n'));
  -- Apply row LCR
  lcr.EXECUTE(true);
END;
/
A DML handler processes each row LCR dequeued by any apply process that contains a specific operation on a specific table. You can specify multiple DML handlers on the same table, to handle different operations on the table. All apply processes that apply changes to the specified table in the local database use the specified DML handler.
Set the DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DML handler for UPDATE operations on the hr.locations table. Therefore, when any apply process that applies changes locally dequeues a row LCR containing an UPDATE operation on the hr.locations table, the apply process sends the row LCR to the history_dml PL/SQL procedure in the strmadmin schema for processing. The apply process does not apply a row LCR containing such a change directly.
In this example, the apply_name parameter is set to NULL. Therefore, the DML handler is a general DML handler that is used by all of the apply processes in the database.
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name         => 'hr.locations',
    object_type         => 'TABLE',
    operation_name      => 'UPDATE',
    error_handler       => false,
    user_procedure      => 'strmadmin.history_dml',
    apply_database_link => NULL,
    apply_name          => NULL);
END;
/
You unset a DML handler using the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package. When you run that procedure, set the user_procedure parameter to NULL for a specific operation on a specific table. After the DML handler is unset, any apply process that applies changes locally will apply a row LCR containing such a change directly.
For example, the following procedure unsets the DML handler for UPDATE operations on the hr.locations table:
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'hr.locations',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => false,
    user_procedure => NULL,
    apply_name     => NULL);
END;
/
This section contains instructions for creating, specifying, and removing the DDL handler for an apply process.
Note: All applied DDL LCRs commit automatically. Therefore, if a DDL handler calls the EXECUTE member procedure of a DDL LCR, then a commit is performed automatically.
A DDL handler must have the following signature:
PROCEDURE handler_procedure ( parameter_name IN SYS.AnyData);
Here, handler_procedure stands for the name of the procedure and parameter_name stands for the name of the parameter passed to the procedure. The parameter passed to the procedure is a SYS.AnyData encapsulation of a DDL LCR.
A DDL handler can be used for any customized processing of DDL LCRs. For example, the handler may modify the LCR and then execute it using the EXECUTE member procedure for the LCR. When you execute a DDL LCR in a DDL handler, the apply process applies the LCR without calling the DDL handler again.
You may also use a DDL handler to record the history of DDL changes. For example, a DDL handler may insert information about an LCR it processes into a table and then apply the LCR using the EXECUTE member procedure.
To create such a DDL handler, first create a table to hold the history information:
CREATE TABLE strmadmin.history_ddl_lcrs(
  timestamp            DATE,
  source_database_name VARCHAR2(128),
  command_type         VARCHAR2(30),
  object_owner         VARCHAR2(32),
  object_name          VARCHAR2(32),
  object_type          VARCHAR2(18),
  ddl_text             CLOB,
  logon_user           VARCHAR2(32),
  current_schema       VARCHAR2(32),
  base_table_owner     VARCHAR2(32),
  base_table_name      VARCHAR2(32),
  tag                  RAW(10),
  transaction_id       VARCHAR2(10),
  scn                  NUMBER);
Create the procedure that inserts the information about the DDL LCR into the history_ddl_lcrs table and executes the DDL LCR:
CREATE OR REPLACE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS
  lcr      SYS.LCR$_DDL_RECORD;
  rc       PLS_INTEGER;
  ddl_text CLOB;
BEGIN
  -- Access the LCR
  rc := in_any.GETOBJECT(lcr);
  DBMS_LOB.CREATETEMPORARY(ddl_text, true);
  lcr.GET_DDL_TEXT(ddl_text);
  -- Insert DDL LCR information into history_ddl_lcrs table
  INSERT INTO strmadmin.history_ddl_lcrs VALUES(
    SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
    lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),
    ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
    lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(),
    lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
  -- Apply DDL LCR
  lcr.EXECUTE();
  -- Free temporary LOB space
  DBMS_LOB.FREETEMPORARY(ddl_text);
END;
/
A DDL handler processes all DDL LCRs dequeued by an apply process. Set the DDL handler for an apply process using the ddl_handler parameter in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure sets the DDL handler for an apply process named strep01_apply to the history_ddl procedure in the strmadmin schema.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name  => 'strep01_apply',
    ddl_handler => 'strmadmin.history_ddl');
END;
/
A DDL handler processes all DDL LCRs dequeued by an apply process. You remove the DDL handler for an apply process by setting the remove_ddl_handler parameter to true in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. For example, the following procedure removes the DDL handler from an apply process named strep01_apply.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name         => 'strep01_apply',
    remove_ddl_handler => true);
END;
/
This section describes the following tasks:
Set an update conflict handler using the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. You can use one of the following prebuilt methods when you create an update conflict resolution handler:
- OVERWRITE
- DISCARD
- MAXIMUM
- MINIMUM
For example, suppose a Streams environment captures changes to the hr.jobs table at dbs1.net and propagates these changes to the dbs2.net destination database, where they are applied. In this environment, applications can perform DML changes on the hr.jobs table at both databases, but, if there is a conflict for a particular DML change, then the change at the dbs1.net database should always overwrite the change at the dbs2.net database. In this environment, you can accomplish this goal by specifying an OVERWRITE handler at the dbs2.net database.
To specify an update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure at dbs2.net:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'OVERWRITE',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
All apply processes running on a database that apply changes to the specified table locally use the specified update conflict handler.
You can modify an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To update an existing conflict handler, specify the same table and resolution column as the existing conflict handler.
To modify the update conflict handler created in "Setting an Update Conflict Handler", you specify the hr.jobs table and the job_title column as the resolution column. You can modify this update conflict handler by specifying a different type of prebuilt method or a different column list, or both. However, if you want to change the resolution column for an update conflict handler, then you must remove and re-create the handler.
For example, suppose the environment changes, and you want changes from dbs1.net to be discarded in the event of a conflict, whereas previously changes from dbs1.net overwrote changes at dbs2.net. You can accomplish this goal by specifying a DISCARD handler at the dbs2.net database.
To modify the existing update conflict handler for the hr.jobs table in the hr schema at the dbs2.net database, run the following procedure:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => 'DISCARD',
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
You can remove an existing update conflict handler by running the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package. To remove an existing conflict handler, specify NULL for the method, and specify the same table, column list, and resolution column as the existing conflict handler.
For example, suppose you want to remove the update conflict handler created in "Setting an Update Conflict Handler" and then modified in "Modifying an Existing Update Conflict Handler". To remove this update conflict handler, run the following procedure:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'job_title';
  cols(2) := 'min_salary';
  cols(3) := 'max_salary';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.jobs',
    method_name       => NULL,
    resolution_column => 'job_title',
    column_list       => cols);
END;
/
You can stop conflict detection for nonkey columns using the COMPARE_OLD_VALUES procedure in the DBMS_APPLY_ADM package.
For example, suppose you configure a time column for conflict resolution for the hr.employees table, as described in "MAXIMUM". In this case, you may decide to stop conflict detection for the other nonkey columns in the table. After adding the time column and creating the trigger as described in that section, add the columns in the hr.employees table to the column list for an update conflict handler:
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1)  := 'first_name';
  cols(2)  := 'last_name';
  cols(3)  := 'email';
  cols(4)  := 'phone_number';
  cols(5)  := 'hire_date';
  cols(6)  := 'job_id';
  cols(7)  := 'salary';
  cols(8)  := 'commission_pct';
  cols(9)  := 'manager_id';
  cols(10) := 'department_id';
  cols(11) := 'time';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time',
    column_list       => cols);
END;
/
This example does not include the primary key for the table in the column list because it assumes that the primary key is never updated. However, other key columns are included in the column list.
To stop conflict detection for all nonkey columns in the table for both UPDATE and DELETE operations at a destination database, run the following procedure:
DECLARE
  cols DBMS_UTILITY.LNAME_ARRAY;
BEGIN
  cols(1) := 'first_name';
  cols(2) := 'last_name';
  cols(3) := 'email';
  cols(4) := 'phone_number';
  cols(5) := 'hire_date';
  cols(6) := 'job_id';
  cols(7) := 'salary';
  cols(8) := 'commission_pct';
  DBMS_APPLY_ADM.COMPARE_OLD_VALUES(
    object_name  => 'hr.employees',
    column_table => cols,
    operation    => '*',
    compare      => false);
END;
/
The asterisk (*) specified for the operation parameter means that conflict detection is stopped for both UPDATE and DELETE operations. After you run this procedure, all apply processes running on the database that apply changes to the specified table locally do not detect conflicts on the specified columns. Therefore, in this example, the time column is the only column used for conflict detection.
You can use the MAINTAIN_SIMPLE_TABLESPACE procedure to configure Streams replication for a simple tablespace, and you can use the MAINTAIN_TABLESPACES procedure to configure Streams replication for a set of self-contained tablespaces. Both of these procedures are in the DBMS_STREAMS_ADM package. These procedures use transportable tablespaces, Data Pump, the DBMS_STREAMS_TABLESPACE_ADM package, and the DBMS_FILE_TRANSFER package to configure the environment.
A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. A simple tablespace is a self-contained tablespace that uses only one datafile. When there is more than one tablespace, a self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces.
These procedures set up either a single source Streams configuration with the local database as the source database, or a bi-directional Streams configuration with both databases acting as source and destination databases. The bi_directional parameter for each procedure controls whether the Streams configuration is single source or bi-directional. If bi_directional is false, then a capture process at the local database captures DML changes to the tables in the specified tablespace or tablespace set, a propagation propagates these changes to the destination database, and an apply process at the destination database applies these changes. If bi_directional is true, then each database captures changes and propagates them to the other database, and each database applies changes from the other database.
These procedures cannot be used to configure multi-directional replication where changes may be cycled back to a source database by a third database in the environment. For example, these procedures cannot be used to configure a Streams replication environment with three databases where each database shares changes with the other two databases in the environment. If these procedures were used to configure a three-way replication environment such as this, then changes made at a source database would be cycled back to the same source database. In a valid three-way replication environment, a particular change is made only once at each database.
These procedures do not configure the Streams environment to maintain DDL changes to the tablespace or to the database objects in the tablespace. For example, the Streams environment is not configured to replicate ALTER TABLESPACE statements on the tablespace, nor is it configured to replicate ALTER TABLE statements on tables in the tablespace. You can configure the Streams environment to maintain DDL changes manually.
The example in this section uses the MAINTAIN_TABLESPACES procedure to configure and maintain a bi-directional Streams replication environment. This example makes the following assumptions:

- Tablespaces tbs1 and tbs2 make a self-contained tablespace set at the source database sts1.net.
- The datafiles for the tablespace set are in the /orc/dbs directory at the source database sts1.net.
- The sts2.net database does not currently contain the tablespace set.

This example configures the bi-directional Streams replication environment directly, but you also have the option of generating a script. You can modify the script and run it to configure the environment.
Complete the following steps to configure and maintain the bi-directional Streams replication environment:
1. Configure a Streams administrator at sts1.net and sts2.net if they do not exist. See Oracle Streams Concepts and Administration for instructions. This example assumes that the username of the Streams administrator is strmadmin at both databases.

2. Create a database link from the sts1.net database to the sts2.net database:
CONNECT strmadmin/strmadminpw@sts1.net

CREATE DATABASE LINK sts2.net CONNECT TO strmadmin
  IDENTIFIED BY strmadminpw USING 'sts2.net';
3. Create a database link from the sts2.net database to the sts1.net database:
CONNECT strmadmin/strmadminpw@sts2.net

CREATE DATABASE LINK sts1.net CONNECT TO strmadmin
  IDENTIFIED BY strmadminpw USING 'sts1.net';
4. At the sts1.net database, create a directory object named tbs_directory that corresponds to the /orc/dbs directory:
CONNECT strmadmin/strmadminpw@sts1.net

CREATE DIRECTORY tbs_directory AS '/orc/dbs';
If the datafiles are in multiple directories, then a directory object must exist for each of these directories, and the user who runs the MAINTAIN_TABLESPACES procedure in Step 7 must have READ privilege on these directory objects. In this example, the Streams administrator has this privilege because this user creates the directory object.
5. At the sts1.net database, create a directory object named source_directory that corresponds to the /usr/db_files directory:
CONNECT strmadmin/strmadminpw@sts1.net

CREATE DIRECTORY source_directory AS '/usr/db_files';
6. At the sts2.net database, create a directory object named dest_directory that corresponds to the /usr/trans_files directory:
CONNECT strmadmin/strmadminpw@sts2.net

CREATE DIRECTORY dest_directory AS '/usr/trans_files';
7. At the sts1.net database, run the MAINTAIN_TABLESPACES procedure:
CONNECT strmadmin/strmadminpw@sts1.net

DECLARE
  t_names DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  -- Tablespace names
  t_names(1) := 'TBS1';
  t_names(2) := 'TBS2';
  DBMS_STREAMS_ADM.MAINTAIN_TABLESPACES(
    tablespace_names             => t_names,
    source_directory_object      => 'SOURCE_DIRECTORY',
    destination_directory_object => 'DEST_DIRECTORY',
    destination_database         => 'STS2.NET',
    bi_directional               => true);
END;
/
When this procedure completes, the Streams bi-directional replication environment is configured. The procedure uses default names for the SYS.AnyData queues and capture processes it creates, and the procedure produces system-generated names for the propagations and apply processes it creates. You can specify different names by using additional parameters available in the MAINTAIN_TABLESPACES procedure. This procedure also starts the queues, capture processes, and apply processes, and it enables the propagation schedules for the propagations.
Typically, conflicts are possible in a bi-directional replication environment. If conflicts are possible in the environment created by the MAINTAIN_TABLESPACES procedure, then configure conflict resolution before you allow users to make changes to the objects in the tablespace set.
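Because some of the component names are generated by the procedure, it can be useful to confirm what was created. A sketch using the standard Streams data dictionary views, run at each database:

```sql
SELECT CAPTURE_NAME, QUEUE_NAME FROM DBA_CAPTURE;

SELECT PROPAGATION_NAME, DESTINATION_DBLINK FROM DBA_PROPAGATION;

SELECT APPLY_NAME, QUEUE_NAME FROM DBA_APPLY;
```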
See Also:
|
You can set or get the value of the tags generated by the current session or by an apply process. The following sections describe how to set and get tag values.
This section contains instructions for setting and getting the tag for the current session.
You can set the tag for all redo entries generated by the current session using the SET_TAG procedure in the DBMS_STREAMS package. For example, to set the tag to the hexadecimal value of '1D' in the current session, run the following procedure:
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/
After running this procedure, each redo entry generated by DML or DDL statements in the current session will have a tag value of 1D. Running this procedure affects only the current session.
You can get the tag for all redo entries generated by the current session using the GET_TAG function in the DBMS_STREAMS package. For example, to get the hexadecimal value of the tags generated in the redo entries for the current session, run the following:
SET SERVEROUTPUT ON

DECLARE
  raw_tag RAW(2048);
BEGIN
  raw_tag := DBMS_STREAMS.GET_TAG();
  DBMS_OUTPUT.PUT_LINE('Tag Value = ' || RAWTOHEX(raw_tag));
END;
/
You also can display the tag value for the current session by querying the DUAL table:
SELECT DBMS_STREAMS.GET_TAG FROM DUAL;
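To clear the tag for the current session so that subsequent redo entries have a NULL tag, you can call SET_TAG with NULL, which is the parameter's default value. A sketch:

```sql
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);  -- subsequent redo entries in this session have a NULL tag
END;
/
```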
This section contains instructions for setting and removing the tag for an apply process.
See Also:
|
An apply process generates redo entries when it applies changes to a database or invokes handlers. You can set the default tag for all redo entries generated by an apply process when you create the apply process using the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, or when you alter an existing apply process using the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. In both of these procedures, set the apply_tag parameter to the value you want to specify for the tags generated by the apply process.
For example, to set the value of the tags generated in the redo log by an existing apply process named strep01_apply to the hexadecimal value of '7', run the following procedure:
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'strep01_apply',
    apply_tag  => HEXTORAW('7'));
END;
/
After running this procedure, each redo entry generated by the apply process will have a tag value of 7.
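The tag can also be specified when the apply process is first created. A sketch using CREATE_APPLY, assuming a hypothetical apply process named strep02_apply that uses the strep01_queue queue owned by strmadmin:

```sql
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name     => 'strmadmin.strep01_queue',  -- existing queue (assumed owner)
    apply_name     => 'strep02_apply',            -- hypothetical apply process name
    apply_captured => true,                       -- apply captured LCRs
    apply_tag      => HEXTORAW('7'));             -- default tag for redo generated by this apply process
END;
/
```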
You remove the apply tag for an apply process by setting the remove_apply_tag parameter to true in the ALTER_APPLY procedure in the DBMS_APPLY_ADM package. Removing the apply tag means that each redo entry generated by the apply process has a NULL tag. For example, the following procedure removes the apply tag from an apply process named strep01_apply.
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name       => 'strep01_apply',
    remove_apply_tag => true);
END;
/
Typically, database administrators change the DBID and global name of a database when it is a clone of another database. You can view the DBID of a database by querying the DBID column in the V$DATABASE dynamic performance view, and you can view the global name of a database by querying the GLOBAL_NAME static data dictionary view. When you change the DBID or global name of a source database, any existing capture processes that capture changes originating at this source database become unusable. The capture processes may be local capture processes or downstream capture processes that capture changes originating at the source database. Also, any existing apply processes that apply changes from the source database become unusable.
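For example, you can view these values with queries such as the following:

```sql
SELECT DBID FROM V$DATABASE;

SELECT * FROM GLOBAL_NAME;
```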
If a capture process is capturing changes generated by a database for which you have changed the DBID or global name, then complete the following steps:

1. Shut down the database and restart it with RESTRICTED SESSION enabled using STARTUP RESTRICT.

2. Drop the capture process using the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package. The capture process may be a local capture process at the source database or a downstream capture process at a remote database.

3. Run an ALTER SYSTEM SWITCH LOGFILE statement on the database.

4. At each destination database that applies changes from the source database, use the DROP_APPLY procedure in the DBMS_APPLY_ADM package to drop an apply process.

5. Disable the restricted session by running the ALTER SYSTEM DISABLE RESTRICTED SESSION statement.

See Also:
Oracle Database Utilities for more information about changing the DBID of a database |
A multiple source environment is one in which there is more than one source database for any of the shared data. If a source database in a multiple source environment cannot be recovered to the current point in time, then you can use the method described in this section to resynchronize the source database with the other source databases in the environment. Some reasons why a database cannot be recovered to the current point in time include corrupted archived redo logs or the media failure of an online redo log group.
For example, a bidirectional Streams environment is one in which exactly two databases share the replicated database objects and data. In this example, assume that database A is the database that must be resynchronized and that database B is the other source database in the environment. To resynchronize database A in this bidirectional Streams environment, complete the following steps:
- Query the V$BUFFERED_SUBSCRIBERS dynamic performance view at database B to determine whether the apply process that applies these changes has any unapplied changes in its queue. See "Viewing the Propagations Dequeuing LCRs From Each Buffered Queue" for an example of such a query. Do not continue until all of these changes have been applied.
- Remove the Streams configuration at database A by running the REMOVE_STREAMS_CONFIGURATION procedure in the DBMS_STREAMS_ADM package. See PL/SQL Packages and Types Reference for more information about this procedure.

Point-in-time recovery is the recovery of a database to a specified noncurrent time, SCN, or log sequence number. The following sections discuss performing point-in-time recovery in a Streams replication environment:
See Also:
Oracle Database Backup and Recovery Advanced User's Guide for more information about point-in-time recovery |
A single source Streams replication environment is one in which there is only one source database for shared data. If database point-in-time recovery is required at the source database in a single source Streams environment, then you must stop all capture processes that capture changes generated at the source database before you perform the recovery operation. Both local and downstream capture processes that capture changes generated at the source database must be stopped. Typically, database administrators reset the log sequence number of a database during point-in-time recovery. The ALTER DATABASE OPEN RESETLOGS statement is an example of a statement that resets the log sequence number.
The instructions in this section assume that the single source replication environment has the following characteristics:

- One capture process named strm01_capture, which may be a local or downstream capture process
- One destination database named dest.net
- One apply process named strm01_apply at the destination database

If point-in-time recovery must be performed on the source database, then you can follow these instructions to recover as many transactions as possible at the source database by using transactions applied at the destination database. These instructions assume that you can identify the transactions applied at the destination database after the source point-in-time SCN and execute these transactions at the source database.
Note: Oracle Corporation recommends that you set the apply process parameter |
Complete the following steps to perform point-in-time recovery on the source database in a single source Streams replication environment:
Stop the capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

Perform a build of the data dictionary in the redo log at the source database:

SET SERVEROUTPUT ON

DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
END;
/
Note the SCN value returned because it is needed in Step 13.
Ensure that the apply process at the destination database has finished applying all changes. To do this, query the STATE column in both the V$STREAMS_APPLY_READER and V$STREAMS_APPLY_SERVER dynamic performance views. The state should be IDLE for the apply process in both views before you continue.

Determine the highest applied SCN for the apply process. If the apply process is running, then perform the following query:
SELECT HWM_MESSAGE_NUMBER FROM V$STREAMS_APPLY_COORDINATOR WHERE APPLY_NAME = 'STRM01_APPLY';
If the apply process is disabled, then perform the following query:
SELECT APPLIED_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS WHERE APPLY_NAME = 'STRM01_APPLY';
Note the highest apply SCN returned by the query because it is needed in subsequent steps.
Set the maximum_scn capture process parameter of the original capture process to the point-in-time recovery SCN using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM package.

Set the start SCN of the original capture process to the oldest SCN of the apply process, which you can find by querying the OLDEST_SCN_NUM column in the V$STREAMS_APPLY_READER dynamic performance view at the destination database. To set the start SCN of the capture process, specify the start_scn parameter when you run the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

Ensure that the capture process writes information to the alert log by setting the write_alert_log capture process parameter to Y:

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',
    parameter    => 'write_alert_log',
    value        => 'Y');
END;
/
Start the capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

Monitor the capture process against its maximum_scn setting by querying the CAPTURED_SCN column in the DBA_CAPTURE data dictionary view. When the value returned by the query is equal to or greater than the maximum_scn value, the capture process should stop automatically. When the capture process is stopped, proceed to the next step.

Find the value of the LAST_ENQUEUE_MESSAGE_NUMBER for the stopped capture process in the alert log. Note this value because it is needed in subsequent steps.

Ensure that all of these changes have been applied by the apply process strm01_apply by running the following queries at the destination database:
SELECT DEQUEUED_MESSAGE_NUMBER FROM V$STREAMS_APPLY_READER WHERE APPLY_NAME = 'STRM01_APPLY' AND DEQUEUED_MESSAGE_NUMBER = last_enqueue_message_number;
Substitute the LAST_ENQUEUE_MESSAGE_NUMBER found in the alert log for last_enqueue_message_number on the last line of the query. When this query returns a row, all of the changes from the capture database have been applied at the destination database.
Also, ensure that the state of the apply process reader server and each apply server is IDLE. For example, run the following queries for an apply process named strm01_apply:
SELECT STATE FROM V$STREAMS_APPLY_READER WHERE APPLY_NAME = 'STRM01_APPLY';

SELECT STATE FROM V$STREAMS_APPLY_SERVER WHERE APPLY_NAME = 'STRM01_APPLY';
When both of these queries return IDLE, move on to the next step.
Drop the apply process using the DROP_APPLY procedure in the DBMS_APPLY_ADM package.

Start the apply process using the START_APPLY procedure in the DBMS_APPLY_ADM package.

Drop the original capture process using the DROP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

Create a capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to replace the capture process you dropped in Step 12. Specify the SCN returned by the data dictionary build in Step 4 for both the first_scn and start_scn
parameters. The new capture process should use the same queue and rule sets as the original capture process.

Start the new capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

A multiple source environment is one in which there is more than one source database for any of the shared data. If database point-in-time recovery is required at a source database in a multiple source Streams environment, then you can use another source database in the environment to recapture the changes made to the recovered source database after the point-in-time recovery.
For example, in a multiple source Streams environment, one source database may become unavailable at time T2 and undergo point-in-time recovery to an earlier time T1. After recovery to T1, transactions performed at the recovered database between T1 and T2 are lost at the recovered database. However, before the recovered database became unavailable, assume that these transactions were propagated to another source database and applied. In this case, this other source database can be used to restore the lost changes to the recovered database.
Specifically, to restore changes made to the recovered database after the point-in-time recovery, you configure a capture process to recapture these changes from the redo logs at the other source database, a propagation to propagate these changes from the database where changes are recaptured to the recovered database, and an apply process at the recovered database to apply these changes.
Changes originating at the other source database that were applied at the recovered database between T1 and T2 also have been lost and must be recovered. To accomplish this, alter the capture process at the other source database to start capturing changes at an earlier SCN. This SCN is the oldest SCN for the apply process at the recovered database.
The following SCN values are required to restore lost changes to the recovered database:
- The SCN value to which the maximum_scn parameter for the capture process created to recapture lost changes should be set. The capture process stops capturing changes when it reaches this SCN value. The current SCN for the other source database is used for this value.

You should record the point-in-time SCN when you perform point-in-time recovery on the recovered database. You can use the GET_SCN_MAPPING procedure in the DBMS_STREAMS_ADM package to determine the other necessary SCN values.
See Also:
PL/SQL Packages and Types Reference for more information about the GET_SCN_MAPPING procedure |
If database point-in-time recovery is required at a destination database in a Streams environment, then you must reapply any captured changes that had been applied at the destination database after the point in time to which it was recovered.
For each relevant capture process, you can choose either of the following methods to perform point-in-time recovery at a destination database in a Streams environment:
Resetting the start SCN for the capture process is simpler than creating a new capture process. However, if the capture process captures changes that are applied at multiple destination databases, then the changes are resent to all the destination databases, including the ones that did not perform point-in-time recovery. If a change is already applied at a destination database, then it is discarded by the apply process, but you may not want to use the network and computer resources required to resend the changes to multiple destination databases. In this case, you can create and temporarily use a new capture process and a new propagation that propagates changes only to the destination database that was recovered.
The following sections provide instructions for each task:
If there are multiple apply processes at the destination database where you performed point-in-time recovery, then complete one of the tasks in this section for each apply process.
Neither of these methods should be used if any of the following conditions are true regarding the destination database you are recovering:
The following query at a destination database lists the oldest message number for each apply process:

SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
The following query at a source database lists the first SCN for each capture process:
SELECT CAPTURE_NAME, FIRST_SCN FROM DBA_CAPTURE;
If any of these conditions are true in your environment, then you cannot use the methods described in this section. Instead, you must manually resynchronize the data at all destination databases.
See Also:
Oracle Streams Concepts and Administration for more information about SCN values relating to a capture process and directed networks |
If you decide to reset the start SCN for the existing capture process to perform point-in-time recovery, then complete the following steps:
Use the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop the propagation that sends changes to the destination database.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation at each intermediate database in the path to the destination database, including the propagation at the source database.
Do not drop the rule sets used by the propagations you drop.
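A sketch of the drop, assuming a hypothetical propagation named strm01_propagation:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.DROP_PROPAGATION(
    propagation_name => 'strm01_propagation');  -- hypothetical propagation name
END;
/
```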
Note: You must drop the appropriate propagation(s). Disabling them is not sufficient. You will re-create the propagation(s) in Step 7, and dropping them now ensures that only events created after resetting the start SCN for the capture process are propagated. |
See Also:
Oracle Streams Concepts and Administration for more information about directed networks |
The following query at a destination database lists the oldest message number for each apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
Stop the capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

To reset the start SCN for an existing capture process, run the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package and set the start_scn parameter to the value you recorded from the query in Step 4. For example, to reset the start SCN for a capture process named strm01_capture to the value 829381993, run the following ALTER_CAPTURE procedure:
BEGIN
  DBMS_CAPTURE_ADM.ALTER_CAPTURE(
    capture_name => 'strm01_capture',
    start_scn    => 829381993);
END;
/
Re-create the propagation using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a new propagation at each intermediate database in the path to the destination database, including the propagation at the source database.
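A sketch of re-creating such a propagation; all of the names here are hypothetical except the package and parameter names:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
    propagation_name   => 'strm01_propagation',      -- hypothetical
    source_queue       => 'strmadmin.strm01_queue',  -- hypothetical local queue
    destination_queue  => 'strmadmin.strm01_queue',  -- hypothetical queue at the destination
    destination_dblink => 'dest.net',                -- hypothetical database link
    rule_set_name      => 'strmadmin.prop_rules');   -- rule set of the original propagation (hypothetical name)
END;
/
```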
Start the capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

If you decide to create a new capture process to perform point-in-time recovery, then complete the following steps:
Use the DROP_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package to drop the propagation that sends changes to the destination database.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the propagation that propagates events between the last intermediate database and the destination database. You do not need to drop the propagations at the other intermediate databases nor at the source database.
See Also:
Oracle Streams Concepts and Administration for more information about directed networks |
The following query at a destination database lists the oldest message number for each apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
Create a new SYS.AnyData queue at the source database using the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not create a new queue at the destination database.
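A sketch of creating such a queue, assuming hypothetical queue and queue table names:

```sql
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.strm02_queue_table',  -- hypothetical queue table name
    queue_name  => 'strmadmin.strm02_queue');       -- hypothetical queue name
END;
/
```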
Create a new propagation using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then create a propagation at each intermediate database in the path to the destination database, including the propagation from the source database to the first intermediate database. These propagations propagate changes captured by the capture process you will create in Step 7 between the queues created in Step 5.
Create a new capture process using the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package. Set the source_queue parameter to the local queue you created in Step 5 and the start_scn parameter to the value you recorded from the query in Step 4. Also, specify any rule sets used by the original capture process. If the rule sets used by the original capture process instruct the capture process to capture events that should not be sent to the destination database that was recovered, then you can create and use smaller, customized rule sets that share some rules with the original rule sets.

Start the new capture process using the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package.

Stop the new capture process using the STOP_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
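A sketch of the CREATE_CAPTURE call from the step above. The names and SCN are hypothetical; note that in the DBMS_CAPTURE_ADM.CREATE_CAPTURE signature the source queue is supplied through the queue_name parameter:

```sql
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name    => 'strmadmin.strm02_queue',     -- queue created in Step 5 (hypothetical name)
    capture_name  => 'strm02_capture',             -- hypothetical capture process name
    rule_set_name => 'strmadmin.capture_rules',    -- rule set of the original capture process (hypothetical name)
    start_scn     => 829381993);                   -- value recorded in Step 4 (hypothetical)
END;
/
```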
At the destination database, you can use the following query to determine the oldest message number from the source database for the apply process:
SELECT APPLY_NAME, OLDEST_MESSAGE_NUMBER FROM DBA_APPLY_PROGRESS;
At the source database, you can use the following query to determine the capture number of the original capture process:
SELECT CAPTURE_NAME, CAPTURE_MESSAGE_NUMBER FROM V$STREAMS_CAPTURE;
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new propagation at each intermediate database in the path to the destination database, including the new propagation at the source database.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then drop the new queue at each intermediate database in the path to the destination database, including the new queue at the source database. Do not drop the queue at the destination database.
Re-create the original propagation using the CREATE_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package. Specify any rule sets used by the original propagation when you create the propagation.
If you are using directed networks, and there are intermediate databases between the source database and destination database, then re-create the propagation from the last intermediate database to the destination database. You dropped this propagation in Step 1.
All of the steps after Step 8 can be deferred to a later time, or they can be done as soon as the condition described in Step 9 is met.