16 Best Practices for Capture

This chapter describes the best practices for capturing changes with a capture process or a synchronous capture in an Oracle Streams replication environment. This chapter includes these topics:

  • Best Practices for Source Database Configuration for Capture Processes

  • Best Practices for Capture Process Configuration

  • Best Practices for Capture Process Operation

  • Best Practices for Synchronous Capture Configuration

See Also:

"Best Practices for Oracle Real Application Clusters and Oracle Streams"

Best Practices for Source Database Configuration for Capture Processes

A source database is a database where changes captured by a capture process are generated in the redo log. A local capture process can capture changes at the source database, or a downstream capture process can capture these changes at a remote database. The following best practices pertain to all source databases for capture processes, including source databases that use local capture processes and source databases that use downstream capture processes:

Enable Archive Logging at Each Source Database in an Oracle Streams Environment

Verify that each source database is running in ARCHIVELOG mode. For downstream capture environments, the database where the source redo logs are created must be running in ARCHIVELOG mode.
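
For example, the following SQL*Plus session checks the log mode and, if necessary, enables ARCHIVELOG mode. This is a minimal sketch; enabling ARCHIVELOG mode requires the database to be shut down cleanly and mounted:

  SELECT LOG_MODE FROM V$DATABASE;

  -- If the query returns NOARCHIVELOG, enable archive logging:
  SHUTDOWN IMMEDIATE
  STARTUP MOUNT
  ALTER DATABASE ARCHIVELOG;
  ALTER DATABASE OPEN;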

Add Supplemental Logging at Each Source Database in an Oracle Streams Environment

Confirm that the required supplemental logging has been added at each source database. Supplemental logging is required for a table if changes to the table will be captured by a capture process. In Oracle Database 10g Release 2 or later, supplemental logging is automatically added for primary key, unique index, and foreign key columns in a table when the table is prepared for instantiation. The procedures in the DBMS_STREAMS_ADM package for maintaining database objects with Oracle Streams and for adding rules automatically prepare objects for instantiation. For downstream capture, supplemental logging must be added at the database where the source redo logs are generated.

All indexed columns at a destination database, including the primary key, unique index, and foreign key columns of each replicated table, must be supplementally logged at the source database. Primary key columns must be unconditionally logged, but unique index and foreign key columns can be conditionally logged. In some cases, supplemental logging is required for additional columns in a table. For example, any columns specified in rule-based transformations or used within DML handlers must be unconditionally logged at the source database. Supplemental logging for these columns must be configured explicitly by the database administrator.
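
For example, the following statements add database-level supplemental logging for key columns and an unconditional log group for specific columns of one table. The table hr.employees, its column names, and the log group name are illustrative assumptions:

  -- Database-level supplemental logging for key columns
  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
    (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;

  -- Unconditional (ALWAYS) log group for columns used in a
  -- rule-based transformation or DML handler
  ALTER TABLE hr.employees
    ADD SUPPLEMENTAL LOG GROUP log_group_emp (employee_id, email) ALWAYS;

You can query the DBA_LOG_GROUPS and DBA_LOG_GROUP_COLUMNS views to verify the supplemental logging that is configured.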

Configure a Heartbeat Table at Each Source Database in an Oracle Streams Environment

You can use a heartbeat table to ensure that changes are being replicated in an Oracle Streams replication environment. Specifically, you can check the APPLIED_SCN value in the DBA_CAPTURE data dictionary view at the capture database to ensure that it is updated periodically. A heartbeat table is especially useful for databases that have a low activity rate because you can ensure that the replication environment is working properly even if there are not many replicated changes.

An Oracle Streams capture process requests a checkpoint after every 10 MB of generated redo. During the checkpoint, the metadata for Oracle Streams is maintained if there are active transactions. Implementing a heartbeat table ensures that there are open transactions occurring regularly in the source database, thereby providing additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table provides quick feedback to the database administrator about the health of the Oracle Streams replication environment.

To implement a heartbeat table, complete the following steps (a sketch of the key statements follows the list):

  1. Create a table at the source database that includes a date or time stamp column and the global name of the source database.

  2. Instantiate the table at the destination database. If there are multiple destination databases, then instantiate the heartbeat table at each destination database.

  3. Add a rule to the positive rule set for the capture process that captures changes to the source database. The rule instructs the capture process to capture changes to the heartbeat table.

  4. Add a rule to the positive rule set for the propagation that propagates changes from the source database to the destination database. The rule instructs the propagation to propagate LCRs for the heartbeat table. If there are multiple propagations, then add the rule to the rule set for each propagation. If your environment uses directed networks, then you might need to add rules to propagations at several databases.

  5. Add a rule to the positive rule set for the apply process that applies changes that originated at the source database. The rule instructs the apply process to apply changes to the heartbeat table. If multiple apply processes at multiple databases apply the changes that originated at the source database, then add a rule to the positive rule set for each apply process.

  6. Configure an automated job to update the heartbeat table at the source database periodically. For example, the table might be updated every minute.

  7. Monitor the Oracle Streams replication environment to verify that changes to the heartbeat table at the source database are being replicated to the destination database.
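
The following sketch illustrates steps 1, 3, and 6 at the source database. The table strmadmin.heartbeat, the capture process name strm01_capture, the queue strmadmin.streams_queue, and the job name are assumptions for illustration; substitute the names used in your environment.

  -- Step 1: Create and seed the heartbeat table at the source database
  CREATE TABLE strmadmin.heartbeat (
    source_db_name  VARCHAR2(128),
    update_time     TIMESTAMP
  );

  INSERT INTO strmadmin.heartbeat (source_db_name, update_time)
    SELECT global_name, SYSTIMESTAMP FROM global_name;
  COMMIT;

  -- Step 3: Add a rule for the heartbeat table to the capture rule set
  BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
      table_name   => 'strmadmin.heartbeat',
      streams_type => 'capture',
      streams_name => 'strm01_capture',
      queue_name   => 'strmadmin.streams_queue',
      include_dml  => TRUE,
      include_ddl  => FALSE);
  END;
  /

  -- Step 6: Update the heartbeat table every minute with an automated job
  BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name        => 'strmadmin.heartbeat_job',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'BEGIN UPDATE strmadmin.heartbeat
                          SET update_time = SYSTIMESTAMP; COMMIT; END;',
      repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
      enabled         => TRUE);
  END;
  /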

Best Practices for Capture Process Configuration

The following sections describe best practices for configuring capture processes:

Set Capture Process Parallelism

Set the parallelism of each capture process by specifying the parallelism parameter in the DBMS_CAPTURE_ADM.SET_PARAMETER procedure. The parallelism parameter controls the number of processes that concurrently mine the redo log for changes.

The default setting for the parallelism capture process parameter is 1, which is appropriate for most capture process configurations. If you raise the parallelism setting, then ensure that the PROCESSES initialization parameter is set appropriately.
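
For example, the following call sets the parallelism of a capture process to 2. The capture process name strm01_capture is an assumption:

  BEGIN
    DBMS_CAPTURE_ADM.SET_PARAMETER(
      capture_name => 'strm01_capture',
      parameter    => 'parallelism',
      value        => '2');
  END;
  /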

Set the Checkpoint Retention Time

Set the checkpoint retention time for each capture process. Periodically, a capture process takes a checkpoint to facilitate quicker restart. These checkpoints are maintained in the SYSAUX tablespace by default. The checkpoint retention time for a capture process controls the amount of checkpoint data it retains. The checkpoint retention time specifies the number of days prior to the required checkpoint SCN to retain checkpoints. When a checkpoint is older than the specified time period, the capture process purges the checkpoint.

When checkpoints are purged, the first SCN for the capture process moves forward. The first SCN is the lowest possible SCN available for capturing changes. The checkpoint retention time is set when you create a capture process, and it can be changed when you alter a capture process. When the checkpoint retention time is exceeded, the first SCN is moved forward, and the Oracle Streams metadata prior to the new first SCN is purged. The space used by this metadata in the SYSAUX tablespace is reclaimed. To alter the checkpoint retention time for a capture process, use the ALTER_CAPTURE procedure in the DBMS_CAPTURE_ADM package and specify the new retention time with the checkpoint_retention_time parameter.

The default checkpoint retention time is 60 days. If checkpoints are available for a time in the past, then the capture process can be used to recapture changes from that time to recover a destination database. Set the checkpoint retention time to a value appropriate for your environment; a typical setting is 7 days.
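
For example, the following call sets the checkpoint retention time for a capture process to 7 days. The capture process name strm01_capture is an assumption:

  BEGIN
    DBMS_CAPTURE_ADM.ALTER_CAPTURE(
      capture_name              => 'strm01_capture',
      checkpoint_retention_time => 7);
  END;
  /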

Best Practices for Capture Process Operation

The following sections describe best practices for operating existing capture processes:

Perform a Dictionary Build and Prepare Database Objects for Instantiation Periodically

Perform a data dictionary build in the source database redo periodically. Run the DBMS_CAPTURE_ADM.BUILD procedure to build a current copy of the data dictionary in the redo log. Ideally, database objects should be prepared for instantiation after a build is performed. Run one or more of the following procedures in the DBMS_CAPTURE_ADM package to prepare database objects for instantiation:

  • PREPARE_GLOBAL_INSTANTIATION

  • PREPARE_SCHEMA_INSTANTIATION

  • PREPARE_TABLE_INSTANTIATION

Each of the database objects for which a capture process captures changes should be prepared for instantiation periodically. Performing a build and preparing shared objects for instantiation periodically reduces the amount of redo data that must be processed if additional capture processes are created or if an existing capture process must be re-created.
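
For example, the following statements perform a build and then prepare a table for instantiation. The table hr.employees is an assumption; prepare each shared object at the appropriate level for your environment:

  SET SERVEROUTPUT ON
  DECLARE
    scn NUMBER;
  BEGIN
    -- Build a current copy of the data dictionary in the redo log
    DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
    DBMS_OUTPUT.PUT_LINE('First SCN for the build: ' || scn);
  END;
  /

  BEGIN
    -- Prepare the table for instantiation with key-column supplemental logging
    DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
      table_name           => 'hr.employees',
      supplemental_logging => 'keys');
  END;
  /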

Minimize the Performance Impact of Batch Processing

For best performance, the commit point for a batch processing job should be kept low. Also, if a large batch processing job must be run at a source database, then consider running it at each Oracle Streams replication database independently. If this technique is used, then ensure that the changes resulting from the batch processing job are not replicated. To accomplish this, run the DBMS_STREAMS.SET_TAG procedure in the session that runs the batch processing job, and set the session tag to a value that will not be captured by a capture process.
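
For example, the following calls set a non-NULL session tag before the batch job runs and reset it afterward. With the default rules created by the DBMS_STREAMS_ADM package, changes made in a session with a non-NULL tag are not captured; the tag value shown is arbitrary:

  BEGIN
    -- Set a session tag so that the batch job's changes are not captured
    DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));
  END;
  /

  -- Run the batch processing job in this session

  BEGIN
    -- Reset the tag so that subsequent changes are captured normally
    DBMS_STREAMS.SET_TAG(tag => NULL);
  END;
  /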

Best Practices for Synchronous Capture Configuration

Creating and managing a synchronous capture is simplified when you use the DBMS_STREAMS_ADM package. Specifically, use the following procedures in the DBMS_STREAMS_ADM package to create a synchronous capture and configure synchronous capture rules:

  • ADD_TABLE_RULES

  • ADD_SUBSET_RULES
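
For example, the following call creates a synchronous capture, if one with the specified name does not already exist, and adds a table rule to its positive rule set. The object names are assumptions; synchronous capture captures only DML changes:

  BEGIN
    DBMS_STREAMS_ADM.ADD_TABLE_RULES(
      table_name   => 'hr.employees',
      streams_type => 'sync_capture',
      streams_name => 'sync01_capture',
      queue_name   => 'strmadmin.streams_queue');
  END;
  /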

Also, use the REMOVE_RULE procedure in the DBMS_STREAMS_ADM package to remove a rule from a synchronous capture rule set or to drop the rule entirely.