Oracle® Streams Replication Administrator's Guide
10g Release 2 (10.2)

Part Number B14228-01

6 Simple Streams Replication Configuration

This chapter describes simple methods for configuring Streams replication between two databases.

This chapter contains these topics:

  • Configuring Replication Using a Streams Wizard in Enterprise Manager

  • Configuring Replication Using the DBMS_STREAMS_ADM Package

Configuring Replication Using a Streams Wizard in Enterprise Manager

The Streams tool in Enterprise Manager includes two wizards that configure a Streams replication environment. The following sections describe the wizards and how to open them:

Streams Global, Schema, Table, and Subset Replication Wizard

The Streams Global, Schema, Table, and Subset Replication wizard can configure a Streams environment that replicates changes to the entire source database, certain schemas in the source database, certain tables in the source database, or subsets of tables in the source database.

This wizard can configure a Streams environment that maintains DML changes, DDL changes, or both. The database objects configured for replication by this wizard can be in multiple tablespaces in your source database. This wizard only configures a single-source replication environment. It cannot configure a bi-directional replication environment.

You can run this wizard in Database Control or Grid Control. To run this wizard in Database Control, meet the following requirements:

  • This wizard, or the scripts generated by this wizard, must be run at an Oracle Database 10g Release 2 database.

  • The destination database configured by this wizard must be an Oracle Database 10g Release 1 or later database.

To run this wizard in Grid Control, meet the following requirements:

  • This wizard, or the scripts generated by this wizard, must be run at an Oracle9i Release 2 (9.2) or later database.

  • The destination database configured by this wizard must be an Oracle9i Release 2 (9.2) or later database.

Figure 6-1 shows the opening page of the Streams Global, Schema, Table, and Subset Replication wizard.

Figure 6-1 Streams Global, Schema, Table, and Subset Replication Wizard


Streams Tablespace Replication Wizard

The Streams Tablespace Replication wizard can configure a Streams environment that replicates changes to all of the database objects in a particular self-contained tablespace or in a set of self-contained tablespaces. A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. When there is more than one tablespace in a tablespace set, a self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces.

This wizard can configure a single-source replication environment or a bi-directional replication environment. This wizard does not configure the Streams environment to maintain DDL changes to the tablespace set nor to the database objects in the tablespace set. For example, the Streams environment is not configured to replicate ALTER TABLESPACE statements on the tablespace, nor is it configured to replicate ALTER TABLE statements on tables in the tablespace.

You can run this wizard in Database Control or Grid Control. To run this wizard in Database Control, meet the following requirements:

  • This wizard, or the scripts generated by this wizard, must be run at an Oracle Database 10g Release 2 database.

  • If this wizard configures the replication environment directly (not with scripts), then both databases must be Oracle Database 10g Release 2 databases.

  • If the replication environment is configured with scripts generated by this wizard, then the destination database must be an Oracle Database 10g Release 1 or later database. If the script configures an Oracle Database 10g Release 1 database, then the script must be modified so that it does not configure features that are available only in Oracle Database 10g Release 2, such as queue-to-queue propagation.

To run this wizard in Grid Control, meet the following requirements:

  • Each database configured by this wizard, or the scripts generated by this wizard, must be an Oracle Database 10g Release 1 or later database.

  • This wizard, or the scripts generated by this wizard, must be run at an Oracle Database 10g Release 1 or later database.

  • If this wizard is run at an Oracle Database 10g Release 2 database, and the wizard configures the replication environment directly (not with scripts), then both databases must be Oracle Database 10g Release 2 databases.

  • If this wizard is run at an Oracle Database 10g Release 2 database, and the replication environment is configured with generated scripts, then the destination database must be an Oracle Database 10g Release 1 or later database. If the script configures an Oracle Database 10g Release 1 database, then the script must be modified so that it does not configure features that are available only in Oracle Database 10g Release 2, such as queue-to-queue propagation.

Figure 6-2 shows the opening page of the Streams Tablespace Replication wizard.

Figure 6-2 Streams Tablespace Replication Wizard


Opening a Streams Replication Configuration Wizard

Both wizards configure, or produce scripts to configure, a Streams replication environment. A capture process is configured to capture changes to the specified database objects at the source database. A propagation is configured at the source database to propagate each change in the form of a logical change record (LCR) from the source database to the destination database. An apply process at the destination database applies the LCRs to make the changes at the destination database. If you use the Streams Tablespace Replication wizard to configure a bi-directional replication environment, then each database captures changes and propagates them to the other database, and each database applies changes from the other database. Both wizards also perform an instantiation of the specified database objects.

To open one of these wizards, complete the following steps in Enterprise Manager:

  1. Navigate to the Database Home page of a database that will be a source database in the replication environment.

  2. Select the Maintenance tab.

  3. Click Setup in the Streams section.

  4. Click the wizard you want to use in the Setup Options list. Click Help for more information.


    Note:

    • Any source database that generates redo data that will be captured by a capture process must run in ARCHIVELOG mode.

    • You might need to configure conflict resolution if bi-directional replication is configured.


Configuring Replication Using the DBMS_STREAMS_ADM Package

The following procedures in the DBMS_STREAMS_ADM package configure a replication environment that is maintained by Streams:

  • MAINTAIN_GLOBAL

  • MAINTAIN_SCHEMAS

  • MAINTAIN_SIMPLE_TTS

  • MAINTAIN_TABLES

  • MAINTAIN_TTS

  • PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP

The following sections contain instructions for preparing to run one of these procedures and examples that illustrate common scenarios:


See Also:

Oracle Database PL/SQL Packages and Types Reference for more information about these procedures

Preparing to Configure Streams Replication Using the DBMS_STREAMS_ADM Package

The following sections describe decisions to make and actions to complete before configuring replication with a procedure in the DBMS_STREAMS_ADM package:

Decisions to Make Before Configuring Streams Replication

Make the following decisions before configuring Streams replication:

Decide Whether to Maintain DDL Changes

These procedures configure the replication environment to maintain data manipulation language (DML) changes to the specified database object by default. DML changes include INSERT, UPDATE, DELETE, and LOB update operations. You must decide whether you want the replication environment to maintain data definition language (DDL) changes as well. Examples of statements that result in DDL changes are CREATE TABLE, ALTER TABLE, ALTER TABLESPACE, and ALTER DATABASE.

Some Streams replication environments assume that the database objects are the same at each database. In this case, maintaining DDL changes with Streams makes it easy to keep the shared database objects synchronized. However, some Streams replication environments require that shared database objects are different at different databases. For example, a table can have a different name or shape at two different databases. In these environments, rule-based transformations and apply handlers can modify changes so that they can be shared between databases, and you might not want to maintain DDL changes with Streams.

The include_ddl parameter controls whether the procedure configures Streams replication to maintain DDL changes:

  • To configure a Streams replication environment that does not maintain DDL changes, set the include_ddl parameter to false when you run one of these procedures. The default value for this parameter is false.

  • To configure a Streams replication environment that maintains DDL changes, set the include_ddl parameter to true when you run one of these procedures.


Note:

The MAINTAIN_SIMPLE_TTS procedure does not include the include_ddl parameter. A Streams replication environment configured by the MAINTAIN_SIMPLE_TTS procedure only maintains DML changes.
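
For example, the following is a minimal sketch of a MAINTAIN_SCHEMAS call that maintains both DML and DDL changes for the hr schema. The directory object names and database names match the examples later in this chapter, and all other parameters are left at their defaults, so the sketch illustrates only the include_ddl setting rather than a complete configuration.

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'SOURCE_DIRECTORY',
    destination_directory_object => 'DEST_DIRECTORY',
    source_database              => 'stm1.net',
    destination_database         => 'stm2.net',
    include_ddl                  => true);  -- maintain DDL changes in addition to DML changes
END;
/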



Decide Whether to Configure Local or Downstream Capture for the Source Database

Local capture means that a capture process runs on the source database. Downstream capture means that a capture process runs on a database other than the source database. These procedures can either configure local capture or downstream capture for the database specified in the source_database parameter.

The database that captures changes made to the source database is called the capture database. These procedures can configure one of the following databases as the capture database:

  • Source database (local capture)

  • Destination database (downstream capture)

  • A third database (downstream capture)

Figure 6-3 shows the role of the capture database.

Figure 6-3 The Capture Database


The database on which the procedure is run is configured as the capture database for changes made to the source database. Therefore, to configure local capture at the source database, run the procedure at the source database. To configure downstream capture at the destination database or a third database, run the procedure at the destination database or third database.

If the source database or a third database is the capture database, then these procedures configure a propagation to propagate changes from the capture database to the destination database. If the destination database is the capture database, then this propagation between databases is not needed.

Also, the capture_name and capture_queue_name parameters must be set to NULL when both of the following conditions are met:

  • The destination database is the capture database.

  • The bi_directional parameter is set to true.

When both of these conditions are met, these procedures configure two capture processes at the destination database, and these capture processes must have different names. When the capture_name and capture_queue_name parameters are set to NULL, the system generates different names for the capture processes. These procedures raise an error if both conditions are met and either the capture_name parameter or the capture_queue_name parameter is set to a non-NULL value.


Note:

  • When these procedures configure downstream capture, they always configure archived-log downstream capture. These procedures do not configure real-time downstream capture. However, the scripts generated by these procedures can be modified to configure real-time downstream capture.

  • If these procedures configure bi-directional replication, then the capture process for the destination database always is a local capture process. That is, these procedures always configure the capture process for changes made to the destination database to run on the destination database.




Decide Whether Replication Is Bi-Directional

These procedures set up either a single-source Streams configuration with the database specified in the source_database parameter as the source database, or a bi-directional Streams configuration with both databases acting as source and destination databases. The bi_directional parameter in each procedure controls whether the Streams configuration is single source or bi-directional.

  • If the bi_directional parameter is false, then a capture process captures changes made to the source database and an apply process at the destination database applies these changes. If the destination database is not the capture database, then a propagation propagates the captured changes to the destination database. The default value for this parameter is false.

  • If the bi_directional parameter is true, then a separate capture process captures changes made to each database, propagations propagate these changes to the other database, and each database applies changes from the other database.

When a replication environment is not bi-directional, and no changes are allowed at the destination database, Streams keeps the shared database objects synchronized at the databases. However, when a replication environment is not bi-directional, and independent changes are allowed at the destination database, the shared database objects might diverge between the databases. Independent changes can be made by users, by applications, or by replication with a third database.

These procedures cannot be used to configure multi-directional replication where changes can be cycled back to a source database by a third database in the environment. For example, these procedures cannot be used to configure a Streams replication environment with three databases where each database shares changes with the other two databases in the environment. If these procedures were used to configure such a three-way replication environment, then changes made at a source database would be cycled back to the same source database. In a valid three-way replication environment, a particular change is made only once at each database. However, you can add more databases to the Streams environment after using one of these procedures to configure it, and these procedures can be used to configure a "hub and spoke" replication environment.


Note:

  • If you set the bi_directional parameter to true when you run one of these procedures, then do not allow data manipulation language (DML) or data definition language (DDL) changes to the shared database objects at the destination database while the procedure, or the script generated by the procedure, is running. This restriction does not apply if a procedure is configuring a single-source replication environment.

  • You might need to configure conflict resolution if bi-directional replication is configured.


Decide Whether to Configure Replication Directly or Generate a Script

These procedures can configure the Streams replication environment directly, or they can generate a script that configures the environment. Using a procedure to configure replication directly is simpler than running a script, and the environment is configured immediately. However, you might choose to generate a script for the following reasons:

  • You want to review the actions performed by the procedure before configuring the environment.

  • You want to modify the script to customize the configuration.

For example, you might want an apply process to use apply handlers for customized processing of the changes to certain tables before applying these changes. In this case, you can use the procedure to generate a script and modify the script to add the apply handlers.

You also might want to maintain DML changes for a number of tables, but you might want to maintain DDL changes for a subset of these tables. In this case, you can generate a script by running the MAINTAIN_TABLES procedure with the include_ddl parameter set to false. You can modify the script to maintain DDL changes for the appropriate tables.

The perform_actions parameter controls whether the procedure configures the replication environment directly:

  • To configure a Streams replication environment directly when you run one of these procedures, set the perform_actions parameter to true. The default value for this parameter is true.

  • To generate a configuration script when you run one of these procedures, set the perform_actions parameter to false, and use the script_name and script_directory_object parameters to specify the name and location of the configuration script.
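
For example, the following is a hedged sketch of generating a configuration script rather than configuring the environment directly. The script name and the SCRIPT_DIRECTORY directory object are hypothetical, and the directory object must exist before the procedure is run.

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'SOURCE_DIRECTORY',
    destination_directory_object => 'DEST_DIRECTORY',
    source_database              => 'stm1.net',
    destination_database         => 'stm2.net',
    perform_actions              => false,                    -- generate a script only
    script_name                  => 'configure_streams.sql',  -- hypothetical script name
    script_directory_object      => 'SCRIPT_DIRECTORY');      -- hypothetical directory object
END;
/

After reviewing or editing the generated script, run it as the Streams administrator to perform the configuration.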

Decide How to Perform Instantiation

The MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES procedures provide options for instantiation. Instantiation is the process of preparing database objects for instantiation at a source database, optionally copying the database objects from a source database to a destination database, and setting the instantiation SCN for each instantiated database object.

When you run one of these three procedures, you can choose to perform the instantiation in one of the following ways:

  • Data Pump Export Dump File Instantiation: This option performs a Data Pump export of the shared database objects at the source database and a Data Pump import of the export dump file at the destination database. The instantiation SCN is set for each shared database object during import.

    To specify this instantiation option, set the instantiation parameter to one of the following values:

    • DBMS_STREAMS_ADM.INSTANTIATION_FULL if you run the MAINTAIN_GLOBAL procedure

    • DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA if you run the MAINTAIN_SCHEMAS procedure

    • DBMS_STREAMS_ADM.INSTANTIATION_TABLE if you run the MAINTAIN_TABLES procedure

    If the bi_directional parameter is set to true, then the procedure also sets the instantiation SCN for each shared database object at the source database.

  • Data Pump Network Import Instantiation: This option performs a network Data Pump import of the shared database objects. A network import means that Data Pump performs the import without using an export dump file. Therefore, directory objects do not need to be created for instantiation purposes when you use this option. The instantiation SCN is set for each shared database object during import.

    To specify this instantiation option, set the instantiation parameter to one of the following values:

    • DBMS_STREAMS_ADM.INSTANTIATION_FULL_NETWORK if you run the MAINTAIN_GLOBAL procedure

    • DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA_NETWORK if you run the MAINTAIN_SCHEMAS procedure

    • DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK if you run the MAINTAIN_TABLES procedure

    If the bi_directional parameter is set to true, then the procedure also sets the instantiation SCN for each shared database object at the source database.

  • Generate a Configuration Script with No Instantiation Specified: This option does not perform an instantiation. This setting is valid only if the perform_actions parameter is set to false, and the procedure generates a configuration script. In this case, the configuration script does not perform an instantiation and does not set the instantiation SCN for each shared database object. Instead, you must perform the instantiation and ensure that instantiation SCN values are set properly.

    To specify this instantiation option, set the instantiation parameter to DBMS_STREAMS_ADM.INSTANTIATION_NONE in each procedure.

The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures do not perform an instantiation. You must perform any required instantiation actions manually after running PRE_INSTANTIATION_SETUP and before running POST_INSTANTIATION_SETUP. You also must perform any required instantiation actions manually if you use the MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, and MAINTAIN_TABLES procedures and set the instantiation parameter to DBMS_STREAMS_ADM.INSTANTIATION_NONE.

In these cases, you can use any instantiation method. For example, you can use Recovery Manager (RMAN) to perform a database instantiation using the RMAN DUPLICATE or CONVERT DATABASE command or a tablespace instantiation using the RMAN TRANSPORT TABLESPACE command. If the bi_directional parameter is set to true, then make sure the instantiation SCN values are set properly at the source database as well as the destination database.
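
If you perform the instantiation yourself, you must also ensure that the instantiation SCN is set when your instantiation method does not set it. The following is a minimal sketch, run at the destination database, that sets the schema instantiation SCN for a hypothetical hr schema by obtaining the current SCN from the source database over a database link. The appropriate SCN depends on your instantiation method; this sketch only illustrates the API.

CONNECT strmadmin/strmadminpw@stm2.net

DECLARE
  iscn  NUMBER;  -- instantiation SCN obtained from the source database
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER@STM1.NET;
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'hr',
    source_database_name => 'stm1.net',
    instantiation_scn    => iscn,
    recursive            => true);  -- also set the instantiation SCN for the tables in the schema
END;
/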


Note:

  • The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures do not provide these instantiation options. These procedures always perform an instantiation by cloning the tablespace or tablespace set, transferring the files required for instantiation to the destination database, and attaching the tablespace or tablespace set at the destination database.

  • If one of these procedures performs an instantiation, then the database objects, tablespace, or tablespace set being configured for replication must exist at the source database, but they must not exist at the destination database.

  • If the RMAN DUPLICATE or CONVERT DATABASE command is used for database instantiation, then the destination database cannot be the capture database.


Tasks to Complete Before Configuring Streams Replication

The following sections describe tasks to complete before configuring Streams replication:

Configure a Streams Administrator on All Databases

The Streams administrator at each database must have the required privileges to perform the configuration actions. The examples in this chapter assume that the username of the Streams administrator is strmadmin at each database.


See Also:

Oracle Streams Concepts and Administration for information about configuring a Streams administrator

Create One or More Database Links

A database link from the source database to the destination database always is required before running one of the procedures. A database link from the destination database to the source database is required in any of the following cases:

  • The Streams replication environment will be bi-directional.

  • A Data Pump network import will be performed during instantiation.

  • The destination database is the capture database for downstream capture of source database changes.

  • The RMAN DUPLICATE or CONVERT DATABASE command will be used for database instantiation.

    This database link is required because the POST_INSTANTIATION_SETUP procedure with a non-NULL setting for the instantiation_scn parameter runs the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at the destination database. The SET_GLOBAL_INSTANTIATION_SCN procedure requires the database link. This database link must be created after the RMAN instantiation and before running the POST_INSTANTIATION_SETUP procedure.

If a third database is the capture database for downstream capture of source database changes, then the following database links are required:

  • A database link is required from the third database to the source database.

  • A database link is required from the third database to the destination database.

Each database link should be created in the Streams administrator's schema. For example, if the source database is stm1.net, the destination database is stm2.net, and the Streams administrator is strmadmin at each database, then the following statement creates the database link from the source database to the destination database:

CONNECT strmadmin/strmadminpw@stm1.net

CREATE DATABASE LINK stm2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
   USING 'stm2.net';

If a database link is required from the destination database to the source database, then the following statement creates this database link:

CONNECT strmadmin/strmadminpw@stm2.net

CREATE DATABASE LINK stm1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
   USING 'stm1.net';

If a third database is the capture database, then a database link is required from the third database to the source and destination databases. For example, if the third database is stm3.net, then the following statements create the database links from the third database to the source and destination databases:

CONNECT strmadmin/strmadminpw@stm3.net

CREATE DATABASE LINK stm1.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
   USING 'stm1.net';

CREATE DATABASE LINK stm2.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw 
   USING 'stm2.net';

If an RMAN database instantiation is performed, then the database link at the source database is copied to the destination database during instantiation. This copied database link should be dropped at the destination database. In this case, if the replication is bi-directional, and a database link from the destination database to the source database is required, then this database link should be created after the instantiation.

Create the Required Directory Objects

A directory object is similar to an alias for a directory on a file system. The following directory objects might be required when you run one of these procedures:

  • A script directory object is required if you decided to generate a configuration script. The configuration script is placed in this directory on the computer system where the procedure is run. Use the script_directory_object parameter when you run one of these procedures to specify the script directory object.

  • A source directory object is required if you decided to perform a Data Pump export dump file instantiation, and you will use one of the following procedures: MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, MAINTAIN_SIMPLE_TTS, MAINTAIN_TABLES, or MAINTAIN_TTS. The Data Pump export dump file and log file are placed in this directory on the computer system running the source database. Use the source_directory_object parameter when you run one of these procedures to specify the source directory object. This directory object is not required if you will use the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures.

  • A destination directory object is required if you decided to perform a Data Pump export dump file instantiation, and you will use one of the following procedures: MAINTAIN_GLOBAL, MAINTAIN_SCHEMAS, MAINTAIN_SIMPLE_TTS, MAINTAIN_TABLES, or MAINTAIN_TTS. The Data Pump export dump file is transferred from the computer system running the source database to the computer system running the destination database and placed in this directory on the computer system running the destination database. Use the destination_directory_object parameter when you run one of these procedures to specify the destination directory object. This directory object is not required if you will use the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures.

Each directory object must be created using the SQL statement CREATE DIRECTORY, and the user who invokes one of the procedures must have READ and WRITE privilege on each directory object. For example, the following statement creates a directory object named db_files_directory that corresponds to the /usr/db_files directory:

CONNECT strmadmin/strmadminpw
 
CREATE DIRECTORY db_files_directory AS '/usr/db_files';

Because this directory object was created by the Streams administrator (strmadmin), this user automatically has READ and WRITE privilege on the directory object.
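
If a different user had created the directory object, then the Streams administrator would need to be granted these privileges explicitly, as in the following sketch:

GRANT READ, WRITE ON DIRECTORY db_files_directory TO strmadmin;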

Make Sure Each Source Database Is In ARCHIVELOG Mode

Each source database must be in ARCHIVELOG mode before you run one of these procedures (or the script generated by one of these procedures). A source database is a database that generates changes that will be captured by a capture process. If the procedure configures a bi-directional replication environment, then the destination database is also a source database and must be in ARCHIVELOG mode as well.
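
The following is a hedged sketch of checking the log mode and, if necessary, enabling ARCHIVELOG mode. Enabling ARCHIVELOG mode requires restarting the database, so plan for the downtime.

CONNECT / AS SYSDBA

SELECT LOG_MODE FROM V$DATABASE;

-- If the query returns NOARCHIVELOG, enable ARCHIVELOG mode:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;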



Configure Log File Copying for Downstream Capture

If you decided to use a local capture process at the source database, then log file copying is not required. However, if you decided to use downstream capture for the source database, then configure log file copying from the source database to the capture database before you run the procedure.

Complete the following steps to prepare the source database to copy its redo log files to the capture database, and to prepare the capture database to accept these redo log files:

  1. Configure Oracle Net so that the source database can communicate with the capture database.

  2. Set the following initialization parameters to configure redo transport services to copy archived redo log files from the source database to the capture database:

    • At the source database, set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to a directory on the computer system running the capture database. To do this, set the following attributes of this parameter:

      • SERVICE - Specify the network service name of the capture database.

      • ARCH or LGWR ASYNC - If you specify ARCH (the default), then the archiver process (ARCn) will archive the redo log files to the capture database. If you specify LGWR ASYNC, then the log writer process (LGWR) will archive the redo log files to the capture database. Either ARCH or LGWR ASYNC is acceptable for a capture database destination.

      • MANDATORY or OPTIONAL - If you specify MANDATORY, then archiving of a redo log file to the capture database must succeed before the corresponding online redo log at the source database can be overwritten. If you specify OPTIONAL, then successful archiving of a redo log file to the capture database is not required before the corresponding online redo log at the source database can be overwritten. Either MANDATORY or OPTIONAL is acceptable for a capture database destination. If neither the MANDATORY nor the OPTIONAL attribute is specified, then the default is OPTIONAL.

      • NOREGISTER - Specify this attribute so that the location of the archived redo log files is not recorded in the capture database control file.

      • TEMPLATE - Specify a directory and format template for archived redo logs at the capture database. The TEMPLATE attribute overrides the LOG_ARCHIVE_FORMAT initialization parameter settings at the capture database. The TEMPLATE attribute is valid only with remote destinations. Make sure the format uses all of the following variables at each source database: %t, %s, and %r.

      The following is an example of a LOG_ARCHIVE_DEST_n setting that specifies a capture database:

      LOG_ARCHIVE_DEST_2='SERVICE=STM2.NET ARCH OPTIONAL NOREGISTER
         TEMPLATE=/usr/oracle/log_for_stm1/stm1_arch_%t_%s_%r.log'
      
      

      If another source database transfers log files to this capture database, then, in the initialization parameter file at this other source database, you can use the TEMPLATE attribute to specify a different directory and format for the log files at the capture database. The log files from each source database are kept separate at the capture database.


      Tip:

      Log files from a remote source database should be kept separate from local database log files. In addition, if the capture database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.

    • At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the capture database to ENABLE.

      For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the capture database, then set the LOG_ARCHIVE_DEST_STATE_2 initialization parameter in the following way:

      LOG_ARCHIVE_DEST_STATE_2=ENABLE 
      
      
    • At the source database, make sure the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the send value.

    • At the downstream database, make sure the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the receive value.


    See Also:

    Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters

  3. If you reset any initialization parameters while the instance is running at a database in Step 2, then you might want to reset them in the initialization parameter file as well, so that the new values are retained when the database is restarted.

    If you did not reset the initialization parameters while the instance was running, but instead reset them in the initialization parameter file in Step 2, then restart the database. The source database must be open when it sends redo log files to the capture database because the global name of the source database is sent to the capture database only if the source database is open.
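
The following is a hedged sketch of making the Step 2 settings dynamically with ALTER SYSTEM, using the service and template from the earlier example and assuming that a server parameter file is in use so that SCOPE=BOTH persists the changes:

-- At the source database (stm1.net)
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=STM2.NET ARCH OPTIONAL NOREGISTER TEMPLATE=/usr/oracle/log_for_stm1/stm1_arch_%t_%s_%r.log' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH;

-- At both databases, verify the LOG_ARCHIVE_CONFIG setting
-- (the source must include send, and the capture database must include receive)
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'log_archive_config';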

Make Sure the Initialization Parameters Are Set Properly

Certain initialization parameters are important in a Streams environment. Make sure the initialization parameters are set properly at all databases before running one of the procedures.
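
For example, the following is an illustrative (not exhaustive) check of a few initialization parameters that are commonly important for Streams:

SHOW PARAMETER compatible
SHOW PARAMETER global_names
SHOW PARAMETER job_queue_processes
SHOW PARAMETER streams_pool_size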


See Also:

Oracle Streams Concepts and Administration for information about initialization parameters that are important in a Streams environment

Configuring Database Replication Using the DBMS_STREAMS_ADM Package

You can use the following procedures in the DBMS_STREAMS_ADM package to configure database replication:

  • MAINTAIN_GLOBAL

  • PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP

The MAINTAIN_GLOBAL procedure automatically excludes database objects that are not supported by Streams from the replication environment. The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures do not automatically exclude database objects. Instead, these procedures enable you to specify which database objects to exclude from the replication environment. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by Streams. If unsupported database objects are not excluded, then capture errors will result.
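
For example, the following query lists the unsupported database objects and the reason each one is unsupported:

SELECT OWNER, TABLE_NAME, REASON
  FROM DBA_STREAMS_UNSUPPORTED
  ORDER BY OWNER, TABLE_NAME;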

The example in this section uses the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures to configure database replication. The replication configuration will exclude all database objects that are not supported by Streams. The source database is stm1.net, and the destination database is stm2.net.

Assume that the following decisions were made about the configuration:

  • DDL changes will be maintained.

  • Local capture will be configured for the source database.

  • The replication environment will be bi-directional.

  • An RMAN database instantiation will be performed.

  • The procedures will configure the replication environment directly. Configuration scripts will not be generated.


Note:

A capture process never captures changes in the SYS, SYSTEM, or CTXSYS schemas. Changes to these schemas are not maintained by Streams in the replication configuration described in this section.


See Also:

"Decisions to Make Before Configuring Streams Replication" for more information about these decisions

Complete the following steps to use the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures to configure the environment:

  1. Complete the required tasks before running the PRE_INSTANTIATION_SETUP procedure. See "Tasks to Complete Before Configuring Streams Replication" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure a Streams administrator at both databases.

    • Create a database link from the source database stm1.net to the destination database stm2.net.

    • Make sure both databases are in ARCHIVELOG mode.

    • Make sure the initialization parameters are set properly at both databases.

    A database link from the destination database to the source database is also required because the replication environment will be bi-directional and because RMAN will be used for database instantiation. However, because RMAN will be used for the instantiation, this database link must be created after the instantiation.

  2. Connect to the source database as the Streams administrator, and run the PRE_INSTANTIATION_SETUP procedure:

    CONNECT strmadmin/strmadminpw@stm1.net
    
    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
        maintain_mode        => 'GLOBAL',
        tablespace_names     => empty_tbs,
        source_database      => 'stm1.net',
        destination_database => 'stm2.net',
        perform_actions      => true,
        bi_directional       => true,
        include_ddl          => true,
        start_processes      => true,
        exclude_schemas      => '*',
        exclude_flags        => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    
    

    Notice that the start_processes parameter is set to true. Therefore, each capture process and apply process created during the configuration is started automatically.

    Also, notice the values specified for the exclude_schemas and exclude_flags parameters. The asterisk (*) specified for exclude_schemas indicates that certain database objects in every schema in the database might be excluded from the replication environment. The value specified for the exclude_flags parameter indicates that DML and DDL changes for all unsupported database objects are excluded from the replication environment. Rules are placed in the negative rule sets for the capture processes to exclude these database objects.

    Because the procedure is run at the source database, local capture is configured at the source database.

    Because this procedure configures a bi-directional replication environment, do not allow DML or DDL changes to the shared database objects at the destination database while the procedure is running.

    If this procedure encounters an error and stops, then see "Recovering from Configuration Errors" for information about either recovering from the error or rolling back the configuration operation.

  3. Perform the instantiation. You can use any of the methods described in Chapter 10, "Performing Instantiations" to complete the instantiation. This example uses the RMAN DUPLICATE command and involves the following steps:

    a. Create a backup of the source database (stm1.net in this example) if one does not exist. RMAN requires a valid backup for duplication.

    b. Connect to the source database as the Streams administrator and determine the until SCN for the RMAN DUPLICATE command:

      CONNECT strmadmin/strmadminpw@stm1.net
      
      SET SERVEROUTPUT ON SIZE 1000000
      DECLARE
        until_scn NUMBER;
      BEGIN
        until_scn:= DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
            DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
      END;
      /
      
      

      Make a note of the until SCN returned. You will use this number in Step e. For this example, assume that the returned until SCN is 45442631.

    c. Connect to the source database as a system administrator in SQL*Plus and archive the current online redo log:

      ALTER SYSTEM ARCHIVE LOG CURRENT;
      
      
    d. Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. See Oracle Database Backup and Recovery Advanced User's Guide for instructions.

    e. Use the RMAN DUPLICATE command with the OPEN RESTRICTED option to instantiate the source database at the destination database. The OPEN RESTRICTED option is required. This option enables a restricted session in the duplicate database by issuing the following SQL statement: ALTER SYSTEM ENABLE RESTRICTED SESSION. RMAN issues this statement immediately before the duplicate database is opened.

      You can use the UNTIL SCN clause to specify an SCN for the duplication. Use the until SCN determined in Step b for this clause. Archived redo logs must be available for the until SCN specified and for higher SCN values. Therefore, Step c archived the redo log containing the until SCN.

      Make sure you use TO database_name in the DUPLICATE command to specify the name of the duplicate database. In this example, the database name of the duplicate database is stm2. Therefore, the DUPLICATE command for this example includes TO stm2.

      The following is an example of an RMAN DUPLICATE command:

      rman
      RMAN> CONNECT TARGET SYS/change_on_install@stm1.net
      RMAN> CONNECT AUXILIARY SYS/change_on_install@stm2.net
      RMAN> RUN
            { 
              SET UNTIL SCN 45442631;
              ALLOCATE AUXILIARY CHANNEL stm2 DEVICE TYPE sbt; 
              DUPLICATE TARGET DATABASE TO stm2 
              NOFILENAMECHECK
              OPEN RESTRICTED;
            }
      
      
    f. Connect to the destination database as a system administrator in SQL*Plus, and rename the global name. After an RMAN database instantiation, the destination database has the same global name as the source database. Rename the global name of the destination database back to its original name with the following statement:

      ALTER DATABASE RENAME GLOBAL_NAME TO stm2.net;
      
      
    g. Connect to the destination database as the Streams administrator, and drop the database link from the source database to the destination database that was cloned from the source database:

      CONNECT strmadmin/strmadminpw@stm2.net
      
      DROP DATABASE LINK stm2.net;
      
      
  4. While still connected to the destination database as the Streams administrator, create a database link from the destination database to the source database:

    CREATE DATABASE LINK stm1.net CONNECT TO strmadmin 
      IDENTIFIED BY strmadminpw USING 'stm1.net';
    
    

    See Step 1 for information about why this database link is required.

  5. Connect to the source database as the Streams administrator, and run the POST_INSTANTIATION_SETUP procedure:

    CONNECT strmadmin/strmadminpw@stm1.net
    
    DECLARE
      empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET; 
    BEGIN
      DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
        maintain_mode        => 'GLOBAL',
        tablespace_names     => empty_tbs,
        source_database      => 'stm1.net',
        destination_database => 'stm2.net',
        perform_actions      => true,
        bi_directional       => true,
        include_ddl          => true,
        start_processes      => true,
        instantiation_scn    => 45442630,
        exclude_schemas      => '*',
        exclude_flags        => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML + 
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
    END;
    /
    
    

    The parameter values specified in both the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures must match, except for the values of the following parameters: perform_actions, script_name, script_directory_object, and start_processes.

    Also, notice that the instantiation_scn parameter is set to 45442630. The RMAN DUPLICATE command duplicates the database up to one less than the SCN value specified in the UNTIL SCN clause. Therefore, you should subtract one from the until SCN value that you specified when you ran the DUPLICATE command in Step 3e. In this example, the until SCN was set to 45442631. Therefore, the instantiation_scn parameter should be set to 45442631 - 1, or 45442630.

    If the instantiation SCN was set for the shared database objects at the destination database during instantiation, then the instantiation_scn parameter should be set to NULL. For example, the instantiation SCN might be set during a full database export/import.

    Because this procedure configures a bi-directional replication environment, do not allow DML or DDL changes to the shared database objects at the destination database while the procedure is running.

    If this procedure encounters an error and stops, then see "Recovering from Configuration Errors" for information about either recovering from the error or rolling back the configuration operation.

  6. At the destination database, connect as an administrator with SYSDBA privilege in SQL*Plus and use the ALTER SYSTEM statement to disable the RESTRICTED SESSION:

    ALTER SYSTEM DISABLE RESTRICTED SESSION;
    
    
  7. Configure conflict resolution for the shared database objects if necessary.

    Typically, conflicts are possible in a bi-directional replication environment. If conflicts are possible in the environment created by the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures, then configure conflict resolution before you allow users to make changes to the shared database objects.
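
    For example, the following is a minimal sketch of setting a prebuilt update conflict handler with the DBMS_APPLY_ADM package. The hr.employees table and its columns are used only for illustration; choose the tables, columns, and resolution method that fit your environment, and run a similar block at each database that applies changes.

    DECLARE
      cols  DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'salary';
      cols(2) := 'commission_pct';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'hr.employees',
        method_name       => 'OVERWRITE',   -- the incoming change overwrites the current row
        resolution_column => 'salary',
        column_list       => cols);
    END;
    /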

The bi-directional replication environment configured in this example has the following characteristics:

  • Database supplemental logging is configured at both databases.

  • The stm1.net database has two queues and queue tables with system-generated names. One queue is for the local capture process, and one queue is for the apply process.

  • The stm2.net database has two queues and queue tables with system-generated names. One queue is for the local capture process, and one queue is for the apply process.

  • At the stm1.net database, a capture process with a system-generated name captures DML and DDL changes to all of the database objects in the database that are supported by Streams.

  • At the stm2.net database, a capture process with a system-generated name captures DML and DDL changes to all of the database objects in the database that are supported by Streams.

  • A propagation running on the stm1.net database with a system-generated name propagates the captured changes from a queue at the stm1.net database to a queue at the stm2.net database.

  • A propagation running on the stm2.net database with a system-generated name propagates the captured changes from a queue at the stm2.net database to a queue at the stm1.net database.

  • At the stm1.net database, an apply process with a system-generated name dequeues the changes from its queue and applies them to the database objects.

  • At the stm2.net database, an apply process with a system-generated name dequeues the changes from its queue and applies them to the database objects.

  • Tags are used to avoid change cycling. Specifically, each apply process uses an apply tag so that redo records for changes applied by the apply process include the tag. Each apply process uses an apply tag that is unique in the replication environment. Each propagation discards changes that have the tag of the apply process running on the same database.
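
A hedged way to verify the tags after configuration is to query the DBA_APPLY data dictionary view, which shows the tag used by each apply process:

SELECT APPLY_NAME, APPLY_TAG FROM DBA_APPLY;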

Configuring Tablespace Replication Using the DBMS_STREAMS_ADM Package

You can use the following procedures in the DBMS_STREAMS_ADM package to configure tablespace replication:

  • MAINTAIN_SIMPLE_TTS

  • MAINTAIN_TTS

You can use the MAINTAIN_SIMPLE_TTS procedure to configure Streams replication for a simple tablespace, and you can use the MAINTAIN_TTS procedure to configure Streams replication for a set of self-contained tablespaces. These procedures use transportable tablespaces, Data Pump, the DBMS_STREAMS_TABLESPACE_ADM package, and the DBMS_FILE_TRANSFER package to configure the environment.

A self-contained tablespace has no references from the tablespace pointing outside of the tablespace. For example, if an index in the tablespace is for a table in a different tablespace, then the tablespace is not self-contained. A simple tablespace is a self-contained tablespace that uses only one datafile. When there is more than one tablespace in a tablespace set, a self-contained tablespace set has no references from inside the set of tablespaces pointing outside of the set of tablespaces.
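
Before running these procedures, you can verify that a tablespace set is self-contained. The following is a hedged sketch that uses the DBMS_TTS package for the tbs1 and tbs2 tablespaces used in the example later in this section; any rows returned by the query describe references that make the set not self-contained:

BEGIN
  DBMS_TTS.TRANSPORT_SET_CHECK('tbs1,tbs2', TRUE);
END;
/

SELECT * FROM TRANSPORT_SET_VIOLATIONS;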

These procedures clone the tablespace or tablespaces being configured for replication from the source database to the destination database. The MAINTAIN_SIMPLE_TTS procedure uses the CLONE_SIMPLE_TABLESPACE procedure in the DBMS_STREAMS_TABLESPACE_ADM package, and the MAINTAIN_TTS procedure uses the CLONE_TABLESPACES procedure in the DBMS_STREAMS_TABLESPACE_ADM package. When a tablespace is cloned, it is made read-only automatically until the clone operation is complete.

The example in this section uses the MAINTAIN_TTS procedure to configure a Streams replication environment that maintains the following tablespaces using Streams:

  • tbs1

  • tbs2

The source database is stm1.net, and the destination database is stm2.net.

Assume that the following decisions were made about the configuration:

  • DDL changes to these tablespaces and the database objects in these tablespaces will be maintained.

  • A downstream capture process running on the destination database (stm2.net) will capture changes made to the source database (stm1.net).

  • The replication environment will be bi-directional.

  • The MAINTAIN_TTS procedure will configure the replication environment directly. A configuration script will not be generated.


See Also:

"Decisions to Make Before Configuring Streams Replication" for more information about these decisions

In addition, this example makes the following assumptions:

  • The tablespaces tbs1 and tbs2 make a self-contained tablespace set at the source database stm1.net.

  • The datafiles for the tablespace set are both in the /orc/dbs directory at the source database stm1.net.

  • The stm2.net database does not contain the tablespace set currently.

The MAINTAIN_SIMPLE_TTS and MAINTAIN_TTS procedures automatically exclude database objects that are not supported by Streams from the replication environment by adding rules to the negative rule set of each capture and apply process. The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures enable you to specify which database objects to exclude from the replication environment.

Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by Streams. If unsupported database objects are not excluded, then capture errors will result.

Complete the following steps to use the MAINTAIN_TTS procedure to configure the environment:

  1. Complete the required tasks before running the MAINTAIN_TTS procedure. See "Tasks to Complete Before Configuring Streams Replication" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure a Streams administrator at both databases.

    • Create a database link from the source database stm1.net to the destination database stm2.net.

    • Because the replication environment will be bi-directional, and because downstream capture will be configured at the destination database, create a database link from the destination database stm2.net to the source database stm1.net.

    • Create the following required directory objects:

      • A source directory object at the source database. This example assumes that this directory object is SOURCE_DIRECTORY.

      • A destination directory object at the destination database. This example assumes that this directory object is DEST_DIRECTORY.

    • Make sure both databases are in ARCHIVELOG mode.

    • Because the destination database will be the capture database for changes made to the source database, configure log file copying from the source database stm1.net to the destination database stm2.net.

    • Make sure the initialization parameters are set properly at both databases.

  2. While connected as the Streams administrator to the database that contains the tablespace set, create a directory object for the directory that contains the datafiles for the tablespaces in the tablespace set. For example, the following statement creates a directory object named tbs_directory that corresponds to the /orc/dbs directory:

    CONNECT strmadmin/strmadminpw@stm1.net
    
    CREATE DIRECTORY tbs_directory AS '/orc/dbs';
    
    

    If the datafiles are in multiple directories, then a directory object must exist for each of these directories, and the user who runs the MAINTAIN_TTS procedure in Step 3 must have READ privilege on these directory objects. In this example, the Streams administrator has this privilege because this user creates the directory object.

  3. While connected as the Streams administrator to the destination database, run the MAINTAIN_TTS procedure:

    CONNECT strmadmin/strmadminpw@stm2.net
    
    DECLARE
      t_names  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
    BEGIN
      -- Tablespace names
      t_names(1) := 'TBS1';
      t_names(2) := 'TBS2';
      DBMS_STREAMS_ADM.MAINTAIN_TTS(
         tablespace_names             => t_names,
         source_directory_object      => 'SOURCE_DIRECTORY',
         destination_directory_object => 'DEST_DIRECTORY',
         source_database              => 'stm1.net',
         destination_database         => 'stm2.net',
         perform_actions              => true,
         bi_directional               => true,
         include_ddl                  => true);
    END;
    /
    
    

    When this procedure completes, the Streams bi-directional replication environment is configured. The procedure automatically generates names for the ANYDATA queues, capture processes, propagations, and apply processes it creates. If you do not want system-generated names for these components, you can specify names by using additional parameters available in the MAINTAIN_TTS procedure. This procedure also starts the queues, capture processes, propagations, and apply processes.

    Because the procedure is run at the destination database, downstream capture is configured at the destination database for changes to the source database.

    If this procedure encounters an error and stops, then see "Recovering from Configuration Errors" for information about either recovering from the error or rolling back the configuration operation.

  4. Configure conflict resolution for the database objects in the tablespace set if necessary.

    Typically, conflicts are possible in a bi-directional replication environment. If conflicts are possible in the environment created by the MAINTAIN_TTS procedure, then configure conflict resolution before you allow users to make changes to the objects in the tablespace set.

The resulting bi-directional replication environment has the following characteristics:

  • Supplemental logging is configured for the shared database objects at both databases.

  • The stm1.net database has a queue and queue table with system-generated names. This queue is for the apply process.

  • The stm2.net database has three queues and queue tables with system-generated names. One queue is for the downstream capture process, one queue is for the local capture process, and one queue is for the apply process.

  • At the stm2.net database, a downstream capture process with a system-generated name captures DML and DDL changes made to the source database. Specifically, this downstream capture process captures DML changes made to the tables in the tbs1 and tbs2 tablespaces and DDL changes to these tablespaces and the database objects in them.

  • At the stm2.net database, a local capture process with a system-generated name captures DML and DDL changes made to the destination database. Specifically, this local capture process captures DML changes to the tables in the tbs1 and tbs2 tablespaces and DDL changes to these tablespaces and the database objects in them.

  • A propagation running on the stm2.net database with a system-generated name propagates the changes captured by the downstream capture process from the queue for the downstream capture process to the queue for the apply process within the stm2.net database.

  • A propagation running on the stm2.net database with a system-generated name propagates the changes captured by the local capture process from the queue for the local capture process to the queue in the stm1.net database.

  • At the stm1.net database, an apply process with a system-generated name dequeues the changes from the queue and applies them to the shared database objects.

  • At the stm2.net database, an apply process with a system-generated name dequeues the changes from its queue and applies them to the shared database objects.

  • Tags are used to avoid change cycling. Specifically, each apply process uses an apply tag so that redo records for changes applied by the apply process include the tag. Each apply process uses an apply tag that is unique in the replication environment. Each propagation discards changes that have the tag of the apply process running on the same database.
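
Because these components all have system-generated names, it can be useful to confirm what was actually created. The following queries are one possible check (a sketch only); run them at each database as the Streams administrator:

  SELECT CAPTURE_NAME, QUEUE_NAME, STATUS FROM DBA_CAPTURE;
  
  SELECT PROPAGATION_NAME, SOURCE_QUEUE_NAME, DESTINATION_QUEUE_NAME,
         DESTINATION_DBLINK
    FROM DBA_PROPAGATION;
  
  SELECT APPLY_NAME, QUEUE_NAME, STATUS FROM DBA_APPLY;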

Configuring Schema Replication Using the DBMS_STREAMS_ADM Package

You can use the MAINTAIN_SCHEMAS procedure in the DBMS_STREAMS_ADM package to configure schema replication. The example in this section uses this procedure to configure a Streams replication environment that maintains the hr schema. The source database is stm1.net, and the destination database is stm2.net.

Assume that the following decisions were made about the configuration:

  • DDL changes to the hr schema and the database objects in the hr schema will be maintained.

  • The replication environment will be bi-directional.

  • A downstream capture process running on a third database named stm3.net will capture changes made to the source database (stm1.net), and a propagation at stm3.net will propagate these captured changes to the destination database (stm2.net).

  • A Data Pump export dump file instantiation will be performed.

  • The MAINTAIN_SCHEMAS procedure will configure the replication environment directly. A configuration script will not be generated.

The MAINTAIN_SCHEMAS procedure automatically excludes from the replication environment any database objects that are not supported by Streams. It does this by adding rules to the negative rule set of each capture process and apply process. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which database objects are not supported by Streams. If unsupported database objects are not excluded, then capture errors will result.
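
For example, a query similar to the following (a sketch) lists the hr database objects that Streams cannot maintain, together with the reason each one is unsupported:

  SELECT OWNER, TABLE_NAME, REASON
    FROM DBA_STREAMS_UNSUPPORTED
   WHERE OWNER = 'HR';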


See Also:

"Decisions to Make Before Configuring Streams Replication" for more information about these decisions

Complete the following steps to use the MAINTAIN_SCHEMAS procedure to configure the environment:

  1. Complete the required tasks before running the MAINTAIN_SCHEMAS procedure. See "Tasks to Complete Before Configuring Streams Replication" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure a Streams administrator at all three databases.

    • Create a database link from the source database stm1.net to the destination database stm2.net.

    • Because downstream capture will be configured at the third database, create a database link from the third database stm3.net to the source database stm1.net.

    • Because downstream capture will be configured at the third database, create a database link from the third database stm3.net to the destination database stm2.net.

    • Because the replication environment will be bi-directional, create a database link from the destination database stm2.net to the source database stm1.net.

    • Create the following required directory objects:

      • A source directory object at the source database. This example assumes that this directory object is SOURCE_DIRECTORY.

      • A destination directory object at the destination database. This example assumes that this directory object is DEST_DIRECTORY.

    • Make sure the source database and destination databases are in ARCHIVELOG mode.

    • Because a third database (stm3.net) will be the capture database for changes made to the source database, configure log file copying from the source database stm1.net to the third database stm3.net.

    • Make sure the initialization parameters are set properly at all databases.
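
    For illustration, the following sketch shows one way to create the database links and directory objects listed above. The passwords, connect strings, and operating system paths are assumptions for this example; substitute values appropriate for your environment:

    -- At stm1.net, as the Streams administrator:
    CREATE DATABASE LINK stm2.net CONNECT TO strmadmin
      IDENTIFIED BY strmadminpw USING 'stm2.net';
    CREATE DIRECTORY source_directory AS '/usr/db_files/source_dir';
    
    -- At stm2.net, as the Streams administrator:
    CREATE DATABASE LINK stm1.net CONNECT TO strmadmin
      IDENTIFIED BY strmadminpw USING 'stm1.net';
    CREATE DIRECTORY dest_directory AS '/usr/db_files/dest_dir';
    
    -- At stm3.net, as the Streams administrator:
    CREATE DATABASE LINK stm1.net CONNECT TO strmadmin
      IDENTIFIED BY strmadminpw USING 'stm1.net';
    CREATE DATABASE LINK stm2.net CONNECT TO strmadmin
      IDENTIFIED BY strmadminpw USING 'stm2.net';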

  2. Connect to the third database as the Streams administrator and run the MAINTAIN_SCHEMAS procedure:

    CONNECT strmadmin/strmadminpw@stm3.net
    
    BEGIN
      DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
        schema_names                 => 'hr',
        source_directory_object      => 'SOURCE_DIRECTORY',
        destination_directory_object => 'DEST_DIRECTORY',
        source_database              => 'stm1.net',
        destination_database         => 'stm2.net',
        perform_actions              => true,
        dump_file_name               => 'export_hr.dmp',
        capture_queue_table          => 'rep_capture_queue_table',
        capture_queue_name           => 'rep_capture_queue',
        capture_queue_user           => NULL,
        apply_queue_table            => 'rep_dest_queue_table',
        apply_queue_name             => 'rep_dest_queue',
        apply_queue_user             => NULL,
        capture_name                 => 'capture_hr',
        propagation_name             => 'prop_hr',
        apply_name                   => 'apply_hr',
        log_file                     => 'export_hr.clg',
        bi_directional               => true,
        include_ddl                  => true,
        instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
    END;
    /
    
    

    Because this procedure configures a bi-directional replication environment, do not allow DML or DDL changes to the shared database objects at the destination database while the procedure is running.

    Because the procedure is run at the third database, downstream capture is configured at the third database for changes to the source database.

    If this procedure encounters an error and stops, then see "Recovering from Configuration Errors" for information about either recovering from the error or rolling back the configuration operation.

  3. Configure conflict resolution for the shared database objects if necessary.

    Typically, conflicts are possible in a bi-directional replication environment. If conflicts are possible in the environment created by the MAINTAIN_SCHEMAS procedure, then configure conflict resolution before you allow users to make changes to the shared database objects.
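
    As a sketch only, the following call registers a prebuilt OVERWRITE update conflict handler for the hr.employees table at a database that runs an apply process. The column list is an assumption for this example; many environments instead use a MAXIMUM handler based on a timestamp column:

    DECLARE
      cols DBMS_UTILITY.NAME_ARRAY;
    BEGIN
      cols(1) := 'salary';
      cols(2) := 'commission_pct';
      DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
        object_name       => 'hr.employees',
        method_name       => 'OVERWRITE',   -- the arriving captured change replaces the current values
        resolution_column => 'salary',
        column_list       => cols);
    END;
    /

    In a bi-directional configuration, you would typically register complementary handlers at both stm1.net and stm2.net and repeat the call for each shared table that can be updated at both sites.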

The bi-directional replication environment configured in this example has the following characteristics:

  • Supplemental logging is configured for the shared database objects at the source and destination databases.

  • The stm1.net database has a queue named rep_dest_queue which uses a queue table named rep_dest_queue_table. This queue is for the apply process.

  • The stm2.net database has a queue named rep_capture_queue which uses a queue table named rep_capture_queue_table. This queue is for the local capture process.

  • The stm2.net database has a queue named rep_dest_queue which uses a queue table named rep_dest_queue_table. This queue is for the apply process.

  • The stm3.net database has a queue named rep_capture_queue which uses a queue table named rep_capture_queue_table. This queue is for the downstream capture process.

  • At the stm3.net database, a downstream capture process named capture_hr captures DML and DDL changes to the hr schema and the database objects in the schema at the source database.

  • At the stm2.net database, a local capture process named capture_hr captures DML and DDL changes to the hr schema and the database objects in the schema at the destination database.

  • A propagation running on the stm3.net database named prop_hr propagates the captured changes from the queue in the stm3.net database to the queue in the stm2.net database.

  • A propagation running on the stm2.net database named prop_hr propagates the captured changes from the queue in the stm2.net database to the queue in the stm1.net database.

  • At the stm1.net database, an apply process named apply_hr dequeues the changes from rep_dest_queue and applies them to the database objects.

  • At the stm2.net database, an apply process named apply_hr dequeues the changes from rep_dest_queue and applies them to the database objects.

  • Tags are used to avoid change cycling. Specifically, each apply process uses an apply tag so that redo records for changes applied by the apply process include the tag. Each apply process uses an apply tag that is unique in the replication environment. Each propagation discards changes that have the tag of the apply process running on the same database.

Configuring Table Replication Using the DBMS_STREAMS_ADM Package

You can use the MAINTAIN_TABLES procedure in the DBMS_STREAMS_ADM package to configure table replication. The example in this section uses this procedure to configure a Streams replication environment that maintains the tables in the hr schema. The source database is stm1.net, and the destination database is stm2.net.

Assume that the following decisions were made about the configuration:

  • The replication environment should maintain DDL changes to the following tables in the hr schema:

    • departments

    • employees

  • The replication environment should not maintain DDL changes to the following tables in the hr schema:

    • countries

    • regions

    • locations

    • jobs

    • job_history

  • Local capture will be configured for the source database.

  • The replication environment will be single source, not bi-directional.

  • A Data Pump network import instantiation will be performed.

  • The MAINTAIN_TABLES procedure will not configure the replication environment directly. Instead, it will generate a configuration script, and this script will be modified so that DDL changes to the departments and employees tables are maintained.


See Also:

"Decisions to Make Before Configuring Streams Replication" for more information about these decisions

Complete the following steps to use the MAINTAIN_TABLES procedure to configure the environment:

  1. Complete the required tasks before running the MAINTAIN_TABLES procedure. See "Tasks to Complete Before Configuring Streams Replication" for instructions.

    For this configuration, the following tasks must be completed:

    • Configure a Streams administrator at both databases.

    • Create a database link from the source database stm1.net to the destination database stm2.net.

    • Because the MAINTAIN_TABLES procedure will perform a Data Pump network import instantiation, create a database link from the destination database stm2.net to the source database stm1.net.

    • Create a script directory object at the source database. This example assumes that this directory object is SCRIPT_DIRECTORY.

    • Make sure the source database stm1.net is in ARCHIVELOG mode.

    • Make sure the initialization parameters are set properly at both databases.
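
    For example, you might confirm the log mode of the source database with a check like the following (a sketch; the SYSDBA credentials are assumptions):

    CONNECT sys/syspw@stm1.net AS SYSDBA
    
    SELECT LOG_MODE FROM V$DATABASE;
    
    -- If the result is NOARCHIVELOG, enable archiving while the database is mounted:
    -- SHUTDOWN IMMEDIATE
    -- STARTUP MOUNT
    -- ALTER DATABASE ARCHIVELOG;
    -- ALTER DATABASE OPEN;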

  2. Connect to the source database as the Streams administrator and run the MAINTAIN_TABLES procedure:

    CONNECT strmadmin/strmadminpw@stm1.net
    
    DECLARE
      tables DBMS_UTILITY.UNCL_ARRAY;
    BEGIN
      tables(1) := 'hr.departments';
      tables(2) := 'hr.employees';
      tables(3) := 'hr.countries';
      tables(4) := 'hr.regions';
      tables(5) := 'hr.locations';
      tables(6) := 'hr.jobs';
      tables(7) := 'hr.job_history';
      DBMS_STREAMS_ADM.MAINTAIN_TABLES(
        table_names                  => tables,
        source_directory_object      => NULL,
        destination_directory_object => NULL,
        source_database              => 'stm1.net',
        destination_database         => 'stm2.net',
        perform_actions              => false,
        script_name                  => 'configure_rep.sql',
        script_directory_object      => 'SCRIPT_DIRECTORY',
        bi_directional               => false,
        include_ddl                  => false,
        instantiation                => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK);
    END;
    /
    
    

    The configure_rep.sql script generated by the procedure uses default values for the parameters that are not specified in the procedure call. The script uses system-generated names for the ANYDATA queues, queue tables, capture process, propagation, and apply process it creates. You can specify different names by using additional parameters available in the MAINTAIN_TABLES procedure. Notice that the include_ddl parameter is set to false. Therefore, the script does not configure the replication environment to maintain DDL changes to the tables.

  3. Modify the configure_rep.sql script:

    1. Navigate to the directory that corresponds with the SCRIPT_DIRECTORY directory object on the computer system running the source database.

    2. Open the configure_rep.sql script in a text editor. Consider making a backup of this script before modifying it.

    3. In the script, find the ADD_TABLE_RULES and ADD_TABLE_PROPAGATION_RULES procedure calls that create the table rules for the hr.departments and hr.employees tables. For example, the procedure calls for the capture process look similar to the following:

      dbms_streams_adm.add_table_rules(
          table_name => '"HR"."DEPARTMENTS"', 
          streams_type => 'CAPTURE', 
          streams_name => '"STM1$CAP"', 
          queue_name => '"STRMADMIN"."STM1$CAPQ"', 
          include_dml => TRUE,
          include_ddl => FALSE,
          include_tagged_lcr => TRUE,
          source_database => 'STM1.NET', 
          inclusion_rule => TRUE,
          and_condition => get_compatible);
      
      dbms_streams_adm.add_table_rules(
          table_name => '"HR"."EMPLOYEES"', 
          streams_type => 'CAPTURE', 
          streams_name => '"STM1$CAP"', 
          queue_name => '"STRMADMIN"."STM1$CAPQ"', 
          include_dml => TRUE,
          include_ddl => FALSE,
          include_tagged_lcr => TRUE,
          source_database => 'STM1.NET', 
          inclusion_rule => TRUE,
          and_condition => get_compatible);
      
      
    4. In the procedure calls that you found in Step 3, change the setting of the include_ddl parameter to TRUE. For example, the procedure calls for the capture process should look similar to the following after the modification:

      dbms_streams_adm.add_table_rules(
          table_name => '"HR"."DEPARTMENTS"', 
          streams_type => 'CAPTURE', 
          streams_name => '"STM1$CAP"', 
          queue_name => '"STRMADMIN"."STM1$CAPQ"', 
          include_dml => TRUE,
          include_ddl => TRUE,
          include_tagged_lcr => TRUE,
          source_database => 'STM1.NET', 
          inclusion_rule => TRUE,
          and_condition => get_compatible);
      
      dbms_streams_adm.add_table_rules(
          table_name => '"HR"."EMPLOYEES"', 
          streams_type => 'CAPTURE', 
          streams_name => '"STM1$CAP"', 
          queue_name => '"STRMADMIN"."STM1$CAPQ"', 
          include_dml => TRUE,
          include_ddl => TRUE,
          include_tagged_lcr => TRUE,
          source_database => 'STM1.NET', 
          inclusion_rule => TRUE,
          and_condition => get_compatible);
      
      

      Remember to change the procedure calls for all capture processes, propagations, and apply processes.

    5. Save and close the configure_rep.sql script.

  4. At the source database, connect as the Streams administrator, and run the configuration script:

    CONNECT strmadmin/strmadminpw@stm1.net
    
    SET ECHO ON
    SPOOL configure_rep.out
    @configure_rep.sql
    
    

    The script prompts you to supply information about the database names and the Streams administrators. When this configuration script completes, the Streams single-source replication environment is configured. The script also starts the queues, capture process, propagations, and apply process.
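
    To confirm that the modified script created DDL rules only for hr.departments and hr.employees (and DML rules for all seven tables), you can query the rule metadata at the source database. A sketch (run a similar query at stm2.net to check the apply rules):

    SELECT STREAMS_TYPE, STREAMS_NAME, TABLE_OWNER, TABLE_NAME, RULE_TYPE
      FROM DBA_STREAMS_TABLE_RULES
     WHERE TABLE_OWNER = 'HR'
     ORDER BY TABLE_NAME, RULE_TYPE;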

The resulting single-source replication environment has the following characteristics:

  • At the source database, supplemental logging is configured for the shared database objects.

  • The source database stm1.net has a queue and queue table with system-generated names.

  • The destination database stm2.net has a queue and queue table with system-generated names.

  • At the source database, a capture process with a system-generated name captures DML changes to all of the tables in the hr schema and DDL changes to the hr.departments and hr.employees tables.

  • A propagation running on the source database with a system-generated name propagates the captured changes from the queue at the source database to the queue at the destination database.

  • At the destination database, an apply process with a system-generated name dequeues the changes from the queue and applies them to the tables at the destination database.