Oracle® Streams Concepts and Administration 10g Release 2 (10.2), Part Number B14229-01
This appendix describes how to perform some maintenance operations with Oracle Streams on an Oracle Database 10g Release 2 database. These maintenance operations include migrating an Oracle Database to a different platform or character set, upgrading user-created applications, and applying Oracle Database patches. The maintenance operations described in this appendix use the features of Oracle Streams to achieve little or no database down time.
This appendix contains these topics:
See Also: Appendix B, "Online Database Upgrade with Streams" for instructions on performing an upgrade of a prior release of Oracle Database with Streams
The following maintenance operations typically require substantial database down time:
Migrating a database to a different platform
Migrating a database to a different character set
Modifying database schema objects to support upgrades to user-created applications
Applying an Oracle Database software patch
You can achieve these maintenance operations with little or no down time by using the features of Oracle Streams. To do so, you use Oracle Streams to configure a single-source replication environment with the following databases:
Source Database: The original database that is being maintained.
Capture Database: The database where a capture process captures changes made to the source database during the maintenance operation.
Destination Database: The copy of the source database where an apply process applies changes made to the source database during the maintenance operation.
Specifically, you can use the following general steps to perform the maintenance operation while the database is online:
Create an empty destination database.
Configure an Oracle Streams single-source replication environment where the original database is the source database and a copy of the database is the destination database. The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package configure the Streams replication environment.
Perform the maintenance operation on the destination database. During this time the original source database is available online, and changes to the original source database are being captured by a capture process.
Use Oracle Streams to apply the changes made to the source database at the destination database.
When the destination database has caught up with the changes made at the source database, take the source database offline and make the destination database available for applications and users.
Figure C-1 provides an overview of this process.
During the maintenance operation, the capture database is the database where the capture process is created. A local capture process can be created at the source database during the maintenance operation, or a downstream capture process can be created at the destination database or at a third database. If the destination database is the capture database, then a propagation from the capture database to the destination database is not needed. A downstream capture process reduces the resources required at the source database during the maintenance operation.
The instructions in this appendix assume that all of the following statements are true for the database being maintained:
The database is not part of an existing Oracle Streams environment.
The database is not part of an existing logical standby environment.
The database is not part of an existing Advanced Replication environment.
No tables at the database are master tables for materialized views in other databases.
Any user-created queues are read-only during the maintenance operation.
If possible, ensure that no job queue processes are created, modified, or deleted during the maintenance operation, and that no Oracle-supplied PL/SQL package subprograms that modify both user data and data dictionary metadata at the same time are invoked during the operation. The following packages contain subprograms that modify both user data and data dictionary metadata at the same time: DBMS_RLS, DBMS_STATS, and DBMS_JOB.
It might be possible to perform such actions on the database if you ensure that the same actions are performed on the source database and destination database in Steps 13 and 14 in "Performing a Database Maintenance Operation Using Streams". For example, if a PL/SQL procedure gathers statistics on the source database during the maintenance operation, then the same PL/SQL procedure should be invoked at the destination database in Step 14.
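For example, if schema statistics are gathered on the source database during the maintenance operation, then the same call should be repeated at the destination database in Step 14. The following is a minimal sketch only; the HR schema name is hypothetical:

-- Run at the source database during the maintenance operation, and repeat
-- the identical call at the destination database in Step 14.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');
END;
/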
The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package include the following parameters:
exclude_schemas
exclude_flags
These parameters specify which database objects to exclude from the Streams configuration. The examples in this appendix set these parameters to the following values:
exclude_schemas => '*',
exclude_flags   => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                   DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                   DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
These values exclude any database objects that are not supported by Streams. The asterisk (*) specified for exclude_schemas indicates that some database objects in every schema in the database might be excluded from the replication environment. The value specified for the exclude_flags parameter indicates that DML and DDL changes for all unsupported database objects are excluded from the replication environment. Rules are placed in the negative rule sets for the capture processes to exclude these database objects.
To list unsupported database objects, query the DBA_STREAMS_UNSUPPORTED data dictionary view at the source database. If you use these parameter settings, then changes to the database objects listed in this view are not maintained by Streams during the maintenance operation. Therefore, Step 6 in "Task 1: Beginning the Maintenance Operation" instructs you to make these database objects read-only during the database maintenance operation.
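For example, a query such as the following lists each unsupported database object and the reason it is unsupported:

COLUMN OWNER FORMAT A15
COLUMN TABLE_NAME FORMAT A30
COLUMN REASON FORMAT A30

SELECT OWNER, TABLE_NAME, REASON
  FROM DBA_STREAMS_UNSUPPORTED
  ORDER BY OWNER, TABLE_NAME;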
Note: "Preparing for Maintenance of a Database with User-defined Types" discusses a method for retaining changes to tables that contain user-defined types during the maintenance operation. If you are using this method, then tables that contain user-defined types can remain open during the maintenance operation. |
See Also: Oracle Database PL/SQL Packages and Types Reference for more information about theexclude_schemas and exclude_flags parameters |
The following sections describe tasks to complete before starting the database maintenance operation with Streams:
Preparing for Downstream Capture
Preparing for Maintenance of a Database with User-defined Types
Deciding Whether to Configure Streams Directly or Generate a Script
Deciding Which Utility to Use for Instantiation
If you decided that the destination database or a third database will be the capture database, then you must prepare for downstream capture by configuring log file copying from the source database to the capture database. If you decided that the source database will be the capture database, then log file copying is not required.
Complete the following steps to prepare the source database to copy its redo log files to the capture database, and to prepare the capture database to accept these redo log files:
Configure Oracle Net so that the source database can communicate with the capture database.
Set the following initialization parameters to configure redo transport services to copy archived redo log files from the source database to the capture database:
At the source database, set at least one archive log destination in the LOG_ARCHIVE_DEST_n initialization parameter to a directory on the computer system running the capture database. To do this, set the following attributes of this parameter:

SERVICE - Specify the network service name of the capture database.

ARCH or LGWR ASYNC - If you specify ARCH (the default), then the archiver process (ARCn) will archive the redo log files to the capture database. If you specify LGWR ASYNC, then the log writer process (LGWR) will archive the redo log files to the capture database. Either ARCH or LGWR ASYNC is acceptable for a capture database destination.

MANDATORY or OPTIONAL - If you specify MANDATORY, then archiving of a redo log file to the capture database must succeed before the corresponding online redo log at the source database can be overwritten. If you specify OPTIONAL, then successful archiving of a redo log file to the capture database is not required before the corresponding online redo log at the source database can be overwritten. Either MANDATORY or OPTIONAL is acceptable for a capture database destination. If neither the MANDATORY nor the OPTIONAL attribute is specified, then the default is OPTIONAL.

NOREGISTER - Specify this attribute so that the capture database location is not recorded in the capture database control file.

TEMPLATE - Specify a directory and format template for archived redo logs at the capture database. The TEMPLATE attribute overrides the LOG_ARCHIVE_FORMAT initialization parameter settings at the capture database. The TEMPLATE attribute is valid only with remote destinations. Make sure the format uses all of the following variables at each source database: %t, %s, and %r.

The following is an example of a LOG_ARCHIVE_DEST_n setting that specifies a capture database (DBS2.NET):
LOG_ARCHIVE_DEST_2='SERVICE=DBS2.NET ARCH OPTIONAL NOREGISTER TEMPLATE=/usr/oracle/log_for_dbs1/dbs1_arch_%t_%s_%r.log'
If another source database transfers log files to this capture database, then, in the initialization parameter file at this other source database, you can use the TEMPLATE attribute to specify a different directory and format for the log files at the capture database. The log files from each source database are kept separate at the capture database.
Tip: Log files from a remote source database should be kept separate from local database log files. In addition, if the capture database contains log files from multiple source databases, then the log files from each source database should be kept separate from each other.
At the source database, set the LOG_ARCHIVE_DEST_STATE_n initialization parameter that corresponds with the LOG_ARCHIVE_DEST_n parameter for the capture database to ENABLE.

For example, if the LOG_ARCHIVE_DEST_2 initialization parameter is set for the capture database, then set the LOG_ARCHIVE_DEST_STATE_2 parameter in the following way:
LOG_ARCHIVE_DEST_STATE_2=ENABLE
At the source database, make sure the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the send value.

At the downstream database, make sure the setting for the LOG_ARCHIVE_CONFIG initialization parameter includes the receive value.
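For example, assuming neither database already restricts redo transport with a DG_CONFIG setting, statements such as the following sketch satisfy these requirements (use SCOPE=BOTH only if the database uses a server parameter file):

-- At the source database: allow the database to send redo
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='SEND' SCOPE=BOTH;

-- At the downstream (capture) database: allow the database to receive redo
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='RECEIVE' SCOPE=BOTH;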
See Also: Oracle Database Reference and Oracle Data Guard Concepts and Administration for more information about these initialization parameters
If you reset any initialization parameters while the instance is running at a database in Step 2, then you might want to reset them in the initialization parameter file as well, so that the new values are retained when the database is restarted.
If you did not reset the initialization parameters while the instance was running, but instead reset them in the initialization parameter file in Step 2, then restart the database. The source database must be open when it sends redo log files to the capture database because the global name of the source database is sent to the capture database only if the source database is open.
See Also: "Overview of Using Streams for Database Maintenance Operations" for more information about the capture database |
User-defined types include object types, REF values, varrays, and nested tables. Currently, Streams capture processes and apply processes do not support user-defined types. This section discusses using Streams to perform a maintenance operation on a database that has user-defined types.
One option is to make tables that contain user-defined types read-only during the database maintenance operation. In this case, these tables are instantiated at the destination database, and no changes are made to these tables during the entire operation. After the maintenance operation is complete, make the tables that contain user-defined types read/write at the destination database.
However, if tables that contain user-defined types must remain open during the maintenance operation, then the following general steps can be used to retain changes to these types during the database maintenance operation:
At the source database, create one or more logging tables to store row changes to tables that include user-defined types. Each column in the logging table must use a datatype that is supported by Streams.
At the source database, create a DML trigger that fires on the tables that contain the user-defined datatypes. The trigger converts each row change into relational equivalents and logs the modified row in a logging table created in Step 1.
Make sure the capture process and propagation are configured to capture and, if necessary, propagate changes made to the logging table to the destination database. Changes to tables that contain user-defined types should not be captured or propagated. Therefore, make sure the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures include the logging tables and exclude the tables that contain user-defined types.
At the destination, configure the apply process to use a DML handler that processes the changes to the logging tables. The DML handler reconstructs the user-defined types from the relational equivalents and applies the modified changes to the tables that contain user-defined types.
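The following sketch illustrates Steps 1 and 2 for a hypothetical oe.orders table whose cust_address column uses an object type with street and city attributes. All names are examples only, and a real logging table must record every column needed to reconstruct the row at the destination database:

-- Step 1: logging table that stores relational equivalents of the row changes
CREATE TABLE strmadmin.orders_log (
  operation    VARCHAR2(10),    -- 'INSERT', 'UPDATE', or 'DELETE'
  order_id     NUMBER,
  street       VARCHAR2(100),   -- relational equivalent of cust_address.street
  city         VARCHAR2(50),    -- relational equivalent of cust_address.city
  change_time  TIMESTAMP DEFAULT SYSTIMESTAMP);

-- Step 2: DML trigger that converts each row change into relational columns
CREATE OR REPLACE TRIGGER oe.orders_log_trg
  AFTER INSERT OR UPDATE OR DELETE ON oe.orders
  FOR EACH ROW
BEGIN
  IF DELETING THEN
    INSERT INTO strmadmin.orders_log (operation, order_id, street, city)
      VALUES ('DELETE', :OLD.order_id, :OLD.cust_address.street,
              :OLD.cust_address.city);
  ELSE
    INSERT INTO strmadmin.orders_log (operation, order_id, street, city)
      VALUES (CASE WHEN INSERTING THEN 'INSERT' ELSE 'UPDATE' END,
              :NEW.order_id, :NEW.cust_address.street,
              :NEW.cust_address.city);
  END IF;
END;
/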
This section is relevant only if the maintenance operation entails upgrading user-created applications. During an upgrade of user-created applications, schema objects can be modified, and there might be logical dependencies that cannot be detected by the database alone. The following sections describe handling these issues during an application upgrade:
If you are upgrading user-created applications, then, typically, schema objects in the database change to support the upgraded applications. In Streams, row LCRs contain information about row changes that result from DML statements. A declarative rule-based transformation or DML handler can modify row LCRs captured from the source database redo log so that the row LCRs can be applied to the altered tables at the destination database.
A rule-based transformation is any modification to a message that results when a rule in a positive rule set evaluates to TRUE. Declarative rule-based transformations cover a common set of transformation scenarios for row LCRs, and they are run internally without using PL/SQL. You specify such a transformation using a procedure in the DBMS_STREAMS_ADM package. A declarative rule-based transformation can modify row LCRs during capture, propagation, or apply.
A DML handler is a user procedure that processes row LCRs resulting from DML statements at a source database. A Streams apply process at a destination database can pass row LCRs to a DML handler, and the DML handler can modify the row LCRs.
The process for upgrading user-created applications using Streams can involve modifying and creating the schema objects at the destination database after instantiation. You can use one or more declarative rule-based transformations and DML handlers at the destination database to process changes from the source database so that they apply to the modified schema objects correctly. Declarative rule-based transformations and DML handlers can be used during application upgrade to account for differences between the source database and destination database.
In general, declarative rule-based transformations are easier to use than DML handlers. Therefore, when modifications to row LCRs are required, try to configure a declarative rule-based transformation first. If a declarative rule-based transformation is not sufficient, then use a DML handler. If row LCRs for tables that contain one or more LOB columns must be modified, then you should use a DML handler and LOB assembly.
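For example, if a column is renamed at the destination database as part of the application upgrade, then a declarative rule-based transformation added with a call such as the following sketch could rename the column in row LCRs before they are applied. The rule name, table name, and column names shown here are hypothetical:

BEGIN
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strmadmin.orders5',    -- apply rule for the table
    table_name       => 'oe.orders',
    from_column_name => 'cust_comment',         -- column name at the source
    to_column_name   => 'customer_comment',     -- column name at the destination
    value_type       => '*',                    -- transform both old and new values
    step_number      => 0,
    operation        => 'ADD');
END;
/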
Before you begin the database maintenance operation, you should complete the following tasks to prepare your declarative rule-based transformations or DML handlers:
Learn about declarative rule-based transformations. See "Declarative Rule-Based Transformations".
Learn about DML handlers. See "Message Processing Options for an Apply Process".
Determine the declarative rule-based transformations and DML handlers you will need at your destination database. Your determination depends on the modifications to the schema objects required by your upgraded applications.
Create the PL/SQL procedures that you will use for any DML handlers during the database maintenance operation. See the Oracle Streams Replication Administrator's Guide for information about creating the PL/SQL procedures.
If row LCRs for tables that contain one or more LOB columns must be modified, then learn about using LOB assembly. See Oracle Streams Replication Administrator's Guide.
Note: Custom rule-based transformations can also be used to modify row LCRs during application upgrade. However, these modifications can be accomplished using DML handlers, and DML handlers are more efficient than custom rule-based transformations.
In some cases, an apply process requires additional information to detect dependencies in row LCRs that are being applied in parallel. During application upgrades, an apply process might require additional information to detect dependencies in the following situations:
The application, rather than the database, enforces logical dependencies.
Schema objects have been modified to support the application upgrade, and a DML handler will modify row LCRs to account for differences between the source database and destination database.
A virtual dependency definition is a description of a dependency that is used by an apply process to detect dependencies between transactions at a destination database. A virtual dependency definition is not described as a constraint in the destination database data dictionary. Instead, it is specified using procedures in the DBMS_APPLY_ADM package. Virtual dependency definitions enable an apply process to detect dependencies that it would not be able to detect by using only the constraint information in the data dictionary. After dependencies are detected, an apply process schedules LCRs and transactions in the correct order for apply.
If virtual dependency definitions are required for your application upgrade, then learn about virtual dependency definitions and plan to configure them during the application upgrade.
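For example, a value dependency, which is one kind of virtual dependency definition, can be created with a call such as the following sketch. The dependency name, table, and column are hypothetical:

BEGIN
  DBMS_APPLY_ADM.SET_VALUE_DEPENDENCY(
    dependency_name => 'order_id_dependency',
    object_name     => 'oe.orders',
    attribute_list  => 'order_id');
END;
/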
See Also: Oracle Streams Replication Administrator's Guide for more information about virtual dependency definitions
The PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures in the DBMS_STREAMS_ADM package configure the Streams replication environment during the maintenance operation. These procedures can configure the Streams replication environment directly, or they can generate a script that configures the environment. Using a procedure to configure replication directly is simpler than running a script, and the environment is configured immediately. However, you might choose to generate a script for the following reasons:
You want to review the actions performed by the procedure before configuring the environment.
You want to modify the script to customize the configuration.
To configure Streams directly when you run one of these procedures, set the perform_actions parameter to true. The examples in this appendix assume that the procedures will configure Streams directly.
To generate a configuration script when you run one of these procedures, complete the following steps when you are instructed to run a procedure in this appendix:
In SQL*Plus, connect as the Streams administrator to the database where you will run the procedure, and create a directory object to store the script that will be generated by the procedure. For example:

CONNECT strmadmin/strmadminpw

CREATE DIRECTORY scripts_dir AS '/usr/scripts';
While still connected as the Streams administrator to that database, run the procedure. Make sure the following parameters are set to generate a script:

Set the perform_actions parameter to false.

Set the script_name parameter to the name of the script you want to generate.

Set the script_directory_object parameter to the directory object into which you want to place the script.
Review or modify the script, if necessary.
In SQL*Plus, connect as the Streams administrator, and run the generated script. For example:
CONNECT strmadmin/strmadminpw

@/usr/scripts/pre_instantiation.sql
Before you begin the database maintenance operation, decide whether you want to use Export/Import utilities (Data Pump or original) or the Recovery Manager (RMAN) utility to instantiate the destination database during the operation. Consider the following factors when you make this decision:
If you are migrating the database to a different platform, then you can use either Export/Import or the RMAN CONVERT DATABASE command. The RMAN DUPLICATE command does not support migrating a database to a different platform.

If you are migrating the database to a different character set, then you must use Export/Import. The RMAN DUPLICATE and CONVERT DATABASE commands do not support migrating a database to a different character set.
If RMAN is supported for the operation, then using RMAN for the instantiation might be faster than using Export/Import, especially if the database is large.
Oracle recommends that you do not use RMAN for instantiation in an environment where distributed transactions are possible. Doing so might cause in-doubt transactions that must be corrected manually.
If you use Export/Import for instantiation, then Oracle recommends using Data Pump. Data Pump typically performs the instantiation faster than original Export/Import.
If the RMAN DUPLICATE or CONVERT DATABASE command is used for database instantiation, then the destination database cannot be the capture database.

Table C-1 describes when each instantiation method is supported, based on whether the platforms at the source and destination databases are the same or different, and whether the character sets at the source and destination databases are the same or different.
Table C-1 Instantiation Methods for Database Maintenance with Streams
Instantiation Method | Same Platform Supported? | Different Platforms Supported? | Same Character Set Supported? | Different Character Sets Supported?
---|---|---|---|---
Original Export/Import | Yes | Yes | Yes | Yes
Data Pump Export/Import | Yes | Yes | Yes | Yes
RMAN DUPLICATE | Yes | No | Yes | No
RMAN CONVERT DATABASE | No | Maybe | Yes | No
Only some platform combinations are supported by the RMAN CONVERT DATABASE command. You can use the DBMS_TDB package to determine whether a platform combination is supported.
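For example, with the source database open read-only, a check such as the following sketch reports whether conversion to a particular platform is supported. The target platform name here is an example; valid names are listed in the V$DB_TRANSPORTABLE_PLATFORM view:

SET SERVEROUTPUT ON
DECLARE
  db_ready BOOLEAN;
BEGIN
  db_ready := DBMS_TDB.CHECK_DB('Linux IA (64-bit)', DBMS_TDB.SKIP_READONLY);
  IF db_ready THEN
    DBMS_OUTPUT.PUT_LINE('Conversion to the target platform is supported.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Conversion is not supported. See the preceding messages.');
  END IF;
END;
/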
This section describes performing one of the following database maintenance operations on an Oracle Database 10g Release 2 database:
Migrating the database to a different platform
Migrating the database to a different character set
Modifying database schema objects to support upgrades to user-created applications
Applying an Oracle Database software patch
You can use Streams to achieve little or no downtime during these operations. During the operation, the source database is the existing database on which you are performing database maintenance. The capture database is the database on which the Streams capture process runs. The destination database is the database that will replace the source database at the end of the operation.
Complete the following tasks to perform a database maintenance operation using Streams:
Complete the following steps to begin the maintenance operation using Oracle Streams:
Create an empty Oracle Database 10g Release 2 database. This database will be the destination database during the maintenance operation. If you are migrating the database to a different platform, then create the database on a computer system that uses the new platform. If you are migrating the database to a different character set, then create a database that uses the new character set.
See the Oracle installation guide for your operating system if you need to install Oracle, and see the Oracle Database Administrator's Guide for information about creating a database.
Make sure the destination database has a different global name than the source database. This example assumes that the global name of the source database is orcl.net and the global name of the destination database during the database maintenance operation is stms.net. The global name of the destination database is changed when the destination database replaces the source database at the end of the maintenance operation.
Make sure the source database is running in ARCHIVELOG mode. See Oracle Database Administrator's Guide for information about running a database in ARCHIVELOG mode.
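For example, the following query, run as an administrative user at the source database, should return ARCHIVELOG:

SELECT LOG_MODE FROM V$DATABASE;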
Make sure the initialization parameters are set properly at both databases to support a Streams environment. See "Setting Initialization Parameters Relevant to Streams".
Configure a Streams administrator at each database, including the source database, destination database, and capture database (if the capture database is a third database). See "Configuring a Streams Administrator" for instructions. This example assumes that the name of the Streams administrator is strmadmin at each database.
If you are upgrading user-created applications, then supplementally log any columns at the source database that will be involved in a rule-based transformation, DML handler, or value dependency. These columns must be unconditionally logged at the source database. See Oracle Streams Replication Administrator's Guide for information about specifying unconditional supplemental log groups for these columns.
At the source database, make read-only any database objects that are not supported by Streams. To list unsupported database objects, query the DBA_STREAMS_UNSUPPORTED data dictionary view.
"Preparing for Maintenance of a Database with User-defined Types" discusses a method for retaining changes to tables that contain user-defined types during the maintenance operation. If you are using this method, then tables that contain user-defined types can remain open during the maintenance operation.
The specific instructions for setting up Streams prior to instantiation depend on which database is the capture database. The PRE_INSTANTIATION_SETUP procedure always configures the capture process on the database where it is run. Therefore, this procedure must be run at the capture database.
When you run this procedure, you can specify that the procedure performs the configuration directly, or that the procedure generates a script that contains the configuration actions. See "Deciding Whether to Configure Streams Directly or Generate a Script". The examples in this section specify that the procedure performs the configuration directly.
Follow the instructions in the appropriate section:
Note: When the PRE_INSTANTIATION_SETUP procedure is running with the perform_actions parameter set to true, metadata about its configuration actions is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then you can use the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to complete the configuration after you correct the conditions that caused the error. These views are not populated if a script is used to configure the replication environment.
Complete the following steps to set up Streams prior to instantiation when the source database is the capture database:
Configure your network and Oracle Net so that the source database can communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.
Connect as the Streams administrator in SQL*Plus to the source database, and create a database link to the destination database. For example:
CONNECT strmadmin/strmadminpw@orcl.net

CREATE DATABASE LINK stms.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'stms.net';
While connected as the Streams administrator in SQL*Plus to the source database, run the PRE_INSTANTIATION_SETUP procedure:
DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.capture_q_table',
    capture_queue_name       => 'strmadmin.capture_q',
    propagation_name         => 'prop_maint',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.apply_q_table',
    apply_queue_name         => 'strmadmin.apply_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/
Proceed to "Task 3: Instantiating the Database".
Complete the following steps to set up Streams prior to instantiation when the destination database is the capture database:
Configure your network and Oracle Net so that the source database and destination database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.
Make sure log file shipping from the source database to the destination database is configured. See "Preparing for Downstream Capture" for instructions.
Connect as the Streams administrator in SQL*Plus to the destination database, and create a database link to the source database. For example:
CONNECT strmadmin/strmadminpw@stms.net

CREATE DATABASE LINK orcl.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'orcl.net';
While connected as the Streams administrator in SQL*Plus to the destination database, run the PRE_INSTANTIATION_SETUP procedure:

DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.streams_q_table',
    capture_queue_name       => 'strmadmin.streams_q',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.streams_q_table',
    apply_queue_name         => 'strmadmin.streams_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/

Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database and the downstream capture process and apply process use the same queue at the destination database.

Also, notice that the capture process and apply process will share a queue named streams_q at the destination database.
Proceed to "Task 3: Instantiating the Database".
This example assumes that the global name of the third database is thrd.net. Complete the following steps to set up Streams prior to instantiation when a third database is the capture database:
Configure your network and Oracle Net so that the source database, destination database, and third database can communicate with each other. See Oracle Database Net Services Administrator's Guide for instructions.
Make sure log file shipping from the source database to the third database is configured. See "Preparing for Downstream Capture" for instructions.
Connect as the Streams administrator in SQL*Plus to the third database, and create a database link to the source database. For example:
CONNECT strmadmin/strmadminpw@thrd.net

CREATE DATABASE LINK orcl.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'orcl.net';
While connected as the Streams administrator in SQL*Plus to the third database, create a database link to the destination database. For example:
CREATE DATABASE LINK stms.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw USING 'stms.net';
While connected as the Streams administrator in SQL*Plus to the third database, run the PRE_INSTANTIATION_SETUP procedure:

DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.PRE_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.capture_q_table',
    capture_queue_name       => 'strmadmin.capture_q',
    propagation_name         => 'prop_maint',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.apply_q_table',
    apply_queue_name         => 'strmadmin.apply_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/
Proceed to "Task 3: Instantiating the Database".
"Deciding Which Utility to Use for Instantiation" discusses different options for instantiating an entire database. Complete the steps in the appropriate section based on the instantiation option you are using:
See Also: Oracle Streams Replication Administrator's Guide for more information about performing instantiations
If you use Oracle Data Pump or original Export/Import to instantiate the destination database, then make sure the following parameters are set to the appropriate values:
Set the STREAMS_CONFIGURATION import parameter to n.

If you use original Export/Import, then set the CONSISTENT export parameter to y. This parameter does not apply to Data Pump exports.

If you use original Export/Import, then set the STREAMS_INSTANTIATION import parameter to y. This parameter does not apply to Data Pump imports.
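For example, an instantiation with original Export/Import might use commands such as the following sketch. The dump file name and connect strings are hypothetical, and the Data Pump steps that follow show the equivalent expdp and impdp commands:

exp system/manager@orcl.net FULL=y CONSISTENT=y FILE=orcl_full.dmp

imp system/manager@stms.net FULL=y STREAMS_CONFIGURATION=n STREAMS_INSTANTIATION=y FILE=orcl_full.dmp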
Complete the following steps to instantiate an entire database with Data Pump:
Connect in SQL*Plus to the source database as the Streams administrator, and create a directory object to hold the export dump file and export log file:
CREATE DIRECTORY dpump_dir AS '/usr/dpump_dir';
While connected to the source database as the Streams administrator, determine the current system change number (SCN) of the source database:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  current_scn NUMBER;
BEGIN
  current_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
END;
/
The returned SCN value is specified for the FLASHBACK_SCN Data Pump export parameter in Step 3. Specifying the FLASHBACK_SCN export parameter, or a similar export parameter, ensures that the export is consistent to a single SCN. In this example, assume that the query returned 876606.
After you perform this query, make sure no DDL changes are made to the objects being exported until after the export is complete.
On a command line, use Data Pump to export the source database.
Perform the export by connecting as an administrative user who is granted the EXP_FULL_DATABASE role. This user also must have READ and WRITE privileges on the directory object created in Step 1. This example connects as the Streams administrator strmadmin.
The following example is a Data Pump export command:
expdp strmadmin/strmadminpw FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=orc1.dmp FLASHBACK_SCN=876606
Connect in SQL*Plus to the destination database as the Streams administrator, and create a directory object to hold the import dump file and import log file:
CREATE DIRECTORY dpump_dir AS '/usr/dpump_dir';
Transfer the Data Pump export dump file orc1.dmp to the destination database. You can use the DBMS_FILE_TRANSFER package, binary FTP, or some other method to transfer the file to the destination database. After the file transfer, the export dump file should reside in the directory that corresponds to the directory object created in Step 4.
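For example, if a database link from the source database to the destination database exists (as it does when the source database is the capture database), then a call such as the following sketch, run at the source database, copies the dump file between the two dpump_dir directory objects:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DPUMP_DIR',
    source_file_name             => 'orc1.dmp',
    destination_directory_object => 'DPUMP_DIR',
    destination_file_name        => 'orc1.dmp',
    destination_database         => 'stms.net');
END;
/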
On a command line at the destination database, use Data Pump to import the export dump file orc1.dmp. Make sure no changes are made to the database tables until the import is complete. Performing the import automatically sets the instantiation SCN for the destination database and all of its objects.
Perform the import by connecting as an administrative user who is granted the IMP_FULL_DATABASE role. This user also must have READ and WRITE privileges on the directory object created in Step 4. This example connects as the Streams administrator strmadmin.
The following example is an import command:
impdp strmadmin/strmadminpw FULL=y DIRECTORY=DPUMP_DIR DUMPFILE=orc1.dmp STREAMS_CONFIGURATION=n
If you use the RMAN DUPLICATE command for instantiation on the same platform, then complete the following steps:
Create a backup of the source database if one does not exist. RMAN requires a valid backup for duplication. In this example, create a backup of orcl.net if one does not exist.
While connected as an administrative user in SQL*Plus to the source database, determine the until SCN for the RMAN DUPLICATE command. For example:

CONNECT SYSTEM/MANAGER@orcl.net

SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  until_scn NUMBER;
BEGIN
  until_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Until SCN: ' || until_scn);
END;
/
Make a note of the until SCN value. This example assumes that the until SCN value is 748045. You will set the UNTIL SCN option to this value when you use RMAN to duplicate the database in Step 5, and you will use it to determine the instantiation SCN in "Task 4: Setting Up Streams After Instantiation".
While still connected as an administrative user in SQL*Plus to the source database, archive the current online redo log. For example:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Prepare your environment for database duplication, which includes preparing the destination database as an auxiliary instance for duplication. See the Oracle Database Backup and Recovery Advanced User's Guide for instructions.
Use the RMAN DUPLICATE command with the OPEN RESTRICTED option to instantiate the source database at the destination database. The OPEN RESTRICTED option is required. This option enables a restricted session in the duplicate database by issuing the following SQL statement: ALTER SYSTEM ENABLE RESTRICTED SESSION. RMAN issues this statement immediately before the duplicate database is opened.

You can use the UNTIL SCN clause to specify an SCN for the duplication. Use the until SCN determined in Step 2 for this clause. Archived redo logs must be available for the until SCN specified and for higher SCN values. Therefore, Step 3 archived the redo log containing the until SCN.
Make sure you use TO database_name in the DUPLICATE command to specify the name of the duplicate database. In this example, the duplicate database is stms.net. Therefore, the DUPLICATE command for this example includes TO stms.net.
The following example is an RMAN DUPLICATE command:

rman
RMAN> CONNECT TARGET SYS/change_on_install@orcl.net
RMAN> CONNECT AUXILIARY SYS/change_on_install@stms.net
RMAN> RUN
      {
        SET UNTIL SCN 748045;
        ALLOCATE AUXILIARY CHANNEL mgdb DEVICE TYPE sbt;
        DUPLICATE TARGET DATABASE TO stms.net
          NOFILENAMECHECK
          OPEN RESTRICTED;
      }
Connect to the destination database as a system administrator in SQL*Plus, and rename the global name. After an RMAN database instantiation, the destination database has the same global name as the source database, but the destination database must have its original name until the end of the maintenance operation. Rename the global name of the destination database back to its original name with the following statement:
ALTER DATABASE RENAME GLOBAL_NAME TO stms.net;
Connect as the Streams administrator in SQL*Plus to the destination database, and create a database link to the source database. For example:
CONNECT strmadmin/strmadminpw@stms.net

CREATE DATABASE LINK orcl.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'orcl.net';
This database link is required because the POST_INSTANTIATION_SETUP procedure runs the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at the destination database, and the SET_GLOBAL_INSTANTIATION_SCN procedure requires the database link.
If the source database and the capture database are the same database, then while still connected as the Streams administrator in SQL*Plus to the destination database, drop the database link from the source database to the destination database that was cloned from the source database:
DROP DATABASE LINK stms.net;
If you use the RMAN CONVERT DATABASE command for instantiation to migrate the database to a different platform, then complete the following steps:

Create a backup of the source database if one does not exist. RMAN requires a valid backup. In this example, create a backup of orcl.net if one does not exist.
While connected as an administrative user in SQL*Plus to the source database, archive the current online redo log. For example:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Prepare your environment for database conversion, which includes opening the source database in read-only mode. Complete the following steps:
If the source database is open, then shut it down and start it in read-only mode.
Run the CHECK_DB and CHECK_EXTERNAL functions in the DBMS_TDB package. Check the results to ensure that the conversion is supported by the RMAN CONVERT DATABASE command.
See Also: Oracle Database Backup and Recovery Advanced User's Guide for more information about these steps
Determine the current SCN of the source database:
SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  current_scn NUMBER;
BEGIN
  current_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
  DBMS_OUTPUT.PUT_LINE('Current SCN: ' || current_scn);
END;
/

Make a note of the SCN value returned. You will use this number for the instantiation SCN in "Task 4: Setting Up Streams After Instantiation". For this example, assume that the returned value is 748044.
Open RMAN and run the CONVERT DATABASE command.

Make sure you use NEW DATABASE database_name in the CONVERT DATABASE command to specify the name of the destination database. In this example, the destination database name is stms. Therefore, the CONVERT DATABASE command for this example includes NEW DATABASE stms.

The following example is an RMAN CONVERT DATABASE command for a destination database that is running on the Linux IA (64-bit) platform:
rman
RMAN> CONNECT TARGET SYS/change_on_install@orcl.net
RMAN> CONVERT DATABASE NEW DATABASE 'stms'
        TRANSPORT SCRIPT '/tmp/convertdb/transportscript.sql'
        TO PLATFORM 'Linux IA (64-bit)'
        DB_FILE_NAME_CONVERT = ('/home/oracle/dbs','/tmp/convertdb');

Transfer the datafiles, PFILE, and SQL script produced by the RMAN CONVERT DATABASE command to the computer system that is running the destination database.
On the computer system that is running the destination database, modify the SQL script so that the destination database always opens with restricted session enabled.
An example script follows, with the necessary modifications marked by NOTE comments:

-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT PFILE='init_00gd2lak_1_0.ora'
CREATE CONTROLFILE REUSE SET DATABASE "STMS" RESETLOGS NOARCHIVELOG
    MAXLOGFILES     32
    MAXLOGMEMBERS    2
    MAXDATAFILES    32
    MAXINSTANCES     1
    MAXLOGHISTORY  226
LOGFILE
  GROUP 1 '/tmp/convertdb/archlog1'  SIZE 25M,
  GROUP 2 '/tmp/convertdb/archlog2'  SIZE 25M
DATAFILE
  '/tmp/convertdb/systemdf',
  '/tmp/convertdb/sysauxdf',
  '/tmp/convertdb/datafile1',
  '/tmp/convertdb/datafile2',
  '/tmp/convertdb/datafile3'
CHARACTER SET WE8DEC
;

-- NOTE: This ALTER SYSTEM statement is added to enable restricted session.
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- No tempfile entries found to add.
--
set echo off
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
prompt * Your database has been created successfully!
prompt * There are many things to think about for the new database. Here
prompt * is a checklist to help you stay on track:
prompt * 1. You may want to redefine the location of the directory objects.
prompt * 2. You may want to change the internal database identifier (DBID)
prompt *    or the global database name for this database. Use the
prompt *    NEWDBID Utility (nid).
prompt ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SHUTDOWN IMMEDIATE
-- NOTE: This startup has the UPGRADE parameter.
-- The startup already has restricted session enabled, so no change is needed.
STARTUP UPGRADE PFILE='init_00gd2lak_1_0.ora'
@@ ?/rdbms/admin/utlirp.sql
SHUTDOWN IMMEDIATE

-- NOTE: The startup below is generated without the RESTRICT clause.
-- Add the RESTRICT clause.
STARTUP RESTRICT PFILE='init_00gd2lak_1_0.ora'
-- The following step will recompile all PL/SQL modules.
-- It may take several hours to complete.
@@ ?/rdbms/admin/utlrp.sql

set feedback 6;
Other changes to the script might be necessary. For example, the datafile locations and PFILE location might need to be changed to point to the correct locations on the destination database computer system.
Connect to the destination database as a system administrator in SQL*Plus, and rename the global name. After an RMAN database instantiation, the destination database has the same global name as the source database, but the destination database must have its original name until the end of the maintenance operation. Rename the global name of the destination database back to its original name with the following statement:
ALTER DATABASE RENAME GLOBAL_NAME TO stms.net;
Connect as the Streams administrator in SQL*Plus to the destination database, and create a database link to the source database. For example:
CONNECT strmadmin/strmadminpw@stms.net

CREATE DATABASE LINK orcl.net CONNECT TO strmadmin IDENTIFIED BY strmadminpw
  USING 'orcl.net';
This database link is required because the POST_INSTANTIATION_SETUP procedure runs the SET_GLOBAL_INSTANTIATION_SCN procedure in the DBMS_APPLY_ADM package at the destination database, and the SET_GLOBAL_INSTANTIATION_SCN procedure requires the database link.
If the source database and the capture database are the same database, then while still connected as the Streams administrator in SQL*Plus to the destination database, drop the database link from the source database to the destination database that was cloned from the source database:
DROP DATABASE LINK stms.net;
To set up Streams after instantiation, run the POST_INSTANTIATION_SETUP procedure. The POST_INSTANTIATION_SETUP procedure must be run at the database where the PRE_INSTANTIATION_SETUP procedure was run in "Task 2: Setting Up Streams Prior to Instantiation".

When you run the POST_INSTANTIATION_SETUP procedure, you can specify that the procedure performs the configuration directly, or that the procedure generates a script that contains the configuration actions. See "Deciding Whether to Configure Streams Directly or Generate a Script". The examples in this section specify that the procedure performs the configuration directly.

The parameter values specified in the PRE_INSTANTIATION_SETUP and POST_INSTANTIATION_SETUP procedures must match, except for the values of the following parameters: perform_actions, script_name, script_directory_object, and start_processes. In this example, all of the parameter values match in the two procedures.
It is important to set the instantiation_scn parameter in the POST_INSTANTIATION_SETUP procedure correctly. Follow these instructions when you set this parameter:

If RMAN was used for instantiation, then set the instantiation_scn parameter to the value determined during instantiation. This value was determined when you completed the instantiation in "Instantiating the Database Using the RMAN DUPLICATE Command" or "Instantiating the Database Using the RMAN CONVERT DATABASE Command".

The source database and third database examples in this section set the instantiation_scn parameter to 748044 for the following reasons:

If the RMAN DUPLICATE command was used for instantiation, then the command duplicates the database up to one less than the SCN value specified in the UNTIL SCN clause. Therefore, you should subtract one from the until SCN value that you specified when you ran the DUPLICATE command in Step 5 in "Instantiating the Database Using the RMAN DUPLICATE Command". In this example, the until SCN was set to 748045. Therefore, the instantiation_scn parameter should be set to 748045 - 1, or 748044.

If the RMAN CONVERT DATABASE command was used for instantiation, then the instantiation_scn parameter should be set to the SCN value determined immediately before running the CONVERT DATABASE command. This value was determined in Step 4 in "Instantiating the Database Using the RMAN CONVERT DATABASE Command".

If Export/Import was used for instantiation, then the instantiation SCN was set during import, and the instantiation_scn parameter must be set to NULL. The destination database example in this section sets the instantiation_scn parameter to NULL because RMAN cannot be used for database instantiation when the destination database is the capture database.
The specific instructions for setting up Streams after instantiation depend on which database is the capture database. Follow the instructions in the appropriate section:
Note: When the POST_INSTANTIATION_SETUP procedure is running with the perform_actions parameter set to true, metadata about its configuration actions is recorded in the following data dictionary views: DBA_RECOVERABLE_SCRIPT, DBA_RECOVERABLE_SCRIPT_PARAMS, DBA_RECOVERABLE_SCRIPT_BLOCKS, and DBA_RECOVERABLE_SCRIPT_ERRORS. If the procedure stops because it encounters an error, then you can use the RECOVER_OPERATION procedure in the DBMS_STREAMS_ADM package to complete the configuration after you correct the conditions that caused the error. These views are not populated if a script is used to configure the replication environment.
Complete the following steps to set up Streams after instantiation when the source database is the capture database:
Connect in SQL*Plus to the source database as the Streams administrator, and run the POST_INSTANTIATION_SETUP procedure:

CONNECT strmadmin/strmadminpw@orcl.net

DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.capture_q_table',
    capture_queue_name       => 'strmadmin.capture_q',
    propagation_name         => 'prop_maint',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.apply_q_table',
    apply_queue_name         => 'strmadmin.apply_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    instantiation_scn        => 748044,  -- NULL if Export/Import instantiation
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/
Proceed to "Task 5: Finishing the Maintenance Operation and Removing Streams".
Complete the following steps to set up Streams after instantiation when the destination database is the capture database:
Connect in SQL*Plus to the destination database as the Streams administrator, and run the POST_INSTANTIATION_SETUP procedure:

CONNECT strmadmin/strmadminpw@stms.net

DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.streams_q_table',
    capture_queue_name       => 'strmadmin.streams_q',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.streams_q_table',
    apply_queue_name         => 'strmadmin.streams_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    instantiation_scn        => NULL,
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/

Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database.
Proceed to "Task 5: Finishing the Maintenance Operation and Removing Streams".
This example assumes that the global name of the third database is thrd.net. Complete the following steps to set up Streams after instantiation when a third database is the capture database:

Connect in SQL*Plus to the third database as the Streams administrator, and run the POST_INSTANTIATION_SETUP procedure:

CONNECT strmadmin/strmadminpw@thrd.net

DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.POST_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.capture_q_table',
    capture_queue_name       => 'strmadmin.capture_q',
    propagation_name         => 'prop_maint',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.apply_q_table',
    apply_queue_name         => 'strmadmin.apply_q',
    bi_directional           => false,
    include_ddl              => true,
    start_processes          => false,
    instantiation_scn        => 748044,  -- NULL if Export/Import instantiation
    exclude_schemas          => '*',
    exclude_flags            => DBMS_STREAMS_ADM.EXCLUDE_FLAGS_UNSUPPORTED +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DML +
                                DBMS_STREAMS_ADM.EXCLUDE_FLAGS_DDL);
END;
/
Proceed to "Task 5: Finishing the Maintenance Operation and Removing Streams".
Complete the following steps to finish a maintenance operation using Oracle Streams and remove Streams components:
At the destination database, disable any imported jobs that modify data that will be replicated from the source database. Query the DBA_JOBS data dictionary view to list the jobs.
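For example, the following sketch lists the jobs and then marks a hypothetical job number 123 as broken so that it does not run; connect as the owner of each job before marking it:

SELECT JOB, LOG_USER, WHAT FROM DBA_JOBS;

BEGIN
  DBMS_JOB.BROKEN(job => 123, broken => TRUE);
END;
/
COMMIT;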
If you are applying a patch, then apply the patch to the destination database now. Follow the instructions included with the patch.
If you are upgrading user-created applications, then, at the destination database, you might need to complete the following steps:
Modify the schema objects in the database to support the upgraded user-created applications.
Configure one or more declarative rule-based transformations and DML handlers that modify row LCRs from the source database so that the apply process applies these row LCRs to the modified schema objects correctly. For example, if a column name was changed to support the upgraded user-created applications, then a declarative rule-based transformation should rename the column in a row LCR that involves the column.
Configure one or more virtual dependency definitions if row LCRs might contain logical dependencies that cannot be detected by the apply process alone.
While connected as an administrative user in SQL*Plus to the destination database, use the ALTER SYSTEM statement to disable RESTRICTED SESSION:

CONNECT SYSTEM/MANAGER

ALTER SYSTEM DISABLE RESTRICTED SESSION;
While connected as the Streams administrator in SQL*Plus to the destination database, start the apply process. For example:
CONNECT strmadmin/strmadminpw@stms.net

BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply_maint');
END;
/
While connected as the Streams administrator in SQL*Plus to the capture database, start the capture process. For example:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture_maint');
END;
/
This step begins the process of replicating changes that were made to the source database during instantiation of the destination database.
Monitor the Streams environment until the apply process at the destination database has applied most of the changes from the source database. For example, if the name of the capture process is capture_maint, and the name of the apply process is apply_maint, then run the following query at the capture database:

COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Captured SCN' FORMAT 99999999999
COLUMN LWM_MESSAGE_NUMBER HEADING 'Applied SCN' FORMAT 99999999999

SELECT c.ENQUEUE_MESSAGE_NUMBER, a.LWM_MESSAGE_NUMBER
  FROM V$STREAMS_CAPTURE c, V$STREAMS_APPLY_COORDINATOR@stms.net a
  WHERE CAPTURE_NAME = 'CAPTURE_MAINT'
    AND APPLY_NAME   = 'APPLY_MAINT';
When the two SCN values returned by this query are nearly equal, most of the changes from the source database have been applied at the destination database, and you can move on to the next step. At this point in the process, the values returned by this query might never be equal because the source database still allows changes.
If this query returns no results, then make sure the Streams clients in the environment are enabled by querying the STATUS column in the DBA_CAPTURE view at the capture database and the DBA_APPLY view at the destination database. If the Streams configuration uses a propagation, then you can check the status of the propagation by running the query in "Displaying the Schedule for a Propagation Job".
If a Streams client is disabled, then try restarting it. If a Streams client will not restart, then troubleshoot the environment using the information in Chapter 18, "Troubleshooting a Streams Environment".
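For example, the following queries show the status of the capture process (run at the capture database) and the apply process (run at the destination database):

SELECT CAPTURE_NAME, STATUS FROM DBA_CAPTURE;

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;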
While connected as the Streams administrator in SQL*Plus to the destination database, make sure there are no apply errors by running the following query:
CONNECT strmadmin/strmadminpw@stms.net

SELECT COUNT(*) FROM DBA_APPLY_ERROR;
If this query returns zero, then move on to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.
Disconnect all applications and users from the source database.
While connected as an administrative user in SQL*Plus to the source database, restrict access to the database. For example:
CONNECT SYSTEM/MANAGER@orcl.net

ALTER SYSTEM ENABLE RESTRICTED SESSION;
While connected as an administrative user in SQL*Plus to the source database, repeat the query you ran in Step 7. When the two SCN values returned by the query are equal, all of the changes from the source database have been applied at the destination database, and you can move on to the next step.
While connected as the Streams administrator in SQL*Plus to the destination database, repeat the query you ran in Step 8. If this query returns zero, then move on to the next step. If this query shows errors in the error queue, then resolve these errors before continuing. See "Managing Apply Errors" for instructions.
If you performed any actions that created, modified, or deleted job queue processes at the source database during the maintenance operation, then perform the same actions at the destination database. See "Considerations for Job Queue Processes and PL/SQL Package Subprograms" for more information.
If you invoked any Oracle-supplied PL/SQL package subprograms at the source database during the maintenance operation that modified both user data and dictionary metadata at the same time, then invoke the same subprograms at the destination database. See "Considerations for Job Queue Processes and PL/SQL Package Subprograms" for more information.
Remove the Streams components that are no longer needed from both databases, including the ANYDATA queues, supplemental logging specifications, the capture process, the propagation if one exists, and the apply process. Connect as the Streams administrator in SQL*Plus to the capture database, and run the CLEANUP_INSTANTIATION_SETUP procedure to remove the Streams components from both databases.
If the capture database is the source database or a third database, then run the following procedure:
DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.capture_q_table',
    capture_queue_name       => 'strmadmin.capture_q',
    propagation_name         => 'prop_maint',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.apply_q_table',
    apply_queue_name         => 'strmadmin.apply_q',
    bi_directional           => false,
    change_global_name       => true);
END;
/
If the capture database is the destination database, then run the following procedure:
DECLARE
  empty_tbs  DBMS_STREAMS_TABLESPACE_ADM.TABLESPACE_SET;
BEGIN
  DBMS_STREAMS_ADM.CLEANUP_INSTANTIATION_SETUP(
    maintain_mode            => 'GLOBAL',
    tablespace_names         => empty_tbs,
    source_database          => 'orcl.net',
    destination_database     => 'stms.net',
    perform_actions          => true,
    script_name              => NULL,
    script_directory_object  => NULL,
    capture_name             => 'capture_maint',
    capture_queue_table      => 'strmadmin.streams_q_table',
    capture_queue_name       => 'strmadmin.streams_q',
    apply_name               => 'apply_maint',
    apply_queue_table        => 'strmadmin.streams_q_table',
    apply_queue_name         => 'strmadmin.streams_q',
    bi_directional           => false,
    change_global_name       => true);
END;
/
Notice that the propagation_name parameter is omitted because a propagation is not necessary when the destination database is the capture database.

Both sample procedures in this step rename the global name of the destination database to orcl.net because the change_global_name parameter is set to true.
Shut down the source database. This database should not be opened again.
At the destination database, enable any jobs that you disabled earlier.
Make the destination database available for applications and users. Redirect any applications and users that were connecting to the source database to the destination database. If necessary, reconfigure your network and Oracle Net so that systems that communicated with the source database now communicate with the destination database. See Oracle Database Net Services Administrator's Guide for instructions.
If you no longer need the Streams administrator at the destination database, then connect as an administrative user in SQL*Plus to the destination database, and run the following statement:
CONNECT SYSTEM/MANAGER@orcl.net

DROP USER strmadmin CASCADE;
The maintenance operation is complete.