Oracle® Streams Concepts and Administration 10g Release 2 (10.2), Part Number B14229-01
This chapter contains information about identifying and resolving common problems in a Streams environment.
This chapter contains these topics:
See Also: Oracle Streams Replication Administrator's Guide for more information about troubleshooting Streams replication environments
If a capture process is not capturing changes as expected, or if you are having other problems with a capture process, then use the following checklist to identify and resolve capture problems:
Are You Trying to Configure Downstream Capture without DBMS_CAPTURE_ADM?
Are More Actions Required for Downstream Capture without a Database Link?
A capture process captures changes only when it is enabled.
You can check whether a capture process is enabled, disabled, or aborted by querying the DBA_CAPTURE data dictionary view. For example, to check whether a capture process named capture is enabled, run the following query:
SELECT STATUS FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'CAPTURE';
If the capture process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the capture process is disabled, then try restarting it. If the capture process is aborted, then you might need to correct an error before you can restart it successfully.
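As a sketch, a disabled capture process can be restarted with the START_CAPTURE procedure in the DBMS_CAPTURE_ADM package; the capture process name capture matches the earlier example:

```sql
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(
    capture_name => 'capture');
END;
/
```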
To determine why the capture process aborted, query the DBA_CAPTURE data dictionary view or check the trace file for the capture process. The following query shows when the capture process aborted and the error that caused it to abort:
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT CAPTURE_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_CAPTURE
  WHERE STATUS = 'ABORTED';
If a capture process has not captured recent changes, then the cause might be that the capture process has fallen behind. To check, you can query the V$STREAMS_CAPTURE dynamic performance view. If capture process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism capture process parameter.
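For illustration, a rough latency check and a parallelism adjustment might look like the following; the value 4 is an arbitrary example, not a recommendation:

```sql
-- Approximate capture latency, in seconds
SELECT CAPTURE_NAME,
       (SYSDATE - CAPTURE_MESSAGE_CREATE_TIME) * 86400 LATENCY_SECONDS
  FROM V$STREAMS_CAPTURE;

-- Raise the parallelism capture process parameter
BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'capture',
    parameter    => 'parallelism',
    value        => '4');
END;
/
```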
When a capture process is started or restarted, it might need to scan redo log files that were generated before the log file that contains the start SCN. You can query the DBA_CAPTURE data dictionary view to determine the first SCN and start SCN for a capture process. Removing required redo log files before they are scanned by a capture process causes the capture process to abort and results in the following error in a capture process trace file:
ORA-01291: missing logfile
If you see this error, then try restoring any missing redo log file and restarting the capture process. You can check the V$LOGMNR_LOGS dynamic performance view to determine the missing SCN range, and add the relevant redo log files. A capture process needs the redo log file that includes the required checkpoint SCN and all subsequent redo log files. You can query the REQUIRED_CHECKPOINT_SCN column in the DBA_CAPTURE data dictionary view to determine the required checkpoint SCN for a capture process.
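For example, a single query against DBA_CAPTURE shows the relevant SCN values for each capture process:

```sql
SELECT CAPTURE_NAME, FIRST_SCN, START_SCN, REQUIRED_CHECKPOINT_SCN
  FROM DBA_CAPTURE;
```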
If you are using the flash recovery area feature of Recovery Manager (RMAN) on a source database in a Streams environment, then RMAN might delete archived redo log files that are required by a capture process. RMAN might delete these files when the disk space used by the recovery-related files is nearing the specified disk quota for the flash recovery area. To prevent this problem in the future, complete one or more of the following actions:
Increase the disk quota for the flash recovery area. Increasing the disk quota makes it less likely that RMAN will delete a required archived redo log file, but it will not always prevent the problem.
Configure the source database to store archived redo log files in a location other than the flash recovery area. A local capture process will be able to use the log files in the other location if the required log files are missing in the flash recovery area. In this case, a database administrator must manage the log files manually in the other location.
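As a sketch of the first option, the flash recovery area quota is a single initialization parameter; the 10G value below is purely illustrative:

```sql
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
```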
If a downstream capture process is not capturing changes, then it might be waiting for redo data to scan. Redo log files can be registered implicitly or explicitly for a downstream capture process. Redo log files registered implicitly typically are registered in one of the following ways:
For a real-time downstream capture process, redo transport services use the log writer process (LGWR) to transfer the redo data from the source database to the standby redo log at the downstream database. Next, the archiver at the downstream database registers the redo log files with the downstream capture process when it archives them.
For an archived-log downstream capture process, redo transport services transfer the archived redo log files from the source database to the downstream database and register the archived redo log files with the downstream capture process.
If redo log files are registered explicitly for a downstream capture process, then you must manually transfer the redo log files to the downstream database and register them with the downstream capture process.
Regardless of whether the redo log files are registered implicitly or explicitly, the downstream capture process can capture changes made to the source database only if the appropriate redo log files are registered with the downstream capture process. You can query the V$STREAMS_CAPTURE dynamic performance view to determine whether a downstream capture process is waiting for a redo log file. For example, run the following query for a downstream capture process named strm05_capture:
SELECT STATE FROM V$STREAMS_CAPTURE WHERE CAPTURE_NAME='STRM05_CAPTURE';
If the capture process state is either WAITING FOR DICTIONARY REDO or WAITING FOR REDO, then verify that the redo log files have been registered with the downstream capture process by querying the DBA_REGISTERED_ARCHIVED_LOG and DBA_CAPTURE data dictionary views. For example, the following query lists the redo log files currently registered with the strm05_capture downstream capture process:
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A15
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 9999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' FORMAT A30
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' FORMAT A10
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' FORMAT A10

SELECT r.SOURCE_DATABASE,
       r.SEQUENCE#,
       r.NAME,
       r.DICTIONARY_BEGIN,
       r.DICTIONARY_END
  FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
  WHERE c.CAPTURE_NAME = 'STRM05_CAPTURE'
    AND r.CONSUMER_NAME = c.CAPTURE_NAME;
If this query does not return any rows, then no redo log files are registered with the capture process currently. If you configured redo transport services to transfer redo data from the source database to the downstream database for this capture process, then make sure the redo transport services are configured correctly. If the redo transport services are configured correctly, then run the ALTER SYSTEM ARCHIVE LOG CURRENT statement at the source database to archive a log file. If you did not configure redo transport services to transfer redo data, then make sure the method you are using for log file transfer and registration is working properly. You can register log files explicitly using an ALTER DATABASE REGISTER LOGICAL LOGFILE statement.
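For illustration, explicit registration names the transferred file and the capture process; the file path below is hypothetical:

```sql
ALTER DATABASE REGISTER LOGICAL LOGFILE
  '/oracle/arch/1_47_123456789.arc'   -- hypothetical archived log file
  FOR 'strm05_capture';
```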
If the downstream capture process is waiting for redo, then it also is possible that there is a problem with the network connection between the source database and the downstream database. There also might be a problem with the log file transfer method. Check your network connection and log file transfer method to ensure that they are working properly.
If you configured a real-time downstream capture process, and no redo log files are registered with the capture process, then try switching the log file at the source database. You might need to switch the log file more than once if there is little or no activity at the source database.
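Switching the log file at the source database is a single statement, which you can repeat if activity is low:

```sql
ALTER SYSTEM SWITCH LOGFILE;
```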
Also, if you plan to use a downstream capture process to capture changes to historical data, then consider the following additional issues:
Both the source database that generates the redo log files and the database that runs a downstream capture process must be Oracle Database 10g databases.
The start of a data dictionary build must be present in the oldest redo log file added, and the capture process must be configured with a first SCN that matches the start of the data dictionary build.
The database objects for which the capture process will capture changes must be prepared for instantiation at the source database, not at the downstream database. In addition, you cannot specify a time in the past when you prepare objects for instantiation. Objects are always prepared for instantiation at the current database SCN, and only changes to a database object that occurred after the object was prepared for instantiation can be captured by a capture process.
You must use the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create a downstream capture process. If you try to create a capture process using a procedure in the DBMS_STREAMS_ADM package and specify a source database name that does not match the global name of the local database, then Oracle returns the following error:
ORA-26678: Streams capture process must be created first
To correct the problem, use the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package to create the downstream capture process.

If you are trying to create a local capture process using a procedure in the DBMS_STREAMS_ADM package, and you encounter this error, then make sure the database name specified in the source_database parameter of the procedure you are running matches the global name of the local database.
When downstream capture is configured with a database link, the database link can be used to perform operations at the source database and obtain information from the source database automatically. When downstream capture is configured without a database link, these actions must be performed manually, and the information must be obtained manually. If you do not complete these actions manually, then errors result when you try to create the downstream capture process.
Specifically, the following actions must be performed manually when you configure downstream capture without a database link:
In certain situations, you must run the DBMS_CAPTURE_ADM.BUILD procedure at the source database to extract the data dictionary at the source database to the redo log before a capture process is created.
You must prepare the source database objects for instantiation.
You must obtain the first SCN for the downstream capture process and specify the first SCN using the first_scn parameter when you create the capture process with the CREATE_CAPTURE procedure in the DBMS_CAPTURE_ADM package.
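As a hedged sketch of the last two steps: BUILD returns the first SCN through an OUT parameter at the source database, and that value is then supplied to CREATE_CAPTURE at the downstream database. The queue name, capture process name, source database name, and SCN value below are hypothetical:

```sql
-- At the source database: extract the data dictionary to the redo log
SET SERVEROUTPUT ON
DECLARE
  scn NUMBER;
BEGIN
  DBMS_CAPTURE_ADM.BUILD(first_scn => scn);
  DBMS_OUTPUT.PUT_LINE('First SCN: ' || scn);
END;
/

-- At the downstream database: create the capture process with that SCN
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.streams_queue',
    capture_name    => 'strm05_capture',
    source_database => 'dbs1.net',
    first_scn       => 876502);   -- value returned by BUILD at the source
END;
/
```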
If a propagation is not propagating changes as expected, then use the following checklist to identify and resolve propagation problems:
If messages are not appearing in the destination queue for a propagation as expected, then the propagation might not be configured to propagate messages from the correct source queue to the correct destination queue.
For example, to check the source queue and destination queue for a propagation named dbs1_to_dbs2, run the following query:
COLUMN SOURCE_QUEUE HEADING 'Source Queue' FORMAT A35
COLUMN DESTINATION_QUEUE HEADING 'Destination Queue' FORMAT A35

SELECT p.SOURCE_QUEUE_OWNER||'.'||
       p.SOURCE_QUEUE_NAME||'@'||
       g.GLOBAL_NAME SOURCE_QUEUE,
       p.DESTINATION_QUEUE_OWNER||'.'||
       p.DESTINATION_QUEUE_NAME||'@'||
       p.DESTINATION_DBLINK DESTINATION_QUEUE
  FROM DBA_PROPAGATION p, GLOBAL_NAME g
  WHERE p.PROPAGATION_NAME = 'DBS1_TO_DBS2';
Your output looks similar to the following:
Source Queue                        Destination Queue
----------------------------------- -----------------------------------
STRMADMIN.STREAMS_QUEUE@DBS1.NET    STRMADMIN.STREAMS_QUEUE@DBS2.NET
If the propagation is not using the correct queues, then create a new propagation. You might need to remove the existing propagation if it is not appropriate for your environment.
For a propagation job to propagate messages, the propagation must be enabled. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled.
You can find the following information about a propagation:
The database link used to propagate messages from the source queue to the destination queue
Whether the propagation is ENABLED, DISABLED, or ABORTED
The date of the last error, if there are any propagation errors
The error message of the last error, if there are any propagation errors
For example, to check whether a propagation named streams_propagation is enabled, run the following query:
COLUMN DESTINATION_DBLINK HEADING 'Database|Link' FORMAT A10
COLUMN STATUS HEADING 'Status' FORMAT A8
COLUMN ERROR_DATE HEADING 'Error|Date'
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A50

SELECT DESTINATION_DBLINK, STATUS, ERROR_DATE, ERROR_MESSAGE
  FROM DBA_PROPAGATION
  WHERE PROPAGATION_NAME = 'STREAMS_PROPAGATION';
If the propagation is disabled currently, then your output looks similar to the following:
Database            Error
Link       Status   Date      Error Message
---------- -------- --------- --------------------------------------------------
INST2.NET  DISABLED 27-APR-05 ORA-25307: Enqueue rate too high, flow control
                              enabled
If there is a problem, then try the following actions to correct it:
If a propagation is disabled, then you can enable it using the START_PROPAGATION procedure in the DBMS_PROPAGATION_ADM package, if you have not done so already.
If the propagation is disabled or aborted, and the Error Date and Error Message fields are populated, then diagnose and correct the problem based on the error message.
If the propagation is disabled or aborted, then check the trace file for the propagation job process. The query in "Displaying the Schedule for a Propagation Job" displays the propagation job process.
If the propagation job is enabled, but is not propagating messages, then try stopping and restarting the propagation.
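Stopping and restarting a propagation is a two-call sketch with the DBMS_PROPAGATION_ADM package, using the propagation name from the earlier example:

```sql
BEGIN
  DBMS_PROPAGATION_ADM.STOP_PROPAGATION(
    propagation_name => 'streams_propagation');
  DBMS_PROPAGATION_ADM.START_PROPAGATION(
    propagation_name => 'streams_propagation');
END;
/
```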
Propagation jobs use job queue processes to propagate messages. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that runs propagations. It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
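For example, you can check the current setting and raise it dynamically; the value 10 is illustrative:

```sql
SHOW PARAMETER JOB_QUEUE_PROCESSES

ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;
```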
ANYDATA queues are secure queues, and security must be configured properly for users to be able to perform operations on them. If you use the SET_UP_QUEUE procedure in the DBMS_STREAMS_ADM package to configure a secure ANYDATA queue, then an error is raised if the agent that SET_UP_QUEUE tries to create already exists and is associated with a user other than the user specified by queue_user in this procedure. In this case, rename or remove the existing agent using the ALTER_AQ_AGENT or DROP_AQ_AGENT procedure, respectively, in the DBMS_AQADM package. Next, retry SET_UP_QUEUE.
In addition, you might encounter one of the following errors if security is not configured properly for an ANYDATA queue:
Secure queue access must be granted to an AQ agent explicitly for both enqueue and dequeue operations. You grant the agent these privileges using the ENABLE_DB_ACCESS procedure in the DBMS_AQADM package.
For example, to grant an agent named explicit_dq the privileges of the database user oe, run the following procedure:
BEGIN
  DBMS_AQADM.ENABLE_DB_ACCESS(
    agent_name  => 'explicit_dq',
    db_username => 'oe');
END;
/
To check the privileges of the agents in a database, run the following query:
SELECT AGENT_NAME "Agent", DB_USERNAME "User" FROM DBA_AQ_AGENT_PRIVS;
Your output looks similar to the following:
Agent                          User
------------------------------ ------------------------------
EXPLICIT_ENQ                   OE
APPLY_OE                       OE
EXPLICIT_DQ                    OE
See Also: "Enabling a User to Perform Operations on a Secure Queue" for a detailed example that grants privileges to an agent
To enqueue into a secure queue, the SENDER_ID in the message properties must be set to an AQ agent that has secure queue privileges for the queue.
See Also: "Wrapping User Message Payloads in an ANYDATA Wrapper and Enqueuing Them" for an example that sets the SENDER_ID for enqueue
If an apply process is not applying changes as expected, then use the following checklist to identify and resolve apply problems:
Does the Apply Process Apply Captured Messages or User-Enqueued Messages?
Is the Apply Process Queue Receiving the Messages to be Applied?
Is the AQ_TM_PROCESSES Initialization Parameter Set to Zero?
An apply process applies changes only when it is enabled. You can check whether an apply process is enabled, disabled, or aborted by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply is enabled, run the following query:
SELECT STATUS FROM DBA_APPLY WHERE APPLY_NAME = 'APPLY';
If the apply process is disabled, then your output looks similar to the following:
STATUS
--------
DISABLED
If the apply process is disabled, then try restarting it. If the apply process is aborted, then you might need to correct an error before you can restart it successfully.
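As with capture processes, restarting a disabled apply process is a single procedure call; the name apply matches the earlier example:

```sql
BEGIN
  DBMS_APPLY_ADM.START_APPLY(
    apply_name => 'apply');
END;
/
```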
To determine why the apply process aborted, query the DBA_APPLY data dictionary view or check the trace files for the apply process. The following query shows when the apply process aborted and the error that caused it to abort:
COLUMN APPLY_NAME HEADING 'APPLY|Process|Name' FORMAT A10
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error Number' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A40

SELECT APPLY_NAME, STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE
  FROM DBA_APPLY
  WHERE STATUS = 'ABORTED';
If an apply process has not applied recent changes, then the problem might be that the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism apply process parameter.
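A sketch of the adjustment, analogous to the capture process case; the value 4 is only an example:

```sql
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'apply',
    parameter  => 'parallelism',
    value      => '4');
END;
/
```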
An apply process can apply either captured messages or user-enqueued messages, but not both. An apply process might not be applying messages of one type because it was configured to apply the other type.
You can check the type of messages applied by an apply process by querying the DBA_APPLY data dictionary view. For example, to check whether an apply process named apply applies captured messages or user-enqueued messages, run the following query:
COLUMN APPLY_CAPTURED HEADING 'Type of Messages Applied' FORMAT A25

SELECT DECODE(APPLY_CAPTURED,
              'YES', 'Captured',
              'NO',  'User-Enqueued') APPLY_CAPTURED
  FROM DBA_APPLY
  WHERE APPLY_NAME = 'APPLY';
If the apply process applies captured messages, then your output looks similar to the following:
Type of Messages Applied
-------------------------
Captured
If an apply process is not applying the expected type of messages, then you might need to create a new apply process to apply the messages.
An apply process must receive messages in its queue before it can apply these messages. Therefore, if an apply process is applying captured messages, then the capture process that captures these messages must be enabled, and it must be configured properly. Similarly, if messages are propagated from one or more databases before reaching the apply process, then each propagation must be enabled and must be configured properly. If a capture process or a propagation on which the apply process depends is not enabled or is not configured properly, then the messages might never reach the apply process queue.
The rule sets used by all Streams clients, including capture processes and propagations, determine the behavior of these Streams clients. Therefore, make sure the rule sets for any capture processes or propagations on which an apply process depends contain the correct rules. If the rules for these Streams clients are not configured properly, then the apply process queue might never receive the appropriate messages. Also, a message traveling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of a message, and a propagation uses a rule-based transformation on the message to change the table name, then, when the message reaches an apply process, the apply process rules must account for these transformations.
In an environment where a capture process captures changes that are propagated and applied at multiple databases, you can use the following guidelines to determine whether a problem is caused by a capture process or a propagation on which an apply process depends or by the apply process itself:
If no other destination databases of a capture process are applying changes from the capture process, then the problem is most likely caused by the capture process or a propagation near the capture process. In this case, first make sure the capture process is enabled and configured properly, and then make sure the propagations nearest the capture process are enabled and configured properly.
If other destination databases of a capture process are applying changes from the capture process, then the problem is most likely caused by the apply process itself or a propagation near the apply process. In this case, first make sure the apply process is enabled and configured properly, and then make sure the propagations nearest the apply process are enabled and configured properly.
You can use apply handlers to handle messages dequeued by an apply process in a customized way. These handlers include DML handlers, DDL handlers, precommit handlers, and message handlers. If an apply process is not behaving as expected, then check the handler procedures used by the apply process, and correct any flaws. You might need to modify a handler procedure or remove it to correct an apply problem.
You can find the names of these procedures by querying the DBA_APPLY_DML_HANDLERS and DBA_APPLY data dictionary views.
The AQ_TM_PROCESSES initialization parameter controls time monitoring on queue messages and controls processing of messages with delay and expiration properties specified. In Oracle Database 10g, the database automatically controls these activities when the AQ_TM_PROCESSES initialization parameter is not set.
If an apply process is not applying messages, but there are messages that satisfy the apply process rule sets in the apply process queue, then make sure the AQ_TM_PROCESSES initialization parameter is not set to zero at the destination database. If this parameter is set to zero, then unset this parameter or set it to a nonzero value and monitor the apply process to see if it begins to apply messages.
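For example, assuming the instance uses a server parameter file, unsetting the parameter so that the database manages time monitoring automatically can be sketched as follows (the RESET takes effect after the next restart):

```sql
ALTER SYSTEM RESET AQ_TM_PROCESSES SCOPE=SPFILE SID='*';
```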
To determine whether there are messages in a buffered queue, you can query the V$BUFFERED_QUEUES and V$BUFFERED_SUBSCRIBERS dynamic performance views. To determine whether there are user-enqueued messages in a queue, you can query the queue table for the queue.
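For illustration, the following query shows the number of messages currently in each buffered queue, including messages spilled to disk:

```sql
SELECT QUEUE_SCHEMA, QUEUE_NAME, NUM_MSGS, SPILL_MSGS
  FROM V$BUFFERED_QUEUES;
```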
When an apply process cannot apply a message, it moves the message and all of the other messages in the same transaction into the error queue. You should check for apply errors periodically to see if there are any transactions that could not be applied.
You can check for apply errors by querying the DBA_APPLY_ERROR data dictionary view. Also, you can reexecute a particular transaction from the error queue or all of the transactions in the error queue.
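As a sketch, a single transaction or all error transactions can be reexecuted with the DBMS_APPLY_ADM package; the transaction ID and apply process name below are hypothetical:

```sql
-- Reexecute one transaction from the error queue
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '1.17.2485');  -- hypothetical transaction ID
END;
/

-- Or reexecute every transaction in the error queue for an apply process
BEGIN
  DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(
    apply_name => 'apply');
END;
/
```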
When a capture process, a propagation, an apply process, or a messaging client is not behaving as expected, the problem might be that rules or rule-based transformations for the Streams client are not configured properly. Use the following checklist to identify and resolve problems with rules and rule-based transformations:
Are Declarative Rule-Based Transformations Configured Properly?
Are the Custom Rule-Based Transformations Configured Properly?
If a capture process, a propagation, an apply process, or a messaging client is behaving in an unexpected way, then the problem might be that the rules in either the positive rule set or negative rule set for the Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.
You can check the rules for a particular Streams client by querying the DBA_STREAMS_RULES data dictionary view. If you use both positive and negative rule sets in your Streams environment, then it is important to know whether a rule returned by this view is in the positive or negative rule set for a particular Streams client.
A Streams client performs an action, such as capture, propagation, apply, or dequeue, for messages that satisfy its rule sets. In general, a message satisfies the rule sets for a Streams client if no rules in the negative rule set evaluate to TRUE for the message, and at least one rule in the positive rule set evaluates to TRUE for the message.
"Rule Sets and Rule Evaluation of Messages" contains more detailed information about how a message satisfies the rule sets for a Streams client, including information about Streams client behavior when one or more rule sets are not specified.
This section includes the following subsections:
Schema and global rules in the positive rule set for a Streams client instruct the Streams client to perform its task for all of the messages relating to a particular schema or database, respectively. Schema and global rules in the negative rule set for a Streams client instruct the Streams client to discard all of the messages relating to a particular schema or database, respectively. If a Streams client is not behaving as expected, then it might be because schema or global rules are not configured properly for the Streams client.
For example, suppose a database is running an apply process named strm01_apply, and you want this apply process to apply LCRs containing changes to the hr schema. If the apply process uses a negative rule set, then make sure there are no schema rules that evaluate to TRUE for this schema in the negative rule set. Such rules cause the apply process to discard LCRs containing changes to the schema. "Displaying the Rules in the Negative Rule Set for a Streams Client" contains an example of a query that shows such rules.
If the query returns any such rules, then the rules returned might be causing the apply process to discard changes to the schema. If this query returns no rows, then make sure there are schema rules in the positive rule set for the apply process that evaluate to TRUE for the schema. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows such rules.
Table rules in the positive rule set for a Streams client instruct the Streams client to perform its task for the messages relating to one or more particular tables. Table rules in the negative rule set for a Streams client instruct the Streams client to discard the messages relating to one or more particular tables.
If a Streams client is not behaving as expected for a particular table, then it might be for one of the following reasons:
One or more global rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table because the table is in a specific database. That is, a global rule in the negative rule set for the Streams client might instruct the Streams client to discard all messages from the source database that contains the table, or a global rule in the positive rule set for the Streams client might instruct the Streams client to perform its task for all messages from the source database that contains the table.
One or more schema rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table because the table is in a specific schema. That is, a schema rule in the negative rule set for the Streams client might instruct the Streams client to discard all messages relating to database objects in the schema, or a schema rule in the positive rule set for the Streams client might instruct the Streams client to perform its task for all messages relating to database objects in the schema.
One or more table rules in the rule sets for the Streams client instruct the Streams client to behave in a particular way for messages relating to the table.
If you are sure that no global or schema rules are causing the unexpected behavior, then you can check for table rules in the rule sets for a Streams client. For example, if you expect a capture process to capture changes to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the positive and negative rule sets for the capture process do not instruct it to capture changes to the table.
Suppose a database is running a capture process named strm01_capture, and you want this capture process to capture changes to the hr.departments table. If the capture process uses a negative rule set, then make sure there are no table rules that evaluate to TRUE for this table in the negative rule set. Such rules cause the capture process to discard changes to the table. "Displaying the Rules in the Negative Rule Set for a Streams Client" contains an example of a query that shows rules in a negative rule set.
If that query returns any such rules, then the rules returned might be causing the capture process to discard changes to the table. If that query returns no rules, then make sure there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows rules in a positive rule set.
You can also determine which rules have a particular pattern in their rule condition, as described in "Listing Each Rule that Contains a Specified Pattern in Its Condition". For example, you can find all of the rules with the string "departments" in their rule condition, and you can make sure these rules are in the correct rule sets.
A subset rule can be in the rule set used by a capture process, propagation, apply process, or messaging client. A subset rule evaluates to TRUE only if a DML operation contains a change to a particular subset of rows in the table. For example, to check for table rules that evaluate to TRUE for an apply process named strm01_apply when there are changes to the hr.departments table, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A20
COLUMN RULE_TYPE HEADING 'Rule Type' FORMAT A20
COLUMN DML_CONDITION HEADING 'Subset Condition' FORMAT A30

SELECT RULE_NAME, RULE_TYPE, DML_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'STRM01_APPLY'
    AND STREAMS_TYPE = 'APPLY'
    AND SCHEMA_NAME = 'HR'
    AND OBJECT_NAME = 'DEPARTMENTS';
Your output looks similar to the following:

Rule Name            Rule Type            Subset Condition
-------------------- -------------------- ------------------------------
DEPARTMENTS5         DML                  location_id=1700
DEPARTMENTS6         DML                  location_id=1700
DEPARTMENTS7         DML                  location_id=1700
Notice that this query returns any subset condition for the table in the DML_CONDITION column, which is labeled "Subset Condition" in the output. In this example, subset rules are specified for the hr.departments table. These subset rules evaluate to TRUE only if an LCR contains a change that involves a row where the location_id is 1700. So, if you expected the apply process to apply all changes to the table, then these subset rules cause the apply process to discard changes that involve rows where the location_id is not 1700.
Note: Subset rules must reside only in positive rule sets.
A message rule can be in the rule set used by a propagation, apply process, or messaging client. Message rules pertain only to user-enqueued messages of a specific message type, not to captured messages. A message rule evaluates to TRUE if a user-enqueued message in a queue is of the type specified in the message rule and satisfies the rule condition of the message rule.
If you expect a propagation, apply process, or messaging client to perform its task for some user-enqueued messages, but the Streams client is not performing its task for these messages, then the cause might be that the rules in the positive and negative rule sets for the Streams client do not instruct it to perform its task for these messages. Similarly, if you expect a propagation, apply process, or messaging client to discard some user-enqueued messages, but the Streams client is not discarding these messages, then the cause might be that the rules in the positive and negative rule sets for the Streams client do not instruct it to discard these messages.
For example, suppose you want a messaging client named oe to dequeue messages of type oe.user_msg that satisfy the following condition:
:"VAR$_2".OBJECT_OWNER = 'OE' AND :"VAR$_2".OBJECT_NAME = 'ORDERS'
If the messaging client uses a negative rule set, then make sure there are no message rules that evaluate to TRUE for this message type in the negative rule set. Such rules cause the messaging client to discard these messages. For example, to determine whether there are any such rules in the negative rule set for the messaging client, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A30
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A30

SELECT RULE_NAME, RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'OE'
    AND MESSAGE_TYPE_OWNER = 'OE'
    AND MESSAGE_TYPE_NAME = 'USER_MSG'
    AND RULE_SET_TYPE = 'NEGATIVE';
If this query returns any rules, then the rules returned might be causing the messaging client to discard messages. Examine the rule condition of the returned rules to determine whether these rules are causing the messaging client to discard the messages that it should be dequeuing. If this query returns no rules, then make sure there are message rules in the positive rule set for the messaging client that evaluate to TRUE for this message type and condition.
For example, to determine whether any message rules evaluate to TRUE for this message type in the positive rule set for the messaging client, run the following query:
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A35
COLUMN RULE_CONDITION HEADING 'Rule Condition' FORMAT A35

SELECT RULE_NAME, RULE_CONDITION
  FROM DBA_STREAMS_RULES
  WHERE STREAMS_NAME = 'OE'
    AND MESSAGE_TYPE_OWNER = 'OE'
    AND MESSAGE_TYPE_NAME = 'USER_MSG'
    AND RULE_SET_TYPE = 'POSITIVE';
If you have message rules that evaluate to TRUE for this message type in the positive rule set for the messaging client, then these rules are returned. In this case, your output looks similar to the following:
Rule Name                           Rule Condition
----------------------------------- -----------------------------------
RULE$_3                             :"VAR$_2".OBJECT_OWNER = 'OE' AND
                                    :"VAR$_2".OBJECT_NAME = 'ORDERS'
Examine the rule condition for the rules returned to determine whether they instruct the messaging client to dequeue the proper messages. Based on these results, the messaging client named oe should dequeue messages of the oe.user_msg type that satisfy the condition shown in the output. In other words, no rule in the negative rule set for the messaging client discards these messages, and a rule exists in the positive rule set for the messaging client that evaluates to TRUE when the messaging client finds a message in its queue of the oe.user_msg type that satisfies the rule condition.
If you determine that a Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules must be added to the rule set for the Streams client, then you can use one of the following procedures in the DBMS_STREAMS_ADM package to add appropriate rules:
ADD_GLOBAL_PROPAGATION_RULES
ADD_GLOBAL_RULES
ADD_SCHEMA_PROPAGATION_RULES
ADD_SCHEMA_RULES
ADD_SUBSET_PROPAGATION_RULES
ADD_SUBSET_RULES
ADD_TABLE_PROPAGATION_RULES
ADD_TABLE_RULES
ADD_MESSAGE_PROPAGATION_RULE
ADD_MESSAGE_RULE
You can use the DBMS_RULE_ADM package to add customized rules, if necessary.
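For example, a call like the following adds DML table rules for the hr.departments table to the positive rule set of a capture process. The capture process name and queue name shown are placeholders; substitute the names used in your environment:

BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name    => 'hr.departments',
    streams_type  => 'capture',
    streams_name  => 'strm01_capture',   -- placeholder capture process name
    queue_name    => 'strmadmin.streams_queue',  -- placeholder queue name
    include_dml   => TRUE,
    include_ddl   => FALSE);
END;
/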
It is also possible that the Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.
If you have the correct rules, and the relevant messages are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. The following information might be included in such warning messages:
gdbnm: Global name of the source database of the missing object

scn: SCN for the transaction that has been missed
If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:
PREPARE_TABLE_INSTANTIATION
PREPARE_SCHEMA_INSTANTIATION
PREPARE_GLOBAL_INSTANTIATION
Also, make sure propagation is working from the source database to the destination database. Streams data dictionary information is propagated to the destination database and loaded into the dictionary at the destination database.
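For example, assuming changes to the hr.departments table are being captured, a call like the following, run at the source database, prepares the table for instantiation and populates the Streams data dictionary for it:

BEGIN
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'hr.departments');
END;
/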
A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the declarative rule-based transformations specified for the rules used by the Streams client and correct any mistakes.
The most common problems with declarative rule-based transformations are:
The declarative rule-based transformation is specified for a table or involves columns in a table, but the schema either was not specified or was incorrectly specified when the transformation was created. If the schema is not correct in a declarative rule-based transformation, then the transformation will not be run on the appropriate LCRs. You should specify the owning schema for a table when you create a declarative rule-based transformation. If the schema is not specified when a declarative rule-based transformation is created, then the user who creates the transformation is specified for the schema by default.
If the schema is not correct for a declarative rule-based transformation, then, to correct the problem, remove the transformation and re-create it, specifying the correct schema for each table.
If more than one declarative rule-based transformation is specified for a particular rule, then make sure the ordering is correct for execution of these transformations. Incorrect ordering of declarative rule-based transformations can result in errors or inconsistent data.
If the ordering is not correct for the declarative rule-based transformation specified on a single rule, then, to correct the problem, remove the transformations and re-create them with the correct ordering.
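For example, suppose a rename-table declarative transformation was created without schema-qualified table names. A sketch of the fix, assuming the transformation is specified on a rule named strmadmin.departments5 (a hypothetical rule name), first removes the incorrect transformation and then re-creates it with the owning schema specified:

BEGIN
  -- Remove the transformation that was created without the schema
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'departments',
    to_table_name   => 'depts',
    operation       => 'REMOVE');
  -- Re-create it with schema-qualified table names
  DBMS_STREAMS_ADM.RENAME_TABLE(
    rule_name       => 'strmadmin.departments5',
    from_table_name => 'hr.departments',
    to_table_name   => 'hr.depts',
    operation       => 'ADD');
END;
/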
A custom rule-based transformation is any modification by a user-defined function to a message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior can result.
If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the custom rule-based transformation functions specified for the rules used by the Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to modify a transformation function or remove a custom rule-based transformation to correct the problem. Also, make sure the name of the function is spelled correctly when you specify the transformation for a rule.
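For example, the following query lists each rule that has a custom rule-based transformation and the function that performs the transformation:

COLUMN RULE_OWNER HEADING 'Rule Owner' FORMAT A15
COLUMN RULE_NAME HEADING 'Rule Name' FORMAT A15
COLUMN TRANSFORM_FUNCTION_NAME HEADING 'Transformation Function' FORMAT A30

SELECT RULE_OWNER, RULE_NAME, TRANSFORM_FUNCTION_NAME
  FROM DBA_STREAMS_TRANSFORM_FUNCTION;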
An error caused by a custom rule-based transformation might cause a capture process, propagation, apply process, or messaging client to abort. In this case, you might need to correct the transformation before the Streams client can be restarted or invoked.
Rule evaluation is done before a custom rule-based transformation. For example, if you have a transformation that changes the name of a table from emps to employees, then make sure each rule using the transformation specifies the table name emps, rather than employees, in its rule condition.
In some cases, incorrectly transformed LCRs might have been moved to the error queue by an apply process. When this occurs, you should examine the transaction in the error queue to analyze the feasibility of reexecuting the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a DML handler to correct the problem. The DML handler will run when you reexecute the error transaction. When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful reexecution, if the DML handler is no longer needed, then remove it. Also, correct the rule-based transformation to avoid future errors.
Messages about each capture process, propagation, and apply process are recorded in trace files for the database in which the process or propagation job is running. A local capture process runs on a source database, a downstream capture process runs on a downstream database, a propagation job runs on the database containing the source queue in the propagation, and an apply process runs on a destination database. These trace file messages can help you to identify and resolve problems in a Streams environment.
All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.
For example, on some operating systems, the trace file name for a process is sid_xxxxx_iiiii.trc, where:
sid is the system identifier for the database

xxxxx is the name of the process

iiiii is the operating system process number
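To display the destination directory for these trace files, you can query V$PARAMETER:

SELECT VALUE FROM V$PARAMETER WHERE NAME = 'background_dump_dest';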
Also, you can set the write_alert_log parameter to y for both a capture process and an apply process. When this parameter is set to y, which is the default setting, the alert log for the database contains messages about why the capture process or apply process stopped.
You can control the information in the trace files by setting the trace_level capture process or apply process parameter using the SET_PARAMETER procedure in the DBMS_CAPTURE_ADM and DBMS_APPLY_ADM packages.
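For example, a call like the following sets the trace_level parameter for a capture process. The capture process name and the trace level value shown are placeholders; typically, trace_level should be changed only under the direction of Oracle Support Services:

BEGIN
  DBMS_CAPTURE_ADM.SET_PARAMETER(
    capture_name => 'strm01_capture',  -- placeholder capture process name
    parameter    => 'trace_level',
    value        => '2');              -- placeholder trace level
END;
/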
Use the following checklist to check the trace files related to Streams:
Does a Capture Process Trace File Contain Messages About Capture Problems?
Do the Trace Files Related to Propagation Jobs Contain Messages About Problems?
Does an Apply Process Trace File Contain Messages About Apply Problems?
A capture process is an Oracle background process named cnnn, where nnn is the capture process number. For example, on some operating systems, if the system identifier for a database running a capture process is hqdb and the capture process number is 01, then the trace file for the capture process starts with hqdb_c001.
See Also: "Displaying Change Capture Information About Each Capture Process" for a query that displays the capture process number of a capture process
Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.
For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01. Similarly, on the same database, if a job queue process is 001, then the trace file for the job queue process starts with hqdb_j001. You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.
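For example, a query like the following shows the process name for each propagation schedule:

COLUMN QNAME HEADING 'Source Queue' FORMAT A20
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A10

SELECT QNAME, PROCESS_NAME
  FROM DBA_QUEUE_SCHEDULES;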
See Also: "Is the Propagation Enabled?" for a query that displays the job queue process used by a propagation job
An apply process is an Oracle background process named annn, where nnn is the apply process number. For example, on some operating systems, if the system identifier for a database running an apply process is hqdb and the apply process number is 001, then the trace file for the apply process starts with hqdb_a001.
An apply process also uses parallel execution servers. Information about an apply process might be recorded in the trace file for one or more parallel execution servers. The process name of a parallel execution server is pnnn, where nnn is the process number. So, on some operating systems, if the system identifier for a database running an apply process is hqdb and the process number is 001, then the trace file that contains information about a parallel execution server used by an apply process starts with hqdb_p001.