Oracle® Streams Concepts and Administration 10g Release 1 (10.1) Part Number B10727-01
This chapter briefly describes the basic concepts and terminology related to Oracle Streams. These concepts are described in more detail in other chapters in this book and in the Oracle Streams Replication Administrator's Guide.
Oracle Streams enables information sharing. Using Oracle Streams, you can share data and events in a stream. The stream can propagate information within a database or from one database to another. The stream routes specified information to specified destinations. The result is a feature that provides greater functionality and flexibility than traditional solutions for capturing and managing events, and sharing the events with other databases and applications. Streams provides the capabilities needed to build and operate distributed enterprises and applications, data warehouses, and high availability solutions. You can use all of the capabilities of Oracle Streams at the same time. If your needs change, then you can implement a new capability of Streams without sacrificing existing capabilities.
Using Oracle Streams, you control what information is put into a stream, how the stream flows or is routed from database to database, what happens to events in the stream as they flow into each database, and how the stream terminates. By configuring specific capabilities of Streams, you can address specific requirements. Based on your specifications, Streams can capture, stage, and manage events in the database automatically, including, but not limited to, data manipulation language (DML) changes and data definition language (DDL) changes. You also can put user-defined events into a stream, and Streams can propagate the information to other databases or applications automatically. When events reach a destination, Streams can consume them based on your specifications. Figure 1-1 shows the Streams information flow.
The following sections provide an overview of what Streams can do.
A capture process can capture changes made to tables, schemas, or an entire database. Such changes are recorded in the redo log for a database, and a capture process captures changes from the redo log and formats each captured change into an event called a logical change record (LCR). Rules determine which changes are captured by a capture process, and these captured changes are called captured events.
The database where changes are generated in the redo log is called the source database. A capture process may capture changes locally at the source database, or it may capture changes remotely at a downstream database. A capture process enqueues LCRs into a queue that is associated with it. When a capture process captures events, it is sometimes referred to as implicit capture.
Users and applications also can enqueue events into a queue manually. These events are called user-enqueued events, and they can be LCRs or messages of a user-defined type called user messages. When users and applications enqueue events into a queue manually, it is sometimes referred to as explicit capture.
Events are stored (or staged) in a queue. These events may be captured events or user-enqueued events. A capture process enqueues events into a SYS.AnyData queue. A SYS.AnyData queue can stage events of different types. Users and applications may enqueue events into a SYS.AnyData queue or into a typed queue. A typed queue can stage events of one specific type only.
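As a minimal sketch of explicit capture (the administrator schema strmadmin and the queue name streams_queue are illustrative, not from this chapter), a user message can be wrapped in a SYS.AnyData wrapper and enqueued with the DBMS_STREAMS_MESSAGING package:

-- Wrap a simple text message in SYS.AnyData and enqueue it explicitly (illustrative names).
DECLARE
  msg SYS.AnyData;
BEGIN
  msg := SYS.AnyData.ConvertVarchar2('New order received');
  DBMS_STREAMS_MESSAGING.ENQUEUE(
    queue_name => 'strmadmin.streams_queue',
    payload    => msg);
  COMMIT;
END;
/

A subscribing application, messaging client, or apply process can later consume this user-enqueued event.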
Streams propagations can propagate events from one queue to another. These queues may be in the same database or in different databases. Rules determine which events are propagated by a propagation.
An event is consumed when it is dequeued from a queue. An apply process can dequeue events from a queue implicitly. A user, application, or messaging client can dequeue events explicitly. The database where events are consumed is called the destination database. In some configurations, the source database and the destination database may be the same.
Rules determine which events are dequeued and processed by an apply process. An apply process may apply events directly to database objects, or an apply process may pass events to custom PL/SQL subprograms for processing.
Rules determine which events are dequeued by a messaging client. A messaging client dequeues events when it is invoked by an application or a user.
Other capabilities of Streams include directed networks, automated conflict detection and resolution, rule-based transformations, and heterogeneous information sharing between Oracle and non-Oracle databases. These capabilities are discussed briefly later in this chapter and in detail later in this document and in the Oracle Streams Replication Administrator's Guide.
The following sections briefly describe some of the reasons for using Streams. In some cases, Streams components provide infrastructure for various features of Oracle.
Oracle Streams Advanced Queuing (AQ) enables user applications to enqueue messages into a queue, propagate messages to subscribing queues, notify user applications that messages are ready for consumption, and dequeue messages at the destination. A queue may be configured to stage messages of a particular type only, or a queue may be configured as a SYS.AnyData queue. Messages of almost any type can be wrapped in a SYS.AnyData wrapper and staged in SYS.AnyData queues. AQ supports all the standard features of message queuing systems, including multiconsumer queues, publish and subscribe, content-based routing, Internet propagation, transformations, and gateways to other messaging subsystems.
You can create a queue at a database, and applications can enqueue messages into the queue explicitly. Subscribing applications or messaging clients can dequeue messages directly from this queue. If an application is remote, then a queue may be created in a remote database that subscribes to messages published in the source queue. The destination application can dequeue messages from the remote queue. Alternatively, the destination application can dequeue messages directly from the source queue using a variety of standard protocols.
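As a sketch of this setup (the queue table, queue, and user names are illustrative), the DBMS_STREAMS_ADM.SET_UP_QUEUE procedure creates a queue table and a SYS.AnyData queue, and starts the queue, in a single call:

-- Create and start a SYS.AnyData queue for use by Streams (illustrative names).
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue',
    queue_user  => 'hr');
END;
/

The queue_user parameter grants the specified user the privileges needed to enqueue into and dequeue from the new queue.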
See Also: Oracle Streams Advanced Queuing User's Guide and Reference for more information about AQ
Streams can capture DML and DDL changes made to database objects and replicate those changes to one or more other databases. A Streams capture process captures changes made to source database objects and formats them into LCRs, which can be propagated to destination databases and then applied by Streams apply processes.
The destination databases can allow DML and DDL changes to the same database objects, and these changes may or may not be propagated to the other databases in the environment. In other words, you can configure a Streams environment with one database that propagates changes, or you can configure an environment where changes are propagated between databases bidirectionally. Also, the tables for which data is shared do not need to be identical copies at all databases. Both the structure and the contents of these tables can differ at different databases, and the information in these tables can be shared between these databases.
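A minimal sketch of one piece of such a configuration, assuming a Streams administrator strmadmin and an existing queue named strmadmin.streams_queue (both illustrative names), uses DBMS_STREAMS_ADM.ADD_TABLE_RULES at the source database to create a capture process, if one does not exist, along with rules that capture DML and DDL changes to a single table:

-- Create capture rules (and the capture process, if needed) for one table.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'capture',
    streams_name => 'capture_hr',          -- illustrative capture process name
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => true);
END;
/

Corresponding propagation and apply rules would be added with similar procedures, as shown later in this chapter.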
See Also: Oracle Streams Replication Administrator's Guide for more information about using Streams for replication
Business events are valuable communications between applications or organizations. An application may enqueue events into a queue explicitly, or a Streams capture process may capture a database event. These captured events may be DML or DDL changes. Propagations may propagate events in a stream through multiple queues. Finally, a user application may dequeue events explicitly, or a Streams apply process may dequeue events implicitly. An apply process may re-enqueue these events explicitly into the same queue or a different queue if necessary.
You can configure queues to retain explicitly enqueued messages after consumption for a specified period of time. This capability enables you to use Advanced Queuing (AQ) as a business event management system. AQ stores all messages in the database in a transactional manner, where they can be automatically audited and tracked. You can use this audit trail to extract intelligence about the business operations.
Capture processes, propagations, apply processes, and messaging clients perform actions based on rules. You specify which events are captured, propagated, applied, and dequeued using rules, and a built-in rules engine evaluates events based on these rules. The ability to capture events and propagate them to relevant consumers based on rules means that you can use Streams for event notification. Events staged in a queue may be dequeued explicitly by a messaging client or an application, and then actions can be taken based on these events, which may include an email notification, or passing the message to a wireless gateway for transmission to a cell phone or pager.
Data warehouse loading is a special case of data replication. Some of the most critical tasks in creating and maintaining a data warehouse include refreshing existing data, and adding new data from the operational databases. Streams components can capture changes made to a production system and send those changes to a staging database or directly to a data warehouse or operational data store. Streams capture of redo log information avoids unnecessary overhead on the production systems. Support for data transformations and user-defined apply procedures enables the necessary flexibility to reformat data or update warehouse-specific data fields as data is loaded. In addition, Change Data Capture uses some of the components of Streams to identify data that has changed so that this data can be loaded into a data warehouse.
See Also: Oracle Data Warehousing Guide for more information about data warehouses
One solution for data protection is to create a local or remote copy of a production database. In the event of human error or a catastrophe, the copy can be used to resume processing. You can use Streams to configure flexible high availability environments. In addition, you can use Oracle Data Guard, a data protection feature that uses some of the same infrastructure as Streams, to create and maintain a logical standby database, which is a logically equivalent standby copy of a production database. As in the case of Streams replication, a capture process captures changes in the redo log and formats these changes into LCRs. These LCRs are applied at the standby databases.
The standby databases are fully open for read/write and may include specialized indexes or other database objects. Therefore, these standby databases can be queried as updates are applied, making Oracle Data Guard a good solution for offloading queries from a production database.
With a logical standby database, it is important to move the updates to the remote site as soon as possible. Doing so ensures that, in the event of a failure, few transactions are lost. By directly and synchronously writing the redo logs at the remote database, you can achieve no data loss in the event of a disaster. At the standby system, the changes are captured and directly applied to the standby database with an apply process.
Changes made to database objects in an Oracle database are logged in the redo log to guarantee recoverability in the event of user error or media failure. A capture process is an Oracle background process that scans the database redo log to capture DML and DDL changes made to database objects. A capture process formats these changes into events called LCRs and enqueues them into a queue. There are two types of LCRs: row LCRs contain information about a change to a row in a table resulting from a DML operation, and DDL LCRs contain information about a DDL change to a database object. Rules determine which changes are captured. Figure 1-2 shows a capture process capturing LCRs.
Note: The capture process does not capture some types of DML and DDL changes, and it does not capture changes made in certain Oracle-supplied schemas, such as SYS and SYSTEM.
You can configure change capture locally at a source database or remotely at a downstream database. If a capture process runs on a downstream database, then redo log files from the source database are copied to the downstream database, and the capture process captures changes in these redo log files.
See Also: Chapter 2, "Streams Capture Process" for more information about capture processes and for detailed information about which DML and DDL statements are captured by a capture process
Streams uses queues to stage events for propagation or consumption. Propagations send events from one queue to another, and these queues can be in the same database or in different databases. The queue from which the events are propagated is called the source queue, and the queue that receives the events is called the destination queue. There can be a one-to-many, many-to-one, or many-to-many relationship between source and destination queues.
Events that are staged in a queue can be consumed by an apply process, a messaging client, or by an application. Rules determine which events are propagated by a propagation. Figure 1-3 shows propagation from a source queue to a destination queue.
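As an illustrative sketch (the propagation name, queue names, and database link are hypothetical), DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES creates a propagation, if one does not exist, along with rules that propagate changes to a single table from a source queue to a destination queue:

-- Create propagation rules from a local source queue to a remote destination queue.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => 'hr.departments',
    streams_name           => 'prop_to_dest',   -- illustrative propagation name
    source_queue_name      => 'strmadmin.streams_queue',
    destination_queue_name => 'strmadmin.streams_queue@dest.example.com',
    include_dml            => true,
    include_ddl            => true);
END;
/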
Streams enables you to configure an environment in which changes are shared through directed networks. In a directed network, propagated events may pass through one or more intermediate databases before arriving at a destination database where they are consumed. The events may or may not be consumed at an intermediate database in addition to the destination database. Using Streams, you can choose which events are propagated to each destination database, and you can specify the route events will traverse on their way to a destination database.
Figure 1-4 shows an example directed networks environment. Notice that, in this example, the queue at the intermediate database in Chicago is both a source queue and a destination queue.
See Also: Chapter 3, "Streams Staging and Propagation" for more information about staging and propagation
User applications can enqueue events into a queue explicitly. The user applications can format these user-enqueued events as LCRs or user messages, and an apply process, a messaging client, or a user application can consume these events. Events that were enqueued explicitly into a queue can be propagated to another queue or explicitly dequeued from the same queue. Figure 1-5 shows explicit enqueue of events into and dequeue of events from the same queue.
When events are propagated between queues, events that were enqueued explicitly into a source queue can be dequeued explicitly from a destination queue by a messaging client or user application. These events also may be processed by an apply process. Figure 1-6 shows explicit enqueue of events into a source queue, propagation to a destination queue, and then explicit dequeue of events from the destination queue.
See Also: "SYS.AnyData Queues and User Messages" for more information about explicit enqueue and dequeue of events
An apply process is an Oracle background process that dequeues events from a queue and either applies each event directly to a database object or passes the event as a parameter to a user-defined procedure called an apply handler. Apply handlers include message handlers, DML handlers, DDL handlers, precommit handlers, and error handlers.
Typically, an apply process applies events to the local database where it is running, but, in a heterogeneous database environment, it can be configured to apply events at a remote non-Oracle database. Rules determine which events are dequeued by an apply process. Figure 1-7 shows an apply process processing LCRs and user messages.
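For example, the following sketch (the apply process and queue names are illustrative) uses DBMS_STREAMS_ADM.ADD_TABLE_RULES at the destination database to create an apply process, if one does not exist, with rules for a single table, and then starts the apply process:

-- Create apply rules (and the apply process, if needed) for one table, then start it.
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'hr.departments',
    streams_type => 'apply',
    streams_name => 'apply_hr',             -- illustrative apply process name
    queue_name   => 'strmadmin.streams_queue',
    include_dml  => true,
    include_ddl  => true);
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'apply_hr');
END;
/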
A messaging client consumes user-enqueued events when it is invoked by an application or a user. Rules determine which user-enqueued events are dequeued by a messaging client. These user-enqueued events may be LCRs or user messages. Figure 1-8 shows a messaging client dequeuing user-enqueued events.
An apply process detects conflicts automatically when directly applying LCRs in a replication environment. Typically, a conflict results when the same row in the source database and destination database is changed at approximately the same time.
When a conflict occurs, you need a mechanism to ensure that the conflict is resolved in accordance with your business rules. Streams offers a variety of prebuilt conflict handlers. Using these prebuilt handlers, you can define a conflict resolution system for each of your databases that resolves conflicts in accordance with your business rules. If you have a unique situation that Oracle's prebuilt conflict resolution handlers cannot resolve, then you can build your own conflict resolution handlers.
If a conflict is not resolved, or if a handler procedure raises an error, then all events in the transaction that raised the error are saved in the error queue for later analysis and possible reexecution.
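As a sketch of configuring one of the prebuilt handlers (the table, column list, and resolution column are illustrative), DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER can designate a method such as OVERWRITE, DISCARD, MAXIMUM, or MINIMUM for update conflicts on a table:

-- Resolve update conflicts on hr.departments by keeping the incoming (new) values.
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'department_name';
  cols(2) := 'manager_id';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.departments',
    method_name       => 'OVERWRITE',
    resolution_column => 'department_name',
    column_list       => cols);
END;
/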
Streams enables you to control which information to share and where to share it using rules. A rule is specified as a condition that is similar to the condition in the WHERE clause of a SQL query.
A rule consists of a rule condition and, optionally, a rule evaluation context and a rule action context. The rule condition evaluates to TRUE, FALSE, or NULL (unknown), based on an event. You can group related rules together into rule sets. In Streams, rule sets may be positive or negative.
For example, the following rule condition may be used for a rule in Streams to specify that the schema name that owns a table must be hr and that the table name must be departments for the condition to evaluate to TRUE:

:dml.get_object_owner() = 'HR' AND :dml.get_object_name() = 'DEPARTMENTS'

The :dml variable is used in rule conditions for row LCRs. In a Streams environment, a rule with this condition may be used in the following ways:
If the rule is in a positive rule set for a capture process, then it instructs the capture process to capture row changes that result from DML changes to the hr.departments table. If the rule is in a negative rule set for a capture process, then it instructs the capture process to discard DML changes to the hr.departments table.
If the rule is in a positive rule set for a propagation, then it instructs the propagation to propagate LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for a propagation, then it instructs the propagation to discard LCRs that contain row changes to the hr.departments table.
If the rule is in a positive rule set for an apply process, then it instructs the apply process to apply LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for an apply process, then it instructs the apply process to discard LCRs that contain row changes to the hr.departments table.
If the rule is in a positive rule set for a messaging client, then it instructs the messaging client to dequeue LCRs that contain row changes to the hr.departments table. If the rule is in a negative rule set for a messaging client, then it instructs the messaging client to discard LCRs that contain row changes to the hr.departments table.
Streams performs tasks based on rules. These tasks include capturing events with a capture process, propagating events with a propagation, applying events with an apply process, dequeuing events with a messaging client, and discarding events.
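In practice, rules like the one shown above are usually created automatically by procedures in the DBMS_STREAMS_ADM package. As an illustrative sketch (the rule name is hypothetical, and the evaluation context shown is the Oracle-supplied context that Streams uses for its rules), such a rule could also be created explicitly with the DBMS_RULE_ADM package:

-- Create a rule with the example condition explicitly (illustrative rule name).
BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name          => 'strmadmin.hr_departments_dml',
    condition          => ':dml.get_object_owner() = ''HR'' AND ' ||
                          ':dml.get_object_name() = ''DEPARTMENTS''',
    evaluation_context => 'SYS.STREAMS$_EVALUATION_CONTEXT');
END;
/

The rule could then be added to a Streams client's positive or negative rule set with DBMS_RULE_ADM.ADD_RULE.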
A rule-based transformation is any modification to an event that results when a rule in a positive rule set evaluates to TRUE. For example, a rule-based transformation can change the datatype of a particular column in a table for an event. In this case, the transformation can be a PL/SQL function that takes as input a SYS.AnyData object containing an LCR with a NUMBER datatype for a column and returns a SYS.AnyData object containing an LCR with a VARCHAR2 datatype for the same column.
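A minimal skeleton of such a transformation function (the function name, owner, and column handling are hypothetical) follows the pattern just described: it accepts a SYS.AnyData object and returns a SYS.AnyData object containing the modified LCR:

-- Skeleton of a rule-based transformation function (illustrative names).
CREATE OR REPLACE FUNCTION strmadmin.to_varchar2_trans(in_any IN SYS.AnyData)
  RETURN SYS.AnyData
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  -- Extract the row LCR from the SYS.AnyData wrapper.
  rc := in_any.GetObject(lcr);
  -- Modify the LCR here, for example by reading a NUMBER column value with
  -- lcr.GET_VALUE, converting it to VARCHAR2, and storing it with lcr.SET_VALUE.
  -- Wrap the modified LCR back into SYS.AnyData and return it.
  RETURN SYS.AnyData.ConvertObject(lcr);
END;
/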
A transformation can occur when an event is captured, when an event is propagated, or when an event is dequeued by an apply process or messaging client.
When a transformation is performed during apply, an apply process may apply the transformed event directly or send the transformed event to an apply handler for processing. Figure 1-9 shows a rule-based transformation during apply.
Every redo entry in the redo log has a tag associated with it. The datatype of the tag is RAW. By default, when a user or application generates redo entries, the value of the tag is NULL for each redo entry, and a NULL tag consumes no space in the redo entry. The size limit for a tag value is 2000 bytes.
In Streams, rules may have conditions relating to tag values to control the behavior of Streams clients. For example, a tag can be used to determine whether an LCR contains a change that originated in the local database or at a different database, so that you can avoid change cycling (sending an LCR back to the database where it originated). Also, a tag can be used to specify the set of destination databases for each LCR. Tags may be used for other LCR tracking purposes as well.
You can specify Streams tags for redo entries generated by a certain session or by an apply process. These tags then become part of the LCRs captured by a capture process. Typically, tags are used in Streams replication environments, but you can use them whenever it is necessary to track database changes and LCRs.
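For example, a session whose changes should be marked, perhaps so that a capture process rule can ignore them and avoid change cycling, could set a non-NULL tag before making changes. The tag value below is arbitrary and illustrative:

-- Set a non-NULL tag for redo entries generated by the current session.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));
END;
/
-- ... make the changes that should carry this tag ...
-- Reset the tag to NULL when finished.
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/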
See Also: Oracle Streams Replication Administrator's Guide for more information about Streams tags
In addition to information sharing between Oracle databases, Streams supports information sharing between Oracle databases and non-Oracle databases. The following sections contain an overview of this support.
See Also: Oracle Streams Replication Administrator's Guide for more information about heterogeneous information sharing with Streams
If an Oracle database is the source and a non-Oracle database is the destination, then the non-Oracle destination database lacks the following Streams mechanisms: a queue to receive events and an apply process to dequeue and apply events.
To share DML changes from an Oracle source database with a non-Oracle destination database, the Oracle database functions as a proxy and carries out some of the steps that would normally be done at the destination database. That is, the events intended for the non-Oracle destination database are dequeued in the Oracle database itself, and an apply process at the Oracle database uses Heterogeneous Services to apply the events to the non-Oracle database across a network connection through a gateway. Figure 1-10 shows an Oracle database sharing data with a non-Oracle database.
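As an illustrative sketch (the apply process name, queue name, and database link are hypothetical), such an apply process is created at the Oracle database with DBMS_APPLY_ADM.CREATE_APPLY, specifying a database link that points through the gateway to the non-Oracle database:

-- Create an apply process that applies captured changes to a non-Oracle database
-- through the database link mydb.example.com (illustrative names).
BEGIN
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name          => 'strmadmin.streams_queue',
    apply_name          => 'apply_to_remote',
    apply_database_link => 'mydb.example.com',
    apply_captured      => true);
END;
/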
See Also: Oracle Database Heterogeneous Connectivity Administrator's Guide for more information about Heterogeneous Services
To capture and propagate changes from a non-Oracle database to an Oracle database, a custom application is required. This application gets the changes made to the non-Oracle database by reading from transaction logs, using triggers, or some other method. The application must assemble and order the transactions and must convert each change into an LCR. Next, the application must enqueue the LCRs into a queue in an Oracle database by using the PL/SQL interface, where they can be processed by an apply process. Figure 1-11 shows a non-Oracle database sharing data with an Oracle database.
Figure 1-12 shows how Streams might be configured to share information within a single database, while Figure 1-13 shows how Streams might be configured to share information between two different databases.
Several tools are available for configuring, administering, and monitoring your Streams environment. Oracle-supplied PL/SQL packages are the primary configuration and management tool, while the Streams tool in the Oracle Enterprise Manager Console provides some configuration, administration, and monitoring capabilities to help you manage your environment. Additionally, Streams data dictionary views keep you informed about your Streams environment.
The following Oracle-supplied PL/SQL packages contain procedures and functions for configuring and managing a Streams environment.
See Also: PL/SQL Packages and Types Reference for more information about these packages
The DBMS_STREAMS_ADM package provides an administrative interface for adding and removing simple rules for capture processes, propagations, and apply processes at the table, schema, and database level. It also enables you to add rules that control which events a propagation propagates and which events a messaging client dequeues, and it contains procedures for creating queues, for managing Streams metadata such as data dictionary information, and for configuring and maintaining a Streams replication environment for specific tablespaces. This package is provided as an easy way to complete common tasks in a Streams environment. You can use other packages, such as the DBMS_CAPTURE_ADM, DBMS_PROPAGATION_ADM, DBMS_APPLY_ADM, DBMS_RULE_ADM, and DBMS_AQADM packages, to complete these same tasks, as well as tasks that require additional customization.
The DBMS_CAPTURE_ADM package provides an administrative interface for starting, stopping, and configuring a capture process. This package also provides administrative procedures that prepare database objects at the source database for instantiation at a destination database.
The DBMS_PROPAGATION_ADM package provides an administrative interface for configuring propagation from a source queue to a destination queue.
The DBMS_APPLY_ADM package provides an administrative interface for starting, stopping, and configuring an apply process. This package includes procedures that enable you to configure apply handlers, set enqueue destinations for events, and specify execution directives for events. This package also provides administrative procedures that set the instantiation SCN for objects at a destination database, as well as subprograms for configuring conflict detection and resolution and for managing apply errors.
The DBMS_STREAMS_MESSAGING package provides interfaces to enqueue messages into and dequeue messages from a SYS.AnyData queue.
The DBMS_RULE_ADM package provides an administrative interface for creating and managing rules, rule sets, and rule evaluation contexts. This package also contains subprograms for managing privileges related to rules.
The DBMS_RULE package contains the EVALUATE procedure, which evaluates a rule set. The goal of this procedure is to produce the list of satisfied rules, based on the data. This package also contains subprograms that enable you to use iterators during rule evaluation. Instead of returning all rules that evaluate to TRUE or MAYBE for an evaluation, iterators can return one rule at a time.
The DBMS_STREAMS package provides interfaces to convert SYS.AnyData objects into LCR objects, to return information about Streams attributes and Streams clients, and to annotate redo entries generated by a session with a tag. This tag may affect the behavior of a capture process, a propagation job, an apply process, or a messaging client whose rules include specifications for these tags in redo entries or LCRs.
The DBMS_STREAMS_AUTH package provides interfaces for granting privileges to Streams administrators and revoking privileges from Streams administrators.
The DBMS_STREAMS_TABLESPACE_ADM package provides administrative procedures for copying tablespaces between databases and moving tablespaces from one database to another. This package uses transportable tablespaces, Data Pump, and the DBMS_FILE_TRANSFER package.
Every database in a Streams environment has Streams data dictionary views. These views maintain administrative information about local rules, objects, capture processes, propagations, apply processes, and messaging clients. You can use these views to monitor your Streams environment.
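For example, the following queries, run as an administrative user, list the capture processes, apply processes, and propagations configured on the local database:

-- Capture processes and their status on the local database.
SELECT capture_name, queue_name, status FROM dba_capture;

-- Apply processes and their status on the local database.
SELECT apply_name, queue_name, status FROM dba_apply;

-- Propagations configured on the local database.
SELECT propagation_name, source_queue_name, destination_dblink FROM dba_propagation;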
To help configure, administer, and monitor Streams environments, Oracle provides a Streams tool in the Oracle Enterprise Manager Console. You also can use the Streams tool to generate Streams configuration scripts, which you can then modify and run to configure your Streams environment. The Streams tool online help is the primary documentation source for this tool. Figure 1-14 shows the Topology tab in the Streams tool.