Oracle® Database Utilities 10g Release 1 (10.1), Part Number B10825-01
This section describes new features of the Oracle Database 10g utilities and provides pointers to additional information. For information about features that were introduced in earlier releases of Oracle Database, refer to the documentation for those releases.
Oracle Database 10g introduces the new Oracle Data Pump technology, which enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's new data movement utilities, Data Pump Export and Data Pump Import.
See Chapter 1, "Overview of Oracle Data Pump" for more information.
Data Pump Export is a utility that makes use of Oracle Data Pump technology to unload data and metadata at high speeds into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
Although the functionality of Data Pump Export (invoked with the expdp command) is similar to that of the original Export utility (exp), they are completely separate utilities.
See Chapter 2, "Data Pump Export" for more information.
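The following is an illustrative invocation only; the credentials, directory object, and dump file name are placeholders rather than values defined in this guide. A schema-mode export of the scott schema might look like this:

   expdp scott/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SCHEMAS=scott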
Data Pump Import is a utility for loading a Data Pump Export dump file set into a target system.
Although the functionality of Data Pump Import (invoked with the impdp command) is similar to that of the original Import utility (imp), they are completely separate utilities.
See Chapter 3, "Data Pump Import" for more information.
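As an equally illustrative sketch (again, the credentials, directory object, and file name are placeholders), the dump file set produced by the export above might be loaded into a target database with:

   impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=scott.dmp SCHEMAS=scott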
The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
The Data Pump API is implemented through a PL/SQL package, DBMS_DATAPUMP, that provides programmatic access to Data Pump data and metadata movement capabilities.
See Chapter 5, "The Data Pump API" for more information.
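As a minimal sketch of how the package might be called (the directory object DPUMP_DIR1 and the dump file name are assumptions made for illustration), the following anonymous block defines and starts a schema-mode export job:

   DECLARE
     h NUMBER;
   BEGIN
     -- Define a schema-mode export job.
     h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
     -- Name the dump file and the directory object it is written to (placeholders).
     DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp', directory => 'DPUMP_DIR1');
     -- Restrict the job to the SCOTT schema.
     DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
     -- Start the job, then detach; the job continues running in the database.
     DBMS_DATAPUMP.START_JOB(h);
     DBMS_DATAPUMP.DETACH(h);
   END;
   /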
The following Metadata API features have been added or updated for Oracle Database 10g:
You can now use remap parameters, which enable you to modify an object by changing specific old attribute values to new values. For example, when you are importing data into a database, you can use the REMAP_SCHEMA parameter to change occurrences of schema name scott in a dump file set to schema name blake (an illustrative sketch appears after this list).
All dictionary objects needed for a full export are supported.
You can request that a heterogeneous collection of objects be returned in creation order.
In addition to retrieving metadata as XML and creation DDL, you can now submit the XML to re-create the object.
See Chapter 18, "Using the Metadata API" for full descriptions of these features.
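As a rough, hypothetical sketch of the remap capability (the schema names SCOTT and BLAKE and the table EMP are placeholders), the creation DDL for a table could be fetched with its schema remapped as follows:

   DECLARE
     h   NUMBER;
     th  NUMBER;
     ddl CLOB;
   BEGIN
     -- Open a retrieval context for table metadata and select one table (placeholder names).
     h := DBMS_METADATA.OPEN('TABLE');
     DBMS_METADATA.SET_FILTER(h, 'SCHEMA', 'SCOTT');
     DBMS_METADATA.SET_FILTER(h, 'NAME', 'EMP');
     -- The MODIFY transform exposes the remap parameters.
     th := DBMS_METADATA.ADD_TRANSFORM(h, 'MODIFY');
     DBMS_METADATA.SET_REMAP_PARAM(th, 'REMAP_SCHEMA', 'SCOTT', 'BLAKE');
     -- Convert the modified XML into creation DDL.
     th := DBMS_METADATA.ADD_TRANSFORM(h, 'DDL');
     -- ddl now holds the remapped CREATE TABLE statement.
     ddl := DBMS_METADATA.FETCH_CLOB(h);
     DBMS_METADATA.CLOSE(h);
   END;
   /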
A new access driver, ORACLE_DATAPUMP, is now available. See Chapter 15, "The ORACLE_DATAPUMP Access Driver" for more information.
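For illustration only (the directory object ext_dir and the table names are placeholders), the new driver can be used to unload data by creating an external table such as the following:

   CREATE TABLE emp_unload
     ORGANIZATION EXTERNAL
     (
       TYPE ORACLE_DATAPUMP
       DEFAULT DIRECTORY ext_dir
       LOCATION ('emp_unload.dmp')
     )
     AS SELECT * FROM emp;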
The LogMiner utility, previously documented in the Oracle9i Database Administrator's Guide, is now documented in this guide. The new and changed LogMiner features for Oracle Database 10g are as follows:
The new DBMS_LOGMNR.REMOVE_LOGFILE() procedure removes log files from the list of those being analyzed. This subprogram replaces the REMOVEFILE option to the DBMS_LOGMNR.ADD_LOGFILE() procedure.
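For instance (the redo log file names below are illustrative), a file added to the LogMiner list can later be removed from it:

   EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
   EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');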
The new NO_ROWID_IN_STMT option for the DBMS_LOGMNR.START_LOGMNR procedure lets you filter out the ROWID clause from reconstructed SQL_REDO and SQL_UNDO statements.
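For example (the choice of dictionary option shown here is illustrative), the option can be combined with others when the LogMiner session is started:

   EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.NO_ROWID_IN_STMT);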
Supplemental logging is enhanced as follows (illustrative statements for these options appear after this list):
At the database level, there are two new options for identification key logging:
FOREIGN KEY: Supplementally logs all other columns of a row's foreign key if any column in the foreign key is modified.
ALL: Supplementally logs all the columns in a row (except for LOBs, LONGs, and ADTs) if any column value is modified.
At the table level, there are these new features:
Identification key logging is now supported (PRIMARY KEY, FOREIGN KEY, UNIQUE INDEX, and ALL).
The NO LOG option provides a way to prevent a column in a user-defined log group from being supplementally logged.
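The statements below are illustrative sketches only; the table, column, and log group names are placeholders. They show how the new database-level and table-level options might be enabled:

   ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
   ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
   ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
   ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG GROUP emp_grp (employee_id, salary NO LOG) ALWAYS;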
See Chapter 19, "Using LogMiner to Analyze Redo Log Files" for more information.