11 Oracle Utilities

This chapter describes Oracle database utilities for data transfer, data maintenance, and database administration.

This chapter contains the following topics:

Introduction to Oracle Utilities
Overview of Data Pump Export and Import
Overview of the Data Pump API
Overview of the Metadata API
Overview of SQL*Loader
Overview of External Tables
Overview of LogMiner
Overview of DBVERIFY Utility
Overview of DBNEWID Utility

Introduction to Oracle Utilities

Oracle's database utilities let you perform the following tasks:

Move data and metadata from one database to another at high speed (Data Pump Export and Import, and the Data Pump API)
Extract and manipulate complete representations of the metadata for database objects (Metadata API)
Load data from external files into Oracle database tables (SQL*Loader and external tables)
Query redo log files through a SQL interface (LogMiner)
Perform physical data structure integrity checks on offline or online databases and datafiles (DBVERIFY)
Change the internal database identifier (DBID) or the database name (DBNAME) of an operational database (DBNEWID)

Overview of Data Pump Export and Import

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's data movement utilities, Data Pump Export and Data Pump Import.

Data Pump enables you to specify whether a job should move a subset of the data and metadata. This is done using data filters and metadata filters, which are implemented through Export and Import parameters.
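
As a hypothetical illustration, the following Export parameter file requests a schema-mode export that skips indexes and unloads only a subset of one table's rows; the schema, table, and directory object names are assumptions:

SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=hr_filtered.dmp
EXCLUDE=INDEX
QUERY=employees:"WHERE department_id > 50"

Such a file would be passed to the Export client with the PARFILE parameter, for example: expdp hr/hr_password PARFILE=hr_filtered.par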

Data Pump Export

Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.

The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which can be read only by Data Pump Import. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
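
A minimal export session might look like the following, assuming a directory object named dpump_dir1 exists and the hr schema is being unloaded:

expdp hr/hr_password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr.log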

Data Pump Import

Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format.

Import can also be used to load a target database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time. This is known as network import.
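
A network import is requested with the NETWORK_LINK parameter, which names a database link to the source database. In this sketch, source_db is an assumed database link and dpump_dir1 an assumed directory object:

impdp hr/hr_password SCHEMAS=hr NETWORK_LINK=source_db DIRECTORY=dpump_dir1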

Import also enables you to see all of the SQL DDL that the Import job will be executing, without actually executing the SQL. This is implemented through the Import SQLFILE parameter.
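
For example, the following hypothetical invocation writes the DDL that the import would execute to a script file without executing it:

impdp hr/hr_password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SQLFILE=hr_ddl.sql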

Overview of the Data Pump API

The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP PL/SQL package. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
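
The following PL/SQL sketch shows the general shape of a schema-mode export through DBMS_DATAPUMP; the directory object and schema name are assumptions:

DECLARE
  h1        NUMBER;          -- Data Pump job handle
  job_state VARCHAR2(30);    -- final job state returned by WAIT_FOR_JOB
BEGIN
  -- Create a schema-mode export job
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Name the dump file and the directory object it is written to
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'hr.dmp',
                         directory => 'DPUMP_DIR1');

  -- Restrict the job to the HR schema (a metadata filter)
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');

  -- Start the job and wait for it to complete
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
END;
/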


See Also:

Oracle Database Utilities for more information about the Data Pump API
Overview of the Metadata API

The Metadata application programming interface (API) provides a means for you to do the following:

Retrieve an object's metadata as XML
Transform the XML into SQL DDL
Submit the XML to re-create the object

To use the Metadata API, you use the procedures provided in the DBMS_METADATA PL/SQL package. For the purposes of the Metadata API, every entity in the database is modeled as an object that belongs to an object type. For example, the table scott.emp is an object and its object type is TABLE. When you fetch an object's metadata you must specify the object type.
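
For example, the DBMS_METADATA.GET_DDL function fetches an object's metadata as creation DDL. Using the table scott.emp mentioned above (with SET LONG so that SQL*Plus displays the full output):

SET LONG 20000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;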


See Also:

Oracle Database Utilities for more information about the Metadata API
Overview of SQL*Loader

SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts few limitations on the format of the data in the datafile. You can use SQL*Loader to do the following:

Load data from multiple datafiles into multiple tables during the same load session
Specify the character set of the data
Selectively load data and manipulate it with SQL functions before loading
Use either conventional or direct path loading

A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file.
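
A minimal sketch of such a session follows; the table, columns, and file names are assumptions. The control file (emp.ctl) describes how fields in the datafile map to columns of the emp table:

LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, job, sal)

The load is then run from the operating system command line:

sqlldr scott/scott_password CONTROL=emp.ctl LOG=emp.log BAD=emp.bad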

Overview of External Tables

The external tables feature is a complement to existing SQL*Loader functionality. It lets you access data in external sources as if it were in a table in the database. External tables can be written to using the ORACLE_DATAPUMP access driver, but only when the table is created; neither data manipulation language (DML) operations nor index creation are allowed on an external table. Therefore, SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table.

To use the external tables feature, you must have some knowledge of the file format and record format of the datafiles on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.
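
A minimal external table definition using the ORACLE_LOADER access driver might look like the following, assuming a directory object ext_dir that points to the location of the datafile:

CREATE TABLE emp_ext (
  empno NUMBER(4),
  ename VARCHAR2(10),
  sal   NUMBER(7,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.dat')
);

-- The external data can then be queried like an ordinary table:
SELECT ename, sal FROM emp_ext WHERE sal > 2000;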


See Also:

"External Tables"

Overview of LogMiner

Oracle LogMiner enables you to query redo log files through a SQL interface. All changes made to user data or to the database dictionary are recorded in the Oracle redo log files. Therefore, redo log files contain all the necessary information to perform recovery operations.

LogMiner functionality is available through a command-line interface or through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.

The following are some of the potential uses for data contained in redo log files:

Pinpointing when a logical corruption to the database began, so that you know the point to which you must recover
Determining what actions you would have to take to perform fine-grained recovery at the transaction level
Performance tuning and capacity planning through trend analysis
Performing postauditing to track the DML and DDL statements executed on the database
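
A command-line LogMiner session is driven by the DBMS_LOGMNR package and queried through the V$LOGMNR_CONTENTS view. In this sketch the redo log file path is an assumption:

BEGIN
  -- Register a redo log file to analyze (the path is hypothetical)
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/oracle/logs/log1.f',
                          options     => DBMS_LOGMNR.NEW);
  -- Start LogMiner, using the online catalog as the dictionary source
  DBMS_LOGMNR.START_LOGMNR(
      options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Examine the reconstructed statements, then end the session
SELECT username, sql_redo, sql_undo FROM V$LOGMNR_CONTENTS;
EXECUTE DBMS_LOGMNR.END_LOGMNR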

Overview of DBVERIFY Utility

DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or datafile) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems.

Because DBVERIFY can be run against an offline database, integrity checks are significantly faster than checks performed while the database is open.

DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.

There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk blocks of a single datafile for checking. With the second interface, you specify a segment for checking.
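
For example, the first interface is invoked with a datafile name (the path and block size here are assumptions):

dbv FILE=/u01/oradata/users01.dbf BLOCKSIZE=8192

With the second interface, a login is required and the segment is identified by its tablespace ID number, segment header file number, and segment header block number (shown here as placeholders):

dbv USERID=username/password SEGMENT_ID=tsn.segfile.segblock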

Overview of DBNEWID Utility

DBNEWID is a database utility that can change the internal, unique database identifier (DBID) and the database name (DBNAME) for an operational database. The DBNEWID utility lets you change any of the following:

Only the DBID of a database
Only the DBNAME of a database
Both the DBNAME and the DBID of a database

Therefore, you can manually create a copy of a database and give it a new DBNAME and DBID by re-creating the control file, and you can register a seed database and a manually copied database together in the same RMAN repository.
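
The utility is invoked as the nid command against a mounted database. A hypothetical invocation that changes both the DBNAME and the DBID:

nid TARGET=SYS/sys_password@test_db DBNAME=test_db2

Because changing the DBID invalidates all previous backups and archived logs, the database must subsequently be opened with the RESETLOGS option.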