Oracle® Database Utilities 10g Release 2 (10.2) Part Number B14215-01
This chapter describes the Oracle Data Pump Export utility.
Note: Data Pump Export (invoked with the expdp command) is a new utility as of Oracle Database 10g. Although its functionality and its parameters are similar to those of the original Export utility (exp), they are completely separate utilities and their files are not compatible. See Chapter 19, "Original Export and Import" for a description of the original Export utility.
Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and loaded there.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
Because the dump files are written by the server, rather than by the client, the database administrator (DBA) must create directory objects. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
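As a point of reference, creating a directory object and granting access to it typically looks like the following sketch (the directory name and operating system path are illustrative):

SQL> -- Create a directory object that points to an existing OS directory
SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
SQL> -- Allow the hr user to read and write files through it
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;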
Data Pump Export enables you to specify that a job should move a subset of the data and metadata, as determined by the export mode. This is done using data filters and metadata filters, which are specified through Export parameters. See Filtering During Export Operations.
To see some examples of the various ways in which you can use Data Pump Export, refer to Examples of Using Data Pump Export.
The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.
The following sections contain more information about invoking Export:
Note: It is not possible to start or restart Data Pump jobs on one instance in a Real Application Clusters (RAC) environment if there are Data Pump jobs currently running on other instances in the RAC environment.
You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.
Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see Parameters Available in Export's Command-Line Mode.
Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.
For a complete description of the commands available in interactive-command mode, see Commands Available in Export's Interactive-Command Mode.
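For example, pressing Ctrl+C during a logging-mode export suspends terminal logging and presents the Export prompt, from which commands such as STATUS can be issued (a sketch; the command and output are abbreviated and illustrative):

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
...
^C
Export> STATUS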
One of the most significant characteristics of an export operation is its mode, because the mode largely determines what is exported. Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:
A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the EXP_FULL_DATABASE role.
A schema export is specified using the SCHEMAS parameter. This is the default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.
Cross-schema references are not exported unless the referenced schema is also specified in the list of schemas to be exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported. This is also true for external type definitions upon which tables in the specified schemas depend. In such a case, it is expected that the type definitions already exist in the target instance at import time.
A table export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema. All specified tables must reside in a single schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. Also, as in schema exports, cross-schema references are not exported.
A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Nonprivileged users get only the tables in their own schemas.
A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces are unloaded. This allows the tablespace datafiles to then be copied to another Oracle database and incorporated using transportable tablespace import. This mode requires that you have the EXP_FULL_DATABASE role.
Unlike tablespace mode, transportable tablespace mode requires that the specified tables be completely self-contained. That is, the partitions of all tables in the set must also be in the set.
Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.
Note: You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.
You can specify a connect identifier in the connect string when you invoke the Data Pump Export utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID). The connect identifier can be an Oracle*Net connect descriptor or a name that maps to a connect descriptor. This requires an active listener (to start the listener, enter lsnrctl start) that can be located using the connect descriptor. The following example invokes Export for user hr, using the connect descriptor named inst1:
expdp hr/hr@inst1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
The local Export client connects to the database instance identified by the connect descriptor inst1 (a simple net service name, usually defined in a tnsnames.ora file), to export the data on that instance.
Do not confuse invoking the Export utility using a connect identifier with an export operation specifying the Export NETWORK_LINK command-line parameter, which initiates an export via a database link. In this case, the local Export client connects to the database instance identified by the command-line connect string, retrieves the data to be exported from the database instance identified by the database link, and writes the data to a dump file set on the connected database instance.
Filtering During Export Operations

Data Pump Export provides much greater data and metadata filtering capability than was provided by the original Export utility.
Data-specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be exported.
Data filtering can also occur indirectly as a result of metadata filtering, which can include or exclude table objects along with any associated row data.
Each data filter can be specified once per table within a job. If different filters using the same name are applied to both a particular table and to the whole job, the filter parameter supplied for the specific table will take precedence.
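For example, in a parameter file, a job-wide QUERY filter and a table-specific one might be combined as follows (the clauses are illustrative); every table in the job is filtered by the first clause except employees, which uses its own:

QUERY="WHERE rownum < 1001"
QUERY=employees:"WHERE department_id = 50"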
Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.
Metadata filters identify a set of objects to be included or excluded from an Export or Import operation. For example, you could request a full export, but without Package Specifications or Package Bodies.
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.
If multiple filters are specified for an object type, an implicit AND operation is applied to them. That is, objects pertaining to the job must pass all of the filters applied to their object types.
The same metadata filter name can be specified multiple times within a job.
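For example, the following two filters, taken together, export only tables whose names start with EMP, except the EMPLOYEES table itself (a hypothetical combination shown only to illustrate the implicit AND):

INCLUDE=TABLE:"LIKE 'EMP%'"
INCLUDE=TABLE:"!= 'EMPLOYEES'"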
To see which objects can be filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS. For example, you could perform the following query:

SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS
  2  WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';
The output of this query looks similar to the following:
OBJECT_PATH
--------------------------------------------------------------------------------
COMMENTS
--------------------------------------------------------------------------------
GRANT
Object grants on the selected tables

OBJECT_GRANT
Object grants on the selected tables

PROCDEPOBJ_GRANT
Grants on instance procedural objects

PROCOBJ_GRANT
Schema procedural object grants in the selected schemas

ROLE_GRANT
Role grants to users associated with the selected schemas

SYSTEM_GRANT
System privileges granted to users associated with the selected schemas
Parameters Available in Export's Command-Line Mode

This section provides descriptions of the parameters available in the command-line mode of Data Pump Export. Many of the descriptions include an example of how to use the parameter.
Using the Export Parameter Examples
If you try running the examples that are provided for each parameter, be aware of the following requirements:
Most of the examples use the sample schemas of the seed database, which is installed by default when you install Oracle Database. In particular, the human resources (hr) schema is often used.
The examples assume that the directory objects, dpump_dir1 and dpump_dir2, already exist and that READ and WRITE privileges have been granted to the hr schema for these directory objects. See Default Locations for Dump, Log, and SQL Files for information about creating directory objects and assigning privileges to them.
Some of the examples require the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The examples assume that the hr schema has been granted these roles.
If necessary, ask your DBA for help in creating these directory objects and assigning the necessary privileges and roles.
Syntax diagrams of these parameters are provided in Syntax Diagrams for Data Pump Export.
Unless specifically noted, these parameters can also be specified in a parameter file.
Use of Quotation Marks On the Data Pump Command Line
Some operating systems require that quotation marks on the command line be preceded by an escape character, such as the backslash. If the backslashes were not present, the command-line parser that Export uses would not understand the quotation marks and would remove them, resulting in an error. In general, Oracle recommends that you place such statements in a parameter file because escape characters are not necessary in parameter files.
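For example, on a system where the backslash is the escape character, a filter containing double quotation marks would have to be entered on the command line as shown below, whereas in a parameter file the same filter needs no escapes (the filter itself is illustrative):

> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp EXCLUDE=INDEX:\"LIKE 'EMP%'\"

EXCLUDE=INDEX:"LIKE 'EMP%'"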
Note: If you are accustomed to using the original Export utility (exp), you may be wondering which Data Pump parameters are used to perform the operations you used to perform with original Export. For a comparison, see How Data Pump Export Parameters Map to Those of the Original Export Utility.
ATTACH

Default: job currently in the user's schema, if there is only one
Purpose
Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt.
Syntax and Description
ATTACH [=[schema_name.]job_name]
The schema_name is optional. To specify a schema other than your own, you must have the EXP_FULL_DATABASE role.

The job_name is optional if only one export job is associated with your schema and the job is active. To attach to a stopped job, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.
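A quick way to list your own jobs and their states is a query along these lines:

SQL> SELECT JOB_NAME, STATE FROM USER_DATAPUMP_JOBS;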
When you are attached to the job, Export displays a description of the job and then displays the Export prompt.
Restrictions
When you specify the ATTACH parameter, you cannot specify any other parameters except for the connection string (user/password).

You cannot attach to a job in another schema unless it is already running.

If the dump file set or master table for the job has been deleted, the attach operation will fail.
Altering the master table in any way will lead to unpredictable results.
Example
The following is an example of using the ATTACH parameter. It assumes that the job, hr.export_job, already exists.
> expdp hr/hr ATTACH=hr.export_job
COMPRESSION

Default: METADATA_ONLY
Purpose
Specifies whether to compress metadata before writing to the dump file set.
Syntax and Description
COMPRESSION=(METADATA_ONLY | NONE)
METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.

NONE disables compression for the entire unload.
Example
The following is an example of using the COMPRESSION parameter:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
This command will execute a schema-mode export that will compress all metadata before writing it out to the dump file, hr_comp.dmp. It defaults to a schema-mode export because no export mode is specified.
CONTENT

Default: ALL
Purpose
Enables you to filter what Export unloads: data only, metadata only, or both.
Syntax and Description
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
ALL unloads both data and metadata. This is the default.

DATA_ONLY unloads only table row data; no database object definitions are unloaded.

METADATA_ONLY unloads only database object definitions; no table row data is unloaded.
Restrictions
The CONTENT=METADATA_ONLY parameter cannot be used in conjunction with the parameter TRANSPORT_TABLESPACES (transportable-tablespace-mode).
Example
The following is an example of using the CONTENT parameter:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY
This command will execute a schema-mode export that will unload only the metadata associated with the hr schema. It defaults to a schema-mode export of the hr schema because no export mode is specified.
DIRECTORY

Default: DATA_PUMP_DIR
Purpose
Specifies the default location to which Export can write the dump file set and the log file.
Syntax and Description
DIRECTORY=directory_object
The directory_object is the name of a database directory object (not the name of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to DATA_PUMP_DIR need not use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE or LOGFILE parameter overrides any directory object that you specify for the DIRECTORY parameter.
Example 1
The following is an example of using the DIRECTORY parameter:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY
The dump file, employees.dmp, will be written to the path that is associated with the directory object dpump_dir1.
DUMPFILE

Default: expdat.dmp
Purpose
Specifies the names, and optionally, the directory objects of dump files for an export job.
Syntax and Description
DUMPFILE=[directory_object:]file_name [, ...]
The directory_object is optional if one has already been established by the DIRECTORY parameter. If you supply a value here, it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter or by the default directory object.

You can supply multiple file_name specifications as a comma-delimited list or in separate DUMPFILE parameter specifications. If no extension is given for the filename, then Export uses the default file extension of .dmp. The filenames can contain a substitution variable (%U), which implies that multiple files may be generated. The substitution variable is expanded in the resulting filenames into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at 99. If a file specification contains two substitution variables, both are incremented at the same time. For example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.
If the FILESIZE parameter is specified, each dump file will have a maximum of that size in bytes and be nonextensible. If more space is required for the dump file set and a template with a substitution variable (%U) was supplied, a new dump file is automatically created of the size specified by FILESIZE, if there is room on the device.
As each file specification or file template containing a substitution variable is defined, it is instantiated into one fully qualified filename and Export attempts to create it. The file specifications are processed in the order in which they are specified. If the job needs extra files because the maximum file size is reached, or to keep parallel workers active, then additional files are created if file templates with substitution variables were specified.
Although it is possible to specify multiple files using the DUMPFILE parameter, the export job may only require a subset of those files to hold the exported data. The dump file set displayed at the end of the export job shows exactly which files were used. It is this list of files that is required in order to perform an import operation using this dump file set.
Restrictions
If there are preexisting files that match the resulting filenames, an error is generated. The existing dump files will not be overwritten.
Example
The following is an example of using the DUMPFILE parameter:
> expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp PARALLEL=3
The dump file, exp1.dmp, will be written to the path associated with the directory object dpump_dir2 because dpump_dir2 was specified as part of the dump file name, and therefore overrides the directory object specified with the DIRECTORY parameter. Because all three parallel processes will be given work to perform during this job, the exp201.dmp and exp202.dmp dump files will be created and they will be written to the path associated with the directory object, dpump_dir1, that was specified with the DIRECTORY parameter.
ENCRYPTION_PASSWORD

Default: none
Purpose
Specifies a key for encrypting encrypted column data in the export dump file.
Syntax and Description
ENCRYPTION_PASSWORD = password
The password value that is supplied specifies a key for re-encrypting encrypted table columns so that they are not written as clear text in the dump file set. If the export operation involves encrypted table columns, but an encryption password is not supplied, then the encrypted columns will be written to the dump file set as clear text and a warning will be issued.

To use the ENCRYPTION_PASSWORD parameter, you must have Transparent Data Encryption set up. See Oracle Database Advanced Security Administrator's Guide for more information about Transparent Data Encryption.
Note: There is no connection or dependency between the key specified with the Data Pump ENCRYPTION_PASSWORD parameter and the key specified with the ENCRYPT keyword when the table with encrypted columns was initially created. For example, suppose a table is created as follows, with an encrypted column whose key is xyz:

CREATE TABLE emp (col1 VARCHAR2(256) ENCRYPT IDENTIFIED BY "xyz");

When you export the emp table, the value you supply for ENCRYPTION_PASSWORD does not have to be xyz.
Restrictions
The ENCRYPTION_PASSWORD parameter applies only to columns that already have encrypted data. Data Pump neither provides nor supports encryption of entire dump files.

To use the ENCRYPTION_PASSWORD parameter, you must have Transparent Data Encryption set up. See Oracle Database Advanced Security Administrator's Guide for more information about Transparent Data Encryption.

The ENCRYPTION_PASSWORD parameter is used only to export encrypted columns used by Transparent Data Encryption functionality.

For network exports, the ENCRYPTION_PASSWORD parameter is not supported with user-defined external tables that have encrypted columns. The table will be skipped and an error message will be displayed, but the job will continue.
Encryption attributes for all columns must match between the exported table definition and the target table. For example, suppose you have a table, EMP, and one of its columns is named EMPNO. Both of the following situations would result in an error because the encryption attribute for the EMPNO column in the source table would not match the encryption attribute for the EMPNO column in the target table:
The EMP table is exported with the EMPNO column being encrypted, but prior to importing the table you remove the encryption attribute from the EMPNO column.

The EMP table is exported without the EMPNO column being encrypted, but prior to importing the table you enable encryption on the EMPNO column.
Example
In the following example, an encryption password, 123456, is assigned to the dump file, dpcd2be1.dmp.
expdp hr/hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir DUMPFILE=dpcd2be1.dmp ENCRYPTION_PASSWORD=123456
Encrypted columns in the employee_s_encrypt table will not be written as clear text in the dpcd2be1.dmp dump file. Note that in order to subsequently import the dpcd2be1.dmp file created by this example, you will need to supply the same encryption password. (See "ENCRYPTION_PASSWORD" for an example of an import operation using the ENCRYPTION_PASSWORD parameter.)
ESTIMATE

Default: BLOCKS
Purpose
Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
Syntax and Description
ESTIMATE={BLOCKS | STATISTICS}
BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the source objects, times the appropriate block sizes.

STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
Restrictions
If the Data Pump export job involves compressed tables, the default size estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.
Example
The following example shows a use of the ESTIMATE parameter in which the estimate is calculated using statistics for the employees table:
> expdp hr/hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=estimate_stat.dmp
ESTIMATE_ONLY

Default: n
Purpose
Instructs Export to estimate the space that a job would consume, without actually performing the export operation.
Syntax and Description
ESTIMATE_ONLY={y | n}
If ESTIMATE_ONLY=y, then Export estimates the space that would be consumed, but quits without actually performing the export operation.
Example
The following shows an example of using the ESTIMATE_ONLY parameter to determine how much space an export of the HR schema will take.
> expdp hr/hr ESTIMATE_ONLY=y NOLOGFILE=y
EXCLUDE

Default: none
Purpose
Enables you to filter the metadata that is exported by specifying objects and object types that you want excluded from the export operation.
Syntax and Description
EXCLUDE=object_type[:name_clause] [, ...]
All object types for the given mode of export will be included except those specified in an EXCLUDE statement. If an object is excluded, all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.

The name_clause is optional. It allows selection of specific objects within an object type. It is a SQL expression used as a filter on the type's object names. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'EMP%'" to exclude all indexes whose names start with emp.
If no name_clause is provided, all objects of the specified type are excluded.

More than one EXCLUDE statement can be specified.

Oracle recommends that you place EXCLUDE clauses in a parameter file to avoid having to use escape characters on the command line.
If the object_type you specify is CONSTRAINT, GRANT, or USER, you should be aware of the effects this will have, as described in the following paragraphs.
Excluding Constraints
The following constraints cannot be excluded:

NOT NULL constraints

Constraints needed for the table to be created and loaded successfully; for example, primary key constraints for index-organized tables, or REF SCOPE and WITH ROWID constraints for tables with REF columns
This means that the following EXCLUDE statements will be interpreted as follows:

EXCLUDE=CONSTRAINT will exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.

EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.
Excluding Grants and Users
Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
To exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):

EXCLUDE=SCHEMA:"='HR'"

If you try to exclude a user by using a statement such as EXCLUDE=USER:"='HR'", then only the information used in CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.
Restrictions
The EXCLUDE and INCLUDE parameters are mutually exclusive.
Example
The following is an example of using the EXCLUDE statement.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW, PACKAGE, FUNCTION
This will result in a schema-mode export in which all of the hr schema will be exported except its views, packages, and functions.
FILESIZE

Default: 0 (unlimited)
Purpose
Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable.
Syntax and Description
FILESIZE=integer[B | K | M | G]
The integer can be followed by B, K, M, or G (indicating bytes, kilobytes, megabytes, and gigabytes respectively). Bytes is the default. The actual size of the resulting file may be rounded down slightly to match the size of the internal blocks used in dump files.
Restrictions
The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.
Example
The following shows an example in which the size of the dump file is set to 3 megabytes:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3M
If three megabytes had not been sufficient to hold all the exported data, then the following error would have been displayed and the job would have stopped:
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
The actual number of bytes that could not be allocated may vary. Also, this number does not represent the amount of space needed to complete the entire export operation. It indicates only the size of the current object that was being exported when the job ran out of dump file space. This situation can be corrected by first attaching to the stopped job, adding one or more files using the ADD_FILE command, and then restarting the operation.
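A recovery session for such a stopped job might look like the following sketch (the job name and file name are illustrative):

> expdp hr/hr ATTACH=hr.sys_export_schema_01
Export> ADD_FILE=hr_3m_extra.dmp
Export> START_JOB
Export> CONTINUE_CLIENT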
FLASHBACK_SCN

Default: none
Purpose
Specifies the system change number (SCN) that Export will use to enable the Flashback Query utility.
Syntax and Description
FLASHBACK_SCN=scn_value
The export operation is performed with data that is consistent as of the specified SCN. If the NETWORK_LINK parameter is specified, the SCN refers to the SCN of the source database.
Restrictions
FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.

The FLASHBACK_SCN parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.
Example
The following example assumes that the SCN value 384632 exists. It exports the hr schema up to SCN 384632.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
Note: If you are on a logical standby system and using a network link to access the logical standby primary, the FLASHBACK_SCN parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
FLASHBACK_TIME

Default: none
Purpose
The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.
Syntax and Description
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
Because the TO_TIMESTAMP value is enclosed in quotation marks, it would be best to put this parameter in a parameter file. Otherwise, you might need to use escape characters on the command line in front of the quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
Restrictions
FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.

The FLASHBACK_TIME parameter pertains only to the flashback query capability of Oracle Database 10g release 1. It is not applicable to Flashback Database, Flashback Drop, or any other flashback capabilities new as of Oracle Database 10g release 1.
Example
You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback.par, with the following contents:

DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
You could then issue the following command:
> expdp hr/hr PARFILE=flashback.par
The export operation will be performed with data that is consistent with the SCN that most closely matches the specified time.
Note: If you are on a logical standby system and using a network link to access the logical standby primary, the FLASHBACK_TIME parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
See Also: Oracle Database Application Developer's Guide - Fundamentals for information about using Flashback Query
FULL

Default: n
Purpose
Specifies that you want to perform a full database mode export.
Syntax and Description
FULL={y | n}
FULL=y indicates that all data and metadata are to be exported. Filtering can restrict what is exported using this export mode. See Filtering During Export Operations.

To perform a full export, you must have the EXP_FULL_DATABASE role.
Restrictions
The following system schemas are not exported as part of a full export because the metadata they contain is exported as part of other objects in the dump file set: SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS.

Grants on objects owned by the SYS schema are never exported.
Example
The following is an example of using the FULL parameter. The dump file, expfull.dmp, is written to the dpump_dir2 directory.
> expdp hr/hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=y NOLOGFILE=y
HELP

Default: n
Purpose
Displays online help for the Export utility.
Syntax and Description
HELP = {y | n}
If HELP=y is specified, Export displays a summary of all Export command-line parameters and interactive commands.
Example
> expdp HELP = y
This example will display a brief description of all Export parameters and commands.
INCLUDE

Default: none
Purpose
Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
Syntax and Description
INCLUDE = object_type[:name_clause] [, ...]
Only object types explicitly specified in INCLUDE statements, and their dependent objects, are exported. No other object types, including the schema definition information that is normally part of a schema-mode export when you have the EXP_FULL_DATABASE role, are exported.

To see a list of valid object type path names for use with the INCLUDE parameter, you can query the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.
The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.

Oracle recommends that INCLUDE statements be placed in a parameter file; otherwise you might have to use operating system-specific escape characters on the command line before quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
For example, suppose you have a parameter file named hr.par with the following content:

SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"
You could then use the hr.par file to start an export operation, without having to enter any other parameters on the command line:
> expdp hr/hr parfile=hr.par
Restrictions
The INCLUDE and EXCLUDE parameters are mutually exclusive.

Grants on objects owned by the SYS schema are never exported.
Example
The following example performs an export of all tables (and their dependent objects) in the hr schema:
> expdp hr/hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=y
JOB_NAME

Default: system-generated name of the form SYS_EXPORT_<mode>_NN
Purpose
Used to identify the export job in subsequent actions, such as when the ATTACH parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS views. The job name becomes the name of the master table in the current user's schema. The master table is used to control the export job.
Syntax and Description
JOB_NAME=jobname_string
The jobname_string specifies a name of up to 30 bytes for this export job. The bytes must represent printable characters and spaces. If spaces are included, the name must be enclosed in single quotation marks (for example, 'Thursday Export'). The job name is implicitly qualified by the schema of the user performing the export operation.

The default job name is system-generated in the form SYS_EXPORT_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'.
Example
The following example shows an export operation that is assigned a job name of exp_job:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job NOLOGFILE=y
LOGFILE

Default: export.log
Purpose
Specifies the name, and optionally, a directory, for the log file of the export job.
Syntax and Description
LOGFILE=[directory_object:]file_name
You can specify a database directory_object previously established by the DBA, assuming that you have access to it. This overrides the directory object specified with the DIRECTORY parameter.

The file_name specifies a name for the log file. The default behavior is to create a file named export.log in the directory referenced by the directory object specified in the DIRECTORY parameter.
All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)

A log file is always created for an export job unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.
An existing file matching the filename will be overwritten.
Example
The following example shows how to specify a log file name if you do not want to use the default:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log
Note: Data Pump Export writes the log file using the database character set. If your client NLS_LANG environment setting sets up a different client character set from the database character set, then it is possible that table names may be different in the log file than they are when displayed on the client output screen.
NETWORK_LINK

Default: none
Purpose
Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
Syntax and Description
NETWORK_LINK=source_database_link
The NETWORK_LINK parameter initiates an export using a database link. This means that the system to which the expdp client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dump file set back on the connected system.

The source_database_link provided must be the name of a database link to an available database. If the database on that instance does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
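A minimal fixed-user link, for example, might be created with a statement along these lines (the link name, credentials, and net service name are illustrative):

SQL> CREATE DATABASE LINK source_database_link
  2  CONNECT TO hr IDENTIFIED BY hr USING 'inst1';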
If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
Restrictions
When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be exported (not partitions of tables).
The only types of database links supported by Data Pump Export are: public, fixed-user, and connected-user. Current-user database links are not supported.
Example
The following is an example of using the NETWORK_LINK parameter. The source_database_link would be replaced with the name of a valid database link that must already exist.

> expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log
NOLOGFILE

Default: n
Purpose
Specifies whether to suppress creation of a log file.
Syntax and Description
NOLOGFILE={y | n}
Specify NOLOGFILE=y to suppress the default behavior of creating a log file. Progress and error information is still written to the standard output device of any attached clients, including the client that started the original export operation. If there are no clients attached to a running job and you specify NOLOGFILE=y, you run the risk of losing important progress and error information.
Example
The following is an example of using the NOLOGFILE parameter:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=y
This command results in a schema-mode export in which no log file is written.
PARALLEL

Default: 1
Purpose
Specifies the maximum number of threads of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.
This parameter enables you to make trade-offs between resource consumption and elapsed time.
Syntax and Description
PARALLEL=integer
The value you specify for integer should be less than, or equal to, the number of files in the dump file set (or you should specify substitution variables in the dump file specifications). Because each active worker process or I/O server process writes exclusively to one file at a time, an insufficient number of files can have adverse effects. Some of the worker processes will be idle while waiting for files, thereby degrading the overall performance of the job. More importantly, if any member of a cooperating group of parallel I/O server processes cannot obtain a file for output, then the export operation will be stopped with an ORA-39095 error. Both situations can be corrected by attaching to the job using the Data Pump Export utility, adding more files using the ADD_FILE command while in interactive mode, and in the case of a stopped job, restarting the job.
To increase or decrease the value of PARALLEL during job execution, use interactive-command mode. Decreasing parallelism does not result in fewer worker processes associated with the job; it merely decreases the number of worker processes that will be executing at any given time. Also, any ongoing work must reach an orderly completion point before the decrease takes effect. Therefore, it may take a while to see any effect from decreasing the value. Idle workers are not deleted until the job exits.
Increasing the parallelism takes effect immediately if there is work that can be performed in parallel.
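For instance, after attaching to the job you might adjust the degree of parallelism from the Export prompt as follows (the value is illustrative):

Export> PARALLEL=8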
Restrictions
This parameter is valid only in the Enterprise Edition of Oracle Database 10g.
Example
The following is an example of using the PARALLEL parameter:
> expdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
This results in a schema-mode export of the hr schema in which up to four files could be created in the path pointed to by the directory object, dpump_dir1.
PARFILE

Default: none
Purpose
Specifies the name of an export parameter file.
Syntax and Description
PARFILE=[directory_path]file_name
Unlike dump and log files, which are created and written by the Oracle database, the parameter file is opened and read by the client running the expdp image. Therefore, a directory object name is neither required nor appropriate. The directory path is an operating system-specific directory specification. The default is the user's current directory.
The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
Restrictions
The PARFILE parameter cannot be specified within a parameter file.
Example
The content of an example parameter file, hr.par, might be as follows:

SCHEMAS=HR
DUMPFILE=exp.dmp
DIRECTORY=dpump_dir1
LOGFILE=exp.log
You could then issue the following Export command to specify the parameter file:
> expdp hr/hr parfile=hr.par
QUERY

Default: none
Purpose
Enables you to filter the data that is exported by specifying a clause for a SQL SELECT statement, which is applied to all tables in the export job or to a specific table.
Syntax and Description
QUERY = [schema.][table_name:] query_clause
The query_clause is typically a WHERE clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER BY clause could be used to speed up a migration from a heap-organized table to an index-organized table. If a [schema.]table_name is not supplied, the query is applied to (and must be valid for) all tables in the export job. A table-specific query overrides a query applied to all tables.

When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one can be specified per table. Oracle highly recommends that you place QUERY specifications in a parameter file; otherwise, you might have to use operating system-specific escape characters on the command line before each quotation mark. See Use of Quotation Marks On the Data Pump Command Line.
The query must be enclosed in single or double quotation marks.
To specify a schema other than your own in a table-specific query, you need the EXP_FULL_DATABASE role.
Restrictions
The QUERY parameter cannot be used in conjunction with the following parameters:

CONTENT=METADATA_ONLY

ESTIMATE_ONLY

TRANSPORT_TABLESPACES
Example
The following is an example of using the QUERY parameter:
> expdp hr/hr parfile=emp_query.par
The contents of the emp_query.par file are as follows:

QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"'
NOLOGFILE=y
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp
This example unloads all tables in the hr schema, but only the rows that fit the query expression. In this case, all rows in all tables (except employees) in the hr schema will be unloaded. For the employees table, only rows that meet the query criteria are unloaded.
SAMPLE

Default: none
Purpose
Allows you to specify a percentage of data to be sampled and unloaded from the source database.
Syntax and Description
SAMPLE=[[schema_name.]table_name:]sample_percent
This parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample_percent indicates the probability that a block of rows will be selected as part of the sample. It does not mean that the database will retrieve exactly that amount of rows from the table. The value you supply for sample_percent can be anywhere from .000001 up to, but not including, 100.

The sample_percent can be applied to specific tables. In the following example, 50% of the HR.EMPLOYEES table will be exported:

SAMPLE="HR"."EMPLOYEES":50
If you specify a schema, you must also specify a table. However, you can specify a table without specifying a schema; the current user will be assumed. If no table is specified, then the sample_percent value applies to the entire export job.
Note that you can use this parameter in conjunction with the Data Pump Import PCTSPACE transform, so that the size of storage allocations matches the sampled data subset. (See TRANSFORM.)
Restrictions
The SAMPLE parameter is not valid for network exports.
Example
In the following example, the value 70 for SAMPLE is applied to the entire export job because no table name is specified.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS

Default: current user's schema
Purpose
Specifies that you want to perform a schema-mode export. This is the default mode for Export.
Syntax and Description
SCHEMAS=schema_name [, ...]
If you have the EXP_FULL_DATABASE role, then you can specify a single schema other than your own or a list of schema names. The EXP_FULL_DATABASE role also allows you to export additional nonschema object information for each specified schema so that the schemas can be re-created at import time. This additional information includes the user definitions themselves and all associated system and role grants, user password history, and so on. Filtering can further restrict what is exported using schema mode (see Filtering During Export Operations).
Restrictions
If you do not have the EXP_FULL_DATABASE role, then you can specify only your own schema.
Example
The following is an example of using the SCHEMAS parameter. Note that user hr is allowed to specify more than one schema because the EXP_FULL_DATABASE role was previously assigned to it for the purpose of these examples.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe
This results in a schema-mode export in which the schemas hr, sh, and oe will be written to the expdat.dmp dump file located in the dpump_dir1 directory.
STATUS

Default: 0
Purpose
Specifies the frequency at which the job status display is updated.
Syntax and Description
STATUS=[integer]
If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, no additional information is displayed beyond information about the completion of each object type, table, or partition.
This status information is written only to your standard output device, not to the log file (if one is in effect).
Example
The following is an example of using the STATUS parameter.
> expdp hr/hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300
This example will export the hr and sh schemas and display the status of the export every 5 minutes (60 seconds x 5 = 300 seconds).
TABLES

Default: none
Purpose
Specifies that you want to perform a table-mode export.
Syntax and Description
TABLES=[schema_name.]table_name[:partition_name] [, ...]
Filtering can restrict what is exported using this mode (see Filtering During Export Operations). You can filter the data and metadata that is exported, by specifying a comma-delimited list of tables and partitions or subpartitions. If a partition name is specified, it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.
The table name that you specify can be preceded by a qualifying schema name. All table names specified must reside in the same schema. The schema defaults to that of the current user. To specify a schema other than your own, you must have the EXP_FULL_DATABASE role.

The use of wildcards is supported for one table name per export operation. For example, TABLES=emp% would export all tables having names that start with 'EMP'.
Restrictions
Cross-schema references are not exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported.
Types used by the table are not exported in table mode. This means that if you subsequently import the dump file and the TYPE does not already exist in the destination database, the table creation will fail.
The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, it would not be valid to use TABLES=regn. An error would be returned.

The export of table partitions is not supported when the NETWORK_LINK parameter is used.
The export of tables that include wildcards in the table name is not supported if the table has partitions.
Examples
The following example shows a simple use of the TABLES parameter to export three tables found in the hr schema: employees, jobs, and departments. Because user hr is exporting tables found in the hr schema, the schema name is not needed before the table names.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
The following example shows the use of the TABLES parameter to export partitions:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000
This example exports the partitions, sales_Q1_2000 and sales_Q2_2000, from the table sales in the schema sh.
TABLESPACES

Default: none
Purpose
Specifies a list of tablespace names to be exported in tablespace mode.
Syntax and Description
TABLESPACES=tablespace_name [, ...]
In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. If any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Filtering can restrict what is exported using this mode (see Filtering During Export Operations).
Example
The following is an example of using the TABLESPACES parameter. The example assumes that tablespaces tbs_4, tbs_5, and tbs_6 already exist.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4, tbs_5, tbs_6
This results in a tablespace export in which tables (and their dependent objects) from the specified tablespaces (tbs_4, tbs_5, and tbs_6) will be unloaded.
TRANSPORT_FULL_CHECK

Default: n
Purpose
Specifies whether or not to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.
Syntax and Description
TRANSPORT_FULL_CHECK={y | n}
If TRANSPORT_FULL_CHECK=y, then Export verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the export operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.

If TRANSPORT_FULL_CHECK=n, then Export verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the export operation is terminated.
In addition to this check, Export always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.
Example
The following is an example of using the TRANSPORT_FULL_CHECK parameter. It assumes that tablespace tbs_1 exists.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
TRANSPORT_TABLESPACES

Default: none
Purpose
Specifies that you want to perform a transportable-tablespace-mode export.
Syntax and Description
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be exported from the source database into the target database.
Note: You cannot export transportable tablespaces and then import them into a database at a lower release level. The target database must be at the same or higher release level as the source database.
Restrictions
Transportable jobs are not restartable.
Transportable jobs are restricted to a degree of parallelism of 1.
Transportable tablespace mode requires that you have the EXP_FULL_DATABASE role.
Example 1
The following is an example of using the TRANSPORT_TABLESPACES parameter in a file-based job (rather than network-based). The tablespace tbs_1 is the tablespace being moved. This example assumes that tablespace tbs_1 exists and that it has been set to read-only.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
VERSION

Default: COMPATIBLE
Purpose
Specifies the version of database objects to be exported. This can be used to create a dump file set that is compatible with a previous release of Oracle Database. Note that this does not mean that Data Pump Export can be used with releases of Oracle Database earlier than 10.1. Data Pump Export works only with Oracle Database 10g release 1 (10.1) or later. The VERSION parameter simply allows you to identify the version of the objects being exported.
Syntax and Description
VERSION={COMPATIBLE | LATEST | version_string}
The legal values for the VERSION parameter are as follows:

COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher.

LATEST - The version of the metadata corresponds to the database version.

version_string - A specific database version (for example, 10.0.0). In Oracle Database 10g, this value cannot be lower than 9.2.
Database objects or attributes that are incompatible with the specified version will not be exported. For example, tables containing new datatypes that are not supported in the specified version will not be exported.
Example
The following example shows an export for which the version of the metadata will correspond to the database version:
> expdp hr/hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp NOLOGFILE=y
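To target a specific version string instead, the command might look like the following; the value 10.1 and the dump file name emp101.dmp are illustrative (any value of 9.2 or higher is legal on this release):
> expdp hr/hr TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp101.dmp NOLOGFILE=y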
Table 2-1 maps, as closely as possible, Data Pump Export parameters to original Export parameters. In some cases, because of feature redesign, the original Export parameter is no longer needed, so there is no Data Pump parameter to compare it to. Also, as shown in the table, some of the parameter names may be the same, but the functionality is slightly different.
Table 2-1 Original Export Parameters and Their Counterparts in Data Pump Export
Original Export Parameter | Comparable Data Pump Export Parameter
---|---
BUFFER | A parameter comparable to BUFFER is not needed.
COMPRESS | A parameter comparable to COMPRESS is not needed.
CONSISTENT | A parameter comparable to CONSISTENT is not needed. Use FLASHBACK_SCN and FLASHBACK_TIME for this functionality.
CONSTRAINTS | EXCLUDE=CONSTRAINT
DIRECT | A parameter comparable to DIRECT is not needed. Data Pump Export automatically chooses the best method (direct path mode or external tables mode).
FEEDBACK | STATUS
FILE | DUMPFILE
FILESIZE | FILESIZE
FLASHBACK_SCN | FLASHBACK_SCN
FLASHBACK_TIME | FLASHBACK_TIME
FULL | FULL
GRANTS | EXCLUDE=GRANT
HELP | HELP
INDEXES | EXCLUDE=INDEX
LOG | LOGFILE
OBJECT_CONSISTENT | A parameter comparable to OBJECT_CONSISTENT is not needed.
OWNER | SCHEMAS
PARFILE | PARFILE
QUERY | QUERY
RECORDLENGTH | A parameter comparable to RECORDLENGTH is not needed because sizing is done automatically.
RESUMABLE | A parameter comparable to RESUMABLE is not needed. This functionality is automatically provided for privileged users.
RESUMABLE_NAME | A parameter comparable to RESUMABLE_NAME is not needed. This functionality is automatically provided for privileged users.
RESUMABLE_TIMEOUT | A parameter comparable to RESUMABLE_TIMEOUT is not needed. This functionality is automatically provided for privileged users.
ROWS=N | CONTENT=METADATA_ONLY
ROWS=Y | CONTENT=ALL
STATISTICS | A parameter comparable to STATISTICS is not needed. Statistics are always saved for tables.
TABLES | TABLES
TABLESPACES | TABLESPACES (Same parameter; slightly different behavior)
TRANSPORT_TABLESPACE | TRANSPORT_TABLESPACES (Same parameter; slightly different behavior)
TRIGGERS | EXCLUDE=TRIGGER
TTS_FULL_CHECK | TRANSPORT_FULL_CHECK
USERID | A parameter comparable to USERID is not needed. This information is supplied as the username/password when you invoke Export.
VOLSIZE | A parameter comparable to VOLSIZE is not needed.
This table does not list all Data Pump Export command-line parameters. For information about all Export command-line parameters, see Parameters Available in Export's Command-Line Mode.
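As an illustration of the mapping, consider a hypothetical original Export session such as:
exp hr/hr FILE=hr.dmp LOG=hr.log OWNER=hr
Per Table 2-1 (FILE becomes DUMPFILE, LOG becomes LOGFILE, and OWNER becomes SCHEMAS), a roughly equivalent Data Pump Export command might be the following; note that Data Pump additionally requires a directory object, such as dpump_dir1:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr.log SCHEMAS=hr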
In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Export prompt (Export>) is displayed.
Note: Data Pump Export interactive-command mode is different from the interactive mode for original Export, in which Export prompted you for input. See Interactive Mode for information about interactive mode in original Export.
To start interactive-command mode, do one of the following:
From an attached client, press Ctrl+C.
From a terminal other than the one on which the job is running, specify the ATTACH parameter in an expdp command to attach to the job, as shown in the example following this list. This is a useful feature in situations in which you start a job at one location and need to check on it at a later time from a different location.
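For example, if the running job were named hr.export_job (a hypothetical job name), you could attach to it from a second terminal as follows:
> expdp hr/hr ATTACH=hr.export_job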
Table 2-2 lists the activities you can perform for the current job from the Data Pump Export prompt in interactive-command mode.
Table 2-2 Supported Activities in Data Pump Export's Interactive-Command Mode
Activity | Command Used
---|---
Add additional dump files. | ADD_FILE
Exit interactive mode and enter logging mode. | CONTINUE_CLIENT
Stop the export client session, but leave the job running. | EXIT_CLIENT
Redefine the default size to be used for any subsequent dump files. | FILESIZE
Display a summary of available commands. | HELP
Detach all currently attached client sessions and kill the current job. | KILL_JOB
Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 10g. | PARALLEL
Restart a stopped job to which you are attached. | START_JOB
Display detailed status for the current job and/or set status interval. | STATUS
Stop the current job for later restart. | STOP_JOB
The following are descriptions of the commands available in the interactive-command mode of Data Pump Export.
ADD_FILE

Purpose
Adds additional files or wildcard file templates to the export dump file set.
Syntax and Description
ADD_FILE=[directory_object]file_name [,...]
The file_name must not contain any directory path information. However, it can include a substitution variable, %U, which indicates that multiple files may be generated using the specified filename as a template. Each file specification can also name a directory_object other than the default one for the job.
The size of the file being added is determined by the setting of the FILESIZE parameter.
Restrictions
When you use the ADD_FILE interactive command, Oracle recommends that the DIRECTORY parameter for the job be specified on the command line rather than in a parameter file.
Example
The following example adds two dump files to the dump file set. A directory object is not specified for the dump file named hr2.dmp, so the default directory object for the job is assumed. A different directory object, dpump_dir2, is specified for the dump file named hr3.dmp.
Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
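A sketch using the %U substitution variable, so that multiple files can be generated from a single template (the file name hr_extra%U.dmp is illustrative):
Export> ADD_FILE=hr_extra%U.dmp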
CONTINUE_CLIENT

Purpose
Changes the Export mode from interactive-command mode to logging mode.
Syntax and Description
CONTINUE_CLIENT
In logging mode, status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.
Example
Export> CONTINUE_CLIENT
EXIT_CLIENT

Purpose
Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.
Syntax and Description
EXIT_CLIENT
Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
Example
Export> EXIT_CLIENT
FILESIZE

Purpose
Redefines the default size to be used for any subsequent dump files.
Syntax and Description
FILESIZE=number
The file size can be followed by B, K, M, or G to indicate that the size is expressed in bytes, kilobytes, megabytes, or gigabytes, respectively. The default is B.
A file size of 0 indicates that there will not be any size restrictions on new dump files. They will be extended as needed until the limits of the containing device are reached.
Example
Export> FILESIZE=100M
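To remove the size restriction on subsequent dump files instead, set the value to 0:
Export> FILESIZE=0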
HELP

Purpose
Provides information about Data Pump Export commands available in interactive-command mode.
Syntax and Description
HELP
Displays information about the commands available in interactive-command mode.
Example
Export> HELP
KILL_JOB

Purpose
Detaches all currently attached client sessions and then kills the current job. It exits Export and returns to the terminal prompt.
Syntax and Description
KILL_JOB
A job that is killed using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being killed by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.
Example
Export> KILL_JOB
PARALLEL

Purpose
Enables you to increase or decrease the number of active worker processes for the current job.
Syntax and Description
PARALLEL=integer
PARALLEL is available as both a command-line parameter and as an interactive-command mode parameter. (It is available only in the Enterprise Edition of Oracle Database.) You set it to the desired number of parallel processes. An increase takes effect immediately if there are sufficient files and resources. A decrease does not take effect until an existing process finishes its current task. If the value is decreased, workers are idled but not deleted until the job exits.
Example
Export> PARALLEL=10
START_JOB

Purpose
Starts the current job to which you are attached.
Syntax and Description
START_JOB
The START_JOB command restarts the current job to which you are attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issued a STOP_JOB command, provided the dump file set and master table have not been altered in any way.
Transportable-tablespace-mode exports are not restartable.
Example
Export> START_JOB
STATUS

Purpose
Displays cumulative status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned. Also allows resetting the display interval for logging mode status.
Syntax and Description
STATUS[=integer]
You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, the periodic status display is turned off and status is displayed only once.
This status information is written only to your standard output device, not to the log file (even if one is in effect).
Example
The following example will display the current job status and change the logging mode display interval to five minutes (300 seconds):
Export> STATUS=300
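Issuing the command without a value displays the status once and turns the periodic display off:
Export> STATUS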
STOP_JOB

Purpose
Stops the current job either immediately or after an orderly shutdown, and exits Export.
Syntax and Description
STOP_JOB[=IMMEDIATE]
If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, the job can be attached to and restarted at a later time with the START_JOB command.
To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.
To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down; that is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.
Example
Export> STOP_JOB=IMMEDIATE
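For an orderly shutdown that allows the worker processes to finish their current tasks first, omit the value:
Export> STOP_JOB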
This section provides several examples of using Data Pump Export.
For information that will help you to successfully use these examples, see Using the Export Parameter Examples.
Example 2-1 shows a table-mode export, specified using the TABLES parameter. Issue the following Data Pump export command to perform a table export of the tables employees and jobs from the human resources (hr) schema:
Example 2-1 Performing a Table-Mode Export
expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y
Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=y parameter indicates that an Export log file of the operation will not be generated.
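By contrast, a suitably privileged user exporting tables that belong to another schema would qualify the table names with the schema name. A sketch, in which the system account and the password placeholder are illustrative:
> expdp system/password TABLES=hr.employees,hr.jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y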
Example 2-2 shows the contents of a parameter file (exp.par) that you could use to perform a data-only unload of all tables in the human resources (hr) schema except for the tables countries and regions. Only rows in the employees table with a department_id other than 50 are unloaded. The rows are ordered by employee_id.
Example 2-2 Data-Only Unload of Selected Tables and Rows
DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"
You can issue the following command to execute the exp.par parameter file:
> expdp hr/hr PARFILE=exp.par
A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export to an unload of just the tables' data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.
Example 2-3 shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resources (hr) schema: employees, departments, and locations.
Example 2-3 Estimating Disk Space Needed in a Table-Mode Export
> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees, departments, locations LOGFILE=estimate.log
The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
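The BLOCKS method is the default. As a hedged variant, the ESTIMATE parameter could be used to base the estimate on optimizer statistics instead; the log file name estimate2.log is illustrative:
> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y ESTIMATE=STATISTICS TABLES=employees LOGFILE=estimate2.log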
Example 2-4 shows a schema-mode export of the hr schema. In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.
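A minimal invocation consistent with this description might look like the following; the dump file and log file names are illustrative:

Example 2-4 Performing a Schema-Mode Export

> expdp hr/hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log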
Example 2-5 shows a full database export that uses 3 parallel worker processes.
Example 2-5 Parallel Full Export
> expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull
Because this is a full database export, all data and metadata in the database will be exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, these should be on separate I/O channels. Each file will be up to 2 gigabytes in size, as necessary. Initially, up to three files will be created. More files will be created, if needed. The job and master table will have a name of expfull. The log file will be written to expfull.log in the dpump_dir1 directory.
To start this example, reexecute the parallel full export in Example 2-5. While the export is running, press Ctrl+C. This will start the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops and the Export prompt is displayed.
Example 2-6 Stopping and Reattaching to a Job
At the Export prompt, issue the following command to stop the job:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state, and the client exits.
Enter the following command to reattach to the job you just stopped:
> expdp hr/hr ATTACH=EXPFULL
After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
This section provides syntax diagrams for Data Pump Export. These diagrams use standard SQL syntax notation. For more information about SQL syntax notation, see Oracle Database SQL Reference.
ExpInit
ExpStart
ExpModes
ExpOpts
ExpFileOpts
ExpDynOpts