Oracle® Database Utilities 10g Release 1 (10.1), Part Number B10825-01
This chapter describes the Oracle Data Pump Import utility.
Note: Data Pump Import (invoked with the impdp command) is a new utility as of Oracle Database 10g. Although its functionality and its parameters are similar to those of the original Import utility (imp), they are completely separate utilities and their files are not compatible. See Chapter 20, "Original Export and Import" for a description of the original Import utility.
Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
Import can also be used to load a target database directly from a source database with no intervening dump files. This allows export and import operations to run concurrently, minimizing total elapsed time. This is known as a network import.
Data Pump Import enables you to specify whether a job should move a subset of the data and metadata from the dump file set or the source database (in the case of a network import), as determined by the import mode. This is done using data filters and metadata filters, which are implemented through Import commands. See Filtering During Import Operations.
To see some examples of the various ways in which you can use Import, refer to Examples of Using Data Pump Import.
The Data Pump Import utility is invoked using the impdp command. The characteristics of the import operation are determined by the import parameters you specify. These parameters can be specified either on the command line or in a parameter file.
The following sections contain more information about invoking Import.
You can interact with Data Pump Import by using a command line, a parameter file, or an interactive-command mode.
Command-Line Interface: Enables you to specify the Import parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see Parameters Available in Import's Command-Line Mode.
Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. (A short sketch of a parameter file follows this list.)
Interactive-Command Interface: Stops logging to the terminal and displays the Import prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an import operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.
For a complete description of the commands available in interactive-command mode, see Commands Available in Import's Interactive-Command Mode.
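For illustration, a minimal parameter file for a schema-mode import might look like the following. The file name my_imp.par is an assumption of this sketch; the dpump_dir1 directory object and expfull.dmp dump file are the ones used throughout this chapter's examples.

SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=expfull.dmp
LOGFILE=schema_imp.log

You would then invoke Import with a command such as the following. Pressing Ctrl+C while this command runs in logging mode would switch the session to interactive-command mode.

> impdp hr/hr PARFILE=my_imp.par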
One of the most significant characteristics of an import operation is its mode, because the mode largely determines what is imported. The specified mode applies to the source of the operation, either a dump file set or another database if the NETWORK_LINK parameter is specified.
When the source of the import operation is a dump file set, specifying a mode is optional. If no mode is specified, then Import attempts to load the entire dump file set in the mode in which the export operation was run.
The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:
A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the IMP_FULL_DATABASE role if the source is another database or if the export operation that generated the dump file set required the EXP_FULL_DATABASE role.
A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the current user are loaded. The source can be a full or schema-mode export dump file set or another database. If you have the IMP_FULL_DATABASE role, then you can specify a single schema other than your own or a list of schemas, and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas.
Cross-schema references are not imported. For example, a trigger defined on a table within one of the specified schemas, but residing in a schema not explicitly specified, is not imported.
A table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the IMP_FULL_DATABASE role to specify tables that are not in your own schema.
A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database.
A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles specified by the TRANSPORT_DATAFILES parameter must be made available from the source system for use in the target database, typically by copying them over to the target system. This mode requires the IMP_FULL_DATABASE role.
You can use SQL*Net connection strings or connection descriptors when you invoke the Import utility. To do so, the listener must be running (lsnrctl start). The following example shows how to invoke Import using a SQL*Net connection:
> impdp hr/hr@inst1 DIRECTORY=dpump_dir DUMPFILE=hr.dmp TABLES=employees
The hr/hr@inst1 results in a SQL*Net connection. The inst1 refers to a service name specified in the tnsnames.ora file. This means that the import client is being run remotely from the server to import data from a dump file.
Do not confuse invoking the Import utility using a SQL*Net connection string with performing an import operation using the Import NETWORK_LINK parameter.
The NETWORK_LINK parameter initiates a network import. This means that the impdp client initiates the import request, typically to the local database. That server contacts the remote source database referenced by the database link in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to the target database. There are no dump files involved.
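As a sketch of the distinction, a network import first requires a database link on the target database. The link name source_db and the service name inst1 below are assumptions of this sketch (see NETWORK_LINK for details):

SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY hr USING 'inst1';

> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db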
Data Pump Import provides much greater data and metadata filtering capability than was provided by the original Import utility.
Data filters specify restrictions on the rows that are to be imported. These restrictions can be based on partition names and on the results of subqueries.
Each data filter can only be specified once per table and once per job. If different filters using the same name are applied to both a particular table and to the whole job, the filter parameter supplied for the specific table will take precedence.
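As a sketch of this precedence rule, a parameter file might apply one QUERY filter job-wide and a more specific one to a single table. The predicates are illustrative, and the job-wide filter assumes every table in the job has a region_id column:

QUERY="WHERE region_id = 1"
QUERY=employees:"WHERE department_id > 50"

For the employees table, only the table-specific filter is applied; all other tables in the job use the job-wide filter.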
Data Pump Import provides much greater metadata filtering capability than was provided by the original Import utility. Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.
Metadata filters identify a set of objects to be included or excluded from a Data Pump operation. For example, you could request a full import, but without Package Specifications or Package Bodies.
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that a package is to be included in an operation, then grants upon that package will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.
If multiple filters are specified for an object type, an implicit AND operation is applied to them. That is, objects participating in the job must pass all of the filters applied to their object types.
The same filter name can be specified multiple times within a job.
To see which objects can be filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS. For an example of this, see Metadata Filters.
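For example, a query such as the following lists the object paths that can be filtered in a schema-mode job; the LIKE predicate is only an illustration:

SQL> SELECT object_path, comments FROM schema_export_objects WHERE object_path LIKE '%GRANT%';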
This section provides descriptions of the parameters available in the command-line mode of Data Pump Import. Many of the descriptions include an example of how to use the parameter.
If you try running the examples that are provided for each parameter, be aware of the following requirements:
Most of the examples use the sample schemas of the seed database, which is installed by default when you install Oracle Database. In particular, the human resources (hr) schema is often used.
Examples that specify a dump file to import assume that the dump file exists. Wherever possible, the examples use dump files that are generated when you run the Export examples in Chapter 2.
The examples assume that the directory objects, dpump_dir1 and dpump_dir2, already exist and that READ and WRITE privileges have been granted to the hr schema for these directory objects. See Default Locations for Dump, Log, and SQL Files for information about creating directory objects and assigning privileges to them.
Some of the examples require the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The examples assume that the hr schema has been granted these roles.
If necessary, ask your DBA for help in creating these directory objects and assigning the necessary privileges and roles.
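For reference, the kind of setup a DBA might perform is sketched below; the operating system paths are illustrative assumptions:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/dumpfiles1';
SQL> CREATE DIRECTORY dpump_dir2 AS '/usr/apps/dumpfiles2';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir2 TO hr;
SQL> GRANT exp_full_database, imp_full_database TO hr;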
Syntax diagrams of these parameters are provided in Syntax Diagrams for Data Pump Import.
Unless specifically noted, these parameters can also be specified in a parameter file.
Some of the examples in this chapter may show certain clauses enclosed in quotation marks and backslashes. This is because those clauses contain a blank, a situation for which most operating systems require that the entire string be placed in quotation marks or marked as a literal by some method. Some operating systems also require that quotation marks on the command line be preceded by an escape character, such as the backslash. If the backslashes were not present, the command-line parser that Import uses would not understand the quotation marks and would remove them. Some examples in this chapter may use escape characters to show you how they would be used. However, in general, Oracle recommends that you place such statements in a parameter file because escape characters are not necessary in parameter files.
Note: If you are accustomed to using the original Import utility, you may be wondering which Data Pump parameters are used to perform the operations you used to perform with original Import. For a comparison, see How Data Pump Import Parameters Map to Those of the Original Import Utility.
Default: current job in user's schema, if there is only one running job
Attaches the client session to an existing import job and automatically places you in interactive-command mode.
ATTACH [=[schema_name.]job_name]
Specify a schema_name if the schema to which you are attaching is not your own. You must have the IMP_FULL_DATABASE role to do this.
A job_name does not have to be specified if only one running job is associated with your schema and the job is active. If the job you are attaching to is stopped, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.
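For example, the following query shows your own jobs and their states:

SQL> SELECT job_name, state FROM user_datapump_jobs;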
When you are attached to the job, Import displays a description of the job and then displays the Import prompt.
When you specify the ATTACH parameter, you cannot specify any other parameters except for the connection string (user/password).
You cannot attach to a job in another schema unless it is already running.
The following is an example of using the ATTACH parameter.
> impdp hr/hr ATTACH=import_job
This example assumes that a job named import_job exists in the hr schema.
Default: ALL
Enables you to filter what is loaded during the import operation.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
ALL loads any data and metadata contained in the source. This is the default.
DATA_ONLY loads only table row data into existing tables; no database objects are created.
METADATA_ONLY loads only database object definitions; no table row data is loaded.
The following is an example of using the CONTENT parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY
This command will execute a full import that will load only the metadata in the expfull.dmp dump file. It executes a full import because that is the default for file-based imports in which no import mode is specified.
Default: none for nonprivileged users; DATA_PUMP_DIR for privileged users
Specifies the location in which the import job can find the dump file set and where it should create log and SQL files.
DIRECTORY=directory_object
The directory_object is the name of a database directory object (not the name of an actual directory) that was previously created by the database administrator (DBA) using the SQL CREATE DIRECTORY statement. You can override this value with individual directory objects specified on the DUMPFILE, LOGFILE, and SQLFILE parameters. You must have read access to the directory used for the dump file set and write access to the directory used to create the log and SQL files.
The following is an example of using the DIRECTORY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=dpump_dir2:expfull.log
This command results in the import job looking for the expfull.dmp dump file in the directory pointed to by the dpump_dir1 directory object. The dpump_dir2 directory object specified on the LOGFILE parameter overrides the DIRECTORY parameter so that the log file is written to dpump_dir2.
Note: See DIRECTORY for an example (Example 2) of using the default DATA_PUMP_DIR directory object available to privileged users.
Default: expdat.dmp
Specifies the names and optionally, the directory objects of the dump file set that was created by Export.
DUMPFILE=[directory_object:]file_name [, ...]
The directory_object is optional if one has already been established by the DIRECTORY parameter. If you do supply a value here, it must be a directory object that already exists, and it will override a value that was specified by the DIRECTORY parameter.
The file_name is the name of a file in the dump file set. The filenames can also be templates that contain the substitution variable, %U. If %U is used, Import examines each file that matches the template (until no match is found) in order to locate all files that are part of the dump file set. The %U expands to a 2-digit incrementing integer starting with 01.
Sufficient information is contained within the files for Import to locate the entire set, provided the file specifications in the DUMPFILE parameter encompass the entire set. The files are not required to have the same names or locations that they had at export time.
The following is an example of using the Import DUMPFILE parameter. You can create the dump files used in this example by running the example provided for the Export DUMPFILE parameter. See DUMPFILE.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp
Because a directory object (dpump_dir2) is specified for the exp1.dmp dump file, the import job will look there for the file. It will also look in dpump_dir1 for any dump files of the form exp2<nn>.dmp. The log file will be written to dpump_dir1.
Default: BLOCKS
Instructs the source system in a network import operation to estimate how much data will be generated.
ESTIMATE={BLOCKS | STATISTICS}
The valid choices for the ESTIMATE parameter are as follows:
BLOCKS - The estimate is calculated by multiplying the number of database blocks used by the target objects times the appropriate block sizes.
STATISTICS - The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
The estimate that is generated can be used to determine a percentage complete throughout the execution of the import job. When the import source is a dump file set, the amount of data to be loaded is already known, so the percentage complete is automatically calculated.
The Import ESTIMATE parameter is valid only if the NETWORK_LINK parameter is also specified.
In the following example, source_database_link would be replaced with the name of a valid link to the source database.
> impdp hr/hr TABLES=job_history NETWORK_LINK=source_database_link DIRECTORY=dpump_dir1 ESTIMATE=statistics
The job_history table in the hr schema is imported from the source database. A log file is created by default and written to the directory pointed to by the dpump_dir1 directory object. When the job begins, an estimate for the job is calculated based on table statistics.
Default: none
Enables you to filter the metadata that is imported by specifying objects and object types that you want to exclude from the import job.
EXCLUDE=object_type[:name_clause] [, ...]
For the given mode of import, all object types contained within the source (and their dependents) are included, except those specified in an EXCLUDE statement. If an object is excluded, all of its dependent objects are also excluded. For example, excluding a table will also exclude all indexes and triggers on the table.
The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE and VIEW, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'DEPT%'" to exclude all indexes whose names start with dept.
More than one EXCLUDE statement can be specified. Oracle recommends that you place EXCLUDE statements in a parameter file to avoid having to use operating system-specific escape characters on the command line.
As explained in the following sections, you should be aware of the effects of specifying certain objects for exclusion, in particular, CONSTRAINT, GRANT, and USER.
The following constraints cannot be excluded:
NOT NULL constraints.
Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).
This means that the following EXCLUDE statements will be interpreted as follows:
EXCLUDE=CONSTRAINT will exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading.
EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key) constraints.
Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
To exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):
EXCLUDE=SCHEMA:"= 'HR'"
If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'HR'", only CREATE USER hr DDL statements will be excluded, and you may not get the results you expect.
The EXCLUDE and INCLUDE parameters are mutually exclusive.
Neither EXCLUDE nor INCLUDE can be used if the CONTENT=DATA_ONLY parameter is specified, because that implies import of row data only.
Assume the following is in a parameter file, exclude.par, being used by a DBA or some other user with the IMP_FULL_DATABASE role. (If you want to try the example, you will need to create this file.)
EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=INDEX:"LIKE 'EMP%' "
You could then issue the following command. You can create the expfull.dmp dump file used in this command by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=exclude.par
All data from the expfull.dmp dump file will be loaded except for functions, procedures, packages, and indexes whose names start with emp.
See Also: Filtering During Import Operations for more information about the effects of using the EXCLUDE parameter.
Default: none
Specifies the system change number (SCN) that Import will use to enable the Flashback utility.
FLASHBACK_SCN=scn_number
The import operation is performed with data that is consistent as of the specified scn_number.
Note: If you are on a logical standby system, the FLASHBACK_SCN parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
The FLASHBACK_SCN parameter is valid only when the NETWORK_LINK parameter is also specified. This is because the value is passed to the source system to provide an SCN-consistent data extraction.
FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
The following is an example of using the FLASHBACK_SCN parameter.
> impdp hr/hr DIRECTORY=dpump_dir1 FLASHBACK_SCN=scn_number NETWORK_LINK=source_database_link
The source_database_link in this example would be replaced with the name of a source database from which you were importing data. Also, an actual integer value would be supplied for the scn_number.
Default: none
Specifies the time of a particular SCN.
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The import operation is performed with data that is consistent as of this SCN.
Note: If you are on a logical standby system, the FLASHBACK_TIME parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.
This parameter is valid only when the NETWORK_LINK parameter is also specified. This is because the value is passed to the source system to provide a time-consistent import.
FLASHBACK_TIME and FLASHBACK_SCN are mutually exclusive.
You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts, for example:
> impdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
This example may require the use of escape characters, depending on your operating system. See Command-Line Escape Characters in Examples.
See Also: Oracle Database Application Developer's Guide - Fundamentals for information about using flashback.
Default: Y
Specifies that you want to perform a full database import.
FULL=y
If you specify FULL=y, then everything from the source (either a dump file set or another database) is imported.
The IMP_FULL_DATABASE role is required if the NETWORK_LINK parameter is also used or if the export operation that generated the dump file set required the EXP_FULL_DATABASE role.
The following is an example of using the FULL parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DUMPFILE=dpump_dir1:expfull.dmp FULL=y LOGFILE=dpump_dir2:full_imp.log
This example imports everything from the expfull.dmp dump file. In this example, a DIRECTORY parameter is not provided. Therefore, a directory object must be provided on both the DUMPFILE parameter and the LOGFILE parameter. The directory objects can be different, as shown in this example.
Default: n
Displays online help for the Import utility.
HELP=y
If HELP=y is specified, Import displays a summary of all Import command-line parameters and interactive commands.
> impdp HELP = Y
This example will display a brief description of all Import parameters and commands.
Default: none
Enables you to filter the metadata that is imported by specifying objects and object types for the current import mode.
INCLUDE = object_type[:name_clause] [, ...]
Only object types in the source that are explicitly specified in the INCLUDE statement are imported.
The name_clause is optional. It allows fine-grained selection of specific objects within an object type. It is a SQL expression used as a filter on the object names of the type. It consists of a SQL operator and the values against which the object names of the specified type are to be compared. The name clause applies only to object types whose instances have names (for example, it is applicable to TABLE, but not to GRANT). The optional name clause must be separated from the object type with a colon and enclosed in double quotation marks, because single quotation marks are required to delimit the name strings.
More than one INCLUDE statement can be specified. Oracle recommends that you place INCLUDE statements in a parameter file to avoid having to use operating system-specific escape characters on the command line.
To see a list of valid paths for use with the INCLUDE parameter, you can query the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.
The INCLUDE and EXCLUDE parameters are mutually exclusive.
Neither INCLUDE nor EXCLUDE can be used if the CONTENT=DATA_ONLY parameter is specified, because that implies import of table row data only.
Assume the following is in a parameter file, imp_include.par, being used by a DBA or some other user with the IMP_FULL_DATABASE role:
INCLUDE=FUNCTION
INCLUDE=PROCEDURE
INCLUDE=PACKAGE
INCLUDE=INDEX:"LIKE 'EMP%' "
You can then issue the following command:
> impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=imp_include.par
You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
The Import operation will load only functions, procedures, and packages from the hr schema and indexes whose names start with EMP. Although this is a privileged-mode import (the user must have the IMP_FULL_DATABASE role), the schema definition is not imported, because the USER object type was not specified in an INCLUDE statement.
Default: system-generated name of the form SYS_<operation>_<mode>_NN
Specifies a name for the import job. The job name is used to identify the import job in subsequent actions, such as ATTACH. The job name becomes the name of the master table in the current user's schema. The master table controls the import job.
JOB_NAME=jobname_string
The jobname_string specifies a name of up to 30 bytes for this import job. The bytes must represent printable characters and spaces. If spaces are included, the name must be enclosed in single quotation marks (for example, 'Thursday Import'). The job name is implicitly qualified by the schema of the user performing the import operation.
The default job name is system-generated in the form SYS_<operation>_<mode>_NN, where NN expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_IMPORT_TABLESPACE_02'.
The following is an example of using the JOB_NAME parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp JOB_NAME=impjob01
Default: import.log
Specifies the name, and optionally, a directory object, for the log file of the import job.
LOGFILE=[directory_object:]file_name
If you specify a directory_object, it must be one that was previously established by the DBA. This overrides the directory object specified with the DIRECTORY parameter. The default behavior is to create import.log in the directory referenced by the directory object specified in the DIRECTORY parameter.
If the file_name you specify already exists, it will be overwritten.
All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the STATUS command in interactive mode.)
A log file is always created unless the NOLOGFILE parameter is specified. As with the dump file set, the log file is relative to the server and not the client.
Note: Data Pump Import writes the log file using the database character set. If your client NLS_LANG environment sets up a different client character set from the database character set, then it is possible that table names may be different in the log file than they are when displayed on the client output screen.
The following is an example of using the LOGFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr SCHEMAS=HR DIRECTORY=dpump_dir2 LOGFILE=imp.log DUMPFILE=dpump_dir1:expfull.dmp
The DUMPFILE parameter specifies a directory object (dpump_dir1) and a dump file name (expfull.dmp). The directory object specified on the DUMPFILE parameter overrides the directory object specified on the DIRECTORY parameter. Because no directory object is specified on the LOGFILE parameter, the log file is written to the directory object specified on the DIRECTORY parameter.
Default: none
Enables a network import when you specify the name of a valid database link to a source system.
NETWORK_LINK=source_database_link
The NETWORK_LINK parameter initiates a network import. This means that the impdp client initiates the import request, typically to the local database. That server contacts the remote source database referenced by source_database_link, retrieves the data, and writes it directly back to the target database. There are no dump files involved.
The source_database_link provided must be the name of a valid link to a source database. If the database does not already have a database link, you or your DBA must create one. For more information about the CREATE DATABASE LINK statement, see Oracle Database SQL Reference.
If the source database is read-only, then the user on the source database must have a locally-managed tablespace assigned as a default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
This parameter is required when any of the following parameters are specified: FLASHBACK_SCN, FLASHBACK_TIME, ESTIMATE, or TRANSPORT_TABLESPACES.
Network imports do not support the use of evolved types.
In the following example, the source_database_link would be replaced with the name of a valid database link that must already exist.
> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT
This example results in an import of the employees table (excluding constraints) from the source database. The log file is written to dpump_dir1, specified on the DIRECTORY parameter.
Default: n
Specifies whether or not to suppress the default behavior of creating a log file.
NOLOGFILE={y | n}
If you specify NOLOGFILE=Y to suppress creation of a log file, progress and error information is still written to the standard output device of any attached clients, including the client that started the original import operation. If there are no clients attached to a running job and you specify NOLOGFILE=Y, you run the risk of losing important progress and error information.
The following is an example of using the NOLOGFILE parameter.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp NOLOGFILE=Y
This command results in a full mode import (the default for file-based imports) of the expfull.dmp dump file. No log file is written because NOLOGFILE is set to y.
Default: 1
Specifies the maximum number of threads of active execution operating on behalf of the import job.
PARALLEL=integer
The value you specify for integer specifies the maximum number of threads of active execution operating on behalf of the import job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process, idle workers, and worker processes acting as parallel execution coordinators in parallel I/O operations do not count toward this total. This parameter enables you to make trade-offs between resource consumption and elapsed time.
If the source of the import is a dump file set consisting of files, multiple processes can read from the same file, but performance may be limited by I/O contention.
To increase or decrease the value of PARALLEL during job execution, use interactive-command mode.
Parallelism is used to best advantage for loading user data and package bodies, and for building indexes.
This parameter is valid only in the Enterprise Edition of Oracle Database 10g.
The following is an example of using the PARALLEL parameter.
> impdp hr/hr DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3
This command imports the dump file set that is created when you run the example for the Export PARALLEL parameter. (See PARALLEL.) The names of the dump files are par_exp01.dmp, par_exp02.dmp, and par_exp03.dmp.
The import job looks for the dump files in the location indicated by the dpump_dir1 directory object specified on the DIRECTORY parameter. This is also the location to which the log file is written.
Default: none
Specifies the name of an import parameter file.
PARFILE=[directory_path]file_name
Unlike dump files, log files, and SQL files, which are created and written by the server, the parameter file is opened and read by the client running the impdp image. Therefore, a directory object name is neither required nor appropriate. The default is the user's current directory.
The PARFILE parameter cannot be specified within a parameter file.
The content of an example parameter file, hr_imp.par, might be as follows:
TABLES=countries, locations, regions
DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp
DIRECTORY=dpump_dir1
PARALLEL=3
You could then issue the following command to execute the parameter file:
> impdp hr/hr PARFILE=hr_imp.par
The tables named countries, locations, and regions will be imported from the dump file set that is created when you run the example for the Export DUMPFILE parameter. (See DUMPFILE.) The import job looks for the exp1.dmp file in the location pointed to by dpump_dir2. It looks for any dump files of the form exp2<nn>.dmp in the location pointed to by dpump_dir1. The log file for the job will also be written to dpump_dir1.
Default: none
Enables you to filter the data that is imported by specifying a SQL SELECT statement, which is applied to all tables in the import job or to a specific table.
QUERY=[[schema_name.]table_name:]query_clause
The query_clause is typically a WHERE clause for fine-grained row selection, but could be any SQL clause.
If a schema and table are not supplied, the query is applied to (and must be valid for) all tables in the source dump file set or database.
When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one can be specified per table.
The query must be enclosed in single or double quotation marks. Double quotation marks are recommended, because strings within the clause must be enclosed in single quotation marks.
Oracle recommends that you place QUERY specifications in a parameter file to avoid having to use operating system-specific escape characters on the command line.
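For example, a parameter file such as the following (the file name query_imp.par is an assumption of this sketch) needs no command-line escapes:

DIRECTORY=dpump_dir1
DUMPFILE=expfull.dmp
QUERY=departments:"WHERE department_id < 120"

> impdp hr/hr PARFILE=query_imp.par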
When the QUERY parameter is used, the external tables method (rather than the direct path method) is used for data access.
To specify a schema other than your own in a table-specific query, you need the IMP_FULL_DATABASE role.
The QUERY parameter cannot be used in conjunction with the following parameters:
CONTENT=METADATA_ONLY
SQLFILE
TRANSPORT_TABLESPACES
The following is an example of using the QUERY parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp QUERY=departments:'"WHERE department_id < 120"' NOLOGFILE=Y
Only data in expfull.dmp that meets the criteria specified in the QUERY parameter is imported.
This example may require the use of escape characters, depending on your operating system. See Command-Line Escape Characters in Examples.
Default: none
Changes the name of the source datafile to the target datafile name in all SQL statements where the source datafile is referenced: CREATE TABLESPACE, CREATE LIBRARY, and CREATE DIRECTORY.
REMAP_DATAFILE=source_datafile:target_datafile
Remapping datafiles is useful when you move databases between platforms that have different file naming conventions. The source_datafile and target_datafile names should be exactly as you want them to appear in the SQL statements where they are referenced. Oracle recommends that you enclose datafile names in quotation marks to eliminate ambiguity on platforms for which a colon is a valid file specification character.
You must have the IMP_FULL_DATABASE role to specify this parameter.
The following is an example of using the REMAP_DATAFILE parameter.
> impdp hr/hr FULL=y DIRECTORY=dpump_dir1 DUMPFILE=db_full.dmp REMAP_DATAFILE='DB1$:[HRDATA.PAYROLL]tbs6.f':'/db1/hrdata/payroll/tbs6.f'
This example remaps a VMS file specification (DB1$:[HRDATA.PAYROLL]tbs6.f) to a UNIX file specification (/db1/hrdata/payroll/tbs6.f) for all SQL DDL statements during the import. The dump file, db_full.dmp, is located in the directory object, dpump_dir1.
Default: none
Loads all objects from the source schema into a target schema.
REMAP_SCHEMA=source_schema:target_schema
Multiple REMAP_SCHEMA lines can be specified, but the source schema must be different for each one. However, different source schemas can map to the same target schema. The mapping may not be 100 percent complete, because there are certain schema references that Import is not capable of finding. For example, Import will not find schema references embedded within the body of definitions of types, views, procedures, and packages.
This parameter requires the IMP_FULL_DATABASE role.
If the schema you are remapping to does not already exist, the import operation creates it, provided the dump file set contains the necessary CREATE USER metadata and you are importing with enough privileges. For example, the following Export commands would create the dump file sets with the necessary metadata to create a schema, because the user SYSTEM has the necessary privileges:
> expdp SYSTEM/password SCHEMAS=hr
> expdp SYSTEM/password FULL=y
If the dump file set was created without the metadata necessary to create a schema (for example, because the export was performed without sufficient privileges), then you must create the target schema before performing the import operation. This is because the dump files do not contain the necessary information for the import to create the schema automatically.
If the import operation does create the schema, then after the import is complete, you must assign it a valid password in order to connect to it. The SQL statement to do this, which requires privileges, is:
SQL> ALTER USER [schema_name] IDENTIFIED BY [new_pswd]
When you remap a schema, it is possible to constrain the rows loaded into the destination table by specifying a query. However, Data Pump Import performs remapping before applying any queries. This means that by the time the query is applied, the schema has already been remapped. Therefore, to constrain the rows that are loaded, the query must be applied to the target schema table rather than to the source schema table.
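As a sketch of this rule (the filter predicate is illustrative), when remapping hr to scott as in the following example, a table-specific query would name the table as it exists in the target schema, for example in a parameter file:

REMAP_SCHEMA=hr:scott
QUERY=scott.employees:"WHERE department_id > 50"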
Suppose that you execute the following Export and Import commands to remap the hr schema into the scott schema:
> expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
> impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott
In this example, if user scott already exists before the import, then the Import REMAP_SCHEMA command will add objects from the hr schema into the existing scott schema. You can connect to the scott schema after the import by using the existing password (without resetting it).
If user scott does not exist before you execute the import operation, Import automatically creates it with an unusable password. This is possible because the dump file, hr.dmp, was created by SYSTEM, which has the privileges necessary to create a dump file that contains the metadata needed to create a schema. However, you cannot connect to scott on completion of the import, unless you reset the password for scott on the target database after the import completes.
Default: none
Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
REMAP_TABLESPACE=source_tablespace:target_tablespace
Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
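If the target schema lacks quota, a DBA can grant it. A minimal sketch, reusing the tbs_6 tablespace from the example below:

SQL> ALTER USER hr QUOTA UNLIMITED ON tbs_6;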
Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. In original Import, if you wanted to change the default tablespace for a user, you had to perform several steps, including exporting and dropping the user, creating the user in the new tablespace, and importing the user from the dump file. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.
By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.
The following is an example of using the REMAP_TABLESPACE parameter.
> impdp hr/hr REMAP_TABLESPACE='tbs_1':'tbs_6' DIRECTORY=dpump_dir1 PARALLEL=2 JOB_NAME=cf1n02 DUMPFILE=employees.dmp NOLOGFILE=Y
Default: n
Specifies whether or not the import job should reuse existing datafiles for tablespace creation.
REUSE_DATAFILES={y | n}
If the default (n) is used and the datafiles specified in CREATE TABLESPACE statements already exist, an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues.
If this parameter is specified as y, a warning is issued and the existing datafiles are reinitialized. Be aware that specifying y can result in a loss of data.
The following is an example of using the REUSE_DATAFILES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=reuse.log REUSE_DATAFILES=Y
This example reinitializes datafiles referenced by CREATE TABLESPACE statements in the expfull.dmp file.
Default: none
Specifies that a schema-mode import is to be performed.
SCHEMAS=schema_name [,...]
If you have the IMP_FULL_DATABASE role, you can use this parameter to perform a schema-mode import by specifying a single schema other than your own or a list of schemas to import. First, the schemas themselves are created (if they do not already exist), including system and role grants, password history, and so on. Then all objects contained within the schemas are imported. Nonprivileged users can specify only their own schemas. In that case, no information about the schema definition is imported, only the objects contained within it.
The following is an example of using the SCHEMAS parameter. You can create the expdat.dmp file used in this example by running the example provided for the Export SCHEMAS parameter. See SCHEMAS.
> impdp hr/hr SCHEMAS=hr,oe DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp
The hr and oe schemas are imported from the expdat.dmp file. The log file, schemas.log, is written to dpump_dir1.
Default: n
Specifies whether or not Import skips loading tables that have indexes that were set to the Index Unusable state (by either the system or the user).
SKIP_UNUSABLE_INDEXES={y | n}
If SKIP_UNUSABLE_INDEXES is set to y, then the import job does not load tables that have indexes that were previously set to an Index Unusable state. Other tables, not previously set Unusable, continue to be updated as rows are inserted.
This parameter enables you to postpone index maintenance on selected index partitions until after row data has been inserted. You then have the responsibility to rebuild the affected index partitions after the import.
If SKIP_UNUSABLE_INDEXES is set to n (the default) and an index in the Unusable state is encountered, the load of that table or partition will fail.
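If you postpone index maintenance this way, a minimal rebuild sketch after the import might look like the following; the index and partition names are illustrative assumptions:

SQL> ALTER INDEX hr.emp_name_ix REBUILD PARTITION p1;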
The following is an example of using the SKIP_UNUSABLE_INDEXES parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log SKIP_UNUSABLE_INDEXES=y
Default: none
Specifies a file into which all of the SQL DDL that Import would have executed, based on other parameters, is written.
SQLFILE=[directory_object:]file_name
The file_name specifies where the import job will write the DDL that would be executed during the job. The SQL is not actually executed, and the target system remains unchanged. The file is written to the directory object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file that has a name matching the one specified with this parameter is overwritten.
Note that passwords are commented out in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be commented out and the schema name will be shown but the password will not. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
-- CONNECT hr
Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
CONNECT hr/hr
The following is an example of using the SQLFILE parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
A SQL file named expfull.sql is written to dpump_dir2.
Default: 0
Displays detailed status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned.
STATUS[=integer]
If you supply a value for integer, it specifies how frequently, in seconds, job status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, information is displayed only upon completion of each object type, table, or partition.
This status information is written only to your standard output device, not to the log file (if one is in effect).
The following is an example of using the STATUS parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr NOLOGFILE=y STATUS=120 DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
In this example, the status is shown every two minutes (120 seconds).
Default: y
Specifies whether or not to import any general Streams metadata that may be present in the export dump file.
STREAMS_CONFIGURATION={y | n}
The following is an example of using the STREAMS_CONFIGURATION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp STREAMS_CONFIGURATION=n
Default: SKIP (Note that if CONTENT=DATA_ONLY is specified, the default is APPEND, not SKIP.)
Tells Import what to do if the table it is trying to create already exists.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
The possible values have the following effects:
SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
APPEND loads rows from the source and leaves existing rows unchanged.
TRUNCATE deletes existing rows and then loads rows from the source.
REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
The following considerations apply when you are using these options:
When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.
When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.
The existing table is loaded using the external tables access method because the external tables feature honors active constraints and triggers. However, be aware that if any row violates an active constraint, the load fails and no data is loaded.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints (see the sketch after this list).
When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may wish to compress your data after the load.
TRUNCATE cannot be used on clustered tables or over network links.
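The following sketch shows the disable-load-clean-reenable workflow for a single table; the table, constraint, and predicate names are illustrative assumptions:

SQL> ALTER TABLE hr.employees DISABLE CONSTRAINT emp_dept_fk;
-- run the impdp job with TABLE_EXISTS_ACTION=APPEND here
SQL> DELETE FROM hr.employees
  2  WHERE department_id NOT IN (SELECT department_id FROM hr.departments);
SQL> ALTER TABLE hr.employees ENABLE CONSTRAINT emp_dept_fk;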
The following is an example of using the TABLE_EXISTS_ACTION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLE_EXISTS_ACTION=REPLACE
Default: none
Specifies that you want to perform a table-mode import.
TABLES=[schema_name.]table_name[:partition_name]
In a table-mode import, you can filter the data that is imported from the source by specifying a comma-delimited list of tables and partitions or subpartitions.
If you do not supply a schema_name, it defaults to that of the current user. To specify a schema other than your own, you must have the IMP_FULL_DATABASE role.
If a partition_name is specified, it must be the name of a partition or subpartition in the associated table.
The use of wildcards with table names is also supported. For example, TABLES=emp% would import all tables having names that start with 'EMP'.
The use of synonyms as values for the TABLES parameter is not supported. For example, if the regions table in the hr schema had a synonym of regn, it would not be valid to use TABLES=regn. An error would be returned.
If you specify more than one table_name, they must all reside in the same schema.
The following example shows a simple use of the TABLES parameter to import only the employees and jobs tables from the expfull.dmp file. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
The following example shows the use of the TABLES parameter to import partitions:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000
This example imports the partitions sales_Q1_2000 and sales_Q2_2000 for the table sales in the schema sh.
Default: none
Specifies that you want to perform a tablespace-mode import.
TABLESPACES=tablespace_name [, ...]
Use TABLESPACES to specify a list of tablespace names whose tables and dependent objects are to be imported from the source (full, schema, tablespace, or table-mode export dump file set or another database).
The following is an example of using the TABLESPACES parameter. It assumes that the tablespaces already exist. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4
Default: none
Enables you to alter object creation DDL for specific objects, as well as for all applicable objects being loaded.
TRANSFORM = transform_name:value[:object_type]
The transform_name specifies the name of the transform. There are two possible options:
SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y.
STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
The value specifies whether to include (y) or omit (n) segment or storage attributes (as specified by the transform_name) on applicable object DDL.
The object_type is optional. If supplied, it designates the object type to which the transform will be applied. If no object type is specified, then the transform applies to all object types. The object type must be either TABLE or INDEX.
For the following example, assume that you have exported the employees table in the hr schema. The SQL CREATE TABLE statement that results when you then import the table is similar to the following:
CREATE TABLE "HR"."EMPLOYEES" ( "EMPLOYEE_ID" NUMBER(6,0), "FIRST_NAME" VARCHAR2(20), "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, "PHONE_NUMBER" VARCHAR2(20), "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, "SALARY" NUMBER(8,2), "COMMISSION_PCT" NUMBER(2,2), "MANAGER_ID" NUMBER(6,0), "DEPARTMENT_ID" NUMBER(4,0) ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 10240 NEXT 16384 MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 50 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "SYSTEM" ;
If you do not want to retain the STORAGE clause or TABLESPACE clause, you can remove them from the CREATE TABLE statement by using the Import TRANSFORM parameter. Specify the value of SEGMENT_ATTRIBUTES as n. This results in the exclusion of segment attributes (both storage and tablespace) from the table.
> impdp hr/hr TABLES=hr.employees \
  DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
  TRANSFORM=SEGMENT_ATTRIBUTES:n:table
The resulting CREATE TABLE statement for the employees table would then look similar to the following. It does not contain a STORAGE or TABLESPACE clause; the attributes for the default tablespace for the HR schema will be used instead.
CREATE TABLE "HR"."EMPLOYEES" ( "EMPLOYEE_ID" NUMBER(6,0), "FIRST_NAME" VARCHAR2(20), "LAST_NAME" VARCHAR2(25) CONSTRAINT "EMP_LAST_NAME_NN" NOT NULL ENABLE, "EMAIL" VARCHAR2(25) CONSTRAINT "EMP_EMAIL_NN" NOT NULL ENABLE, "PHONE_NUMBER" VARCHAR2(20), "HIRE_DATE" DATE CONSTRAINT "EMP_HIRE_DATE_NN" NOT NULL ENABLE, "JOB_ID" VARCHAR2(10) CONSTRAINT "EMP_JOB_NN" NOT NULL ENABLE, "SALARY" NUMBER(8,2), "COMMISSION_PCT" NUMBER(2,2), "MANAGER_ID" NUMBER(6,0), "DEPARTMENT_ID" NUMBER(4,0) );
As shown in the previous example, the SEGMENT_ATTRIBUTES transform applies to both storage and tablespace attributes. To omit only the STORAGE clause and retain the TABLESPACE clause, you can use the STORAGE transform, as follows:
> impdp hr/hr TABLES=hr.employees \
  DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
  TRANSFORM=STORAGE:n:table
The SEGMENT_ATTRIBUTES and STORAGE transforms can be applied to all applicable table and index objects by not specifying the object type on the TRANSFORM parameter, as shown in the following command:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp \
  SCHEMAS=hr TRANSFORM=SEGMENT_ATTRIBUTES:n
TRANSPORT_DATAFILES
Default: none
Specifies a list of datafiles to be imported into the target database by a transportable-mode import. The files must already have been copied from the source database system.
TRANSPORT_DATAFILES=datafile_name
The datafile_name must include an absolute directory path specification (not a directory object name) that is valid on the system where the target database resides.
The following is an example of using the TRANSPORT_DATAFILES parameter.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'
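Because this parameter accepts a list, multiple datafiles can be named in a single command. In the following sketch, the second file name is hypothetical:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f','/user01/data/tbs2.f'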
TRANSPORT_FULL_CHECK
Default: n
Specifies whether to verify that the specified transportable tablespace set has no dependencies.
TRANSPORT_FULL_CHECK={y | n}
If TRANSPORT_FULL_CHECK=y, then Import verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the import operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.
If TRANSPORT_FULL_CHECK=n, then Import verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the import operation is terminated.
In addition to this check, Import always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by TRANSPORT_TABLESPACES are actually contained within the tablespace set.
This parameter is valid for transportable mode only when the NETWORK_LINK parameter is specified.
In the following example, source_database_link would be replaced with the name of a valid database link. The example also assumes that a datafile named tbs6.f already exists.
> impdp hr/hr DIRECTORY=dpump_dir1 TRANSPORT_TABLESPACES=tbs_6 NETWORK_LINK=source_database_link TRANSPORT_FULL_CHECK=y TRANSPORT_DATAFILES='/wkdir/data/tbs6.f'
TRANSPORT_TABLESPACES
Default: none
Specifies that you want to perform a transportable-tablespace-mode import.
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be imported from the source database into the target database.
In the following example, the source_database_link would be replaced with the name of a valid database link. The example also assumes that a datafile named tbs6.f already exists.
> impdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link TRANSPORT_TABLESPACES=tbs_6 TRANSPORT_FULL_CHECK=n TRANSPORT_DATAFILES='/user01/data/tbs6.f'
VERSION
Default: COMPATIBLE
Specifies the version of database objects to be imported.
VERSION={COMPATIBLE | LATEST | version_string}
This parameter can be used to load a target system whose Oracle database is at an earlier version than that of the source system. Database objects or attributes on the source system that are incompatible with the specified version will not be moved to the target. For example, tables containing new datatypes that are not supported in the specified version will not be imported. Legal values for this parameter are as follows:
COMPATIBLE - This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2.0 or higher.
LATEST - The version of the metadata corresponds to the database version.
version_string - A specific database version (for example, 10.0.0). In Oracle Database 10g, this value cannot be lower than 10.0.0.
The following is an example of using the VERSION parameter. You can create the expfull.dmp dump file used in this example by running the example provided for the Export FULL parameter. See FULL.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees VERSION=LATEST
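As a hypothetical variation, a specific version string can be supplied instead, restricting the imported metadata to what a 10.0.0 database can accept:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees VERSION=10.0.0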
Table 3-1 maps, as closely as possible, Data Pump Import parameters to original Import parameters. In some cases, because of feature redesign, the original Import parameter is no longer needed, so there is no comparable Data Pump parameter. Also, as shown in the table, some of the parameter names may be the same, but the functionality is slightly different.
Table 3-1 Original Import Parameters and Their Counterparts in Data Pump Import
Original Import Parameter | Comparable Data Pump Import Parameter |
---|---|
BUFFER | A parameter comparable to BUFFER is not needed. |
CHARSET | A parameter comparable to CHARSET is not needed. |
COMMIT | A parameter comparable to COMMIT is not supported. |
COMPILE | A parameter comparable to COMPILE is not supported. |
CONSTRAINTS | EXCLUDE=CONSTRAINT and INCLUDE=CONSTRAINT |
DATAFILES | TRANSPORT_DATAFILES |
DESTROY | REUSE_DATAFILES |
FEEDBACK | STATUS |
FILE | DUMPFILE |
FILESIZE | Not necessary. It is included in the dump file set. |
FROMUSER | SCHEMAS |
FULL | FULL |
GRANTS | EXCLUDE=GRANT and INCLUDE=GRANT |
HELP | HELP |
IGNORE | TABLE_EXISTS_ACTION |
INDEXES | EXCLUDE=INDEX and INCLUDE=INDEX |
INDEXFILE | SQLFILE |
LOG | LOGFILE |
PARFILE | PARFILE |
RECORDLENGTH | A parameter comparable to RECORDLENGTH is not needed. |
RESUMABLE | A parameter comparable to RESUMABLE is not needed. It is automatically defaulted. |
RESUMABLE_NAME | A parameter comparable to RESUMABLE_NAME is not needed. It is automatically defaulted. |
RESUMABLE_TIMEOUT | A parameter comparable to RESUMABLE_TIMEOUT is not needed. It is automatically defaulted. |
ROWS=N | CONTENT=METADATA_ONLY |
ROWS=Y | CONTENT=ALL |
SHOW | SQLFILE |
SKIP_UNUSABLE_INDEXES | SKIP_UNUSABLE_INDEXES |
STATISTICS | A parameter comparable to STATISTICS is not needed. If the source table has statistics, they are imported. |
STREAMS_CONFIGURATION | STREAMS_CONFIGURATION |
STREAMS_INSTANTIATION | A parameter comparable to STREAMS_INSTANTIATION is not needed. |
TABLES | TABLES |
TABLESPACES | This parameter still exists, but some of its functionality is now performed using the TRANSPORT_TABLESPACES parameter. |
TOID_NOVALIDATE | A parameter comparable to TOID_NOVALIDATE is not needed. OIDs are no longer used for type validation. |
TOUSER | REMAP_SCHEMA |
TRANSPORT_TABLESPACE | TRANSPORT_TABLESPACES (see parameter description) |
TTS_OWNERS | A parameter comparable to TTS_OWNERS is not needed because the information is stored in the dump file set. |
USERID | A parameter comparable to USERID is not needed. This information is supplied as the username/password when you invoke Import. |
VOLSIZE | A parameter comparable to VOLSIZE is not needed because tapes are not supported. |
This table does not list all Data Pump Import command-line parameters. For information about all Import command-line parameters, see Parameters Available in Import's Command-Line Mode.
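As an illustrative sketch of the mapping (the dump file and directory names here are hypothetical), an original Import command using FROMUSER and TOUSER, and its approximate Data Pump counterpart using REMAP_SCHEMA, might look like this:
> imp hr/hr FILE=hr.dmp FROMUSER=hr TOUSER=scott
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott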
In interactive-command mode, the current job continues running, but logging to the terminal is stopped and the Import prompt is displayed.
Note: Data Pump Import interactive-command mode is different from the interactive mode for original Import, in which Import prompted you for input. See Interactive Mode for information about interactive mode in original Import. |
To start interactive-command mode, do one of the following:
From an attached client, press Ctrl+C.
From a terminal other than the one on which the job is running, use the ATTACH parameter to attach to the job (see the sketch below). This is a useful feature in situations in which you start a job at one location and need to check on it at a later time from a different location.
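For example, assuming a hypothetical default job name of SYS_IMPORT_FULL_01, you might attach from a second terminal like this:
> impdp hr/hr ATTACH=hr.sys_import_full_01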
Table 3-2 lists the activities you can perform for the current job from the Data Pump Import prompt in interactive-command mode.
Table 3-2 Supported Activities in Data Pump Import's Interactive-Command Mode
Activity | Command Used |
---|---|
Exit interactive-command mode | CONTINUE_CLIENT |
Stop the import client session, but leave the current job running | EXIT_CLIENT |
Display a summary of available commands | HELP |
Detach all currently attached client sessions and kill the current job | KILL_JOB |
Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 10g. | PARALLEL |
Restart a stopped job to which you are attached | START_JOB |
Display detailed status for the current job | STATUS |
Stop the current job | STOP_JOB |
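As an illustrative session sketch (output abbreviated; the dump file name is assumed from earlier examples), pressing Ctrl+C during a running import suspends terminal logging and displays the prompt, at which point any of the commands above can be entered:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp FULL=y
...
(press Ctrl+C)
impdp> STATUS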
The following are descriptions of the commands available in the interactive-command mode of Data Pump Import.
Changes the mode from interactive-command mode to logging mode.
CONTINUE_CLIENT
In logging mode, the job status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.
impdp> CONTINUE_CLIENT
Stops the import client session, exits Import, and discontinues logging to the terminal, but leaves the current job running.
EXIT_CLIENT
Because EXIT_CLIENT stops the client session but leaves the job running, you can attach to the job at a later time if it is still executing or in a stopped state. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
impdp> EXIT_CLIENT
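A minimal query against the first of those views might look like the following sketch (the column choice is illustrative):
SELECT job_name, operation, state FROM user_datapump_jobs;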
Provides information about Data Pump Import commands available in interactive-command mode.
HELP
Displays information about the commands available in interactive-command mode.
impdp> HELP
Detaches all currently attached client sessions and then kills the current job. It exits Import and returns to the terminal prompt.
KILL_JOB
A job that is killed using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being killed by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.
impdp> KILL_JOB
Enables you to increase or decrease the number of active worker processes for the current job.
PARALLEL=integer
PARALLEL is available as both a command-line parameter and an interactive-mode parameter. You set it to the desired number of parallel processes. An increase takes effect immediately if there are sufficient files and resources. A decrease does not take effect until an existing process finishes its current task. If the integer value is decreased, workers are idled but not deleted until the job exits.
PARALLEL=10
Starts the current job to which you are attached.
START_JOB[=SKIP_CURRENT]
The START_JOB command restarts the job to which you are currently attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issue a STOP_JOB command, provided the dump file set and master table remain undisturbed.
The SKIP_CURRENT option allows you to restart a job that previously failed to restart because execution of some DDL statement failed. The failing statement is skipped and the job is restarted from the next work item.
Transportable-tablespace-mode imports are not restartable.
impdp> START_JOB
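To restart while skipping a DDL statement that caused the previous restart to fail, the option can be given explicitly:
impdp> START_JOB=SKIP_CURRENT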
Displays the cumulative status of the job, along with a description of the current operation. A completion percentage for the job is also returned.
STATUS[=integer]
You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, the periodic status display is turned off and status is displayed only once.
This status information is written only to your standard output device, not to the log file (even if one is in effect).
The following example will display the status every two minutes (120 seconds).
STATUS=120
Stops the current job either immediately or after an orderly shutdown, and exits Import.
STOP_JOB[=IMMEDIATE]
If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, the job can be attached to and restarted at a later time with the START_JOB command.
To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.
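For example, an orderly stop entered at the interactive prompt looks like this:
impdp> STOP_JOB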
To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user, and they will be detached. After all clients are detached, the process structure of the job is immediately run down; that is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.
impdp> STOP_JOB=IMMEDIATE
This section provides examples of the following ways in which you might use Data Pump Import:
Performing a Data-Only Table-Mode Import
Performing a Schema-Mode Import
Network-Mode Import of Schemas
For information that will help you to successfully use these examples, see Using the Import Parameter Examples.
Example 3-1 shows how to perform a data-only table-mode import of the table named employees. It uses the dump file created in Example 2-1.
Example 3-1 Performing a Data-Only Table-Mode Import
> impdp hr/hr TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y
The CONTENT=DATA_ONLY parameter filters out any database object definitions (metadata). Only table row data is loaded.
Example 3-2 shows a schema-mode import of the dump file set created in Example 2-4.
Example 3-2 Performing a Schema-Mode Import
> impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp EXCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX TABLE_EXISTS_ACTION=REPLACE
The EXCLUDE parameter filters the metadata that is imported. For the given mode of import, all the objects contained within the source, and all their dependent objects, are included except those specified in an EXCLUDE statement. If an object is excluded, all of its dependent objects are also excluded. The TABLE_EXISTS_ACTION=REPLACE parameter tells Import to drop the table if it already exists and to then re-create and load it using the dump file contents.
Example 3-3 performs a network-mode import where the source is the database defined by the NETWORK_LINK parameter.
Example 3-3 Network-Mode Import of Schemas
> impdp hr/hr TABLES=employees REMAP_SCHEMA=hr:scott DIRECTORY=dpump_dir1 NETWORK_LINK=dblink
This example imports the employees table from the hr schema into the scott schema. The dblink references a source database that is different from the target database.
REMAP_SCHEMA loads all the objects from the source schema into the target schema.
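To import an entire schema over the network rather than a single table (a hypothetical variation consistent with this example's title), you could replace the TABLES parameter with SCHEMAS:
> impdp hr/hr SCHEMAS=hr REMAP_SCHEMA=hr:scott DIRECTORY=dpump_dir1 NETWORK_LINK=dblink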
This section provides syntax diagrams for Data Pump Import. These diagrams use standard SQL syntax notation. For more information about SQL syntax notation, see Oracle Database SQL Reference.