
Additionally, some of the Data Pump parameters (like FLASHBACK_TIME) require the use of quotation marks; in these situations, it's sometimes hard to predict how the operating system will interpret the quotation marks. Whenever a parameter requires quotation marks, it's highly preferable to use a parameter file.

To use a parameter file, first create an operating system text file that contains the commands you want to use to control the behavior of your job. This example uses the Linux/Unix vi command to create a text file named imp.par:

$ vi imp.par

Now, place the following commands in the imp.par file:

userid=darl/foo

directory=dp

dumpfile=invp.dmp

logfile=invp.log

tables=f_sales

Next, reference the parameter file via the PARFILE command-line option:

$ impdp parfile=imp.par

Data Pump import processes the parameters in the file as if they were typed on the command line. If you find yourself repeatedly typing the same commands, then consider using a parameter file to increase your efficiency.
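A parameter file also sidesteps the quoting issue mentioned earlier. Here's a sketch of an export parameter file that uses FLASHBACK_TIME; the parameter file name exp_flash.par, the dump and log file names, and the timestamp are made up for this example:

userid=darl/foo
directory=dp
dumpfile=inv_flash.dmp
logfile=inv_flash.log
tables=f_sales
flashback_time="to_timestamp('01-07-2011 12:00:00','dd-mm-yyyy hh24:mi:ss')"

Because the TO_TIMESTAMP expression lives inside the parameter file, the quotation marks are passed to Data Pump exactly as written, with no shell escaping required:

$ expdp parfile=exp_flash.par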


Estimating the Size of Export Jobs

If you’re about to export a large amount of data, you can estimate the size of the file that Data Pump creates before you run the export. You may want to do this because you’re concerned about the amount of space an export job needs.

To estimate the size, use the ESTIMATE_ONLY parameter. This example estimates the size of the export file for an entire database:

$ expdp dbauser/foo estimate_only=y full=y logfile=n

Here’s a snippet of the output:

Estimate in progress using BLOCKS method...

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

. estimated "REP_MV"."REG_QUEUE_REP" 9.606 GB

. estimated "REP_MV"."CWP_USER_PROFILE" 2.589 GB

. estimated "REP_MV"."PRODUCT_INSTANCES_MV" 1.620 GB

. estimated "REP_MV"."ROLES_MV" 1.550 GB

. estimated "STAR2"."F_DOWNLOADS":"DOWN_P_5" 1.450 GB

. estimated "STAR2"."F_DOWNLOADS":"DOWN_P_11" 1.414 GB

Similarly, you can specify a schema name to get an estimate of the size required to export a user:

$ expdp dbauser/foo estimate_only=y schemas=star2 logfile=n

Here’s an example of estimating the size required for two tables:

$ expdp dbauser/foo estimate_only=y tables=star2.f_downloads,star2.f_installations \

logfile=n
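The output above shows the default BLOCKS method, which bases the estimate on the number of database blocks allocated to the objects. If the objects have reasonably current optimizer statistics, you can ask Data Pump to base the estimate on those instead via the ESTIMATE parameter; a minimal sketch, assuming the STAR2 schema has recently been analyzed:

$ expdp dbauser/foo estimate_only=y estimate=statistics schemas=star2 logfile=n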

Listing the Contents of Dump Files

Data Pump has a very robust method of creating a file that contains all the SQL that’s executed when an import job runs. Data Pump uses the DBMS_METADATA package to create the DDL that you can use to recreate objects in the Data Pump dump file.

Use the SQLFILE option of Data Pump import to list the contents of a Data Pump export file. This example creates a file named expfull.sql that contains the SQL statements that the import process calls (the file is placed in the directory defined by the DPUMP_DIR2 directory object):

$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp \

SQLFILE=dpump_dir2:expfull.sql

In the previous command, if you don’t specify a separate directory (like dpump_dir2 in this example), then the SQL file is written to the location specified in the DIRECTORY option.
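For example, a minor variant of the command above that omits the directory qualifier writes expfull.sql to the DPUMP_DIR1 location instead:

$ impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=expfull.sql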


Tip
You must run the previous command as a user with DBA privileges or as the schema that performed the Data Pump export. Otherwise, you get an empty SQL file without the expected SQL statements in it.

When you use the SQLFILE option with an import, the impdp process doesn't import any data; it only creates a file that contains the SQL commands that would be run by the import process. Generating a SQL file this way is handy, for example, when you want to preview the DDL before running an import or use it as a starting point for re-creating objects in another environment.


Data Pump can export from one database and import directly into another over a database link, without creating any intermediate dump files. Oracle also provides the transportable tablespace feature, which lets you copy the datafiles from a source database to the destination and then use Data Pump to transfer the associated metadata. These two techniques are described in this section.


Note
See Chapter 14 for a discussion of using external tables to transfer data.

Exporting and Importing Directly Across the Network

Sometimes you need to create a testing database and load it with production data. In scenarios like this, the production box is usually located remotely from the development box. Data Pump provides you the ability to take an export and directly import it into your target database without creating any intermediate dump files. This is a fast and efficient way to create new environments from existing environments.

An example will help illustrate how this works. For this example, the production database users are STAR2, CIA_APP, and CIA_SEL. You want to move these users into a testing database and name them STAR_JUL, CIA_APP_JUL, and CIA_SEL_JUL.

This task requires the following high-level steps:

1. Create the users in the destination database that the data will be imported into. Here's a sample script that creates the users in the testing database:

define star_user=star_jul

define star_user_pwd=star_jul_pwd

define cia_app_user=cia_app_jul

define cia_app_user_pwd=cia_app_jul_pwd

define cia_sel_user=cia_sel_jul

define cia_sel_user_pwd=cia_sel_jul_pwd

--

create user &&star_user identified by &&star_user_pwd;

grant connect,resource to &&star_user;

alter user &&star_user default tablespace dim_data;

--

create user &&cia_app_user identified by &&cia_app_user_pwd;

grant connect,resource to &&cia_app_user;

alter user &&cia_app_user default tablespace cia_data;

--

create user &&cia_sel_user identified by &&cia_sel_user_pwd;

grant connect,resource to &&cia_sel_user;

alter user &&cia_sel_user default tablespace cia_data;

2. Create a database link in your testing database that points to your production database. The remote user referenced in the CREATE DATABASE LINK statement must have the DBA role granted to it in the production database. Here's a sample CREATE DATABASE LINK script:

create database link dk

connect to darl identified by foobar

using 'dwdb1:1522/dwrep1';

3. Create a directory that points to the location where you want your log file to go:

SQL> create or replace directory engdev as '/orahome/oracle/ddl/engdev';

4. Run the import command on the testing box. This command references the remote database via the NETWORK_LINK parameter. It also instructs Data Pump to map the production database user names to the newly created users in the testing database.

$ impdp darl/engdev directory=engdev network_link=dk \

schemas='STAR2,CIA_APP,CIA_SEL' \

remap_schema=STAR2:STAR_JUL,CIA_APP:CIA_APP_JUL,CIA_SEL:CIA_SEL_JUL \

parallel=4

This technique allows you to move large amounts of data between disparate databases without having to create or copy any dump files or datafiles. This is a very powerful Data Pump feature that lets you quickly and efficiently transfer data.

If you don’t have Oracle Net connectivity between the two databases, then the steps to accomplish the same task are as follows:

1. Export the production database.

2. Copy the dump file to the testing database.

3. Import the dump file into the testing database.
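Here's a minimal sketch of those three steps, assuming a directory object named dp exists in both databases and that scp is used to copy the file; the host name, dump file name, and paths are made up for this example:

$ expdp darl/foo directory=dp dumpfile=prod_jul.dmp schemas=star2,cia_app,cia_sel

$ scp /orahome/oracle/dp/prod_jul.dmp oracle@testbox:/orahome/oracle/dp

$ impdp darl/foo directory=dp dumpfile=prod_jul.dmp \
remap_schema=STAR2:STAR_JUL,CIA_APP:CIA_APP_JUL,CIA_SEL:CIA_SEL_JUL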

Copying Datafile(s)

Oracle provides a mechanism for copying datafiles from one database to another in conjunction with using Data Pump to transport the associated metadata. This is known as the transportable tablespace feature. The amount of time this takes depends on how long it takes you to copy the datafiles to the destination server. This technique is appropriate for moving data in decision-support systems and data-warehousing environments.

Follow these steps to transport tablespaces:

1. Ensure that the set of tablespaces being transported is self-contained. These are some common violations of the self-contained rule:

• An index in one of the tablespaces being transported points to a table in a tablespace that isn't in the set of tablespaces being transported.

• A foreign-key constraint on a table in one of the tablespaces being transported references a primary-key constraint on a table in a tablespace that isn't in the set of tablespaces being transported.

Run the following check to see if the set of tablespaces being transported violates any of the self-contained rules:

SQL> exec dbms_tts.transport_set_check('INV_DATA,INV_INDEX', TRUE);

Now, see if Oracle detected any violations:

SQL> select * from transport_set_violations;

If you don’t have any violations, you should see:

no rows selected

2. Make the tablespaces being transported read-only:


SQL> alter tablespace inv_data read only;

SQL> alter tablespace inv_index read only;

3. Use Data Pump to export the metadata for the tablespaces being transported:

$ expdp darl/foo directory=dp dumpfile=trans.dmp \

transport_tablespaces=INV_DATA,INV_INDEX

4. Copy the Data Pump export dump file to the destination server.

5. Copy the datafile(s) to the destination server, placing the files in the directory where you want them to reside on the destination database server. The filenames and directory paths must match those specified in the import command in the next step.

6. Import the metadata into the destination database. Use the following parameter file to import the metadata for the datafiles being transported:

userid=darl/foo

directory=dp

dumpfile=trans.dmp

transport_datafiles=/ora01/dbfile/O11R2/inv_data01.dbf,

/ora01/dbfile/O11R2/inv_index01.dbf

If everything goes well, you should see some output indicating success:

Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT

Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK

Job "DARL"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 11:24:01

If the datafiles being transported have a block size different from that of the destination database, then you must modify your initialization file (or use an ALTER SYSTEM command) to add a buffer cache sized for the block size of the source database. For example, to add a 16KB buffer cache, place this in the initialization file:

db_16k_cache_size=200M
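If you'd rather use the ALTER SYSTEM route and the instance uses an spfile, the equivalent command looks like this (200M is just the value from the example above):

SQL> alter system set db_16k_cache_size=200M scope=both;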

The transportable tablespace mechanism allows you to quickly move datafiles between databases, even if the databases use different block sizes or have different endian formats. This section doesn't discuss all the details involved with transportable tablespaces; the focus in this chapter is showing how to use Data Pump to transport data. For complete details on transportable tablespaces, refer to the Oracle Database Administrator's Guide, available on OTN.
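If you're not sure whether the source and destination platforms share the same endian format, you can check with a quick query against the V$ views; run it in both databases and compare the results:

SQL> select d.platform_name, tp.endian_format
     from v$database d, v$transportable_platform tp
     where d.platform_name = tp.platform_name;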


Note
To generate transportable tablespaces, you must use Oracle Enterprise Edition. You can use other editions of Oracle to import transportable tablespaces.

Exporting and Importing Tablespaces and Datafiles

Data Pump contains some flexible features for exporting and importing tablespaces and datafiles. This section shows Data Pump techniques that you can use when working with these important database objects.


Exporting Tablespace Metadata

Sometimes you want to replicate environments—say, a production environment into a testing environment. One of the first tasks is to replicate the tablespaces. You can use Data Pump to pull out just the DDL required to re-create the tablespaces for an environment:

$ expdp darl/foo directory=dp dumpfile=phredstg.dmp content=metadata_only full=y \

include=tablespace

The FULL parameter instructs Data Pump to export everything in the database. When you also specify INCLUDE, Data Pump exports only the object types listed; in this scenario, the dump file contains only tablespace metadata.

Now you can use the SQLFILE parameter to view the DDL associated with the tablespaces that were exported:

$ impdp darl/foo directory=dp dumpfile=phredstg.dmp sqlfile=tbsp.sql

Recall that when you use the SQLFILE parameter, nothing is imported. The previous command only creates a file named tbsp.sql that contains SQL statements. You can modify the DDL and run it in the destination database environment; or, if nothing needs to change, you can directly use the dump file by importing it into the destination database.
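If the DDL needs no changes, the direct import might look like the following sketch, assuming the dump file has been copied to a location covered by the dp directory object in the destination database:

$ impdp darl/foo directory=dp dumpfile=phredstg.dmp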

Specifying Different Datafile Paths and Names

When you’re exporting and then importing, sometimes you import tablespaces into a database where the directory structures are different from the original database. On the import step, the REMAP_DATAFILE

allows you to seamlessly change the underlying names of datafiles. Here’s a parameter file that contains the remapping statements:

userid=darl/foo

directory=dp

dumpfile=phredstg.dmp

full=y

include=tablespace:"like 'TBSP%'"

remap_datafile="'/ora01/dbfile/O11R2/tbsp101.dbf':'/ora02/O11R2/tb1.dbf'"

remap_datafile="'/ora01/dbfile/O11R2/tbsp201.dbf':'/ora02/O11R2/tb2.dbf'"
