
This example uses the FLASHBACK_SCN parameter to take an export. To determine the current SCN of your database, issue this query:

SQL> select current_scn from v$database;

Here’s some typical output:

CURRENT_SCN
----------------
8400741902387

Next, export using the FLASHBACK_SCN parameter:

$ expdp darl/foo directory=dp flashback_scn=8400741902387

The previous export command ensures that all data exported is consistent with any committed transactions in the database as of SCN 8400741902387.

When you use the FLASHBACK_SCN parameter, Data Pump ensures that the data in the export file is consistent as of the specified SCN. This means any transactions committed after the specified SCN aren’t included in the export file.
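If you script this step, you can capture the SCN and pass it to expdp in one shot. Here’s a minimal sketch, assuming a bash shell plus the darl schema and dp directory object from the prior examples:

$ scn=$(echo "set heading off feedback off pagesize 0
select current_scn from v\$database;" | sqlplus -s darl/foo | tr -d ' ')
$ expdp darl/foo directory=dp dumpfile=inv.dmp flashback_scn=$scn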


Note
If you use the NETWORK_LINK parameter in conjunction with FLASHBACK_SCN, then the export is taken with the SCN consistent with the database referenced in the database link.
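For instance, a hypothetical export pulled across a database link named PROD_DB (the link name and dump file name are assumptions, not part of the original example) looks like this; note that the SCN must be valid in the remote database referenced by the link:

$ expdp darl/foo directory=dp network_link=prod_db \
flashback_scn=8400741902387 dumpfile=inv.dmp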

You can also use FLASHBACK_TIME to specify that the export file should be created with consistent committed transactions as of a specified time. When using FLASHBACK_TIME, Oracle determines the SCN that most closely matches the time specified and uses that to produce an export consistent with that SCN. The syntax for using FLASHBACK_TIME is as follows:

FLASHBACK_TIME="TO_TIMESTAMP()"

For some operating systems, double quotes directly on the command line must be escaped by a backslash character (\), because the operating system treats the double quotes as special characters.

Due to issues regarding how operating systems treat quote marks, it’s much more straightforward to use a parameter file. Here are the contents of a parameter file that uses FLASHBACK_TIME:

directory=datapump
content=metadata_only
dumpfile=exp.dmp
flashback_time="to_timestamp('26-oct-2009 07:03:00','dd-mon-yyyy hh24:mi:ss')"
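Assuming you save those lines in a file named exp.par (the file name is arbitrary), you reference it via the PARFILE parameter:

$ expdp darl/foo parfile=exp.par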

Depending on your operating system, the command-line version of the previous example must be specified as follows:

flashback_time=\"to_timestamp\(\'26-oct-2009 07:03:00\',
\'dd-mon-yyyy hh24:mi:ss\'\)\"

This line of code should be specified on one line. It’s wrapped on two lines in this book to fit on the page.

You can’t specify both FLASHBACK_SCN and FLASHBACK_TIME when taking an export; these two parameters are mutually exclusive. If you attempt to use both parameters at the same time, Data Pump throws the following error message and halts the export job:

ORA-39050: parameter FLASHBACK_TIME is incompatible with parameter FLASHBACK_SCN


Note
The FLASHBACK_SCN and FLASHBACK_TIME parameters are only applicable to the Oracle flashback query functionality. These parameters aren’t applicable to flashback database or flashback drop.

Importing When Objects Already Exist

When exporting and importing data, you often import into a schema that already has the objects created (tables, indexes, and so on). In this situation, you should import the data but instruct Data Pump not to try to create already-existing objects.

You can achieve this with the TABLE_EXISTS_ACTION and CONTENT parameters. The next example uses the TABLE_EXISTS_ACTION=APPEND option to instruct Data Pump to append data to any tables that already exist. It also uses the CONTENT=DATA_ONLY option, which instructs Data Pump not to run any DDL to create objects (only load data):

$ impdp darl/foo directory=dk dumpfile=inv.dmp table_exists_action=append \

content=data_only

Existing objects aren’t modified in any way, and any new data in the dump file is inserted into the existing tables.

You may wonder what happens if you use the TABLE_EXISTS_ACTION option by itself and don’t combine it with the CONTENT parameter:

$ impdp darl/foo directory=dk dumpfile=inv.dmp table_exists_action=append

The only difference is that Data Pump attempts to run the DDL commands to create the objects. That doesn’t stop the job from running; if an object already exists, you see an error message in the output indicating as much. Here’s a snippet of the output for the previous command:

ORA-39152: Table "INV_MGMT"."INV" exists. Data will be appended to existing table
but all dependent metadata will be skipped due to table_exists_action of append

The default for the TABLE_EXISTS_ACTION parameter is SKIP, unless you also specify the parameter CONTENT=DATA_ONLY. If you use CONTENT=DATA_ONLY, then the default for TABLE_EXISTS_ACTION is APPEND.

The TABLE_EXISTS_ACTION parameter takes the following options:

• SKIP (default if not combined with CONTENT=DATA_ONLY)

• APPEND (default if combined with CONTENT=DATA_ONLY)

• REPLACE

• TRUNCATE


The SKIP option tells Data Pump not to process the object if it exists. The APPEND option instructs Data Pump not to delete existing data, but rather to add data to the table without modifying any existing data. The REPLACE option instructs Data Pump to drop and re-create objects; this option isn’t valid when the CONTENT parameter is used with the DATA_ONLY option. The TRUNCATE option instructs Data Pump to delete rows from tables via a TRUNCATE statement.
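For example, here’s a minimal sketch of a refresh-style load that empties each existing table via TRUNCATE before loading rows from the dump file (this reuses the hypothetical dump file from the earlier examples):

$ impdp darl/foo directory=dk dumpfile=inv.dmp table_exists_action=truncate \
content=data_only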

The CONTENT parameter takes the following options:

• ALL (default)

• DATA_ONLY

• METADATA_ONLY

The ALL option instructs Data Pump to load both data and metadata contained in the dump file; this is the default behavior. The DATA_ONLY option instructs Data Pump to load only table data into existing tables; no database objects are created. The METADATA_ONLY option only creates objects; no data is loaded.
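For instance, a minimal sketch that creates the objects contained in a dump file without loading any rows (again reusing the earlier hypothetical dump file) looks like this:

$ impdp darl/foo directory=dk dumpfile=inv.dmp content=metadata_only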


Note
With the old imp utility, you can instruct the import process to ignore existing objects via the ignore=y option.

Renaming a Table

Starting with Oracle Database 11g, you have the option of renaming a table during import operations.

There are many reasons you may want to rename a table when importing it. For example, you may have a table in the target schema with the same name as the table you want to import. You can rename a table during import by using the REMAP_TABLE parameter. This example imports the HEERA user’s INV table into the HEERA user’s INVEN table:

$ impdp darl/foo directory=dk dumpfile=inv.dmp tables=heera.inv \

remap_table=heera.inv:inven

Here’s the general syntax for renaming a table:

REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename

or

REMAP_TABLE=[schema.]old_tablename[:partition]:new_tablename
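The optional partition element applies when you import partitions as individual tables. As a hedged sketch (the partition name P1 is hypothetical, and PARTITION_OPTIONS=DEPARTITION instructs Data Pump to create each partition as a standalone table), you could name the table created from partition P1 like this:

$ impdp darl/foo directory=dk dumpfile=inv.dmp tables=heera.inv \
partition_options=departition remap_table=heera.inv.p1:inv_p1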

Notice that this syntax doesn’t allow you to rename a table into a different schema. If you’re not careful, you may attempt to do the following (thinking that you’re moving a table and renaming it in one operation):

$ impdp darl/foo directory=dk dumpfile=inv.dmp tables=heera.inv \

remap_table=heera.inv:scott.inven

You end up with a table in the HEERA schema named SCOTT.INVEN. That can be confusing. If you want to import a table into a different schema and rename it at the same time, use the REMAP_SCHEMA and REMAP_TABLE parameters together:

$ impdp darl/foo directory=dk dumpfile=inv.dmp remap_schema=heera:darl \

tables=heera.inv remap_table=heera.inv:invent



Note
The process of renaming a table wasn’t entirely bug-free in Oracle Database 11g release 1. It’s been corrected in Oracle Database 11g release 2. See My Oracle Support Note 886762.1 for more details.

Remapping Data

Starting with Oracle Database 11g, when either exporting or importing, you can apply a PL/SQL function to alter a column value. For example, you may have an auditor who needs to look at the data, and one requirement is that you apply a simple obfuscation function to sensitive columns. The data doesn’t need to be encrypted; it just needs to be changed enough that the auditor can’t readily determine the value of the LAST_NAME column in the CUSTOMERS table.

This example first creates a simple package that is used to obfuscate the data:

create or replace package obfus is
  function obf(clear_string varchar2) return varchar2;
  function unobf(obs_string varchar2) return varchar2;
end obfus;
/
--
create or replace package body obfus is
  fromstr varchar2(62) := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' ||
                          'abcdefghijklmnopqrstuvwxyz';
  tostr   varchar2(62) := 'defghijklmnopqrstuvwxyzabc3456789012' ||
                          'KLMNOPQRSTUVWXYZABCDEFGHIJ';
  --
  function obf(clear_string varchar2) return varchar2 is
  begin
    return translate(clear_string, fromstr, tostr);
  end obf;
  --
  function unobf(obs_string varchar2) return varchar2 is
  begin
    return translate(obs_string, tostr, fromstr);
  end unobf;
end obfus;
/
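Before using the package with Data Pump, you can sanity-check the translation in SQL*Plus. Given the FROMSTR and TOSTR values above, the string Kuhn maps to xERX, which matches the imported output shown later:

SQL> select obfus.obf('Kuhn') from dual;

OBFUS.OBF('KUHN')
--------------------
xERX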

Now, when you import the data into the database, you apply the obfuscation function to the LAST_NAME column of the CUSTOMERS table:

$ impdp darl/foo directory=dp dumpfile=cust.dmp tables=customers \

remap_data=customers.last_name:obfus.obf

Selecting LAST_NAME from CUSTOMERS shows that it has been imported in an obfuscated manner:

SQL> select last_name from customers;

LAST_NAME
------------------
yYZEJ
tOXXSMU
xERX


You can manually apply the package’s UNOBF function to see the real values of the column:

SQL> select obfus.unobf(last_name) from customers;

OBFUS.UNOBF(LAST_NAME)
-------------------------
Lopuz
Gennick
Kuhn

Cloning a User

Suppose you need to move a user’s objects and data to a new database. As part of the migration, you want to rename the user. First, create an export file that contains the user you want to clone. In this example, the user name is INV:

$ expdp darl/foo directory=dp schemas=inv dumpfile=inv.dmp

Now you can use Data Pump import to clone the user. You can do this in the database from which the user was exported or in a different database. Use the REMAP_SCHEMA parameter to create a copy of a user. In this example, the INV user is cloned to the INV_DW user:

$ impdp darl/foo directory=dp remap_schema=inv:inv_dw dumpfile=inv.dmp

This command copies all structures and data in the INV user to the INV_DW user. The resulting INV_DW user is identical in terms of objects to the INV user. The duplicated schema also contains the same password as the schema from which it was copied.

If you just want to duplicate the metadata from one schema to another, use the CONTENT parameter with the METADATA_ONLY option:

$ impdp darl/foo remap_schema=inv:inv_dw content=metadata_only dumpfile=inv.dmp

The REMAP_SCHEMA parameter provides an efficient way to duplicate a schema with or without the data. During a schema duplication operation, if you want to change the tablespace in which the objects reside, also use the REMAP_TABLESPACE parameter. This allows you to duplicate a schema and also place the objects in a different tablespace than the source objects’ tablespaces.
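As a sketch of that combination, the following duplicates the INV schema and relocates its objects from the USERS tablespace to a tablespace named INV_DW_DATA (both tablespace names are assumptions for illustration):

$ impdp darl/foo directory=dp remap_schema=inv:inv_dw \
remap_tablespace=users:inv_dw_data dumpfile=inv.dmp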

You can also duplicate a user from one database to another without first creating a dump file. To do this, use the NETWORK_LINK parameter. See the prior section in this chapter on Exporting and Importing Directly Across the Network for details on copying data directly from one database to another.
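Here’s a hedged sketch of that network approach, assuming a database link named OLD_DB that points at the source database:

$ impdp darl/foo directory=dp network_link=old_db schemas=inv \
remap_schema=inv:inv_dw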

Suppressing a Log File

By default, Data Pump creates a log file when generating an export or an import. If you know that you don’t want a log file generated, you can suppress it by specifying the NOLOGFILE parameter. Here’s an example:

$ expdp heera/foo directory=dk tables=inv nologfile=y

If you choose to not create a log file, Data Pump still displays status messages on the output device.

In general, I recommend that you create a log file with every Data Pump operation. This gives you an audit trail of your actions.
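Conversely, you can name the log file explicitly via the LOGFILE parameter; the file name here is arbitrary, and the log is written to the path of the specified directory object:

$ expdp heera/foo directory=dk tables=inv logfile=exp_inv.log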


Using Parallelism

Use the PARALLEL parameter to parallelize a Data Pump job. For example, if you know you have four CPUs on a box, and you want to set the degree of parallelism to 4, use PARALLEL as follows:

$ expdp darl/foo parallel=4 dumpfile=exp.dmp directory=datapump full=y

To take full advantage of the parallel feature, ensure that you specify multiple files when exporting.

The following example creates one file for each thread of parallelism:

$ expdp darl/foo parallel=4 dumpfile=exp1.dmp,exp2.dmp,exp3.dmp,exp4.dmp

You can also use the %U substitution variable to instruct Data Pump to automatically create dump files to match the degree of parallelism. The %U variable starts at the value 01 and increments as additional dump files are allocated. This example uses the %U variable:

$ expdp darl/foo parallel=4 dumpfile=exp%U.dmp

Now, say you need to import from the dump files created from an export. You can either individually specify the dump files or, if the dump files were created with the %U variable, use that on import:

$ impdp darl/foo parallel=4 dumpfile=exp%U.dmp

When you use the %U substitution variable, the import process in this example starts by looking for a file named exp01.dmp, then exp02.dmp, and so on.


Tip
Oracle recommends that the degree of parallelism not be set to more than two times the number of CPUs available on the server.

You can also modify the degree of parallelism while the job is running. First, attach in interactive command mode to the job whose degree of parallelism you want to modify. Then, use the PARALLEL option. In this example, the job attached to is SYS_IMPORT_TABLE_01:

$ impdp darl/foo attach=sys_import_table_01

Import> parallel=6

You can check the degree of parallelism via the STATUS command:

Import> status

Here’s some sample output:

Job: SYS_IMPORT_TABLE_01
Operation: IMPORT
Mode: TABLE
State: EXECUTING
Bytes Processed: 0
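After verifying the new degree of parallelism, you can return from interactive command mode to logging mode via the CONTINUE_CLIENT command:

Import> continue_client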
