

Note
When Data Pump runs in legacy mode, it doesn’t create an old exp/imp formatted file. Data Pump always creates a Data Pump file and can only read Data Pump files.

Data Pump Mapping to the exp Utility

If you’re used to the old exp/imp parameters, some of the Data Pump syntax may initially be confusing. However, after you use Data Pump, you’ll find the newer syntax fairly easy to remember and use. Table 13–3 describes how the legacy export parameters map to Data Pump export.

In many instances, there isn’t a one-to-one mapping. Often, Data Pump automatically provides features that used to require a parameter in the legacy utilities. For example, whereas you used to have to specify DIRECT=Y to get a direct path export, Data Pump automatically uses direct path whenever possible.
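For instance, the following two commands perform roughly equivalent exports. This is a sketch; the user name, password, and the DP_DIR directory object are illustrative placeholders, and the directory object must already exist:

$ exp inv_user/pw file=inv.dmp log=inv.log direct=y
$ expdp inv_user/pw directory=dp_dir dumpfile=inv.dmp logfile=inv.log

The legacy FILE and LOG parameters name operating-system files directly, whereas Data Pump resolves DUMPFILE and LOGFILE through a database-directory object, and no DIRECT parameter is needed.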


Table 13–3. Mapping of Old Export Parameters to Data Pump

Original exp Parameter    Similar Data Pump expdp Parameter
----------------------    ---------------------------------
BUFFER                    N/A
COMPRESS                  TRANSFORM
CONSISTENT                FLASHBACK_SCN or FLASHBACK_TIME
CONSTRAINTS               EXCLUDE=CONSTRAINTS
DIRECT                    N/A; Data Pump automatically uses direct path when possible
FEEDBACK                  STATUS in client output
FILE                      Database-directory object and DUMPFILE
GRANTS                    EXCLUDE=GRANT
INDEXES                   INCLUDE=INDEXES or EXCLUDE=INDEXES
LOG                       Database-directory object and LOGFILE
OBJECT_CONSISTENT         N/A
OWNER                     SCHEMAS
RECORDLENGTH              N/A
RESUMABLE                 N/A; Data Pump automatically provides this functionality
RESUMABLE_NAME            N/A
RESUMABLE_TIMEOUT         N/A
ROWS                      CONTENT=ALL
STATISTICS                N/A; Data Pump export always exports statistics for tables
TABLESPACES               TRANSPORT_TABLESPACES
TRANSPORT_TABLESPACE      TRANSPORT_TABLESPACES
TRIGGERS                  EXCLUDE=TRIGGER
TTS_FULL_CHECK            TRANSPORT_FULL_CHECK
VOLSIZE                   N/A; Data Pump doesn’t support tape devices
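To see one of these mappings in action, compare a consistent legacy export with a Data Pump equivalent. This is a minimal sketch; the user name, directory object, and timestamp are placeholders, and FLASHBACK_TIME is usually easiest to supply via a parameter file because of the quoting:

$ exp inv_user/pw file=inv.dmp consistent=y

The Data Pump equivalent, with a parameter file exp.par containing:

directory=dp_dir
dumpfile=inv.dmp
flashback_time="to_timestamp('25-08-2010 14:35:00','dd-mm-yyyy hh24:mi:ss')"

$ expdp inv_user/pw parfile=exp.par

Both exports produce data that is read-consistent as of a single point in time.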


Data Pump Mapping to the imp Utility

As with Data Pump export, Data Pump import often doesn’t have a one-to-one mapping to the legacy utility’s parameters. Data Pump import automatically provides many features of the old imp utility. For example, COMMIT=Y isn’t required because Data Pump import automatically commits after each table is imported. Table 13–4 describes how legacy import parameters map to Data Pump import.

Table 13–4. Mapping of Old Import Parameters to Data Pump

Original imp Parameter    Similar Data Pump impdp Parameter
----------------------    ---------------------------------
BUFFER                    N/A
CHARSET                   N/A
COMMIT                    N/A; Data Pump import automatically commits after each table is imported
COMPILE                   N/A; Data Pump import compiles procedures after they’re created
CONSTRAINTS               EXCLUDE=CONSTRAINT
DATAFILES                 TRANSPORT_DATAFILES
DESTROY                   REUSE_DATAFILES=y
FEEDBACK                  STATUS in client output
FILE                      Database-directory object and DUMPFILE
FILESIZE                  N/A
FROMUSER                  REMAP_SCHEMA
GRANTS                    EXCLUDE=OBJECT_GRANT
IGNORE                    TABLE_EXISTS_ACTION with APPEND, REPLACE, SKIP, or TRUNCATE
INDEXES                   EXCLUDE=INDEXES
INDEXFILE                 SQLFILE
LOG                       Database-directory object and LOGFILE
RECORDLENGTH              N/A
RESUMABLE                 N/A; this functionality is automatically provided
RESUMABLE_NAME            N/A
RESUMABLE_TIMEOUT         N/A
ROWS=N                    CONTENT, with METADATA_ONLY or ALL
SHOW                      SQLFILE
STATISTICS                N/A
STREAMS_CONFIGURATION     N/A
STREAMS_INSTANTIATION     N/A
TABLESPACES               TRANSPORT_TABLESPACES
TOID_NOVALIDATE           N/A
TOUSER                    REMAP_SCHEMA
TRANSPORT_TABLESPACE      TRANSPORT_TABLESPACES
TTS_OWNERS                N/A
VOLSIZE                   N/A; Data Pump doesn’t support tape devices
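As a combined example, here’s how an old-style import that moved a schema’s objects into a different user might translate. This is a sketch; all user, file, and directory-object names are placeholders:

$ imp system/pw file=inv.dmp fromuser=inv touser=inv_dw ignore=y
$ impdp system/pw directory=dp_dir dumpfile=inv.dmp remap_schema=inv:inv_dw table_exists_action=append

The FROMUSER/TOUSER pair becomes a single REMAP_SCHEMA mapping, and the old IGNORE=y behavior (load into tables that already exist) is expressed with TABLE_EXISTS_ACTION.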

Summary

Data Pump is an extremely powerful and feature-rich tool. If you haven’t used Data Pump much, then I implore you to take some time to read this chapter and work through the examples. This tool greatly simplifies tasks like moving users and data from one environment to another. You can export and import subsets of users, filter and remap data via SQL and PL/SQL, rename users and tablespaces, compress, encrypt, and parallelize, all with one command. It really is that powerful.

DBAs sometimes stick with the old exp/imp utilities because that’s what they’re familiar with (I’m occasionally guilty of this). If you’re running Oracle Database 11g Release 2, you can use the old exp/imp parameters and options directly from the command line. Data Pump translates these parameters on the fly to Data Pump–specific syntax. This feature nicely facilitates the migration from the old to the new. For reference, I’ve also provided a mapping of the old exp/imp syntax and how it relates to Data Pump commands.
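For example, the following command uses nothing but old exp-style parameters, yet it runs Data Pump export (the user name is a placeholder):

$ expdp darl/pw file=inv.dmp log=inv.log

In legacy mode, Data Pump translates FILE into DUMPFILE and LOG into LOGFILE on the fly; as always with Data Pump, the files are written through a database-directory object rather than to an arbitrary operating-system path.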

Although Data Pump is an excellent tool for moving database objects and data from one environment to another, sometimes you need to transfer large quantities of data to and from operating-system flat files. You use external tables to achieve this task. This is the topic of the next chapter in this book.


CHAPTER 14 ■ EXTERNAL TABLES

A quick comparison of using SQL*Loader and external tables highlights the differences. Listed next are the SQL*Loader steps that you use to load and transform data:

1. Create a parameter file that SQL*Loader uses to interpret the format of the data in the operating-system file.

2. Create a regular database table into which SQL*Loader will insert records. The data will be staged here until it can be further processed.

3. Run the SQL*Loader sqlldr utility to load data from the operating-system file into the database table (created in step 2). When loading data, SQL*Loader has some features that allow you to transform data. This step is sometimes frustrating because it can take several trial-and-error runs to correctly map the parameter file to the table and corresponding columns (see the sketch after this list).

4. Create another table that will contain the completely transformed data.

5. First, run SQL statements to load the data from the staging table (created in step 2); then transform and insert the data into the production table (created in step 4).
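To make the comparison concrete, here’s a minimal sketch of steps 1 through 3 for a pipe-delimited file like the one used later in this chapter. The control-file contents, the EXADATA_STG staging table, and the account are hypothetical:

-- ex.ctl: maps the flat-file fields to the staging-table columns
LOAD DATA
INFILE 'ex.csv'
INTO TABLE exadata_stg
FIELDS TERMINATED BY '|'
(exa_id, machine_count, hide_flag, oracle,
 ship_date DATE "mm/dd/yyyy", rack_type)

Assuming the staging table from step 2 exists, run the load:

$ sqlldr dba_user/pw control=ex.ctl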

Compare the previous SQL*Loader list to the following steps to load and transform data using external tables:

1. Execute a CREATE TABLE...ORGANIZATION EXTERNAL script that maps the structure of the operating-system file to table columns. After this script is run, you can directly use SQL to query the contents of the operating-system file.

2. Create a regular table to hold the completely transformed data.

3. Run SQL statements to load and fully transform the data from the external table into the table created in step 2.

For many shops, SQL*Loader underpins large data-loading operations. It continues to be a good tool to use for that task. However, you may want to investigate using external tables. External tables have the following advantages:


• Loading data with external tables is more straightforward and requires fewer steps.

• The interface for creating and loading from external tables is SQL*Plus. Many DBAs/developers find using SQL*Plus more intuitive and powerful than SQL*Loader’s parameter-file interface.

• You can view data in an external table before it’s loaded into a database table.

• You can load, transform, and aggregate the data without an intermediate staging table. For large amounts of data, this can be a huge space savings (see the example following this list).
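For example, once an external table such as the EXADATA_ET table created later in this chapter exists, a single query can filter and aggregate the flat-file contents with no staging table involved; a quick sketch:

SQL> select rack_type, count(*) from exadata_et group by rack_type;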

The next several sections contain examples of using external tables to read from operating-system files.


Loading CSV Files into the Database

You can load small or very large CSV flat files into the database using external tables and SQL. Figure 14–1 shows the architectural components involved with using an external table to view and load data from an operating-system file. A directory object is required that specifies the location of the operating-system file. The CREATE TABLE...ORGANIZATION EXTERNAL statement creates a database object that SQL*Plus can use to directly select from the operating-system file.

Figure 14–1. Architectural components of an external table used to read a flat file

Here are the steps for using an external table to access an operating-system flat file:

1. Create a database-directory object that points to the location of the CSV file.

2. Grant read and write privileges on the directory object to the user creating the external table. I usually use a DBA-privileged account, so I don’t need to perform this step.

3. Run the CREATE TABLE...ORGANIZATION EXTERNAL statement.

4. Use SQL*Plus to access the contents of the CSV file.

In this example, the flat file is named ex.csv and is located in the /oraet directory. It contains the following data:

5|2|0|0|12/04/2009|Half

6|1|0|1|03/31/2010|Quarter

7|4|0|1|05/07/2010|Full

8|1|1|0|04/30/2010|Quarter


Note
Some of the CSV file examples used in this chapter are actually separated by characters other than a comma, such as a pipe (|) character. The character used depends on the data and the user who supplies the CSV file. CSV files are also commonly referred to as flat files.


Creating a Directory Object and Granting Access

First, create a directory object that points to the location of the flat file on disk:

SQL> create directory exa_dir as '/oraet';

This example uses a database account that has the DBA role granted to it; therefore, you don’t need to grant READ and WRITE on the directory object to the user (your account) that is accessing it. If you’re not using a DBA account to read from the directory object, then grant these privileges to the account using this object:

SQL> grant read, write on directory exa_dir to reg_user;

Creating an External Table

Next, fashion the script that creates the external table that will reference the flat file. The CREATE TABLE...ORGANIZATION EXTERNAL statement provides the database with the following information:

• How to interpret data in the flat file, and a mapping of the data in the file to the column definitions in the database

• A DEFAULT DIRECTORY clause that identifies the directory object, which in turn specifies the directory of the flat file on disk

• The LOCATION clause, which identifies the name of the flat file

The next statement creates a database object that looks like a table yet is able to retrieve data directly from the flat file:

create table exadata_et(
  exa_id        NUMBER
 ,machine_count NUMBER
 ,hide_flag     NUMBER
 ,oracle        NUMBER
 ,ship_date     DATE
 ,rack_type     VARCHAR2(32)
)
organization external (
  type oracle_loader
  default directory exa_dir
  access parameters
  (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
    (exa_id
    ,machine_count
    ,hide_flag
    ,oracle
    ,ship_date char date_format date mask "mm/dd/yyyy"
    ,rack_type)
  )
  location ('ex.csv')
)
reject limit unlimited;


An external table named EXADATA_ET is created when you execute this script. Now, use SQL*Plus to view the contents of the flat file:

SQL> select * from exadata_et;

    EXA_ID MACHINE_COUNT  HIDE_FLAG     ORACLE SHIP_DATE RACK_TYPE
---------- ------------- ---------- ---------- --------- ---------
         5             2          0          0 04-DEC-09 Half
         6             1          0          1 31-MAR-10 Quarter
         7             4          0          1 07-MAY-10 Full
         8             1          1          0 30-APR-10 Quarter
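From here, loading the flat-file data into a regular database table takes one statement. A minimal sketch, using a hypothetical table named EXADATA:

SQL> create table exadata as select * from exadata_et;

Equally, an INSERT ... SELECT statement can apply any required filtering or transformation as part of the load.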

Viewing External-Table Metadata

At this point, you can also view metadata regarding the external table. Query the DBA_EXTERNAL_TABLES view for details:

select
  owner
 ,table_name
 ,default_directory_name
 ,access_parameters
from dba_external_tables;

Here’s a partial listing of the output:

OWNER     TABLE_NAME        DEFAULT_DIRECTO    ACCESS_PARAMETERS
--------- ----------------- ------------------ ---------------------------
