
Sort activity can also affect disk I/O. Prior to Oracle Database 10g, you would get the majority of sorts to occur in memory by tuning the SORT_AREA_SIZE parameter in the initialization file. Oracle also constantly queries and updates the data dictionary stored in the SYSTEM tablespace, and this information is cached in the shared pool section of the SGA, so sizing your shared pool properly is a key to overall performance. As of Oracle Database 10g, Oracle can automatically and dynamically size the different pools in the SGA.
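As a minimal sketch (the sizes and SCOPE settings are examples only and assume the instance uses a server parameter file), automatic SGA sizing and automatic sort work-area management are enabled with two parameters:

    -- Let Oracle size the shared pool, buffer cache, and other SGA pools automatically
    ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;

    -- Replace manual SORT_AREA_SIZE tuning with an overall work-area target
    ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 1G SCOPE = BOTH;

    -- See how the individual SGA pools are currently sized
    SELECT component, current_size FROM v$sga_dynamic_components;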

Use a different device for archiving and redo log files

To avoid archiving performance issues due to I/O contention, make sure that the archive log destination uses different devices from those used for the redo logs and redo log mirrors.
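As a sketch (the directory path is illustrative), the archive destination is controlled by an initialization parameter and can be verified from the dynamic performance views:

    -- Send archived logs to a device that holds no redo logs or redo log mirrors
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u05/arch' SCOPE = BOTH;

    -- Confirm where archived logs are actually being written
    SELECT dest_name, destination, status FROM v$archive_dest WHERE status = 'VALID';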

Some other file placement issues to consider from the perspective of database availability include the following:

If you are directing database backups to disk, store the backups on devices that don’t
contain any database components

This protects you from losing both the database and the backups needed to restore it if a single I/O device fails.

Make sure the device used for the archive log destination doesn’t contain any database
components or database backups

If the failure of a single device can result in the loss of both database components and archived redo logs, or of both backups and archived redo logs, recovery is endangered. A query for verifying current file placement is sketched below.
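A quick way to review current placement is to list every file the database knows about and compare the results against your backup and archive log destinations; this minimal sketch uses standard dynamic performance views:

    -- Datafiles, redo log members, and control files, by full path
    SELECT name   FROM v$datafile
    UNION ALL
    SELECT member FROM v$logfile
    UNION ALL
    SELECT name   FROM v$controlfile;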

Fault-tolerant disk arrays don’t eliminate the need for a sound backup and recovery strategy. Fault-tolerant storage merely reduces the likelihood of undergoing database recovery due to the failure of a single drive. For full coverage of Oracle databases and high availability, see Chapter 11.


Using RAID Disk Array Technology

One of the most powerful ways to reduce performance bottlenecks due to disk I/O is the use of RAID disk arrays. RAID stands for Redundant Array of Inexpensive (or Independent) Disks and is used to group disks into arrays for two reasons: redundancy and performance. The use of RAID for redundancy is detailed in
Chapter 11.

Our focus in this chapter is on the performance aspects of RAID technology.

RAID Basics

RAID disk arrays provide a hardware solution for both reliability and performance.

There are different levels of RAID hardware; the following are most relevant to performance:

RAID-0

Where availability isn’t a concern, the disks can be configured as RAID-0, which is nonredundant disk striping.

RAID-1

Provides the simplest form of redundancy, full duplication of data, which is referred to as mirroring.

RAID-0+1

Combines the one-to-one mirroring of RAID-1 with the striping of RAID-0.

RAID-3

Provides redundancy by storing parity information on a single disk in the array.

This parity information can help to recover the data on other disks, should they fail. RAID-3 saves on disk storage compared to RAID-1, but isn’t often used because the parity disk can be a bottleneck.

RAID-5

Uses parity data for redundancy in a way that is similar to RAID-3, but stripes the parity data across all of the disks, like the way in which the actual data is striped.

This alleviates the bottleneck on the parity disk.

There are additional levels of RAID, including RAID-6, which adds dual parity data, and RAID-7 and RAID-8, which add performance enhancements to the characteristics of RAID-5.

RAID groups disk drives into arrays to automatically spread I/O operations across multiple spindles, reducing contention on individual drives. For example, suppose you place a datafile containing an index on a single drive. If multiple processes use the index simultaneously, they will all issue I/O requests to the one disk drive, resulting in contention for the use of that drive.


Instead, suppose you placed the same datafile on a “disk” that was actually an array of five physical disks. Each physical disk in the array can perform I/O operations independently on different data blocks of the index, automatically increasing the amount of I/O Oracle can perform without causing contention.
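One way to see whether a particular datafile, and therefore the device beneath it, is attracting a disproportionate share of I/O is to check the per-file statistics; a simple sketch:

    -- Physical reads and writes per datafile since instance startup
    SELECT d.name, f.phyrds, f.phywrts
    FROM   v$filestat f, v$datafile d
    WHERE  f.file# = d.file#
    ORDER  BY f.phyrds + f.phywrts DESC;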

Simply using disk arrays won’t, by itself, give you optimal I/O performance. As discussed earlier, you also need to logically place the different types of Oracle files across the available drives, even if the drives are grouped into arrays. As of Oracle Database 10g, striping considerations are made simpler through Automatic Storage Management (ASM). ASM provides automatic striping and rebalancing of stripe sets. By default, ASM also provides automated mirroring.
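For example, a disk group created in the ASM instance is striped across all of its disks and, with NORMAL redundancy, mirrored two ways by default; the disk group name and device paths below are illustrative:

    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2',
           '/dev/raw/raw3', '/dev/raw/raw4';

    -- Adding a disk later triggers an automatic rebalance of the stripe sets
    ALTER DISKGROUP data ADD DISK '/dev/raw/raw5';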

Volume managers

With host-based striping, logical volume-management software runs on the database server. Examples of this type of software often used under older Oracle database releases include Hewlett-Packard’s Logical Volume Manager (LVM) and Veritas Software’s Volume Manager. The LVM acts as an interface between the operating system that requests I/O and the underlying physical disks. Volume-management software groups disks into arrays, which are then seen by the operating system as single “disks.” The actual disks are usually individual devices attached to controllers or disks contained in a prepackaged array containing multiple disks and controllers.

This striping is handled by the volume-management software and is completely transparent to Oracle. Figure 7-1 illustrates host-based volume management.

Figure 7-1. Host-based volume management


Oracle began providing its own volume manager software for Linux and Windows in Oracle9i Release 2. Since Oracle Database 10g, database releases for all supported operating systems include a cluster file system and volume manager in the database that is leveraged by ASM. When using ASM, it is recommended that you not try to leverage an operating system volume manager.
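In practice, that means pointing the database at an ASM disk group rather than at volume-manager file systems; a sketch, assuming a disk group named DATA already exists:

    -- New database files are created (and striped) inside the disk group
    ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+DATA' SCOPE = BOTH;

    -- No file name or path is needed; ASM places and stripes the datafile
    CREATE TABLESPACE app_data DATAFILE SIZE 10G;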

Dedicated storage subsystems

Dedicated storage systems, often referred to as disk farms, contain disks, controllers, CPUs, and (usually) memory used as an I/O cache. Vendors include EMC, Network Appliance, Hewlett-Packard, IBM, and Sun. These subsystems offload the task of managing the disk arrays from the database server. The I/O subsystem is attached to the server using controllers. These dedicated storage devices are sometimes grouped into storage area networks (SANs) to denote their logical organization as a separate set of networked devices. The disk arrays are defined and managed within the dedicated I/O subsystem, and the resulting logical “disks” are seen by the operating system as physical disks.

This type of disk-volume management is completely transparent to the database server and offers many benefits:

• The database server does not spend CPU resources managing the disk arrays.

• The I/O subsystem uses memory for an I/O cache, so the performance of Oracle I/O can improve significantly (for example, from an average I/O time of 10–12 milliseconds to 3–5 milliseconds).

• Write I/O is completed as soon as the data has been written to the subsystem’s cache.

• The I/O subsystem will destage the data from cache to actual disk later.

• Read I/O can be satisfied from the cache. The subsystem can employ some type of algorithm to sense I/O patterns and preload the cache in anticipation of pending read activity.

Note that you must back up the cache with some type of battery so a power failure doesn’t result in the loss of data that was written to the cache but hasn’t yet been destaged to the physical disk. Otherwise, data that Oracle assumes made it to disk may be lost, thereby potentially corrupting the database. Figure 7-2 illustrates a database server with a dedicated I/O subsystem.

Combined host-based and I/O subsystem volume management

In this configuration, disks are grouped into arrays within the I/O subsystem and grouped again into coarser arrays using operating system volume management. On EMC systems, for example, the physical disks are grouped into either RAID-1 mirrored disk pairs or into a RAID-S striped configuration using four disks per stripe set.


Figure 7-2. Dedicated I/O subsystems

(RAID-S is the term EMC [http://www.emc.com] uses for its specialized striping hardware and software.)

Using EMC technology as an example, the operating system sees horizontal sections of disk space across each RAID-1 disk or RAID-S array as single “disks.” You can use the operating system volume management to group these “disks” into arrays. With RAID-1 disks, this configuration delivers the benefits of using a dedicated I/O subsystem with its own cache and processing power while leveraging striping for simplicity. With RAID-S arrays you get the benefit of the dedicated I/O subsystem and further simplify disk management by a striping multiplier effect. An array of five “disks” at the operating system level could map back to five arrays of four disks each in the I/O subsystem. This configuration maps a logical disk seen by Oracle to 20 physical disks in the underlying I/O subsystem. Figure 7-3 illustrates a logical drive on the database server mapping to horizontal sections across multiple RAID-S arrays.

Flexibility, Manageability, and Disk Arrays

Many systems today use some type of RAID technology that groups multiple individual disk drives, also referred to as spindles, into arrays. Each disk array is then treated as a single logical disk for the purpose of planning I/O. Striping allows you to simply spread I/O across multiple disks, without incurring the planning and administrative overhead of dealing with many individual disk drives.

Figure 7-3. Combining host-based striping and an EMC I/O subsystem

The decision about how many disks should be in each array is often the topic of intense debate. At one extreme, using multiple disks without grouping any of them into arrays provides the most control and flexibility because every disk is visible and can be targeted in isolation by placing certain files on each disk. However, this approach requires more planning and can result in more ongoing administration, because you will have to deal with every individual disk drive. As databases become larger and larger, this approach can become unmanageable.

At the other extreme, you can group all disks into one single array, seen by the operating system and Oracle as a single “disk.” This makes for extremely simple planning and administration; no effort is required to analyze where you should place the various files, as there is only one “disk.” However, this approach sacrifices flexibility and leads to brute-force solutions to I/O bottlenecks. If I/O performance across the array is unsatisfactory, the solution is to add more controllers and disks. The entire set of disks becomes a black box that either works or doesn’t work as a unit.


The most useful configuration is one that balances manageability with flexibility. For example, consider a system with 1,000 disks. Neither a single array of 1,000 disks nor a set of 1,000 individual disks is likely to be appropriate. Perhaps 50 arrays of 20 disks each would provide the needed I/O performance without any undue administrative burden. If less flexibility is needed, 20 arrays of 50 disks are more suitable. On the other hand, grouping all the disks into one array may be the simplest way to manage a system with only five disks. For the “right” answer, you must assess your needs to determine the appropriate balance.

Oracle Database 10g simplified this by automating the striping and stripe set rebalancing process. ASM divides files into 1 MB extents and spreads the extents evenly across each disk group. Pointers are used to track placement of each extent (instead of using a mathematical function such as a hashing algorithm to stripe the data). So when the disk group configuration changes, individual extents can be moved. In comparison to traditional algorithm-based striping techniques, the need to rerun that algorithm and reallocate all of the data is eliminated. Extent maps are updated when rebalancing the load after a change in disk configuration, opening a new database file, or extending a database file by enlarging a tablespace. By default, each 1 MB extent is also mirrored, so management of redundancy is also simplified. Mirroring can be extended to triple mirroring or can be turned off. Although you still have to consider how many disk groups to use, implementation of these groups with striping and redundancy is automated with ASM.
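As a sketch of how this looks in practice (the disk group name, device path, and rebalance power are illustrative), adding a disk starts a background rebalance that simply relocates individual extents, and its progress can be watched from the ASM instance:

    -- POWER controls how aggressively ASM moves extents onto the new disk
    ALTER DISKGROUP data ADD DISK '/dev/raw/raw9' REBALANCE POWER 4;

    -- Monitor the rebalance operation
    SELECT group_number, operation, state, power, est_minutes
    FROM   v$asm_operation;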
