Recovery Manager Features

RMAN is an Oracle utility that you use to manage the backup, restore, and recovery operations on Oracle databases. RMAN has a powerful command language that is independent of the operating system.

RMAN provides several features not available when you make user-managed backups with
operating system commands.

  • Frequently executed operations can be stored as scripts in the database.
  • With block change tracking enabled in the database, RMAN can limit incremental backups to recording only those blocks that have changed since the previous backup.
  • RMAN can be used to manage the size of backup pieces and save time by parallelizing the backup operation.
  • RMAN can recover an individual corrupt data block or set of data blocks within a data file rather than restoring and recovering the entire data file.
  • RMAN operations can be integrated with the Oracle Database Scheduler to automate
    backup operations.
  • You can use RMAN to detect block corruption. Information about block corruption
    detected during a backup can be obtained from the V$DATABASE_BLOCK_CORRUPTION view.
  • RMAN provides performance enhancements such as:
            – Automatic parallelization of backup, restore, and recovery operations
            – No generation of extra redo during online database backups
            – Backups can be restricted to limit reads per file, per second, to avoid interfering with OLTP work
            – Prevention of flooding of any one file with reads and writes while still keeping a tape drive streaming, using multiplexing
  • RMAN has a media management API to work seamlessly with third-party media
    management tools that interface with storage devices, providing increased speed and reliability.
  • Under the user-managed method, you need to keep track of all database files and backups. In a recovery situation, you must locate backups for each data file, copy them to the correct place using operating system commands, and choose which redo log files to apply. RMAN manages these tasks automatically.
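A few of these features can be sketched with RMAN commands (a hedged illustration; the script name, parallelism degree, and file/block numbers are made up for the example):

```sql
-- Store a frequently executed operation as a script (requires a recovery catalog)
RMAN> CREATE SCRIPT nightly_incremental {
        BACKUP INCREMENTAL LEVEL 1 DATABASE;
      }

-- Parallelize backup operations across multiple disk channels
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4;

-- Recover an individual corrupt block instead of the whole data file
-- (10g syntax shown; 11g uses RECOVER DATAFILE 4 BLOCK 23;)
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 23;
```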

Reading Alert Log

In Oracle 10g, an external table was required to read the alert log from SQL, but Oracle 11g introduced a fixed table that can be used to access the alert log from SQL: X$DBGALERTEXT. Oracle reads log.xml from the alert directory (which contains all the data that alert.log does), parses it, and returns the details as rows.

Beginning with Release 11g, the alert log is written both as an XML file and as a text file (as in previous releases). The default location of both files is the new ADR home (Automatic Diagnostic Repository, the new diagnostic destination in 11g).

The database, Automatic Storage Management (ASM), Cluster Ready Services (CRS), and other Oracle products or components store all diagnostic data in the ADR.

The ADR location is set by using the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted, ADR defaults to $ORACLE_BASE, or to $ORACLE_HOME/log if ORACLE_BASE is not set.

The location of an ADR home is given by the following path, which starts at the ADR base directory:

<ADR_base>/diag/rdbms/<db_name>/<instance_name>
V$DIAG_INFO can be queried to check ADR locations, including the ADR base and the ADR home.
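For example, on 11g the key locations can be listed with a query like the following (the name values are the standard V$DIAG_INFO entries):

```sql
SQL> SELECT name, value
     FROM   v$diag_info
     WHERE  name IN ('ADR Base', 'ADR Home', 'Diag Trace', 'Diag Alert');
```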

Within the ADR home directory are subdirectories:

alert  – the XML-formatted alert log (log.xml)
trace  – trace files and the text alert.log file
cdump  – core dump files

message_text can contain multiple lines of ORA- errors and related trace file information, but for monitoring purposes only the first error code is needed.

SELECT   record_id,
         substr(message_text, instr(message_text,'ORA-'),
                instr(message_text,chr(10),1,2) - instr(message_text,'ORA-')) message_text
FROM     X$DBGALERTEXT
WHERE    originating_timestamp > sysdate - 1
AND      message_text like '%ORA-%'
ORDER BY record_id desc;

Query to get errors for the past day:

SELECT DISTINCT originating_timestamp, message_text
FROM     X$DBGALERTEXT
WHERE    originating_timestamp > sysdate - 1
AND      (message_text like '%ORA-00600%' OR
          message_text like '%Fatal%');

Query to get errors for the last 10 minutes:

SELECT   rownum "line", message_text "error"
FROM     X$DBGALERTEXT
WHERE    originating_timestamp > (sysdate - 10/1440)
AND      message_text like '%ORA-%'
ORDER BY originating_timestamp;


Further Reading:

Oracle 11g Reading Alert Log Via SQL
MOS Note 1072547.1 – How To Edit, Read, and Query the Alert.Log
Oracle 11g Automatic Diagnostic Repository

Backup Concepts

Database Backup


A whole database backup includes all data files and at least one control file.

Full backups make a copy of every data block within the files being backed up that contains data.

Partial database backups may include zero or more tablespaces, zero or more data files, and may or may not include a control file.


Incremental backups make a copy of all data blocks that have changed since some previous backup. Oracle Database 10g supports two levels of incremental backup (0-1). A level 0, or baseline, backup is equivalent to a full backup and contains all data blocks. A level 1 incremental backs up all database blocks changed since the level 0 backup. To restore using incremental backups, the baseline backup must be restored first, and then the incremental backups applied.
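In RMAN terms, that strategy might look like this (standard commands; run from an RMAN session connected to the target database):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;  -- baseline: copies all used data blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;  -- copies only blocks changed since level 0
```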

Offline backups, or cold backups (also known as consistent backups), are taken while the database is not open. They are consistent because, at the time of the backup, the SCN in the data file headers matches the SCN in the control file.

Online backups (also known as hot or inconsistent backups) are taken while the database is open. The backups are inconsistent because, with the database open, there is no guarantee that the data files are synchronized with the control files. Inconsistent backups require recovery before they can be used. Online backup requires the database to be in ARCHIVELOG mode.


Modes of Backups


If the database is in NOARCHIVELOG mode, online redo logs can be overwritten without making sure they are saved. This implies that the database cannot be recovered fully even if backups were made: it can only be restored to the time of the last backup, not rolled forward to the last transaction.


If the database is in ARCHIVELOG mode, the database makes sure that online redo logs are not overwritten before they have been archived. The database can be recovered to the last committed transaction.
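The current mode can be checked, and ARCHIVELOG mode enabled, with standard commands (the database must be cleanly shut down and mounted, not open, to switch modes):

```sql
SQL> ARCHIVE LOG LIST
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
```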

Types of Backups

i. Physical Backup

Physical backups are backups of the physical files used in storing and recovering your database, such as datafiles, control files, and archived redo logs.

ii. Logical Backup

Logical backups contain logical data (tables or stored procedures) exported from a database with an Oracle export utility and stored in a binary file, for later re-importing into a database using the corresponding Oracle import utility.


Backup Types based on Physical Backup

For performing backup and recovery based on physical backups, you have two solutions available:

  • Recovery Manager (RMAN), a tool (with command-line client and Enterprise Manager GUI interfaces) that integrates with sessions running on the Oracle server to perform a range of backup and recovery activities, as well as maintaining a repository of historical data about your backups
  • The traditional user-managed backup and recovery, where you directly manage the files that make up your database with a mixture of host operating system commands and SQL*Plus backup and recovery-related capabilities

Recovery Manager is the preferred solution for database backup and recovery. It can perform the same types of backup and recovery available through user-managed methods more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.


Image Copies and Backup Sets


Image copies are duplicates of data or archived log files (similar to simply copying the files using OS commands).

Backup sets are copies of one or more data or archived log files. With backup sets, empty data blocks are not stored, so backup sets use less space on disk or tape. Backup sets can be compressed to further reduce the space requirements of the backup.

Image copies can be written only to disk, whereas backup sets can be written to disk or directly to tape.
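For example (standard RMAN syntax):

```sql
RMAN> BACKUP AS COPY DATABASE;                   -- image copies; disk only
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;   -- backup set; disk or tape
```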

Oracle Database Architecture


Database Architecture                   

An Oracle database server consists of an Oracle database and an Oracle instance.

Oracle Database architecture is based on physical and logical structures. 


Oracle Database Physical Structure

An Oracle database consists of physical files that store information. Three types of physical files are required for the database; several other supporting files also exist:

Core Files:

  • Data Files – contain all the data of the database
  • Control Files – contain data about the database (metadata)
  • Redo Log Files – contain all changes made to data; allow recovery

Other Files

  • Parameter File – defines how the instance is configured when it starts up
  • Password File – allows users to connect remotely to the database as privileged users
  • Archive Log Files – contain the history of the redo log files
  • Trace Files – contain messages written by server and background processes
  • Alert File – a special trace file; a chronological log of messages and errors



Oracle Database Logical Structure

An Oracle database is divided into logical storage units known as tablespaces, which can be used to group related logical structures together. The logical structure consists of the following units, from the largest down to the smallest: tablespace, segment, extent, and data block.


Each database is logically divided into one or more tablespaces. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The combined size of the datafiles in a tablespace is the total storage capacity of the tablespace.


A tablespace consists of one or more segments. A segment is a set of extents allocated for a certain logical structure stored in the same tablespace. A segment can be one of the following types:

  • Data segment (stores user data within the database)
  • Index segment (stores indexes)
  • Temporary segment (created when a SQL statement needs a temporary work area)
  • Rollback/Undo segment (stores undo information used to roll back changes)


An extent is a specific number of contiguous data blocks, obtained in a single allocation, used to store a specific type of information in one data file.

Data Blocks

Oracle database data is stored in data blocks, the lowest logical level. One data block corresponds to a specific number of bytes of physical database space on disk (8 KB by default). The standard block size is specified by the DB_BLOCK_SIZE initialization parameter.

Oracle data blocks are the smallest unit of I/O in the database, and they are mapped to OS blocks on the storage device.
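The configured block sizes can be checked with standard dictionary queries:

```sql
SQL> SHOW PARAMETER db_block_size
SQL> SELECT tablespace_name, block_size FROM dba_tablespaces;
```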




Oracle Instance

An Oracle instance comprises the memory structures and background processes required to keep the instance and database running. The instance does not exist until the database is started, which involves reading the initialization parameter file and configuring the instance according to the information in that file.

Memory Structures

An instance has two major memory structures: the System Global Area (SGA), which is shared by all server and background processes, and the Program Global Area (PGA), which is private to each server and background process.

Background Processes

Oracle creates a set of background processes for an instance that  manage the memory structures, asynchronously perform I/O to write data to disk, and do general housekeeping.
The most common background processes are the following:

  • System monitor (SMON): Performs crash recovery when the instance starts after a failure
  • Process monitor (PMON): Performs process cleanup when a user process fails
  • Database writer (DBWn): Writes modified blocks from the database buffer cache to the files on disk
  • Checkpoint (CKPT): Signals DBWn at checkpoints and updates all of the data files and control files of the database to indicate the most recent checkpoint
  • Log writer (LGWR): Writes redo log entries to disk
  • Archiver (ARCn): Copies the redo log files to archival storage when the log files are full or a log switch occurs
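The background processes running in an instance can be listed from V$BGPROCESS (started processes have a non-zero paddr):

```sql
SQL> SELECT name, description
     FROM   v$bgprocess
     WHERE  paddr <> '00';
```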


Shutdown Hangup Scenario

If SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE hangs and you try to connect as SYS, you get connected either to an idle instance or simply "Connected", which is in reality a fake connection. In one particular scenario, I changed ORACLE_HOME from EM and then issued a plain SHUTDOWN command, which took a long time to complete. I then tried the SHUTDOWN IMMEDIATE and SHUTDOWN ABORT commands and got the following errors.

SQL*Plus: Release Production on Sat Mar 5 19:52:01 2011

Enter user-name: sys as sysdba
Enter password:
SQL>  shutdown immediate
ORA-24324: service handle not initialized
ORA-24323:  value not allowed
ORA-01090: shutdown in progress - connection is not  permitted

SQL> shutdown abort
ORA-01031: insufficient privileges

SQL> select instance from v$instance;
select instance from  v$instance
* ERROR at line 1: ORA-01012: not logged on
Process ID:  0
Session ID: 0 Serial number: 0

In this scenario, "connect / as sysdba" appeared to succeed but had actually failed. Then, when I tried SHUTDOWN ABORT, I got ORA-01031 because I wasn't actually connected.
It's interesting to note that if you connect as SYSDBA and the connection is successful, you see something like: "Connected to: Oracle Database 10g Enterprise Edition Release – 64bit Production With the Partitioning, OLAP and Data Mining options". If you fail to connect, because the process table is full, you'll get just: "Connected."
So, "Connected." means you *failed* to connect!

ORA-01012: not logged on (check this link for another reason why Oracle shows “not logged on” error)


Although SHUTDOWN ABORT forces instance recovery on the next startup, it becomes necessary if you want to accomplish a repetitive task at a specific time. You can combine it with a restricted startup followed by a clean SHUTDOWN IMMEDIATE, and it is best run through a script.

# Source the db.env to set the Oracle environment
ORACLE_HOME=`cat /etc/oratab | grep ^$ORACLE_SID: | cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
shutdown abort
startup restrict
shutdown immediate
EOF

Debugging a hung database

Starting with Oracle 11g, SQL*Plus has an option called "prelim". This option is very useful for running oradebug and other utilities that do not require a real connection to the database.

C:\ACE> sqlplus -prelim



SQL> set _prelim on
SQL> connect / as sysdba

Run oradebug commands to diagnose a hung database issue using the hanganalyze utility:

SQL> oradebug hanganalyze

Wait for some time to identify process state changes.

SQL> oradebug hanganalyze

Open a separate SQL session and immediately generate a system state dump.

SQL> alter session set events 'immediate trace name SYSTEMSTATE level 10';

Further Reading:

Fix Oracle Hanging

Performance Monitoring

List of Common Oracle Metrics

Monitoring Methodologies


  • Reactive
  • Proactive

Reactive monitoring is not a recommended approach, but at times it becomes inevitable. Oracle 10g provides tools that aid in proactively monitoring the database. Server-generated alerts and the Automatic Database Diagnostic Monitor (ADDM) are discussed in another post.


Database and Instance Metrics

Performance measurements are commonly referred to as database metrics; a metric is basically the rate of change of a cumulative statistic that needs to be monitored. Metrics are indicators of the health of various database targets. Excerpt from Oracle Database 10g New Features by Rampant TechPress:

“The Metrics are the statistics derived from base statistics. They represent the delta values between the snapshot periods. Metrics are used by internal components (clients) for system health monitoring, problem detection and self-tuning. There are hundreds of different measurement points available to monitor database system.

Each metric is also associated with a metric name. You can query the view v$metricname to find the names of all the metrics.

SQL> select METRIC_NAME, METRIC_UNIT from v$metricname; "


Metric               Description
Event Class Metrics  Metrics collected at the wait event class level, e.g. DB_TIME_WAITING
Event Metrics        Metrics collected on various wait events
File Metrics         Long-duration metrics collected at the file level, e.g. AVERAGE_FILE_WRITE_TIME
Service Metrics      Metrics collected at the service level, e.g. CPU_TIME_PER_CALL
Session Metrics      Short-duration metrics collected at the session level, e.g. BLOCKED_USERS
System Metrics       Short-duration metrics collected at the system level
Tablespace Metrics   Metrics collected at the tablespace level, e.g. TABLESPACE_PCT_FULL


Viewing Metrics

The MMON background process collects database metrics continuously and automatically saves them in the SGA for one hour. It also transfers the in-memory AWR statistics to disk on a regular basis (in snapshots). Performance metrics are available through:

  • Data Dictionary
  • Dynamic Performance Views
  • Optimizer Statistics

Data Dictionary Metrics

Data dictionary metrics provide information about space consumption and object status. Database indexes and stored procedures both need to be VALID to be used.

Unusable indexes always require DBA intervention to restore them to a valid state. Invalid
PL/SQL objects will normally recompile automatically the first time they are called, but
sometimes require DBA intervention in cases where the automatic recompilation fails.
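Invalid objects can be found and recompiled manually with standard commands (the owner and object names below are illustrative):

```sql
SQL> SELECT owner, object_name, object_type
     FROM   dba_objects
     WHERE  status = 'INVALID';

SQL> ALTER PROCEDURE scott.my_proc COMPILE;
SQL> ALTER PACKAGE   scott.my_pkg  COMPILE BODY;
```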

Compiling PL/SQL procedures and Packages

Indexes can become unusable due to normal maintenance operations on tables. Unusable indexes can be rebuilt to make them valid again.
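Finding and rebuilding unusable indexes can be done with standard commands (the index name is illustrative):

```sql
SQL> SELECT owner, index_name
     FROM   dba_indexes
     WHERE  status = 'UNUSABLE';

SQL> ALTER INDEX scott.emp_name_idx REBUILD;
```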

Rebuilding Indexes


Viewing In-Memory Metrics

All system-related metrics are saved in memory, and they can be viewed using dynamic performance views. Most of the metrics can be viewed through views such as V$METRIC, V$SYSMETRIC, and V$SYSMETRIC_HISTORY.


Example of few system metrics maintained in V$SYSMETRIC view:

  • Buffer Cache Hit Ratio
  • CPU Usage Per Sec
  • Disk Sort Per Sec
  • Host CPU Utilization (%)
  • Library Cache Hit Ratio
  • SQL Service Response Time
  • Shared Pool Free (%)
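For example, the current value of one of these metrics can be read directly (note that V$SYSMETRIC keeps both a long- and a short-duration group, so the same metric name may appear more than once):

```sql
SQL> SELECT metric_name, value, metric_unit
     FROM   v$sysmetric
     WHERE  metric_name = 'Host CPU Utilization (%)';
```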

Viewing Saved Metrics

Every 60 minutes, MMON moves metric information from the SGA to disk through the AWR snapshot mechanism. The metric data that AWR collects through MMON is permanently stored in DBA_HIST_* views such as DBA_HIST_SYSMETRIC_SUMMARY and DBA_HIST_SYSMETRIC_HISTORY.
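For example, to see how a metric has trended across AWR snapshots (columns per DBA_HIST_SYSMETRIC_SUMMARY):

```sql
SQL> SELECT snap_id, begin_time, average, maxval
     FROM   dba_hist_sysmetric_summary
     WHERE  metric_name = 'Host CPU Utilization (%)'
     ORDER  BY snap_id;
```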


Optimizer Statistics

Optimizer statistics for tables and indexes are stored in the data dictionary. These statistics are not intended to provide real-time data. They provide the optimizer with a statistically accurate snapshot of data storage and distribution, which the optimizer uses to make decisions on how to access data.

Metrics collected include:

  • Size of the table or index in database blocks
  • Number of rows
  • Average row size and chain count (tables only)
  • Height and number of deleted leaf rows (indexes only)

As data is inserted, deleted, and modified, these values change. The performance impact of maintaining real-time data distribution statistics would be prohibitive, so these statistics are updated by periodically gathering statistics on tables and indexes.
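Statistics are typically gathered with the DBMS_STATS package and can then be inspected in the dictionary (the schema and table names below are illustrative):

```sql
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');

SQL> SELECT num_rows, blocks, avg_row_len
     FROM   dba_tables
     WHERE  owner = 'SCOTT' AND table_name = 'EMP';
```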


SQL*Loader is a bulk loader utility for moving data from external files (flat files) into an Oracle database.

SQL*Loader reads from one or more data files (usually flat files) and a control file (a file that describes the data to be loaded; it has nothing to do with the database control files). During loading, information about the load is written to the log file, bad rows are written to the bad file, and discarded rows to the discard file.

SQL*Loader environment

SQL*Loader Control File

The control file is a text file that contains DDL instructions and provides the following information to SQL*Loader:

  • The name and location of the input data file
  • The format of the records in the input data file
  • The name of the table(s) to be loaded
  • Table and field list information
  • SQL*Loader configuration (memory management, record rejection criteria, interrupted load handling, and so on)
  • The names and locations of the bad file and discard file

SQL*Loader Options:

Following are few commonly used Options in control file:

  SKIP = n    (number of records to skip before loading)
  ERRORS = n  (number of erroneous records allowed)
  ROWS = n    (number of rows to load before committing)
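These options can also be passed on the command line when invoking SQL*Loader (file names and credentials are illustrative):

```shell
$ sqlldr userid=scott/tiger control=example.ctl log=example.log skip=1 errors=50 rows=1000
```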


If there is a small amount of data, it is possible for the control file to contain the actual data to be loaded. This is achieved by using the BEGINDATA clause before the actual data in the control file.

Case Study for simple control file including data
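A minimal control file with inline data might look like this (the table and column names are illustrative; INFILE * tells SQL*Loader the data follows BEGINDATA):

```sql
load data
infile *
into table example
fields terminated by ','
(col1, col2)
begindata
0001,abcd
0002,efgh
```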

SQL*Loader Input Data Files

SQL*Loader reads data from one or more files organized as records in one of three formats:

  • Fixed-record format
  • Variable-record format
  • Stream-record format

Fixed-Record Format

A file is in fixed-record format when all records in a data file are the same byte length.

INFILE 'filename' "fix n"

The following example shows a control file that specifies a fixed-record-format data file. The data file contains two physical records. The first record is [0001, abcd], which is exactly nine bytes long (using a single-byte character set); the carriage return is the tenth byte.

Control File:

load data
infile 'example.dat' "fix 10"
into table example
fields terminated by ','
(col1, col2)




Variable-Record Format

A file is in variable-record format when the length of each record is included in a character field at the beginning of each record in the data file.

INFILE 'filename' "var n"   (n is the number of bytes in the record-length field)

The following example tells SQL*Loader to look for data in the example.dat data file and to expect variable-record format where the record-length fields are 3 bytes long. The example.dat data file consists of two physical records. The first is specified to be 009 (nine) bytes long, the second is 010 bytes long (including a one-character newline).


Control File:

load data
infile 'example.dat' "var 3"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))




Stream-Record Format

A file is in stream-record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator.

INFILE 'filename' "str terminator_string"

The following example illustrates loading data in stream-record format where the terminator string is specified using the character string '|\n'.

Control File:

load data
infile 'example.dat' "str '|\n'"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))




* Filler fields in the control file are fields that have names but are not loaded into the table.

Loading Methods

SQL*Loader provides two methods for loading data:

• Conventional path

• Direct path

Conventional Path Load

Conventional path load builds an array of rows to be inserted and uses the SQL INSERT statement to load the data. During conventional path loads, input records are parsed based on field specifications, and an array of records is built and inserted into the table specified in the control file.

Direct Path Load

A direct path load builds blocks of data in memory and saves these blocks directly into the extents allocated for the table being loaded. Online redo log file entries are not generated unless the database is in ARCHIVELOG mode. Direct path loads use the field specifications to build whole Oracle blocks of data, and write the blocks directly to Oracle data files above the high water mark.

The high-water mark is the highest point to which data has been written into the table so far. Direct path load bypasses the database buffer cache and accesses the SGA only for extent management and adjustment of the high-water mark.
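A direct path load is requested on the SQL*Loader command line with DIRECT=TRUE (file names and credentials are illustrative):

```shell
$ sqlldr userid=scott/tiger control=example.ctl direct=true
```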

Comparing Direct and Conventional Path Loads

Conventional Path Load                                   Direct Path Load
Uses COMMIT to make changes permanent                    Uses data saves
Redo entries always generated                            Generates redo only under specific conditions
Enforces all constraints                                 Enforces only PRIMARY KEY, UNIQUE, and NOT NULL
INSERT triggers fire                                     INSERT triggers disabled at the start, re-enabled at the end
Can load into clustered tables                           Cannot load into clustered tables
Does not lock the table being loaded, so other           Locks the table in exclusive mode, so other users cannot
transactions can change the table                        make changes (except when several parallel direct load
                                                         sessions are used concurrently)


Method of Saving Data

Conventional path loads use SQL processing and a database COMMIT for saving data. The insertion of an array of records is followed by a commit operation. Direct path loads use data saves to write blocks of data to Oracle data files. The following features differentiate a data save from a COMMIT:

• During a data save, only full database blocks are written to the database.

• The blocks are written after the high-water mark of the table.

• After a data save, the high-water mark is moved.

• Internal resources are not released after a data save.

• A data save does not end the transaction.

• Indexes are not updated at each data save.

Logging Changes

Conventional path loading generates redo entries similar to any DML statement. When using a direct path load, redo entries are not generated if:

• The database is in NOARCHIVELOG mode

• The database is in ARCHIVELOG mode, but logging is disabled

Logging can be disabled by setting the NOLOGGING attribute for the table or by using the UNRECOVERABLE clause in the control file.

Conventional Vs Direct Path Load (from Shailesh's Blog)

Useful  Links:

Maximizing SQL*Load Performance( from Burleson Consulting )

Oracle FAQ on Orafaq

Intro to SQL*Loader by O’Reilly

Oracle SQL Loader Concepts on Oracle Site