Recovery Manager Features

RMAN is an Oracle utility that you use to manage the backup, restore, and recovery operations on Oracle databases. RMAN has a powerful command language that is independent of the operating system.

RMAN provides several features not available when you make user-managed backups with
operating system commands.

  • Frequently executed operations can be stored as scripts in the database.
  • With block change tracking enabled in the database, RMAN can limit incremental backups to recording only those blocks that have changed since the previous backup.
  • RMAN can be used to manage the size of backup pieces and save time by parallelizing the backup operation.
  • RMAN can recover an individual corrupt data block or set of data blocks within a data file rather than restoring and recovering the entire data file.
  • RMAN operations can be integrated with the Oracle Database Scheduler to automate
    backup operations.
  • You can use RMAN to detect block corruption. The information relating to the block
    corruption that is detected during backup can be obtained by using the
    V$BACKUP_CORRUPTION and V$COPY_CORRUPTION dynamic views.
  • RMAN provides performance enhancements such as:
            – Automatic parallelization of backup, restore, and recovery operations
            – No generation of extra redo during online database backups
            – Backups can be restricted to limit reads per file, per second, to avoid interfering with OLTP work
            – Prevention of flooding of any one file with reads and writes while still keeping a tape drive streaming, using multiplexing
  • RMAN has a media management API to work seamlessly with third-party media
    management tools interfacing with storage devices providing increased speed and
    reliability.
  • Under the user-managed method you need to keep track of all database files and backups. In a recovery situation you must locate backups for each data file, copy them to the correct place using operating system commands, and choose which redo log files to apply. RMAN manages these tasks automatically.
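As a small illustration of some of the features above, here is a hedged sketch of enabling block change tracking and then using incremental backup and block media recovery (the tracking-file path and the datafile/block numbers are placeholders):

```sql
-- From SQL*Plus: enable block change tracking (path is a placeholder)
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/app/oracle/rman_change_track.f';

-- From RMAN: incremental backups now read only changed blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Block media recovery of a single corrupt block
-- (BLOCKRECOVER in 10g; RECOVER ... BLOCK in 11g)
RMAN> BLOCKRECOVER DATAFILE 4 BLOCK 13;
```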

Reading Alert Log

With Oracle 10g, an external table was required to read the alert log, but luckily Oracle 11g introduced a fixed table that can be used to access the alert log from SQL: X$DBGALERTEXT. Oracle reads log.xml from the alert directory (which contains all the data that alert.log does), parses it, and returns the details as rows.

Beginning with Release 11g, the alert log is written both as an XML-formatted file and as a text file (as in previous releases). The default location of both files is the new ADR home (Automatic Diagnostic Repository, another dump destination in 11g).

The database, Automatic Storage Management (ASM), Cluster Ready Services (CRS), and other Oracle products or components store all diagnostic data in the ADR.

The ADR location is set by using the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted, the default ADR base is $ORACLE_BASE (or $ORACLE_HOME/log if ORACLE_BASE is not set).

The location of an ADR home is given by the following path, which starts at the ADR base directory:

$ORACLE_BASE/diag/product_type/product_id/instance_id

V$DIAG_INFO can be queried to check ADR locations, including the ADR base and ADR home.
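For example, a query along these lines lists the diagnostic locations:

```sql
SQL> SELECT name, value FROM v$diag_info;
```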

Within the ADR home directory are subdirectories:

alert – the XML-formatted alert log
trace – trace files and the text alert.log file
cdump – core files

message_text can contain multiple lines of ORA- errors and related trace file information, but for monitoring only the first error code is needed.

SELECT   record_id,
         substr(message_text, instr(message_text, 'ORA-'),
                instr(message_text, chr(10), 1, 2) - instr(message_text, 'ORA-')) message_text
FROM     X$DBGALERTEXT
WHERE    originating_timestamp > sysdate - 1
AND      message_text like '%ORA-%'
ORDER BY record_id desc;

Query to get errors for the past day:

SELECT DISTINCT originating_timestamp, message_text
FROM    X$DBGALERTEXT
WHERE   originating_timestamp > sysdate - 1
AND     ( message_text like '%ORA-00600%' OR
          message_text like '%Fatal%' );

Query to get errors for the last 10 minutes:

SELECT   rownum "line", message_text "error"
FROM     X$DBGALERTEXT
WHERE    originating_timestamp > (sysdate - 10/1440)
AND      message_text like '%ORA-%'
ORDER BY originating_timestamp;


Links:

Oracle 11g Reading Alert Log Via SQL

Note: 1072547.1 – How To Edit, Read, and Query the Alert.Log

Oracle 11g Automatic Diagnostic Repository

Backup Concepts

Database Backup

 

A whole database backup includes all data files and at least one control file.

Full backups make a copy of every data block within the files being backed up that contains data.

Partial database backups may include zero or more tablespaces, zero or more data files, and may or may not include a control file.

 

Incremental backups make a copy of all data blocks that have changed since some previous backup. Oracle Database 10g supports two levels of incremental backup (0 and 1). A level 0, or baseline, backup is equivalent to a full backup and contains all data blocks. A level 1 incremental backs up all database blocks changed since the level 0 backup. To restore using incremental backups, the baseline backup must be restored first, followed by the incrementals.
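Sketched as RMAN commands, the strategy looks like this:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- baseline: all data blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- only blocks changed since level 0
```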

Offline backups, or cold backups (also known as consistent backups), are taken while the database is not open. They are consistent because, at the time of the backup, the SCN in the data file headers matches the SCN in the control files.

Online backups (also known as hot or inconsistent backups) are taken while the database is open. The backups are inconsistent because, with the database open, there is no guarantee that the data files are synchronized with the control files. Inconsistent backups require recovery before they can be used. For online backups, the database must be in ARCHIVELOG mode.

 

Modes of Backups

i. NOARCHIVELOG

If the database is in NOARCHIVELOG mode, online redo logs can be overwritten without making sure they are saved. This implies that a database cannot be recovered fully (it cannot be rolled forward to the last transaction) even if backups were made.

ii. ARCHIVELOG

If the database is in ARCHIVELOG mode, the database makes sure that online redo logs are not overwritten before they have been archived. The database can then be recovered up to the last committed transaction.
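A typical sequence for enabling ARCHIVELOG mode (run as SYSDBA, with the database mounted but not open) might look like:

```sql
SQL> SELECT log_mode FROM v$database;   -- check the current mode
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
```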

Types of Backups

i. Physical Backup

Physical backups are backups of the physical files used in storing and recovering your database, such as datafiles, control files, and archived redo logs.

ii. Logical Backup

Logical backups contain logical data (tables or stored procedures) exported from a database with an Oracle export utility and stored in a binary file, for later re-importing into a database using the corresponding Oracle import utility.

 

Backup Types based on Physical Backup

For performing backup and recovery based on physical backups, you have two solutions available:

  • Recovery Manager (RMAN), a tool (with command-line client and Enterprise Manager GUI interfaces) that integrates with sessions running on the Oracle server to perform a range of backup and recovery activities, as well as maintaining a repository of historical data about your backups
  • The traditional user-managed backup and recovery, where you directly manage the files that make up your database with a mixture of host operating system commands and SQL*Plus backup and recovery-related capabilities

Recovery Manager is the preferred solution for database backup and recovery. It can perform the same types of backup and recovery available through user-managed methods more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.

 

Image Copies and Backup Sets

 

Image copies are duplicates of data or archived log files (similar to simply copying the files using OS commands).

Backup sets are copies of one or more data or archived log files. With backup sets, empty data blocks are not stored, so backup sets use less space on disk or tape. Backup sets can be compressed to further reduce the space requirements of the backup.

Image copies must be backed up to disk, whereas backup sets can be sent to disk or directly to tape.
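In RMAN the two forms are requested explicitly, for example:

```sql
RMAN> BACKUP AS COPY DATABASE;                  -- image copies (disk only)
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;  -- compressed backup set (disk or tape)
```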

Oracle Database Architecture

 

Database Architecture                   

An Oracle database server consists of an Oracle database and an Oracle instance.

Oracle Database architecture is based on physical and logical structures. 

 

Oracle Database Physical Structure

An Oracle database consists of physical files to store information. Basically, three types of physical files are required for the database:

Core Files:

  • Data Files – contain all the data of the database
  • Control Files – contain data about the database (metadata)
  • Redo Log Files – contain all changes made to data, allowing recovery

Other Files

  • Parameter File – defines how the instance is configured when it starts up
  • Password File – allows users to connect remotely to the database
  • Archive Log Files – contain the history of the redo log files
  • Trace Files – contain messages written by server and background processes
  • Alert File – a special trace file; a chronological log of messages and errors

 

 

Oracle Database Logical Structure

An Oracle database is divided into logical storage units known as tablespaces, which can be used to group related logical structures together. The logical structure mainly consists of the following units, starting with the largest and going down to the smallest.

Tablespace

Each database is logically divided into one or more tablespaces. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The combined size of the datafiles in a tablespace is the total storage capacity of the tablespace.

Segment

Tablespace consists of one or more segments. A segment is a set of extents allocated for a certain logical structure stored in the same tablespace. The segments can be of one of following types:

  • Data segment (stores user data within the database)
  • Index segment (stores indexes)
  • Temporary segment (created when a SQL statement needs a temporary work area)
  • Rollback/Undo segment (stores undo information used to roll back changes)

Extent

An extent is a specific number of contiguous data blocks, obtained in a single allocation, used to store a specific type of information in one data file.

Data Blocks

Oracle database data is stored in data blocks (the lowest logical level). One data block corresponds to a specific number of bytes of physical database space on disk (8 KB by default). The standard block size is specified by the DB_BLOCK_SIZE initialization parameter.

Oracle data blocks are the smallest unit of I/O in the database, and they are mapped to OS blocks on the storage device.
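The standard block size and any per-tablespace block sizes can be checked with queries like:

```sql
SQL> SHOW PARAMETER db_block_size
SQL> SELECT tablespace_name, block_size FROM dba_tablespaces;
```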

 

 

 

Oracle Instance

An Oracle instance is comprised of the memory structures and background processes required to perform the different tasks that keep the instance and database running. The instance does not exist until the database is started, which involves reading the initialization parameter file and configuring the instance according to the information within it.

Memory Structures

An instance has two major memory structures: the System Global Area (SGA), which is shared by all server and background processes, and the Program Global Area (PGA), which is private to each server process.

Background Processes

Oracle creates a set of background processes for an instance that  manage the memory structures, asynchronously perform I/O to write data to disk, and do general housekeeping.
The most common background processes are the following:

  • System monitor (SMON): Performs crash recovery when the instance starts after a failure
  • Process monitor (PMON): Performs process cleanup when a user process fails
  • Database writer (DBWn): Writes modified blocks from the database buffer cache to the files on disk
  • Checkpoint (CKPT): Signals DBWn at checkpoints and updates all of the data files and control files of the database to indicate the most recent checkpoint
  • Log writer (LGWR): Writes redo log entries to disk
  • Archiver (ARCn): Copies the redo log files to archival storage when the log files are full or a log switch occurs
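The background processes actually running in an instance can be listed with a query such as (the PADDR filter excludes process slots that are not in use):

```sql
SQL> SELECT name, description
  2  FROM   v$bgprocess
  3  WHERE  paddr <> '00';
```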

 

Shutdown Hangup Scenario

If SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE hangs and you try to connect as SYS, you get connected either to an idle instance or with a plain "Connected.", which is in reality a fake connection. In one particular scenario, I changed ORACLE_HOME from EM and then issued a plain SHUTDOWN command, which took a long time to complete. I then tried the SHUTDOWN IMMEDIATE and SHUTDOWN ABORT commands and got the following errors.


C:\Users\ACE>sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Sat Mar 5 19:52:01 2011

Enter user-name: sys as sysdba
Enter password:
Connected.
SQL>  shutdown immediate
ORA-24324: service handle not initialized
ORA-24323:  value not allowed
ORA-01090: shutdown in progress - connection is not  permitted


SQL> shutdown abort
ORA-01031: insufficient privileges


SQL> select instance from v$instance;
select instance from  v$instance
* ERROR at line 1: ORA-01012: not logged on
Process ID:  0
Session ID: 0 Serial number: 0

In this scenario SQL*Plus did "connect / as sysdba" but in reality failed to connect. Then, when I tried SHUTDOWN ABORT, I got ORA-01031 because I wasn't actually connected.
It's interesting to note that if you connect as SYSDBA and the connection is successful, you see something like: "Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 – 64bit Production With the Partitioning, OLAP and Data Mining options". If you fail to connect, because the process table is full, you'll get just: "Connected."
So, "Connected." means you *failed* to connect!

ORA-01012: not logged on (check this link for another reason why Oracle shows “not logged on” error)

 

Although SHUTDOWN ABORT causes instance recovery, it becomes necessary if you want to accomplish a repetitive task at a given time. You can combine it with a restricted startup and then a SHUTDOWN IMMEDIATE, preferably run through a script.


#!/bin/ksh
# Source the db.env to set the Oracle environment (sets ORACLE_SID)

export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab | grep "^$ORACLE_SID:" | cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

# shutdown/startup are SQL*Plus commands, so run them through sqlplus
sqlplus -S / as sysdba <<EOF
shutdown abort
startup restrict
shutdown immediate
EOF

Debugging a hung database

Starting with Oracle 11g, SQL*Plus has a new option called "-prelim". This option is very useful for running oradebug and other utilities that do not require a real connection to the database.


C:\ACE> sqlplus -prelim

SQL>

or

SQL> set _prelim on
SQL> connect / as sysdba

Run oradebug commands to diagnose a hung database using the new hanganalyze utility:


SQL> oradebug hanganalyze

Wait for some time to identify process state changes.

SQL> oradebug hanganalyze

Open a separate SQL session and immediately generate a system state dump.

SQL> alter session set events 'immediate trace name SYSTEMSTATE level 10';

Further Reading:

Fix Oracle Hanging

Performance Monitoring

List of Common Oracle Metrics

Monitoring Methodologies

 

  • Reactive
  • Proactive

Reactive monitoring is not a recommended approach, but at times it becomes inevitable. Oracle 10g provides tools that aid in proactively monitoring the database. Server-generated alerts and the Automatic Database Diagnostic Monitor (ADDM) are discussed in another post.

 

Database and Instance Metrics

Performance measurements are commonly referred to as database metrics; a metric is basically the rate of change of a cumulative statistic that needs to be monitored. Metrics are indicators of the health of various database targets. Excerpt from Oracle Database 10g New Features by Rampant TechPress:

“The Metrics are the statistics derived from base statistics. They represent the delta values between the snapshot periods. Metrics are used by internal components (clients) for system health monitoring, problem detection and self-tuning. There are hundreds of different measurement points available to monitor database system.

Each metric is also associated with a metric name. You can query the view v$metricname to find the names of all the metrics.

SQL> select METRIC_NAME, METRIC_UNIT from v$metricname; "

 

Metric | Description
Event Class Metrics | Collected at the wait event class level, e.g. DB_TIME_WAITING
Event Metrics | Collected for various wait events
File Metrics | Long-duration metrics collected at the file level, e.g. AVERAGE_FILE_WRITE_TIME
Service Metrics | Collected at the service level, e.g. CPU_TIME_PER_CALL
Session Metrics | Short-duration metrics collected at the session level, e.g. BLOCKED_USERS
System Metrics | Short-duration metrics collected at the system level
Tablespace Metrics | Collected at the tablespace level, e.g. TABLESPACE_PCT_FULL

 

Viewing Metrics

The new MMON background process collects database metrics continuously and automatically saves them in the SGA for one hour. It also transfers the in-memory AWR statistics to disk on a regular basis (in snapshots). Performance metrics are available through:

  • Data Dictionary
  • Dynamic Performance Views
  • Optimizer Statistics

Data Dictionary Metrics

Data dictionary metrics provide information about space consumption and object status. Database indexes and stored procedures both need to be VALID to be used.

Unusable indexes always require DBA intervention to restore them to a valid state. Invalid
PL/SQL objects will normally recompile automatically the first time they are called, but
sometimes require DBA intervention in cases where the automatic recompilation fails.
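Invalid objects can be located and recompiled manually; the schema and object names below are placeholders:

```sql
SQL> SELECT object_name, object_type
  2  FROM   dba_objects
  3  WHERE  status = 'INVALID';

SQL> ALTER PROCEDURE scott.my_proc COMPILE;   -- hypothetical procedure
```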

Compiling PL/SQL procedures and Packages

Indexes can become unusable due to normal maintenance operations on tables. Unusable indexes can be rebuilt to make them valid.
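For example (the index name is a placeholder):

```sql
SQL> SELECT index_name FROM dba_indexes WHERE status = 'UNUSABLE';
SQL> ALTER INDEX scott.emp_name_idx REBUILD;
```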

Rebuilding Indexes

 

Viewing In-Memory Metrics

All system-related metrics are saved in memory and can be viewed using dynamic performance views. Most of the cumulative statistics can be viewed through the following:

  • V$SYSSTAT
  • V$SESSTAT
  • V$SYSMETRIC
  • V$SYSMETRIC_HISTORY

Examples of a few system metrics maintained in the V$SYSMETRIC view:

  • Buffer Cache Hit Ratio
  • CPU Usage Per Sec
  • Disk Sort Per Sec
  • Host CPU Utilization (%)
  • Library Cache Hit Ratio
  • SQL Service Response Time
  • Shared Pool Free (%)
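A sample query against V$SYSMETRIC (GROUP_ID = 2 selects the 60-second metric group):

```sql
SQL> SELECT metric_name, value, metric_unit
  2  FROM   v$sysmetric
  3  WHERE  group_id = 2
  4  AND    metric_name IN ('Buffer Cache Hit Ratio',
  5                         'Host CPU Utilization (%)');
```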

Viewing Saved Metrics

Every 60 minutes, MMON moves metric information from the SGA to disk through the AWR snapshot mechanism. The metric data that AWR collects through MMON is permanently stored in DBA_HIST_* views such as:

  • DBA_HIST_SERVICE_NAME
  • DBA_HIST_SESSMETRIC_HISTORY

Optimizer Statistics

Optimizer statistics for tables and indexes are stored in the data dictionary. These statistics are not intended to provide real-time data. They provide the optimizer a statistically correct snapshot of data storage and distribution which the optimizer uses to make decisions on how to access data.

Metrics collected include:

  • Size of the table or index in database blocks
  • Number of rows
  • Average row size and chain count (tables only)
  • Height and number of deleted leaf rows (indexes only)

As data is inserted, deleted, and modified these values change. The performance impact of
maintaining real-time data distribution statistics would be prohibitive, so these statistics are updated by periodically gathering statistics on tables and indexes.

SQL LOADER CONCEPTS

SQL*Loader is a bulk loader utility used for moving data from external files (flat files) into an Oracle database.

SQL*Loader reads from one or more data files (usually flat files) and a control file (a file that describes the data to be loaded; it has nothing to do with the database control files). During loading, information about the load is written to the log file, bad rows go to the bad file, and discarded rows to the discard file.

SQL*Loader environment

SQL*Loader Control File

The control file is a text file that contains DDL instructions and provides the following information to SQL*Loader:

  • The name and location of the input data file
  • The format of the records in the input data file
  • The name of the table(s) to be loaded
  • Table and field list information
  • SQL*Loader configuration (memory management, record rejection criteria, interrupted load handling, and so on)
  • The names and locations of bad file and  discard file

SQL*Loader Options:

Following are few commonly used Options in control file:

 SKIP = n (number of records to skip before loading)
 ERRORS = n (number of erroneous records allowed)
 ROWS = n (number of rows to load before committing)
 DIRECT = {TRUE | FALSE}
 PARALLEL = {TRUE | FALSE}
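These options can also be embedded in the control file itself with an OPTIONS clause; a sketch with illustrative values:

```sql
OPTIONS (SKIP=1, ERRORS=50, ROWS=1000, DIRECT=TRUE)
LOAD DATA
INFILE 'example.dat'
INTO TABLE example
FIELDS TERMINATED BY ','
(col1, col2)
```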

 

If there is a small amount of data, the control file can contain the actual data to be loaded. This is achieved by using the BEGINDATA clause before the actual data in the control file.
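A minimal sketch of such a control file carrying its own data (table and column names are hypothetical):

```sql
LOAD DATA
INFILE *
INTO TABLE dept
FIELDS TERMINATED BY ','
(deptno, dname)
BEGINDATA
10,ACCOUNTING
20,RESEARCH
```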

Case Study for simple control file including data

SQL*Loader Input Data Files

SQL*Loader reads data from one or more files organized as records in one of three formats:

  • Fixed-record format
  • Variable-record format
  • Stream-record format

Fixed-Record Format

A file is in fixed-record format when all records in a data file are the same byte length.

INFILE 'filename' "fix n"

The following example shows a control file that specifies a fixed-record format data file. The data file contains two physical records. The first record is [0001,abcd], which is exactly nine bytes long (using a single-byte character set); the carriage return is the tenth byte.

Control File:

load data
infile 'example.dat' "fix 10"
into table example
fields terminated by ','
(col1, col2)

example.dat:

0001,abcd
0002,fghi

Variable-Record Format

A file is in variable-record format when the length of each record in a character field is included at the beginning of each record in the data file.
INFILE 'filename' "var n"   (n is the number of bytes in the record-length field)

The following example tells SQL*Loader to look for data in the example.dat data file and to expect variable-record format where the record-length fields are 3 bytes long. The example.dat data file consists of two physical records. The first is specified to be 009 (nine) bytes long, the second 010 bytes long (including a one-character newline).

Control File:

load data
infile 'example.dat' "var 3"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:

009hello,cd,
010world,im,

Stream-Record Format

A file is in stream-record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator.

INFILE 'filename' "str terminator_string"

The following example illustrates loading data in stream-record format where the terminator string is specified using the character string '|\n'.

Control File:

load data
infile 'example.dat' "str '|\n'"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:

hello,world,|
james,bond,|

* Filler fields in the control file are fields that have names but are not loaded into the table.

Loading Methods

SQL*Loader provides two methods for loading data:

• Conventional path

• Direct path

Conventional Path Load

Conventional path load builds an array of rows to be inserted and uses the SQL INSERT statement to load the data. During conventional path loads, input records are parsed based on field specifications, and an array of records is built and inserted into the table specified in the control file.

Direct Path Load

A direct path load builds blocks of data in memory and saves these blocks directly into the extents allocated for the table being loaded. Online redo log file entries are not generated unless the database is in ARCHIVELOG mode. Direct path loads use the field specifications to build whole Oracle blocks of data, and write the blocks directly to Oracle data files above the high water mark.

The high-water mark is the highest point to which data has been written into the table so far. Direct path load bypasses the database buffer cache and accesses the SGA only for extent management and adjustment of the high-water mark.

Comparing Direct and Conventional Path Loads

Conventional Load | Direct Path Load
Uses COMMIT to make changes permanent | Uses data saves
Redo entries are always generated | Generates redo only under specific conditions
Enforces all constraints | Enforces only PRIMARY KEY, UNIQUE, and NOT NULL constraints
INSERT triggers fire | INSERT triggers are disabled at the start and re-enabled at the end
Can load into clustered tables | Cannot load into clustered tables
Other users can make changes to the tables being loaded | Other users cannot make changes to the tables being loaded
Does not lock the table being loaded, so other transactions can change it | Locks the table in exclusive mode, except when several parallel direct load sessions are used concurrently


Method of Saving Data

Conventional path loads use SQL processing and a database COMMIT to save data. The insertion of an array of records is followed by a commit operation. Direct path loads use data saves to write blocks of data to the Oracle data files. The following features differentiate a data save from a COMMIT:

• During a data save, only full database blocks are written to the database.

• The blocks are written after the high-water mark of the table.

• After a data save, the high-water mark is moved.

• Internal resources are not released after a data save.

• A data save does not end the transaction.

• Indexes are not updated at each data save.

Logging Changes

Conventional path loading generates redo entries similar to any DML statement. When using a direct path load, redo entries are not generated if:

• The database is in NOARCHIVELOG mode
• The database is in ARCHIVELOG mode, but logging is disabled

Logging can be disabled by setting the NOLOGGING attribute for the table or by using the UNRECOVERABLE clause in the control file.
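For example (the table name is a placeholder):

```sql
-- disable redo logging at the table level before a direct path load
SQL> ALTER TABLE scott.emp_stage NOLOGGING;
```

Alternatively, the control file can start with the UNRECOVERABLE keyword before LOAD DATA to request the same behavior for that load only.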

Conventional Vs Direct Path Load (from Shailesh’s Blog )

Useful  Links:

Maximizing SQL*Load Performance( from Burleson Consulting )

Oracle FAQ on Orafaq

Intro to SQL*Loader by O’Reilly

Oracle SQL Loader Concepts on Oracle Site

Common Oracle TNS Errors

In this post, I try to cover common TNS errors and possible corrective measures.

ORA-12154: TNS:could not resolve service name

SQL> connect SYSTEM/manager@DEV
ERROR:
ORA-12154: TNS:could not resolve service name

This is the most common TNS error, usually caused by multiple ORACLE_HOMEs or by tnsnames.ora and sqlnet.ora conflicts.

  • Check that the instance or database name is typed correctly; check ORACLE_SID and ORACLE_HOME.
  • Make sure the listener is up and running:

    lsnrctl status        (if it is down: lsnrctl start)

  • PING the server to see if the network is up:

    ping 168.192.1.22

  • TNSPING the server and then try connecting:

    tnsping DEV

  • Check the tnsnames.ora file to validate that the instance name has been entered correctly, and make sure that all entries are properly terminated with parentheses.
  • Validate that the domain name entry in sqlnet.ora does not conflict with the full database name resolution.
  • If nothing works, delete and recreate the entry to eliminate any character or control-character issues.
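For reference, a tnsnames.ora entry for the DEV alias used above might look like this (host, port, and service name are placeholders):

```text
DEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.22)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = dev.example.com)
    )
  )
```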

For more details on ORA-12154 causes and actions

 

TNS-03505: Failed to resolve name

Message 3511 not found; product=network; facility=TNS
TNS-03505: Failed to resolve name

This is quite similar to the above error, so again check environment variables such as ORACLE_HOME and PATH and make sure their values are set correctly.
The issue usually occurs when the TNS entries contain unwanted characters or spaces. Also check that the TNS entry (alias) points to the correct database SID and host name.

For more details on TNS-03505 reasons and remedy actions

ORA-12537: TNS: connection closed error

It happens when the connection has been closed, most likely for one of the following reasons:

  • Firewall has disconnected idle connections
  • Configuration problem in sqlnet.ora or listener.ora
  • Time out occurred on client connection
  • A path name that is too long for the Oracle TNS client on Windows

Try the following to fix this issue:

  1. Install the exe in a shorter named directory.
  2. Modify the TNS Names entry so that it is shorter.
  3. Replace domain names with IP addresses if that shortens the entry (or vice versa).
  4. Remove any parameters from the entry that are not needed.
  5. Patch the Database Listener to the most recent version

For more details on ORA-12537 reasons and remedy actions

 

Troubleshooting Tips from the Field for Oracle Net Services

Here are some tips you may find helpful when you are having difficulty diagnosing network problems from Oracle documentation:

  • Use the node or network address during configuration instead of the name of the server computer

    This eliminates any internal lookup problems and makes the connection slightly faster.

  • If you are using TCP/IP addresses, use the IP address rather than host name

    For example, replace the (HOST=server_name) line in the tnsnames.ora file with the internet address, for example (HOST=198.32.3.5).

  • Perform a loopback test

    Perform a loopback test on the server as described in “Testing Configuration on the Database Server”. If the test passes, ftp the tnsnames.ora and sqlnet.ora files to the client.

  • Check what is between you and the server

    If it is a wide area network (WAN), identify any intermediate systems that may not work correctly. If all computers are fine, the problem may be a timing issue.

  • Verify whether or not there is a timing issue

    Timing issues are associated with an ORA-12535 error in the client log files.

    To resolve this, try speeding up the connection by using exact addresses instead of names and increase the INBOUND_CONNECT_TIMEOUT_listener_name parameter in the listener.ora file. The default value for this parameter is 10 seconds.

  • Determine which Oracle applications are failing

    SQL*Plus may work, but CASE tools may not. If you determine the problem is a data volume issue, try to transfer a large (5 MB) file with the base connectivity.

 

Resolving most common Error messages for Oracle Net Services

 

Oracle Parameter Files

Parameter Files

Oracle must read either an initialization parameter file (PFILE) or a server parameter file (SPFILE) to start an instance. These files contain a list of initialization parameters and a value for each parameter.

In the platform-specific default location, Oracle Database locates your initialization parameter file by examining filenames in the following order:

  • spfile$ORACLE_SID.ora
  • spfile.ora
  • init$ORACLE_SID.ora

Initialization Parameters

 

Initialization parameters are divided into two groups: basic and advanced. In the majority of cases, it is necessary to set and tune only the basic parameters to get reasonable performance. In rare situations, modification to the advanced parameters may be needed for optimal performance.

Complete List of Initialization Parameters (10g)

Basic Parameters

There are about 29 basic parameters. Only a few of these parameters must be explicitly set because the default values are adequate in the majority of cases.

COMPATIBLE
CLUSTER_DATABASE
CONTROL_FILES
DB_BLOCK_SIZE
DB_CREATE_FILE_DEST
DB_CREATE_ONLINE_LOG_DEST_n
DB_DOMAIN
DB_NAME
DB_UNIQUE_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
INSTANCE_NUMBER
JOB_QUEUE_PROCESSES
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
NLS_LANGUAGE
NLS_TERRITORY
OPEN_CURSORS
PGA_AGGREGATE_TARGET
PROCESSES
REMOTE_LISTENER
REMOTE_LOGIN_PASSWORDFILE
ROLLBACK_SEGMENTS
SESSIONS
SGA_TARGET
SHARED_SERVERS
STAR_TRANSFORMATION_ENABLED
UNDO_MANAGEMENT
UNDO_TABLESPACE
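As a sketch, a minimal PFILE that sets a handful of the basic parameters might look like the following. The file name, paths, and values are illustrative assumptions, not recommendations:

```
# initorcl.ora -- illustrative minimal PFILE; all values are example assumptions
db_name=orcl
db_block_size=8192
control_files=(/u01/oradata/orcl/control01.ctl, /u02/oradata/orcl/control02.ctl)
compatible=10.2.0
sga_target=600M
pga_aggregate_target=200M
processes=150
open_cursors=300
undo_management=AUTO
undo_tablespace=UNDOTBS1
```

Any parameter not listed simply takes its default value, which is why a working PFILE can be this short.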

  

Advanced Parameters

Beyond the basic parameters, several parameters are used for database tuning and other advanced services. A few examples:

SESSION_CACHED_CURSORS
CURSOR_SHARING
DB_KEEP_CACHE_SIZE
OPEN_LINKS
AUDIT_FILE_DEST
JAVA_POOL_SIZE
LARGE_POOL_SIZE
STREAMS_POOL_SIZE
QUERY_REWRITE_ENABLED
 

Hidden/Undocumented Parameters

Hidden parameters should be used only as a last resort, and preferably only when advised by Oracle Support.

_abort_recovery_on_join

_alert_message_purge

_backup_io_pool_size

_cursor_plan_enabled

_log_checkpoint_recovery_check

Complete List of Hidden Parameters (10g)
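Hidden parameters do not appear in V$PARAMETER by default. A commonly used sketch for listing them queries the undocumented X$KSPPI fixed table, which requires a connection as SYS; since X$ tables are unsupported, the column names here are an assumption based on common usage:

```sql
-- Run as SYS; X$KSPPI is undocumented and its layout may vary by release
SELECT ksppinm AS parameter_name,
       ksppdesc AS description
  FROM x$ksppi
 WHERE ksppinm LIKE '\_%' ESCAPE '\'
 ORDER BY ksppinm;
```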

 

Viewing and Changing Parameter Settings

  • The SHOW PARAMETERS command in SQL*Plus can be used to view parameter values:
    • SQL> SHOW PARAMETERS CONTROL_FILES

  • The V$PARAMETER and V$PARAMETER2 views display the current parameter values.
  • The V$SPPARAMETER view displays the contents of the server parameter file.
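For example, the following query against V$PARAMETER shows a parameter's current value and whether it is still at its default (the parameter name open_cursors is just an example):

```sql
-- ISDEFAULT is 'TRUE' when the parameter was not set explicitly
SELECT name, value, isdefault
  FROM v$parameter
 WHERE name = 'open_cursors';
```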

Changing Parameter Values

  A PFILE can be edited with any text editor, but the SPFILE is a binary file; its parameter values must be changed with the ALTER SYSTEM SET and ALTER SYSTEM RESET commands.

Removing a Parameter from SPFILE: 

SQL> ALTER SYSTEM RESET timed_statistics SCOPE=SPFILE SID='*';

 

Setting a Parameter in the SPFILE, in Memory, or Both

SQL> ALTER SYSTEM SET open_cursors=300 SCOPE=SPFILE;

SQL> ALTER SYSTEM SET timed_statistics=TRUE SCOPE=BOTH SID='*';

 

The SCOPE clause can be set to SPFILE, MEMORY, or BOTH. MEMORY is the default (and the only valid scope) if a PFILE was used at STARTUP; BOTH is the default if an SPFILE was used at STARTUP.

  • MEMORY: The parameter value is set for the current instance only and is lost at the next restart.
  • SPFILE: The parameter value is updated in the SPFILE and takes effect at the next STARTUP.
  • BOTH: The parameter value is set for the current instance and also persists in the SPFILE.
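Static parameters cannot be changed in memory at all; they can only be written to the SPFILE and picked up at the next startup. A sketch of the typical workflow, using PROCESSES and the value 300 as assumed examples:

```sql
SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
```

Attempting SCOPE=MEMORY or SCOPE=BOTH on a static parameter raises an error, which is why the restart step is unavoidable here.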


Oracle Shutdown Concepts

Shut down the database to take offline (cold) backups of all physical structures with operating system commands, and to have changes to static initialization parameters take effect at the next startup.

SHUTDOWN [NORMAL | TRANSACTIONAL | IMMEDIATE | ABORT ]

SHUTDOWN NORMAL

NORMAL is the default shutdown mode. A normal database shutdown proceeds with the
following conditions:

• No new connections can be made.
• The Oracle server waits for all users to disconnect before completing the shutdown.
• Database and redo buffers are written to disk.
• Background processes are terminated, and the SGA is removed from memory.
• Oracle closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.

SQL> shutdown

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL>



SHUTDOWN TRANSACTIONAL

A transactional shutdown prevents clients from losing work. A transactional database shutdown proceeds with the following conditions:

• No client can start a new transaction on this particular instance.
• A client is disconnected when the client ends the transaction that is in progress.
• When all transactions have finished, a shutdown occurs immediately.
• The next startup does not require an instance recovery.

SQL> shutdown transactional

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL>

SHUTDOWN IMMEDIATE

Immediate database shutdown proceeds with the following conditions:

• Current SQL statements being processed by Oracle are not completed.
• The Oracle server does not wait for users currently connected to the database to disconnect.
• Oracle rolls back active transactions and disconnects all connected users.
• Oracle closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL>

SHUTDOWN ABORT

If no other shutdown option works, the database instance can be shut down with the ABORT option. Aborting an instance proceeds with the following conditions:

• Current SQL statements being processed by the Oracle server are immediately terminated.
• Oracle does not wait for users currently connected to the database to disconnect.
• Database and redo buffers are not written to disk.
• Uncommitted transactions are not rolled back.
• The instance is terminated without closing the files.
• The database is not closed or dismounted.
• The next startup requires instance recovery, which occurs automatically.

SQL> shutdown abort

ORACLE instance shut down.

SQL>

All shutdown modes except ABORT are clean shutdowns, after which no instance recovery is required. SHUTDOWN ABORT leaves the database in an inconsistent state, so instance recovery is performed automatically at the next startup.