Archive for the ‘Oracle Utilities’ Category

SQL LOADER CONCEPTS

SQL*Loader is a bulk loader utility used for moving data from external files (flat files) into the Oracle database.

SQL*Loader reads from one or more data files (usually flat files) and a control file (a file that describes the data to be loaded; it has nothing to do with the database control files). During a load, information about the run is written to the log file, rejected rows are written to the bad file, and discarded rows to the discard file.
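As a quick illustration, a typical command-line invocation names the control file and, optionally, the log, bad, and discard files explicitly (the user name, password, and file names below are placeholders):

sqlldr userid=scott/tiger control=example.ctl log=example.log bad=example.bad discard=example.dsc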

SQL*Loader environment

SQL*Loader Control File

The control file is a text file that contains DDL instructions and provides the following information to SQL*Loader:

  • The name and location of the input data file
  • The format of the records in the input data file
  • The name of the table(s) to be loaded
  • Table and field list information
  • SQL*Loader configuration (memory management, record rejection criteria, interrupted load handling, and so on)
  • The names and locations of the bad file and discard file

SQL*Loader Options:

The following are a few commonly used options in the control file:

SKIP = n (number of records to skip before loading)
ERRORS = n (number of erroneous records allowed)
ROWS = n (number of rows to load before each commit)
DIRECT = {TRUE | FALSE}
PARALLEL = {TRUE | FALSE}
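These options can also be set at the top of the control file in an OPTIONS clause instead of on the sqlldr command line. A minimal sketch, with illustrative values and the same example table used below:

OPTIONS (SKIP=1, ERRORS=50, ROWS=1000)
load data
infile 'example.dat'
into table example
fields terminated by ','
(col1, col2)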

 

If there is only a small amount of data, it is possible for the control file to contain the actual data to be loaded. This is achieved by using the BEGINDATA clause before the actual data in the control file.
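A minimal sketch of such a control file (table and column names are illustrative); INFILE * tells SQL*Loader that the data follows the BEGINDATA clause:

load data
infile *
into table example
fields terminated by ','
(col1, col2)
begindata
0001,abcd
0002,fghi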

Case Study for simple control file including data

SQL*Loader Input Data Files

SQL*Loader reads data from one or more files organized as records in one of three formats:

  • Fixed-record format
  • Variable-record format
  • Stream-record format

Fixed-Record Format

A file is in fixed-record format when all records in a data file are the same byte length.

INFILE 'filename' "fix n"

The following example shows a control file that specifies a fixed-record format data file. The data file contains two physical records. The first record is [0001,abcd], which is exactly nine bytes long (using a single-byte character set), and the carriage return is the tenth byte.

Control File:

load data
infile 'example.dat' "fix 10"
into table example
fields terminated by ','
(col1, col2)

example.dat:

0001,abcd

0002,fghi

Variable-Record Format

A file is in variable-record format when the length of each record in a character field is included at the beginning of each record in the data file.
INFILE 'filename' "var n"   (n is the number of bytes in the record-length field)

The following example tells SQL*Loader to look for data in the example.dat data file and to expect variable-record format where the record-length fields are 3 bytes long. The example.dat data file consists of two physical records. The first is specified to be 009 (nine) bytes long, the second is 010 bytes long (including a one-character newline).

Control File:

load data
infile 'example.dat' "var 3"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:

009hello,cd,

010world,im,

Stream-Record Format

A file is in stream-record format when the records are not specified by size; instead SQL*Loader forms records by scanning for the record terminator.

INFILE 'filename' "str terminator_string"

The following example illustrates loading data in stream-record format where the terminator string is specified using a character string, '|\n'.

Control File:

load data
infile 'example.dat' "str '|\n'"
into table example
fields terminated by ',' optionally enclosed by '"'
(col1 char(5), col2 char(7))

example.dat:

hello,world,|

james,bond,|

* Filler fields in the control file are fields that have names but are not loaded into the table.
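A minimal sketch, assuming the second field of each record should be read but not loaded (field names are illustrative):

load data
infile 'example.dat'
into table example
fields terminated by ','
(col1, skip_this FILLER, col2)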

Loading Methods

SQL*Loader provides two methods for loading data:

• Conventional path

• Direct path

Conventional Path Load

Conventional path load builds an array of rows to be inserted and uses the SQL INSERT statement to load the data. During conventional path loads, input records are parsed based on field specifications, and an array of records is built and inserted into the table specified in the control file.

Direct Path Load

A direct path load builds blocks of data in memory and saves these blocks directly into the extents allocated for the table being loaded. Online redo log file entries are not generated unless the database is in ARCHIVELOG mode. Direct path loads use the field specifications to build whole Oracle blocks of data, and write the blocks directly to Oracle data files above the high water mark.

The high-water mark is the highest point to which data has been written into the table so far. Direct path load bypasses the database buffer cache and accesses the SGA only for extent management and adjustment of the high-water mark.
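A direct path load is requested with DIRECT=TRUE, either in the OPTIONS clause of the control file or on the command line; a sketch (user name and file names are placeholders):

sqlldr userid=scott/tiger control=example.ctl log=example.log direct=true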

Comparing Direct and Conventional Path Loads

Conventional Path Load | Direct Path Load
Uses COMMIT to make changes permanent | Uses data saves
Redo entries are always generated | Generates redo only under specific conditions
Enforces all constraints | Enforces only PRIMARY KEY, UNIQUE, and NOT NULL constraints
INSERT triggers fire | INSERT triggers are disabled at the start of the load and re-enabled at the end
Can load into clustered tables | Cannot load into clustered tables
Does not lock the table being loaded, so other users can change it during the load | Locks the table in exclusive mode (except when several parallel direct load sessions are used concurrently), so other users cannot make changes


Method of Saving Data

Conventional path loads use SQL processing and a database COMMIT for saving data. The insertion of an array of records is followed by a commit operation. Direct path loads use data saves to write blocks of data to Oracle data files. The following features differentiate a data save from a COMMIT:

• During a data save, only full database blocks are written to the database.

• The blocks are written after the high-water mark of the table.

• After a data save, the high-water mark is moved.

• Internal resources are not released after a data save.

• A data save does not end the transaction.

• Indexes are not updated at each data save.

Logging Changes

Conventional path loading generates redo entries similar to any DML statement. When using a direct path load, redo entries are not generated if:
• The database is in NOARCHIVELOG mode

• The database is in ARCHIVELOG mode, but logging is disabled.

Logging can be disabled by setting the NOLOGGING attribute for the table or by using the UNRECOVERABLE clause in the control file.
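For example (a sketch; the table and column names are illustrative), either of the following suppresses redo generation for a direct path load:

SQL> ALTER TABLE example NOLOGGING;

or, in the control file:

unrecoverable load data
infile 'example.dat'
into table example
fields terminated by ','
(col1, col2)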

Conventional vs. Direct Path Load (from Shailesh's Blog)

Useful Links:

Maximizing SQL*Load Performance( from Burleson Consulting )

Oracle FAQ on Orafaq

Intro to SQL*Loader by O’Reilly

Oracle SQL Loader Concepts on Oracle Site

Scheduling Job using DBMS_JOBS

Create a Job (run it every day)

VARIABLE v_jobnum NUMBER;
BEGIN
  dbms_job.submit(:v_jobnum, 'MY_STORED_PROCEDURE;', sysdate, 'sysdate+1');
END;
/

print v_jobnum

VARIABLE jobnumber NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(JOB => :jobnumber,
    WHAT => 'DBMS_STATS.GATHER_DATABASE_STATS(options => ''GATHER AUTO'');',
    NEXT_DATE => to_date('11:30 01/23/06','HH24:MI MM/DD/YY'),
    INTERVAL => 'SYSDATE + 1');
  COMMIT;
END;
/

Running Existing Job

While the point is to automate jobs, occasionally you may find you need to run a job manually. To do this you can call the RUN procedure. This is especially useful if you need to run a job which has been marked as broken.

BEGIN
DBMS_JOB.RUN(JOB => 21);
END;

exec dbms_job.run(job_number);

How to tell if a job has failed

When a database job fails, an error with error number ORA-12012 is written to the Oracle alert log, together with the number of the job that failed.

If a job fails to execute after 16 attempts, Oracle marks it as broken and stops attempting to run it. This is indicated by a 'Y' in the BROKEN column of the dba_jobs view. If you want to tell Oracle not to run a job, you can manually mark it as broken by executing the following:

BEGIN
DBMS_JOB.BROKEN(JOB => 21, BROKEN => TRUE);
END;

The following will mark job 21 as not broken and have it execute at 11:00pm on January 23.

BEGIN
DBMS_JOB.BROKEN(JOB => 21, BROKEN => FALSE, NEXT_DATE => TO_DATE('23:00 01/23/06', 'HH24:MI MM/DD/YY'));
END;

Checking out Failed Jobs

select job, log_user, last_date, next_date, broken, interval, failures, what from dba_jobs;


Changing a Job

You can update a job with the DBMS_JOB.CHANGE procedure. This procedure takes the job number, code (WHAT), next date, and interval, in that order, as parameters. Whatever you don't want to change can be passed as NULL.

With that in mind, the command to change just the interval would look something like this:

BEGIN
DBMS_JOB.CHANGE(JOB => 21, WHAT => NULL, NEXT_DATE => NULL, INTERVAL => 'SYSDATE + 7');
END;

Remove a Job from the Job Queue

exec dbms_job.remove(job => 21);

 

 

 

First and Last Day of Quarter

select TRUNC(sysdate, 'Q'), TRUNC(ADD_MONTHS(sysdate, 3), 'Q')-1 from dual;

EXPDP and IMPDP

Detailed Documentation of EXPDP

Detailed Documentation of IMPDP

Because Data Pump is server-based rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires you to specify directory paths as directory objects. A directory object maps a name to a directory path on the file system.

SQL> CREATE DIRECTORY data_pump AS 'C:\source';

After a directory is created, the user creating the directory object needs to grant READ or WRITE permission on the directory to other users.

SQL> GRANT READ, WRITE ON DIRECTORY data_pump TO hr;

 

EXPORT DATA PUMP

Estimating the size of the HR schema without actually exporting any data. The estimate covers table row data only and does not include metadata.

expdp fzafar/123@orcl logfile=estimate.log schemas=HR estimate_only=y

Exporting two schemas HR and SCOTT

expdp fzafar/123@orcl dumpfile=hr_scott.dmp logfile=hr_scott.log schemas=HR,SCOTT DIRECTORY=DATA_PUMP

Excluding views, packages, and functions from the export:

expdp fzafar/123 DIRECTORY=data_pump DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION

Exporting only the data of the HR schema while excluding the COUNTRIES and REGION tables (note the shell escaping of quotation marks and parentheses on the command line):

expdp fzafar/123@orcl DUMPFILE=HREXCLUDE.dmp logfile=HREXCLUDE.log schemas=HR DIRECTORY=DATA_PUMP CONTENT=DATA_ONLY EXCLUDE=TABLE:\"IN \(\'COUNTRIES\',\'REGION\'\)\"

 
expdp fzafar/123@orcl PARFILE=exp.par

exp.par parameter file:

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
PARALLEL=3
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
 
EXP Parameter            EXPDP Parameter
OWNER                    SCHEMAS
LOG                      LOGFILE
STATISTICS=ESTIMATE      ESTIMATE=STATISTICS
ROWS                     CONTENT
FEEDBACK                 STATUS
FILE                     DUMPFILE
CONSTRAINTS              EXCLUDE|INCLUDE=CONSTRAINTS
GRANTS                   EXCLUDE|INCLUDE=GRANTS
INDEX                    EXCLUDE|INCLUDE=INDEX
TRIGGERS                 EXCLUDE|INCLUDE=TRIGGER

 

ONLY AVAILABLE IN EXP

BUFFER, COMPRESS, DIRECT, INCTYPE, CONSISTENT, OBJECT_CONSISTENT, RECORDLENGTH, RESUMABLE, RESUMABLE_TIMEOUT, USERID, VOLSIZE

ONLY AVAILABLE IN EXPDP

EXCLUDE, INCLUDE, ESTIMATE_ONLY, PARALLEL

 

IMPORT DATA PUMP

 

IMP Parameter            IMPDP Parameter
FROMUSER                 SCHEMAS
TOUSER                   REMAP_SCHEMA
IGNORE                   TABLE_EXISTS_ACTION
INDEXFILE                SQLFILE
SHOW                     SQLFILE

 

Importing Only two tables from a schema

Schema Level Import

dumpfile=hr.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
SCHEMAS=HR
REMAP_SCHEMA=HR:HR_DUP
INCLUDE=TABLE:"IN ('REGIONS','LOCATIONS')"
TABLE_EXISTS_ACTION=REPLACE

 

Table Level Import

 

dumpfile=hr.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
TABLES=REGIONS,LOCATIONS
TABLE_EXISTS_ACTION=REPLACE

 

Example of using the SQLFILE parameter

impdp fzafar/123@orcl dumpfile=hr_scott.dmp DIRECTORY=data_pump SQLFILE=HR_SCOTT

Mapping two schemas HR and SCOTT to one schema HR_DUP (HR_DUP user should exist before impdp)

impdp fzafar/123@orcl parfile='c:\source\import\imp_hrdup.txt'

imp_hrdup.txt parameter file:
dumpfile=hr_scott.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
SCHEMAS=HR,SCOTT
REMAP_SCHEMA=HR:HR_DUP
REMAP_SCHEMA=SCOTT:HR_DUP
EXCLUDE=SEQUENCE
EXCLUDE=TABLE:"IN ('HR.TEMP')"

Importing Selected tables from export dump of two schemas

dumpfile=hr_scott.dmp
DIRECTORY=data_pump
CONTENT=ALL
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
TABLES=HR.REGIONS,HR.LOCATIONS
TABLE_EXISTS_ACTION=skip

Importing only tables that are not already present in the schema (this is a workaround, not a good technique)

dumpfile=hr_scott.dmp
DIRECTORY=data_pump
CONTENT=ALL
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
REMAP_SCHEMA=SCOTT:HR_DUP
EXCLUDE=sequence
EXCLUDE=procedure
EXCLUDE=TRIGGER
EXCLUDE=view
TABLE_EXISTS_ACTION=skip
STATUS=0
JOB_NAME='imp_hr_dup'

EXP and IMP Utilities

Command line EXP utility example for export of HR schema

exp fzafar/123@orcl file=c:\source\hr.dmp log=c:\source\hr.log buffer=100000 owner=HR  CONSISTENT=Y DIRECT=Y

IMP utility example to import only two tables in another schema HR_DUP

imp fzafar/123@orcl fromuser=HR touser=HR_DUP buffer=100000 file=c:\source\hr.dmp log=c:\source\hr_dup Tables=Jobs,Locations

Export Parameters

This section contains descriptions of some of the Export command-line parameters.

BUFFER

Default: operating system-dependent.

Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:

buffer_size = rows_in_array * maximum_row_size

For example, for an array of 100 rows with a maximum row size of 120 bytes:

buffer_size = 100 * 120 = 12,000 bytes

If you specify zero, the Export utility fetches only one row at a time. Tables with columns of type LOB, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.

Note:

The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

CONSISTENT (Default: n)

Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.

OBJECT_CONSISTENT (Default: n)

Specifies whether or not the Export utility uses the SET TRANSACTION READ ONLY statement to ensure that the data exported is consistent to a single point in time and does not change during the export. If OBJECT_CONSISTENT is set to y, each object is exported in its own read-only transaction, even if it is partitioned. In contrast, if you use the CONSISTENT parameter, then there is only one read-only transaction.

 
Import Parameters
FROMUSER (Default: none)

A comma-delimited list of schemas to import.  The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user-mode export dump file).

Only the name of the object is affected. After the import has completed, items in any TOUSER schema should be manually checked for references to old (FROMUSER) schemas, and corrected if necessary.

TOUSER (Default: none)

Specifies a list of user names whose schemas will be targets for Import. The user names must exist prior to the import operation; otherwise an error is returned. The IMP_FULL_DATABASE role is required to use this parameter. To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

imp SYSTEM/password FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, the schema names are paired. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:

imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,ted
imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,joe (to import both schemas into joe)
INDEXFILE (Default: none)

Specifies a file to receive index-creation statements.

When this parameter is specified, index-creation statements for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.

If the Import parameter CONSTRAINTS is set to y, Import also writes table constraints to the index file. The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.
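A sketch of using INDEXFILE (file names are illustrative): this writes the index-creation DDL for the HR tables to a script instead of creating the indexes during the import.

imp fzafar/123@orcl fromuser=HR file=c:\source\hr.dmp indexfile=c:\source\hr_indexes.sql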

Using LogMiner

Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface.

Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis.

Perform Initial Setup Activities

There are initial setup activities that you must perform before using LogMiner for the first time. You only need to perform these activities once, not every time you use LogMiner:

  • Enable the type of supplemental logging you want to use. At the very least, Oracle Corporation recommends that you enable minimal supplemental logging, as follows:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    See Supplemental Logging for more information.

  • Use the DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner tables in an alternate tablespace. For example:
    SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');
Directing LogMiner Operations and Retrieving Data of Interest

You direct LogMiner operations using the DBMS_LOGMNR and DBMS_LOGMNR_D PL/SQL packages, and retrieve data of interest using the V$LOGMNR_CONTENTS view, as follows:

  1. Specify a LogMiner dictionary.

    Use the DBMS_LOGMNR_D.BUILD procedure or specify the dictionary when you start LogMiner (in Step 3), or both, depending on the type of dictionary you plan to use.       

              exec dbms_logmnr_d.build(options=>dbms_logmnr_d.store_in_redo_logs);

  2. Specify a list of redo log files for analysis.

    Use the DBMS_LOGMNR.ADD_LOGFILE procedure, or direct LogMiner to create a list of log files for analysis automatically when you start LogMiner (in Step 3).

EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG', -
  OPTIONS => DBMS_LOGMNR.NEW);

  3. Start LogMiner.

    Use the DBMS_LOGMNR.START_LOGMNR procedure (a sketch of steps 3 to 5 follows this list).

  4. Request the redo data of interest.

    Query the V$LOGMNR_CONTENTS view. (You must have the SELECT ANY TRANSACTION privilege to query this view.)

  5. End the LogMiner session.

    Use the DBMS_LOGMNR.END_LOGMNR procedure.
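A minimal sketch of steps 3 through 5, assuming the dictionary is read from the online catalog and that only changes made by the HR schema are of interest (the filter and column list are illustrative):

EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

SELECT username, operation, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE username = 'HR';

EXECUTE DBMS_LOGMNR.END_LOGMNR;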

You must have been granted the EXECUTE_CATALOG_ROLE role to use the LogMiner PL/SQL packages and to query the V$LOGMNR_CONTENTS view.

Complete Reference is available at link1 and link2

Oracle Listener and Service Name

Oracle Net Listener

The one operation unique to the Oracle database server side is the act of receiving the initial connection through an Oracle Net listener. The Oracle Net listener, commonly known as the listener, brokers a client request, handing off the request to the server. The listener is configured with a protocol address. Clients configured with the same protocol address can send connection requests to the listener. Once a connection is established, the client and Oracle database server communicate directly with one another.

Listener Control utility

The Listener Control utility enables you to administer listeners. You can use its commands to perform basic management functions on one or more listeners. Additionally, you can view and change parameter settings.

For a majority of commands, the Listener Control utility establishes an Oracle Net connection with the listener that is used to transmit the command. To initiate an Oracle Net connection to the listener, the Listener Control utility needs to obtain the protocol addresses for the named listener or a listener named LISTENER. This is done by resolving the listener name with one of the following mechanisms (an example listener.ora entry follows this list):

  • The listener.ora file in the directory specified by the TNS_ADMIN environment variable, usually $ORACLE_HOME/network/admin on UNIX
  • A naming method, for example, a tnsnames.ora file

If the listener name is LISTENER and it cannot be resolved, a protocol address of TCP/IP, port 1521 is assumed.
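For reference, a minimal listener.ora entry might look like the following sketch (the host name and port are illustrative):

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    )
  )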

The Listener Control utility supports several types of commands:

LSNRCTL> HELP
The following operations are available
An asterisk (*) denotes a modifier or extended command:
change_password
exit
quit
reload
services
set*
show*
spawn
start
status
stop
trace
version

LSNRCTL> SET
The following operations are available with set.
current_listener
displaymode
inbound_connect_timeout
log_file
log_directory
log_status
password
raw_mode
save_config_on_stop
startup_waittime
trc_file
trc_directory
trc_level

LSNRCTL> SET LOG_STATUS on

LSNRCTL> SET LOG_DIRECTORY /usr/oracle/admin

 

Net Service Name

A net service name is an alias that is used for a connect descriptor. To access a net service (such as a database) across the network, a net service name is required.

Net service names can be stored in any of the following places (a sample tnsnames.ora entry is shown after this list):

  • A local configuration file (tnsnames.ora)
  • A directory server
  • An Oracle Names server
  • External naming services
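For example, a minimal tnsnames.ora entry mapping the net service name ORCL to a connect descriptor might look like this sketch (the host name and service name are illustrative):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl.example.com)
    )
  )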