
Backup Concepts

Database Backup

 

A whole database backup includes all data files and at least one control file.

Full backups make a copy of every data block within the files being backed up that contains data.

Partial database backups may include zero or more tablespaces, zero or more data files, and may or may not include a control file.

 

Incremental backups make a copy of all data blocks that have changed since some previous backup. Oracle Database 10g supports two levels of incremental backup (0 and 1). A level 0 or baseline backup is equivalent to a full backup and contains all data blocks. A level 1 incremental backs up all database blocks changed since the level 0 backup. To restore using incremental backups, the baseline backup must first be restored, and then the incremental backups applied.
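As a minimal RMAN sketch of the two levels (connection details are omitted, and the configured default backup destination is assumed):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;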

Offline backups or cold backups (also known as consistent backups) are taken while the database is not open. They are consistent because, at the time of the backup, the SCN in the data file headers matches the SCN in the control file.

Online backups (also known as hot or inconsistent backups) are taken while the database is open. The backups are inconsistent because with the database open there is no guarantee that the data files are synchronized with the control files. Inconsistent backups require recovery in order to be used. For online backup, ARCHIVELOG mode should be enabled.

 

Modes of Backups

i. NOARCHIVELOG

If the database is in NOARCHIVELOG mode, online redo logs can be overwritten without first being archived. This implies that the database cannot be recovered fully (it cannot be rolled forward to the last committed transaction) even if backups were made; it can only be restored to the time of the last backup.

ii. ARCHIVELOG

If the database is in ARCHIVELOG mode, the database makes sure that online redo logs are not overwritten before they have been archived. The database can therefore be recovered up to the last committed transaction.
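To check the current log mode and switch to ARCHIVELOG mode, a minimal SQL*Plus sketch looks like this (the database must be cleanly shut down and mounted before the mode can be changed):

SQL> ARCHIVE LOG LIST
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;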

Types of Backups

i. Physical Backup

Physical backups are backups of the physical files used in storing and recovering your database, such as datafiles, control files, and archived redo logs.

ii. Logical Backup

Logical backups contain logical data (tables or stored procedures) exported from a database with an Oracle export utility and stored in a binary file, for later re-importing into a database using the corresponding Oracle import utility.

 

Backup Types based on Physical Backup

For performing backup and recovery based on physical backups, you have two solutions available:

  • Recovery Manager (RMAN), a tool (with command-line client and Enterprise Manager GUI interfaces) that integrates with sessions running on the Oracle server to perform a range of backup and recovery activities, as well as maintaining a repository of historical data about your backups
  • The traditional user-managed backup and recovery, where you directly manage the files that make up your database with a mixture of host operating system commands and SQL*Plus backup and recovery-related capabilities

Recovery Manager is the preferred solution for database backup and recovery. It can perform the same types of backup and recovery available through user-managed methods more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.

 

Image Copies and Backup Sets

 

Image copies are duplicates of data or archived log files (similar to simply copying the files using OS commands).

Backup sets are copies of one or more data or archived log files. With backup sets, empty data blocks are not stored, so backup sets use less space on disk or tape. Backup sets can be compressed to further reduce the space requirements of the backup.

Image copies must be backed up to disk whereas Backup sets can be sent to disk or directly to tape.
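As a minimal RMAN sketch of the two forms (the configured backup destination and channels are assumed):

RMAN> BACKUP AS COPY DATABASE;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;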

Optimizing Import/Export

EXPORT:

  • Set the BUFFER parameter to a high value (e.g. 2Mb — entered as an integer "2000000")
  • Set the RECORDLENGTH parameter to a high value (e.g. 64Kb — entered as an integer "64000")
  • Use DIRECT=yes (direct path export); a sample command is shown after this list.
  • Stop unnecessary applications to free up resources for your job.
  • If you run multiple export sessions, ensure they write to different physical disks.
  • DO NOT export to an NFS mounted filesystem. It will take forever.
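A sample direct path export command combining the suggestions above (the connect string fzafar/123@orcl and the file paths are placeholders; BUFFER is omitted because it applies only to conventional path exports):

exp fzafar/123@orcl owner=HR direct=y recordlength=64000 file=c:\source\hr_direct.dmp log=c:\source\hr_direct.log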

IMPORT:

  • Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then import. No data will be imported but a file containing index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements.
  • Place the file to be imported on a separate physical disk from the oracle data files
  • Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
  • Set the LOG_BUFFER to a big value and restart oracle.
  • Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
  • Create a BIG tablespace with a BIG rollback segment inside. Set all other rollback segments offline (except the SYSTEM rollback segment of course). The rollback segment must be as big as your biggest table (I think?)
  • Use COMMIT=N in the import parameter file if you can afford it
  • Use STATISTICS=NONE in the import parameter file to avoid the time-consuming import of statistics.
  • Use ANALYZE=N in the import parameter file to avoid time-consuming ANALYZE statements
  • Remember to run the indexfile created earlier; a sketch of the full workflow follows this list.
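A minimal sketch of the INDEXFILE workflow described above (file names and schemas are placeholders):

Step 1: generate only the index DDL; no rows are imported:

imp fzafar/123@orcl file=c:\source\hr.dmp fromuser=HR touser=HR_DUP indexfile=c:\source\hr_indexes.sql

Step 2: import the data without indexes:

imp fzafar/123@orcl file=c:\source\hr.dmp fromuser=HR touser=HR_DUP indexes=n commit=n buffer=2000000 log=c:\source\hr_dup.log

Step 3: edit hr_indexes.sql (supply the passwords on the CONNECT statements) and run it in SQL*Plus:

SQL> @c:\source\hr_indexes.sql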

Improving Data Pump Export and Import:

Use the PARALLEL option and export to as many dump files as the degree of parallelism.

expdp username/password directory=dump_dir filesize=1G dumpfile=full%U.dmp logfile=fulllog parallel=16

Increase PGA_AGGREGATE_TARGET to boost import performance.
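For example (the 2G figure is only an illustration, and an spfile is assumed; size the target to the memory actually available on the server):

SQL> ALTER SYSTEM SET pga_aggregate_target=2G SCOPE=BOTH;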

Import Options (imp and impdp)

Analyze once after the load – Set analyze=n and analyze with dbms_stats after the load has completed.
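For example, after the load has completed, schema statistics can be gathered with DBMS_STATS (the HR_DUP schema name is a placeholder):

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR_DUP', cascade => TRUE);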

Increase recordlength – Many set RECORDLENGTH to 64K, but it needs to be a multiple of your I/O chunk size and of DB_BLOCK_SIZE (or of any non-default block size in use, such as 32K).

Set commit=n – For tables that can afford not to commit until the end of the load, this option provides a significant performance increase.  Larger tables may not be suitable for this option due to the required rollback/undo space.

Dedicate a single, large rollback segment – Many professionals create a single large rollback segment and take all others offline during the import.

Set indexes=n – Index creation can be postponed until after the import completes by specifying indexes=n.  If indexes for the target table already exist at the time of execution, import performs index maintenance when data is inserted into the table.  Setting indexes=n eliminates this maintenance overhead.  You can also use the INDEXFILE parameter to rebuild all the indexes once, after the data is loaded.

Defer CBO stats – Using impdp with the parameter exclude=statistics will greatly improve the import speed, but statistics will need to be re-analyzed or imported later.
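A minimal impdp sketch using this parameter (the connect string, directory, and dump file name are the same placeholders used elsewhere in this post):

impdp fzafar/123@orcl DIRECTORY=data_pump dumpfile=hr.dmp logfile=hr_imp.log exclude=statistics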

Use the buffer parameter – By using a larger buffer setting, import can do more work before disk access is performed.

Disable logging – You can also use the hidden parameter _disable_logging = true to reduce redo, but beware that the resulting import will be unrecoverable.

Further reading: Oracle data load.

EXPDP and IMPDP

Detailed Documentation of EXPDP

Detailed Documentation of IMPDP

Because Data Pump is server-based rather than client-based, dump files, log files, and SQL files are accessed relative to server-side directory paths. Data Pump requires you to specify directory paths as directory objects; a directory object maps a name to a directory path on the server's file system.

SQL> CREATE DIRECTORY data_pump AS 'C:\source';

After a directory is created, the user creating the directory object needs to grant READ or WRITE permission on the directory to other users.

SQL> GRANT READ, WRITE ON DIRECTORY data_pump TO hr;
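To verify the directory object and the path it maps to, the DBA_DIRECTORIES view can be queried, for example:

SQL> SELECT directory_name, directory_path FROM dba_directories;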

 

EXPORT DATA PUMP

Estimating size of HR schema without exporting objects. The estimate is for table row data only and does not include metadata.

expdp  fzafar/123@orcl  logfile=estimate.log schemas=HR  estimate_only=y

Exporting two schemas HR and SCOTT

expdp fzafar/123@orcl dumpfile=hr_scott.dmp logfile=hr_scott.log schemas=HR,SCOTT DIRECTORY=DATA_PUMP

Excluding views, packages, and functions from the export:

expdp fzafar/123 DIRECTORY=data_pump DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION

Exporting only the table data from the HR schema while excluding the COUNTRIES and REGIONS tables (note the operating system escaping needed when the filter is given on the command line):

expdp fzafar/123@orcl DUMPFILE=HREXCLUDE.dmp logfile=HREXCLUDE.log schemas=HR DIRECTORY=DATA_PUMP CONTENT=DATA_ONLY EXCLUDE=TABLE:\"IN \(\'COUNTRIES\',\'REGIONS\'\)\"

 
Using a parameter file avoids the command-line escaping shown above:

expdp fzafar/123@orcl PARFILE=exp.par

Contents of exp.par:

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
PARALLEL=3
EXCLUDE=TABLE:"IN ('COUNTRIES','REGIONS')"
 
EXP Parameter           EXPDP Parameter
OWNER                   SCHEMAS
LOG                     LOGFILE
STATISTICS=ESTIMATE     ESTIMATE=STATISTICS
ROWS                    CONTENT
FEEDBACK                STATUS
FILE                    DUMPFILE
CONSTRAINTS             EXCLUDE|INCLUDE=CONSTRAINTS
GRANTS                  EXCLUDE|INCLUDE=GRANTS
INDEXES                 EXCLUDE|INCLUDE=INDEX
TRIGGERS                EXCLUDE|INCLUDE=TRIGGER

 

ONLY AVAILABLE IN EXP

BUFFER, COMPRESS, DIRECT, INCTYPE, CONSISTENT, OBJECT_CONSISTENT, RECORDLENGTH, RESUMABLE, RESUMABLE_TIMEOUT, USERID, VOLSIZE

ONLY AVAILABLE IN EXPDP

EXCLUDE, INCLUDE, ESTIMATE_ONLY, PARALLEL

 

IMPORT DATA PUMP

 

IMP Parameter           IMPDP Parameter
FROMUSER                SCHEMAS
TOUSER                  REMAP_SCHEMA
IGNORE                  TABLE_EXISTS_ACTION
INDEXFILE               SQLFILE
SHOW                    SQLFILE

 

Importing only two tables from a schema

Schema Level Import

dumpfile=hr.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
SCHEMAS=HR
REMAP_SCHEMA=HR:HR_DUP
INCLUDE=TABLE:"IN ('REGIONS','LOCATIONS')"
TABLE_EXISTS_ACTION=REPLACE

 

Table Level Import

 

dumpfile=hr.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
TABLES=REGIONS,LOCATIONS
TABLE_EXISTS_ACTION=REPLACE

 

Example of using the SQLFILE parameter

impdp fzafar/123@orcl dumpfile=hr_scott.dmp DIRECTORY=data_pump SQLFILE=HR_SCOTT

Mapping two schemas HR and SCOTT to one schema HR_DUP (HR_DUP user should exist before impdp)

impdp fzafar/123@orcl parfile='c:\source\import\imp_hrdup.txt'

Contents of imp_hrdup.txt:
dumpfile=hr_scott.dmp
DIRECTORY=data_pump
logfile=hr_dup_tbl.log
SCHEMAS=HR,SCOTT
REMAP_SCHEMA=HR:HR_DUP
REMAP_SCHEMA=SCOTT:HR_DUP
EXCLUDE=SEQUENCE
EXCLUDE=TABLE:"IN ('HR.TEMP')"

Importing Selected tables from export dump of two schemas

dumpfile=hr_scott.dmp
DIRECTORY=data_pump
CONTENT=ALL
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
TABLES=HR.REGIONS,HR.LOCATIONS
TABLE_EXISTS_ACTION=skip

Importing only tables that are not already present in the schema (this is a workaround, not a recommended technique)

dumpfile=hr_scott.dmp
DIRECTORY=data_pump
CONTENT=ALL
logfile=hr_dup_tbl.log
REMAP_SCHEMA=HR:HR_DUP
REMAP_SCHEMA=SCOTT:HR_DUP
EXCLUDE=sequence
EXCLUDE=procedure
EXCLUDE=TRIGGER
EXCLUDE=view
TABLE_EXISTS_ACTION=skip
STATUS=0
JOB_NAME='imp_hr_dup'

EXP and IMP Utilities

Command line EXP utility example for export of HR schema

exp fzafar/123@orcl file=c:\source\hr.dmp log=c:\source\hr.log buffer=100000 owner=HR  CONSISTENT=Y DIRECT=Y

IMP utility example to import only two tables in another schema HR_DUP

imp fzafar/123@orcl fromuser=HR touser=HR_DUP buffer=100000 file=c:\source\hr.dmp log=c:\source\hr_dup Tables=Jobs,Locations

Export Parameters

This section contains descriptions of some of the Export command-line parameters.

BUFFER

Default: operating system-dependent.

Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:

buffer_size = rows_in_array * maximum_row_size

For example, with an array of 100 rows and a maximum row size of 120 bytes:

buffer_size = 100 * 120 = 12000 bytes

If you specify zero, the Export utility fetches only one row at a time. Tables with columns of type LOB, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.

Note:

The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

CONSISTENT (Default: n)

Specifies whether or not Export uses the SET TRANSACTION READ ONLY statement to ensure that the data seen by Export is consistent to a single point in time and does not change during the execution of the exp command. You should specify CONSISTENT=y when you anticipate that other applications will be updating the target data after an export has started.

OBJECT_CONSISTENT (Default: n)

Specifies whether or not the Export utility uses the SET TRANSACTION READ ONLY statement to ensure that the data exported is consistent to a single point in time and does not change during the export. If OBJECT_CONSISTENT is set to y, each object is exported in its own read-only transaction, even if it is partitioned. In contrast, if you use the CONSISTENT parameter, then there is only one read-only transaction.

 
Import Parameters
FROMUSER (Default: none)

A comma-delimited list of schemas to import.  The parameter enables you to import a subset of schemas from an export file containing multiple schemas (for example, a full export dump file or a multischema, user-mode export dump file).

Only the name of the object is affected. After the import has completed, items in any TOUSER schema should be manually checked for references to old (FROMUSER) schemas, and corrected if necessary.

TOUSER (Default: none)

Specifies a list of user names whose schemas will be targets for Import. The user names must exist prior to the import operation; otherwise an error is returned. The IMP_FULL_DATABASE role is required to use this parameter. To import to a different schema than the one that originally contained the object, specify TOUSER. For example:

imp SYSTEM/password FROMUSER=scott TOUSER=joe TABLES=emp

If multiple schemas are specified, the schema names are paired. The following example imports scott's objects into joe's schema, and fred's objects into ted's schema:

imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,ted
imp SYSTEM/password FROMUSER=scott,fred TOUSER=joe,joe (to import both users' objects into joe's schema)
INDEXFILE (Default: none)

Specifies a file to receive index-creation statements.

When this parameter is specified, index-creation statements for the requested mode are extracted and written to the specified file, rather than used to create indexes in the database. No database objects are imported.

If the Import parameter CONSTRAINTS is set to y, Import also writes table constraints to the index file. The file can then be edited (for example, to change storage parameters) and used as a SQL script to create the indexes.