Muhammad Nadeem Chaudhry's Blog


Database Cloning with RMAN – Duplicate Command

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 26, 2011

The RMAN DUPLICATE command creates a duplicate database (clone) from backups of the target database while leaving the original target database intact. The cloned database is an identical copy of the original database.

The duplicate database can be identical to the target database or contain only a subset of its tablespaces. The target site and the duplicate site can be on separate hosts or on the same host.

The duplicate database runs independently and can be used for several purposes, for example:

  • Testing backup and recovery procedures
  • Exporting data, such as a table that was inadvertently dropped from the production database, and then importing it back into the production database
  • Transferring a database from a production server to a testing server, or moving a database from one location to another

1. Duplicate and Standby databases

  • A duplicate database is distinct from a standby database, although both can be created with the DUPLICATE command.
  • A standby database is a copy of the primary database that is continually updated with archived logs from the primary database.
  • If the primary database is damaged or destroyed, you can fail over to the standby database and transform it into the new primary database.
  • A duplicate database, by contrast, is not intended for failover scenarios and does not support the various standby recovery and failover options.

2. Process of Duplication

  • Creates a control file for the duplicate database
  • Restores the target datafiles to the duplicate database and performs incomplete recovery using all available archived logs and incremental backups
  • Shuts down and restarts the auxiliary instance
  • Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs
  • Generates a new, unique database identifier (DBID) for the duplicate database

During duplication, RMAN must perform incomplete recovery because the online redo logs in the target are not backed up and cannot be applied to the duplicate database. The farthest that RMAN can go in recovery of the duplicate database is the most recent redo log archived by the target database.
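If you want recovery of the duplicate to stop at a specific point rather than at the most recent archived redo, you can combine SET UNTIL with DUPLICATE. A minimal sketch (the timestamp and pfile path are illustrative, reusing the names from the steps below):

```sql
RMAN> run {
  set until time "TO_DATE('26-12-2011 12:00:00','DD-MM-YYYY HH24:MI:SS')";
  allocate auxiliary channel ch1 type disk;
  duplicate target database to lhdup
    pfile='c:\oracle\product\10.2.0\db_1\database\initLHDUP.ora';
}
```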

3. Creating a Duplicate Database with RMAN:

  • Connect RMAN to both the target (primary) database and an auxiliary instance started in NOMOUNT mode.
  • Allocate at least one auxiliary channel on the auxiliary instance; this starts a server session on the duplicate host.
  • This channel then restores the necessary backups of the primary database and initiates recovery.

4. Backups and archived logs required for duplication:

  • The last full backup and the archived logs generated since it are required for duplication.
  • A parameter file is needed for starting the new instance.
  • The RMAN client can run on any host, but all backups and archived logs must be accessible on the duplicate host.
  • If the duplicate host is not the same as the target host, copy the backups and archived logs to the duplicate host using the same full paths as on the primary database. Two options can be used:
    1. Manually transfer the files from the primary host to an identical path on the remote host.
    2. Manually transfer the backups from the primary host to a new location on the duplicate host. Run the CATALOG command to add these copies to the RMAN repository at the duplicate host.
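For option 2, the CATALOG command registers everything found under a new location; a sketch (the backup path is a placeholder):

```sql
RMAN> CATALOG START WITH 'C:\backups_from_primary\';
```

RMAN then considers all backup pieces and archived log copies under that path when restoring for the duplicate.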

Steps to perform cloning:

 

STEP – 0          Installation of Oracle Database software on duplicate host

Install the Oracle Database software on the duplicate host, if required, where you want to create the duplicate database. For example:

Screen 1:          Type of installation  =  Enterprise Edition
Screen 2:          HOME = C:\oracle\product\10.2.0\db_1
Screen 3:          No Action
Screen 4:          Install DATABASE software only
Screen 5:          Click Install

 STEP – 1          Create an Oracle Password File for the Auxiliary Instance

Create a password file on the duplicate host using the password file utility. This password file will be used when creating the new auxiliary instance.

orapwd file=C:\oracle\product\10.2.0\db_1\DATABASE\PWDLHDUP.ora password=DUP

 

STEP – 2          Create an Initialization Parameter File for the Auxiliary Instance

Use the following commands to create a parameter file from the target database. This file will be used when creating the new auxiliary instance.

Login to Target DB

SQL> CONNECT SYS/****@LHR AS SYSDBA
SQL> CREATE PFILE='C:\initLHDUP.ora' FROM SPFILE;

The file will be created on the target host. Copy or move the newly created file to the following location on the duplicate host:

'C:\oracle\product\10.2.0\db_1\DATABASE\initLHDUP.ora'

Modify the parameters. All path parameters must point to locations accessible on the duplicate host.

db_name='LHDUP'

db_file_name_convert=('d:\oracle_data_files\oradata\LHDB','c:\oracle_data_files\LHDUP\oradata')
log_file_name_convert=('d:\oracle_data_files\oradata\LHDB','c:\oracle_data_files\LHDUP\oradata')

control_files='C:\oracle_data_files\LHDUP\oradata\control01.ctl',
              'C:\oracle_data_files\LHDUP\oradata\control02.ctl',
              'C:\oracle_data_files\LHDUP\oradata\control03.ctl'

compatible='10.2.0.1.0'
db_block_size=8192        # must match the block size of the target database

db_cache_size=50331648
java_pool_size=25165824
large_pool_size=4194304
shared_pool_size=33554432
streams_pool_size=0
pga_aggregate_target=16777216
sga_max_size=167772160    # must be at least sga_target
sga_target=113246208

audit_file_dest='C:\oracle_data_files\LHDUP\admin\adump'
background_dump_dest='C:\oracle_data_files\LHDUP\admin\bdump'
core_dump_dest='C:\oracle_data_files\LHDUP\admin\cdump'
user_dump_dest='C:\oracle_data_files\LHDUP\admin\udump'

Create directory structure on duplicate host

Directory structure on the target host (used to keep backups and archived logs for duplication):

MKDIR F:\flash_recovery_area\LHDB\BACKUPSET
MKDIR F:\flash_recovery_area\LHDB\ARCHIVELOG

New Directory structure for duplicate host (or use batch file provided)


MKDIR C:\oracle_data_files\LHDUP\oradata
MKDIR C:\oracle_data_files\LHDUP\admin
MKDIR C:\oracle_data_files\LHDUP\flash_recovery_area\ARCHIVELOG 
MKDIR C:\oracle_data_files\LHDUP\admin\adump
MKDIR C:\oracle_data_files\LHDUP\admin\bdump
MKDIR C:\oracle_data_files\LHDUP\admin\cdump
MKDIR C:\oracle_data_files\LHDUP\admin\udump
MKDIR C:\oracle_data_files\LHDUP\admin\dpdump
MKDIR C:\oracle_data_files\LHDUP\admin\pfile
MKDIR C:\oracle_data_files\LHDUP\admin\scripts

 STEP – 3          Create Auxiliary Instance

cmd
cd\

ORADIM -NEW -SID LHDUP -INTPWD change_on_install -STARTMODE auto -PFILE 'c:\oracle\product\10.2.0\db_1\database\initLHDUP.ora'

REM Alternative, without specifying an internal password:
ORADIM -NEW -SID LHDUP -STARTMODE auto -PFILE 'c:\oracle\product\10.2.0\db_1\database\initLHDUP.ora'

REM To delete an existing instance:
ORADIM -DELETE -SID LHDUP

SET ORACLE_SID=LHDUP

 STEP – 4          Establish Oracle Net Connectivity to the Auxiliary Instance ( on duplicate host)

  • Add a listener on the duplicate host, if not already added, using the Oracle Net Configuration Assistant wizard.
  • Modify the listener.ora file using the Net Manager utility to add the auxiliary instance:

LISTENER > Database Services > Add Database

Global Database Name   = LHDUP
SID                    = LHDUP
Oracle Home Directory  = C:\oracle\product\10.2.0\db_1
  • Save the configuration and exit (restart the listener service).
  • Modify the tnsnames.ora file to add the following services:

LHDUP =                # Net connection for auxiliary instance (required)
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (COMMUNITY = tcp.world)(PROTOCOL = TCP)(Host = SRDCIT04)(Port = 1521))
    )
    (CONNECT_DATA =
      (SID = LHDUP)
    )
  )

LHR =                  # Net connection for target database (required)
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (COMMUNITY = tcp.world)(PROTOCOL = TCP)(Host = 150.154.0.2)(Port = 1521))
    )
    (CONNECT_DATA =
      (SID = LHDB)
    )
  )

DEVDB =                # Net connection for recovery catalog (optional)
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (COMMUNITY = tcp.world)(PROTOCOL = TCP)(Host = 150.154.0.31)(Port = 1521))
    )
    (CONNECT_DATA =
      (SID = DEVDB)
    )
  )

STEP – 5          Start Auxiliary Instance

C:\> sqlplus "sys/change_on_install@lhdup as sysdba"
SQL> startup nomount;

 STEP – 6          Mount or Open the Target Database  (only needed if the target database is not already open)

C:\> sqlplus "sys/*****@lhr as sysdba"
SQL> startup open;

STEP – 7          Make the necessary backups and archived logs available

Ensure that you have a current level 0 backup, any level 1 incremental backups, and all archived logs generated since the level 0 backup.
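On the target database, such a backup could be produced with something like the following sketch:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
```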

 STEP – 8          Login to Target and Auxiliary instance

RMAN
RMAN> CONNECT TARGET SYS@LHR
RMAN> CONNECT CATALOG RMANLH/CAT@DEVDB    # optional; skip if you are not using a recovery catalog
RMAN> CONNECT AUXILIARY SYS/DUP@LHDUP

STEP – 9          RUN RMAN DUPLICATE command

run {
  allocate auxiliary channel ch1 type disk;
  duplicate target database to lhdup
    pfile='c:\oracle\product\10.2.0\db_1\database\initLHDUP.ora';
}

 

PATH – Environment Variable Issue

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 9, 2011
Set the system environment variable as:
Variable Name: PATH
Variable Value: %ORACLE_HOME%\bin;C:\WINDOWS\System32

Materialized View in Oracle

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 7, 2011

A materialized view in Oracle is a database object that contains the results of a query. Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.

A materialized view can query tables, views, and other materialized views. Collectively these are called master tables (a replication term) or detail tables (a data warehouse term).

For replication purposes, materialized views allow you to maintain copies of remote data on your local node. These copies are read-only. If you want to update the local copies, you have to use the Advanced Replication feature. You can select data from a materialized view as you would from a table or view.

For data warehousing purposes, the materialized views commonly created are aggregate views, single-table aggregate views, and join views.

In this article, we shall see how to create a materialized view in Oracle and discuss its refresh options.

In replication environments, the materialized views commonly created are primary key, rowid, and subquery materialized views.

Primary Key Materialized Views

The following statement creates the primary-key materialized view on the table emp located on a remote database.

SQL>          CREATE MATERIALIZED VIEW mv_emp_pk
                    REFRESH FAST START WITH SYSDATE
                    NEXT  SYSDATE + 1/48
                    WITH PRIMARY KEY
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

Note: When you create a materialized view using the FAST option, you will need to create a materialized view log on the master table(s), as shown below:

SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.

 Rowid Materialized Views

The following statement creates the rowid materialized view on table emp located on a remote database:

SQL>          CREATE MATERIALIZED VIEW mv_emp_rowid
                    REFRESH WITH ROWID
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

 Subquery Materialized Views

The following statement creates a subquery materialized view based on the emp and dept tables located on the remote database:

 SQL> CREATE MATERIALIZED VIEW  mv_empdept
AS SELECT * FROM emp@remote_db e
WHERE EXISTS
     (SELECT * FROM dept@remote_db d
     WHERE e.dept_no = d.dept_no);

REFRESH CLAUSE

[ REFRESH [ FAST | COMPLETE | FORCE ]
  [ ON DEMAND | ON COMMIT ]
  [ START WITH date ] [ NEXT date ]
  [ WITH { PRIMARY KEY | ROWID } ] ]

 

The refresh option specifies:

  1. The refresh method used by Oracle to refresh data in the materialized view
  2. Whether the view is primary key based or rowid based
  3. The time and interval at which the view is to be refreshed

 Refresh Method – FAST Clause

A FAST refresh uses the materialized view logs (as seen above) to send only the rows that have changed in the master tables to the materialized view.

You must create a materialized view log on the master table before you can specify the REFRESH FAST clause.

SQL> CREATE MATERIALIZED VIEW LOG ON emp;

Materialized view log created. 

Materialized views are not eligible for fast refresh if the defined subquery contains an analytic function.
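To see whether (and why) a particular materialized view is eligible for fast refresh, DBMS_MVIEW.EXPLAIN_MVIEW can be used; it writes its findings to MV_CAPABILITIES_TABLE, which is created by the utlxmv.sql script. A sketch, using the view created earlier:

```sql
SQL> @?/rdbms/admin/utlxmv.sql                -- creates MV_CAPABILITIES_TABLE
SQL> EXEC DBMS_MVIEW.EXPLAIN_MVIEW('MV_EMP_PK');
SQL> SELECT capability_name, possible, msgtxt
     FROM mv_capabilities_table
     WHERE capability_name LIKE 'REFRESH_FAST%';
```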

 Refresh Method – COMPLETE Clause

The complete refresh re-creates the entire materialized view. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible.

 Refresh Method – FORCE Clause

When you specify a FORCE clause, Oracle will perform a fast refresh if one is possible or a complete refresh otherwise. If you do not specify a refresh method (FAST, COMPLETE, or FORCE), FORCE is the default.
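A materialized view defined with ON DEMAND can also be refreshed manually with DBMS_MVIEW.REFRESH, whose method argument mirrors these clauses ('f' = fast, 'c' = complete, '?' = force). A sketch:

```sql
SQL> EXEC DBMS_MVIEW.REFRESH('MV_EMP_PK', '?');   -- force: fast if possible, else complete
```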

 PRIMARY KEY and ROWID Clause

WITH PRIMARY KEY is used to create a primary key materialized view, i.e. the materialized view is based on the primary key of the master table instead of ROWID (for the ROWID clause). PRIMARY KEY is the default option. To use the PRIMARY KEY clause you must have a primary key defined on the master table; otherwise, use a rowid-based materialized view.

Primary key materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.

Rowid materialized views should have a single master table and cannot contain any of the following:

  • DISTINCT or aggregate functions
  • GROUP BY clauses, subqueries, joins, or set operations

 Timing the refresh

The START WITH clause tells the database when to perform the first replication from the master table to the local base table. It should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.

SQL>          CREATE MATERIALIZED VIEW mv_emp_pk
                    REFRESH FAST
                    START WITH SYSDATE
                    NEXT  SYSDATE + 2
                    WITH PRIMARY KEY
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

In the above example, the first copy of the materialized view is made at SYSDATE and the interval at which the refresh has to be performed is every two days.

Summary

Materialized Views thus offer us flexibility of basing a view on Primary key or ROWID, specifying refresh methods and specifying time of automatic refreshes.

(http://www.databasejournal.com/features/oracle/article.php/2192071/Materialized-Views-in-Oracle.htm)

How to use Oracle Logminer to analyze Logfile

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 7, 2011

We know that any change to the database is recorded in the online redo log files. If your database is in archivelog mode, the online redo log files are archived, and the archived logs contain the database changes. With these log files we can analyze any change made to the database.

Put simply, an undo entry provides the values of data as stored before a change, and the redo entry provides the values of data as stored after a change. Because changes to undo segments are themselves database changes, they also generate redo entries. So this information ends up in the online redo logs and then in the archived logs.

So from the online redo logs and the archived redo logs we can get the database's redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get back the undo and redo information in a human-readable format.

Another use of LogMiner is to investigate the database's past. With Flashback Query we can get prior values of a record in a table at some point in the past, but this is limited by the UNDO_RETENTION parameter (which is often as short as 30 minutes for an OLTP database). So for analyzing past activity on the database, LogMiner is a good choice.
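For comparison, a Flashback Query against recent history looks like the sketch below (emp/empno are the standard demo table and column, used here for illustration; the result is only available while the undo data is retained):

```sql
SQL> SELECT * FROM emp
     AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' MINUTE)
     WHERE empno = 7369;
```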

Below is the step-by-step procedure for using LogMiner.

1) Ensure that supplemental logging is enabled at a minimal level
To work with LogMiner, supplemental logging must be enabled on the source database at a minimum level. By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. To use it, you need to turn supplemental logging on.

You can check whether supplemental logging is on or off with the following command:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
——–
NO

To turn it on at a minimal level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

2) Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If it is already installed, skip this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with DBCA, this script was run automatically, so you can skip this step. However, if you created the database manually with the CREATE DATABASE … command, you must run the script before using LogMiner. That is:
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql

3) Grant the EXECUTE_CATALOG_ROLE role.

Grant EXECUTE_CATALOG_ROLE to the user who will perform the mining. Here the user is NADEEM.

SQL>GRANT EXECUTE_CATALOG_ROLE TO NADEEM;

4) Create the synonym. NADEEM creates a public synonym:

CREATE PUBLIC SYNONYM DBMS_LOGMNR FOR SYS.DBMS_LOGMNR;

All four steps above are needed only once.

5) Specify the scope of the mining.
Now decide which files you will analyze. Depending on your scenario, you may be interested in archived redo log files or in online redo log files. In this post, suppose you have a recent problem in your database, so you are interested in the online redo log files. You can see the current online redo logs with:
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
——————————————————————————–
/oradata2/data1/dbase/redo01.log
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo3.log

Sometimes, you want to mine the redo log file that was most recently archived.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
——————————————————————————–
/oradata2/flash_recovery_area/DBASE/archivelog/2008_08_03/o1_mf_1_839_49bodr0k_.arc

With DBMS_LOGMNR.ADD_LOGFILE, specify the files that you want to mine.
Here I have given the online redo log files redo01.log and redo03.log.
SQL> BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/oradata2/data1/dbase/redo01.log');
  DBMS_LOGMNR.ADD_LOGFILE('/oradata2/data1/dbase/redo03.log');
END;
/

6)Start the LogMiner session and specify a dictionary.
To start the LogMiner session:

BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/

The OPTIONS parameter tells Oracle Database to read the dictionary information needed to translate object names from the online catalog when LogMiner starts.

7) Check the contents of the V$LOGMNR_CONTENTS view.
To find out what DML or DDL happened to table TEST3, and when, we can query V$LOGMNR_CONTENTS:

SQL> select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'TEST3';

USERNAM TIMESTAMP SEG_TY SEG_NAME TABLE_SPAC SID SERIAL# OPERATION
——- —————– —— ———- ———- ———- ———- ——————————–
08/03/08 02:57:35 TABLE TEST3 153 3272 DDL
08/03/08 02:57:41 TABLE TEST3 SYSTEM 153 3272 INSERT
08/03/08 02:57:55 TABLE TEST3 SYSTEM 153 3272 UPDATE
08/03/08 03:01:35 TABLE TEST3 SYSTEM 153 3272 UPDATE

We can get SQL_UNDO and SQL_REDO information by,

SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'TEST3' and OPERATION = 'UPDATE';

SQL_UNDO                                            SQL_REDO
--------------------------------------------------  --------------------------------------------------
update "SYS"."TEST3" set "A" = '3' where "A" = '9'  update "SYS"."TEST3" set "A" = '9' where "A" = '3'
and ROWID = 'AAAOKVAABAAAP8qAAA';                   and ROWID = 'AAAOKVAABAAAP8qAAA';

update "SYS"."TEST3" set "A" = '9' where "A" = '10' update "SYS"."TEST3" set "A" = '10' where "A" = '9'
and ROWID = 'AAAOKVAABAAAP8qAAA';                   and ROWID = 'AAAOKVAABAAAP8qAAA';

8)End the LogMiner session.

Use the DBMS_LOGMNR.END_LOGMNR procedure.
SQL> BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.

References:

http://arjudba.blogspot.com/2008/08/how-to-use-oracle-logminer-to-analysis.html

http://www.oracle-base.com/articles/11g/FlashbackAndLogminerEnhancements_11gR1.php

 

 

How to compile invalid objects?

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 1, 2011

This is a simple post on how to compile invalid objects. My suggestion is to perform these steps daily (to keep yourself away from unexpected emails or phone calls).

1- Log in to Unix or Linux as the applmgr user.

2- Find the number of invalid objects:

   SQL>sqlplus Username/Password

   SQL>SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

   You can use the utlrp.sql script, located in $ORACLE_HOME/rdbms/admin, to compile the invalid objects:

   SQL>@utlrp.sql

   After that, check again using the same SQL statement:

   SQL>SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
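If only a few objects remain invalid afterwards, they can be recompiled individually. A query like the following sketch generates the needed ALTER statements (the object types are restricted to those that accept a plain COMPILE clause; package bodies need COMPILE BODY instead):

```sql
SQL>SELECT 'ALTER ' || object_type || ' ' || owner || '.' || object_name || ' COMPILE;'
    FROM dba_objects
    WHERE status = 'INVALID'
      AND object_type IN ('VIEW','PROCEDURE','FUNCTION','PACKAGE','TRIGGER');
```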