Muhammad Nadeem Chaudhry's Blog


Materialized View in Oracle

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 7, 2011

A materialized view in Oracle is a database object that contains the results of a query. Materialized views are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views that store data based on remote tables are also known as snapshots.

A materialized view can query tables, views, and other materialized views. Collectively these are called master tables (a replication term) or detail tables (a data warehouse term).

For replication purposes, materialized views allow you to maintain copies of remote data on your local node. These copies are read-only. If you want to update the local copies, you have to use the Advanced Replication feature. You can select data from a materialized view as you would from a table or view.

For data warehousing purposes, the materialized views commonly created are aggregate views, single-table aggregate views, and join views.

 In this article, we shall see how to create a materialized view in Oracle and discuss the refresh options of the view.

In replication environments, the materialized views commonly created are primary key, rowid, and subquery materialized views.

Primary Key Materialized Views

The following statement creates the primary-key materialized view on the table emp located on a remote database.

SQL>          CREATE MATERIALIZED VIEW mv_emp_pk
                    REFRESH FAST START WITH SYSDATE
                    NEXT  SYSDATE + 1/48
                    WITH PRIMARY KEY
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

Note: When you create a materialized view using the FAST refresh option, you need to create a materialized view log on the master table(s), as shown below:

SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.

 Rowid Materialized Views

The following statement creates the rowid materialized view on table emp located on a remote database:

SQL>          CREATE MATERIALIZED VIEW mv_emp_rowid
                    REFRESH WITH ROWID
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

 Subquery Materialized Views

The following statement creates a subquery materialized view based on the emp and dept tables located on the remote database:

 SQL> CREATE MATERIALIZED VIEW  mv_empdept
AS SELECT * FROM emp@remote_db e
WHERE EXISTS
     (SELECT * FROM dept@remote_db d
     WHERE e.dept_no = d.dept_no);

REFRESH CLAUSE

[refresh [fast|complete|force]

                    [on demand | on commit]

                    [start with date] [next date]

                    [with {primary key|rowid}]]

 

The refresh option specifies:

  1. The refresh method used by Oracle to refresh data in the materialized view
  2. Whether the view is primary key based or rowid based
  3. The time and interval at which the view is to be refreshed

 Refresh Method – FAST Clause

A FAST refresh uses the materialized view logs (as seen above) to send only the rows that have changed in the master tables to the materialized view.

You must create a materialized view log on the master table(s) if you specify the REFRESH FAST clause.

SQL> CREATE MATERIALIZED VIEW LOG ON emp;

Materialized view log created. 

Materialized views are not eligible for fast refresh if the defined subquery contains an analytic function.

 Refresh Method – COMPLETE Clause

The complete refresh re-creates the entire materialized view. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible.

 Refresh Method – FORCE Clause

When you specify a FORCE clause, Oracle will perform a fast refresh if one is possible or a complete refresh otherwise. If you do not specify a refresh method (FAST, COMPLETE, or FORCE), FORCE is the default.
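Regardless of the refresh clause, a materialized view can also be refreshed manually with the DBMS_MVIEW package. A minimal sketch using the mv_emp_pk view from the examples above ('C' requests a complete refresh, '?' a force refresh):

```sql
-- Force a complete refresh of mv_emp_pk right now
BEGIN
  DBMS_MVIEW.REFRESH(list => 'mv_emp_pk', method => 'C');
END;
/

-- '?' (force) performs a fast refresh if possible, complete otherwise
BEGIN
  DBMS_MVIEW.REFRESH(list => 'mv_emp_pk', method => '?');
END;
/
```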

 PRIMARY KEY and ROWID Clause

WITH PRIMARY KEY creates a primary key materialized view, i.e. one based on the primary key of the master table instead of its ROWID (which the WITH ROWID clause uses). PRIMARY KEY is the default option. To use the PRIMARY KEY clause you must have a primary key defined on the master table; otherwise you should use a ROWID-based materialized view.

Primary key materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.

Rowid materialized views must have a single master table and cannot contain any of the following:

  • DISTINCT or aggregate functions
  • GROUP BY clauses
  • Subqueries, joins, and set operations

 Timing the refresh

The START WITH clause tells the database when to perform the first replication from the master table to the local base table. It should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.

SQL>          CREATE MATERIALIZED VIEW mv_emp_pk
                    REFRESH FAST
                    START WITH SYSDATE
                    NEXT  SYSDATE + 2
                    WITH PRIMARY KEY
                    AS SELECT * FROM emp@remote_db;
Materialized view created.

In the above example, the first copy of the materialized view is made at SYSDATE and the interval at which the refresh has to be performed is every two days.
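You can verify when and how a materialized view was last refreshed from the data dictionary (the USER_MVIEWS view and these columns are available in Oracle 10g and later):

```sql
SELECT mview_name, last_refresh_type, last_refresh_date
  FROM user_mviews
 WHERE mview_name = 'MV_EMP_PK';
```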

Summary

Materialized views thus offer us the flexibility of basing a view on a primary key or ROWID, specifying refresh methods, and specifying the time of automatic refreshes.

(http://www.databasejournal.com/features/oracle/article.php/2192071/Materialized-Views-in-Oracle.htm)

How to use Oracle Logminer to analyze Logfile

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 7, 2011

We know that any change to the database is recorded in the online redo log files. If your database is in archivelog mode, the online redo log files are archived, and those archived logs also contain the database changes. With these log files we can analyze any change in the database.

Simply put, an undo entry provides the values of data stored before a change, and a redo entry provides the values of data stored after a change. Because undo segments are themselves database changes, they also generate redo entries. So we can get both from the online redo logs and, later, from the archived logs.

So from the online redo logs and from the archived redo logs we can get database redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get back the undo and redo information in a human-readable format.

Another scenario for LogMiner is investigating the database's past. With Flashback Query we can get the prior values of a record in a table at some point in the past, but it is limited by the UNDO_RETENTION parameter (which is often as short as 30 minutes for an OLTP database). So to analyze past activity on the database, LogMiner is a good choice.

Below I will show the step-by-step procedure for using LogMiner.

1)Ensure that supplemental logging is enabled at a minimal level
To work with LogMiner you must have supplemental logging enabled on the source database at a minimum level. By default, Oracle Database does not provide any supplemental logging, which means that by default LogMiner is not usable. To use it, you need to turn supplemental logging on.

You can check whether supplemental logging is on or off with the following command:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
——–
NO

To enable it at a minimal level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

2)Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If it is already installed, ignore this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with DBCA, this script ran automatically, so you can skip this step. However, if you created the database manually with the CREATE DATABASE … command, you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql

3)Grant the EXECUTE_CATALOG_ROLE role.

Give the user who will do the mining task the EXECUTE_CATALOG_ROLE role. Here the user is NADEEM.

SQL>GRANT EXECUTE_CATALOG_ROLE TO NADEEM;

4)Create the synonym. NADEEM creates a public synonym:

CREATE PUBLIC SYNONYM DBMS_LOGMNR FOR SYS.DBMS_LOGMNR;

All four steps above are needed only once.

5)Specify the scope of the mining.
Now decide which files you will analyze. Depending on your scenario, you may be interested in the archived redo log files or the online redo log files. In this post, suppose you have a recent problem in your database, so you are interested in the online redo log files. You can see the
current online redo logs by,
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;

LOGFILENAME
——————————————————————————–
/oradata2/data1/dbase/redo01.log
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo3.log

Sometimes, you want to mine the redo log file that was most recently archived.

SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

NAME
——————————————————————————–
/oradata2/flash_recovery_area/DBASE/archivelog/2008_08_03/o1_mf_1_839_49bodr0k_.
arc
With DBMS_LOGMNR.ADD_LOGFILE, specify the files that you want to mine.
Here I have given the online redo log files redo01.log and redo03.log.
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/
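ADD_LOGFILE also accepts an OPTIONS parameter: DBMS_LOGMNR.NEW starts a fresh log file list, and DBMS_LOGMNR.ADDFILE (the default) appends to it. A sketch using the same file names as above:

```sql
BEGIN
  -- Start a new log file list, discarding any previously added files
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/oradata2/data1/dbase/redo01.log',
    Options     => DBMS_LOGMNR.NEW);
  -- Append another file to the list
  DBMS_LOGMNR.ADD_LOGFILE(
    LogFileName => '/oradata2/data1/dbase/redo03.log',
    Options     => DBMS_LOGMNR.ADDFILE);
END;
/
```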

6)Start the LogMiner session and specify a dictionary.
To start the LogMiner session:

BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/

The OPTIONS parameter tells LogMiner, as it starts, to read dictionary information from the online catalog so that object names can be translated while the logs are mined.

7)Check the contents of the V$LOGMNR_CONTENTS view.
To see what DML or DDL happened to table TEST3, and when, we can query V$LOGMNR_CONTENTS:

SQL> select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'TEST3';

USERNAM TIMESTAMP SEG_TY SEG_NAME TABLE_SPAC SID SERIAL# OPERATION
——- —————– —— ———- ———- ———- ———- ——————————–
08/03/08 02:57:35 TABLE TEST3 153 3272 DDL
08/03/08 02:57:41 TABLE TEST3 SYSTEM 153 3272 INSERT
08/03/08 02:57:55 TABLE TEST3 SYSTEM 153 3272 UPDATE
08/03/08 03:01:35 TABLE TEST3 SYSTEM 153 3272 UPDATE

We can get SQL_UNDO and SQL_REDO information by,

SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'TEST3' and OPERATION='UPDATE';

SQL_UNDO SQL_REDO
————————————————– ————————————————–
update "SYS"."TEST3" set "A" = '3' where "A" = '9' update "SYS"."TEST3" set "A" = '9' where "A" = '3'
and ROWID = 'AAAOKVAABAAAP8qAAA'; and ROWID = 'AAAOKVAABAAAP8qAAA';

update "SYS"."TEST3" set "A" = '9' where "A" = '10 update "SYS"."TEST3" set "A" = '10' where "A" = '9
' and ROWID = 'AAAOKVAABAAAP8qAAA'; ' and ROWID = 'AAAOKVAABAAAP8qAAA';

8)End the LogMiner session.

Use the DBMS_LOGMNR.END_LOGMNR procedure.
SQL> BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.
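A LogMiner session can also be bounded by time instead of by explicitly added files. A sketch, assuming the database is in archivelog mode and the archived logs covering the window are still available (STARTTIME, ENDTIME, and CONTINUOUS_MINE are documented parameters/options of DBMS_LOGMNR.START_LOGMNR in this release era; the dates shown are just examples):

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    StartTime => TO_DATE('03-08-2008 02:00:00', 'DD-MM-YYYY HH24:MI:SS'),
    EndTime   => TO_DATE('03-08-2008 04:00:00', 'DD-MM-YYYY HH24:MI:SS'),
    Options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                 DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
```

With CONTINUOUS_MINE, LogMiner locates and adds the needed archived logs itself, so the ADD_LOGFILE step can be skipped.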

References:

http://arjudba.blogspot.com/2008/08/how-to-use-oracle-logminer-to-analysis.html

http://www.oracle-base.com/articles/11g/FlashbackAndLogminerEnhancements_11gR1.php

 

 

How to compile invalid objects?

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 1, 2011

This is a simple post on how to compile invalid objects. My suggestion is to do these steps daily (to keep yourself away from any unexpected emails or phone calls).

1- Log in to unix or linux as applmgr user.

2- Find the number of invalid objects:

   SQL>sqlplus Username/Password

   SQL>Select COUNT (*) from DBA_OBJECTS where status='INVALID';

   You can use the utlrp.sql script, located in $ORACLE_HOME/rdbms/admin, to compile the invalid objects:

   SQL>@utlrp.sql

   Afterwards, check again with the same SQL statement:

   SQL>select Count (*) from dba_objects where status='INVALID';
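As an alternative to utlrp.sql, the invalid objects of a single schema can be recompiled with DBMS_UTILITY.COMPILE_SCHEMA (the schema name SCOTT below is just an example):

```sql
-- Recompile only the invalid objects in the SCOTT schema
BEGIN
  DBMS_UTILITY.COMPILE_SCHEMA(schema => 'SCOTT', compile_all => FALSE);
END;
/
```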

OS error in starting service OracleMTSRecoveryService 11g

Posted in Uncategorized by Muhammad Nadeem Chaudhry on November 17, 2011

Uploading CSV using Oracle Application Express (APEX)

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 18, 2011
Sometimes we need the flexibility to upload data to a table from a CSV sheet. This can be implemented very well in JSP and other applications. I had a requirement to implement the same in Oracle Application Express (APEX).

I was going through a blog post by Ittichai on the same topic, and he gives very clean steps for getting this to work. I followed the steps and it worked for me. I have documented similar steps below as per my understanding. Here are the details.

Pre-requisite:

File should be in CSV Format.

First you will need to identify the workspace ID.

SQL> col WORKSPACE_ID format 999999999999999

SQL> select workspace, schemas, workspace_id from apex_workspaces;

You won’t be able to see anything yet.

SQL> select id, flow_id, name, filename from wwv_flow_files;

But after setting the workspace ID, you should be able to see its contents.

SQL> exec wwv_flow_api.set_security_group_id('858728211457679');

PL/SQL procedure successfully completed

SQL> select id, flow_id, name, filename from wwv_flow_files;

Implementing data upload using CSV sheet

Following is the table detail which I want to populate from data in CSV sheet.

SQL> DESC Table_Name;

1) Created a "File Browse" item on a page, with the internal name P35_UPLOAD.

2) Created a button with the internal name "Upload". This button is used to upload and process the data in the CSV sheet.

3) Created a conditional branch so that when the upload button is pressed, it branches to the same page (35 in my case).

4) When we select a CSV sheet using the browse button and click the submit button, it branches to the same page (indirectly we are saying that it is going to refresh the page). In that case, even if there is no code written in the backend, APEX will load the CSV sheet in BLOB format into the table wwv_flow_files.

This is an internal table used by APEX. Our task is to read this table and get the row that was inserted. With each upload, one row is inserted into the table. One of the columns (BLOB_CONTENT) is a BLOB holding the actual CSV sheet; all the other columns are metadata about the CSV.

When we upload a CSV sheet, a random name is generated for the file in the form FXXXXX/<file_name_you_uploaded>. You can use this to get the file details. Also, when you refresh the page, the file is uploaded to wwv_flow_files, but as soon as the refresh completes and the page is displayed again, that row is deleted. So you have to process the row immediately, in the code that runs when the submit button is pressed.

5) Following is the code for the same

Before using the code below, make sure you have a function hex_to_decimal in your database. Its code is given here:

create or replace function hex_to_decimal
-- this function is based on one by Connor McDonald
-- http://www.jlcomp.demon.co.uk/faq/base_convert.html
( p_hex_str in varchar2 ) return number
is
  v_dec   number;
  v_hex   varchar2(16) := '0123456789ABCDEF';
begin
  v_dec := 0;
  for indx in 1 .. length(p_hex_str)
  loop
    v_dec := v_dec * 16 + instr(v_hex, upper(substr(p_hex_str, indx, 1))) - 1;
  end loop;
  return v_dec;
end hex_to_decimal;
/
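A quick sanity check of the function (FF in hex is 255 in decimal):

```sql
SELECT hex_to_decimal('FF') AS dec_value FROM dual;
-- DEC_VALUE
-- ---------
--       255
```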

Below is the code for uploading the CSV sheet. 

DECLARE
  v_blob_data   BLOB;
  v_blob_len    NUMBER;
  v_position    NUMBER;
  v_raw_chunk   RAW(10000);
  v_char        CHAR(1);
  c_chunk_len   NUMBER := 1;
  v_line        VARCHAR2(32767) := NULL;
  v_data_array  wwv_flow_global.vc_arr2;
  v_rows        NUMBER;
  v_sr_no       NUMBER := 1;
BEGIN
  DELETE FROM MACS;
  -- Read the most recently uploaded file from wwv_flow_files
  SELECT blob_content INTO v_blob_data
    FROM wwv_flow_files
   WHERE last_updated = (SELECT MAX(last_updated) FROM wwv_flow_files
                          WHERE updated_by = :APP_USER)
     AND id = (SELECT MAX(id) FROM wwv_flow_files
                WHERE updated_by = :APP_USER);
  v_blob_len := dbms_lob.getlength(v_blob_data);
  v_position := 1;
  -- Read and convert binary to char
  WHILE (v_position <= v_blob_len) LOOP
    v_raw_chunk := dbms_lob.substr(v_blob_data, c_chunk_len, v_position);
    v_char := chr(hex_to_decimal(rawtohex(v_raw_chunk)));
    v_line := v_line || v_char;
    v_position := v_position + c_chunk_len;
    -- When a whole line is retrieved
    IF v_char = CHR(10) THEN
      -- Convert comma to : to use wwv_flow_utilities
      v_line := REPLACE(v_line, ',', ':');
      -- Convert each column separated by : into an array of data
      v_data_array := wwv_flow_utilities.string_to_table(v_line);
      -- Insert data into target table
      EXECUTE IMMEDIATE 'insert into MACS (ITEM, MAC) values (:1, :2)'
        USING v_data_array(1), v_data_array(2);
      -- Clear out
      v_line := NULL;
      v_sr_no := v_sr_no + 1;
    END IF;
  END LOOP;
END;

Create a new PL/SQL process under the Processes section in APEX and put this code under it. Make the process run when the upload button is pressed. The data in the CSV sheet will get loaded into the table. Hope this helps!

Reference:

http://oraexplorer.blogspot.com/2007/11/apex-to-upload-text-file-and-write-into.html

Creating Help in Oracle APEX 4.2

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 15, 2011

Creating Oracle Database Link

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 11, 2011

create [public] database link db_link_name
connect to      user_name
identified by   password
using           'service_name'

The service_name must be resolvable on the server, for example by entering it into the tnsnames.ora file. A database link is required for various purposes, such as primary key materialized view replication. This creates a one-way link, as specified in the CONNECT TO clause.
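A concrete sketch, assuming a TNS alias remote_db exists in tnsnames.ora and a remote account scott/tiger (both hypothetical names):

```sql
CREATE DATABASE LINK remote_db
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'remote_db';

-- Query a remote table through the link
SELECT COUNT(*) FROM emp@remote_db;
```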

For further information please visit http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/ds_admin002.htm

RTF Template Based Reports in Oracle APEX – Oracle BI Publisher

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 8, 2011

–          Set the report printing server as follows:

 

–          Create JDBC Connection.

–          Create report query in Oracle BI Publisher.

–          Generate report template.

–          Download template & format that report. Upload modified template.

Oracle APEX – HTTP Access

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 7, 2011

By default, the APEX page of XE is set to local access only. From APEX, check the link "Home > Administration > Manage HTTP Access". In addition, you can execute the command below using the SYS account:

exec dbms_xdb.setListenerLocalAccess(false);

Change Password (Navigation Bar Entry) In Oracle APEX

Posted in Uncategorized by Muhammad Nadeem Chaudhry on July 4, 2011

Create navigation bar URL as:

javascript:popupURL('f?p=4350:58:&SESSION.:::58:F4350_P58_USER_NAME::APP_USER');
