Muhammad Nadeem Chaudhry's Blog


Reclaiming space in flash recovery area

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 30, 2010

If you physically delete archivelogs, Oracle is not aware of it until you instruct RMAN to crosscheck the files.
Therefore you should run these commands in RMAN:

CROSSCHECK ARCHIVELOG ALL;

DELETE EXPIRED ARCHIVELOG ALL;

CROSSCHECK BACKUP OF DATABASE;

DELETE EXPIRED BACKUP;

The CROSSCHECK command is where RMAN validates archivelogs. If it finds an archivelog in its repository that is not present on disk, it marks the file as EXPIRED in its repository. The DELETE EXPIRED command then removes the repository entries for these expired files.

This should reduce the space reported as used in the flash recovery area.
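To confirm that the space has actually been released, you can query the flash recovery area views (a quick check; these views exist from 10g onwards):

```sql
-- Overall flash recovery area quota and usage
SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM   v$recovery_file_dest;

-- Breakdown by file type (archivelogs, backup pieces, flashback logs, ...)
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;
```

After the DELETE EXPIRED commands, space_used should drop accordingly.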

Archivelog ORA-16014 log sequence not archived or no destination available

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 30, 2010

One common mistake in 10g, when enabling archivelog mode, is to rely on the flash recovery area default settings as the repository for our archived redo logs. The consequences typically surface 20-30 days after archivelog mode has been enabled.

By default, the Oracle 10g Database Configuration Assistant (DBCA) sets the flash recovery area size to 2GB (the db_recovery_file_dest_size parameter), and when we enable archivelog mode the database uses the default db_recovery_file_dest location as the destination for our archivelogs, unless we set another directory as their destination.

A typical small 10g database generates an average of 100MB of archivelog files daily, sometimes even at weekends when application DML is almost zero. Why does this happen? The reason is the Automatic Workload Repository (AWR) and the Automatic Database Diagnostic Monitor (ADDM). AWR collects detailed performance-related metrics from the database at regular intervals, known as snapshots. After each snapshot is taken, ADDM is invoked to analyze the data and metrics derived from the difference between snapshots, and then recommend necessary actions. This generates lots of DML and consequently lots of redo data.

So, after some days, the flash recovery area reaches the default 2GB, because the database archives there by default. When this happens the database becomes unable to archive because the flash recovery area is full. This happens even if there is plenty of free disk space!

In our alert log file we’ll see something like this:

ORA-16038: log one sequence 3144 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log string thread string: 'string: '/u01/app/oracle/oradata/mydb/redo01.log'
Sat Oct 11 10:43:56 2008
ARCH: Archival stopped, error occurred. Will continue retrying
Sat Oct 11 10:43:56 2008
ORACLE Instance neo - Archival Error

Oracle saves all the information about what is placed in the flash recovery area in the RMAN repository/controlfile. If it determines that there is insufficient space in the recovery file destination, as set by db_recovery_file_dest_size, archiving will fail. Simply deleting old backups and archivelogs from disk is not sufficient, as it is the RMAN repository/controlfile that holds the space-usage information.

How to fix this? If we google, we see lots of places saying to execute "delete archivelog all" in RMAN. This is the fast and easy, but dirty, solution: we do not want to delete all our precious archivelogs and endanger the ability to do future database recovery. The fix can be done with any of the following solutions:

Solution 1 – Delete unwanted archivelog files at the OS level, then crosscheck the archivelogs so the controlfile records that the archives have been deleted, and delete the expired entries.

[oracle@app oracle]$ cd /u01/app/oracle/flash_recovery_area/mydb/
[oracle@app oracle]$ rm archivelogs*
[oracle@app oracle]$ rman target /
connected to target database: MYDB (DBID=1649570311)
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;

Solution 2 – Connect to RMAN, back up the archivelogs and then delete them; this is a much better solution.

[oracle@app oracle]$ rman target /
connected to target database: MYDB (DBID=1649570311)
RMAN> backup archivelog until logseq  delete all input;
or
RMAN> backup archivelog until time 'sysdate-15' delete all input;

Solution 3 – Dynamically increase (without a database shutdown/startup) the db_recovery_file_dest_size parameter.

[oracle@app oracle]$ sqlplus "/ as sysdba"
SQL> alter system set db_recovery_file_dest_size=4G;

Then, if needed (usually with solutions 1 and 2), shut down and start up the database and it will be up again. You may have to shutdown abort if the database does not shut down normally.

To avoid the same problem in the future, when the archivelog volume reaches db_recovery_file_dest_size, we should set the following parameters to provide an alternate destination for archiving the redo logs:

log_archive_dest_1='LOCATION=use_db_recovery_file_dest NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2'
log_archive_dest_2='LOCATION=/other_destination_for_archiving'
log_archive_dest_state_1='enable'
log_archive_dest_state_2='alternate'
db_recovery_file_dest='/u01/app/oracle/flash_recovery_area/mydb/'
db_recovery_file_dest_size=2G
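These parameters can be set dynamically with ALTER SYSTEM; a sketch (adjust the alternate destination path to your environment):

```sql
ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST NOREOPEN ALTERNATE=LOG_ARCHIVE_DEST_2' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2='LOCATION=/other_destination_for_archiving' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_1='ENABLE' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2='ALTERNATE' SCOPE=BOTH;
```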

Deleting expired archivelogs with RMAN

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 29, 2010
RMAN> delete expired archivelog all;
released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=137 devtype=DISK
specification does not match any archive log in the recovery catalog

RMAN> delete noprompt expired archivelog all;
released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=137 devtype=DISK
specification does not match any archive log in the recovery catalog

SQL Tuning Advisor and SQL Access Advisor in Oracle 10g

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 28, 2010

Oracle 10g, with its AWR (Automatic Workload Repository) for automatic collection of performance statistics and its ADDM (Automatic Database Diagnostic Monitor) for identifying the root causes of problems, is a self-healing, self-tuning database that gets wiser every day; very soon it will start to think on its own.
There are features in Oracle 10g which help you to quickly find out what is wrong with the database. Features like ASH (Active Session History), which lets you peek into what your active sessions are waiting for, and Server-Generated Alerts with metrics and thresholds are all tools which tell you what is really going on in your database. OEM (Oracle Enterprise Manager) Database Control is a brilliant interface you can use to learn and find out more about all these smart enhancements. All in the name of pro-active database management.
What is database well-being all about anyway? Isn’t it high throughput (OLTP) and response time (DSS)? Aren’t all databases working to achieve these?
Oracle 10g also has an Advisory Framework which, based on workload, will try to advise you on configuration issues such as the Buffer Cache, Library Cache, PGA, Undo and the segments used. There are also tuning-related advisors which deal with SQL tuning, and these are:

I – SQL Tuning Advisor
II – SQL Access Advisor

In this post I will try to explain the usage of these two advisors in the job of tuning bad and nasty SQL statements.

All automation is good, self healing databases are very good. But what happens in the development arena when a developer turns up to your desk pointing to an email he/she just sent to you and says ‘…what’s wrong with this SQL statement? it takes ages? Why is Oracle so slow?…’ I am particularly fond of the last comment.
How can we quickly look at what is wrong with the SQL statement using the SQL Tuning Advisor? Here is how.

I – SQL Tuning Advisor
Using the DBMS_SQLTUNE package to tune SQL
Assuming the suspect query is something like “SELECT * FROM SESSIONS, SOURCE_EVENTS”, two very large tables, and your schema has the ADVISOR privilege granted, you would put this query into the Oracle SQL Tuning Advisor engine using the PL/SQL package DBMS_SQLTUNE like this:

1. Create the tuning task
In SQL*Plus and in the schema where objects live, use the procedure dbms_sqltune.create_tuning_task to create a tuning task.

Then check that your task was created and note the TASK_NAME value, as you will need it in the next step. I could have named the task explicitly; here I let Oracle generate the name.
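A minimal sketch of the call (the suspect query from above; the 60-second time limit is an illustrative choice):

```sql
DECLARE
  l_task VARCHAR2(30);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_text   => 'SELECT * FROM sessions, source_events',
              time_limit => 60);
  DBMS_OUTPUT.PUT_LINE('Task: ' || l_task);
END;
/

-- Retrieve the generated TASK_NAME for the next step
SELECT task_name, status FROM user_advisor_log;
```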

2. Execute the tuning tasks
Execute the tuning task you created in step 1 with the procedure dbms_sqltune.execute_tuning_task

3. Get the tuning report
Get the tuning report/recommendation with the following SQL.
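Something along these lines (again, TASK_123 is a placeholder; the SET commands just keep SQL*Plus from truncating the CLOB report):

```sql
SET LONG 10000
SET LONGCHUNKSIZE 10000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_123') FROM dual;
```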

Well done, SQL Tuning Advisor, a good answer. It immediately spotted the Cartesian product and recommended against it, with the rationale that it would be expensive!

II – SQL Access Advisor
SQL Access Advisor is another Advisory Framework tool which primarily provides advice on the creation of indexes, materialized views and materialized view logs to improve query performance and response time.

Both the SQL Tuning Advisor and the SQL Access Advisor are quite powerful, as they can automatically source the SQL they will tune from multiple different sources, including the SQL cache, AWR, SQL Tuning Sets and user-defined workloads. Look at the Oracle documentation to find out how all this is possible under OEM Database Control.

What I focus on here is how to quickly pass to these tools the ad-hoc SQL that a developer will bring to you, demanding that it be tuned there and then.

So, using the QUICK_TUNE procedure from the DBMS_ADVISOR package, here is how you would tune a nasty single SQL statement for better data access paths with the SQL Access Advisor.

1. Create tuning task
In the schema where the table lives, run the following.
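A sketch of the call (the statement and the SESSIONS table are taken from the advisor output further down; pass the developer's SQL here):

```sql
BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'myquicktunetask',
    'SELECT COUNT(*) FROM sessions WHERE short_code = ''tl''');
END;
/
```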

2. Check to see the task is there
Again, to see the task myquicktunetask which you just created, run the following in the schema.
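A quick check against the USER_ADVISOR_TASKS view (task name assumed from step 1):

```sql
SELECT task_name, advisor_name, status
FROM   user_advisor_tasks
WHERE  task_name = 'myquicktunetask';
```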

3. See tuning advice
Now, to see the advice, look in the DBA_ADVISOR_ACTIONS dictionary view. Log in as a DBA and filter on task_name='myquicktunetask'; alternatively, the owning schema can query the USER_ADVISOR_ACTIONS dictionary view.
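A query along these lines (the pipe concatenation mirrors the column list in the output shown):

```sql
SELECT task_name || '|' || command || '|' || attr2 || '|' || attr3 || '|' ||
       attr4 || '|' || attr5
FROM   dba_advisor_actions
WHERE  task_name = 'myquicktunetask';
```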

TASK_NAME||'|'||COMMAND||'|'||ATTR2||'|'||ATTR3||'|'||ATTR4||'|'||ATTR5
(CLOB) myquicktunetask|CREATE MATERIALIZED VIEW LOG||ROWID, SEQUENCE| INCLUDING NEW VALUES|("SHORT_CODE")
(CLOB) myquicktunetask|CREATE MATERIALIZED VIEW||REFRESH FAST WITH ROWID|ENABLE QUERY REWRITE|SELECT ACHILLES.SESSIONS.SHORT_CODE C1, COUNT(*) M1 FROM ACHILLES.SESSIONS WHERE (ACHILLES.SESSIONS.SHORT_CODE = 'tl') GROUP BY ACHILLES.SESSIONS.SHORT_CODE
(CLOB) myquicktunetask|GATHER TABLE STATISTICS||-1||

Conclusion
Well, I find both tools extremely useful. Both are good starting points for resolving SQL tuning issues and discussions. The SQL Tuning Advisor seems to do mainly logical optimization, by checking your SQL structure and statistics, while the SQL Access Advisor suggests good data access paths, that is, mainly work which can be done better on disk.

User managed backup

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 24, 2010

1. Physical Backup

1. Cold Backup (Consistent Backup)
2. Hot Backup (Inconsistent Backup)

2. Logical Backup

1. Export / Import
2. Expdp / Impdp (available from Oracle 10g onwards)

Now we see the physical backup.

Cold Backup: (Consistent Backup)

A cold (or offline) backup is a backup performed while the database is offline and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from offline backups, as no recovery (from archived logs) is required to make the database consistent.

Why do we say a cold backup is a consistent backup?

The cold backup is taken after shutting down the database. At that point the SCNs in all datafile headers are the same, so when we restore a cold backup, no redo needs to be applied for recovery. We back up the datafiles, logfiles, controlfile, parameter file and password file.

Cold Backup Steps:

1. Get the file path information using the queries below.

Select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;

2. Take a backup of the password file and parameter file (optional).
3. Alter system switch logfile;
4. Shutdown immediate;
5. Copy all the datafiles / controlfiles / logfiles to the backup path using OS commands.
6. Start up the database.
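The steps above can be sketched as a shell session (a minimal sketch; the copy paths are illustrative, and the actual file list should come from the v$ queries in step 1):

```shell
sqlplus -s / as sysdba <<EOF
alter system switch logfile;
shutdown immediate
EOF

# Copy the datafiles, controlfiles and redo logs reported by the queries
cp /u01/oracle/mydb/*.dbf     /u01/backup/
cp /u01/oracle/mydb/*.ctl     /u01/backup/
cp /u01/oracle/mydb/redo*.log /u01/backup/

sqlplus -s / as sysdba <<EOF
startup
EOF
```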

Hot Backup (Inconsistent backup)

A hot (or online) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, online backups are only possible when the database is in ARCHIVELOG mode.

Why do we say a hot backup is an inconsistent backup?

The hot backup is taken while the database is running. During this time, the header SCNs of the datafiles in the tablespace being backed up differ from those of the other tablespaces' datafiles. When we restore a hot backup, redo is needed for recovery.

Hot Backup Steps:

1. Get the file path information using the query below.
Select tablespace_name, file_name from dba_data_files order by 1;

2. Put the tablespace in begin backup mode, then use OS commands to copy the datafiles belonging to that tablespace to the backup path.
3. Once the datafiles are copied, release the tablespace from begin backup mode with end backup.
4. Repeat steps 1-3 for all your tablespaces.
5. Take the controlfile backup.

Alter database backup controlfile to '/u01/backup/control01.ctl';   (binary format)

The following produces a readable (trace) format, often used for cloning:

Alter database backup controlfile to trace;
Alter database backup controlfile to trace as '/u01/backup/control01.ora';

6. Back up all the archivelog files generated between the previous backup and this one.
7. Take a backup of the password file and parameter file (optional).

Example steps:

1. Alter tablespace system begin backup;
To confirm the tablespace is in begin backup mode, use the query below (refer to the Change# and Time columns):
Select * from v$backup;

2. Host cp /u01/oracle/raja/system1.DBF /u01/backup/system01.dbf   (copy the datafile using an OS command)
3. Alter tablespace system end backup;
To confirm the tablespace has left backup mode, use the query below:
Select * from v$backup;
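As a side note, from 10g onwards you can place all tablespaces in backup mode at once (assuming the database is in ARCHIVELOG mode), instead of repeating the per-tablespace steps:

```sql
ALTER DATABASE BEGIN BACKUP;
-- copy all datafiles at the OS level here
ALTER DATABASE END BACKUP;
```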

Hot Backup internal operations:

Note: While I am trying to write “during hot backup what are the internal operations going on?” I have referred several notes, but below one is very clear and nice one.
Reference: http://knol.google.com/k/franck-pachot/oracle-begin-backupend-backup/17uabcrki6uux/3#

During backup mode, for each datafile in the tablespace, here is what happens:

1- When BEGIN BACKUP is issued:

·The hot backup flag in the datafile headers is set, so that the copy is identified to be a hot backup copy. This is to manage the backup consistency issue when the copy will be used for a recovery.
·A checkpoint is done for the tablespace, so that in case of recovery, no redo generated before that point will be applied. Begin backup command completes only when checkpoint is done.

2- During backup mode:
·The datafile header is frozen so that whenever it is copied, it reflects the checkpoint SCN that was at the beginning of the backup. Then, when the copy will be restored, Oracle knows that it needs to start recovery at that SCN to apply the archived redo logs. This is to avoid the header inconsistency issue.
That means that any further checkpoints do not update the datafile header SCN (but they do update a ‘backup’ SCN)
·Each first modification to a block in buffer cache will write the full block into the redo thread (in addition to the default behavior that writes only the change vector).
This is to avoid the fractured block issue. There may be a fractured block in the copy, but it will be overwritten during the recovery with the full block image.

That means that everything goes as normal, except for these operations:
– at checkpoint, the datafile header SCN is not updated
– when a block is updated for the first time since it came into the buffer cache, the whole before-image of the block is recorded in redo
– direct path writes do not go through the buffer cache, but they always write full blocks, and the full block is also written to the redo log (unless nologging)

3- When END BACKUP is issued:
·A record that marks the end of backup is written to the redo thread so that if the copy is restored and recovered, it cannot be recovered earlier than that point. This is to avoid the backup consistency issue.
·The hot backup flag in the datafile headers is unset.
·The header SCN is written with the current one.

DB_BLOCK_CHECKING and DB_BLOCK_CHECKSUM

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 24, 2010

DB_BLOCK_CHECKING – http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/initparams046.htm#REFRN10029
DB_BLOCK_CHECKSUM – http://download.oracle.com/docs/cd/E11882_01/server.112/e17110/initparams047.htm#REFRN10030

When the former is set, a block check is done when writing the block to disk (to ensure that it is not corrupted).
When the latter is set, a checksum is calculated and stored in the block header – this checksum is verified when the block is read.

DB_BLOCK_CHECKING guards against logical block corruption, and DB_BLOCK_CHECKSUM against physical corruption.

So these are two levels of sanity checks, with DB_BLOCK_CHECKING being the more thorough (and more expensive) of the two.
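Both parameters are dynamic, so they can be changed without a restart; a sketch (value choices are illustrative, see the linked documentation for the full option lists):

```sql
ALTER SYSTEM SET db_block_checksum = 'TYPICAL' SCOPE=BOTH;  -- OFF | TYPICAL | FULL
ALTER SYSTEM SET db_block_checking = 'LOW'     SCOPE=BOTH;  -- OFF | LOW | MEDIUM | FULL
```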

Recovering from lost sys password in windows & Linux

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 23, 2010

Applies to:

Oracle Server – Enterprise Edition – Version: 8.1.7.0 to 11.1.0.7 – Release: 8.1.7 to 11.1
Information in this document applies to any platform.

Goal

How to recover from lost sys password.

Solution

The methods to be used rely on:
1. OS authentication
2. passwordfile authentication.

It is outside the scope of this note to explain how OS or passwordfile authentication work in detail, for this, please check My Oracle Support document:

Note.50507.1 SYSDBA and SYSOPER Privileges in Oracle

Recovering from lost sys password using OS authentication on Unix.

#1. Make sure that the OS user that is logged on is a member of the dba group. If the dba group is not known, check the file:

$ORACLE_HOME/rdbms/lib/config.c (or config.s)

#2. Make sure that the sqlnet.ora file does not contain the:

sqlnet.authentication_services

parameter. If this parameter exists, it should be commented out.

#3. check the ORACLE_HOME, ORACLE_SID and PATH parameters. They must correspond to the parameters of the Oracle instance where the SYS password has been lost.

If the instance is already started, use the document:

Note.373303.1 How to Check the Environment Variables for an Oracle Process

to determine the environment variables that were used when the instance was started.

#4. connect to the instance using:

sqlplus /nolog
connect / as sysdba

#5. Change the sys password using:

alter user sys identified by <new password>;

If a passwordfile is in use, the command would update the password file as well, so there is no need to recreate this file.

Recovering the lost sys password using passwordfile authentication on Unix

If the SYS password is not known and OS authentication is not an option, this method can be used. It assumes that the SYS account is not accessible, hence an abrupt shutdown of the database could be needed.

#1. Make sure that the initialization parameter REMOTE_LOGIN_PASSWORDFILE is set to SHARED or EXCLUSIVE. Changing this parameter value requires an instance restart.

#2. Disable OS authentication, if it is enabled for some reason, by setting:

sqlnet.authentication_services=(none)

in the sqlnet.ora file (previously, backup the existing sqlnet.ora file).

#3. go to the OS destination for the passwordfile:

cd $ORACLE_HOME/dbs

#4. Make a backup of the previous passwordfile:

cp orapw$ORACLE_SID orapw$ORACLE_SID.bak

#5. Recreate the passwordfile using the orapwd command:

orapwd file=orapw$ORACLE_SID password=<new sys password>

#6. Try to connect using the new password.

sqlplus /nolog
connect sys as sysdba

#7. As an additional check, if the above is successful, query:

select * from v$pwfile_users;

If it returns at least one line, then everything is ok and the new passwordfile is in use.

#8. Change the sys password in the data dictionary using:

alter user sys identified by <new password>;

Of course, the new password must match the new password of the passwordfile, if that password is to be kept, or it can be set to something else, if the initial password is to be changed. This command would update the passwordfile as well.

#9. If the test in step #6 does not succeed, the instance must be restarted. If no SYSDBA or SYSOPER access is possible, the instance may have to be brought down abruptly; do this by:
+ killing a background process (preferably kill SMON, so PMON can still do some cleanup)
+ removing the shared memory resources using ipcrm if they are not removed by PMON

#10. After restarting the instance, check if the SYS password is operational and if there is at least one entry in v$pwfile_users. If other users previously had entries in the passwordfile, grant them SYSDBA or SYSOPER again.

#11. Restore the sqlnet.ora file from the backup executed at step #2, if needed.

Recovering from lost sys password using NTS authentication on Windows

#1. Make sure that the OS user that is logged on is a member of the Windows ORA_DBA group.

#2. Also make sure that the sqlnet.ora file has the following line:

sqlnet.authentication_services = (nts)

#3. Connect to the instance using:

sqlplus /nolog
connect / as sysdba

#4. Change the sys password using:

alter user sys identified by <new password>;

If a passwordfile is in use, the command would update the password file as well, so there is no need to recreate this file.

Recovering the lost sys password using passwordfile authentication on Windows

If the SYS password is not known and OS authentication is not an option, this method can be used. It assumes that the SYS account is not accessible, hence a database shutdown could be needed; this can be accomplished by stopping the OracleService<SID> service on your system.

#1. Make sure that the initialization parameter REMOTE_LOGIN_PASSWORDFILE is set to SHARED or EXCLUSIVE.
#2. disable OS authentication, if it is enabled for some reason, by setting:

sqlnet.authentication_services=(none)

in the sqlnet.ora file
#3. go to the OS destination for the passwordfile:

cd %ORACLE_HOME%\database

#4. Make a backup of the previous passwordfile:

copy pwd<SID>.ora pwd<SID>.ora.bak

#5. Recreate the passwordfile using the orapwd command:

orapwd file=pwd<SID>.ora password=<new sys password>

#6. Try to connect using the new password.

sqlplus /nolog
connect sys as sysdba

#7. As an additional check, if the above is successful, query:

select * from v$pwfile_users;

If it returns at least one line, then everything is ok and the new passwordfile is in use.
#8. change the sys password in the data dictionary using:

alter user sys identified by <new password>;

Of course, the new password must match the new password of the passwordfile, if that password is to be kept, or it can be set to something else, if the initial password is to be changed. This command would update the passwordfile as well.
#9. If the test in step #6 does not succeed, the instance must be restarted. This can be done by restarting the service.
#10. After restarting the instance, check if the SYS password is operational and if there is at least one entry in v$pwfile_users. If other users previously had entries in the passwordfile, grant them SYSDBA or SYSOPER again.

#11. re-enable NTS authentication by setting:

sqlnet.authentication_services=(nts)

Installing Oracle on a machine with DHCP

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 21, 2010

During Oracle installation on a Windows machine that is configured to use DHCP to obtain its IP address, the following message will probably be displayed:

Checking Network Configuration requirements …
Check complete. The overall result of this check is: Failed <<<<
Problem: The install has detected that the primary IP address of the system is DHCP-assigned.
Recommendation: Oracle supports installations on systems with DHCP-assigned IP addresses; However, before you can do this, you must configure the Microsoft LoopBack Adapter to be the primary network adapter on the system.  See the Installation Guide for more details on installing the software on systems configured with DHCP.

As the error message says, this is due to the fact that you are trying to install Oracle on a machine that uses DHCP to obtain its IP address. You can still install Oracle on a machine without a constant IP address; all you need to do is configure the Loopback adapter:

Install the Loopback adapter
1) Go to the Control Panel and choose Add Hardware
2) The Add Hardware wizard will be displayed; click Next
3) Choose "Yes, I have already connected the hardware" and click Next
4) A hardware list will be displayed. Navigate to the bottom of the list and choose "Add a new hardware device"
5) Choose "Install the hardware that I manually select from a list (Advanced)"
6) Choose "Network adapter"
7) Choose "Microsoft" on the left side of the window and "Microsoft Loopback Adapter" on the right
8) Click Next, verify that the Microsoft Loopback Adapter is about to be installed, click Next again, then Finish

Configure the loop back adapter
1) Go to Control Panel – Network Connections; you will see a new network connection.
2) Choose "Local Area Connection 2" (the name may vary, but it is the one we just added) and click Properties.
3) Click on TCP/IP and then Properties.
4) Enter an IP address and a subnet mask (you can use 1.1.1.2 and subnet 255.255.255.0).

Go to Network Connections, click Advanced Settings under the Advanced menu, and verify that the loopback adapter is first in the list. (Update: it looks like this step is optional, because installation seems to work in most cases anyway.)

Now you can continue your Oracle installation.

java.lang.Exception: Exception in sending Request :: null

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 21, 2010

When accessing the Enterprise Manager DB Control console, a couple of error messages are displayed. Right after you provide the SYS credentials, the console appears with a down arrow and the header says:
java.lang.Exception: Exception in sending Request :: null

If you try to move to another tab, such as the Performance tab, it asks for credentials and responds:

Io exception: Unknown host specified

These are symptoms of an improperly configured agent time zone.

Troubleshooting Agent Time Zone

This case happened to me on a Windows XP Professional (Mexican Spanish) machine after a time zone change. I noticed a similar error shows up on databases affected by automatic time zone changes after synchronization with time.windows.com, which recently introduced two time zones for Mexico, one named old and the other named new.
Troubleshooting
0. Consider these files when performing the troubleshooting phase:
emd.properties
supportedtzs.lst

1. Stop the console
\bin\emctl stop dbconsole

2. Backup the file located at ORACLE_HOME/hostname_SID/sysman/config/emd.properties

3. Edit the emd.properties file and look for the agentTZRegion (generally appears at the end of the file)

At the file emd.properties, located at the ORACLE_HOME/hostname_instanceName/sysman/config

change the value

agentTZRegion=GMT to agentTZRegion=America/Mexico_City

Evidently, this is for the particular case of Mexico City, but similar issues can apply to other time zones. For Pakistan, use Asia/Karachi (note that Etc/GMT+5 actually denotes UTC-5, since the signs of the Etc/GMT zones are inverted).
For other affected time zones, such as the one corresponding to Egypt, change the value of this parameter from GMT+2 to Egypt, which is included in the supportedtzs.lst file, so the parameter becomes agentTZRegion=Egypt.

4. When the agent is unable to find a proper time zone it adopts GMT, so that may be the value currently registered. Change it to the value corresponding to the OS time zone; this time zone should be one listed in the ORACLE_HOME/sysman/admin/nsupportedtzs.lst file.

5. Execute these commands:
emctl resetTZ agent
emctl config agent getTZ

After issuing the resetTZ command, a message similar to the following may appear:
To complete this process, you must either:

connect to the database served by this DBConsole as user 'sysman', and execute:

SQL> exec mgmt_target.set_agent_tzrgn('pc06.oracle.com:3938','America/Mexico_City')

-- or --

connect to the database served by this DBConsole as user 'sys', and execute:

SQL> alter session set current_schema = SYSMAN;
SQL> exec mgmt_target.set_agent_tzrgn('pc06.oracle.com:3938','America/Mexico_City')

For these commands to succeed, the agent must already have registered some values in the EM repository. Run this query while connected as sysman; similar information should be displayed:
SQL> select target_name, target_type from mgmt_targets;

TARGET_NAME                         TARGET_TYPE
----------------------------------- ---------------
pc06.oracle.com                     host
orcl.oracle.com                     oracle_database
pc06.oracle.com:3938                oracle_emd
Management Services and Repository  oracle_emrep
LISTENER_pc06.oracle.com            oracle_listener

6. The previous command asks you to perform some actions at the sysman repository level. Execute the reset at the repository level with:
SQL> alter session set current_schema = SYSMAN;

SQL> exec mgmt_target.set_agent_tzrgn('hostname:3938','TimeZone');

This command should be successful; otherwise the agent may never have started and therefore never registered (even with the wrong TZ) in the repository.

To verify the agent has run and is properly registered, issue this query as sysman:

SQL> select target_name, target_type from mgmt_targets;

The target with the default port 3938 is the one we are looking for; it should be listed. Otherwise try to start the agent so it can register this target. If the agent does not start, check the logs for the reason that prevents it from starting; most of the time it is a wrongly specified time zone.

7. Try to log in to the dbconsole and check if the error still exists.

8. Now start the EM console; the problem should be fixed by now.

Notes.
The time zone value must be a valid value listed in ORACLE_HOME/sysman/admin/nsupportedtzs.lst. Make sure the time zone matches that of the host computer.

If an error related to Java appears when executing the emdctl command, the wrong Java version is being invoked. Make sure the PATH environment variable properly includes the JDK shipped in the same Oracle Home the database belongs to:

path=%PATH%;C:\Oracle\product\10.2.0\db_1\jdk\jre\bin;C:\Oracle\product\10.2.0\db_1\jdk\bin
path=%path%;C:\Oracle\product\10.2.0\db_1\jdk\jre\bin\client

References.

Problem: Startup Agent: ORA-20233: Invalid agent name when running mgmt_target.set_agent_tzrgn procedure in repository
Doc ID: Note:388280.1 –> This note has particularly helped me to solve my particular issue.

Problem: Startup Agent: EM Agent will not start due to Timezone mismatch (Daylight Savings changes for Australia)
Doc ID: Note:362888.1 –> some useful ideas were taken from this note

Problem: Startup Agent: Agent Fails to Start due to Incorrect Timezone File Used as Pointed by ORA_TZFILE
Doc ID: Note:409121.1

Problem: Startup Agent: ORA-20233: Invalid agent name when running mgmt_target.set_agent_tzrgn procedure in repository
Doc ID: Note:388280.1

If there are problems with setting the time zone configuration, I recommend simply resetting the time zone and re-creating the services.

What is shared SQL area?

Posted in Uncategorized by Muhammad Nadeem Chaudhry on December 21, 2010

A Shared SQL area is required to process every unique SQL statement submitted to a database and contains information such as the parse tree and execution plan for the corresponding statement. Any Oracle process can use a shared SQL area. The use of shared SQL areas reduces memory usage on the database server thereby increasing system throughput.

Shared SQL areas are aged out of the shared pool by way of a least recently used algorithm (similar to database buffers). To improve performance and prevent reparsing you may want to prevent large SQL areas from aging out of the shared pool.
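To see how SQL areas are being shared, you can query V$SQLAREA (one row per shared SQL area); a quick look, assuming SELECT privileges on the v$ views:

```sql
-- High VERSION_COUNT or many single-execution statements can indicate
-- poor sharing (e.g. literals used instead of bind variables)
SELECT sql_text, version_count, executions, parse_calls, sharable_mem
FROM   v$sqlarea
WHERE  executions > 0
ORDER  BY sharable_mem DESC;
```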
