Monday, July 20, 2015

Real time sql monitoring in 11g


This article presents Real Time SQL Monitoring, a new feature of Oracle 11g. It lets you monitor the performance of SQL statements while they are running; statistics about monitored queries are exposed through new views.
Requirements for a query to be monitored
Oracle will monitor a query if the following requirements are fulfilled:
  • the following instance parameters are set

STATISTICS_LEVEL=TYPICAL or STATISTICS_LEVEL=ALL
CONTROL_MANAGEMENT_PACK_ACCESS=DIAGNOSTIC+TUNING
  • the query is executed in parallel, or
  • it has consumed at least five seconds of CPU or I/O time in a single execution, or
  • it uses the MONITOR hint
    select /*+ MONITOR */ * from ...
You can block monitoring for a query by specifying the NO_MONITOR hint:
select /*+ NO_MONITOR */ * from ...
Views for real time monitoring
Two new views collect statistics about monitored queries:
V$SQL_MONITOR – shows information such as elapsed time, CPU time, number of reads and writes, I/O wait time and many others. Statistics remain available for at least one minute after the query ends, and the view is refreshed in near real time, once a second.
V$SQL_PLAN_MONITOR – shows very detailed statistics for each step of the execution plan used by the query; it is likewise updated once a second.
Both views, together with the older V$SQL, V$SQL_PLAN, V$SESSION, V$ACTIVE_SESSION_HISTORY and V$SESSION_LONGOPS, give a complete picture of how your query executed.
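As a quick sketch of how V$SQL_MONITOR can be queried directly (the column names are the documented ones; the exact selection is just an illustration):

```sql
-- Sketch: list monitored statements with basic timing.
-- ELAPSED_TIME and CPU_TIME are reported in microseconds.
select m.sql_id,
       m.status,
       m.username,
       round(m.elapsed_time / 1e6, 2) as elapsed_s,
       round(m.cpu_time / 1e6, 2)     as cpu_s,
       m.buffer_gets,
       m.sql_exec_start
from   v$sql_monitor m
order  by m.sql_exec_start desc;
```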
Enterprise Manager 11g and sql monitoring
You can access the real-time monitoring feature in Enterprise Manager Database Control: look for the "SQL Monitoring" link on the "Performance" tab.
Once you click the link you should see the queries Oracle is currently monitoring. In my case I can see only one on my database :) well, my database is sleeping right now.
If you click on a statement you get more information, such as:

– duration of this query

– full explain plan (standard form or graphical form)

– how database time was consumed per CPU, I/O, etc.

– how much time was consumed by PL/SQL, Java

– how many buffer gets were performed

– how many I/O requests were issued

– how many I/O bytes were read/written

– full statistics for each step in explain plan

– usage of parallel processes

– statistics per wait events

– CPU usage across time

– I/O requests across time

– I/O throughput across time

– PGA usage across time

– Temp usage across time

By clicking the "Report" button you can generate a very simple report,
or you can save all of the above pages by clicking "Save". Saving them is very useful when you want to document something about the database or send the details to somebody else.
From the main page you can also drill down to the session which executed the query, or to the historical activity of the query.
DBMS_SQLTUNE and sql monitoring
If you don't have access to Enterprise Manager, you can generate equally nice reports with the REPORT_SQL_MONITOR function from the DBMS_SQLTUNE package. The function has many parameters and can produce a very detailed report (you can turn off some sections if you want) in several formats: text, simple HTML, active HTML and XML.
DBMS_SQLTUNE.REPORT_SQL_MONITOR
( sql_id IN VARCHAR2 DEFAULT NULL, 
  session_id IN NUMBER DEFAULT NULL, 
  session_serial IN NUMBER DEFAULT NULL, 
  sql_exec_start IN DATE DEFAULT NULL, 
  sql_exec_id IN NUMBER DEFAULT NULL, 
  inst_id IN NUMBER DEFAULT NULL, 
  start_time_filter IN DATE DEFAULT NULL, 
  end_time_filter IN DATE DEFAULT NULL, 
  instance_id_filter IN NUMBER DEFAULT NULL, 
  parallel_filter IN VARCHAR2 DEFAULT NULL, 
  plan_line_filter IN NUMBER DEFAULT NULL, 
  event_detail IN VARCHAR2 DEFAULT 'YES', 
  bucket_max_count IN NUMBER DEFAULT 128, 
  bucket_interval IN NUMBER DEFAULT NULL, 
  base_path IN VARCHAR2 DEFAULT NULL, 
  last_refresh_time IN DATE DEFAULT NULL, 
  report_level IN VARCHAR2 DEFAULT 'TYPICAL', 
  type IN VARCHAR2 DEFAULT 'TEXT', 
  sql_plan_hash_value IN NUMBER DEFAULT NULL) 
RETURN CLOB;
simple example
let’s prepare some data
create table test_tbl
(id number);

begin
  for i in 1..100
  loop
    insert into test_tbl values(i);
  end loop;
  commit;
end;
/
let’s run query to monitor
set timing on
select /*+ MONITOR */ count(*) cnt
from test_tbl, test_tbl, test_tbl, test_tbl;
  CNT
----------
 100000000 

Elapsed: 00:00:04.662
you can read details about it in V$SQL_MONITOR
select key, sql_id, status 
from v$sql_monitor 
where sql_text='select /*+ MONITOR */ count(*) cnt 
from test_tbl, test_tbl, test_tbl, test_tbl';

KEY                        SQL_ID        STATUS            
-------------------------- ------------- -------------------
657129996295               9fvrwtjugnkup DONE (ALL ROWS)     

Elapsed: 00:00:00.016
and generate your report as TEXT
select
DBMS_SQLTUNE.REPORT_SQL_MONITOR(
   sql_id=>'9fvrwtjugnkup',
   report_level=>'TYPICAL',
   type=>'TEXT')
from dual;
SQL Monitoring Report

SQL Text
------------------------------
select /*+ MONITOR */ count(*) cnt from test_tbl, test_tbl, test_tbl, test_tbl

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)     
 Instance ID         :  1                   
 Session             :  SYS (204:115)       
 SQL ID              :  9fvrwtjugnkup       
 SQL Execution ID    :  16777216            
 Execution Started   :  02/03/2013 01:57:49 
 First Refresh Time  :  02/03/2013 01:57:49 
 Last Refresh Time   :  02/03/2013 01:57:54 
 Duration            :  5s                  
 Module/Action       :  SQL Developer/-     
 Service             :  SYS$USERS           
 Program             :  SQL Developer       
 Fetch Calls         :  1                   

Global Stats
=================================================
| Elapsed |   Cpu   |  Other   | Fetch | Buffer |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  |
=================================================
|    4.85 |    4.76 |     0.09 |     1 |     12 |
=================================================

SQL Plan Monitoring Details (Plan Hash Value=2897041765)
===========================================================================================================================================
| Id |         Operation         |   Name   |  Rows   | Cost |   Time    | Start  | Execs |   Rows   |  Mem  | Activity | Activity Detail |
|    |                           |          | (Estim) |      | Active(s) | Active |       | (Actual) | (Max) |   (%)    |   (# samples)   |
===========================================================================================================================================
|  0 | SELECT STATEMENT          |          |         |      |         4 |     +2 |     1 |        1 |       |          |                 |
|  1 |   SORT AGGREGATE          |          |       1 |      |         4 |     +2 |     1 |        1 |       |          |                 |
|  2 |    MERGE JOIN CARTESIAN   |          |    100M | 275K |         4 |     +2 |     1 |     100M |       |          |                 |
|  3 |     MERGE JOIN CARTESIAN  |          |      1M | 2752 |         4 |     +2 |     1 |       1M |       |          |                 |
|  4 |      MERGE JOIN CARTESIAN |          |   10000 |   31 |         4 |     +2 |     1 |    10000 |       |          |                 |
|  5 |       TABLE ACCESS FULL   | TEST_TBL |     100 |    2 |         4 |     +2 |     1 |      100 |       |          |                 |
|  6 |       BUFFER SORT         |          |     100 |   29 |         4 |     +2 |   100 |    10000 |  4096 |          |                 |
|  7 |        TABLE ACCESS FULL  | TEST_TBL |     100 |      |         1 |     +2 |     1 |      100 |       |          |                 |
|  8 |      BUFFER SORT          |          |     100 | 2752 |         4 |     +2 | 10000 |       1M |  4096 |          |                 |
|  9 |       TABLE ACCESS FULL   | TEST_TBL |     100 |      |         1 |     +2 |     1 |      100 |       |          |                 |
| 10 |     BUFFER SORT           |          |     100 | 275K |         5 |     +1 |    1M |     100M |  4096 |   100.00 | Cpu (5)         |
| 11 |      TABLE ACCESS FULL    | TEST_TBL |     100 |      |         1 |     +2 |     1 |      100 |       |          |                 |
===========================================================================================================================================
or generate the full report using the ACTIVE option. Save the output to an HTML file to view it.
select
DBMS_SQLTUNE.REPORT_SQL_MONITOR(
   sql_id=>'9fvrwtjugnkup',
   report_level=>'TYPICAL',
   type=>'ACTIVE')
from dual;

ACTIVE reports have a rich, interactive user interface similar to Enterprise Manager while not requiring any EM installation. The report file built is in HTML format, so it can be interpreted by most modern browsers. The code powering the active report is downloaded transparently by the web browser when the report is first viewed, hence viewing it requires outside connectivity.

or generate the full report using the HTML option. Save the output to an HTML file to view it.
select
DBMS_SQLTUNE.REPORT_SQL_MONITOR(
   sql_id=>'9fvrwtjugnkup',
   report_level=>'TYPICAL',
   type=>'HTML')
from dual;
The HTML version is pretty nice and simple.

Friday, July 10, 2015

Daily Production Oracle DBA checklist !!!



1) Check the status of all instances; make sure all are up and running

     How: Log in to each instance and run any query to make sure the databases are available and running, such as
            $ export ORACLE_SID=orcl
            $ sqlplus /nolog
            SQL> select host_name from v$instance ; 

2) Check the alert log for any errors or anything new (at least twice daily for all databases):

i) when you arrive at the office
ii) when you are leaving the office
and whenever any complaint comes in about database slowness or anything else
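One quick way to do this from the shell is to grep the alert log for ORA- lines. The sketch below builds a small sample file so the commands can be tried anywhere; in real use you would point ALERT_LOG at your actual alert log (the path in the comment is only an illustration):

```shell
#!/bin/sh
# Sketch: scan an alert log for ORA- errors.
# A real path would be something like
#   $ORACLE_BASE/diag/rdbms/<db>/<sid>/trace/alert_<sid>.log
# Here we build a small sample file so the commands run anywhere.
ALERT_LOG=/tmp/alert_sample.log
cat > "$ALERT_LOG" <<'EOF'
Completed: ALTER DATABASE OPEN
ORA-00600: internal error code, arguments: [example]
Thread 1 advanced to log sequence 42
ORA-01555: snapshot too old
EOF

# Report only the error lines, with their line numbers
grep -n 'ORA-' "$ALERT_LOG"
```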

3) Verify the success of RMAN/export/any other database backups.

                                    i)  Check the RMAN/export logs for any errors.

                                    ii) Check the backup size for confirmation and daily backup size growth.

           

4) Verify the success of database backup archiving to tape (or let the system admin know to back it up).

5) Verify that the auto control file backup was successful

6) Verify the success of all scheduled jobs

7) Verify there is enough space in:

                                    i) tablespaces (if autoextend = no)
                                    ii) the OS hard disk / storage LUNs
                                    iii) the archive log / flash_recovery_area locations
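The tablespace part of this check can be sketched with a query like the following (a simplified view; it does not account for autoextend headroom):

```sql
-- Sketch: allocated vs free space per tablespace, in MB.
select d.tablespace_name,
       round(sum(d.bytes) / 1024 / 1024)         as size_mb,
       round(nvl(f.free_bytes, 0) / 1024 / 1024) as free_mb
from   dba_data_files d
       left join (select tablespace_name, sum(bytes) free_bytes
                  from   dba_free_space
                  group  by tablespace_name) f
         on f.tablespace_name = d.tablespace_name
group  by d.tablespace_name, f.free_bytes
order  by free_mb;
```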

8) Check the status of all objects to make sure all are 'VALID'

9) If you have a standby database, check its status and the gap between it and the primary.
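For a physical standby, a minimal sketch of such a check (the views and columns are the documented ones; adapt to your configuration):

```sql
-- Sketch: apply and transport lag, queried on the standby
select name, value
from   v$dataguard_stats
where  name in ('apply lag', 'transport lag');

-- Sketch: last archived log received vs applied, per thread
select thread#,
       max(sequence#)                                    as last_received,
       max(case when applied = 'YES' then sequence# end) as last_applied
from   v$archived_log
group  by thread#;
```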

10) Review contention for CPU, memory, network, and disk resources.

                        On Linux you can use top, netstat, vmstat, etc.
                        On Sun Solaris you can use prstat, vmstat, netstat, etc.

11) Check for blocking locks, killed sessions, etc.

12) Check connected sessions (average count) to verify there are no abnormalities (sometimes parallel threads exceed your PROCESSES parameter value):

                        i) Check top sessions' information.
                        ii) Check top SQL statements.

13) Troubleshoot any reported issues.

14) Spend some time increasing and upgrading your knowledge.
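The blocking-lock and top-session checks above can be sketched with queries like the following (the top-10 cut-off is illustrative):

```sql
-- Sketch: sessions that are currently blocked, and who blocks them
-- (BLOCKING_SESSION is populated in V$SESSION from 10g onwards).
select sid, serial#, username, event,
       blocking_session, seconds_in_wait
from   v$session
where  blocking_session is not null;

-- Sketch: rough "top SQL" ranking by buffer gets
select * from (
  select sql_id, executions, buffer_gets,
         round(buffer_gets / nullif(executions, 0)) as gets_per_exec
  from   v$sql
  order  by buffer_gets desc
) where rownum <= 10;
```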

Wednesday, July 8, 2015

Discussion on DCD,Resource Limits, V$SESSION, V$PROCESS and OS processes !!!



Dead Connection Detection (DCD) 

These are previously valid database connections where the link between the client and server processes has terminated abnormally.
Examples of a dead connection:
- A user reboots/turns-off their machine without logging off or disconnecting from the database.
- A network problem prevents communication between the client and the server.

In these cases, the shadow process running on the server and the session in the database may not terminate.

Implemented by
      * adding SQLNET.EXPIRE_TIME = <time_in_minutes> to the server-side sqlnet.ora file

When DCD is enabled, the server-side process sends a small 10-byte probe packet to the client process after the time interval (in minutes) specified by the SQLNET.EXPIRE_TIME parameter.

If the client side of the connection is still connected and responsive, the client sends a response packet back to the database server, resetting the timer; another probe will be sent when the next interval expires (assuming no other activity on the connection).

If the client fails to respond to the DCD probe packet:
     * the server-side process is marked as a dead connection,
     * PMON performs the clean-up of the database processes / resources, and
     * the client OS processes are terminated.

NOTE: SQLNET.RECV_TIMEOUT can be set in the server-side sqlnet.ora file. This sets a timeout for the server process to wait for data from the client process.
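A minimal server-side sqlnet.ora sketch combining the two parameters (the values are illustrative, not recommendations):

```
# Server-side sqlnet.ora sketch
# Probe idle connections every 10 minutes (pick an interval that suits you)
SQLNET.EXPIRE_TIME = 10

# Optional: timeout (in seconds) for the server waiting on client data
SQLNET.RECV_TIMEOUT = 30
```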

Inactive Sessions:

These are sessions that remain connected to the database with a status in v$session of INACTIVE.
Example of an INACTIVE session:
- A user starts a program/session, then leaves it running and idle for an extended period of time.


Database Resource Limits (using user profiles) 

Implemented by
     * Setting RESOURCE_LIMIT = TRUE in the database startup parameter file (spfile or pfile)
     * Creating or modifying existing user profiles (DBA_PROFILES) to have one or more resource limit
     * Assigning the profile to the users whose resources you wish to limit
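The three steps above can be sketched as follows (the profile name, limits and user are illustrative):

```sql
-- Step 1: enable resource limit enforcement
alter system set resource_limit = true;

-- Step 2: create a profile with the desired limits (values in minutes)
create profile limited_idle limit
  idle_time    60
  connect_time 480;

-- Step 3: assign the profile to a user
alter user scott profile limited_idle;
```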

Note that if IDLE_TIME is set on the DEFAULT profile, MTS dispatchers can end up marked as 'sniped' and then get 'cleaned up' by the shell script below; removing a dispatcher causes other sessions to die. If you implement resource limits, it is therefore advisable to create new profiles to assign to users rather than change the characteristics of DEFAULT. Alternatively, if you do change DEFAULT, make sure every property you change is fully tested in a development environment.

When a resource limit is exceeded (for example IDLE_TIME), PMON marks the session as SNIPED in V$SESSION. Only AFTER the sniped session tries to execute another SQL statement are its database resources cleaned up and its V$SESSION entry removed.

Resource Manager
Resource Manager plans can be created to kill inactive sessions with high idle time. Refer to How To Automatically Kill Inactive Sessions using Resource Manager (Doc ID 1935739.1); that document contains the details with an example. You can customize the plan directives as per your requirements.

In this case, once the inactive time goes beyond the specified MAX_IDLE_TIME, PMON marks the session as KILLED. When this killed session later tries to execute any SQL statement, its database resources are cleaned up and its V$SESSION entry is removed.
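A minimal sketch of such a plan using the DBMS_RESOURCE_MANAGER API (the plan and directive names are illustrative; Doc ID 1935739.1 has the full worked example):

```sql
-- Sketch: kill sessions idle for more than an hour (MAX_IDLE_TIME is
-- in seconds).  OTHER_GROUPS covers sessions not in any other group.
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan(
    plan    => 'KILL_IDLE_PLAN',
    comment => 'Kill sessions idle too long');
  dbms_resource_manager.create_plan_directive(
    plan             => 'KILL_IDLE_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Idle limit for everyone else',
    max_idle_time    => 3600);
  dbms_resource_manager.submit_pending_area;
end;
/
alter system set resource_manager_plan = 'KILL_IDLE_PLAN';
```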

It is strongly recommended that both DCD and resource limits with profiles be implemented, in order to clean up resources at both the database and OS level.

Even this combination will not clean up IDLE / ABANDONED / INACTIVE connections (OS processes), as these sessions still have active clients.

In this case you will see that:
     * PMON has cleaned up the V$SESSION entries, but both the OS processes and the V$PROCESS entries still exist
     * SQL*Net continues to send the 10-byte probe successfully until the session is logged off
This condition can become a major problem because:
     * the database exhausts PROCESSES and raises ORA-00020: maximum number of processes exceeded
     * the OS can become exhausted by the unneeded resources consumed by the abandoned processes

The SYMPTOMS of this condition are:
     * entries in V$PROCESS with no corresponding V$SESSION entry
     * an OS process / thread still existing for the SNIPED session
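A sketch of a query for the first symptom, server processes with no corresponding session (the BACKGROUND check and PSEUDO filter are the usual way to exclude non-shadow processes):

```sql
-- Sketch: shadow processes with no V$SESSION entry
select p.spid, p.pid, p.program
from   v$process p
where  p.background is null
and    p.program not like '%PSEUDO%'
and    p.addr not in (select paddr from v$session);
```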

The solution to this scenario is to clean up the OS processes, after which the V$PROCESS entries should be removed automatically.

Methods to clean up OS processes:

     * UNIX: kill the OS process at the OS level with kill -<signal> (typically kill -9)
     * UNIX: if using dedicated servers, use the following shell script to kill the shadow processes
(the script has been tested on Solaris, AIX, Tru64 and HP-UX):

#!/bin/sh
tmpfile=/tmp/tmp.$$
sqlplus -s system/manager <<EOF
spool $tmpfile
select p.spid from v\$process p, v\$session s
where s.paddr = p.addr
and s.status in ('INACTIVE','SNIPED');
spool off
EOF
for x in `cat $tmpfile | grep "^[0123456789]"`
do
kill -9 $x
done
rm $tmpfile

NOTE: If you are running in a shared server environment, be careful not to accidentally kill your dispatchers and/or shared servers. In Oracle 10.2 (or higher) a dedicated connection's V$SESSION + V$PROCESS + OS process can be cleaned up with:

ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' IMMEDIATE;

In versions prior to 10.2, and for shared server connections, the only solution is to kill the session at the OS level (see kill and orakill):
     * Windows: use the orakill command (see Note 69882.1 for details)

On occasion we see conditions where a database session has V$SESSION.STATUS = SNIPED and the entry never goes away. This happens when Database Resource Limits + Profiles are implemented without DCD and a session is allowed to exceed a limit in its profile.


Summary of what was discussed here:
1) DCD initiates clean-up of OS and database processes that have disconnected / terminated abnormally
2) DCD will not initiate clean-up of sessions that are still connected but idle / abandoned / inactive
3) Database Resource Limits + user profiles clean up database resources for user sessions that exceed resource limits
4) Database Resource Limits + user profiles will not clean up OS processes
5) If DCD and Database Resource Limits + user profiles are used in combination, dead connections' OS and database resources will be cleaned up
6) IDLE / ABANDONED / INACTIVE sessions' OS processes will not be cleaned up even if DCD and Database Resource Limits + user profiles are used in combination; these must be cleaned up manually
7) Resource Manager can be used to kill inactive sessions that exceed the idle time set in the plan directives.

REFERENCES

NOTE:395505.1 - How to Check if Dead Connection Detection (DCD) is Enabled in 9i, 10g and 11g
NOTE:805586.1 - Troubleshooting Session Administration
NOTE:151972.1 - Dead Connection Detection (DCD) Explained
NOTE:438923.1 - How To Track Dead Connection Detection (DCD) Mechanism Without Enabling Any Client/Server Network Tracing
NOTE:1287854.1 - Troubleshooting Guide - ORA-20: Maximum Number Of Processes (%S) Exceeded
NOTE:1935739.1 - How To Automatic Kill Inactive Sessions using Resource Manager

Monday, July 6, 2015

Oracle Port List !!!



Port - Name - Description
199 - SNMP Peer Master - Oracle SNMP Peer Master Agent allows identification and monitoring of Oracle systems.
389 - Internet Directory - LDAP v2 or v3 based Oracle Internet Directory serves as a central user repository for Oracle Identity Management.
636 - Oracle Internet Directory - LDAP v2 or v3 based Oracle Internet Directory serves as a central user repository for Oracle Identity Management. This is an SSL port.
1520-1530 - TNS Listener - The TNS Listener (Transparent Network Substrate) serves as the main communication point for Oracle. 1521 is the default port in this range.
1575 - Name Server - Oracle Name Server is an Oracle-specific name service that maintains a central store of service addresses.
1630 - CMAN - CMAN (Oracle Connection Manager) is a proxy server to manage client connections to database servers.
1701 - Oracle Rdb - The Oracle JDBC for Rdb thin server is the server-side component that services JDBC requests issued by thin clients using the Oracle Rdb thin driver.
1748, 1754, 1808, 1809 - Intelligent Agent - The Oracle Intelligent Agent's primary function is to monitor an Oracle server or servers. It can also run jobs sent by OEM (Oracle Enterprise Manager), handle events, and start up and shut down a database.
1810 - Enterprise Manager - Oracle Enterprise Manager provides tools to monitor the Oracle environment and automate tasks. It can run predefined jobs like Export and Import, run OS commands, SQL scripts, TCL scripts and SQL*Plus commands. HTTP console.
1830, 1831 - Connection Manager - CMAN (Oracle Connection Manager) is a proxy server to manage administrative connections to database servers.
1850 - Java ORMI - RMI console for Oracle Enterprise Manager.
2100 - XMLDB FTP port - The Oracle XML DB Repository provides a hierarchical data repository in the database modeled on XML. A standard FTP client can be used to access XML DB.
2481 - GIOP IIOP - Oracle's GIOP (General Inter-ORB Protocol) implementation of IIOP (Internet Inter-Orb Protocol).
2482 - GIOP IIOP - Oracle's GIOP implementation of IIOP. This port supports SSL.
3025, 3026, 4696 - OCSSD - The Oracle Cluster Synchronization Service Daemon.
6003, 6004, 6200, 6201 - OPMN - Oracle Process Manager and Notification Server.
7777 - OAS - Oracle Application Server.
8080 - XMLDB HTTP port - A standard HTTP client can be used to access the XML DB Repository.
9090

Default ports of Oracle 9i R2 running on Windows 2003

PORT     STATE         SERVICE       VERSION
80/tcp   open          http          Oracle HTTP Server Powered by Apache 1.3.22
135/tcp  open          msrpc         Microsoft Windows RPC
139/tcp  open          netbios-ssn
443/tcp  open          ssl           OpenSSL
445/tcp  open          microsoft-ds  Microsoft Windows 2003 microsoft-ds
1025/tcp open          NFS-or-IIS?
1026/tcp open          msrpc         Microsoft Windows RPC
1034/tcp open          oracle-tns    Oracle TNS Listener
1521/tcp open          oracle-tns    Oracle TNS Listener
1748/tcp open          oracle-dbsnmp Oracle DBSNMP
1754/tcp open          oracle-tns    Oracle TNS Listener
1808/tcp open          unknown
1809/tcp open          unknown
2030/tcp open          oracle-mts    Oracle MTS Recovery Service
2100/tcp open          ftp           Oracle Enterprise XML DB ftpd 9.2.0.1.0
3339/tcp open          http          Oracle HTTP Server Powered by Apache 1.3.22
8080/tcp open          http          Oracle XML DB Enterprise Edition httpd 9.2.0.1.0
8228/tcp open          unknown
123/udp  open|filtered ntp
137/udp  open          netbios-ns    Microsoft Windows NT netbios-ssn
445/udp  open|filtered microsoft-ds
500/udp  open|filtered isakmp
1031/udp open|filtered iad2
4500/udp open|filtered sae-urn

Default ports of Oracle 10g R2 running on Windows 2003

PORT     STATE         SERVICE      VERSION
135/tcp  open          msrpc?
139/tcp  open          netbios-ssn
445/tcp  open          microsoft-ds Microsoft Windows 2003 microsoft-ds
1025/tcp open          NFS-or-IIS?
1026/tcp open          msrpc        Microsoft Windows RPC
1521/tcp open          oracle-tns   Oracle TNS Listener
5560/tcp open          http         Oracle Application Server httpd 9.0.4.1.0
5580/tcp open          sdlog        Oracle Enterprise Manager
123/udp  open|filtered ntp
137/udp  open          netbios-ns   Microsoft Windows NT netbios-ssn
138/udp  open|filtered netbios-dgm
445/udp  open|filtered microsoft-ds
500/udp  open|filtered isakmp
4500/udp open|filtered sae-urn


Sunday, July 5, 2015

Datapump excluding statistics ; makes faster import !!!



Exclude statistics in data pump

Normally when I play with Data Pump, the expdp utility always takes less time than impdp; impdp spends the extra time importing the collected statistics into the new schema.
So yesterday I tried the EXCLUDE parameter in expdp/impdp to get the most out of Data Pump. What I did was skip the statistics during expdp:
expdp exclude=statistics...
After this, the entire impdp task was very fast. The same exclusion can also be used in the impdp utility, as shown below:
impdp exclude=statistics...

But you should not use exclude=statistics in impdp if you have already excluded them during expdp; it will throw the error "ORA-39168: Object path STATISTICS was not found." Data Pump is a good utility.




>>  Using the EXCLUDE=STATISTICS parameter of impdp prevents the automatic gathering of object statistics by impdp. Setting this parameter can greatly improve impdp performance.

Version: 10.2.0.4
Assume:
1.)      expdp included table/index statistics
2.)      the following parameter on the target db is set to false: "_optimizer_compute_index_stats"

It is my understanding that impdp will _only_ import statistics, not do a 
regather statistics operation.
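If statistics were excluded on export or import, they can be regathered after the import completes; a sketch using DBMS_STATS (the schema name and options are illustrative):

```sql
-- Sketch: regather optimizer statistics for an imported schema
begin
  dbms_stats.gather_schema_stats(
    ownname          => 'HR',
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => true,   -- include index statistics
    degree           => 4);     -- parallel degree
end;
/
```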

Wednesday, May 20, 2015

Oracle Log Rotation


Oracle Log Rotation by Caleb Small

Copyright © 2008, Caleb.com

Three types of logs to deal with:

1)      Typical log files (eg. alert, sqlnet, listener, etc) that constantly grow
2)      Trace files (.trc) that are constantly being created
3)      Audit trace files.  One audit file is created for each process PID when a session starts.  When the same PID is used again in the future (which it will be) the existing file is appended to.

The Linux find utility is useful for locating trace files older than a certain number of days.  It can be easily configured to identify and then delete these files, for example:

$ find /oracle/app/admin -name '*.trc' -mtime +361 -exec rm -f {} \; >/dev/null 2>&1

This can be applied to the audit trace files as well, but the results are unpredictable.  Trace files older than the threshold will be deleted; however, with a very long threshold it becomes more likely that a process PID will be re-used and hence the file date updated.  If create date were used instead, we would be deleting audit information before its time.
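Before pointing the find-delete at real files, its behaviour can be tried safely in a sandbox directory; the sketch below uses the same command shape against throw-away files:

```shell
#!/bin/sh
# Sketch: test the age-based purge in an isolated sandbox directory.
SANDBOX=/tmp/trc_purge_demo
mkdir -p "$SANDBOX"

touch "$SANDBOX/new.trc"                   # recent file: should be kept
touch -t 202001010000 "$SANDBOX/old.trc"   # old timestamp: should be purged

# Same shape as the real command, but scoped to the sandbox
find "$SANDBOX" -name '*.trc' -mtime +361 -exec rm -f {} \;

ls "$SANDBOX"
```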

Typical log files can be handled with the Linux logrotate utility.  Configuration files determine which files are rotated, how often, and when they are purged.  The setup is as follows:

1)      Create a script to be placed in the appropriate cron directory.  This script can perform the find-deletes described above, and then launch logrotate utility with configuration files specifically for Oracle logs
2)      Create a logrotate.conf file to list the Oracle files to be rotated.  Wildcards can be used but they must specifically identify the logfiles to be rotated, not just *.
3)      Create a logrotate.status file which is used by logrotate to determine when each file was last rotated.  This can either be done manually, or automatically by running logrotate with the -f (force) option the first time.  Note, however, that if additional log files are added in the future (even if wildcards would identify them) they will not be rotated unless they are listed in the status file.

Typical cron file placed in /etc/cron.daily


-rwxr-xr-x  1 oracle dba    894 Dec  6 16:59 oralogrotate

#!/bin/bash
#
# Rotate Oracle log files and purge trace files
# Copyright December 2008, Caleb.com
#
# Place this file in /etc/cron.weekly (or whatever period is desired), with root execute privs
# Modify -mtime retention settings as necessary for trace file purging
#
# Place oralogrotate.conf in /etc directory, and modify as necessary to include log files
# Run logrotate once with -f (force) option to create status file
# status file must be updated manually if new logs are added in the future
#
# STEP 1 - Purge trace files
#
# Use find to locate files older than threshold number of days and delete them
# This is useful for the likes of trace files which are constantly being created
#
/usr/bin/find /oracle/app/admin -name '*.trc' -mtime +361 -exec rm -f {} \; >/dev/null 2>&1
/usr/bin/find /oracle/app/admin -name '*.aud' -mtime +31 -exec rm -f {} \; >/dev/null 2>&1
/usr/bin/find /oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/recv -name '*.err*' -mtime +31 -exec rm -f {} \; >/dev/null 2>&1
#
# Some logs (eg core dumps) create a subdir which can be emptied by the above
# commands, but the subdirs will remain.  This will clean out empty dirs:
#
/bin/rmdir --ignore-fail-on-non-empty /oracle/app/admin/orcl/cdump/core* >/dev/null 2>&1
#
# STEP 2 - Rotate log files
#
# Use the Linux log rotator for typical log files (alert log, listener.log, etc)
# that constantly grow.
# Note that the -f (force) option may be *required* the first time in order to
# generate a status file.  If new logs are added in the future they must be added
# to the status file or they will not be rotated.
#
/usr/sbin/logrotate -v -s /var/lib/oralogrotate.status /etc/oralogrotate.conf >/home/oracle/oralogrotate_`date +%F:%T`_`hostname -s`.log 2>&1

Typical oralogrotate.conf file in /etc:

-rw-r--r-- 1 oracle dba 423 Dec  6 17:25 oralogrotate.conf

monthly
rotate 12
copytruncate
compress
/oracle/app/admin/+ASM/bdump/alert*.log {
}
/oracle/app/admin/ORCL/bdump/alert*.log {
}
/oracle/app/admin/ORCL/bdump/drc*.log {
}
/oracle/app/product/10.2.0.1/db_1/network/log/*.log {
}
/oracle/app/product/10.2.0.1/crs_1/log/gamma1/*.log {
}
/oracle/app/product/10.2.0.1/db_1/oc4j/j2ee/OC4J_DBConsole_gamma1_ORCL1/log/*.log {
}
/oracle/app/product/10.2.0.1/db_1/gamma1_ORCL1/sysman/log/emdb.nohup {
}
Note: on systems that use ASM or RAC, or other additional Oracle homes, each Oracle home must be scanned for candidate log files and added to this file.  Be careful of root permission issues on some Clusterware log files. 

Typical oralogrotate.status file in /var/lib:

-rw-r--r-- 1 root root 1110 Dec  6 17:11 oralogrotate.status

logrotate state -- version 2
"/oracle/app/admin/orcl/bdump/alert_orcl.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/sqlnet.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/network/log/listener.log" 2008-12-5
"/oracle/app/product/10.2.0.1/db_1/network/log/sqlnet.log" 2008-12-5
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emagent.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emagentfetchlet.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emdctl.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emoms.log" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emagent.trc" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emagentfetchlet.trc" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emdctl.trc" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emoms.trc" 2008-12-6
"/oracle/app/product/10.2.0.1/db_1/gamma1.psoug.org_orcl/sysman/log/emdb.nohup" 2008-12-6

Note: this file can be created manually, or by running oralogrotate the first time with the -f (force) option which causes logs to be rotated whether it is needed or not.   As new log files are added to the configuration, or to the system under wildcard directories, they must be added to the status file or they will not be rotated.