Performance Tuning

Last modification: 18-Aug-11

    - Installation and Top Init.ora Parameters
    - Oracle Performance Checklist
    - Instance Tuning
    - Application and SQL Tuning
    - Distribution of Disk I/O
    - ANALYZE and DBMS_STATS Package
    - Working with UNDO
    - Indexes on Foreign Keys (FK)
    - Rebuild Indexes
    - Hints
    - Nologging
    - CBO Options (Optimizer Mode)
    - Connect using IPC to Local Databases
    - Space used per block


Memory Tuning
The total available memory on a system should be configured so that all components of the system function at optimum levels. The following is a rule-of-thumb breakdown to help in allocating memory to the various components of a system with an Oracle back-end:

Oracle SGA Components                    ~ 50%
Operating System + Related Components    ~ 15%
User Memory                              ~ 35%

The following is a rule-of-thumb breakdown of the ~50% of memory that is allocated for an Oracle SGA. These are good starting numbers that will potentially require fine-tuning once the nature and access patterns of the application are determined.



    - Database Buffer Cache
    - Shared Pool Area
    - Fixed Size + Misc
    - Redo Log Buffer

The following is an example to illustrate the above guidelines. In this example, it is assumed that the system is configured with 2 GB of memory, with an average of 100 concurrent sessions at any given time. The application requires response times within a few seconds and is mainly transactional, but it also supports batch reports at regular intervals.



    - Oracle SGA Components
    - Operating System + Related Components
    - User Memory


In the aforementioned breakdown, approximately 694MB of memory will be available for Program Global Areas (PGA) of all Oracle Server processes. Again, assuming 100 concurrent sessions, the average memory consumption for a given PGA should not exceed ~7MB. It should be noted that SORT_AREA_SIZE is part of the PGA.



Database Buffer Cache
Shared Pool Area        ~ 128 - 188 MB
Fixed Size + Misc       ~ 8 MB
Redo Log Buffer         ~ 1 MB (average size 512K)

Another Example

Let's assume that we have a high-water mark of 100 connected sessions to our Oracle database server. We multiply 100 by the total area for each PGA memory region, and we can then determine the maximum size of our SGA. As a rule of thumb, reserve about 20 percent of total RAM for MS-Windows, or about 10 percent of RAM for UNIX.

Here we can see the values for sort_area_size and hash_area_size for our Oracle database. To compute the value for the size of each PGA RAM region, we can write a quick data dictionary query against the v$parameter view :
set pages 999;
column pga_size format 999,999,999

select 2048576 + a.value + b.value   pga_size
from   v$parameter a, v$parameter b
where = 'sort_area_size'
and = 'hash_area_size';


The output from this data dictionary query shows that every connected Oracle session will use 3.6 megabytes of RAM for the Oracle PGA. If we multiply the number of connected users by the total PGA demands for each connected user, we will know exactly how much RAM to reserve for connected sessions.

Total RAM on Windows Server            1250 MB
Total PGA regions for 100 users:        362 MB
RAM reserved for Windows:               500 MB
                                      -------
Total                                   862 MB

Hence, we would want to adjust the RAM given to the data buffers so that the SGA size stays below 388 MB (that is, 1250 MB - 862 MB). With any SGA size greater than 388 MB, the server will start RAM paging, adversely affecting the performance of the entire server. The final task is to size the Oracle SGA such that the total memory involved does not exceed 388 MB.
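The sizing arithmetic above reduces to one subtraction. The sketch below is only an illustration of that arithmetic (not an Oracle utility), using the example's own figures of a 1250 MB server, a 500 MB Windows reserve, and 362 MB of total PGA:

```python
# Illustrative sketch of the SGA sizing arithmetic from the example above.
def max_sga_mb(total_ram_mb, os_reserve_mb, total_pga_mb):
    """RAM left over for the SGA after the OS reserve and all PGAs."""
    return total_ram_mb - (os_reserve_mb + total_pga_mb)

# 1250 MB server, 500 MB reserved for Windows, 100 sessions * ~3.6 MB = ~362 MB
print(max_sga_mb(1250, 500, 362))  # 388
```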

Examples for UNIX Environments
0) For very large machines with 4 GB of RAM & 12 GB of swap, we recommend the following:
set shmsys:shminfo_shmmax=3221225471
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmseg=100
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=163840
set semsys:seminfo_semmsl=160
set semsys:seminfo_semmap=163840
set semsys:seminfo_semmnu=163840
set msgsys:msginfo_msgmap=163840
set msgsys:msginfo_msgmax=6144
set msgsys:msginfo_msgmni=640
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=640
set msgsys:msginfo_msgseg=32768

1) For high end machines with 2 GB of RAM & 6 GB of swap, we recommend the following:
set shmsys:shminfo_shmmax=1073741824
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=250
set shmsys:shminfo_shmseg=100
set semsys:seminfo_semmni=750
set semsys:seminfo_semmns=75000
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmap=75000
set semsys:seminfo_semmnu=75000
set msgsys:msginfo_msgmap=75000
set msgsys:msginfo_msgmax=6144
set msgsys:msginfo_msgmni=640
set msgsys:msginfo_msgssz=64
set msgsys:msginfo_msgtql=640
set msgsys:msginfo_msgseg=32768

2) For medium end machines with 1 GB of RAM & 3 GB of swap we recommend the following:
set shmsys:shminfo_shmmax=536870912
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=150
set shmsys:shminfo_shmseg=50
set semsys:seminfo_semmni=500
set semsys:seminfo_semmns=50000
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmap=50000
set semsys:seminfo_semmnu=50000
set msgsys:msginfo_msgmap=50000
set msgsys:msginfo_msgmax=2048
set msgsys:msginfo_msgmni=512
set msgsys:msginfo_msgssz=32
set msgsys:msginfo_msgtql=512
set msgsys:msginfo_msgseg=16384

Top Oracle Init.ora's Parameters

Reference for setting the DB_BLOCK_LRU_LATCHES parameter
Default value: 1/2 the number of CPUs
Range: min 1, max about 6 * max(#CPUs, #processor groups)
1) Oracle has found that an optimal value would be 2 x #CPUs and recommends testing at this level.
2) Setting this parameter to a multiple of #CPUs is also important for Oracle to properly allocate and utilize working sets.
3) This value is hard-coded in 9i.
Increasing this parameter beyond 2 x #CPUs may have a negative impact on the system.

You have just upgraded to 8.0 or 8.1 and have found that there are 2 new parameters regarding DBWR. You are wondering what the differences are and which one you should use.

In Oracle7, the multiple DBWR processes were simple slave processes, i.e., unable to perform async I/O calls. In Oracle8, true asynchronous I/O is provided to the slave processes, if available. This feature is implemented via the init.ora parameter dbwr_io_slaves. With dbwr_io_slaves, there is still a master DBWR process and its slave processes. This feature is very similar to db_writers in Oracle7, except the I/O slaves are now capable of asynchronous I/O on systems that provide native async I/O, thus allowing for much better throughput, as slaves are not blocked after the I/O call. I/O slaves for DBWR are allocated immediately following database open, when the first I/O request is made.

Multiple database writers are implemented via the init.ora parameter db_writer_processes. This feature was enabled in Oracle 8.0.4, and allows true database writers, i.e., no master-slave relationship. With Oracle8 db_writer_processes, each writer process is assigned to an LRU latch set. Thus, it is recommended to set db_writer_processes equal to the number of LRU latches (db_block_lru_latches) and not to exceed the number of CPUs on the system. For example, if db_writer_processes is set to four and db_block_lru_latches=4, then each writer process will manage its corresponding set.

Things to know and watch out for....
1. Multiple DBWRs and DBWR IO slaves cannot coexist. If both are enabled, then the following error message is produced: ksdwra("Cannot start multiple dbwrs when using I/O slaves.\n"); Moreover, if both parameters are enabled, dbwr_io_slaves will take precedence.
2. The number of DBWRs cannot exceed the number of db_block_lru_latches. If it does, then the number of DBWRs will be minimized to equal the number of db_block_lru_latches and the following message is produced in the  alert.log during startup:  ("Cannot start more dbwrs than db_block_lru_latches.\n"); However, the number of lru latches can exceed the number of DBWRs. 
3. dbwr_io_slaves are not restricted to the db_block_lru_latches;  i.e., dbwr_io_slaves >= db_block_lru_latches.

Although both implementations of DBWR processes may be beneficial, the general rule, on which option to use, depends on the following : 
1) the amount of write activity; 
2) the number of CPUs (the number of CPUs is also indirectly related to the number LRU latch sets); 
3) the size of the buffer cache; 
4) the availability of asynchronous I/O (from the OS).

There is NOT a definite answer to this question but here are some considerations to have when making your choice. Please note that it is recommended to try BOTH (not simultaneously) against your system to determine which best fits the environment. 

-- If the buffer cache is very large (100,000 buffers and up) and the application is write intensive, then db_writer_processes may be beneficial. Note, the number of writer processes should not exceed the number of CPUs.

-- If the application is not very write intensive (or even a DSS system) and async I/O is available, then consider a single DBWR writer process;  If async I/O is not available then use dbwr_io_slaves.

-- If the system is a uniprocessor (1 CPU), then you may want to use dbwr_io_slaves.

Implementing dbwr_io_slaves or db_writer_processes comes with some overhead cost. Multiple writer processes and I/O slaves are advanced features, meant for high I/O throughput. Implement these features only if the database environment requires such I/O throughput. In some cases, it may be acceptable to disable I/O slaves and run with a single DBWR process.
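The coexistence and cap rules described above can be encoded as a small decision function. This is a hedged sketch of the documented rules only (the function and parameter names are illustrative, not an Oracle API), and it applies the "writers should not exceed CPUs" recommendation as a hard cap, which Oracle itself does not enforce:

```python
# Sketch of the DBWR configuration rules described above:
#  - dbwr_io_slaves and multiple DBWRs cannot coexist; slaves take precedence
#  - the number of DBWRs is capped at db_block_lru_latches
#  - the recommendation that writers not exceed #CPUs is applied as a cap here
def effective_dbwr_config(db_writer_processes, dbwr_io_slaves,
                          db_block_lru_latches, cpus):
    """Return the effective (writer_processes, io_slaves) pair."""
    if dbwr_io_slaves > 0:
        # One master DBWR plus its I/O slaves; multiple DBWRs are ignored.
        return 1, dbwr_io_slaves
    writers = min(db_writer_processes, db_block_lru_latches, cpus)
    return writers, 0

print(effective_dbwr_config(4, 0, 4, 4))   # (4, 0)
print(effective_dbwr_config(6, 0, 4, 8))   # (4, 0) - capped by LRU latches
print(effective_dbwr_config(4, 8, 4, 4))   # (1, 8) - slaves take precedence
```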

Other Ways to Tune DBWR Processes
It can be easily seen that reducing buffer operations will be a direct benefit to DBWR and also help overall database performance. Buffer operations can be reduced by: 
1) using dedicated temporary tablespaces 
2) using direct sort reads
3) using direct-path SQL*Loader loads
4) performing direct exports. 

In addition, keeping a high buffer cache hit ratio will be extremely beneficial not only to the response time of applications, but the DBWR as well. 

Oracle Performance Checklist
As a consultant, I follow a standard procedure when I come into a new shop with a database that I have never seen before. My goal is to quickly identify and correct performance problems. Here is a summary of the things that I look at first:
1 - Install STATSPACK first, and get hourly snaps working.
2 - Get an SQL access report, a spreport during peak times, and statspack_alert.sql output.
3 - Look for silver bullets:
        - partial schema stats
        - missing indexes
        - optimizer_index_cost_adj=15   # 10-15 for OLTP systems, 50 for DW; adjusts the optimizer to favor index access
        - optimizer_index_caching=85    # depending on RAM available for index caching
        - optimizer_mode=first_rows (for OLTP)
        - hash_area_size too small (too many nested loop joins)
        - parallel_automatic_tuning=TRUE  # parallelizes full-table scans; because parallel full-table scans are very fast, the CBO gives a higher cost to index access and becomes friendlier to full-table scans
4 - Fully utilize server RAM - On a dedicated Oracle server, use all extra RAM for db_cache_size less PGA's and 20% RAM reserve for OS.
5 - Get the bottlenecks - See STATSPACK top 5 wait events - OEM performance pack reports - TOAD reports
6 - Look for Buffer Busy Waits resulting from table/index freelist shortages
7 - See if large-table full-table scans can be removed with well-placed indexes
8 - If tables are low-volatility, seek an MV that can pre-join/pre-aggregate common queries. Turn on automatic query rewrite.
9 - Look for non-reentrant SQL (literal values inside SQL from v$sql). If so, set cursor_sharing=force.
10 - Monitor over time - The ongoing STATSPACK reports should show any new performance problems.
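The optimizer settings from step 3 could be collected into an init.ora fragment like the one below. The values are only the checklist's own rules of thumb for an OLTP system; treat them as starting points to test, not definitive settings.

```
optimizer_mode = first_rows          # OLTP; favor fast response time
optimizer_index_cost_adj = 15        # 10-15 for OLTP, 50 for DW
optimizer_index_caching = 85         # depends on RAM available for index caching
cursor_sharing = force               # only if v$sql shows literal, non-reentrant SQL
```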

1) Library Cache Hit Ratio:
In the most basic terms, the library cache is a memory structure that holds the parsed (i.e., already examined to determine syntax correctness, security privileges, execution plan, etc.) versions of SQL statements that have been executed at least once. As new SQL statements arrive, older SQL statements will be pushed from the memory structure to provide space for the new statements. If the older SQL statements need to be re-executed, they will now have to be re-parsed. Also, a SQL statement that is not exactly the same as an already parsed statement (down to capitalization) will be re-parsed even though it may perform the exact same operation. Parsing is an expensive operation, so the objective is to make the memory structure large enough to hold enough parsed SQL statements to avoid a large percentage of re-parsing.
Target: 99% or greater.
Value: SELECT (1 - SUM(reloads)/SUM(pins)) FROM v$librarycache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.

2) Dictionary Cache Hit Ratio:
The dictionary cache is the memory structure that holds the most recently used contents of ORACLE's data dictionary, such as security privileges, table structures, column data types, etc. This data dictionary information is necessary for each and every parsing of a SQL statement. Recalling that memory is around 300 times faster than disk, it is needless to say that performance is improved by holding enough data dictionary information in memory to significantly minimize disk accesses.
Target: 90%
Value: SELECT (1 - SUM(getmisses)/SUM(gets)) FROM v$rowcache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes) in the INIT.ORA file.

3) Buffer Cache Hit Ratio:
The buffer cache is the memory structure that holds the most recently used blocks read from disk, whether table, index, or other segment type. As new data is read into the buffer cache, data that hasn't been recently used is pushed out. Again recalling that memory is approximately 300 times faster than disk, the objective is to hold enough data in memory to minimize disk accesses. Note that data read from tables through the use of indexes is held in the buffer cache much longer than data read via full-table scans.
Target: 90% (although some shops find 80% or even 70% acceptable)
SELECT value FROM v$sysstat WHERE name = 'consistent gets';
SELECT value FROM v$sysstat WHERE name = 'db block gets';
SELECT value FROM v$sysstat WHERE name = 'physical reads';
Buffer cache hit ratio = 1 - physical reads/(consistent gets + db block gets)
Correction: Increase the DB_CACHE_SIZE parameter in the INIT.ORA file.
Other notes:
- Compare the values for "table scans" and "table access by rowid" in the v$sysstat table to gain general insight into whether additional indexing is needed. Tuning specific applications via indexing will increase the "table access by rowid" value (i.e., tables read through the use of indexes) and decrease the "table scans" value. This effect tends to improve the buffer cache hit ratio, since a smaller volume of data is read into the buffer cache from disk, so less previously cached data is pushed out. (See the article on application tuning for more details regarding indexing.)
- A low buffer cache hit ratio can very quickly lead to an I/O bound situation, as more reads are required per period of time to provide the requested data. When the reads/time period exceed the workload supported by the disk subsystem, exponential performance degradations can occur. (Please see the section on Operating System tuning.)
- Since the buffer cache will typically be the largest memory structure allocated in the ORACLE instance, it is the structure most likely to contribute to O/S paging. If the buffer cache is sized such that the hit ratio is 90%, but excessive paging occurs at this setting, performance may be better if the buffer cache were sized to achieve an 85% hit ratio. Careful analysis is necessary to balance the buffer cache hit ratio with the O/S paging rate.
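Putting the three v$sysstat values together, the hit ratio formula above can be sketched as follows; the stat values here are hypothetical, standing in for the query results:

```python
# Buffer cache hit ratio = 1 - physical reads / (consistent gets + db block gets)
def buffer_cache_hit_ratio(consistent_gets, db_block_gets, physical_reads):
    logical_reads = consistent_gets + db_block_gets
    return 1 - physical_reads / logical_reads

# e.g. 900,000 consistent gets, 100,000 db block gets, 50,000 physical reads
print(round(buffer_cache_hit_ratio(900_000, 100_000, 50_000), 2))  # 0.95
```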

4) Sort Area Hit Ratio:
Sorts that are too large to be performed in memory are written to disk. Once again, memory is about 300 times faster than disk, so for instances where a large volume of sorting occurs (such as decision support systems or data warehouses), sorting on disk can degrade performance. The objective, of course, is to allow a significant percentage of sorts to occur in memory.
Target: 90% (although many shops find 80% or less acceptable)
SELECT value FROM v$sysstat WHERE name = 'sorts (memory)';
SELECT value FROM v$sysstat WHERE name = 'sorts (disk)';
Sort area hit ratio = 1 - disk sorts/(memory sorts + disk sorts);

Correction: Increase the SORT_AREA_SIZE parameter (in bytes) in the INIT.ORA file.
Other notes:
- With release 7.3 and above, setting the SORT_DIRECT_WRITES = TRUE initialization parameter causes sorts to disk to bypass the buffer cache, thus improving the buffer cache hit ratio.
- As with buffer cache hit ratio, examine the values for "table scans" and "table access by rowid" in the v$sysstat table to determine if additional indexing is needed. In some cases, the optimizer will choose to retrieve the rows in the correct order by using the index, thus avoiding a sort. In other cases, retrieval by index rather than full-table scan tends to collect a smaller quantity of rows to be sorted, thus increasing the probability that the sort can occur in memory, which also tends to improve the sort area hit ratio.
- Also, as with the buffer cache hit ratio, the sort area size (if very large) can contribute to O/S paging. In general, sorting on disk should be favored over excessive paging, as paging affects all memory structures (ORACLE and non-ORACLE) while sorting on disk only affects sorts performed by the ORACLE instance.
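The sort area formula above follows the same pattern; a small sketch with hypothetical values in place of the two v$sysstat queries:

```python
# Sort area hit ratio = 1 - disk sorts / (memory sorts + disk sorts)
def sort_area_hit_ratio(memory_sorts, disk_sorts):
    return 1 - disk_sorts / (memory_sorts + disk_sorts)

# e.g. 9,500 in-memory sorts and 500 sorts to disk
print(round(sort_area_hit_ratio(9_500, 500), 2))  # 0.95
```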

5) Redo Log Space Requests:
Redo logs (and archive logs if the ORACLE instance is run in ARCHIVELOG mode) are transaction logs involving a variety of structures. The redo log buffer is a memory structure into which changes are recorded as they are applied to blocks in the buffer cache (including data, index, rollback segments, etc.). Committed changes are synchronously flushed to redo log file members on disk, while uncommitted changes are asynchronously written to redo log files. (This approach makes perfect sense on inspection. If an instance crash occurs, committed changes are already written to the redo logs on disk and are applied during instance recovery. Uncommitted changes in the redo log buffer not yet written to disk are lost, and any uncommitted changes that have been written to disk are rolled back during instance recovery.) A session performing an update and an immediate commit will not return until the committed change has been written to the redo log buffer and flushed to the redo log files on disk.

Redo log groups are written to in a round-robin manner. When the mirrored members of a redo log group become full, a log switch occurs, thus archiving one member of the redo log group (if ARCHIVELOG mode is TRUE), then clearing the members of that redo log group. Note that a checkpoint also occurs at least on each redo log switch.

In its most basic form, the redo log buffer should be large enough that no waits for available space in the memory structure occur while changes are written to redo log files. The redo log file size should be large enough that the redo log buffer does not fill during a redo log switch. Finally, there should be enough redo log groups that the archiving and clearing of filled redo logs does not cause waits for redo log switches, thus causing the redo log buffer to fill. The inability to write changes to the redo log buffer because it is full is reported as redo log space requests in the v$sysstat table.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'redo log space requests';
Correction:
- Increase the LOG_BUFFER parameter (in bytes) in the INIT.ORA file.
- Increase the redo log size.
- Increase the number of redo log groups.
Other notes:
- The default configuration of small redo log size and two redo log groups is seldom sufficient. Between 4 and 10 groups typically yields adequate results, depending on the particular archive log destination (whether a single disk, RAID array, or tape). Size will be very dependent upon the specific application characteristics and throughput requirements, and can range from less than 10 Mb to 500 Mb or greater.
- Since redo log sizes and groups can be changed without a shutdown/restart of the instance, increasing the redo log size and number of groups is typically the best area to start tuning for reduction of redo log space requests. If increasing the redo log size and number of groups appears to have little impact on redo log space requests, then increase the LOG_BUFFER initialization parameter.

6) Redo Buffer Latch Miss Ratio:
One of the two types of memory structure locking mechanisms used by an ORACLE instance is the latch. A latch is a locking mechanism that is implemented entirely within the executable code of the instance (as opposed to an enqueue, see below). The latch mechanisms most likely to suffer from contention involve requests to write data into the redo log buffer. To serve the intended purpose, writes to the redo log buffer must be serialized (i.e., one process locks the buffer, writes to it, then unlocks it; a second process locks, writes, and unlocks; etc., while other processes wait for their chance to acquire these same locks).

There are four different groupings applicable to redo buffer latches: redo allocation latches and redo copy latches, each with immediate and willing-to-wait priorities. Redo allocation latches are acquired by small redo entries (having an entry size smaller than or equal to the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter) and utilize only a single CPU's resources for execution. Redo copy latches are requested by larger redo entries (entry size larger than LOG_SMALL_ENTRY_MAX_SIZE), and take advantage of multiple CPUs for execution. Recall from above that committed changes are synchronously written to redo logs on disk: these entries require an immediate latch of the appropriate type. Uncommitted changes are asynchronously written to redo log files, thus they attempt to acquire a willing-to-wait latch of the appropriate type. Below, each category of redo buffer latch is considered separately.
- Redo allocation immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE = 'redo allocation' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE = 'redo allocation' AND b.latch# = a.latch#;
Correction: Decrease the LOG_SMALL_ENTRY_MAX_SIZE parameter in the INIT.ORA file.
Other notes:
- By making the max size for a redo allocation latch smaller, more redo log buffer writes qualify for a redo copy latch instead, thus better utilizing multiple CPU's for the redo log buffer writes. Even though memory structure manipulation times are measured in nanoseconds, a larger write still takes longer than a smaller write. If the size for remaining writes done via redo allocation latches is small enough, they can be completed with little or no redo allocation latch contention.
- On a single CPU node, all log buffer writes are done via redo allocation latches. If log buffer latches are a significant bottleneck, performance can benefit from additional CPU's (thus enabling redo copy latches) even if the CPU utilization is not an O/S level bottleneck.
- In the SELECT statements above, an extremely small value is added to the divisor to eliminate potential divide-by-zero errors.

- Redo copy immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE = 'redo copy' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE = 'redo copy' AND b.latch# = a.latch#;
Correction: Increase the LOG_SIMULTANEOUS_COPIES parameter in the INIT.ORA file.
Other Notes:
- Essentially, this initialization parameter is the number of redo copy latches available. It defaults to the number of CPU's (assuming a multiple CPU node). Oracle Corporation recommends setting it as large as 2 times the number of CPU's on the particular node, although quite a bit of experimentation may be required to get the value adjusted in a suitable manner for any particular instance's workload. Depending on CPU capability and utilization, it may be beneficial to set this initialization parameter smaller or larger than 2 X #CPU's.
- Recall that the assignment of log buffer writes to either redo allocation latches or redo copy latches is controlled by the maximum log buffer write size allowed for a redo allocation latch, and is specified in the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter. Recall also that redo copy latches apply only to multiple CPU hosts.

7) Enqueue Waits:
The second of the two types of memory structure locking mechanisms used by an ORACLE instance is the enqueue. As opposed to a latch, an enqueue is a lock implemented through the use of an operating system call, rather than entirely within the instance's executable code. Exactly which operations use locks via enqueues is not made sufficiently clear by any Oracle documentation (or at least none that the author has seen), but the fact that enqueue waits do degrade instance performance is reasonably clear. Luckily, tuning enqueues is very straightforward.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'enqueue waits';
Correction: Increase the ENQUEUE_RESOURCES parameter in the INIT.ORA file.

8) Checkpoint Contention:
A checkpoint is the process of flushing all changed data blocks (table, index, rollback segments, etc.) held in the buffer cache to their corresponding datafiles on disk. This process occurs during each redo log switch, each time the number of database blocks specified in the LOG_CHECKPOINT_INTERVAL initialization parameter is reached, and each time the number of seconds specified in LOG_CHECKPOINT_TIMEOUT is reached. (Checkpoints also occur during a NORMAL or IMMEDIATE SHUTDOWN, when a tablespace is placed in BACKUP mode, or when an ALTER SYSTEM CHECKPOINT is manually issued, but these occurrences are usually outside the scope of normal daytime operation.)

Depending on the number of changed blocks in the buffer cache, a checkpoint can take considerable time to complete. Since this process is essentially done asynchronously, user sessions performing work will typically not have to wait for a checkpoint to complete. However, checkpoints can affect overall system performance, since they are fairly resource-intensive operations, even though they occur in the background. Checkpoints are, of course, absolutely necessary, but it is quite possible for one checkpoint to begin (because of LOG_CHECKPOINT_INTERVAL or LOG_CHECKPOINT_TIMEOUT settings) and partially complete, then be rolled back because another checkpoint was issued (perhaps because of a redo log switch). It is desirable to avoid this checkpoint contention because it wastes considerable resources that could be used by other processes. Checkpointing statistics are readily available in the v$sysstat table, and the contention is fairly simple to determine.
Target: 1 or less
SELECT value FROM v$sysstat WHERE name = 'background checkpoints started';
SELECT value FROM v$sysstat WHERE name = 'background checkpoints completed';
Checkpoints rolled-back = checkpoints started - checkpoints completed;
Correction:
- Increase the LOG_CHECKPOINT_TIMEOUT parameter (in seconds) in the INIT.ORA file, or set it to 0 to disable time-based checkpointing. If time-based checkpointing is not disabled, set it so that checkpoints occur no more than once per hour.
- Increase the LOG_CHECKPOINT_INTERVAL parameter (in db blocks) in the INIT.ORA file, or set it to an arbitrarily large value so that change-based checkpoints will only occur during a redo log switch.
- Examine the redo log size and the resulting frequency of redo log switches.
Other notes: Note that regardless of the checkpoint frequency, no data is lost in the event of an instance crash. All changes are recorded to the redo logs and would be applied during instance recovery on the next startup, so checkpoint frequency will impact the time required for instance recovery. Presented below is a typical scenario:
- Set the LOG_CHECKPOINT_INTERVAL to an arbitrarily large value, set the LOG_CHECKPOINT_TIMEOUT to 2 hours, and size the redo logs so that a log switch will normally occur once per hour. During times of heavy OLTP activity, a change-based log switch will occur approximately once per hour, and no time-based checkpoints will occur. During periods of light OLTP activity, a time-based checkpoint will occur at least once every two hours, regardless of the number of changes. Setting the LOG_CHECKPOINT_INTERVAL arbitrarily large allows change-based checkpoint frequency to be adjusted during periods of heavy use by re-sizing the redo logs on-line rather than adjusting the initialization parameter and performing an instance shutdown/restart.

9) Rollback Segment Contention:
Rollback segments are the structures into which undo information for uncommitted changes is temporarily stored. This behavior serves two purposes. First, a session can remove a change that was just issued by simply issuing a ROLLBACK rather than a COMMIT. Second, read consistency is established, because a long-running SELECT statement against a table that is constantly being updated (for example) will get data that is consistent with the start time of the SELECT statement by reading undo information from the appropriate rollback segment. (Otherwise, the answer returned by the long-running SELECT would vary depending on whether a particular block was read before the update occurred, or after.)

Rollback segments become a bottleneck when there are not enough to handle the load of concurrent activity, in which case sessions will wait for write access to an available rollback segment. Some waits for rollback segment data blocks or header blocks (usually header blocks) will always occur, so the criterion for tuning is to limit the waits to a very small percentage of the total number of all data blocks requested. Note that rollback segments function exactly like table segments or index segments: they are cached in the buffer cache, and periodically checkpointed to disk.
Target: 1% or less
Rollback waits = SELECT max(count) FROM v$waitstat
WHERE class IN ('system undo header', 'system undo block','undo header', 'undo block')
GROUP BY class;
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Rollback segment contention ratio = rollback waits / block gets
Correction: Create additional rollback segments.
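The same waits-to-block-gets arithmetic is used here and for the freelist check in the next section; a minimal sketch with hypothetical stat values standing in for the v$waitstat and v$sysstat queries:

```python
# Contention ratio = waits on the resource / total block gets
def contention_ratio(waits, block_gets):
    """Fraction of block requests that waited on the resource."""
    return waits / block_gets

# e.g. 120 undo header waits against 2,000,000 block gets
print(contention_ratio(120, 2_000_000))  # 6e-05, well under the 1% target
```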

10) Freelist contention:
In each table, index, or other segment type, the first one or more blocks contain one or more freelists. The freelist(s) identify the blocks in that segment that have free space available and can accept more data. Any INSERT, UPDATE, or DELETE activity will cause the freelist(s) to be accessed. Change activity with a high level of concurrency may cause waits to access these freelist(s). This is seldom a problem in decision support systems or data warehouses (where updates are processed as nightly single-session batch jobs, for example), but can become a bottleneck in OLTP systems supporting large numbers of users. Unfortunately, there are no initialization parameters or other instance-wide settings to correct freelist contention: it must be corrected on a table-by-table basis by re-creating the table with additional freelists and/or by modifying the PCT_USED parameter. (Please see the article on storage management.) However, freelist contention can be measured at the instance level. Some freelist waits will always occur; the objective is to limit the freelist waits to a small percentage of the total blocks requested.
Target: 1% or less
Freelist waits = SELECT count FROM v$waitstat WHERE class = 'free list';
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent gets','db block gets');
Freelist contention ratio = Freelist waits / block gets
Correction: No method for instance-level correction. Please see the article on storage management.

11) Oracle Session hogs
If the complaint of poor performance is current, the connected sessions are one of the first things to check to see which users are impacting the system in undesirable ways. There are a couple of different avenues to take here. First, you can get an idea of the percentage of I/O that each session is responsible for. One rule of thumb is that if any session is currently consuming 50% or more of the total I/O, then that session and its SQL need to be investigated further to determine what activity it is engaged in. If you are a DBA who is concerned only with physical I/O, then the physpctio.sql query will provide the information you need:
This script queries the sys.v_$statname, sys.v_$sesstat, sys.v_$session, and sys.v_$bgprocess views.
select sid, username,
       round(100 * total_user_io/total_io,2) tot_io_pct
from (select b.sid sid, nvl(b.username, p.name) username,
             sum(value) total_user_io
        from sys.v_$statname c, sys.v_$sesstat a,
             sys.v_$session b, sys.v_$bgprocess p
        where a.statistic#=c.statistic# and
              p.paddr (+) = b.paddr and
              b.sid=a.sid and
              c.name in ('physical reads',
                         'physical writes',
                         'physical writes direct',
                         'physical reads direct',
                         'physical writes direct (lob)',
                         'physical reads direct (lob)')
        group by b.sid, nvl(b.username, p.name)),
     (select sum(value) total_io
        from sys.v_$statname c, sys.v_$sesstat a
        where a.statistic#=c.statistic# and
              c.name in ('physical reads',
                         'physical writes',
                         'physical writes direct',
                         'physical reads direct',
                         'physical writes direct (lob)',
                         'physical reads direct (lob)'))
order by 3 desc;
Regardless of which query you use, the output might resemble something like the following:
SID   USERNAME   TOT_IO_PCT
----  --------   -------------------
9     USR1       71.26
20    SYS        15.76
5     SMON       7.11
2     DBWR       4.28
12    SYS        1.42
6     RECO       .12
7     SNP0       .01
10    SNP3       .01
11    SNP4       .01
8     SNP1       .01
1     PMON       0
3     ARCH       0
4     LGWR       0
In the above example, a DBA would be prudent to examine the USR1 session to see what SQL calls it is making. Queries like these let you quickly pinpoint problem I/O sessions.
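The percentage computed by physpctio.sql is just each session's physical I/O over the instance total; a Python sketch with invented per-session counters:

```python
def io_percentages(session_io):
    """Each session's share of total physical I/O, as percentages."""
    total = sum(session_io.values())
    return {s: round(100 * io / total, 2) for s, io in session_io.items()}

# Invented totals; the real numbers come from v$sesstat joined to v$session.
sessions = {"USR1": 7126, "SYS": 1576, "SMON": 711, "DBWR": 428, "PMON": 0}
pcts = io_percentages(sessions)

# Flag any session over the 50% rule of thumb for further investigation.
hogs = [s for s, p in pcts.items() if p >= 50]
print(pcts, "->", hogs)
```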

Application and SQL Tuning

* Check DB Parameters
select substr(name,1,20), substr(value,1,40), isdefault, isses_modifiable, issys_modifiable
  from v$parameter
  where issys_modifiable <> 'FALSE'
     or isses_modifiable <> 'FALSE'
  order by name;

* SQL statements must be textually identical in order to be re-used in memory (shared SQL).

* Size of Database
compute sum of bytes on report
break on report
Select tablespace_name, sum(bytes) bytes
From dba_data_files
Group by tablespace_name;

* How much Space is Left?
compute sum of bytes on report
Select tablespace_name, sum(bytes) bytes
From dba_free_space
Group by  tablespace_name;

* Memory Values.
select substr(name,1,35) name, substr(value,1,25) value
from v$parameter
where name in ('db_cache_size','db_block_size','shared_pool_size','sort_area_size'); 

* Identify the SQL responsible for the most BUFFER GETS and/or DISK READS. To see what is in the SQL area:
SELECT SUBSTR(sql_text,1,80) Text, disk_reads, buffer_gets, executions
   FROM v$sqlarea
   WHERE executions  > 0
    AND buffer_gets > 100000
    and DISK_READS > 100000;

The column BUFFER_GETS is the total number of times the SQL statement read a database block from the buffer cache in the SGA. Since almost every SQL operation passes through the buffer cache, this value represents the best metric for determining how much work is being performed. It is not perfect, as there are many direct-read operations in Oracle that completely bypass the buffer cache. So, supplementing this information, the column DISK_READS is the total number of times the SQL statement read database blocks from disk, either to satisfy a logical read or to satisfy a direct-read. Thus, the formula:

    BUFFER_GETS + 100 * DISK_READS

is a very adequate metric of the amount of work being performed by a SQL statement. The weighting factor of 100 is completely arbitrary, but it reflects the fact that DISK_READS are inherently more expensive than BUFFER_GETS against shared memory.
Patterns to look for
DISK_READS close to or equal to BUFFER_GETS: this indicates that most (if not all) of the gets or logical reads of database blocks are becoming physical reads against the disk drives. This generally indicates a full-table scan, which is usually not desirable but which is usually quite easy to fix.
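The weighting described above can be sketched in Python (statement names and counters are invented):

```python
def work_metric(buffer_gets, disk_reads):
    """The text's weighted work metric: disk reads cost ~100x a buffer get."""
    return buffer_gets + 100 * disk_reads

stmts = [
    ("full scan of ORDERS", 120_000, 118_000),  # disk_reads ~ buffer_gets: scan
    ("indexed lookup", 500_000, 2_000),
]
# Rank statements by total work; the scan dominates despite fewer buffer gets.
ranked = sorted(stmts, key=lambda s: work_metric(s[1], s[2]), reverse=True)
print(ranked[0][0])
```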

* Finding the top 25 SQL
declare
 top25 number;
 text1 varchar2(4000);
 x number;
 len1 number;
 cursor c1 is
  select buffer_gets, substr(sql_text,1,4000)
  from v$sqlarea
  order by buffer_gets desc;
begin
 dbms_output.put_line('Gets'||'    '||'Text');
 dbms_output.put_line('----------'||' '||'----------------------');
 open c1;
 for i in 1..25 loop
  fetch c1 into top25, text1;
  dbms_output.put_line(rpad(to_char(top25),9)||' '||substr(text1,1,66));
  len1 := length(text1);
  x := 67;
  while len1 > x-1 loop
   dbms_output.put_line('"         '||substr(text1,x,66));
   x := x + 66;
  end loop;
 end loop;
 close c1;
end;
/

* Displays the percentage of SQL executed that did NOT incur an expensive hard parse, so a low number may indicate a literal SQL or other sharing problem.
A good ratio is dependent on your development environment; OLTP should be around 90 percent.

select 100 * (1-a.hard_parses/b.executions) noparse_hitratio
   from (select value hard_parses
         from v$sysstat
         where name = 'parse count (hard)' ) a
     ,(select value executions
         from v$sysstat
         where name = 'execute count') b;
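The ratio computed by this query reduces to one line of arithmetic; a Python sketch with invented counter values:

```python
def noparse_hit_ratio(hard_parses, executions):
    """Percentage of executions that did not incur a hard parse."""
    return 100.0 * (1 - hard_parses / executions)

# e.g. 5,000 hard parses out of 200,000 executions -> 97.5%, fine for OLTP.
print(noparse_hit_ratio(5_000, 200_000))
```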

* Hit ratio by session:
column HitRatio format 999.99
select substr(Username,1,15) username, Consistent_Gets,
       Block_Gets, Physical_Reads,
       100*(Consistent_Gets+Block_Gets-Physical_Reads)/(Consistent_Gets+Block_Gets) HitRatio
   from v$session, v$sess_io
   where v$session.sid = v$sess_io.sid
    and (Consistent_Gets+Block_Gets)>0
    and Username is not null;

* I/O by datafile:
select substr(DF.Name,1,40) File_Name,
       FS.Phyblkrd Blocks_Read,
       FS.Phyblkwrt Blocks_Written,
       FS.Phyblkrd+FS.Phyblkwrt Total_IOs
from v$filestat FS, v$datafile DF
where DF.File#=FS.File#
order by FS.Phyblkrd+FS.Phyblkwrt desc;

* Schema's Report
select substr(username,1,10) "Username", created "Created",
          substr(granted_role,1,25) "Roles",
          substr(default_tablespace,1,15) "Default TS",
          substr(temporary_tablespace,1,15) "Temporary TS"
from sys.dba_users, sys.dba_role_privs
where username = grantee (+)
order by username;

* Free space on TABLESPACES:
select substr(a.tablespace_name,1,10) tablespace,
       round(sum(a.total1)/1024/1024, 1) Total,
       round(sum(a.total1)/1024/1024, 1)-
       round(sum(a.sum1)/1024/1024, 1) used,
       round(sum(a.sum1)/1024/1024, 1) Free,
       round(sum(a.sum1)/1024/1024,1)*100/round(sum(a.total1)/1024/1024,1) porciento_fr,
       round(sum(a.maxb)/1024/1024, 1) Largest,
       max(a.cnt) Fragment
from (select tablespace_name, 0 total1, sum(bytes) sum1,
             max(bytes) MAXB,count(bytes) cnt
       from dba_free_space
       group by tablespace_name
       union all
       select tablespace_name, sum(bytes) total1, 0, 0, 0
        from dba_data_files
        group by tablespace_name) a
 group by a.tablespace_name;

* Segments whose next extent can't fit
select substr(owner,1,10) owner, substr(segment_name,1,40) segment_name, substr(segment_type,1,10) segment_type, next_extent
from dba_segments
where next_extent >
(select nvl(max(bytes),0) from dba_free_space
where tablespace_name = dba_segments.tablespace_name);
(An extent must fit in one contiguous free chunk, hence max(bytes) rather than sum(bytes).)

* Find Tables/Indexes fragmented into > 15 pieces
Select substr(owner,1,8) owner, substr(segment_name,1,42) segment_name, segment_type, extents
From  dba_segments
Where extents > 15;

* COALESCING FREE SPACE = adjacent free chunks can be merged into one larger chunk. Inspect with:
select file_id, block_id, blocks, bytes from dba_free_space
where tablespace_name = 'xxx' order by 1,2;
This returns a list of rows. If two rows have the same file_id and the first row's block_id + blocks equals the next row's block_id, then the two chunks can be coalesced.
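The adjacency test just described can be sketched in Python; chunk tuples are (file_id, block_id, blocks), with invented values:

```python
def coalescable_pairs(chunks):
    """Return pairs of adjacent free chunks, sorted as the query orders them."""
    chunks = sorted(chunks)  # order by file_id, block_id
    pairs = []
    for a, b in zip(chunks, chunks[1:]):
        # Same file, and the first chunk ends exactly where the next begins.
        if a[0] == b[0] and a[1] + a[2] == b[1]:
            pairs.append((a, b))
    return pairs

free = [(2, 100, 50), (2, 150, 30), (2, 300, 10), (3, 150, 20)]
print(coalescable_pairs(free))  # only the first two chunks are adjacent
```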

* Quick script to coalesce all the tablespaces
set echo off pages 0 trimsp off feed off
spool coalesce.sql
select 'alter tablespace '||tablespace_name||' coalesce;'
from sys.dba_tablespaces
where tablespace_name not in ('TEMP','ROLLBACK');
spool off
@coalesce.sql
host rm coalesce.sql

* Information about a Table
Select  Table_Name, Initial_Extent, Next_Extent,
  Pct_Free, Pct_Increase
From  dba_tables
Where  Table_Name = upper('&Table_name');

* Information about an Index:
Select  Index_name, Initial_Extent, Next_Extent
From  Dba_indexes
Where  Index_Name = upper('&Index_name');

* Fixing Table Fragmentation
Example: the CUSTOMER table is fragmented, currently in 22 extents of 1M each
(can be found by querying DBA_EXTENTS).
One approach is to re-create the table in a single larger extent, for example
with CREATE TABLE ... AS SELECT and STORAGE (INITIAL 22M), or by export, drop,
and re-import; then rebuild its indexes.

(Create all necessary privileges, grants, etc.)

PIN and UNPIN objects:
        execute dbms_shared_pool.keep('object_name','P or R or Q');
Use 'P' for a procedure, function, or package, 'R' for a trigger and 'Q' for a sequence. First you should run the scripts dbmspool.sql and prvtpool.plb located in $ORACLE_HOME/rdbms/admin as SYS (or INTERNAL) and grant execute on dbms_shared_pool.
        exec dbms_shared_pool.unkeep('SCOTT.TEMP','P');
If you want to have a table in memory, add the CACHE keyword at the end of the creation script. You can also use the  /*+ cache(table) */ hint.

To load the code automatically on each startup:

1- Create the following Trigger
create or replace trigger pin_packs
after startup on database
begin
      --You can interrogate the v$db_object_cache view to see the most frequently used packages
      -- Application-specific packages (hypothetical name; replace with your own):
      dbms_shared_pool.keep('SCOTT.MY_APP_PACKAGE');
      -- Oracle-supplied software packages:
      dbms_shared_pool.keep('SYS.STANDARD');
      dbms_shared_pool.keep('SYS.DBMS_STANDARD');
end;
/

2- Run the following Script to check pinned/unpinned packages
SELECT substr(owner,1,10)||'.'||substr(name,1,35) "Object Name",
                '  Type: '||substr(type,1,12)||
                '  size: '||sharable_mem ||
                ' execs: '||executions||
                ' loads: '||loads||
                ' Kept: '||kept
  FROM v$db_object_cache
  WHERE type IN ('PACKAGE','PACKAGE BODY','PROCEDURE','FUNCTION','TRIGGER','SEQUENCE')
--   AND  executions > 0
  ORDER BY executions desc,
           loads desc,
           sharable_mem desc;

* To find out chained rows

First analyze the table:
ANALYZE TABLE <table_name> COMPUTE STATISTICS;

Then from DBA_TABLES:
select table_name, chain_cnt, num_rows,
       round(chain_cnt*100/num_rows,2) pct_chained
from dba_tables
where num_rows > 0 and chain_cnt > 0;

This will give us the chained rows as a percentage of the total number of rows in that table. If this percentage is high (near 5%), the rows do not contain a LONG or similar datatype, and each row can fit inside one single data block, then the rows are migrated rather than truly chained, and PCTFREE should be increased to leave room for rows to grow in place.
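A Python sketch of that check, with hypothetical table names and counts (the real figures come from DBA_TABLES):

```python
def chained_pct(chain_cnt, num_rows):
    """Chained/migrated rows as a percentage of total rows."""
    return 100.0 * chain_cnt / num_rows if num_rows else 0.0

# table -> (chain_cnt, num_rows); invented numbers
tables = {"CUSTOMER": (4_900, 100_000), "ORDERS": (50, 100_000)}

# Flag tables whose chained-row percentage is "near 5%" (here: >= 4.5).
flagged = [t for t, (c, n) in tables.items() if chained_pct(c, n) >= 4.5]
print(flagged)
```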


Distribution of disk I/O

Two disks:
1-  exec, index, redo logs, export files, control files
2-  data, rollback segments, temp, archive log files, control files

Three disks (option A):
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log, ROLLBACK tablespace
Disk 3: DATA tablespace, control file, redo log

Three disks (option B):
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log
Disk 3: DATA tablespace, control file, redo log, ROLLBACK tablespace

Four disks:
1-  exec, redo logs, export files, control files
2-  data, temp, control files
3-  indexes, control files
4-  archive logs, rollback segs, control files

Five disks:
1-  exec, redo logs, system tablespace, control files
2-  data, temp, control files
3-  indexes, control files
4-  rollback segments, export, control files
5-  archive, control files


ANALYZE and DBMS_STATS Package

Oracle Corporation strongly recommends that you use the DBMS_STATS package rather than ANALYZE to collect optimizer statistics. That package lets you collect statistics in parallel, collect global statistics for partitioned objects, and fine-tune your statistics collection in other ways. Further, the cost-based optimizer will eventually use only statistics that have been collected by DBMS_STATS.

The DBMS_STATS package can gather statistics on indexes, tables, columns, and partitions, as well as statistics on all schema objects in a schema or database. The statistics-gathering operations can run either serially or in parallel (DATABASE/SCHEMA/TABLE only)

Prior to 8i, you would use the ANALYZE ... methods. From 8i onwards, however, using ANALYZE for this purpose is not recommended because of various restrictions; for example:

  1. ANALYZE always runs serially.
  2. ANALYZE calculates global statistics for partitioned tables and indexes instead of gathering them directly. This can lead to inaccuracies for some statistics, such as the number of distinct values.
  3. ANALYZE cannot overwrite or delete some of the values of statistics that were gathered by DBMS_STATS.
  4. Most importantly, in the future, ANALYZE will not collect statistics needed by the cost-based optimizer.
ANALYZE can gather additional information that is not used by the optimizer, such as information about chained rows and the structural integrity of indexes, tables, and clusters. DBMS_STATS does not gather this information.
- In 10g statistics get gathered automatically

DML Monitoring
Used by dbms_stats to identify objects with "stale" statistics
- On by default in 10g, not in 9i
alter table <table_name> monitoring;
- 9i and 10g use 10% change as the threshold to gather stats

In Oracle 10g, Oracle automatically gathers index statistics whenever the index is created or rebuilt.


EXEC DBMS_STATS.gather_table_stats(USER,  'LOOKUP', cascade => TRUE);

execute dbms_stats.gather_table_stats 
                        (ownname  => 'SCOTT'
                        , tabname => 'DEPT'
                        , partname=> null
                        , estimate_percent => 20
                        , degree => 5
                        , cascade => true
                        , options => 'GATHER AUTO');

execute dbms_stats.gather_schema_stats
                        (ownname => 'SCOTT'
                        , estimate_percent => 10
                        , degree => 5
                        , cascade => true);

execute dbms_stats.gather_database_stats
                        (estimate_percent => 20
                        , degree => 5
                        , cascade => true);

There are several values for the options parameter that we need to know about:
 -    gather - Re-analyzes the whole schema.
 -    gather empty - Only analyzes tables that have no existing statistics.
 -    gather stale - Only re-analyzes tables with more than 10% modifications (inserts, updates, deletes). The table must be in monitoring status first.
 -    gather auto - Re-analyzes objects which currently have no statistics and objects with stale statistics. The table must be in monitoring status first. Using gather auto is like combining gather stale and gather empty.
Note that both gather stale and gather auto require monitoring. If you issue the "alter table xxx monitoring" command, Oracle tracks changed tables with the dba_tab_modifications view. Below we see that the exact number of inserts, updates and deletes are tracked since the last analysis of statistics.
The most interesting of these options is the gather stale option. Because all statistics will become stale quickly in a robust OLTP database, we must remember the rule for gather stale is > 10% row change (based on num_rows at statistics collection time).
Hence, almost every table except read-only tables will be re-analyzed with the gather stale option. Hence, the gather stale option is best for systems that are largely read-only. For example, if only 5% of the database tables get significant updates, then only 5% of the tables will be re-analyzed with the "gather stale" option.
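The 10% staleness rule can be sketched as follows (counts invented; the real ones live in dba_tab_modifications and dba_tables):

```python
def is_stale(num_rows, inserts, updates, deletes):
    """True when DML since the last analysis exceeds 10% of num_rows."""
    changes = inserts + updates + deletes
    return num_rows == 0 or changes > 0.10 * num_rows

print(is_stale(1_000_000, 50_000, 40_000, 20_000))  # 11% changed -> stale
print(is_stale(1_000_000, 10_000, 5_000, 0))        # 1.5% changed -> not stale
```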
The CASCADE => TRUE option causes all indexes for the tables to also be analyzed.  In Oracle 10g, set CASCADE to DBMS_STATS.AUTO_CASCADE to let Oracle decide whether or not new index statistics are needed.
The DEGREE Option

Note that you can also parallelize the collection of statistics because the CBO does full-table and full-index scans. When you set degree=x, Oracle will invoke parallel query slave processes to speed up table access. Degree is usually about equal to the number of CPUs, minus 1 (for the OPQ query coordinator).
In Oracle 10g, set DEGREE to DBMS_STATS.AUTO_DEGREE to let Oracle select the appropriate degree of parallelism.

Force Statistics to a Table
You can use the following statement to force statistics on a table:

exec dbms_stats.set_table_stats( user, 'EMP', numrows => 1000000, numblks => 300000 );

New in Oracle Database 10g is the ability to gather statistics for the data dictionary.  The objective is to enhance the performance of queries.  There are two basic types of dictionary base tables. 
The statistics for normal base tables are gathered using GATHER_DICTIONARY_STATS.  They may also be gathered using GATHER_SCHEMA_STATS for the SYS schema.  Oracle recommends gathering these statistics at a similar frequency as your other database objects.
Statistics for fixed objects (the V$ views on the X$ tables) are gathered using the GATHER_FIXED_OBJECT_STATS procedure.  The initial collection of these statistics is normally sufficient.  Repeat only if workload characteristics have changed dramatically.  The SYSDBA privilege or ANALYZE ANY DICTIONARY and ANALYZE ANY privileges are required to execute the procedures for gathering data dictionary statistics.

SQL Source - Dynamic Method

declare
  sql_stmt    VARCHAR2(1024);
begin
  FOR tab_rec IN (SELECT owner, table_name
                    FROM all_tables WHERE owner like UPPER('&1') ) LOOP
        sql_stmt := 'BEGIN dbms_stats.gather_table_stats
                     (ownname  => :1,
                      tabname    => :2,
                      partname   => null,
                      estimate_percent => 10,
                      degree => 3 ,
                      cascade => true);  END;'  ;

      EXECUTE IMMEDIATE sql_stmt USING tab_rec.owner, tab_rec.table_name ;
  END LOOP;
end;
/

* Some Dictionary Views
DBA_TABLES -> owner, table_name, num_rows, blocks, empty_blocks, avg_space, chain_cnt, avg_row_len, sample_size, last_analyzed
DBA_INDEXES -> owner, INDEX_name, leaf_blocks, distinct_keys, avg_leaf_blocks_per_key, avg_data_blocks_per_key.

More examples:
CREATE OR REPLACE PROCEDURE analyze_any_schema ( p_inOwner IN all_users.username%TYPE)
IS
BEGIN
    FOR v_tabs  IN  (SELECT owner, table_name
                        FROM all_tables
                        WHERE owner       =   p_inOwner
                          AND temporary   <>  'Y')
    LOOP
        BEGIN
            DBMS_OUTPUT.put_line ('EXEC  DBMS_STATS.gather_table_stats('''||v_tabs.owner||
                                   ''','''||v_tabs.table_name||''',NULL,1);' );
            DBMS_STATS.gather_table_stats (v_tabs.owner, v_tabs.table_name, NULL, 1);
            DBMS_OUTPUT.put_line ('Analyzed '||v_tabs.owner||'.'||v_tabs.table_name||'... ');
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.put_line ('Exception on analysis of '||v_tabs.table_name||'!');
                DBMS_OUTPUT.put_line (SUBSTR(SQLERRM,1,255));
        END;
    END LOOP;
END;
/
CREATE OR REPLACE Procedure DB_Maintenance_Weekly  is
  sql_stmt    varchar2(1024);
  v_sess_user varchar2(30);
begin
    select sys_context('USERENV','SESSION_USER') into v_sess_user
      from dual ;
    --Analyze all Tables
    FOR tab_rec IN (SELECT table_name
                       FROM all_tables
                       WHERE owner = v_sess_user
                         and table_name not like 'TEMP_%') LOOP
        sql_stmt := 'BEGIN dbms_stats.gather_table_stats
                     (ownname  => :1,
                      tabname    => :2,
                      partname   => null,
                      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                      degree => 3 ,
                      cascade => true);  END;'  ;
      EXECUTE IMMEDIATE sql_stmt USING v_sess_user, tab_rec.table_name ;
    END LOOP;
exception
    when others then
        NULL ;
end;
/
Working with UNDO Parameters
When you are working with an UNDO tablespace, there are two important things to consider: the size allocated to the UNDO tablespace, and the value of the UNDO_RETENTION parameter.

To get information of your current settings you can use the following query:
set serveroutput on
declare
 tsn    VARCHAR2(40);
 tss    NUMBER(10);
 aex    BOOLEAN;
 unr    NUMBER(5);
 rgt    BOOLEAN;
 retval BOOLEAN;
 v_undo_size NUMBER(10);
begin
  select sum(a.bytes)/1024/1024 into v_undo_size
     from v$datafile a, v$tablespace b, dba_tablespaces c
     where c.contents = 'UNDO'
       and c.status = 'ONLINE'
       and b.name = c.tablespace_name
       and a.ts# = b.ts#;

  retval := dbms_undo_adv.undo_info(tsn, tss, aex, unr, rgt);
  dbms_output.put_line('UNDO Tablespace is         : ' || tsn);
  dbms_output.put_line('UNDO Tablespace size is    : ' || TO_CHAR(v_undo_size) || ' MB');

  IF aex THEN
    dbms_output.put_line('Undo Autoextend is set to  : TRUE');
  ELSE
    dbms_output.put_line('Undo Autoextend is set to  : FALSE');
  END IF;

  dbms_output.put_line('Undo Retention is          : ' || TO_CHAR(unr));

  IF rgt THEN
    dbms_output.put_line('Undo Guarantee is set to   : TRUE');
  ELSE
    dbms_output.put_line('Undo Guarantee is set to   : FALSE');
  END IF;
end;
/
Sample output:
UNDO Tablespace is         : UNDOTBS1
UNDO Tablespace size is    : 925 MB
Undo Autoextend is set to  : TRUE
Undo Retention is          : 900
Undo Guarantee is set to   : FALSE

You can choose to allocate a specific size for the UNDO tablespace and then set the UNDO_RETENTION parameter to an optimal value according to the UNDO size and the database activity. If your disk space is limited and you do not want to allocate more space than necessary to the UNDO tablespace, this is the way to proceed.  If you are not limited by disk space, then it would be better to choose the UNDO_RETENTION time that is best for you (for FLASHBACK, etc.). Allocate the appropriate size to the UNDO tablespace according to the database activity.
This tip helps you get the information you need, whichever method you choose.
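The sizing arithmetic behind this reduces to: undo bytes needed = UNDO_RETENTION x DB_BLOCK_SIZE x peak undo blocks per second, and the supportable retention is the inverse. A Python sketch with invented rates:

```python
def optimal_undo_mb(retention_s, block_size, undo_blocks_per_sec):
    """UNDO tablespace MB needed to honor a given UNDO_RETENTION."""
    return retention_s * block_size * undo_blocks_per_sec / (1024 * 1024)

def optimal_retention_s(undo_mb, block_size, undo_blocks_per_sec):
    """UNDO_RETENTION (seconds) supportable by a given UNDO tablespace size."""
    return undo_mb * 1024 * 1024 / (block_size * undo_blocks_per_sec)

# e.g. 900s retention, 8K blocks, a peak of 10 undo blocks/s (invented):
print(round(optimal_undo_mb(900, 8192, 10)), "MB needed")
print(round(optimal_retention_s(925, 8192, 10)), "seconds supportable")
```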

set serverout on size 1000000
set feedback off
set heading off
set lines 132
declare
  cursor get_undo_stat is
     select d.undo_size/(1024*1024) "C1", substr(e.value,1,25)    "C2",
            (to_number(e.value) * to_number(f.value) * g.undo_block_per_sec) / (1024*1024) "C3",
            round((d.undo_size / (to_number(f.value) * g.undo_block_per_sec)))             "C4"
       from (select sum(a.bytes) undo_size
               from v$datafile a, v$tablespace b, dba_tablespaces c
               where c.contents = 'UNDO'
                 and c.status = 'ONLINE'
                 and b.name = c.tablespace_name
                 and a.ts# = b.ts#)  d,
          v$parameter e, v$parameter f,
          (select max(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec from v$undostat)  g
        where e.name = 'undo_retention'
          and f.name = 'db_block_size';
begin
  dbms_output.put_line(chr(10)||chr(10)||chr(10)||chr(10) || 'To optimize UNDO you have two choices :');
  dbms_output.put_line('====================================================' || chr(10));

  for rec1 in get_undo_stat loop
      dbms_output.put_line('A) Adjust UNDO tablespace size according to UNDO_RETENTION :' || chr(10));
      dbms_output.put_line(rpad('ACTUAL UNDO SIZE ',60,'.')|| ' : ' || TO_CHAR(rec1.c1,'999999') || ' MB');
      dbms_output.put_line(rpad('OPTIMAL UNDO SIZE WITH ACTUAL UNDO_RETENTION (' || ltrim(TO_CHAR(rec1.c2,'999999')) || ' SECONDS) ',60,'.') || ' : ' || TO_CHAR(rec1.c3,'999999') || ' MB' || chr(10));
      dbms_output.put_line('B) Adjust UNDO_RETENTION according to UNDO tablespace size :' || chr(10));
      dbms_output.put_line(rpad('ACTUAL UNDO RETENTION ',60,'.') || ' : ' || TO_CHAR(rec1.c2,'999999') || ' SECONDS');
      dbms_output.put_line(rpad('OPTIMAL UNDO RETENTION WITH ACTUAL UNDO SIZE (' || ltrim(TO_CHAR(rec1.c1,'999999')) || ' MEGS) ',60,'.') || ' : ' || TO_CHAR(rec1.c4,'999999') || ' SECONDS');
  end loop;
end;
/

To optimize UNDO you have two choices :

A) Adjust UNDO tablespace size according to UNDO_RETENTION :
ACTUAL UNDO SIZE ........................................... :     925 MB

B) Adjust UNDO_RETENTION according to UNDO tablespace size :
ACTUAL UNDO RETENTION ...................................... :     900 SECONDS

Undo Segments

With the following query, we can check the segments of the UNDO:
SELECT active.active, unexpired.unexpired, expired.expired
   FROM (SELECT Sum(bytes / 1024 / 1024) AS unexpired
           FROM dba_undo_extents
           WHERE status = 'UNEXPIRED') unexpired,
        (SELECT Sum(bytes / 1024 / 1024) AS expired
           FROM dba_undo_extents tr
           WHERE status = 'EXPIRED') expired,
        (SELECT CASE
              WHEN Count(status) = 0 THEN 0
              ELSE Sum(bytes / 1024 / 1024)
            END AS active
           FROM dba_undo_extents
           WHERE status = 'ACTIVE') active;

    ACTIVE  UNEXPIRED    EXPIRED
---------- ---------- ----------
         0         10 100.923077

So when you execute an INSERT, you start using undo segments, and those stay in the ACTIVE state until you issue the COMMIT.
Once the COMMIT is issued, they are in UNEXPIRED status (still occupying the UNDO tablespace) until they reach the "undo_retention" time.
Once that time has elapsed, they move to EXPIRED status.

Monitoring Transactions in UNDO
It's possible to monitor the transactions that are taking UNDO segments with the following query:
SELECT v$transaction.status AS status_transaccion, start_time, logon_time, blocking_session_status,
       schemaname, machine, program, v$session.module, v$sqlarea.sql_text, serial#, sid, username,
       v$session.status AS status_sesion, v$session.sql_id, prev_sql_id
FROM v$transaction
INNER JOIN v$session ON v$transaction.ses_addr = v$session.saddr
LEFT JOIN v$sqlarea ON v$session.sql_id = v$sqlarea.sql_id;

If there are ACTIVE transactions (uncommitted transactions) it will show:
START_TIME 08/16/11 09:08:40
LOGON_TIME 8/16/2011 9:08:15
PROGRAM sqlplus.exe
SQL_TEXT insert into state values (1111, 'AAA');
SERIAL# 1489
SQL_ID 9babjv8yq8ru3
PREV_SQL_ID 9babjv8yq8ru3

Where each value means:
START_TIME = Date and time when the transaction started.
LOGON_TIME = Date and time when the session logged on.
BLOCKING_SESSION_STATUS = Says whether this session is blocking another session.
SCHEMANAME = Schema that executed the instruction.
MACHINE = Machine name that executed the instruction.
PROGRAM = Program name that executed the instruction.
MODULE = Module that executed the instruction.
SQL_TEXT = Executed instruction.
SERIAL# = Serial number of the session that executed the instruction.
SID = ID of the session that executed the instruction.
USERNAME = User that executed the instruction.
STATUS_SESION = Status of the session that executed the instruction: ACTIVE if it is currently performing an action, INACTIVE if it is not.
SQL_ID = Internal ID of the executed instruction.
PREV_SQL_ID = ID of the instruction executed previous to the current one.

Rebuild Indexes

The following script analyzes each index and rebuilds it when its height exceeds 3 or more than 20% of its leaf rows are deleted:

set serveroutput on
declare
v_MaxHeight integer := 3;
v_MaxLeafsDeleted integer := 20;
v_Count integer := 0;

--Cursor to Manage NON-Partitioned Indexes
cursor cur_Global_Indexes is
select index_name, tablespace_name
from user_indexes
where partitioned = 'NO';

--Cursor to Manage Current Index
cursor cur_IndexStats is
select name, height, lf_rows as leafRows, del_lf_rows as leafRowsDeleted
from index_stats;
v_IndexStats cur_IndexStats%rowtype;

--Cursor to Manage Partitioned Indexes
cursor cur_Local_Indexes is
select index_name, partition_name, tablespace_name
from user_ind_partitions
where status = 'USABLE';

begin
/* Global or Standard Indexes Section */
for v_IndexRec in cur_Global_Indexes loop
begin
dbms_output.put_line('before analyze ' || v_IndexRec.index_name);
execute immediate 'analyze index ' || v_IndexRec.index_name || ' validate structure';
dbms_output.put_line('After analyze ');
open cur_IndexStats;
fetch cur_IndexStats into v_IndexStats;
if cur_IndexStats%found then
if (v_IndexStats.height > v_MaxHeight) OR
(v_IndexStats.leafRows > 0 AND v_IndexStats.leafRowsDeleted > 0 AND
(v_IndexStats.leafRowsDeleted * 100 / v_IndexStats.leafRows) > v_MaxLeafsDeleted) then

dbms_output.put_line('Rebuilding index ' || v_IndexRec.index_name || ' with '
|| to_char(v_IndexStats.height) || ' height and '
|| to_char(trunc(v_IndexStats.leafRowsDeleted * 100 / v_IndexStats.leafRows)) || ' % LeafRows');
begin
--- Commented lines were needed for Oracle 9i
--- On 10g Oracle now automatically collects statistics during index creation and rebuild
-- execute immediate 'alter index ' || v_IndexRec.index_name ||
-- ' rebuild' ||
-- ' parallel nologging compute statistics' ||
-- ' tablespace ' || v_IndexRec.tablespace_name;
execute immediate 'alter index ' || v_IndexRec.index_name ||
' rebuild parallel nologging tablespace ' || v_IndexRec.tablespace_name;
v_Count := v_Count + 1;
exception
when OTHERS then
dbms_output.put_line('The index ' || v_IndexRec.index_name || ' WAS NOT rebuilt');
end;
end if;
end if;
close cur_IndexStats;
exception
when OTHERS then
dbms_output.put_line('The index ' || v_IndexRec.index_name || ' WAS NOT ANALYZED');
end;
end loop;

dbms_output.put_line('Global or Standard Indexes Rebuilt: ' || to_char(v_Count));
v_Count := 0;

/* Local indexes Section */
for v_IndexRec in cur_Local_Indexes loop
execute immediate 'analyze index ' || v_IndexRec.index_name ||
' partition (' || v_IndexRec.partition_name ||
') validate structure';

open cur_IndexStats;
fetch cur_IndexStats into v_IndexStats;
if cur_IndexStats%found then
if (v_IndexStats.height > v_MaxHeight) OR
(v_IndexStats.leafRows > 0 and v_IndexStats.leafRowsDeleted > 0 AND
(v_IndexStats.leafRowsDeleted * 100 / v_IndexStats.leafRows) > v_MaxLeafsDeleted) then

v_Count := v_Count + 1;
dbms_output.put_line('Rebuilding Index ' || v_IndexRec.index_name || '...');
/* execute immediate 'alter index ' || v_IndexRec.index_name ||
' rebuild' ||
' partition ' || v_IndexRec.partition_name ||
' parallel nologging compute statistics' ||
' tablespace ' || v_IndexRec.tablespace_name; */
end if;
end if;
close cur_IndexStats;
end loop;
dbms_output.put_line('Local Indexes Rebuilt: ' || to_char(v_Count));
end;
/

Make a Script

The drawback you will see when working with index_stats is that it only holds one row at a time. So we will first create a table to hold the results from this view:

create table t_ind_used_size
(owner varchar2(30)
,name varchar2(30)
,btree_space number(12)
,pct_used number(3)
,del_len number(12)
,dt date)
tablespace xxx
storage (initial 256k next 256k pctincrease 0) pctused 80 pctfree 0;

Now I know that what I want to do is to check each index for a given owner:

declare
v_stmt varchar2(100);
cursor c1 is
select owner,index_name from dba_indexes where owner = 'JOHN';
begin
for line in c1 loop
v_stmt := 'analyze index '||line.owner||'.'||line.index_name||
' validate structure';
execute immediate v_stmt;
insert into t_ind_used_size
select line.owner,name,btree_space,pct_used,del_lf_rows_len,sysdate
from index_stats;
if mod(c1%rowcount,100)=0 then
commit;
end if;
end loop;
commit;
end;
/

Our cursor gives us all of the indexes for this owner. You can also take the "where" clause off the cursor and get all indexes. For each index, we create the analyze statement and then execute it using dynamic SQL. The results from the analyze statement are then put into our table for reference later. Remember that "analyze ... validate structure" will lock the index, so be sure to run this operation during off-hours. I have 372 indexes taking 2564M of space, and this script takes 7 minutes 40 seconds to complete. Not too bad.

Now Let's Use the Information

So we have gathered all of this information. We can just look at it to get an overview with:

variable block_size number;
exec select value into :block_size from v$parameter where name = 'db_block_size';

select * from t_ind_used_size
where btree_space > :block_size order by pct_used DESC;

Notice that I am only interested in the indexes that are taking more than one block of space. Any indexes currently taking one block cannot be improved, no matter what percentage is being used. I have 176 rows from this query with the indexes making the least efficient use of space at the bottom of the result set.

You will notice that we have the date in the table, too, so we can compare over time with the following:

select a.owner,, a.dt, a.pct_used, b.dt, b.pct_used
from t_ind_used_size a, t_ind_used_size b
where a.owner = b.owner
and =
and a.pct_used > 1.1 * b.pct_used
and a.dt >= (b.dt - 7);

This would show us the indexes that have dropped their percent used by more than 10 percent during the last seven days. Here we are assuming that you would run this periodically (daily or weekly).
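As a sanity check of that comparison logic, here is the same "dropped by more than 10 percent within seven days" rule restated in plain Python. This is a sketch with invented sample rows, not data from the article:

```python
from datetime import date, timedelta

# Sample snapshot rows (owner, name, pct_used, snapshot date) -- invented.
snapshots = [
    ('JOHN', 'ORDERS_PK',   92, date(2011, 8, 1)),
    ('JOHN', 'ORDERS_PK',   71, date(2011, 8, 8)),   # dropped from 92 to 71
    ('JOHN', 'CUSTOMER_PK', 88, date(2011, 8, 1)),
    ('JOHN', 'CUSTOMER_PK', 86, date(2011, 8, 8)),   # only a small drop
]

def shrinking_indexes(rows, window_days=7, factor=1.1):
    """Pair an older snapshot (a) with a newer one (b) of the same index
    and keep pairs where the older pct_used exceeds the newer by `factor`."""
    hits = []
    for (o1, n1, p1, d1) in rows:          # a: older snapshot
        for (o2, n2, p2, d2) in rows:      # b: newer snapshot
            if ((o1, n1) == (o2, n2)
                    and p1 > factor * p2
                    and timedelta(0) < d2 - d1 <= timedelta(days=window_days)):
                hits.append((o1, n1, p1, p2))
    return hits

print(shrinking_indexes(snapshots))   # [('JOHN', 'ORDERS_PK', 92, 71)]
```

Only ORDERS_PK qualifies: 92 is more than 1.1 times 71, while CUSTOMER_PK's small drop is filtered out.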

I am not so much interested in the change over time than in the use of space right now. We want to reclaim space, so we will rebuild all of the indexes that have too much unused space. For "too much," I have chosen indexes that are using less than 75 percent of their space held or that have more than one block of delete space that is unusable.

My "where" clause for this is:

select count(1)
from t_ind_used_size a, dba_indexes b
where btree_space > :block_size
and (pct_used < 75 or del_len > :block_size)
and a.owner = b.owner
and = b.index_name;


I join my table with dba_indexes so I can get more information on how the index is created. We now have 46 indexes that are candidates for a rebuild.
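The "too much unused space" rule can be restated as a small predicate. This is a sketch, with an 8K block size assumed as the default:

```python
BLOCK_SIZE = 8192   # assumed db_block_size

def needs_rebuild(btree_space, pct_used, del_len, block_size=BLOCK_SIZE):
    """Rebuild when the index spans more than one block AND it is either
    under 75% used or holds more than a block's worth of deleted entries."""
    if btree_space <= block_size:
        return False            # a one-block index cannot be improved
    return pct_used < 75 or del_len > block_size

print(needs_rebuild(2_000_000, 60, 0))        # True: poorly packed
print(needs_rebuild(2_000_000, 90, 20_000))   # True: lots of deleted space
print(needs_rebuild(8192, 40, 0))             # False: only one block
```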

For each index we want to rebuild it, analyze it to get new statistics, and then remove the index from the t_ind_used_size table so we don't do it again. It would take forever to alter each one manually, so we make a script to do it for us:

select 'alter index '||a.owner||'.'||||
       ' rebuild tablespace '||b.tablespace_name||chr(10)||
       'storage (initial '||b.initial_extent||
       ' next '||b.next_extent||
       ' pctincrease '||b.pct_increase||') pctfree 0 nologging;'||chr(10)||
       'analyze index '||a.owner||'.'||||
       ' compute statistics;'||chr(10)||
       'delete t_ind_used_size where name = '''||||
       ''' and owner = '''||a.owner||''';'
from t_ind_used_size a, dba_indexes b
where a.btree_space > :block_size
and (a.pct_used < 75 or a.del_len > :block_size)
and a.owner = b.owner and = b.index_name
order by a.pct_used;

This gives us 46 rows like:

alter index JOHN.APPTHIST_CURR_STAT_FK rebuild tablespace LOCAL1M_IDX
storage (initial 1048576 next 1048576 pctincrease 0) pctfree 0 nologging;
analyze index JOHN.APPTHIST_CURR_STAT_FK compute statistics;
delete t_ind_used_size where name = 'APPTHIST_CURR_STAT_FK' and owner = 'JOHN';

So you see, we will rebuild in the same tablespace, analyze, delete the row, and move on to the next.

Problem Solved

This takes care of the two problems I started with. I know when to rebuild my large indexes based on the percent used and delete space. If there is only lookup activity against a large index, I'll never rebuild it once it is the right size.

I can also take care of those active indexes that are holding deleted space that is unusable.

You can pick any numbers for your limits but a block of deleted space and 75 percent usage seemed reasonable to me. I don't want to be rebuilding all indexes every week.

This script can, of course, just be added to your weekly processing. Just spool the output and then execute the created script. With this in place, when you again start getting tight on space, you know you really are tight.

Hints

You should first get the explain plan of your SQL and determine what changes can be made so that the code performs well without using hints, if possible. However, Oracle hints such as ORDERED, LEADING, INDEX, FULL, and the various AJ and SJ hints can tame a wild optimizer and give you optimal performance.

Some suggestions:
- Use ALIASES for the table names in the hints.
- Ensure the tables contain up-to-date statistics.
- Syntax: /*+ HINT  HINT  ... */  (in PL/SQL the space between the '+' and the first letter of the hint is vital, so /*+ ALL_ROWS */ is fine but /*+ALL_ROWS */ will cause problems).

Here is a list of all the Hints:

Optimizer Mode Oracle Hints:

+
Must be immediately after the comment indicator; tells Oracle this is a list of hints.

ALL_ROWS
Use the cost-based approach for best throughput.

CHOOSE
The default; if statistics are available Oracle will use the cost-based approach, if not, rule-based.

FIRST_ROWS
Use the cost-based approach for best response time.

RULE
Use the rule-based approach; this cancels any other hints specified for this statement.

Access Method Oracle Hints:



CLUSTER(table)
This tells Oracle to do a cluster scan to access the table.

FULL(table)
This tells the optimizer to do a full scan of the specified table.

HASH(table)
Tells Oracle to explicitly choose the hash access method for the table.

HASH_AJ(table)
Transforms a NOT IN subquery to a hash anti-join.

ROWID(table)
Forces a rowid scan of the specified table.

INDEX(table [index])
Forces an index scan of the specified table using the specified index(es). If a list of indexes is specified, the optimizer chooses the one with the lowest cost. If no index is specified, then the optimizer chooses the available index for the table with the lowest cost.

INDEX_ASC(table [index])
Same as INDEX, only performs an ascending search of the index chosen; this is functionally identical to the INDEX hint.

INDEX_DESC(table [index])
Same as INDEX, except performs a descending search. If more than one table is accessed, this is ignored.

INDEX_COMBINE(table index)
Combines the bitmapped indexes on the table if the cost shows that to do so would give better performance.

INDEX_FFS(table index)
Performs a fast full index scan rather than a table scan.

MERGE_AJ(table)
Transforms a NOT IN subquery into a merge anti-join.

AND_EQUAL(table index1 [index2 ...])
This hint causes a merge on several single-column indexes. Two must be specified; up to five can be.

NL_AJ
Transforms a NOT IN subquery into a NL anti-join (nested loop).

HASH_SJ(t1, t2)
Inserted into the EXISTS subquery; this converts the subquery into a special type of hash join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.

MERGE_SJ(t1, t2)
Inserted into the EXISTS subquery; this converts the subquery into a special type of merge join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.

NL_SJ
Inserted into the EXISTS subquery; this converts the subquery into a special type of nested loop join between t1 and t2 that preserves the semantics of the subquery. That is, even if there is more than one matching row in t2 for a row in t1, the row in t1 is returned only once.

Oracle Hints for join orders and transformations:



ORDERED
This hint forces tables to be joined in the order specified. If you know table X has fewer rows, then ordering it first may speed execution in a join.

STAR
Forces the largest table to be joined last using a nested loops join on the index.

STAR_TRANSFORMATION
Makes the optimizer use the best plan in which a star transformation is used.

FACT(table)
When performing a star transformation, use the specified table as a fact table.

NO_FACT(table)
When performing a star transformation, do not use the specified table as a fact table.

PUSH_SUBQ
This causes nonmerged subqueries to be evaluated at the earliest possible point in the execution plan.

REWRITE(mview)
If possible, forces the query to use the specified materialized view; if no materialized view is specified, the system chooses what it calculates is the appropriate view.

NOREWRITE
Turns off query rewrite for the statement; use it when the data returned must be current and can't come from a materialized view.

USE_CONCAT
Forces combined OR conditions and IN processing in the WHERE clause to be transformed into a compound query using the UNION ALL set operator.

NO_MERGE(table)
Prevents Oracle from merging the specified view into the surrounding query.

NO_EXPAND
Prevents OR and IN processing expansion.

Oracle Hints for Join Operations:


USE_HASH(table)
This causes Oracle to join each specified table with another row source with a hash join.

USE_NL(table)
This operation forces a nested loop using the specified table as the controlling table.

USE_MERGE(table)
This operation forces a sort-merge-join operation of the specified tables.

DRIVING_SITE(table)
The hint forces query execution to be done at a different site than that selected by Oracle. This hint can be used with either rule-based or cost-based optimization.

LEADING(table)
The hint causes Oracle to use the specified table as the first table in the join order.

Oracle Hints for Parallel Operations:



APPEND / NOAPPEND
This specifies that data is to be (or not to be) appended to the end of the segment rather than into existing free space. Use only with INSERT commands.

NOPARALLEL(table)
This specifies the operation is not to be done in parallel.

PARALLEL(table, instances)
This specifies the operation is to be done in parallel.

PARALLEL_INDEX
Allows parallelization of a fast full index scan on any index.

Other Oracle Hints:



CACHE(table)
Specifies that the blocks retrieved for the table in the hint are placed at the most recently used end of the LRU list when the table is full-table scanned.

NOCACHE(table)
Specifies that the blocks retrieved for the table in the hint are placed at the least recently used end of the LRU list when the table is full-table scanned.

APPEND / NOAPPEND
For insert operations, will append (or not append) data at the HWM of the table.

UNNEST
Turns on the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to FALSE.

NO_UNNEST
Turns off the UNNEST_SUBQUERY option for the statement if the UNNEST_SUBQUERY parameter is set to TRUE.

PUSH_PRED
Pushes the join predicate into the view.


Nologging

The NOLOGGING clause only affects direct-path INSERT and direct-load (SQL*Loader) operations; all other DML (conventional insert, update, delete) is always logged to the redo logs. So you should be able to recover those changes even if the table is in NOLOGGING mode.

Although you can set the NOLOGGING attribute for a table, partition, index, or tablespace, NOLOGGING mode does not apply to every operation performed on the schema object for which you set the NOLOGGING attribute. Only the following operations can make use of the NOLOGGING option:

- direct load (SQL*Loader)
- direct-path INSERT (serial or parallel)
- CREATE TABLE ... AS SELECT
- CREATE INDEX
- ALTER TABLE ... MOVE PARTITION
- ALTER TABLE ... SPLIT PARTITION
- ALTER INDEX ... SPLIT PARTITION
- ALTER INDEX ... REBUILD

All of these SQL statements can be parallelized. They can execute in LOGGING or NOLOGGING mode for both serial and parallel execution.
Other SQL statements (such as UPDATE, DELETE, conventional-path INSERT, and various DDL statements not listed above) are unaffected by the NOLOGGING attribute of the schema object. NOLOGGING is used mainly for SQL*Loader direct loads and direct-path inserts. If you are not performing either of these (or the operations listed above), then the operation you perform WILL be logged.

If you performed any of those operations you should backup your database ASAP.
If you performed any of those operations the steps to recover a standby database would be:
1. Stop recovery on the standby.
2. Put the datafile in backup mode, back it up, and ftp the file to the standby host (in binary mode).
3. Put the Standby in Managed Recovery Mode:
On the Standby:
SQL> alter database recover managed standby database disconnect;
if you use RMAN:
1. Stop recovery on the standby.
2. Connect to the target and standby:
rman target / auxiliary sys/change_on_install@standby
3. Restore and recover the file with something like this:
run {
set newname for datafile 8 to
restore datafile 8;
set until time 'Oct 24 2000 08:00:00';
clone database; }
4. Put the standby back into recovery mode.

Anyway, you can run the following SQL*Plus scripts as the owner of the objects to switch those tables, indexes, or tablespaces back to LOGGING mode:
set heading off
set feedback off
set pagesize 200
spool tables_logging.sql
select 'alter table ' || table_name || ' logging;'
  from user_tables
    where logging = 'NO'
    and temporary = 'N';
spool off
set heading off
set feedback off
set pagesize 200
spool indexes_logging.sql
select 'alter index ' || index_name || ' logging;'
  from user_indexes
    where logging = 'NO';
spool off

set heading off
set feedback off
set pagesize 200
spool tablespace_logging.sql
select 'alter tablespace ' || tablespace_name || ' logging;'
  from dba_tablespaces
    where logging = 'NOLOGGING';
spool off

CBO Options

The most important of these parameters is optimizer_index_cost_adj, and its default setting of 100 is incorrect for most Oracle systems.  For OLTP systems, resetting this parameter to a smaller value (between 10 and 30) may result in huge performance gains!
If you are seeing slow performance because the CBO first_rows optimizer mode is favoring too many full-table scans, you can reset the optimizer_index_cost_adj parameter to immediately tune all of the SQL in your database to favor index scans over full-table scans. This is a "silver bullet" that can improve the performance of an entire database in cases where the database is OLTP and you have verified that the full-table scan costing is too low.
It can also be enabled at the session level by using the alter session set optimizer_index_cost_adj = nn syntax. The optimizer_index_cost_adj parameter is a great approach to whole-system SQL tuning, but you will need to evaluate the overall effect by slowly resetting the value down from 100 and observing the percentage of full-table scans. You can also slowly bump down the value of optimizer_index_cost_adj each time you bounce the database and then either use the access.sql scripts or reexamine SQL from the STATSPACK stats$sql_summary table to see the net effect of index scans on the whole database.
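A hedged numeric sketch of what this parameter does. The cost figures below are invented, and this illustrates the idea rather than Oracle's actual costing formula:

```python
def adjusted_index_cost(base_index_cost, optimizer_index_cost_adj=100):
    """Scale the optimizer's index-access cost by the parameter (a percent)."""
    return base_index_cost * optimizer_index_cost_adj / 100.0

# Invented costs: index scan estimated at 80, full-table scan at 60.
index_cost, fts_cost = 80, 60

# At the default of 100 the full-table scan looks cheaper...
print(adjusted_index_cost(index_cost, 100))   # 80.0 -> FTS chosen
# ...but at 30 the index scan is costed at 24, so the plan flips.
print(adjusted_index_cost(index_cost, 30))    # 24.0 -> index chosen
```

Lowering the parameter makes every index path look proportionally cheaper, which is why plans across the whole database can flip from full-table scans to index scans at once.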

We have seen that there are two assumptions built into the optimizer that are not very sensible:
- A single block read costs just as much as a multi-block read (not really likely, particularly when running on file systems without direct I/O).
- A block access will be a physical disk read (so what is the buffer cache for?).

Set the optimizer_index_caching to something in the region of the "buffer cache hit ratio." (You have to make your own choice about whether this should be the figure derived from the default pool, keep pool, or both).

Another method to derive a value for it:
col a1 head "avg. wait time|(db file sequential read)"
col a2 head "avg. wait time|(db file scattered read)"
col a3 head "new setting for|optimizer_index_cost_adj"

select a.average_wait a1,
       b.average_wait a2,
       round( ((a.average_wait/b.average_wait)*100) ) a3
from  (select d.kslednam EVENT,
              s.kslestim / (10000 * s.ksleswts) AVERAGE_WAIT
       from x$kslei s, x$ksled d
       where s.ksleswts != 0 and s.indx = d.indx) a,
      (select d.kslednam EVENT,
              s.kslestim / (10000 * s.ksleswts) AVERAGE_WAIT
       from x$kslei s, x$ksled d
       where s.ksleswts != 0 and s.indx = d.indx) b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';

Some results I have obtained from various combinations of hardware platform and IO sub-system.

         avg. wait time           avg. wait time           new setting for
(db file sequential read) (db file scattered read) optimizer_index_cost_adj
------------------------- ------------------------ ------------------------
               .171659257               3.33033582                        5
                   .13254                  1.12365                       12
               .017605522               .104148241                       17
               1.29639067               2.06954043                       63
               .535133533               .397919802                      134
               .940889054               .509830001                      185
               .537904057               .145183814                      370

In real life, this metric is only good enough to give a very rough indicator as to how fast the IO sub-system is. New-value settings below 100 indicate slow disks, anything above 100 might indicate the presence of fast or cache-backed disks (or abuse of the UNIX file system cache). You have to exaggerate these results for it to have any real influence on the CBO. For example, if the above query suggests a new setting of 63%, you may have to go as low as 1% or 2% before the CBO will actually use an index. Conversely, a suggestion of 370% may need to be bumped up to around 3700% before a full-table or index fast-full scan is favoured.
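The arithmetic behind the query can be checked in plain Python; the sample inputs are the first and last rows of the output above:

```python
def suggested_cost_adj(seq_avg_wait, scattered_avg_wait):
    """Ratio of single-block ('db file sequential read') to multi-block
    ('db file scattered read') average wait times, as a percentage."""
    return round(seq_avg_wait / scattered_avg_wait * 100)

# First row: multi-block reads are much slower than single-block reads.
print(suggested_cost_adj(0.171659257, 3.33033582))    # 5
# Last row: multi-block reads are faster (fast or cache-backed disks).
print(suggested_cost_adj(0.537904057, 0.145183814))   # 370
```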

Optimizer Modes
In Oracle there are four optimizer modes, all determined by the value of the optimizer_mode parameter.  The values are rule, choose, all_rows and first_rows.  The rule and choose modes reflect the obsolete rule-based optimizer so we will focus on the CBO modes.
The optimizer mode can be set at the system-wide level, for an individual session, or for a specific SQL statement:
alter system set optimizer_mode=first_rows_10;
alter session set optimizer_goal = all_rows;
select /*+ first_rows(100) */ * from student;

Oracle offers several optimizer modes that allow you to choose your own definition of the “best” execution plan.

While the optimizer_mode is the single most important factor in invoking the cost-based optimizer, there are other parameters that influence the CBO behavior. 

Using histograms with the CBO
In some cases, the distribution of values within an index will affect the CBO's decision to use an index vs. perform a full-table scan.  This happens when a value in the where clause has a disproportional number of rows, making a full-table scan cheaper than index access.
A column histogram should only be created when we have a highly-skewed column, where some values have a disproportional number of rows.  In the real world, this is quite rare, and one of the most common mistakes with the CBO is the unnecessary introduction of histograms into the CBO statistics.  The histogram signals the CBO that the column is not linearly distributed, and the CBO will peek at the literal value in the SQL where clause and compare that value to the histogram buckets in the histogram statistics.
As a general rule, histograms are used to predict the cardinality and the number of rows returned in the result set.  For example, assume that we have a product_type index and 70% of the values are for the HARDWARE type.  Whenever SQL with where product_type=’HARDWARE’ is specified, a full-table scan is the fastest execution plan, while a query with where product_type=’SOFTWARE’ would be fastest using index access.
Because histograms add additional overhead to the parsing phase of SQL, they should be avoided unless they are required for a faster CBO execution plan. 
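A hedged sketch of why the histogram changes the plan: without one, the optimizer's cardinality estimate assumes a uniform distribution (rows divided by distinct values); with a frequency histogram it can use the real count per value. The table figures below are invented:

```python
total_rows = 1000
value_counts = {'HARDWARE': 700, 'SOFTWARE': 200, 'SERVICE': 100}

def estimate_without_histogram(total_rows, num_distinct):
    # Uniform assumption: every value looks equally popular.
    return total_rows / num_distinct

def estimate_with_histogram(counts, value):
    # A frequency histogram records the real row count per value.
    return counts[value]

print(round(estimate_without_histogram(total_rows, len(value_counts))))  # 333
print(estimate_with_histogram(value_counts, 'HARDWARE'))  # 700 -> favors FTS
print(estimate_with_histogram(value_counts, 'SERVICE'))   # 100 -> favors index
```

With the histogram, `product_type = 'HARDWARE'` is correctly seen as most of the table (full-table scan), while `'SERVICE'` is seen as selective (index access).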

So how do we find those columns that are appropriate for histograms?  One exciting feature of dbms_stats is the ability to automatically look for columns that should have histograms, and create the histograms.  Again, remember that multi-bucket histograms add a huge parsing overhead to SQL statements, and histograms should ONLY be used when the SQL will choose a different execution plan based upon the column value.
To aid in intelligent histogram generation, Oracle uses the method_opt parameter of dbms_stats.  There are also important new options within the method_opt clause, namely skewonly, repeat and auto.
      method_opt=>'for all columns size skewonly'
      method_opt=>'for all columns size repeat'
      method_opt=>'for all columns size auto'

Let’s take a close look at each method option.

The first is the “skewonly” option, which is very time-intensive because it examines the distribution of values for every column within every index.  If dbms_stats discovers an index whose columns are unevenly distributed, it will create histograms for that index to aid the cost-based SQL optimizer in making a decision about index vs. full-table scan access.  For example, if an index has one column value that appears in 50% of the rows, a full-table scan is faster than an index scan to retrieve those rows.

Histograms are also used with SQL that has bind variables and SQL with cursor_sharing enabled.  In these cases, the CBO determines if the column value could affect the execution plan, and if so, replaces the bind variable with a literal and performs a hard parse.

begin
  dbms_stats.gather_schema_stats(
    ownname          => 'SCOTT',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size skewonly',
    degree           => 7);
end;
/
The repeat option gathers histograms only on those columns that already have them.  The auto option is used when monitoring is implemented (alter table xxx monitoring;) and creates histograms based upon data distribution and the manner in which the column is accessed by the application (e.g. the workload on the column as determined by monitoring).  Using method_opt=>'auto' is similar to using gather auto in the option parameter of dbms_stats.

begin
  dbms_stats.gather_schema_stats(
    ownname          => 'SCOTT',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size auto',
    degree           => 7);
end;
/

Improving Performance By Using IPC Connections To Local Databases
"When a process is on the same machine as the server, use the IPC protocol for connectivity instead of TCP. Inner Process Communication on the same machine does not have the overhead of packet building and deciphering that TCP has. I've seen a SQL job that runs in 10 minutes using TCP on a local machine run as fast as one minute using an IPC connection. The difference in time is most dramatic when the Oracle process has to send and/or receive large amounts of data to and from the database. For example, a SQL*Plus connection that counts the number of rows of some tables will run about the same amount of time, whether the database connection is made via IPC or TCP. But if the SQL*Plus connection spools much data to a file, the IPC connection will often be much faster -- depending on the data transmitted and the machine workload on the TCP stack.
You can set up your tnsnames file like this on a local machine, so that local connections try IPC first and fall back to TCP second.
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = IPCKEY))   # or (KEY = PROD), the SID
      (ADDRESS = (PROTOCOL = TCP)(HOST = your_host)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = PROD)
    )
  )
To see if the connections are being made via IPC or TCP, turn on listener logging and review the listener log file."
Note 207434.1

Space Used per Block
Remember each INITRANS takes 24 bytes in a block. Approximately 120 bytes are needed for block header info.

Available space for new insert = DB_BLOCK_SIZE - ((DB_BLOCK_SIZE - header info) * PCTFREE ) - (INITRANS * 24)

With the following (BAD) values:
Assume your block size = 8192.
The PCTFREE 60 will simply take away 4843 bytes ((8192 -120)*0.60).
Then INITRANS 90 will consume 2160 bytes (90*24).
Available space for new insert = 8192 - 4843 - 2160 = 1189.
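The same worked example, as a checkable calculation (using the 120-byte header and 24-bytes-per-INITRANS-slot rule-of-thumb figures from the text):

```python
import math

def available_for_insert(db_block_size, pctfree, initrans,
                         header_bytes=120, bytes_per_itl=24):
    """Rule-of-thumb free space left in a block for new inserts."""
    pctfree_reserve = math.floor((db_block_size - header_bytes) * pctfree / 100)
    itl_space = initrans * bytes_per_itl
    return db_block_size - pctfree_reserve - itl_space

# The (bad) settings from the example: 8K block, PCTFREE 60, INITRANS 90.
print(available_for_insert(8192, 60, 90))   # 1189
# More typical settings leave far more room: PCTFREE 10, INITRANS 2.
print(available_for_insert(8192, 10, 2))    # 7337
```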