Oracle 10g New Features
The g stands for grid computing. A common misconception is that grid is just the new name for RAC (an improved RAC). This is not the case: 10g ships with both RAC and grid, and you can install 10g with RAC only, with grid only, with both, or with neither. There is a profound difference between grid and RAC. 10g is said to have 149 new features, and it provides a wealth of features that can be used to automate almost every aspect of database administration. It is important to note that these automation features are optional; they are not intended to replace standard DBA activities. Rather, the Oracle 10g automation features are aimed at shops that do not have the manpower or expertise to perform the tasks manually.
Oracle Enhancements by Oracle Release
New Utilities in Oracle10g release 10.1.0:
• Oracle10g Grid – RAC enhanced for Oracle10g
dynamic scalability with server blades (extra-cost option)
• Completely reworked 10g Enterprise Manager (OEM)
• AWR and ASH tables incorporated into OEM
Performance Pack and Diagnostic Pack options (extra-cost option)
• Active Session History (ASH) materializes the Oracle Wait Interface over time (extra-cost option)
• Data Pump replaces the exp/imp utilities with expdp and impdp
• Automatic Database Diagnostic Monitor (ADDM) (extra-cost option)
• Automatic Storage Management (ASM) introduced; implements the Stripe And Mirror Everything (SAME) standard
• Automatic Workload Repository (AWR) replaces
STATSPACK (extra-cost option)
• SQL Tuning Advisor
• SQL Access Advisor
• Rolling database upgrades (using Oracle10g RAC)
• dbms_scheduler package replaces dbms_job for
scheduling
• Set Database Default Tablespace syntax
• Rename Tablespace command
• Introduced the Recycle Bin for storing objects after they are dropped; requires the new PURGE command for maintenance
• sqlplus / as sysdba accessibility without quote
marks
• SYSAUX tablespace
• Multiple Temporary Tablespaces supported to
reduce stress on sorting in TEMP
• RMAN introduces compression for backups
• New drop database syntax
• New alter database begin backup and alter database end backup syntax. You no longer need to specify the tablespaces one by one!
• Oracle10g Data Guard Broker introduced
• Oracle10g RAC supports secure Redo Log transport
• Flashback enhancements for flashback database
and flashback table syntax
• SQL Apply feature
• Cross Platform Transportable Tablespaces
• External Table unload utility
• SQL regular expression support with the new REGEXP functions
• New ROW TIMESTAMP column
• Automatic Database Tuning of Checkpoints, Undo
Segments and shared memory
• Automated invoking of dbms_stats for CBO
statistics collection
• Oracle Application Builder supports HTML DB
• Browser Based Data Workshop and SQL Workshop
• PL/SQL compiler enhanced with compile-time warnings; new utl_mail and utl_compress packages
• Faster PL/SQL programs, thanks to the new PL/SQL optimizing compiler and implicit array fetching. Even with a plain cursor FOR loop such as

FOR x IN (SELECT * FROM t) LOOP
  process_data(x);  -- row-by-row logic
END LOOP;

PL/SQL is silently "array fetching" 100 rows at a time.
Statistics Collection
These new features include the collection of data dictionary statistics (now recommended as a best practice by Oracle), new behaviors associated with the dbms_stats package, and new features related to monitoring tables in the database. The Rule Based Optimizer (RBO) is desupported in 10g. Also, 10g includes new features that enhance the generation of object-level statistics within the database. Let's take a look. New init.ora parameters have also been added, and existing ones modified:
Optimizer_mode
The parameter optimizer_mode has a new default value of ALL_ROWS in Oracle Database 10g. This means the optimizer will no longer operate under RULE mode (RBO) when a table has no statistics. In Oracle Database 10g the optimizer operates under ALL_ROWS mode (CBO) and uses dynamic sampling to obtain statistics for any tables that lack them. The other possible values are FIRST_ROWS_1, FIRST_ROWS_10, FIRST_ROWS_100, and FIRST_ROWS_1000. The CHOOSE, RULE, and FIRST_ROWS modes have been deprecated.
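If a different optimizer goal suits a particular workload, the parameter can also be changed at the session level. A minimal sketch (FIRST_ROWS_10 is just one of the settings listed above):

ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;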
Optimizer_dynamic_sampling
The parameter optimizer_dynamic_sampling has a new default value of 2 in Oracle Database 10g. This means dynamic sampling will be applied to all unanalyzed tables. It also means that twice the number of blocks will be used to calculate statistics compared to Oracle Database 9i, where the default value for dynamic sampling was 1.
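The level can likewise be adjusted per session if the default is too conservative for a given workload; a quick sketch (level 4 chosen arbitrarily here):

ALTER SESSION SET optimizer_dynamic_sampling = 4;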
System Statistics
In 10g the use of system statistics is enabled by default, and system statistics are automatically initialized with heuristic default values; these values do not represent your actual system. When you gather system statistics in Oracle Database 10g they will override these initial values. To gather system statistics, use DBMS_STATS.GATHER_SYSTEM_STATS during your peak workload time window. At the beginning of the peak workload window execute the following command:
BEGIN
DBMS_STATS.GATHER_SYSTEM_STATS('START');
END;
/
At the end of the peak workload window execute the following command:
BEGIN
DBMS_STATS.GATHER_SYSTEM_STATS('END');
END;
/
Oracle recommends gathering system statistics during a
representative workload, ideally at peak workload time. You only
have to gather system statistics once.
System statistics are not automatically collected as part of the new
statistics gather job (see the automatic statistics gathering job
section below for more details).
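To confirm what the gathering captured, the recorded values can be inspected in the sys.aux_stats$ dictionary table (a sketch; requires a suitably privileged account):

SELECT pname, pval1
FROM sys.aux_stats$
WHERE sname = 'SYSSTATS_MAIN';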
Data Dictionary Statistics Collection
Oracle recommends that you analyze the data dictionary. You can collect these statistics by using either the dbms_stats.gather_schema_stats or dbms_stats.gather_database_stats Oracle-supplied procedures, as shown here:

exec dbms_stats.gather_schema_stats('SYS')
The gather_schema_stats and gather_database_stats procedures are NOT new in Oracle 10g, but using them to collect data dictionary statistics is new, as some new parameters are available with these procedures. Oracle 10g also offers two new procedures in the dbms_stats package. First, the dbms_stats.gather_dictionary_stats procedure facilitates analysis of the data dictionary. Second, the dbms_stats.delete_dictionary_stats procedure allows you to remove data dictionary stats. Here is an example of the use of the dbms_stats.gather_dictionary_stats procedure:

exec dbms_stats.gather_dictionary_stats;

This example gathers statistics from the SYS and SYSTEM schemas as well as any other schemas that are related to RDBMS components (e.g., OUTLN or DBSNMP). Any user with SYSDBA privileges can analyze the data dictionary.
Statistics on the dictionary tables will be maintained via the automatic statistics gathering job run during the nightly maintenance window. If you choose to switch off the automatic statistics gathering job for your main application schema, consider leaving it on for the dictionary tables. You can do this by changing the value of AUTOSTATS_TARGET to ORACLE instead of AUTO using the following:
BEGIN
DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','ORACLE');
END;
/
When to Collect Dictionary Statistics
Oracle recommends the following strategy with regard to analyzing the data dictionary in Oracle 10g:
1. Analyze normal data dictionary objects (not fixed dictionary objects) using the same interval that you currently use when analyzing other objects. Use gather_database_stats, gather_schema_stats, or gather_dictionary_stats to perform this action. Here is an example:

exec dbms_stats.gather_schema_stats('SYS');

2. Analyze fixed objects only once, unless the workload footprint changes. Generally, use the dbms_stats.gather_fixed_objects_stats supplied procedure when connected as SYS or any other SYSDBA-privileged user. Here is an example:

exec dbms_stats.gather_fixed_objects_stats;
Gathering Fixed Table Statistics
A new parameter to the dbms_stats.gather_database_stats and dbms_stats.gather_schema_stats procedures is gather_fixed. This parameter is set to FALSE by default, which disables statistics collection for the fixed data dictionary tables (the x$ tables). Oracle suggests that you analyze fixed tables only once during a typical system workload. You should do this as soon as possible after your upgrade to Oracle 10g, but again it should be under a normal workload. Here is an example of the use of the gather_fixed argument within the dbms_stats.gather_schema_stats procedure:

exec dbms_stats.gather_schema_stats('SYS', gather_fixed=>TRUE);
Yet another new procedure, dbms_stats.gather_fixed_objects_stats, has been provided in Oracle 10g to collect object statistics on fixed objects. It also has a brother, delete_fixed_objects_stats, which removes those statistics. Second cousins, also new in Oracle 10g, include dbms_stats.export_fixed_objects_stats and dbms_stats.import_fixed_objects_stats. These allow you to export and import statistics to user-defined statistics tables, just as you could with normal table statistics previously. This allows your data dictionary fixed statistics to be exported out of and imported into other databases as required.
It's important to gather statistics on the fixed objects, as they are often queried to supply information to Statspack and the new Automatic Workload Repository (AWR) in Oracle Database 10g, and you need to give the CBO accurate statistics for these objects. You only need to gather fixed-object statistics once for a representative workload; they are not updated by the automatic statistics gathering job. You can collect statistics on fixed objects using:
BEGIN
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
Automatic Optimizer Statistics Collection
By default, Oracle 10g automatically gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB. By default this job runs within a maintenance window between 10 P.M. and 6 A.M. on weeknights and all day on weekends. The job calls the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC internal procedure, which collects statistics on database objects that have no previously gathered statistics or whose existing statistics are stale because the underlying object has been modified significantly (more than 10% of the rows). It operates much like the DBMS_STATS.GATHER_DATABASE_STATS procedure with the GATHER AUTO option; the primary difference is that the internal procedure prioritizes the database objects that require statistics, so that those objects which most urgently need updated statistics are processed first. This ensures that the most-needed statistics are gathered before the maintenance window closes.
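If you decide the automatic collection is not appropriate for your environment at all, the job itself can be disabled through the Scheduler (a sketch using the documented job name):

EXEC DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');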
In some cases automatically gathering statistics can cause problems. Highly volatile tables and load tables may have their statistics gathered when an unrepresentative number of rows is present. These situations can be avoided by using one of two methods:
* The current statistics can be deleted and locked to prevent DBMS_STATS from gathering new statistics. If the OPTIMIZER_DYNAMIC_SAMPLING parameter is set to 2 (the default) or higher, the necessary statistics will be gathered as part of the query optimization stage (see Dynamic Sampling):
BEGIN
DBMS_STATS.delete_table_stats('MY_SCHEMA','LOAD_TABLE');
DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/
* The statistics can be gathered then locked at a time when the
table contains the appropriate data:
BEGIN
DBMS_STATS.gather_table_stats('MY_SCHEMA','LOAD_TABLE');
DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/
System statistics and statistics for fixed objects, such as the dynamic performance tables, are not gathered automatically.
Dynamic Sampling
Dynamic sampling enables the server to improve performance by:
* Estimating single-table predicate selectivities where available statistics are missing or may lead to bad estimates.
* Estimating statistics for tables and indexes with missing statistics.
* Estimating statistics for tables and indexes whose statistics are out of date.
Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which accepts values from "0" (off) to "10" (aggressive sampling), with a default value of "2". At compile time Oracle determines whether dynamic sampling would improve query performance. If so, it issues recursive statements to estimate the necessary statistics. Dynamic sampling can be beneficial when:
* The sample time is small compared to the overall query execution time.
* Dynamic sampling results in a better-performing query.
* The query may be executed multiple times.
In addition to the OPTIMIZER_DYNAMIC_SAMPLING system parameter, the dynamic sampling level can be set using the DYNAMIC_SAMPLING optimizer hint for specific queries, like:

SELECT /*+ dynamic_sampling(emp 10) */ empno, ename, job, sal
FROM emp
WHERE deptno = 30;
The results of dynamic sampling are repeatable provided no rows are inserted, updated, or deleted in the sampled table. The OPTIMIZER_FEATURES_ENABLE parameter turns off dynamic sampling if it is set to a version earlier than 9.2.0.
New DBMS_STATS Behaviors
Oracle has introduced some new arguments that you can use with the dbms_stats package in Oracle Database 10g. The granularity parameter is used in several dbms_stats subprograms (e.g., gather_table_stats and gather_schema_stats) to indicate the granularity of the statistics that you want to collect, particularly for partitioned tables. For example, you can opt to gather only global statistics on a partitioned table, or you can opt to gather global and partition-level statistics. The granularity parameter comes with an auto option. When auto is used, Oracle collects global, partition-level, and sub-partition-level statistics for a range-list partitioned table. For other partitioned tables, only global and partition-level statistics are gathered. A second granularity option, global and partition, will gather the global and partition-level statistics but no sub-partition-level statistics, regardless of the type of partitioning employed on the table. Here are some examples of using these new options:
exec dbms_stats.gather_table_stats('my_user','my_tab',granularity=>'AUTO');
exec dbms_stats.gather_table_stats('my_user','my_tab',granularity=>'GLOBAL AND PARTITION');
New options are also available with the degree parameter, which allows you to parallelize the statistics-gathering process. Using the new auto_degree option, Oracle will determine the degree of parallelism that should be used when analyzing the table. Simply pass the predefined value dbms_stats.auto_degree in the degree parameter. Oracle will then decide the degree of parallelism to use: either no parallelism or a default degree of parallelism, which depends on the number of CPUs and the values of various database parameter settings. Here is an example of the use of the new degree option:

exec dbms_stats.gather_table_stats('my_user','my_tab',degree=>dbms_stats.auto_degree);
Finally, the stattype parameter is a new parameter that gives you the option of gathering both data and caching statistics (the default), only data statistics, or only caching statistics. Valid options are all, cache, or data, depending on the type of statistics you wish to gather. Here is an example of the use of the stattype parameter:

exec dbms_stats.gather_table_stats('my_user','my_tab',stattype=>'ALL');
You can query details about these jobs by selecting data from
dba_scheduler_jobs:
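A query along the following lines would produce a listing like the one below (a sketch; RUN_COUNT, FAILURE_COUNT, LAST_START_DATE, and LAST_RUN_DURATION are standard columns of DBA_SCHEDULER_JOBS, and the output has been formatted for readability):

SELECT job_name, start_date, state, run_count,
       failure_count, last_start_date, last_run_duration
FROM dba_scheduler_jobs;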
JOB_NAME             START              STATE RUN COUNT FAIL COUNT LAST START         RUN TIME
-------------------- ------------------ ----- --------- ---------- ------------------ --------
PURGE_LOG            11-AUG-04 03:00 AM SCHED       513          0 05-OCT-06 03:00 AM 00:00:00
GATHER_STATS_JOB                        SCHED       194          0 04-OCT-06 11:00 PM 06:23:54
ADV_SHRINK_1988719                      SCHED       152          0 04-OCT-06 11:00 PM 00:00:04
ADV_SHRINK_2709655                      SCHED       151          1 04-OCT-06 11:00 PM 01:39:04
MY_JOB               18-FEB-06 12:00 AM SCHED       244         24 05-OCT-06 04:00 AM 00:41:57
ADV_SHRINK_2457725                      SCHED       151          0 04-OCT-06 11:00 PM 00:58:33
ADV_SQLACCESS2599661                    SCHED       148        148 04-OCT-06 11:00 PM 00:00:00
The PURGE_LOG job has run 513 times since 11-AUG-04, and GATHER_STATS_JOB has 194 runs. The number of days does not add up, as the number of days between August 2004 and October 2006 is on the order of 770-plus days. One run count that is fairly accurate is MY_JOB's 244. At the time of this article, there have been 230 days since 18-FEB-06, so the extra runs (both good and bad) are probably due to manual invocations (testing, ad hoc, etc.) of the job.
Recall that the nightly maintenance window is only eight hours, and the RUN TIME figures show that more than eight hours can be required. It is possible that some jobs do not run because they are lower in priority (as determined by Oracle). There is an attribute/setting which may be the culprit when a particular run count is off. As explained in the tuning guide:
The stop_on_window_close attribute controls whether the
GATHER_STATS_JOB continues when the maintenance window closes. The
default setting for the stop_on_window_close attribute is TRUE,
causing Scheduler to terminate GATHER_STATS_JOB when the maintenance
window closes.
The view is also telling in that it identifies jobs with serious problems (read as "does not work at all"), such as the SQL Access advisor job at the bottom. If you don't have alerts set up to notify you when a job succeeds (or fails), it is probably worthwhile to periodically inspect the STATE of a job via DBA_SCHEDULER_JOBS. Disabling a job, for example, is as simple as executing (in a PL/SQL block if you want) dbms_scheduler.disable('Enter_Job_Name'); in a SQL*Plus session (qualify with the owner name if necessary). To disable the forever-failing ADV_SQLACCESS2599661 job (owned by SYS):
SQL> set serveroutput on
SQL> exec
dbms_scheduler.disable('ADV_SQLACCESS2599661');
Are you stuck with the 10-6 and weekend schedule for
GATHER_STATS_JOB? Let's look at some of the details of the job via
DBA_SCHEDULER_WINDOWS.
WINDOW_NAME      NEXT_START_DATE        COMMENTS
---------------- ---------------------- -------------------------------------
WEEKNIGHT_WINDOW 05-OCT-06 10.00.00 PM  Weeknight window for maintenance task
WEEKEND_WINDOW   07-OCT-06 12.00.00 AM  Weekend window for maintenance task
The attribute that cannot be changed is WINDOW_NAME. Also included
in Chapter 27 is information about altering a window. It is possible
to increase or decrease the amount of time for the maintenance
window.
You alter a window using the SET_ATTRIBUTE procedure or Enterprise
Manager. With the exception of WINDOW_NAME, all the attributes of a
window can be changed when it is altered. The attributes of a window
are available in the *_SCHEDULER_WINDOWS views.
When a window is altered, it does not affect an active window. The
changes only take effect the next time the window opens.
All windows can be altered. If you alter a window that is disabled,
it will remain disabled after it is altered. An enabled window will
be automatically disabled, altered, and then reenabled, if the
validity checks performed during the enable process are successful.
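For example, shortening the weeknight window could look like the following sketch (the 'duration' attribute name comes from the Scheduler window documentation; the four-hour value is arbitrary):

BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'WEEKNIGHT_WINDOW',
    attribute => 'duration',
    value     => numtodsinterval(4, 'hour'));
END;
/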
Flushing the Buffer Cache
Prior to Oracle Database 10g, the only way to flush the database buffer cache was to shut down the database and restart it. Oracle Database 10g now allows you to flush the database buffer cache with the alter system command using the flush buffer_cache clause. The FLUSH BUFFER_CACHE clause is useful if you need to measure the performance of rewritten queries or a suite of queries from identical starting points. Use the following statements to flush the buffer cache and the shared pool:
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;
However, note that this clause is intended for use only
on a test database. It is not advisable to use this clause on a
production database, because subsequent queries will have no hits,
only misses.
Database Resource Manager New Features
The Database Resource Manager in Oracle Database 10g
offers a few new features that you need to be aware of:
■ The ability to revert to the original consumer group at the end of an operation that caused a change of consumer groups
■ The ability to set idle timeout values for consumer groups
■ The ability to create mappings for the automatic assignment of sessions to specific consumer groups
Each of these topics is discussed, in turn, in more
detail in the following sections.
Reverting Back to the Original Consumer Group
Prior to Oracle Database 10g, if a SQL call caused a
session to be put into a different consumer group (for example,
because a long-running query exceeded a SWITCH_TIME directive
value in the consumer group), then that session would remain
assigned to the new resource group until it was ended. Oracle
Database 10g allows you to use the new SWITCH_BACK_AT_CALL_END
directive to indicate that the session should be reverted back to
the original consumer group once the call that caused it to switch
consumer groups (or the top call) is complete. This is very useful
for n-tier applications that create a pool of sessions in the
database for clients to share. Previously, after the consumer
group had been changed, all subsequent connections would be
penalized based on the settings of the consumer group resource
plan. The new SWITCH_BACK_AT_CALL_END directive allows the session
to be reset, thus eliminating the impact to future sessions. Here
is an example of the use of this new feature:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN                    => 'main_plan',
    GROUP_OR_SUBPLAN        => 'goonline',
    COMMENT                 => 'Online sessions',
    CPU_P1                  => 80,
    SWITCH_GROUP            => 'ad-hoc',
    SWITCH_TIME             => 3,
    SWITCH_ESTIMATE         => TRUE,
    SWITCH_BACK_AT_CALL_END => TRUE);
END;
/
In this case, I have created a plan directive that is a part
of an overall plan called MAIN_PLAN. This particular plan
directive is designed to limit the impact of online ad-hoc users
(or perhaps applications that are throwing out a great deal of
dynamic SQL that’s hard to tune) if they issue queries that take a
long time (in this example, 3 seconds). This directive causes a
switch to a consumer group called ad-hoc, which would likely
further limit CPU and might also provide for an overall run-time
limit on executions in this particular plan/resource group. Since
I have included the SWITCH_BACK_AT_CALL_END directive in this plan
directive, the consumer group will revert back to the original
plan after the completion of the long-running operation.
Setting the Idle Timeout
Oracle Database 10g allows you to limit the maximum time that a session is allowed to remain idle. The max_idle_time parameter allows you to define a maximum amount of time that a given session can sit idle, as shown in the upcoming example. PMON will check the session once a minute and kill any session that has been idle for the amount of time defined in the plan.

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN             => 'main_plan',
    GROUP_OR_SUBPLAN => 'online',
    MAX_IDLE_TIME    => 300,
    COMMENT          => 'Set max_idle_time');
END;
/
Creating Mappings for Automatic Assignment of Sessions to Consumer Groups
The dbms_resource_manager.set_group_mapping procedure allows you to map a specific consumer group to a given session based on either login or run-time attributes. These attributes include:
■ The username
■ The service name
■ The client OS username
■ The client program name
■ The client machine
■ The module name
■ The module name action
You then have to determine which session attributes you want to map to a given consumer group. In this example, I have mapped the client machine called tiger to the resource consumer group LOW_PRIORITY:

exec dbms_resource_manager.set_group_mapping(DBMS_RESOURCE_MANAGER.CLIENT_MACHINE, 'tiger', 'low_priority');

Thus, if anyone logs in to the database from the machine named tiger, they will be assigned to the consumer group LOW_PRIORITY, which will have already been created.
Oftentimes, a number of mappings can apply to a given session, and a priority has to be defined. This is done by using the procedure dbms_resource_manager.set_mapping_priority. This example creates two mappings:

BEGIN
  dbms_resource_manager.set_group_mapping(DBMS_RESOURCE_MANAGER.CLIENT_MACHINE, 'tiger', 'low_priority');
  dbms_resource_manager.set_group_mapping(DBMS_RESOURCE_MANAGER.ORACLE_USER, 'NUMBER_ONE', 'high_priority');
END;
/
In this case, anyone signing in from tiger is assigned to the LOW_PRIORITY consumer group, but where will the user NUMBER_ONE be assigned? Right now, it's hard to tell. So, to make sure that NUMBER_ONE is always assigned to the high-priority resource consumer group, I can use the provided procedure dbms_resource_manager.set_mapping_priority:
BEGIN
  dbms_resource_manager.set_mapping_priority(
    ORACLE_USER        => 1,
    CLIENT_MACHINE     => 2,
    EXPLICIT           => 3,
    MODULE_NAME        => 4,
    SERVICE_NAME       => 5,
    CLIENT_OS_USER     => 6,
    CLIENT_PROGRAM     => 7,
    MODULE_NAME_ACTION => 8);
END;
/
This code will cause Oracle to prioritize consumer group
selection based first on username and then on the client machine
name. So, now the user NUMBER_ONE will always get the
higher-priority consumer group assignment. Be aware that
regardless of consumer group assignments, a user must still be
given switching privileges into a given consumer group. If the
user has not been granted such privileges, then sessions will not
be switched.
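Granting that switch privilege is done through the dbms_resource_manager_privs package; a minimal sketch continuing the example above (the FALSE argument means the grantee cannot pass the privilege on):

EXEC DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP('NUMBER_ONE', 'HIGH_PRIORITY', FALSE);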
Scheduler Changes
Oracle Database 10g offers a brand new job-scheduling facility, known as The Scheduler, controlled via the new dbms_scheduler package. This package replaces dbms_job (which is still available) and offers much more functionality. The Scheduler enables you to execute a variety of stored code (such as PL/SQL), native binary executables, and OS scripts (so you can get rid of cron jobs). The object that is run by The Scheduler is known as the program. The program is more than just a name; it includes related metadata, such as the arguments to be passed to it and the type of program that is being run.
Different users can use a program at different times, eliminating
the need to have to redefine the program every time you wish to
schedule a job. Programs can be stored in program libraries, which
allows for easy reuse of program code by other users. Each program,
when scheduled, is assigned to a job. A job can also just contain an
anonymous PL/SQL block instead of a program. The job is a
combination of the program (or anonymous PL/SQL block) and the
schedule associated with the program, which defines when the job is
to run. Also associated with the job is other metadata related to
the job, such as the job class and the window or window group. The
job class is a category of jobs that share various characteristics,
such as resource consumer group assignments and assignments to a
common, specific, service name. The job class is related to the job
window. The job window, or window group, essentially allows the job
to take advantage of specific resource plans. For example, if the
schedule for a job is for it to run every hour, the job window will
allow it to run under one resource group in the morning and a
different resource group in the evening. That way, you can control
the resources the job can consume at different times throughout the
day. Oracle provides two different interfaces into The Scheduler.
The first is the dbms_scheduler package and the second is through
the Oracle Enterprise Manager (OEM).
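As a flavor of the package interface, here is a minimal sketch that creates and enables a job running an anonymous PL/SQL block nightly (the job name, repeat interval, and called procedure are hypothetical):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_PURGE',               -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN purge_old_rows; END;',  -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',         -- daily at 2 A.M.
    enabled         => TRUE);
END;
/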
User-Configurable Default Tablespaces
Oracle offers default tablespaces in Oracle Database 10g. Once you configure a default user tablespace, all new users will be assigned to that tablespace rather than the SYSTEM tablespace. Syntax:

ALTER DATABASE DEFAULT TABLESPACE <tsname>;
Temporary Tablespace Groups
A temporary tablespace group is a list of temporary tablespaces. It has the following advantages:
• You can define more than one default temporary tablespace, and a single SQL operation can use more than one temporary tablespace for sorting. This prevents large tablespace operations from running out of temporary space.
• It enables one particular user to use multiple temporary tablespaces in different sessions at the same time.
• It enables the slave processes in a single parallel operation to use multiple temporary tablespaces.
You create a temporary tablespace group when you specify the
TABLESPACE GROUP clause in a CREATE TABLESPACE statement:
CREATE TEMPORARY TABLESPACE temp_old TEMPFILE
'/u01/oracle/oradata/temp01.dbf' SIZE 500M TABLESPACE GROUP
group1;
You can also create a temporary tablespace group by:
ALTER TABLESPACE temp_old TABLESPACE GROUP group1;
Note: If you specify the NULL or '' tablespace group, it is
equivalent to the normal temporary tablespace creation statement
(without any groups).
Assigning Temporary Tablespaces to Tablespace Groups
alter tablespace temp_tbs_01 tablespace group tbs_group_02;
There is no limit to the number of tablespaces that can be assigned
to a tablespace group. The tablespace group shares the same
namespace as normal tablespaces, so tablespace names and tablespace
group names are mutually exclusive. You can also remove a tablespace
from a group by using the alter tablespace command and using empty
quotes as an argument to the tablespace group parameter, as shown in
this example:
Alter tablespace temp3 tablespace group '';
Then you can assign a Temporary Tablespace Group to Users
CREATE USER sam IDENTIFIED BY sam DEFAULT TABLESPACE users
TEMPORARY TABLESPACE group1;
ALTER USER SAM TEMPORARY TABLESPACE GROUP2;
Defining a Tablespace Group as the Default Temporary Tablespace
After you have created the tablespace group and assigned a set of tablespaces to that group, you can assign that group of temporary tablespaces (or that tablespace group) as the default temporary tablespace for the system, or as a temporary tablespace group for specific users.
You can do this in the create database statement when you create the
database, or you can use the alter database statement to modify the
temporary tablespace settings. Using either statement, you simply
define the tablespace group as the default tablespace, as shown in
this example:
Alter database default temporary tablespace tbs_group_01;
This has the effect of assigning multiple tablespaces as the default
temporary tablespace. Once you have assigned a tablespace group as
the default temporary tablespace group, you cannot drop any
tablespace in that group. So, now you can define more than a single
tablespace as the database default temporary tablespace; as a
result, larger SQL operations can use more than one tablespace for
sort operations, thereby reducing the risk of running out of space.
This also provides more tablespace space, and potentially better I/O
distribution for sort operations and parallel slave operations that
use temporary tablespaces. If a tablespace group is defined as the
default temporary tablespace, then no tablespaces in that group can
be dropped until that assignment has been changed. You can assign a
user to a tablespace group that might not be the default tablespace
group either in the create user or alter user statements, as shown
in these examples that assign the TBS_GROUP_01 tablespace to the
user NO_PS:
Create user no_ps identified by gonesville default tablespace
dflt_ts temporary tablespace tbs_group_01;
alter user no_ps temporary tablespace tbs_group_02;
Tablespace Group Data Dictionary View
A new view, DBA_TABLESPACE_GROUPS, is available to associate specific temporary tablespaces with tablespace groups. The
TEMPORARY_TABLESPACE column of the *_users views will report either
the temporary tablespace name or the temporary tablespace group name
that is assigned to the user. Here is an example of a query that
joins the DBA_USERS and DBA_TABLESPACE_GROUPS views together and
gives you a list of users who are assigned a tablespace group as
their temporary tablespace name, and all of the tablespaces that are
associated with that group:
Select a.username, a.temporary_tablespace, b.tablespace_name
from dba_users a, dba_tablespace_groups b
where a.temporary_tablespace = b.group_name;
Renaming Tablespaces
Oracle 10g includes the ability to rename tablespaces. You use the alter tablespace command with the rename to clause, as shown in this example:

Alter tablespace production_tbs rename to prod_tbs;
You cannot rename the SYSTEM or the SYSAUX tablespace. Renaming a tablespace DOES NOT CHANGE the names of its datafiles. Another nice feature is that if the tablespace is an UNDO tablespace and you are using a server parameter file (SPFILE), Oracle will change the UNDO_TABLESPACE parameter in the SPFILE to reflect the new UNDO tablespace name. The ability to rename tablespaces has some great practical applications with operations such as transportable tablespaces. Now, rather than having to drop the existing tablespace before you can transport one in, you only need to rename that tablespace.
CAUTION
You should back up the control file as soon as possible
after renaming tablespaces within the database. If you do not,
depending on when the backup of the control file took place, a
divergence may exist between the tablespace names in the control
file and the actual tablespace names in the database.
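A quick way to take that backup right after the rename (the destination path is hypothetical):

ALTER DATABASE BACKUP CONTROLFILE TO '/u01/backup/control_after_rename.ctl';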
Dropping Databases
The drop database command can be used to drop your database. Oracle will drop the database, deleting all control files and all datafiles listed in the control file. If you are using an SPFILE, Oracle will remove it as well. Only a user with SYSDBA privileges can issue the statement, and the database must be mounted (not open) in exclusive and restricted mode. Here is an example of the use of the drop database command:
startup nomount
alter database mount exclusive;
alter system enable restricted session;
select name from v$database;
drop database;
Larger LOBs
If you use LOBs in your database (NCLOB, BLOB, or CLOB),
then you will be happy to know that the limits on LOBs have been
increased in Oracle 10g. The new maximum limits are calculated at
(4GB – 1 byte) * (the database block size). Thus, if the database
block size is 8KB, there is essentially a 32GB limitation on LOBs
in that database. Note that Bfiles are limited to 4GB in size.
The SYSAUX Tablespace
The SYSAUX tablespace is a new feature and required component in Oracle 10g. The SYSAUX tablespace is a secondary tablespace for storing a number of database components that were previously stored in the SYSTEM tablespace. It is created as a locally managed tablespace using automatic segment space management. Previously, many Oracle features required their own separate tablespaces (such as the RMAN recovery catalog, Ultra Search, Data Mining, XDB, and OLAP), which increased the management responsibility of the DBA. The SYSAUX tablespace consolidates these tablespaces into one location, which becomes the default tablespace for these Oracle features.
When you create an Oracle database, Oracle creates the
SYSAUX tablespace for you by default. If you are using OMF, then
the tablespace is created in the appropriate OMF directory. If you
use the sysaux datafile clause in the create
database statement, then the SYSAUX
tablespace datafile(s) will be created in the location you define.
Finally, if no sysaux datafile clause is included and OMF is not configured, Oracle creates the SYSAUX tablespace in a default location that is OS-specific. Here is an example of a create database statement with the sysaux datafile clause in it:

CREATE DATABASE my_db
DATAFILE 'c:\oracle\oradata\my_db\my_db_system_01.dbf' SIZE 300m
SYSAUX DATAFILE 'c:\oracle\my_db\my_db_sysaux_01.dbf' SIZE 100m
DEFAULT TEMPORARY TABLESPACE dtemp_tbs
  TEMPFILE 'c:\oracle\my_db\my_db_temp_01.dbf' SIZE 100m
UNDO TABLESPACE undo_tbs_one
  DATAFILE 'c:\oracle\my_db\my_db_undo_tbs_one_01.dbf' SIZE 100m;
As stated earlier, when you migrate to Oracle Database 10g, you need to create the SYSAUX tablespace as a part of that migration. You do this after mounting the database under the new Oracle 10g database software. Once you have mounted it, you should open the database in migrate mode with the startup migrate command. Once the database is open, you can create the SYSAUX tablespace.
■ When migrating to Oracle Database 10g, you can create the
SYSAUX tablespace only when the database is open in migrate mode.
■ Also, when migrating
to Oracle Database 10g, if a tablespace is already named SYSAUX,
you will need to remove it or rename it while you are in migrate
mode.
■ Once you have opened
your Oracle Database 10g database, you cannot drop the SYSAUX
tablespace. If you try, an error will be returned.
■ You cannot rename
the SYSAUX tablespace during normal database operations.
■ The SYSAUX
tablespace cannot be transported to other databases via Oracle's
transportable tablespace feature.
Managing Occupants of the SYSAUX Tablespace
Each set of application tables within the SYSAUX
tablespace is known as an occupant. Oracle provides some new views
to help you monitor space usage of occupants within the SYSAUX
tablespace and some new procedures you can use to move the
occupant objects in and out of the SYSAUX tablespace. First,
Oracle provides a new view, V$SYSAUX_OCCUPANTS, to manage
occupants in the SYSAUX tablespace. This view allows you to
monitor the space usage of occupant application objects in the
SYSAUX tablespace, as shown in this example:
SELECT substr(occupant_name,1,20) occupant_name,
       substr(schema_name,1,20) schema_name,
       space_usage_kbytes
FROM v$sysaux_occupants;
In this case, Oracle will display the space usage for
the occupants, such as the RMAN recovery catalog. If you determine
that you need to move the occupants out of the SYSAUX tablespace,
then the MOVE_PROCEDURE column of the V$SYSAUX_OCCUPANTS view will
indicate the procedure that you should use to move the related
occupant from the SYSAUX tablespace to another tablespace. This
can also be a method of “reorganizing” your component object
tables, should that be required.
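For example, to see the largest occupants and the procedure that would relocate each one (columns as reported by V$SYSAUX_OCCUPANTS):

SELECT occupant_name, move_procedure, space_usage_kbytes
FROM v$sysaux_occupants
ORDER BY space_usage_kbytes DESC;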
Row Timestamp (ora_rowscn and scn_to_timestamp)
Oracle 10g provides a new pseudo-column, containing the committed SCN (or its timestamp), that gives applications and users the ability to implement optimistic locking efficiently. In previous releases, when posting updates to the database, applications had to read in all column values or user-specified indicator columns, compare them with those previously fetched, and apply the update only if they were identical. With this feature, only the row SCN needs to be retrieved and compared to verify that the row has not changed between the select and the update. The pseudo-column for the committed SCN is called ora_rowscn and is one of the version query pseudo-columns. The ora_rowscn pseudo-column returns, for each version of each row, the system change number (SCN) of the row. You cannot use this pseudo-column in a query to a view; however, you can use it to refer to the underlying table when creating a view. You can also use this pseudo-column in the WHERE clause of an UPDATE or DELETE statement. For example:
SELECT ora_rowscn FROM used_boats;
ORA_ROWSCN
----------
791744
791744
791744
791744
791744
The above query shows us that all of the records in used_boats were
committed in the same transaction. Let's update some of the rows and
see what happens.
UPDATE used_boats SET
price=price*1.1 WHERE seller_id=1;
commit;
SELECT ora_rowscn FROM used_boats;
ORA_ROWSCN
----------
816673
816673
816673
791744
791744
Another convenient function allows you to retrieve the actual time
that the row was last altered through a conversion function called
scn_to_timestamp. Let's look at an example usage of this function.
select scn_to_timestamp(ora_rowscn) from used_boats;

SCN_TO_TIMESTAMP(ORA_ROWSCN)
-------------------------------
30-AUG-03 11.06.08.000000000 PM
30-AUG-03 11.06.08.000000000 PM
30-AUG-03 11.06.08.000000000 PM
30-AUG-03 04.33.19.000000000 PM
30-AUG-03 04.33.19.000000000 PM
ora_rowscn has the following restrictions: the pseudo-column is not supported for external tables or when directly querying views. The data from the SCN and timestamp pseudo-columns could prove invaluable in a flashback situation.
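Putting the pseudo-column to its optimistic-locking use, an update might look like the following sketch (boat_id is a hypothetical key column, and 791744 stands in for the SCN captured when the row was first read; zero rows updated means another session changed the row in the meantime):

UPDATE used_boats
SET price = price * 1.1
WHERE boat_id = 42
AND ora_rowscn = 791744;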
Automatic Storage Management (ASM)
Oracle 10g introduces Automatic Storage Management (ASM), a service that provides management of disk drives. Oracle 10g provides its own disk storage management system; database administrators are no longer required to use hardware-vendor or third-party disk volume managers to provide striping and mirroring functionality. ASM manages the raw disks within the Oracle database architecture. Administrators are able to assign disks to disk groups, which can then be striped and/or mirrored to provide high performance and high availability. During tablespace creation, the administrator assigns the tablespace datafile to a disk group. This differs from previous Oracle releases, which required datafiles to be assigned to the individual disks themselves. Interestingly enough, Oracle's default stripe size is one megabyte. This differs from most disk storage management systems, which often utilize 32K or 64K stripe sizes. Oracle found that one-megabyte stripes provided a very high level of data transfer and best met the needs of disk-intensive applications. One can only assume that advancements in disk storage technology have allowed Oracle to access the data in one-megabyte chunks without driving disk utilization to unacceptable levels.
Administrators provide disk mirroring by creating failure groups. The DBA creates the appropriate number of failure groups to accommodate the data requiring disk fault tolerance. ASM's mirroring capability ranges from the mirroring of individual datafiles to entire disk arrays, providing administrators with a high level of flexibility when creating fault-tolerant disk subsystems. The data is duplicated on separate disks in one-megabyte mirror "chunks".
Administrators can choose from the following mirroring options in ASM (see the sketch after this list):
- External - no mirroring
- Normal - data is mirrored on two separate disks. This is the default setting.
- High Redundancy - data is mirrored on three separate disks, providing three-way mirroring capabilities.
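A normal-redundancy disk group with two failure groups might be created like this sketch (the disk group, failure group, and device names are hypothetical):

CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  FAILGROUP fg2 DISK '/dev/raw/raw3', '/dev/raw/raw4';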
ASM requires its own instance, which identifies the
various disk groups and files during instance startup. The ASM
instance then mounts the disks under its control and creates an
extent map, which is passed to the database instances. ASM does
not perform the I/O for the database instances; it is only used to
manage the various disk groups under its control. ASM is only
activated when individual datafiles are created or dropped or
disks are added and removed from the disk groups. When new disks
are added or removed from the disk group, ASM automatically
rebalances the files contained in the disk group while the
database is open and functioning.
ASM is able to balance the I/O for multiple databases across all
managed devices providing load balancing for multiple
applications. In Oracle10G Grid implementations, ASM is able to
reassign disks from one node to another providing additional load
balancing capabilities.
Oracle Enterprise Manager (OEM) for Oracle10G and the Database
Configuration Assistant (DBCA) have been updated to allow
administrators to configure and manage databases using ASM. ASM
can be used on a variety of configurations, including Oracle9i RAC
installations. ASM is an alternative to the use of raw or cooked
file systems. ASM offers a number of features, including:
- Simplified daily administration
- The performance of raw disk I/O for all ASM
files
- Compatibility with any type of disk
configuration, be it JBOD or complex SAN
- Use of a specific file-naming convention to
name files, enforcing an enterprise-wide file-naming
convention
- Prevention of the accidental deletion of
files, since there is no file system interface and ASM is
solely responsible for file management
- Load balancing of data across all ASM
managed disk drives, which helps improve performance by
removing disk hot spots
- Dynamic load balancing of disks as usage
patterns change and when additional disks are added or removed
- Ability to mirror data on different disks to
provide fault tolerance
- Support of vendor-supplied storage offerings
and features
- Enhanced scalability over other
disk-management techniques
ASM can work in concert with existing databases that use
raw or cooked file systems. You can choose to leave existing file
systems in place or move the database datafiles to ASM disks.
Additionally, new database datafiles can be placed in either ASM
disks or on the preexisting file systems. Databases can
conceivably contain a mixture of file types, including raw,
cooked, OMF, and ASM (though the management of such a system would
be more complex). The details of implementing and managing ASM are
significant and would consume more than a few chapters. Review the
Oracle Database 10g documentation for more details on this new
Oracle feature.
Not only does ASM deliver near-optimal performance, it is also very simple to use. There are really only two decisions that you have to make:
- When creating disk groups and allocating storage, most customers will be best served by using just two disk groups – one for a recovery area and one for their work area. However, customers with intensive OLTP applications but no disk cache may also need a separate disk group for online log files.
- When creating disk groups, you will need to tell ASM whether the storage is already mirrored by the hardware, or whether ASM should mirror the data.
Because ASM is highly automated and delivers excellent performance, better than most customers have been able to achieve previously using established best practices, using ASM is the new best practice for all databases under Oracle 10g.
SQL Model Clause or Spreadsheet Functionality
Now, Oracle Database 10g queries and subqueries can include new
syntax that provides highly expressive spreadsheet-like array
computations with enterprise-level scalability. The
computations treat relational tables as n-dimensional arrays,
allowing complex computations while avoiding the performance
problems of multiple joins and unions. This will enhance SQL for
calculations. SQL result sets can be treated like multidimensional
arrays. Here's the Model clause syntax:
select ....
from ....
model [main]
[ reference models ]
[ partition by (<cols>)]
dimension by (<cols>)
measures (<cols>)
[ ignore nav ] | [ keep nav ]
[ rules
[ upsert | update]
[ automatic order | sequential order ]
[ iterate (n) [ until <condition>]
]
( <cell_assignment> = <expression> ...)
Examples:
To keep our examples concise, we will create a view using the Sales
History (SH) schema of the sample schema set provided with
Oracle10g. The view sales_view provides annual sums for
product sales, in dollars and units, by country, aggregated across
all channels. The view is built from a 1 million row fact
table and defined as follows:
CREATE VIEW sales_view AS
SELECT country_name country,
       prod_name prod,
       calendar_year year,
       SUM(amount_sold) sale,
       COUNT(amount_sold) cnt
FROM sales, times, customers, countries, products
WHERE sales.time_id = times.time_id
  AND sales.prod_id = products.prod_id
  AND sales.cust_id = customers.cust_id
  AND customers.country_id = countries.country_id
GROUP BY country_name, prod_name, calendar_year;
As an initial example of Model, consider the following
statement. It calculates the sales values for two products and
defines sales for a new product based on the other two products.
SELECT SUBSTR(country,1,20) country, SUBSTR(prod,1,15) prod, year, sales
FROM sales_view
WHERE country IN ('Italy','Japan')
MODEL RETURN UPDATED ROWS
  PARTITION BY (country)
  DIMENSION BY (prod, year)
  MEASURES (sale sales)
  RULES (
    sales['Bounce', 2002] = sales['Bounce', 2001] + sales['Bounce', 2000],
    sales['Y Box', 2002] = sales['Y Box', 2001],
    sales['2_Products', 2002] = sales['Bounce', 2002] + sales['Y Box', 2002])
ORDER BY country, prod, year;
The results are:

COUNTRY              PROD            YEAR       SALES
-------------------- --------------- ---------- ----------
Italy                2_Products      2002       92613.16
Italy                Bounce          2002       9299.08
Italy                Y Box           2002       83314.08
Japan                2_Products      2002       103816.6
Japan                Bounce          2002       11631.13
Japan                Y Box           2002       92185.47
This statement partitions data by country, so the formulas are applied to the data of one country at a time. Our sales fact data ends with 2001, so any rules defining values for 2002 or later will insert new cells. The first rule defines the sales of a video game called "Bounce" in 2002 as the sum of its sales in 2000 and 2001. The second rule defines the sales for Y Box in 2002 to be the same value they were for 2001. The third rule defines a product called "2_Products," which is simply the sum of the Bounce and Y Box values for 2002. Since the values for 2_Products are derived from the results of the two prior formulas, the rules for Bounce and Y Box must be executed before the 2_Products rule.
Note the following characteristics of the example above:
- The "RETURN UPDATED ROWS" clause following the keyword MODEL
limits the results returned to just those rows that were
created or updated in this query. Using this clause is a
convenient way to limit result sets to just the newly calculated
values. We will use the RETURN UPDATED ROWS clause
throughout our examples.
- The keyword "RULES," shown in all our examples at the
start of the formulas, is optional, but we include it for easier
reading.
- Likewise, many of our examples do not require ORDER BY on the
Country column, but we include the specification for convenience
in case readers want to modify the examples and use multiple
countries.
This section examines the techniques for referencing cells and
values in a SQL Model. The material on cell references is
essential to understanding the power of the SQL Model clause.
What if we want to update the existing sales value for the product
Bounce in the year 2000, in Italy, and set it to 10? We could
do it with a query like this, which updates the existing cell for
the value:
SELECT SUBSTR(country,1,20) country, SUBSTR(prod,1,15) prod, year, sales
FROM sales_view
WHERE country='Italy'
MODEL RETURN UPDATED ROWS
  PARTITION BY (country)
  DIMENSION BY (prod, year)
  MEASURES (sale sales)
  RULES ( sales['Bounce', 2000] = 10 )
ORDER BY country, prod, year;

COUNTRY              PROD            YEAR       SALES
-------------------- --------------- ---------- ----------
Italy                Bounce          2000       10
The formula in the query above uses "positional cell
reference." The value for the cell reference is matched to the
appropriate dimension based on its position in the
expression. The DIMENSION BY clause of the model
determines the position assigned to each dimension: in this
case, the first position is product ("prod") and the second position
is year.
What if we want to create a forecast value of the sales for
the product Bounce in the year 2005, in Italy, and set it to
20? We could do it with a query like this:
SELECT SUBSTR(country,1,20) country, SUBSTR(prod,1,15) prod, year, sales
FROM sales_view
WHERE country='Italy'
MODEL RETURN UPDATED ROWS
  PARTITION BY (country)
  DIMENSION BY (prod, year)
  MEASURES (sale sales)
  RULES ( sales['Bounce', 2005] = 20 )
ORDER BY country, prod, year;

COUNTRY              PROD            YEAR       SALES
-------------------- --------------- ---------- ----------
Italy                Bounce          2005       20
The formula in the query above sets the year value to 2005 and thus
creates a new cell in the array.
NOTE: If we want to create new cells, such as sales
projections for future years, we must use positional references or
FOR loops (discussed later in this paper). That is, positional
reference permits both updates and inserts into the array.
This is called the "upsert" process.
What if we want to update the sales for the product Bounce in all
years after 1999 where we already have values recorded?
Again, we will change values for Italy and set them to 10. We
could do it with a query like this:
SELECT SUBSTR(country,1,20) country, SUBSTR(prod,1,15) prod, year, sales
FROM sales_view
WHERE country='Italy'
MODEL RETURN UPDATED ROWS
  PARTITION BY (country)
  DIMENSION BY (prod, year)
  MEASURES (sale sales)
  RULES ( sales[prod='Bounce', year>1999] = 10 )
ORDER BY country, prod, year;

COUNTRY              PROD            YEAR       SALES
-------------------- --------------- ---------- ----------
Italy                Bounce          2000       10
Italy                Bounce          2001       10
The formula in the query above uses "symbolic cell reference." With symbolic cell references, standard SQL conditions are used to determine the cells which are part of a formula. You can use conditions such as <, >, IN, and BETWEEN. In this example the formula applies to any cell which has a product value equal to Bounce and a year value greater than 1999. The example shows how a single formula can access multiple cells.
NOTE: Symbolic references are very powerful, but they are
solely for updating existing cells: they cannot create new
cells such as sales projections in future years. If a cell
reference uses symbolic notation in any of its dimensions,
then its formula will perform only updates. Later we will
discuss FOR loops in the Model clause, which provide a concise
technique for creating multiple cells from a single formula.
Regular expressions
To harness the power of regular expressions, you can exploit the
newly introduced Oracle SQL REGEXP_LIKE operator and the
REGEXP_INSTR, REGEXP_SUBSTR, and REGEXP_REPLACE functions. You will
see how this new functionality supplements the existing LIKE
operator and the INSTR, SUBSTR, and REPLACE functions. In fact, they
are similar to the existing operator and functions but now offer
powerful pattern-matching capabilities. The searched data can be
simple strings or large volumes of text stored in the database
character columns.
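As a small taste before the reference tables, this sketch uses REGEXP_LIKE with the ZIP+4 pattern that Table 7 dissects piece by piece (the addresses table and its address column are hypothetical):

SELECT address
FROM addresses
WHERE REGEXP_LIKE(address, ' [[:digit:]]{5}(-[[:digit:]]{4})?$');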
Table 1: Anchoring Metacharacters

Metacharacter   Description
-------------   --------------------------------------------
^               Anchor the expression to the start of a line
$               Anchor the expression to the end of a line
Table 2: Quantifiers, or Repetition Operators

Quantifier   Description
----------   -----------------------------------------------
*            Match 0 or more times
?            Match 0 or 1 time
+            Match 1 or more times
{m}          Match exactly m times
{m,}         Match at least m times
{m, n}       Match at least m times but no more than n times
Table 3: Predefined POSIX Character Classes

Character Class   Description
---------------   --------------------------------------------------------
[:alpha:]         Alphabetic characters
[:lower:]         Lowercase alphabetic characters
[:upper:]         Uppercase alphabetic characters
[:digit:]         Numeric digits
[:alnum:]         Alphanumeric characters
[:space:]         Space characters (nonprinting), such as carriage return,
                  newline, vertical tab, and form feed
[:punct:]         Punctuation characters
[:cntrl:]         Control characters (nonprinting)
[:print:]         Printable characters
Table 4: Alternate Matching and Grouping of Expressions

Metacharacter   Name             Description
-------------   --------------   -----------------------------------------
|               Alternation      Separates alternates, often used with the
                                 grouping operator ()
( )             Group            Groups a subexpression into a unit for
                                 alternations, for quantifiers, or for
                                 backreferencing
[char]          Character list   Indicates a character list; most
                                 metacharacters inside a character list are
                                 understood as literals, with the exception
                                 of character classes and the ^ and -
                                 metacharacters
Table 5: The REGEXP_LIKE Operator

Syntax:
REGEXP_LIKE(source_string, pattern [, match_parameter])

Description: source_string supports character datatypes (CHAR, VARCHAR2, CLOB, NCHAR, NVARCHAR2, and NCLOB but not LONG). The pattern parameter is another name for the regular expression. match_parameter allows optional parameters such as handling the newline character, retaining multiline formatting, and providing control over case sensitivity.

Example:
SELECT ename FROM emp WHERE REGEXP_LIKE (ename, '^J.(N|M).S$');

ENAME
-----
JONES
JAMES

In this example, we tell Oracle to retrieve any values that start with J, followed by any letter, then N or M, then any letter, then S.

Another example: the following regular expression would match fly, flying, flew, flown, and flies:

SELECT c1 FROM t1 WHERE REGEXP_LIKE(c1, 'fl(y(ing)?|(ew)|(own)|(ies))');
Table 6: The REGEXP_INSTR Function

Syntax:
REGEXP_INSTR(source_string, pattern [, start_position [, occurrence [, return_option [, match_parameter]]]])

Description: This function looks for a pattern and returns the first position of the pattern. Optionally, you can indicate the start_position at which to begin the search. The occurrence parameter defaults to 1 unless you indicate that you are looking for a subsequent occurrence. The default value of return_option is 0, which returns the starting position of the pattern; a value of 1 returns the starting position of the next character following the match.

Example:
SELECT REGEXP_INSTR('5035 Forest Run Trace, Alpharetta, GA', '[^ ]+', 1, 6) "Test" FROM dual;

TEST
----
  36

In this example, we are telling Oracle to examine the string, looking for occurrences of one or more non-blank characters, and to return the starting position of the sixth such occurrence.
Table 7: Explanation of the 5-digit + 4 Zip-Code Expression
Syntax | Description
(space) | Empty space that must be matched
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{5} | Repeat exactly five occurrences of the character list
( | Start of subexpression
- | A literal hyphen, because it is not a range metacharacter inside a character list
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{4} | Repeat exactly four occurrences of the character list
) | Closing parenthesis, to end the subexpression
? | The ? quantifier matches the grouped subexpression 0 or 1 time, thus making the 4-digit code optional
$ | Anchoring metacharacter, to indicate the end of the line
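Putting the rows of Table 7 together yields the complete expression. A sketch of its use (the addresses table and column are hypothetical):
SELECT address FROM addresses
WHERE REGEXP_LIKE(address, ' [[:digit:]]{5}(-[[:digit:]]{4})?$');
This accepts addresses ending in either the 30346 or the 30346-1234 style.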
Table 8: The REGEXP_SUBSTR Function
Syntax: REGEXP_SUBSTR(source_string, pattern [, position [, occurrence [, match_parameter]]])
The REGEXP_SUBSTR function returns the substring that matches the pattern. Example:
SELECT REGEXP_SUBSTR('5035 Forest Run Trace, Alpharetta, GA', ',[^,]+,') "Test" FROM dual;
Test
------------------
, Alpharetta,
In this example we search for a comma, followed by one or more non-comma characters, immediately followed by another comma.
Table 9: The REGEXP_REPLACE Function
Syntax: REGEXP_REPLACE(source_string, pattern [, replace_string [, position [, occurrence [, match_parameter]]]])
This function replaces the matching pattern with a specified replace_string, allowing complex search-and-replace operations. Example:
SELECT REGEXP_REPLACE(emp_phone,
  '([[:digit:]]{3})\.([[:digit:]]{3})\.([[:digit:]]{4})',
  '(\1) \2-\3') "Test"
FROM emp;
Test
----------------
(404) 444-4321
(404) 555-5432
(404) 666-6543
In this example we search for a pattern of numbers that looks like a European-style phone listing, such as 111.222.3333, and convert it to the familiar US format (111) 222-3333.
Table 10: Backreference Metacharacter
Metacharacter | Description
\digit | A backslash followed by a digit between 1 and 9 matches the digit-th preceding parenthesized subexpression. (Note: the backslash has another meaning in regular expressions; depending on the context it can also be the escape character.)
Table 11: Explanation of the Pattern-Swap Regular Expression
Regular-Expression Item | Description
( | Start of first subexpression
. | Match any single character except a newline
* | Repetition operator; matches the preceding . metacharacter 0 to n times
) | End of first subexpression; the result of the match is captured in \1 (in this example, Ellen)
(space) | Empty space that needs to be present
( | Start of second subexpression
. | Match any single character except a newline
* | Repetition operator; matches the preceding . metacharacter 0 to n times
) | End of second subexpression; the result of the match is captured in \2 (in this example, Hildi)
(space) | Empty space
( | Start of third subexpression
. | Match any single character except a newline
* | Repetition operator; matches the preceding . metacharacter 0 to n times
) | End of third subexpression; the result of the match is captured in \3 (in this example, Smith)
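Assembling the pieces from Table 11, a sketch of the swap itself (the literal name is the table's own example):
SELECT REGEXP_REPLACE('Ellen Hildi Smith', '(.*) (.*) (.*)', '\3, \1 \2') FROM dual;
This returns 'Smith, Ellen Hildi', reordering the three captured subexpressions.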
Table 12: Explanation of the Social Security Number Regular Expression
Regular-Expression Item | Description
^ | Start-of-line anchor (the regular expression cannot have any leading characters before the match)
( | Start of subexpression listing alternates separated by the | metacharacter
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{3} | Repeat exactly three occurrences of the character list
- | A hyphen
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{2} | Repeat exactly two occurrences of the character list
- | Another hyphen
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{4} | Repeat exactly four occurrences of the character list
| | Alternation metacharacter; ends the first choice and starts the next alternate expression
[ | Start of character list
[:digit:] | POSIX numeric digit class
] | End of character list
{9} | Repeat exactly nine occurrences of the character list
) | Closing parenthesis, to end the subexpression group used for alternation
$ | Anchoring metacharacter, to indicate the end of the line; no extra characters can follow the pattern
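A sketch that exercises the Table 12 expression (the literal values are illustrative):
SELECT 'valid' FROM dual
WHERE REGEXP_LIKE('123-45-6789', '^([[:digit:]]{3}-[[:digit:]]{2}-[[:digit:]]{4}|[[:digit:]]{9})$');
Both 123-45-6789 and 123456789 pass; anything with stray characters fails.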
Constraints
You can use Oracle Regular Expressions to filter data that is
allowed to enter a table by using constraints. The following example
shows how a column could be configured to allow only alphabetical
characters within a VARCHAR2 column. This will disallow all
punctuation, digits, spacing elements, and so on, from entering the
table.
CREATE TABLE t1 (c1 VARCHAR2(20), CHECK (REGEXP_LIKE(c1, '^[[:alpha:]]+$')));
INSERT INTO t1 VALUES ('newuser');
1 row created.
INSERT INTO t1 VALUES ('newuser1');
ORA-02290: check constraint violated
INSERT INTO t1 VALUES ('new-user');
ORA-02290: check constraint violated
Performance Considerations
Due to the inherent complexity of the compile and match logic, regular expression functions can be slower than their non-regular-expression counterparts.
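Where an ordinary pattern suffices, it is usually cheaper to stay with the classic operator. A sketch (using emp/ename as in the earlier examples):
SELECT ename FROM emp WHERE ename LIKE 'J%S';             -- simple prefix/suffix test
SELECT ename FROM emp WHERE REGEXP_LIKE(ename, '^J.*S$'); -- equivalent, but heavier
Reserve the REGEXP_* functions for patterns that LIKE cannot express.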
Automatic Workload Repository (AWR)
AWR periodically gathers and stores system activity and workload data, which is then analyzed by ADDM. Every layer of Oracle is instrumented to gather workload information that is used to make self-managing decisions; AWR is the place where this data is stored. AWR samples system performance periodically (by default every 60 minutes) and stores the information it finds (by default for up to 7 days). This makes it possible to track workload changes and database usage patterns. AWR runs by default, and Oracle states that it does not add a noticeable level of overhead. A new background server process (MMON) takes snapshots of the in-memory database statistics (much like STATSPACK) and stores this information in the repository. MMON also provides Oracle10g with a server-initiated alert feature, which notifies database administrators of potential problems (out of space, max extents reached, performance thresholds, and so on). The information is stored in the SYSAUX tablespace under the SYS schema and is the basis for all self-management decisions. For example, it is thus possible to identify the SQL statements that have the:
* largest CPU consumption
* most buffer gets
* most disk reads
* most parse calls
* most shared memory
To access AWR from OEM, click on Administration, then "Automatic Workload Repository", where you can perform all the tasks described here.
Both the snapshot frequency and retention time can be modified by
the user. To see the present settings, you could use:
select snap_interval,
retention from dba_hist_wr_control;
SNAP_INTERVAL RETENTION
------------------- -------------------
+00000 01:00:00.0 +00007 00:00:00.0
This output shows that snapshots are taken every hour and that the collections are retained for seven days.
The default collection for AWR data is 7 days, so many Oracle DBAs will increase the retention of detail information over longer time periods using the new procedure DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS. Remember that if you increase the retention, it is recommended to extend the SYSAUX tablespace accordingly. The following changes the retention period and collection frequency to provide longer time periods of data:
execute dbms_workload_repository.modify_snapshot_settings(
  interval => 60,      -- minutes; current value retained if NULL
  retention => 43200); -- minutes (= 30 days); current value retained if NULL
In this example the retention period is specified as 30 days (43,200 minutes) and the interval between each snapshot is 60 minutes. It seems that STATSPACK is not needed any more!
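Because longer retention grows the underlying WRH$ tables, you may need to give SYSAUX more room. A minimal sketch (the datafile path and size are illustrative assumptions):
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/sysaux01.dbf' RESIZE 2G;
Enabling AUTOEXTEND on the SYSAUX datafile is an alternative.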
Workload Repository Reports
Oracle provides two scripts to produce workload repository reports (awrrpt.sql and awrrpti.sql). They are similar in format to the STATSPACK reports and give the option of HTML or plain-text formats. The two reports give essentially the same output, but awrrpti.sql allows you to select a single instance. The reports can be generated as follows:
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
The scripts prompt you to enter the report format (HTML or text), the start snapshot ID, the end snapshot ID, and the report filename. The script behaves like STATSPACK's: it lists all the AWR snapshots available and asks for two specific ones as interval boundaries. It produces two types of output: text format, similar to that of the STATSPACK report but drawn from the AWR repository, and the default HTML format, complete with hyperlinks to sections and subsections, providing quite a user-friendly report. Run the script and take a look at the report to get an idea of the capabilities of the AWR.
If you want to explore the AWR repository, feel free to do so. The AWR consists of a number of tables owned by the SYS schema and stored in the SYSAUX tablespace. All AWR table names start with the prefix WR, followed by a mnemonic that identifies the type designation of the table, followed by a dollar sign ($). AWR tables come with three different type designations:
- Metadata (WRM$)
- Historical data (WRH$)
- AWR tables related to advisor functions (WRI$)
Most of the AWR table names are pretty self-explanatory, such as WRM$_SNAPSHOT or WRH$_ACTIVE_SESSION_HISTORY.
Oracle Database 10g also offers several DBA views that allow you to query the AWR repository. The view names all start with DBA_HIST, followed by a name that describes the content, such as DBA_HIST_FILESTATS, DBA_HIST_DATAFILE, or DBA_HIST_SNAPSHOT. The AWR history tables capture a lot more information than STATSPACK, including tablespace usage, filesystem usage, and even operating system statistics. A complete list of these views can be obtained from the data dictionary through:
select view_name from user_views
where view_name like 'DBA\_HIST\_%' escape '\';
Snapshots
You can create snapshots manually using:
EXEC
dbms_workload_repository.create_snapshot;
You can see what snapshots are currently in the AWR by using the
DBA_HIST_SNAPSHOT view as seen in this example:
SELECT snap_id,
to_char(begin_interval_time,'dd/MON/yy hh24:mi') Begin_Interval,
to_char(end_interval_time,'dd/MON/yy hh24:mi') End_Interval
FROM dba_hist_snapshot
ORDER BY 1;
SNAP_ID BEGIN_INTERVAL  END_INTERVAL
------- --------------- ---------------
    954 30/NOV/05 03:01 30/NOV/05 04:00
    955 30/NOV/05 04:00 30/NOV/05 05:00
    956 30/NOV/05 05:00 30/NOV/05 06:00
    957 30/NOV/05 06:00 30/NOV/05 07:00
    958 30/NOV/05 07:00 30/NOV/05 08:00
    959 30/NOV/05 08:00 30/NOV/05 09:00
Each snapshot is assigned a unique snapshot ID that is reflected in
the SNAP_ID column. The END_INTERVAL_TIME column displays the time
that the actual snapshot was taken. Sometimes you might want to drop
snapshots manually. The dbms_workload_repository.drop_snapshot_range
procedure can be used to remove a range of snapshots from the AWR.
This procedure takes two parameters, low_snap_id and high_snap_id,
as seen in this example:
EXEC
dbms_workload_repository.drop_snapshot_range(low_snap_id=>1107,
high_snap_id=>1108);
AWR Automated Snapshots
Closely related to AWR is the scheduled job GATHER_STATS_JOB, which collects the optimizer statistics that many AWR-based analyses rely on. (The AWR snapshots themselves are taken by the MMON process, as described above.) This job is created and enabled automatically when you create a new Oracle database under Oracle Database 10g. To see this job, use the DBA_SCHEDULER_JOBS view as seen in this example:
SELECT a.job_name,
a.enabled, c.window_name, c.schedule_name, c.start_date,
c.repeat_interval
FROM dba_scheduler_jobs a,
dba_scheduler_wingroup_members b, dba_scheduler_windows c
WHERE
job_name='GATHER_STATS_JOB'
And
a.schedule_name=b.window_group_name
And
b.window_name=c.window_name;
You can disable this job using the dbms_scheduler.disable
procedure as seen in this example:
Exec
dbms_scheduler.disable('GATHER_STATS_JOB');
And you can enable the job using the dbms_scheduler.enable
procedure as seen in this example:
Exec
dbms_scheduler.enable('GATHER_STATS_JOB');
Active Session History (ASH)
By now you must understand how important and useful the Automatic
Workload Repository (AWR) is. (Please read up on AWR if you need
to.)
As a recap, AWR captures workload-related performance data at the
user and system levels, including performance statistics by
different dimensions, metrics, OS statistics, and ASH data at
regular predetermined intervals.
Active Session History (ASH) is the history of the activity of all recent active sessions, captured efficiently through a circular buffer in memory and written to AWR so as to incur minimal overhead. The ASH data can be rolled up by different dimensions: top SQL, object, file, session, module, action, and so on.
However, most DBAs are commonly asked to diagnose transient
performance problems. To diagnose such problems, Oracle Database 10g
Release 2 introduces the ASH report.
The ASH report can be used to target the entire database or a
particular session, SQL_ID, module, action, or a combination of
these dimensions.
In most cases ASH is more than good enough to identify problems: it is always on, and there is no need to try to reproduce a problem, because it has already been recorded.
Remember that ASH is licensed as part of the Diagnostics Pack and is on by default.
ASH samples active sessions every second; it is like running "select * from v$session_wait" continuously, without having to issue any SQL.
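To peek at the raw samples behind the report, a minimal sketch against the ASH view (the inline view simply limits the output to the ten most recent samples):
SELECT * FROM (
  SELECT sample_time, session_id, session_state, event
  FROM v$active_session_history
  ORDER BY sample_time DESC)
WHERE rownum <= 10;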
One way to access the ASH report is from the Database page. Choosing
the Performance tab will generate a screen similar to the following
[Screenshot: Database Performance page]
Note the button labeled "Run ASH Report." Clicking on it brings up the Active Session History report:
[Screenshot: Run ASH Report page]
This screen allows you to enter the start and end times of the period in which you're interested. Enter the date and time as needed and press the "Generate Report" button on the upper right. By default the date and time span a 5-minute interval. After you click the button, you will see the ASH report for that period. If you look carefully, you will see that the report resembles the STATSPACK report; but since it comes from AWR data, the metrics in it are much more useful. A small portion of the screen is shown below:
[Screenshot: portion of the ASH report]
You can save the report to a file for later viewing by
pressing the button "Save to File."
Note the links in the section "ASH Report." Here you can see the different types of available performance-related statistics and metrics at a glance. For instance, you can see the Top Events during the period just by clicking on that link. If performance issues came up within the period, this information helps substantially. You can generally identify the bottlenecks that caused transient spikes by looking at skews along the various dimensions listed in the ASH report.
Remember, this report is pulled from data collected by AWR or
from in-memory buffers as appropriate; hence, if you want to
diagnose a performance issue that occurred earlier, you could
simply fire up the ASH report for that period and see any
issues that might have surfaced then.
The ASH report can also be run from the command line, by running the Oracle-supplied SQL scripts located in $ORACLE_HOME/rdbms/admin:
- awrrpt.sql, awrrpti.sql: Generate a report for a pair of snapshot IDs. awrrpti.sql allows you to pick a database; awrrpt.sql defaults to the current database.
- awrddrpt.sql, awrddrpi.sql: Generate a comparison report between two AWR periods. Can be run for a single database (compare prod at 2pm to prod at 4pm) or for two different databases (compare prod at 2pm to test at 2pm).
- awrsqrpt.sql, awrsqrpi.sql: Generate a report for a single SQL statement over multiple snapshots, reporting on the plans used and the resources consumed.
- awrinfo.sql (must be connected as SYS): Reviews general AWR information: tablespace usage, schema/option usage, segment usage, snapshot information, retention policies, and so on.
- awrextr.sql, awrload.sql (must be connected as SYS): Data Pump export/import of AWR data, so you can move it to another system for analysis or to maintain a longer history.
- awrgrpt.sql, awrgrpti.sql: Generate RAC-aware reports, either for specific instances or for all available instances.
- awrgdrpt.sql, awrgdrpi.sql: Generate RAC-aware time-period comparison reports, within a set of database instances or across databases.
- ashrpt.sql, ashrpti.sql: Generate an ASH report; the "i" version can filter by session, SQL ID, wait class, database service, module/action/client ID, or PL/SQL entry point. Both show major waits, top services/modules, top clients, SQL activity (parsing, executions, etc.), top SQL, and more.
Imagine you are a DBA on a production system and get an emergency call like "The database is dead slow!". You are supposed to spot the cause as soon as possible. This is where ASH kicks in: the wait events of active sessions are sampled every second into the ASH buffer. It is accessed most comfortably with the Enterprise Manager GUI from the Performance page (the "Run ASH Report" button there), or with little effort from the command line like this:
@$ORACLE_HOME/rdbms/admin/ashrpt.sql
Specify what type of report you would like (text or HTML); the script will then request a beginning time for the report. You can then review your ASH report.
The first section will look similar for all databases:
DB Name | DB Id | Instance | Inst num | Release | RAC | Host
FGTST1 | 2330122768 | FGTST1 | 1 | 11.2.0.3.0 | NO | fgdb1t

CPUs | SGA Size | Buffer Cache | Shared Pool | ASH Buffer Size
8 | 2,039M (100%) | 192M (9.4%) | 960M (47.1%) | 16.0M (0.8%)

Sample Time | Data Source
Analysis Begin Time: 04-Oct-13 10:02:32 | V$ACTIVE_SESSION_HISTORY
Analysis End Time: 04-Oct-13 11:02:32 | V$ACTIVE_SESSION_HISTORY
Elapsed Time: 60.0 (mins)
Sample Count: 4,415
Average Active Sessions: 1.23
Avg. Active Session per CPU: 0.15
Report Target: None specified
The Top User Events are a solid place to look first:
Top User Events
Event | Event Class | % Event | Avg Active Sessions
CPU + Wait for CPU | CPU | 92.98 | 1.14
db file sequential read | User I/O | 1.59 | 0.02
log file sync | Commit | 1.13 | 0.01
Top Background Events
Event | Event Class | % Activity | Avg Active Sessions
CPU + Wait for CPU | CPU | 1.49 | 0.02
log file parallel write | System I/O | 1.25 | 0.02
Top Event P1/P2/P3 Values
Event | % Event | P1 Value, P2 Value, P3 Value | % Activity | Parameter 1 | Parameter 2 | Parameter 3
db file sequential read | 1.59 | "5","1013701","1" | 0.02 | file# | block# | blocks
log file parallel write | 1.25 | "2","38","2" | 0.14 | files | blocks | requests
log file sync | 1.13 | "1439","1575570377","0" | 0.02 | buffer# | sync scn | NOT DEFINED
The query to inspect the object behind those file# and block# values would look like this:
select owner, segment_name, segment_type
from dba_extents
where file_id = &p1
and &p2 between block_id and block_id + blocks - &p3;

OWNER  SEGMENT_NAME  SEGMENT_TYPE
---------------------------------
USER1  PTBL1         TABLE
The report will also tell us the types of commands utilizing the database most:
Top SQL Command Types
- 'Distinct SQLIDs' is the count of the distinct SQL IDs with the given SQL command type found over all the ASH samples in the analysis period
SQL Command Type | Distinct SQLIDs | % Activity | Avg Active Sessions
UPDATE | 3 | 88.24 | 1.08
SELECT | 28 | 6.34 | 0.08
And here are the exact statements that were in the "top." Note that the report breaks activity down by percentage of wait type per statement; if a statement falls under more than one wait, it shows each wait with its percentage:
Top SQL with Top Events
SQL ID | Planhash | Sampled # of Executions | % Activity | Event | % Event | Top Row Source | % RwSrc | SQL Text
fb69fu8w7argp | 240554356 | 2 | 88.11 | CPU + Wait for CPU | 88.11 | HASH JOIN | 87.70 | UPDATE ACCOUNT SET ISCOMPLETE ...
3c0dfgruf2sdj | 1317339463 | 1 | 2.76 | CPU + Wait for CPU | 2.51 | SORT - AGGREGATE | 1.52 | /* SQL Analyze(2) */ select /*...
dqtbktv8s7g6k | 3033207482 | 56 | 1.29 | db file sequential read | 1.18 | INDEX - RANGE SCAN | 1.13 | SELECT (SELECT COUNT(1) FROM T...
Another valuable area is PL/SQL subprograms:
PLSQL Entry Subprogram / PLSQL Current Subprogram | % Activity / % Current
USER1.PROC_d | 5.61
USER1.PROC_h | 5.61
USER2.PROC_b | 3.39
USER1.TRIGGER_a | 1.69
Useful queries based on ASH
--------------------------------------------
-- Top 10 CPU consumers in
last 5 minutes
--------------------------------------------
select *
from (select session_id,
session_serial#, count(*)
from v$active_session_history
where session_state= 'ON CPU'
and sample_time > sysdate - interval '5' minute
group by session_id, session_serial#
order by count(*) desc
)
where rownum <= 10;
SESSION_ID SESSION_SERIAL#   COUNT(*)
---------- --------------- ----------
         3               1          3
--------------------------------------------
-- Top 10 waiting sessions in
last 5 minutes
--------------------------------------------
select *
from (select session_id,
session_serial#,count(*)
from v$active_session_history
where session_state='WAITING'
and sample_time > sysdate - interval '5' minute
group by session_id, session_serial#
order by count(*) desc
)
where rownum <= 10;
These two queries should spot the most incriminating sessions of the last 5 minutes. But who is that, and what SQL was running?
--------------------
-- Who is that SID?
--------------------
set lines 200
col username for a10
col osuser for a10
col machine for a10
col program for a10
col resource_consumer_group
for a10
col client_info for a10
select serial#, username,
osuser, machine, program, resource_consumer_group, client_info
from v$session
where sid=&sid;
-------------------------
-- What did that SID do?
-------------------------
select distinct sql_id,
session_serial#
from v$active_session_history
where sample_time >
sysdate - interval '5' minute
and
session_id=&sid;
----------------------------------------------
-- Retrieve the SQL from the
Library Cache:
----------------------------------------------
col sql_text for a80
select sql_text from v$sql where sql_id='&sqlid';
----------------------------------------------
-- ASH:Top SQL Over Last 5 Minutes
----------------------------------------------
select NVL(sql_id,'NULL') as sql_id,
       count(*) as DB_time,
       ROUND(100*count(*) / SUM(count(*)) OVER (), 2) as Pct_load
from v$active_session_history
where sample_time > sysdate - 5/24/60
and session_type <> 'BACKGROUND'
group by sql_id
order by count(*) desc;
SQL_ID         DB_TIME  PCT_LOAD
------------- -------- ---------
88v077cs94gak      136     43.17
4xvts5kvsf1w8       89     28.25
8pcw7z5vvhfj0        7      2.22
dbm33sd7kv9s3        5      1.59
572fbaj0fdw2b        5      1.59
----------------------------------------------
-- ASH: Top *anything* SQL
----------------------------------------------
select
ash.sql_id,
(select distinct decode( count(distinct sql_text) over (),
1, substr(sql_text,1,80)||'...',
'more than one sql matched' )
from v$sql
where sql_id = ash.sql_id) sql,
count(*)
from
v$active_session_history ash, v$event_name evt
where ash.sample_time
> sysdate - 1/24
and
ash.session_state = 'WAITING'
and ash.event_id
= evt.event_id
and
evt.wait_class = 'User I/O'
group by sql_id
order by count(*) desc;
--------------------------------------------------------------------------------------------
-- ASH: SQL by total CPU plus wait time, broken down by CPU, IO wait and non-IO wait
--------------------------------------------------------------------------------------------
select sql_id,
(select distinct decode( count(distinct sql_text) over (),
1, substr(sql_text,1,80)||'...', 'more than one sql matched' )
from v$sql
where sql_id = ash.sql_id) sql,
cpu, non_io_wait, io_wait
from (
select ash.sql_id,
sum(case when ash.session_state = 'ON CPU' then 1 else 0 end )
cpu,
sum(case when ash.session_state='WAITING' and
ash.event#<>ev.user_io then 1 else 0 end) non_io_wait,
sum(case when ash.session_state = 'WAITING' and ash.event# =
ev.user_io then 1 else 0 end ) io_wait
from
v$active_session_history ash,
(select event# user_io from v$event_name where wait_class = 'User
I/O') ev
group by ash.sql_id
) ash
order by
cpu+non_io_wait+io_wait desc;
--------------------------------------------------------------------------------------------
-- ASH: Look at the last hour; for each event, find the min/max sample times,
-- add up the wait time for that event, and report the largest waits first, by event
--------------------------------------------------------------------------------------------
select ash.event,
min(sample_time) start_time,
max(sample_time)-min(sample_time) duration,
round(sum(ash.wait_time+ash.time_waited)/1000000,2) wait_secs
from
v$active_session_history ash
where ash.sample_time
between
systimestamp-numtodsinterval(1,'hour') and systimestamp
group by ash.event
order by wait_secs desc;
--------------------------------------------------------------------------------------------
-- ASH: Look at the last hour; for each event, find the min/max sample times,
-- add up the wait time for that event, and report the largest waits first, by session
--------------------------------------------------------------------------------------------
select ash.session_id,
au.username,
round(sum(ash.wait_time+ash.time_waited)/1000000,2) wait_secs
from
v$active_session_history ash,
all_users au
where ash.sample_time
>= systimestamp-numtodsinterval(1,'hour')
and ash.user_id =
au.user_id
group by ash.session_id,
au.username
order by wait_secs DESC;
--------------------------------------------------------------------------------------------
-- ASH: Look at the last hour; for a given session, add up the wait time
-- by SQL statement and report the largest waits first
--------------------------------------------------------------------------------------------
select (select distinct
decode( count(distinct sql_text) over (),
1, substr(sql_text,1,80)||'...', 'more than one sql matched' )
from v$sql
where sql_id = ash.sql_id) sql,
round(sum(ash.wait_time+ash.time_waited)/1000000,2) wait_secs
from
v$active_session_history ash
where ash.session_id =
1065
group by ash.sql_id
order by wait_secs DESC;
Automatic Database Diagnostic Monitor (ADDM)
ADDM analyzes the information captured in the Automatic Workload Repository (AWR) after each snapshot (every 60 minutes by default) to pinpoint problems and provide automated recommendations to DBAs. If ADDM requires additional information to make a decision, it activates other advisors to gather it. ADDM's output includes a plethora of reports, charts, graphs, and related visual aids. For example, ADDM identifies the most resource-intensive SQL statements and passes them to the SQL Tuning Advisor. It promises that you can forget all of your scripts that join the many v$ views. ADDM can be run from Enterprise Manager or through a PL/SQL interface. If a recommendation is made, it reports the benefits that can be expected, again in terms of time. ADDM can then trigger automatic reconfiguration through the Automatic Storage Management (ASM) and Automatic Shared Memory Management (ASMM) components. ADDM automatically detects and diagnoses common performance problems, including:
* Hardware issues related to excessive I/O
* CPU bottlenecks
* Connection management issues
* Excessive parsing
* Concurrency issues, such as contention for
locks
* PGA, buffer-cache, and log-buffer-sizing issues
* Issues specific to Oracle Real Application
Clusters (RAC) deployments, such as global cache hot blocks and
objects and interconnect latency issues
Because ADDM runs automatically after each new AWR snapshot is
taken, no manual steps are required to generate its findings. But
you can run ADDM on demand by creating a new snapshot manually, by
using either Oracle Enterprise Manager (OEM) or the command-line
interface. The following shows creation of a snapshot from the
command line:
exec
dbms_workload_repository.create_snapshot();
or
exec
dbms_workload_repository.create_snapshot('TYPICAL');
You can also generate an ADDM report that summarizes performance data and provides a list of all findings and recommendations.
You can access ADDM reports through the Web-based OEM console or
from a SQL*Plus command line by using the new DBMS_ADVISOR built-in
package. For example, here's how to use the command line to create
an ADDM report quickly (based on the most recent snapshot):
spool ADDMsuggestions.txt
set long 1000000
set pagesize 50000
column get_clob format a80
select dbms_advisor.get_task_report(task_name, 'TEXT', 'ALL') as ADDM_report
from dba_advisor_tasks
where task_id = (select max(t.task_id)
                 from dba_advisor_tasks t, dba_advisor_log l
                 where t.task_id = l.task_id
                 and t.advisor_name = 'ADDM'
                 and l.status = 'COMPLETED');
spool off
The ALL parameter generates additional information about the meaning
of some of the elements in the report.
The easiest way to get the ADDM report is by executing:
@?/rdbms/admin/addmrpt
Running this script shows which snapshots have been generated, asks for the snapshot IDs to be used for generating the report, and generates a report containing the ADDM findings.
When you do not want to use the script, you need to submit and
execute the ADDM task manually. First, query DBA_HIST_SNAPSHOT to
see which snapshots have been created. These snapshots will be used
by ADDM to generate recommendations:
SELECT * FROM dba_hist_snapshot ORDER BY
snap_id;
Note the two snapshot IDs (such as the lowest and highest ones) for use in generating recommendations.
Next, you need to submit and execute the ADDM task manually, using a
script similar to:
DECLARE
task_name VARCHAR2(30) :=
'SCOTT_ADDM';
task_desc VARCHAR2(30) :=
'ADDM Feature Test';
task_id NUMBER;
BEGIN
dbms_advisor.create_task('ADDM', task_id, task_name,
task_desc,null); -- (1)
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'START_SNAPSHOT',
1); -- (2)
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'END_SNAPSHOT', 3);
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'INSTANCE', 1);
dbms_advisor.set_task_parameter('SCOTT_ADDM', 'DB_ID', 494687018);
dbms_advisor.execute_task('SCOTT_ADDM');
-- (3)
END;
/
Here is an explanation of the steps you need to take to successfully
execute an ADDM job:
1) The first step is to create the task. For this, you need to specify the name under which the task will be known in the ADDM task system. Along with the name you can provide a more readable description of what the job should do. The task type must be 'ADDM' in order to have it executed in the ADDM environment.
2) After having defined the ADDM task, you must define the
boundaries within which the task needs to be executed. For this you
need to set the starting and ending snapshot IDs, instance ID
(especially necessary when running in a RAC environment), and
database ID for the newly created job.
3) Finally, the task must be executed.
When querying DBA_ADVISOR_TASKS you will see the just-created job:
SELECT * FROM dba_advisor_tasks;
When the job has completed successfully, examine the recommendations made by ADDM by calling the DBMS_ADVISOR.GET_TASK_REPORT() routine, as in:
SET LONG 1000000 PAGESIZE 0 LONGCHUNKSIZE
1000
COLUMN
get_clob FORMAT a80
SELECT
dbms_advisor.get_task_report('SCOTT_ADDM', 'TEXT', 'TYPICAL')
FROM
sys.dual;
The recommendations supplied should be sufficient to investigate the performance issue.
To see the ADDM recommendations and the AWR repository data, use the
new Enterprise Manager 10g console on the page named DB Home. To see
the AWR reports, you can navigate to them from Administration, then
Workload Repository, and then Snapshots. We'll examine ADDM in
greater detail in a future installment.
Script to display the most recent ADDM report
set long 1000000
set pagesize 50000
column get_clob format a80
select
dbms_advisor.get_task_report(task_name) as ADDM_report
from
dba_advisor_tasks
where task_id = (
select
max(t.task_id)
from
dba_advisor_tasks t, dba_advisor_log l
where
t.task_id = l.task_id
and t.advisor_name = 'ADDM'
and l.status = 'COMPLETED');
Using OEM
To perform a diagnosis in 10g, you would choose the snapshots in the
relevant interval for further drill-down analysis. In Enterprise
Manager 10g, from the Database home page, you would choose "Advisor
Central" and then click on the "ADDM" link, which brings up a page
similar to Figure 1.
[Figure 1: ADDM snapshot-selection page]
In this page, you can create tasks to be analyzed by ADDM. You know
that the performance problems occurred around 11PM, so choose the
snapshots that fall in that range, indicated by "Period Start" and
"Period End" values. You can also click on the camera icons to
indicate start and stop snapshot intervals, as shown in red ellipses
here. After choosing the interval, press the "OK" button, which
brings up a page similar to that shown in Figure 2.
[Figure 2: ADDM findings and recommendations]
Here ADDM identifies two critical and related performance problems
in the interval: some SQL statements are consuming significant CPU
time, leading to a significant database slowdown. Based on the
findings, ADDM recommends SQL tuning for these statements as
highlighted in the figure.
If you click on the individual findings, ADDM displays more details.
For example, clicking on the problem finding brings up a page
similar to the one shown in Figure 3
[Figure 3: ADDM finding detail]
Here you can see the specific SQL statement causing this problem.
ADDM recommends that you subject this SQL statement to a thorough
analysis by SQL Tuning Advisor, as mentioned in the "Action"
section. You can immediately run the task by clicking on the button
next to it, which will invoke the SQL Tuning Advisor.
In Figure 2, you may have noticed a button named "View Report." In
addition to providing the recommendation in individual web pages,
ADDM can also create plain-text reports for a quicker one-stop
analysis.
The SQL ID can be used for independent analysis in the SQL Tuning
Advisor page in Enterprise Manager or via the command line.
ADDM is invoked after every AWR snapshot is collected, so the
recommendations based on the adjacent snapshots are available for
viewing.
Hence you will not have to create an ADDM task as shown above if the
scope of analysis is just two adjacent snapshots. If you want to
analyze between two snapshots that are not adjacent, you will need
to create the ADDM task.
Keep in mind that this is not all ADDM can do; there are many more analyses and recommendations available for memory management, segment management, redo/undo, and more. Because it would be impossible to describe the full spectrum of ADDM functionality in this single brief section, we'll focus only on SQL Tuning Advisor here. Now let's see how it works.
SQL Tuning Advisor (STA) and SQL Access Advisor
Oracle's latest advisor helps Oracle DBAs with the "fine art" of SQL tuning. In the past, DBAs required extensive tuning experience before they could be described as "expert SQL tuners." Oracle claims to have embedded hundreds of years' worth of tuning experience into the SQL Tuning Advisor.
The SQL Tuning Advisor uses the AWR to capture
and identify high resource consuming SQL statements. An
intelligent analyzer is then used to assist administrators in
tuning the offending SQL statements.
The tuning advisor sends the SQL statement
being analyzed to the Automatic Tuning Optimizer to perform
the following in-depth analysis:
- Statistics Analysis: checks for stale or missing statistics, which may have a detrimental effect on the query's optimization.
- SQL Profiling: reviews past executions of the SQL statement to provide further information for recommendations.
- Access Path Analysis: determines whether additional objects (indexes, materialized views) can be created to improve the statement's performance.
- SQL Structure Analysis: reviews the SQL statement's coding structure to determine whether it can be altered to increase performance.
The Automatic Tuning Advisor uses the
Oracle optimizer to make its recommendations. Unlike run-time
optimization, which focuses on quick optimization, Automatic
Tuning Advisor calls to the optimizer are not limited by time
constraints. As a result, queries tuned by the advisor have a
much better chance of having a finely tuned optimization plan
created.
The SQL Tuning Advisor will be very beneficial
to administrators who support third-party applications. The SQL Tuning Advisor uses the CBO to rewrite the
poorly performing SQL and create a SQL profile, which is
stored in the data dictionary. Each time the poorly performing
SQL statement executes, the rewritten statement stored in the
data dictionary is used in its place. No vendor assistance
required!
The input workload for the SQL Access Advisor can consist of SQL statements currently in the SQL cache, a user-defined set of SQL statements contained in a workload table, or an OEM-generated SQL Tuning Set. The SQL Access Advisor is also able to generate a hypothetical workload for a specified schema. The utility can be invoked from the Advisor Central home page or from the DBMS_ADVISOR package.
As you might expect, this "thinking" consumes resources such as CPU; hence the SQL Tuning Advisor works on SQL statements in a tuning mode, which can be run during off-peak times. The mode is indicated by setting the SCOPE and TIME parameters when creating the tuning task. It's good practice to run tuning mode during a low-activity period in the database so that regular users are relatively unaffected, leaving the analysis for later.
Access the SQL Tuning Advisor using OEM (Oracle Enterprise
Manager) 10g. Start OEM from your Web browser using a URL
similar to this:
http://servername.com:5501/em/
When you start OEM, you need to provide a username and
password for the database and server you're monitoring. For
database access, you need to create a user with DBA
privileges if you haven't yet created one.
Once you've logged into OEM, you typically start at the Database Control
Screen. To get to the SQL Tuning Advisor,
navigate to Performance,
then to Advisor
Central, and finally to the SQL Tuning Advisor
window.
Each time you want to tune a SQL statement, just follow this
three-step process:
- Create a new SQL tuning task using the function CREATE_TUNING_TASK
- Execute the SQL tuning task using the procedure EXECUTE_TUNING_TASK
- Review the tuning recommendations using the function REPORT_TUNING_TASK
Of course, considering the simplicity of this
example, you would have reached the conclusion via manual
examination as well. However, imagine how useful this tool would
be for more complex queries where a manual examination may not be
possible or is impractical.
1-Basic-Level Tuning
In order to access the SQL Tuning Advisor API, a user must be granted the ADVISOR privilege:
CONN sys/password AS
SYSDBA
GRANT ADVISOR TO
scott;
CONN scott/tiger;
The concept is best explained through an example. Take the
case of the query that the developer brought to your
attention, shown below.
select account_no from accounts where old_account_no = 11;
The first step when using the SQL
tuning advisor is to create a new tuning task using the
CREATE_TUNING_TASK function. The statements to be analyzed
can be retrieved from the Automatic Workload Repository
(AWR), the cursor cache, an SQL tuning set or specified
manually. There are two ways to fire up the advisor: using Enterprise Manager or the plain command line. Let's see how to use the command line. We invoke the advisor by calling the supplied package dbms_sqltune.
declare
  l_task_id varchar2(30);
  l_sql varchar2(2000);
begin
  l_sql := 'select account_no from accounts where old_account_no = 11';
  dbms_sqltune.drop_tuning_task ('FOLIO_COUNT');
  l_task_id := dbms_sqltune.create_tuning_task (
    sql_text => l_sql,
    user_name => 'SCOTT',
    scope => 'COMPREHENSIVE',
    time_limit => 120,
    task_name => 'FOLIO_COUNT');
  dbms_output.put_line('task id: ' || l_task_id);
  dbms_sqltune.execute_tuning_task ('FOLIO_COUNT');
end;
/
This package creates and executes a
tuning task named FOLIO_COUNT. Next, you will need
to see the results of the execution of the task (that is,
see the recommendations).
set serveroutput on size 999999
set long 999999
spool recommendations.txt
select dbms_sqltune.report_tuning_task ('FOLIO_COUNT') from dual;
spool off;
Look at the output recommendations
carefully; the advisor says you can improve performance by
creating an index on the column OLD_ACCOUNT_NO. Even better,
the advisor calculated the cost of the query if the index
were created, making the potential savings more definable
and concrete.
Here is a full example with all
the ways to analyze queries, from the
Automatic Workload Repository (AWR), the cursor
cache, an SQL tuning set or specified manually:
SET SERVEROUTPUT ON
------------------------------------------------------------------------------
-- Tuning task created for a Specific SQL
------------------------------------------------------------------------------
declare
  l_task_id varchar2(30);
  l_sql varchar2(2000);
begin
  l_sql := 'select account_no from accounts where old_account_no = 11';
  dbms_sqltune.drop_tuning_task ('TUNE_ACCOUNT');
  l_task_id := dbms_sqltune.create_tuning_task (
    sql_text => l_sql,
    user_name => 'SCOTT',
    scope => 'COMPREHENSIVE',
    time_limit => 120,
    task_name => 'TUNE_ACCOUNT');
  dbms_output.put_line('task id: ' || l_task_id);
  dbms_sqltune.execute_tuning_task ('TUNE_ACCOUNT');
end;
/
------------------------------------------------------------------------------
-- Tuning task created for a Specific SQL sending parameters
------------------------------------------------------------------------------
DECLARE
l_sql VARCHAR2(500);
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql := 'SELECT e.*, d.* FROM emp e , dept d WHERE e.deptno = d.deptno AND NVL(empno, ''0'') = :empno';
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_text => l_sql,
bind_list => sql_binds(anydata.ConvertNumber(100)),
user_name => 'SCOTT',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'EMP_DEPT_TUNING_TASK',
description => 'Tuning task for an EMP to DEPT join query.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
DBMS_SQLTUNE.execute_tuning_task('EMP_DEPT_TUNING_TASK');
END;
/
------------------------------------------------------------------------------
-- Tuning task created for a specific statement from the AWR.
------------------------------------------------------------------------------
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
begin_snap => 764,
end_snap => 938,
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_AWR_TUNING_TASK',
description => 'Tuning task for statement 19v5guvsgcd1v in AWR.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
DBMS_SQLTUNE.execute_tuning_task('19v5guvsgcd1v_AWR_TUNING_TASK');
END;
/
------------------------------------------------------------------------------
-- Tuning task created for a specific statement from the cursor cache.
------------------------------------------------------------------------------
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_TUNING_TASK',
description => 'Tuning task for statement 19v5guvsgcd1v in Memory');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
DBMS_SQLTUNE.execute_tuning_task('19v5guvsgcd1v_TUNING_TASK');
END;
/
------------------------------------------------------------------------------
-- Tuning task created from an SQL tuning set.
------------------------------------------------------------------------------
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sqlset_name => 'test_sql_tuning_set',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'SQLSET_TUNING_TASK',
description => 'Tuning task for an SQL tuning set.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
DBMS_SQLTUNE.execute_tuning_task('SQLSET_TUNING_TASK');
END;
/
If the TASK_NAME parameter is specified, its value is returned as the SQL tune task identifier; if omitted, a system-generated name like "TASK_1478" is returned. If the SCOPE parameter is set to scope_limited, the SQL profiling analysis is omitted. The TIME_LIMIT parameter simply restricts the time the optimizer can spend compiling the recommendations.
The following examples reference the last tuning task, as it has no external dependencies other than the SCOTT schema. The NVL in the SQL statement was put in to provoke a reaction from the optimizer. In addition, we can delete the statistics from one of the tables to provoke it even more:
EXEC
DBMS_STATS.delete_table_stats('SCOTT','EMP');
With the tuning task defined, the next step is to execute it using the EXECUTE_TUNING_TASK procedure:
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'TUNE_ACCOUNT');
During the execution phase you may wish to pause and restart the task, cancel it, or reset it so that it can be re-executed:
-- Interrupt and resume a tuning task.
EXEC DBMS_SQLTUNE.interrupt_tuning_task (task_name => 'TUNE_ACCOUNT');
EXEC DBMS_SQLTUNE.resume_tuning_task (task_name => 'TUNE_ACCOUNT');
-- Cancel a tuning task.
EXEC DBMS_SQLTUNE.cancel_tuning_task (task_name => 'TUNE_ACCOUNT');
-- Reset a tuning task, allowing it to be re-executed.
EXEC DBMS_SQLTUNE.reset_tuning_task (task_name => 'TUNE_ACCOUNT');
The status of the tuning task can be monitored using the
DBA_ADVISOR_LOG view:
SELECT task_name,
status FROM dba_advisor_log WHERE owner = 'SCOTT';
TASK_NAME            STATUS
-------------------- -----------
emp_dept_tuning_task COMPLETED
Once the tuning task has executed successfully the
recommendations can be displayed using the
REPORT_TUNING_TASK function:
set serveroutput on size
999999
set long 999999
SET PAGESIZE 1000
SET LINESIZE 200
spool recommendations.txt
select dbms_sqltune.report_tuning_task ('TUNE_ACCOUNT') AS recommendations from dual;
spool off;
set pagesize 24
2-Intermediate-Level Tuning: Query Restructuring
Suppose the query is a little bit more complex:
select account_no from accounts a
where account_name = 'HARRY'
and sub_account_name not in
( select account_name from accounts
where account_no = a.old_account_no and status is not null);
The advisor recommends the following:
1- Restructure SQL finding (see plan 1 in explain plans section)
----------------------------------------------------------------
The optimizer could not unnest the subquery at line ID 1 of the execution plan.
Recommendation
--------------
Consider replacing "NOT IN" with "NOT EXISTS" or ensure that columns used
on both sides of the "NOT IN" operator are declared "NOT NULL" by adding
either "NOT NULL" constraints or "IS NOT NULL" predicates.
Rationale
---------
A "FILTER" operation can be very expensive because it evaluates the
subquery for each row in the parent query. The subquery, when unnested can
drastically improve the execution time because the "FILTER" operation is
converted into a join. Be aware that "NOT IN" and "NOT EXISTS" might
produce different results for "NULL" values.
This time the advisor did not recommend any structural changes such as indexes, but rather intelligently guessed the right way to tune the query: replacing NOT IN with NOT EXISTS. Because the two constructs are similar but not identical, the advisor gives the rationale for the change and leaves it to the DBA or application developer to decide whether the recommendation is valid for the environment.
3-Advanced Tuning: SQL Profiles
As you may know, the optimizer
decides on a query execution plan by examining the
statistics present on the objects referenced in the query
and then calculating the least-cost method. If a query
involves more than one table, which is typical, the
optimizer calculates the least-cost option by examining
the statistics of all the referenced objects—but it does
not know the relationship among them.
For example, assume that an
account with status DELINQUENT will have less
than $1,000 as balance. A query that joins the tables
ACCOUNTS and BALANCES will report fewer rows if the
predicate has a clause filtering for DELINQUENT
only. The optimizer does not know this complex
relationship—but the advisor does; it "assembles" this
relationship from the data and stores it in the form of
a SQL Profile. With access to the SQL Profile, the
optimizer not only knows the data distribution of
tables, but also the data correlations among them. This
additional information allows the optimizer to generate
a superior execution plan, thereby resulting in a
well-tuned query.
SQL Profiles obviate the need for
tuning SQL statements by manually adding query hints to
the code. Consequently, the SQL Tuning Advisor makes it
possible to tune packaged applications without modifying
code, a tremendous benefit. The main point here is that, unlike object statistics, a SQL Profile is mapped to a query, not an object or objects. Another query involving the same two tables, ACCOUNTS and BALANCES, may have a different profile. Using this metadata information on the query,
Oracle can improve performance. If a profile can be created, it is done
during the SQL Tuning Advisor session, where the advisor
generates the profile and recommends that you "Accept" it.
Unless a profile is accepted, it's not tied to a
statement. You can accept the profile at any time by
issuing a statement such as the following:
begin
  dbms_sqltune.accept_sql_profile (
    task_name => 'FOLIO_COUNT',
    name => 'FOLIO_COUNT_PROFILE',
    description => 'Folio Count Profile',
    category => 'FOLIO_COUNT');
end;
/
This command ties the profile
named FOLIO_COUNT_PROFILE generated earlier by
the advisor to the statement associated with the tuning
task named FOLIO_COUNT described in the
earlier example. (Note that although only the advisor,
not the DBA, can create a SQL Profile, only you can
decide when to use it.)
You can see created SQL Profiles
in the dictionary view DBA_SQL_PROFILES. The
column SQL_TEXT shows the SQL statement the profile was
assigned to; the column STATUS indicates if the profile
is enabled. (Even if it is already tied to a statement,
the profile must be enabled in order to affect the
execution plan.)
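A short sketch for inspecting and managing profiles (DBMS_SQLTUNE.alter_sql_profile and drop_sql_profile are the management calls; the profile name is from the example above):
SELECT name, status, sql_text FROM dba_sql_profiles;
EXEC DBMS_SQLTUNE.alter_sql_profile(name => 'FOLIO_COUNT_PROFILE', attribute_name => 'STATUS', value => 'DISABLED');
DBMS_SQLTUNE.drop_sql_profile removes a profile entirely.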
Now, let's say that you want to know how much of the recommendation-generation work has been completed. If you are using the command-line version of the SQL Access Advisor rather than Oracle Enterprise Manager, can you still see how far along it is? Yes, using the new view V$ADVISOR_PROGRESS:
desc v$advisor_progress
Name              Null?    Type
----------------- -------- ------------
SID                        NUMBER
SERIAL#                    NUMBER
USERNAME                   VARCHAR2(30)
OPNAME                     VARCHAR2(64)
ADVISOR_NAME               VARCHAR2(64)
TASK_ID                    NUMBER
TARGET_DESC                VARCHAR2(32)
SOFAR                      NUMBER
TOTALWORK                  NUMBER
UNITS                      VARCHAR2(32)
BENEFIT_SOFAR              NUMBER
BENEFIT_MAX                NUMBER
FINDINGS                   NUMBER
RECOMMENDATIONS            NUMBER
TIME_REMAINING             NUMBER
START_TIME                 DATE
LAST_UPDATE_TIME           DATE
ELAPSED_SECONDS            NUMBER
ADVISOR_METRIC1            NUMBER
METRIC1_DESC               VARCHAR2(64)
Here the columns SOFAR and TOTALWORK show how much work has been done and the total work expected, similar to what you can see in the V$SESSION_LONGOPS view.
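A sketch of polling that view while a task runs (the percentage calculation is ours):
SELECT task_id, sofar, totalwork,
       ROUND(100*sofar/NULLIF(totalwork,0),1) AS pct_done,
       time_remaining
FROM v$advisor_progress;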
Once the tuning session is over the tuning task can be dropped using
the DROP_TUNING_TASK procedure:
BEGIN
DBMS_SQLTUNE.drop_tuning_task (task_name =>
'19v5guvsgcd1v_AWR_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name =>
'19v5guvsgcd1v_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name =>
'sqlset_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name =>
'emp_dept_tuning_task');
END;
/
Using OEM
The previous example was deliberately formulated to illustrate how
to use SQL Tuning Advisor in command-line mode, which is very useful
for scripting these tasks proactively. In most cases, however, you
will need to perform tuning in response to problems reported by an
end user. Enterprise Manager 10g comes in handy in those cases.
Here's how you would use it to diagnose and tune SQL: From the
Database home page, click on the link "Advisor Central" at the
bottom of the screen, which launches the page containing all the
advisors. Next, click on "SQL Tuning Advisor" at the top of the
screen as shown in Figure 4.
[Figure 4: Advisor Central]
You have just launched the SQL Tuning Advisor. Choose "Top SQL" from
the next page as shown in Figure 5.
[Figure 5: Top SQL]
This action launches a page similar to the one shown in Figure 6, where a graph containing the various wait classes is traced along a time dimension.
[Figure 6: wait-class graph]
A gray rectangular area within a red ellipse puts the focus on the
graph. Reposition the rectangle by mouse-dragging it to a location
where the CPU wait is high (as shown in the figure). The lower part
of the page will display the relevant SQL statements in that
interval, as shown in Figure 7.
[Figure 7: SQL statements in the selected interval]
As you can see, the SQL statement shown at the top (enclosed by the
red ellipse) has the highest activity with maximum CPU consumption.
Click on the statement ID to see details of the statement, which
will bring up a screen as shown in Figure 8.
[Figure 8: SQL statement details]
In the figure, you can see the exact SQL statement that caused the
CPU consumption in that time period. You can click on the button
"Run SQL Tuning Advisor" (marked in the figure) to run the advisor.
This brings up a screen similar to the one shown in Figure 9.
[Figure 9: SQL Tuning Advisor scheduler]
In the advisor scheduler, you can determine the type of task and how
much analysis should be done. For example, in the above figure, I
have chosen "comprehensive" analysis and that the advisor is to be
run immediately. After the advisor finishes you can see its
recommendation, as shown in Figure 10.
[Figure 10: advisor recommendations]
This process I just described is similar to what you have seen in
the command-line version; however, the flow is more reflective of a
real-life scenario in which you have reacted to a problem, drilled
down to its cause, and accepted recommendations about how to fix it.
Conclusion
ADDM is a powerful tool that has the "brains" to analyze performance metrics and offer recommendations based on best practices and accepted methodologies professed by seasoned Oracle professionals, all automatically. This functionality can tell the DBA not only what happened and why, but, most importantly, what to do next.
Automatic Shared Memory Management (ASMM)
The system global area (SGA) consists of memory components. A
component represents a pool of memory used to satisfy a particular
class of memory allocation requests. The most commonly configured
memory components include the database buffer cache, shared pool,
large pool, and java pool. Since we fix the values for these
components at instance start time, we are constrained to use them as
they are during the instance runtime (with some exceptions).
Often a component's memory pool goes unused, yet that memory is not available to another component that needs it. Under-sizing can lead to poor performance and out-of-memory errors (ORA-4031), while over-sizing can waste memory.
The ASMM feature enables the Oracle database to automatically
determine the size of each of these memory components within the
limits of the total SGA size. This solves the allocation issues that
we normally face in a manual method. This feature enables us to
specify a total memory amount to be used for all SGA components. The
Oracle Database periodically redistributes memory between the
components above according to workload requirements.
Automatic Shared Memory Management (ASMM) is configured with the sga_target initialization parameter. If you specify a non-zero value for sga_target, the following four memory pools are automatically sized:
- Buffer cache (The Default pool) (db_cache_size)
- Shared pool (shared_pool_size)
- Large pool (large_pool_size)
- Java pool (java_pool_size)
If these automatically tuned memory pools are set to non-zero
values, then those values are used as minimum levels by Automatic
Shared Memory Management.
The following pools are not affected by Automatic Shared Memory
Management:
o Log buffer
o Other buffer caches, such as KEEP, RECYCLE, and other block sizes
o Streams pool (in Release 1 only)
o Fixed SGA and other internal allocations
o The new Automatic Storage Management (ASM) buffer cache, which is
meant for the optional ASM instance
The memory allocated to these pools is deducted from the total
available for SGA_TARGET when Automatic Shared Memory
Management computes the values of the automatically tuned memory
pools.
Note: If you dynamically set SGA_TARGET to zero, the
size of the four auto-tuned shared memory components will remain at
their present levels.
Note: The SGA_MAX_SIZE parameter sets an upper bound
on the value of the SGA_TARGET parameter.
Note: In order to use Automatic Shared Memory Management, you
should make sure that the initialization parameter STATISTICS_LEVEL
is set to TYPICAL or ALL.
sga_target is also a
dynamic parameter and can be changed through Enterprise Manager or
with the ALTER SYSTEM command. However, the sga_target can be
increased only up to the value of sga_max_size. Figure 2.3 shows an
example of SGA components.
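For example, here is a minimal sketch of enabling, bounding, and
disabling ASMM dynamically (the sizes are illustrative, not
recommendations):
-- Enable ASMM: Oracle sizes the four auto-tuned pools within 600M.
ALTER SYSTEM SET sga_target = 600M;
-- Optionally set a floor for one pool; ASMM will not shrink it below this.
ALTER SYSTEM SET shared_pool_size = 100M;
-- Disable ASMM: the auto-tuned pools stay at their current sizes.
ALTER SYSTEM SET sga_target = 0;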

A new background process named Memory Manager (MMAN) manages the
automatic shared memory. MMAN serves as the SGA Memory Broker and
coordinates the sizing of the memory components. The SGA Memory
Broker keeps track of the sizes of the components and pending resize
operations.
Some pools in SGA are not subject to dynamic resizing, and must be
specified explicitly. Notable among them are the buffer pools for
nonstandard block sizes and the non-default ones for KEEP or
RECYCLE. If your database has a block size of 8K, and you want to
configure 2K, 4K, 16K, and 32K block-size pools, you must set them
manually. Their sizes will remain constant; they will not shrink or
expand based on load. You should consider this factor when using
multiple-size buffer, KEEP, and RECYCLE pools. In addition, log
buffer is not subject to the memory adjustment—the value set in the
parameter log_buffer is constant, regardless of the workload.
If statistics_level is set to TYPICAL (the default) or ALL,
statistics are collected automatically. Oracle Database 10g has a
predefined Scheduler job named GATHER_STATS_JOB, which is activated
with the appropriate value of the STATISTICS_LEVEL parameter. The
collection of statistics is
fairly resource-intensive, so you may want to ensure it doesn't
affect regular operation of the database. In 10g, you can do so
automatically: a special resource consumer group named
AUTO_TASK_CONSUMER_GROUP is available predefined for automatically
executed tasks such as gathering of statistics. This consumer group
makes sure that the priority of these stats collection jobs is below
that of the default consumer group, and hence that the risk of
automatic tasks taking over the machine is reduced or eliminated.
What if you want to set the parameter STATISTICS_LEVEL to TYPICAL but
don't want to make the statistics collection automatic? Simple. Just
disable the Scheduler job by issuing the following:
BEGIN
   DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;
/
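To resume automatic collection later, re-enable the job with the
same Scheduler API:
BEGIN
   DBMS_SCHEDULER.ENABLE('GATHER_STATS_JOB');
END;
/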
Go Back to Previous Statistics
One of the complications that can occur
during optimizer statistics collection is changed execution
plans—that is, the old optimization works fine until the
statistics are collected, but thereafter, the queries suddenly go
awry due to bad plans generated by the newly collected statistics.
This is not an infrequent problem. To
protect against such mishaps, the statistics collection saves the
present statistics before gathering the new ones. In the event of
a problem, you can always go back to the old statistics, or at
least examine the differences between them to get a handle on the
problem. For example, let's
imagine that at 10:00PM on May 31 the statistics collection job on
the table REVENUE is run, and that subsequently the queries
perform badly. The old statistics are saved by Oracle, which you
can retrieve by issuing:
begin
   dbms_stats.restore_table_stats (
      'ARUP',
      'REVENUE',
      '31-MAY-04 10.00.00.000000000 PM -04:00');
end;
/
This command restores the statistics as of 10:00PM on May 31,
supplied as a TIMESTAMP value, immediately undoing the changes made
by the new statistics gathering run. The length of the period
that you can restore is determined by the retention parameter. To
check the current retention, use the query:
SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
31
which in this case shows that 31 days' worth of statistics are kept
(retention is not guaranteed). To discover the exact time and date to
which the statistics extend, simply use the query:
SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------
17-MAY-04 03.21.33.594053000 PM -04:00
which reveals that the oldest available statistics date to 3:21PM on
May 17 (matching the timestamp above). You can set the retention period to
a different value by executing a built-in function. For example,
to set it to 45 days, issue:
execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (45)
trcsess Utility (Trace Utility)
When solving tuning problems, session traces are very useful and
offer vital information. Traces are simple and straightforward for
dedicated server sessions, but for shared server sessions, many
processes are involved. The trace pertaining to the user session is
scattered across different trace files belonging to different
processes. This makes it difficult to get a complete picture of the
life cycle of a session.
Now there is a new tool or command line utility to help read the
trace files. The trcsess command-line utility consolidates trace
information from selected trace files, based on specified criteria.
The criteria include session id, client id, service name, action
name and module name.
Also note that beginning with Oracle10g, Oracle Trace functionality
is no longer available. For tracing database activity, use SQLTrace
or TKPROF instead.
The syntax for the trcsess utility is:
trcsess [output=output_file_name]
[session=session_Id]
[clientid=client_Id]
[service=service_name]
[action=action_name]
[module=module_name]
[trace_files]
where:
- output specifies the file where the output is generated. When
this option is not specified, the standard output is used for
the output.
- session consolidates the trace information for the session
specified. The session Id is a combination of session index and
session serial number.
- clientid consolidates the trace information given client Id.
- service consolidates the trace information for the given
service name.
- action consolidates the trace information for the given action
name.
- module consolidates the trace information for the given module
name.
- trace_files is a list of all trace file names, separated by
spaces, in which trcsess will look for trace information. The
wild card character * can be used to specify the trace file
names. If trace files are not specified, all the files in the
current directory are checked by trcsess.
End-to-End Tracing
End-to-End Tracing is a new feature in Oracle Database 10g that
facilitates the following tasks:
- Helps the debugging of performance problems in multi-tier
environments. In multi-tier environments, a request from an end
client is routed to different database sessions by the middle
tier.
- Proper accounting of workload for applications using services.
Service Name, Module, and Action name provide a means to set
apart important transactions in an application.
End-to-End tracing becomes possible with the attribute client_identifier in v$session, which uniquely
identifies a given end client and is carried through all tiers to
the database server. Enabling tracing based on the client_identifier solves the
problem of debugging performance problems in multi-tier
environments.
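For tracing to find anything, the application (typically the middle
tier) must first tag its session with an identifier. A minimal
sketch, where 'my_id' is an illustrative value:
BEGIN
  -- The value appears in v$session.client_identifier and is what
  -- client_id-based tracing and statistics key on.
  DBMS_SESSION.SET_IDENTIFIER('my_id');
END;
/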
You can use the newly introduced dbms_monitor package to control additional
tracing and statistics gathering. This package contains the
following procedures used to enable and disable additional
statistics aggregation:
- client_id_stat_enable: Used to enable statistics accumulation
for a given client identifier
- client_id_stat_disable: Used to disable statistics
accumulation for a given client identifier
- serv_mod_act_stat_enable: Used to enable statistics
accumulation for a given hierarchy of service name, module name,
and action name
- serv_mod_act_stat_disable: Used to disable statistics
accumulation for a given hierarchy of service name, module name,
and action name
Here is an example to enable and disable the tracing based on a
client_id:
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE('client id');
EXECUTE DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE('client id');
End-to-end tracing can be managed via Enterprise Manager or a set of
APIs and views. Here are some examples of how to enable and disable
various types of tracing:
BEGIN
-- Enable/Disable Client Identifier Trace.
DBMS_MONITOR.client_id_trace_enable (client_id => 'my_id');
DBMS_MONITOR.client_id_trace_disable (client_id => 'my_id');
-- Enable/Disable Service, Module and Action Trace (various
overloads).
DBMS_MONITOR.serv_mod_act_trace_enable (
service_name => 'my_service');
DBMS_MONITOR.serv_mod_act_trace_enable (
service_name => 'my_service',
module_name => 'my_module');
DBMS_MONITOR.serv_mod_act_trace_enable (
service_name => 'my_service',
module_name => 'my_module',
action_name => 'INSERT');
DBMS_MONITOR.serv_mod_act_trace_disable (
service_name => 'my_service',
module_name => 'my_module',
action_name => 'INSERT');
-- Enable/Disable Session Trace (various overloads).
DBMS_MONITOR.session_trace_enable;
DBMS_MONITOR.session_trace_enable (
session_id => 15,
serial_num => 1234);
DBMS_MONITOR.session_trace_disable (
session_id => 15,
serial_num => 1234);
END;
/
Once the trace files are produced the trcsess command line utility
can be used to filter out the relevant data from multiple files. The
utility accepts the following parameters:
* OUTPUT - Specifies the name of the consolidated
trace file.
* SESSION - Consolidates the file based on the
specified session id (SID.SERIAL# columns from V$SESSION).
* CLIENT_ID - Consolidates the file based on the
specified client identifier (CLIENT_IDENTIFIER column from
V$SESSION).
* SERVICE - Consolidates the file based on the
specified service (SERVICE_NAME column from V$SESSION).
* MODULE - Consolidates the file based on the
specified module (MODULE column from V$SESSION).
* ACTION - Consolidates the file based on the
specified action (ACTION column from V$SESSION).
* TRACE_FILES - A space separated list of trace
files to be searched. If omitted all files in the local directory
are searched.
At least one of the search criteria must be specified. If more than
one is specified, only trace data that matches all the criteria is
consolidated. Examples of trcsess usage are:
# Search all files for this session.
trcsess output=session.trc session=144.2274

# Search the specified files for this client identifier.
trcsess output=client.trc clientid=my_id db10g_ora_198.trc db10g_ora_206.trc

# Search the specified files for this service, module and action combination.
trcsess output=client.trc service=my_service module=my_module action=INSERT db10g_ora_198.trc db10g_ora_206.trc
Once the consolidated trace file is produced it can be processed by
the TKPROF utility like any other SQL Trace file.
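For example, assuming the consolidated file client.trc produced
above (the output file name and sort option are illustrative):
tkprof client.trc client_report.txt sort=exeela waits=yes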
By default statistics are gathered at the session level. The
DBMS_MONITOR package allows this to be altered to follow the client
identifier, service or combinations of the service, module and
action:
BEGIN
-- Enable/Disable Client Identifier Statistics.
DBMS_MONITOR.client_id_stat_enable (client_id => 'my_id');
DBMS_MONITOR.client_id_stat_disable (client_id => 'my_id');
-- Enable/Disable Service, Module and Action Statistics (various
overloads).
DBMS_MONITOR.serv_mod_act_stat_enable (
service_name => 'my_service');
DBMS_MONITOR.serv_mod_act_stat_enable (
service_name => 'my_service',
module_name => 'my_module');
DBMS_MONITOR.serv_mod_act_stat_enable (
service_name => 'my_service',
module_name => 'my_module',
action_name => 'INSERT');
DBMS_MONITOR.serv_mod_act_stat_disable (
service_name => 'my_service',
module_name => 'my_module',
action_name => 'INSERT');
END;
/
Now, imagine that you have been using end-to-end tracing on several
sessions for some time but now you have no idea which sessions have
tracing turned on. How do you find out? All you have to do is to
check a view you check anyway, V$SESSION.
Three new columns now show the status of tracing:
- sql_trace—Shows (TRUE/FALSE) if SQL tracing has been enabled
in the session
- sql_trace_waits—If session tracing is enabled, you can have
the trace write wait information to the trace file; very useful
in diagnosing performance issues.
- sql_trace_binds—If the session uses bind variables, you can
have the trace write the bind variable values to the trace file.
This column shows TRUE/FALSE.
When tracing in the session is turned on, if you select these
columns:
select sid, serial#, sql_trace, sql_trace_waits, sql_trace_binds
from v$session
where username = 'HR';
The output is:
       SID    SERIAL# SQL_TRAC SQL_T SQL_T
---------- ---------- -------- ----- -----
       196      60946 ENABLED  TRUE  FALSE
Note that the view V$SESSION is populated only if the procedure
session_trace_enable in the package dbms_monitor is used to enable
tracing, not by alter session set sql_trace = true or by setting
event 10046. If at some later point you want to find out which
sessions have been enabled for tracing, you can do so using the
above query.
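For reference, the session above could have been enabled with wait
tracing but not bind tracing, matching the TRUE/FALSE values in the
output; a sketch:
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 196,
    serial_num => 60946,
    waits      => TRUE,
    binds      => FALSE);
END;
/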
Wait Event Model Improvements
Overview of Wait Event Model
In a nutshell, the wait event interface provides insight into where
time is consumed. Wait events are collected by the server process or
thread to indicate the 'wait' before a process is completed. As we
know, at any given moment an Oracle process is either busy servicing
a request or waiting for something to happen. Oracle has defined a
list of every possible event that an Oracle process could wait for.
The Wait Event Interface now provides a powerful tool to monitor the
process delays. With its snapshot of the events and its detailed
analysis, it becomes possible for database administrators to
pinpoint areas that need tuning. Wait events show various symptoms
of problems that impact performance.
Wait Event Enhancements
Oracle Database 10g introduces many new dynamic performance views
and updates other views. General improvements include:
- New columns in the v$session and v$session_wait views that
track the resources sessions are waiting for.
- A history of waits per session, enabling diagnosis of
performance problems for a desired time frame and period.
- Maintaining wait statistics of each SQL statement in the
library cache
- Histograms of wait durations, rather than a simple accumulated
average
The following list shows the existing views that are modified.
Changes to v$event_name
The V$EVENT_NAME view has had three new columns added
(WAIT_CLASS_ID, WAIT_CLASS# and WAIT_CLASS) which indicate the class
of the event.
For example, to list the events related to I/O, use the statement
(wait classes 8 and 9 are User I/O and System I/O, as the output
further below shows):
SELECT substr(name,1,30) name,
       wait_class#,
       substr(wait_class,1,30) wait_class
FROM   v$event_name
WHERE  wait_class# IN (8, 9);
In another example, to group all the events by class to get a quick
idea of the performance issues, use the statement:
SELECT e.wait_class#,
       substr(e.name,1,30) name,
       sum(s.total_waits) total_waits,
       sum(s.time_waited)/1000000 time_waited
FROM   v$event_name e,
       v$system_event s
WHERE  e.name = s.event
GROUP BY e.wait_class#, e.name;
-- Display time waited for each wait class.
SELECT a.wait_class,
       sum(b.time_waited)/1000000 time_waited
FROM   v$event_name a
       JOIN v$system_event b ON a.name = b.event
GROUP BY a.wait_class;
WAIT_CLASS                  TIME_WAITED
--------------------------- -----------
Application                     .013388
Commit                          .003503
Concurrency                     .009891
Configuration                   .003489
Idle                         232.470445
Network                         .000432
Other                           .025698
System I/O                      .095651
User I/O                        .109552
Changes to v$session
In the past, sessions experiencing waits were generally located by
joining the v$session_wait view with the v$session view. To simplify
the query, all the wait event columns from v$session_wait have been
added to v$session.
Use the statement below to determine the wait events that involve
the most sessions.
SELECT wait_class, count(username)
FROM   v$session
GROUP BY wait_class;
New columns have been added to v$session as follows:
SQL_CHILD_NUMBER, PREV_CHILD_NUMBER, BLOCKING_SESSION_STATUS,
BLOCKING_SESSION, SEQ#, EVENT#, EVENT, WAIT_CLASS#, WAIT_CLASS,
WAIT_TIME, SECONDS_IN_WAIT, STATE and SERVICE_NAME
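The new BLOCKING_SESSION columns make lock diagnosis a single-view
query. For example:
-- Who is blocked, and by which session (no join to v$lock needed).
SELECT sid, serial#, blocking_session_status, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;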
Changes to v$session_wait
The new columns include wait_class# and wait_class.
The following list shows the views that are new.
v$system_wait_class – This
view provides the instance-wide totals for the number of waits and
the time spent in each class of wait events.
v$session_wait_class - This
view provides the number of waits and the time spent in each class
of wait event on a per-session basis.
v$event_histogram – This
view displays a histogram of the number of waits, the maximum wait,
and the total wait time on a per-event basis. Using this view, you
can create a histogram showing the frequency of wait events for a
range of durations. This information assists you in determining
whether a wait event is a frequent problem that needs addressing or
a one-off occurrence.
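For example, to see the wait-time distribution for one event (the
event name is just an example):
SELECT wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'db file sequential read'
ORDER  BY wait_time_milli;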
v$file_histogram – This
view displays a histogram of all single block reads on a per-file
basis. To provide more in-depth data, the v$file_histogram view
shows the number of I/O wait events over a range of values. You use
the histogram to determine if the bottleneck is a regular or a
unique problem.
v$temp_histogram – This
view displays a histogram of all single block reads on a
per-tempfile basis.
v$session_wait_history – This
view displays the last 10 wait events for each active session.
Example:
select sid, seq#, event, p1, p2, p3
from   v$session_wait_history
where  sid = &SID
order  by seq#;
      SEQ# EVENT                                  P1         P2         P3
---------- ------------------------------ ---------- ---------- ----------
         1 db file scattered read                  1      20409          8
         2 db file scattered read                  1      20401          8
         3 db file scattered read                  1      20393          8
         4 db file scattered read                  1      20385          8
         5 db file scattered read                  1      20377          8
         6 db file scattered read                  1      20369          8
         7 db file scattered read                  1      20361          8
         8 db file scattered read                  1      20225          8
         9 db file scattered read                  1      20217          8
        10 db file scattered read                  1      20209          8
select sid, seq#, event, p1, p2, p3
from   v$session_wait_history
order  by sid;
The new views above are quite helpful in understanding the overall
health of the database. For example, use the v$system_wait_class
view to display wait events occurring across the database.
SELECT wait_class#, wait_class, time_waited, total_waits
FROM   v$system_wait_class
ORDER  BY time_waited;
WAIT_CLASS# WAIT_CLASS       TIME_WAITED TOTAL_WAITS
----------- ---------------- ----------- -----------
          5 Commit                 10580       29404
          2 Configuration          25140        1479
          7 Network                28060    35111917
          4 Concurrency            34707       16754
          8 User I/O              308052      178647
          9 System I/O            794444     2516453
          1 Application          3781085    68100532
          0 Other               38342194       22317
          6 Idle               845197701    37411971
-- Display the resource or event the session is waiting for.
SELECT sid, serial#, event,
       (seconds_in_wait/1000000) seconds_in_wait
FROM   v$session
ORDER  BY sid;
The V$SESSION_WAIT_CLASS view allows you to see the session wait
information broken down by wait class for each session:
-- Display session wait information by wait class.
SELECT *
FROM   v$session_wait_class
WHERE  sid = 134;
Automated Checkpoint Tuning
Checkpointing is an important Oracle activity which records the
highest system change number (SCN) so that all data blocks less
than or equal to that SCN are known to be written out to the data
files. If there is a failure and then subsequent cache recovery,
only the redo records containing changes at SCN(s) higher than the
checkpoint need to be applied during recovery.
As we are aware, instance and crash recovery occur in two steps -
cache recovery followed by transaction recovery. During the cache
recovery phase, also known as the rolling forward stage, Oracle
applies all committed and uncommitted changes in the redo log files
to the affected data blocks. The work required for cache recovery
processing is proportional to the rate of change to the database and
the time between checkpoints.
Fast-start recovery can greatly reduce the mean time to recover
(MTTR), with minimal effects on online application performance.
Oracle continuously estimates the recovery time and automatically
adjusts the check-pointing rate to meet the target recovery time.
Oracle recommends using the FAST_START_MTTR_TARGET
initialization parameter to control the duration of startup after
instance failure. With 10g, the database can now self-tune
check-pointing to achieve good recovery times with low impact on
normal throughput. You no longer have to set any checkpoint-related
parameters. This method reduces the time required for cache recovery
and makes the recovery bounded and predictable by limiting the
number of dirty buffers and the number of redo records generated
between the most recent redo record and the last checkpoint.
Administrators specify a target (bounded) time to complete the cache
recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and
Oracle automatically varies the incremental checkpoint writes to
meet that target.
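For example, to target roughly a one-minute cache recovery time (the
value is illustrative; SCOPE=BOTH assumes an spfile):
ALTER SYSTEM SET fast_start_mttr_target = 60 SCOPE=BOTH;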
The target_mttr field of v$instance_recovery contains the MTTR
target in effect. The estimated_mttr field of v$instance_recovery
contains the estimated MTTR should a crash happen right away.
For example,
SELECT target_mttr, estimated_mttr, ckpt_block_writes
FROM   v$instance_recovery;
TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
         37             22            209187
Whenever you set FAST_START_MTTR_TARGET to a nonzero value, and
while MTTR advisory is ON, Oracle Corporation recommends that you
disable (set to 0) the following parameters:
LOG_CHECKPOINT_TIMEOUT
LOG_CHECKPOINT_INTERVAL
FAST_START_IO_TARGET
Because these initialization parameters either override
fast_start_mttr_target or potentially drive checkpoints more
aggressively than fast_start_mttr_target does, they can interfere
with the simulation.
WEB Admin for Database
Reconfiguring Repository
When should one reconfigure the repository?
1. When the database server hostname or its IP address(es) change
2. In the rare case when the ORACLE_SID changes
3. When the service name of a database, as registered by PMON in the local listener, changes
When should one create or recreate the repository?
1. When it hasn't been created yet during database creation
2. When one would like to start from scratch with a clean repository (rare)
The SYSMAN schema, which usually owns the repository, is created
during database creation (an option if you use DBCA). Changes in IP
addresses, the database server name, or database service names can
trigger the need to reconfigure the dbconsole.
Stop the Console
emctl stop dbconsole
How To Drop, Create And Recreate DB Control In a 10g Database
A. Delete DB Control Objects
There are several ways to delete DB Control objects:
1) Delete DB Control configuration files and repository objects using EMCA
2) Delete DB Control configuration files using EMCA scripts
3) Delete DB Control configuration files manually
4) Delete DB Control repository objects using RepManager
5) Manually clean everything
1) Delete DB Control configuration files and repository objects using EMCA
In 10.1 run:
emca -x SID
RepManager hostname listener_port sid -action drop
In 10.2 run:
emca -deconfig dbcontrol db -repos drop
2) Delete DB Control configuration files using EMCA scripts
In 10.1 run: emca -x sid
In 10.2 run: emca -deconfig dbcontrol db
Then enter the SID when prompted and confirm with y.
3) Delete DB Control configuration files manually
Remove the following directories from your filesystem:
$ORACLE_HOME/hostname_sid
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_sid
On Windows you also need to delete the DB Console service:
- run regedit
- navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- locate the OracleDBConsole entry and delete it
On Windows XP and Windows Server 2003 you can run the following from
the command line:
sc delete service_name
where service_name is the DB Control service name (typically
OracleDBConsole<sid>).
Connect as SYSMAN in SQL*Plus:
C:\>sqlplus sysman
select substr(target_name,1,20) target_name,
       substr(target_type,1,17) target_type,
       substr(emd_url,1,40) emd_url
from   mgmt_targets;
TARGET_NAME                        TARGET_TYPE      EMD_URL
---------------------------------- ---------------- ----------------------------------------
machinename.aa.something:1831      oracle_emd       http://machinename.aa.something:1831/emd/main
machinename.aa.something           host             http://machinename.aa.something:1831/emd/main
UTF8                               oracle_database  http://machinename.aa.something:1831/emd/main
LISTENER1_seny-10-79.be.sterianet  oracle_listener  http://machinename.aa.something:1831/emd/main
Management Services and Repository oracle_emrep
or
TARGET_NAME          TARGET_TYPE      EMD_URL
-------------------- ---------------- ----------------------------------------
machinename.domain   oracle_emd       http://machinename.domain:3938/
machinename.domain   host             http://machinename.domain:3938/
DEV10g2              oracle_database  http://machinename.domain:3938/
LISTENER_mach.domain oracle_listener  http://machinename.domain:3938/
Management Services  oracle_emrep
Now I purge the obsolete entries:
exec mgmt_admin.delete_target('target_name','target_type');
SQL> exec mgmt_admin.delete_target('machinename.aa.something','host');
SQL> exec mgmt_admin.delete_target('LISTENER1_machinename.aa.something','oracle_listener');
SQL> exec mgmt_admin.delete_target('UTF8','oracle_database');
SQL> exec mgmt_admin.delete_target('machinename.aa.something:1831','oracle_emd');
4) Delete DB Control repository objects using RepManager
Invoke the following command:
RepManager hostname listener_port sid -action drop
However, avoid dropping the repository with RepManager: it puts the
database into quiesce mode.
5) Manually clean everything
1 - First stop the console: emctl stop dbconsole
2 - Remove the following directories from your filesystem:
$ORACLE_HOME/hostname_sid
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_hostname_sid
3 - Connect as SYS in SQL*Plus and run:
drop user SYSMAN cascade;
drop role mgmt_user;
drop user mgmt_view cascade;
drop public synonym mgmt_target_blackouts;
drop public synonym setemviewusercontext;
drop public synonym mgmt_current_availability;
drop public synonym mgmt_availability;
4 - Now take note of your listener status, host information, network
cards, and all the names (such as server name, SID, etc.):
lsnrctl status
more /etc/hosts
/sbin/ifconfig
5 - Check the environment variables:
echo $ORACLE_SID
echo $ORACLE_HOME
echo $ORACLE_HOSTNAME
6 - Now you are ready to configure it all again:
On 10.2: emca -config dbcontrol db -repos create
B. Create DB Control Objects
1) Create only DB Control configuration files:
On 10.1: emca -r
On 10.2: emca -config dbcontrol db
2) Create both the DB Control repository objects and configuration files:
On 10.1: emca
On 10.2: emca -config dbcontrol db -repos create
STARTED EMCA at Apr 25, 2008 6:25:26 PM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Enter the following information:
Database SID: DIE
Database Control is already configured for the database DIE
You have chosen to configure Database Control for managing the database DIE
This will remove the existing configuration and the default settings and perform a fresh configuration
Do you wish to continue? [yes(Y)/no(N)]: Y
Listener port number: 1521
Password for SYS user:
Password for SYSMAN user:
Email address for notifications (optional):
Outgoing Mail (SMTP) server for notifications (optional):

You have specified the following settings
Database ORACLE_HOME ................ C:\oracle\product\10.2.0\db_1
Database hostname ................ CONCORD
Listener port number ................ 1521
Database SID ................ DIE
Email address for notifications ...............
Outgoing Mail (SMTP) server for notifications ...............

Do you wish to continue? [yes(Y)/no(N)]: Y
Apr 25, 2008 6:26:29 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at C:\oracle\product\10.2.0\db_1\cfgtoollogs\emca\DIE\emca_2008-04-25_06-25-25-PM.log.
Apr 25, 2008 6:26:44 PM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) ...
Apr 25, 2008 6:26:53 PM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Apr 25, 2008 6:30:54 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully created
Apr 25, 2008 6:31:49 PM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) ...
Apr 25, 2008 6:32:39 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Apr 25, 2008 6:32:44 PM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is http://CONCORD:1158/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully
FINISHED EMCA at Apr 25, 2008 6:32:44 PM
Now wait about 10 minutes for the configuration to complete, then
try to connect to Database Control:
http://server_name:1158/em          for Oracle Enterprise Manager
http://server_name:5560/isqlplus    for iSQL*Plus
http://server_name:5620/ultrasearch for Ultra Search
If you have problems connecting, check the local configuration file
located at:
$ORACLE_HOME/<hostname>_<service_name>/sysman/config/emoms.properties
Start and Stop the DB-Console
$ emctl start dbconsole
$ emctl stop dbconsole
$ emctl status dbconsole
C. Recreate/Reconfigure DB Control
1) Recreate only DB Control configuration files:
$ emca -config dbcontrol db
2) Recreate both the DB Control repository objects and configuration files:
$ emca -config dbcontrol db -repos recreate
References:
http://arjudba.blogspot.com/2008/04/stack-of-problems-while-creating.html
http://download-west.oracle.com/docs/pdf/B12013_01.pdf
Metalink Note 271126.1 - How to Troubleshoot "Agent Unreachable" Status and "Upload greyed out"
Metalink Note 306422.1 - Management Services And Repository Shows Unknown Availability
Metalink Note 276417.1 - Unable To Remove Target On 10g EM Grid Console after Renaming Target
Metalink Note 278100.1 - How To Drop, Create And Recreate DB Control In A 10g Database
Metalink Note 274661.1 - Problem: Startup: Cannot Start DBCONSOLE on DHCP Machine after Reboot
Metalink Note 428665.1 - Installation Checklist for Testing Networking Configurations Prior to Installing EM Components
Shrink Tables (Segment Advisor)
Online segment shrink is available for tables in ASSM (Automatic
Segment Space Management) tablespaces. Conceptually, what happens is
that Oracle reads the table from the bottom up, and upon finding
rows at the bottom of the table, it deletes them and reinserts them
at the top of the table. When it runs out of space at the top, it
stops, leaving all the free space at the end—or bottom—of the table.
Then Oracle redraws the high-water mark for that table and releases
that allocated space. Here is a quick example:
create table t enable row movement
as select * from all_objects;
Here I created a table with ENABLE ROW MOVEMENT. Oracle will be
physically moving the rows, and this clause gives Oracle permission
to change the rowids. Here's what a full scan of this big table
does:
set autotrace on statistics
select count(*) from t;

  COUNT(*)
----------
     47266

Statistics
----------------------------
  0 db block gets
724 consistent gets
651 physical reads

set autotrace off
It took 724 logical IOs (consistent gets) to read that table and
count the rows. A peek at USER_EXTENTS shows the table consuming 768
blocks in 20 extents. Over time, I perform some deletes on this
table, leaving behind lots of white space. I'll simulate that by
deleting every other row in the table:
delete from t
where mod(object_id,2) = 0;

23624 rows deleted.
You can also run the following to check how many blocks this table
is using:
select blocks from
user_segments where segment_name = 'T';
BLOCKS
----------
8
Now I want to reclaim this white space, getting it back from
the table and perhaps using it for other objects, or maybe I
full-scan this table frequently and would just like it to be
smaller. Before Oracle Database 10g, the only options were to
rebuild the table with EXP/IMP, ALTER TABLE MOVE, or an online
redefinition. With 10g, I can compact and shrink it:
alter table t shrink space compact;
alter table t shrink space;
In addition, the CASCADE option can be used to propagate the shrink
operation to all dependent objects.
Another peek at USER_EXTENTS shows that the table now consumes 320
blocks in 17 extents. The table has actually shrunk while still
online and without a rebuild. REMEMBER that this operation modifies
ROWIDs. The table is now less than half its original size in blocks,
because it released extents back to the system, something that was
never possible before. Further, look what this shrinking does for a
full scan:
select count(*) from t;

  COUNT(*)
----------
     23642

Statistics
----------------------------
  0 db block gets
409 consistent gets
 62 physical reads
The number of IOs required to perform that operation is now in line
with the actual size of the data.
To check the number of blocks we use again:
select blocks from user_segments where segment_name = 'T';
BLOCKS
----------
2
Script to Shrink Objects Automatically
You can use the following script to automatically perform the shrink
process for a specific schema:
--------------------------------
--- Author: Diego Pafumi
--- Date : Nov-19-2012
--- Script: _Shrink_Segments.sql
--- http://pafumi.net/New_Features_Oracle_10G.html#Shrink_Tables
--- Explanation: This script was developed to shrink the HWM for
all the Tables of a Schema or just for 1 table
--------------------------------
set verify off
set serveroutput on
-- Ask if the data is ok
PROMPT
accept OWNER char prompt 'Enter Schema Name to Review: '
PROMPT
ACCEPT vSave CHAR DEFAULT 'R' PROMPT 'Do you want to generate a
Report or Compress this Schema (R/C default ''R'') ? '
PROMPT
ACCEPT vTable CHAR DEFAULT 'A' PROMPT 'Enter Table Name to Analyze
or ''A'' for all the tables (Default ''A'') ? '
spool Shrink_Segments_Analysis.txt
declare
l_fs1_bytes number;
l_fs2_bytes number;
l_fs3_bytes number;
l_fs4_bytes number;
l_fs1_blocks number;
l_fs2_blocks number;
l_fs3_blocks number;
l_fs4_blocks number;
l_full_bytes number;
l_full_blocks number;
l_unformatted_bytes number;
l_unformatted_blocks number;
V_OWNER VARCHAR2(20) := upper('&OWNER');
v_Save VARCHAR2(20) := upper('&vSave');
v_Table VARCHAR2(20) := upper('&vTable');
v_blocks number;
begin
if v_Table = 'A' then
-- Process All the tables
for rec in (select segment_name,
segment_type
from dba_segments
where owner = V_OWNER
and segment_type in ('TABLE')
and segment_name not like 'BIN%')
loop
dbms_space.space_usage(
  segment_owner      => V_OWNER,
  segment_name       => rec.segment_name,
  segment_type       => rec.segment_type,
  fs1_bytes          => l_fs1_bytes,
  fs1_blocks         => l_fs1_blocks,
  fs2_bytes          => l_fs2_bytes,
  fs2_blocks         => l_fs2_blocks,
  fs3_bytes          => l_fs3_bytes,
  fs3_blocks         => l_fs3_blocks,
  fs4_bytes          => l_fs4_bytes,
  fs4_blocks         => l_fs4_blocks,
  full_bytes         => l_full_bytes,
  full_blocks        => l_full_blocks,
  unformatted_blocks => l_unformatted_blocks,
  unformatted_bytes  => l_unformatted_bytes );
select blocks
into   v_blocks
from   dba_segments
where  owner = V_OWNER
and    segment_name = rec.segment_name;
dbms_output.put_line('******************************************');
dbms_output.put_line('*** Table ' || rec.segment_name || ' ***');
dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
dbms_output.put_line(' Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
dbms_output.put_line(' # of Blocks Used = '|| to_char(v_blocks));
if v_Save = 'C' then
  begin
    dbms_output.put_line('Shrinking Segment: ' || rec.segment_name);
    -- Enable row movement for the table.
    execute immediate 'alter table ' || V_OWNER ||'.'|| rec.segment_name ||' enable row movement';
    -- Shrink the table without moving the HWM (High Water Mark).
    execute immediate 'alter table ' || V_OWNER ||'.'|| rec.segment_name ||' shrink space compact';
    -- Shrink the table, its indexes, and the HWM too.
    execute immediate 'alter table ' || V_OWNER ||'.'|| rec.segment_name ||' shrink space cascade';
    dbms_space.space_usage(
      segment_owner      => V_OWNER,
      segment_name       => rec.segment_name,
      segment_type       => rec.segment_type,
      fs1_bytes          => l_fs1_bytes,
      fs1_blocks         => l_fs1_blocks,
      fs2_bytes          => l_fs2_bytes,
      fs2_blocks         => l_fs2_blocks,
      fs3_bytes          => l_fs3_bytes,
      fs3_blocks         => l_fs3_blocks,
      fs4_bytes          => l_fs4_bytes,
      fs4_blocks         => l_fs4_blocks,
      full_bytes         => l_full_bytes,
      full_blocks        => l_full_blocks,
      unformatted_blocks => l_unformatted_blocks,
      unformatted_bytes  => l_unformatted_bytes );
    select blocks
    into   v_blocks
    from   dba_segments
    where  owner = V_OWNER
    and    segment_name = rec.segment_name;
    dbms_output.put_line(' After the Compact Process... ');
    dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
    dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
    dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
    dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
    dbms_output.put_line(' Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
    dbms_output.put_line(' # of Blocks Used = '|| to_char(v_blocks));
  exception
    when OTHERS then
      dbms_output.put_line('WARNING!!!!! Cannot shrink table '|| rec.segment_name);
  end;
end if;
end loop;
else
-- Process just one table
dbms_space.space_usage(
  segment_owner      => V_OWNER,
  segment_name       => v_Table,
  segment_type       => 'TABLE',
  fs1_bytes          => l_fs1_bytes,
  fs1_blocks         => l_fs1_blocks,
  fs2_bytes          => l_fs2_bytes,
  fs2_blocks         => l_fs2_blocks,
  fs3_bytes          => l_fs3_bytes,
  fs3_blocks         => l_fs3_blocks,
  fs4_bytes          => l_fs4_bytes,
  fs4_blocks         => l_fs4_blocks,
  full_bytes         => l_full_bytes,
  full_blocks        => l_full_blocks,
  unformatted_blocks => l_unformatted_blocks,
  unformatted_bytes  => l_unformatted_bytes );
select blocks
into   v_blocks
from   dba_segments
where  owner = V_OWNER
and    segment_name = v_Table;
dbms_output.put_line('******************************************');
dbms_output.put_line('*** Table ' || v_Table || ' ***');
dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
dbms_output.put_line(' Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
dbms_output.put_line(' # of Blocks Used = '|| to_char(v_blocks));
if v_Save = 'C' then
  begin
    dbms_output.put_line('Shrinking Segment: ' || v_Table);
    -- Enable row movement for the table.
    execute immediate 'alter table ' || V_OWNER ||'.'|| v_Table ||' enable row movement';
    -- Shrink the table without moving the HWM (High Water Mark).
    execute immediate 'alter table ' || V_OWNER ||'.'|| v_Table ||' shrink space compact';
    -- Shrink the table, its indexes, and the HWM too.
    execute immediate 'alter table ' || V_OWNER ||'.'|| v_Table ||' shrink space cascade';
    dbms_space.space_usage(
      segment_owner      => V_OWNER,
      segment_name       => v_Table,
      segment_type       => 'TABLE',
      fs1_bytes          => l_fs1_bytes,
      fs1_blocks         => l_fs1_blocks,
      fs2_bytes          => l_fs2_bytes,
      fs2_blocks         => l_fs2_blocks,
      fs3_bytes          => l_fs3_bytes,
      fs3_blocks         => l_fs3_blocks,
      fs4_bytes          => l_fs4_bytes,
      fs4_blocks         => l_fs4_blocks,
      full_bytes         => l_full_bytes,
      full_blocks        => l_full_blocks,
      unformatted_blocks => l_unformatted_blocks,
      unformatted_bytes  => l_unformatted_bytes );
    select blocks
    into   v_blocks
    from   dba_segments
    where  owner = V_OWNER
    and    segment_name = v_Table;
    dbms_output.put_line(' After the Compact Process... ');
    dbms_output.put_line(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
    dbms_output.put_line(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
    dbms_output.put_line(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
    dbms_output.put_line(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
    dbms_output.put_line(' Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
    dbms_output.put_line(' # of Blocks Used = '|| to_char(v_blocks));
  exception
    when OTHERS then
      dbms_output.put_line('WARNING!!!!! Cannot shrink table '|| v_Table);
  end;
end if;
end if;
dbms_output.put_line('...');
dbms_output.put_line('...');
dbms_output.put_line('...');
dbms_output.put_line('**** Recompile Invalid Objects');
DBMS_UTILITY.compile_schema(schema => V_OWNER);
if v_Save = 'R' then
  dbms_output.put_line('********************************************************************************');
  dbms_output.put_line('If you want to shrink a specific table, execute the following commands:');
  dbms_output.put_line('Enable row movement for the table with: alter table XYZ enable row movement;');
  dbms_output.put_line('Shrink the table but not the HWM with:  alter table XYZ shrink space compact;');
  dbms_output.put_line('Shrink the table, indexes and HWM with: alter table XYZ shrink space cascade;');
  dbms_output.put_line('********************************************************************************');
end if;
end;
/
undefine OWNER vSave vTable
spool off
An example of the output is:
*** Table ACCOUNT ***
 FS1 Blocks = 0 Bytes = 0
 FS2 Blocks = 0 Bytes = 0
 FS3 Blocks = 0 Bytes = 0
 FS4 Blocks = 4148 Bytes = 0
 Full Blocks = 2 Bytes = 16384
 # of Blocks Used = 40
The output shows that there are 4,148 blocks with 75-100% free space
(FS4) and no other free blocks; only 2 blocks are full. The 4,148
blocks can be recovered, and the table is using 40 blocks in total.
Next, you must ensure that the table is row-movement enabled. If
it's not, you can enable it with:
alter table ACCOUNT enable row movement;
or via Enterprise Manager 10g, on the Administration page. You
should also ensure that all rowid-based triggers are disabled on
this table because the rows are moved and the rowids could change.
Finally, you can reorganize the existing rows of the table with:
alter table ACCOUNT shrink space compact;
This command re-distributes the rows inside the blocks resulting in
more free blocks under the HWM, but the HWM itself is not disturbed.
After the operation, let's see the change in space utilization.
Using the PL/SQL block shown in the first step, you can see how the
blocks are organized now:
 FS1 Blocks = 0 Bytes = 0
 FS2 Blocks = 0 Bytes = 0
 FS3 Blocks = 1 Bytes = 0
 FS4 Blocks = 0 Bytes = 0
 Full Blocks = 2 Bytes = 16384
 # of Blocks Used = 40
The number of blocks occupied by the table remains the same because
the HWM has not moved from its original position. You can move the
HWM to a lower position and reclaim the space
with:
alter table ACCOUNT shrink space;
Note that the clause COMPACT is not present. This operation returns
the unused blocks to the database and resets the HWM. You can verify
it by checking the space allocated to the table.
You can also compact the indexes of the table in one statement:
alter table ACCOUNT shrink space cascade;
The online shrink command is a powerful feature for reclaiming
wasted space and resetting the HWM. I consider the latter setting of
the HWM the most useful result of this command because it improves
the performance of full table scans.
Finding Candidates for Shrinking
Before performing an online shrink, you may want to find out the
biggest bang-for-the-buck by identifying the segments that can be
most fully compressed. Simply use the built-in function
verify_shrink_candidate in the package dbms_space. Execute this
PL/SQL code to test if the segment can be shrunk to 1,300,000 bytes
(the bind variable x must be declared first in SQL*Plus):
variable x varchar2(1)
begin
  if (dbms_space.verify_shrink_candidate('ARUP','BOOKINGS','TABLE',1300000)) then
    :x := 'T';
  else
    :x := 'F';
  end if;
end;
/
print x

X
--------------------------------
T
If you use a low number for the target shrinkage, say 3,000:
begin
  if (dbms_space.verify_shrink_candidate('ARUP','BOOKINGS','TABLE',3000)) then
    :x := 'T';
  else
    :x := 'F';
  end if;
end;
/
print x
The value of the variable x is set to 'F', meaning the table cannot
be shrunk to 3,000 bytes.
Oracle10G's Segment Advisor
Administrators are able to use Oracle10G's Segment Advisor to identify candidates
for shrink operations. The advisor estimates the amount of
unused space that will be released when the shrink operation is run
on the particular object. A wizard is available that allows
users to evaluate all objects in the database, all objects in a
specific tablespace or all objects owned by a particular schema.
The 10G R2 Segment Advisor has been enhanced to identify tables that
suffer from excessive row chaining and row migrations.
Why should we care about row chaining and row migrations? When a row
is updated and becomes too large to fit into its original block (due
to insufficient free space), the row is moved to a new block and a
pointer is placed in the original block that identifies the row's
new home. This is called a row migration. So when you access the row
through an index, Oracle navigates first to the row's original block
and then follows the pointer to the block where the row is actually
stored. This means you are generating unnecessary I/O to access a
migrated row. You correct this by identifying the tables affected
and reorganizing them. A row chain occurs when a row is simply too
long to fit into a single block. Oracle will chain the row together
on multiple blocks using pointers to connect the chain's pieces. You
solve this problem by increasing the block size or decreasing the
row's length. Most often you just have to live with row chaining.
In the past, we identified row chaining and row migrations by
reviewing the "table fetch by continued row" output line in our
STATSPACK reports and ran SQL ANALYZE statements on the data objects
on a regular basis. Remember DBMS_STATS does not populate the
CHAIN_CNT column in DBA_TABLES. If you want to populate that column,
you'll need to run the ANALYZE statement.
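A sketch of that approach, using the sample table SCOTT.EMP:
-- ANALYZE (not DBMS_STATS) populates CHAIN_CNT in DBA_TABLES.
ANALYZE TABLE scott.emp COMPUTE STATISTICS;

SELECT table_name, chain_cnt
FROM   dba_tables
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP';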
In 10G R2, the Segment Advisor is automatically scheduled by
Enterprise Manager to run during a predefined maintenance window.
The maintenance window is initially defined as follows:
* Monday through Friday - 10PM to 6AM
* Saturday 12:00 a.m. to Monday 12:00 a.m.
The maintenance window's default times can be changed to tailor it
to an individual application's availability requirements. The
Automatic Segment Advisor doesn't analyze all of the data objects in
the database. It intelligently selects them by identifying segments
that are the most active, have the highest growth rate or exceed a
critical or warning space threshold.
In Oracle Database 10g Release 2, the supplied package DBMS_SPACE
provides the capability to tell you which segments have plenty of
free space under the high-water mark and would benefit from a
reorganization.
The ASA_RECOMMENDATIONS function in the DBMS_SPACE package returns a
nested table object that contains findings or recommendations for
Automatic Segment Advisor runs and, optionally, manual Segment
Advisor runs. Calling this function may be easier than working with
the DBA_ADVISOR_* views, because it performs all the required joins
for you and returns information in an easily consumable format.
The following query returns recommendations by the most recent run
of the Auto Segment Advisor, with the suggested command to run to
follow the recommendations:
select tablespace_name, segment_name, segment_type, partition_name,
       recommendations, c1
from   table(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'));
TABLESPACE_NAME   : USERS
SEGMENT_OWNER     : ARUP
SEGMENT_NAME      : ACCOUNTS
SEGMENT_TYPE      : TABLE PARTITION
PARTITION_NAME    : P7
ALLOCATED_SPACE   : 0
USED_SPACE        : 0
RECLAIMABLE_SPACE : 0
CHAIN_ROWEXCESS   : 17
RECOMMENDATIONS   : The object has chained rows that can be removed by re-org.
C1                :
C2                :
C3                :
TASK_ID           : 261
MESG_ID           : 0
Here you'll see that partition P7 of the table ACCOUNTS of the
schema ARUP has chained rows. Doing a reorganization will help speed
up full table scans in this partition.
This information is collected by an automatically scheduled job that
runs in the predefined maintenance window (between 10PM and 6AM on
weekdays and between 12 a.m. Saturday and 12 a.m. Monday); you can
change those windows using Oracle Enterprise Manager. During this
time, the job scans the segments for candidates. If the scan cannot
be completed in time, the job is suspended and resumed in the next
day's window.
The job stores the information about the segments and tablespaces
inspected in a table named wri$_segadv_objlist. You can see the
information on the segments inspected in the view
DBA_AUTO_SEGADV_CTL.
Executing the Segment Advisor Manually
As mentioned before, you can execute the Segment Advisor for a
specific object from OEM. You can also perform the task from
SQL*Plus.
Example
The example that follows shows how to use the DBMS_ADVISOR
procedures to run the Segment Advisor for the sample table
hr.employees. The user executing these package procedures must have
the EXECUTE object privilege on the package or the ADVISOR system
privilege.
Note that passing an object type of TABLE to
DBMS_ADVISOR.CREATE_OBJECT amounts to an object level request.
If the table is not partitioned, the table segment is analyzed
(without any dependent segments like index or LOB segments).
If the table is partitioned, the Segment Advisor analyzes all table
partitions and generates separate findings and recommendations for
each.
variable id number;
begin
  declare
    name   varchar2(100);
    descr  varchar2(500);
    obj_id number;
  begin
    name := 'Analyze_EMPLOYEES_Table';
    dbms_advisor.create_task (
      advisor_name => 'Segment Advisor',
      task_id      => :id,
      task_name    => name);
    dbms_advisor.create_object (
      task_name   => name,
      object_type => 'TABLE',
      attr1       => 'HR',
      attr2       => 'EMPLOYEES',
      attr3       => NULL,
      attr4       => NULL,
      attr5       => NULL,
      object_id   => obj_id);
    dbms_advisor.set_task_parameter (
      task_name => name,
      parameter => 'recommend_all',
      value     => 'TRUE');
    dbms_advisor.execute_task(name);
  end;
end;
/
By executing the next queries, you can see what is recommended:
select task_id, advisor_name, status
from   dba_advisor_tasks
where  owner = 'SYSTEM'
and    task_name = 'Analyze_EMPLOYEES_Table';

select af.task_name, ao.attr2 segname, ao.attr3 partition,
       ao.type, af.message
from   dba_advisor_findings af,
       dba_advisor_objects ao
where  ao.task_id = af.task_id
and    ao.object_id = af.object_id
and    ao.owner = 'SYSTEM'
and    ao.task_id = &TASK_ID;
So Oracle is suggesting the following: "Enable row movement of the
table HR.EMPLOYEES and perform shrink, estimated savings is
52050787 bytes." So then you run:
ALTER TABLE HR.EMPLOYEES ENABLE ROW MOVEMENT;
ALTER TABLE HR.EMPLOYEES SHRINK SPACE;
Very good information here:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm
MERGE Statement Enhancements
The following examples use the table defined below.
CREATE TABLE test1 AS
SELECT *
FROM all_objects
WHERE 1=2;
Optional Clauses
The MATCHED and NOT MATCHED clauses are now optional, making all of
the following examples valid.
-- Both clauses present.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status
WHEN NOT MATCHED THEN
INSERT (object_id, status)
VALUES (b.object_id, b.status);
-- No matched clause, insert only.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN NOT MATCHED THEN
INSERT (object_id, status)
VALUES (b.object_id, b.status);
-- No not-matched clause, update only.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status;
Conditional Operations
Conditional inserts and updates are now possible by using a WHERE
clause in these statements.
-- Both clauses present.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status
WHERE b.status != 'VALID'
WHEN NOT MATCHED THEN
INSERT (object_id, status)
VALUES (b.object_id, b.status)
WHERE b.status != 'VALID';
-- No matched clause, insert only.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN NOT MATCHED THEN
INSERT (object_id, status)
VALUES (b.object_id, b.status)
WHERE b.status != 'VALID';
-- No not-matched clause, update only.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status
WHERE b.status != 'VALID';
DELETE Clause
An optional DELETE WHERE clause can be used to clean up after a
merge operation. Only those rows which match both the ON clause and
the DELETE WHERE clause are deleted.
MERGE INTO test1 a
USING all_objects b
ON (a.object_id = b.object_id)
WHEN MATCHED THEN
UPDATE SET a.status = b.status
WHERE b.status != 'VALID'
DELETE WHERE (b.status = 'VALID');
A faster option is to execute the MERGE in parallel. This is the
MERGE example on steroids: it uses Oracle's Parallel DML capability
to spread the load over multiple slave processes (note that the
hints reference the aliases used in the statement).
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ first_rows parallel(test) parallel(new) */ INTO test
USING test5 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET fk = new.fk, fill = new.fill;
Quick Additions
- The SQL*PLUS copy command will be deprecated.
- View the error line
The long-awaited Oracle enhancement for pinpointing the line that
raised an error was introduced in the first release of 10g.
DBMS_UTILITY.FORMAT_ERROR_BACKTRACE provides the error stack all the
way back to the source. In a simple example such as the following,
the output is very simple and provides the accurate information we
require:
BEGIN
  EXECUTE IMMEDIATE 'garbage';
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );
    RAISE;
END;
/
ORA-06512: at line 2
BEGIN
*
ERROR at line 1:
ORA-00900: invalid SQL statement
ORA-06512: at line 6
Note, however, that the DBMS_UTILITY.FORMAT_ERROR_BACKTRACE function
does not supply the error message, just the error's propagation
path. We therefore need to include a call to SQLERRM:
BEGIN
  EXECUTE IMMEDIATE 'garbage';
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE( SQLERRM );
    -- DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_ERROR_STACK );
    DBMS_OUTPUT.PUT_LINE( DBMS_UTILITY.FORMAT_ERROR_BACKTRACE );
    RAISE;
END;
/
ORA-00900: invalid SQL statement
ORA-06512: at line 2
BEGIN
*
ERROR at line 1:
ORA-00900: invalid SQL statement
ORA-06512: at line 7
This example demonstrates that we now have sufficient information
for our application logs, while the error stack generated from the
RAISE call in line 7 can be discarded (it is included only to send
the necessary failure signal to the calling program / scheduler /
shell).
-- Make ERROR Comments to Continue Later
In any PL/SQL code, you can use the $ERROR compile-time directive to
leave notes in the code to review later. Example:
declare
a date;
begin
select sysdate into a
from dual;
$ERROR
'Im tired
I see this later
Go to ' || $$PLSQL_UNIT
||' at line ' || $$PLSQL_LINE
$END
end;
Will show:
ERROR at line 5:
ORA-06550: line 5, column 3:
PLS-00179: $ERROR: Im tired
I see this later
Go to at line 8
-- SAMPLE Clause Enhancements
The SAMPLE clause allows a query to return a limited sample of data
by specifying a percentage of rows or blocks to scan. This clause
can now be present in complex queries:
-- Query 10% of rows.
SELECT e.empno, e.ename, d.dname
FROM   emp SAMPLE (10) e
       JOIN dept d ON e.deptno = d.deptno;

-- Query 10% of blocks.
SELECT e.empno, e.ename, d.dname
FROM   emp SAMPLE BLOCK (10) e
       JOIN dept d ON e.deptno = d.deptno;
-- Bigfile Tablespaces
This is a new feature of Oracle 10g.
create bigfile tablespace beeeg_ts datafile '/o1/dat/beeeg.dbf' size 2T;
Bigfile tablespaces are supported only for locally managed
tablespaces with automatic segment space management (the default
setting since Oracle 9i).
--spool in SQL*PLUS
Oracle 10g improves the spool command with
* spool create
* spool replace
* spool append
Example:
spool myfile.txt append
--Whitespace Support in Windows Path and File
Names
Support for whitespaces in file names has been added to the START,
@, @@, RUN, SPOOL, SAVE and EDIT commands. Names containing
whitespaces must be quoted for them to be recognised correctly:
SPOOL "My Report.txt"
@"My Report.sql"
--Glogin, Login and Predefined Variables
The user profile files, glogin.sql and login.sql are now run after
each successful connection in addition to SQL*Plus startup. This is
particularly useful when the login.sql file is used to set the
SQLPROMPT to the current connection details:
- _DATE - Contains the current date (dynamic) by default or else
a fixed string
- _PRIVILEGE - Contains privilege level such as AS SYSDBA, AS
SYSOPER or blank.
- _USER - Contains the current username (like SHOW USER).
- _CONNECT_IDENTIFIER - Contains the connection identifier used to
connect.
So if my login.sql reads:
SET FEED OFF
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
SET FEED ON
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER _DATE> "
I get a SQL prompt of the form:
<username>@<dbname> 08-APR-2004 13:55>
-- SHOW RECYCLEBIN
The SHOW RECYCLEBIN [original_table_name]
option has
been added to display all the contents of the recycle bin, or just
those for a specified table:
show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
BONUS BIN$F5d+By1uRvieQy5o0TVxJA==$0 TABLE 2004-03-23:11:03:38
DEPT BIN$Ie1ifZzHTV6bDhFraYImTA==$0 TABLE 2004-03-23:11:03:38
EMP BIN$Vu5i5jelR5yPGTP2M99vgQ==$0 TABLE 2004-03-23:11:03:38
SALGRADE BIN$L/27VyBRRP+ZGWnZylVbZg==$0 TABLE 2004-03-23:11:03:38
TEST1 BIN$0lObShnuS0+6VS1cvLny0A==$0 TABLE 2004-03-24:15:38:42
show recyclebin test1
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
TEST1 BIN$0lObShnuS0+6VS1cvLny0A==$0 TABLE 2004-03-24:15:38:42
This allows users to inspect the contents of the recycle bin before
a PURGE
or FLASHBACK
operation.
SELECT * FROM RECYCLEBIN;
Remove A Recycle Bin Object By Name: PURGE TABLE RB$$49684$TABLE$0;
Remove Recycle Bin Objects By Tablespace: PURGE TABLESPACE data_sml;
Remove Recycle Bin Objects By Tablespace And User: PURGE TABLESPACE
<tablespace_name> USER <schema_name>;
Empty The Recycle Bin: PURGE recyclebin;
Empty Everything In All Recycle Bins: PURGE dba_recyclebin;
-- OEM Startup Process
apachectl start
apachectl stop
emctl start dbconsole
emctl stop dbconsole
emctl status dbconsole
iSQL*Plus is accessed using the URL:
http://server:5560/isqlplus
Enterprise Manager itself is available at:
http://server:5500
http://server:5500/em
Start and stop the iSQL*Plus process with:
isqlplusctl start
isqlplusctl stop
Automatically Start / Stop the Database and Listener
su - root
cp dbora lsnrora /etc/init.d
rc-update add dbora default
rc-update add lsnrora default
PL/SQL Enhancements in Oracle Database 10g
Performance Tuning Enhancements in Oracle Database 10g
-- Server Generated Alerts
Server Generated Alerts interface with the OS to send e-mail messages
when an external problem is impeding Oracle performance. External
problems might include a UNIX mount point that is full, causing a
failure of ASM files to extend, or a RAM shortage affecting the System
Global Area.
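The thresholds that drive these alerts can be managed through the
DBMS_SERVER_ALERT package. A minimal sketch (the 85%/97% thresholds and
the USERS tablespace are illustrative values, not recommendations):
BEGIN
  DBMS_SERVER_ALERT.set_threshold(
    metrics_id              => DBMS_SERVER_ALERT.tablespace_pct_full,   -- metric: % of tablespace used
    warning_operator        => DBMS_SERVER_ALERT.operator_ge,
    warning_value           => '85',                                    -- warn at 85% full
    critical_operator       => DBMS_SERVER_ALERT.operator_ge,
    critical_value          => '97',                                    -- critical at 97% full
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.object_type_tablespace,
    object_name             => 'USERS');                                -- illustrative tablespace
END;
/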
-- PL/SQL DEVELOPMENTS
Oracle Database 10g PL/SQL has received considerable performance
enhancement work. This work applies to both interpreted and natively
compiled PL/SQL. Oracle Database 10g also allows a degree of
optimization of the PL/SQL code itself, controlled by the init.ora or
session parameter plsql_optimize_level=2.
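For example (the procedure name below is only an illustration):
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 2;
-- Verify the level a given unit was compiled with:
SELECT name, plsql_optimize_level
FROM user_plsql_object_settings
WHERE name = 'MY_PROC';  -- hypothetical procedure name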
--Easier and more Secure Encryption
Remember the package DBMS_OBFUSCATION_TOOLKIT (DOTK)? It was the
only available method to achieve encryption inside the database in
Oracle9i and below. While the package was sufficient for most
databases, like most security products, it was quickly rendered
ineffective against sophisticated hacker attacks involving highly
sensitive information. Notable among the missing functionality was
support for Advanced Encryption Standard (AES), a more powerful
successor to the older Data Encryption Standard (DES) and Triple DES
(DES3).
In 10g, a more sophisticated encryption apparatus, DBMS_CRYPTO,
comes to the rescue. This built-in package offers all the
functionalities lacking in DOTK, in addition to enhancing existing
functions and procedures. For example, DBMS_CRYPTO can encrypt in
the new 256-bit AES algorithm. The function ENCRYPT (which is also
overloaded as a procedure) accepts a few parameters:
Parameter | Description
SRC | The input to be encrypted. It must be in RAW data type; any other
data type must be converted. For instance, the character variable
l_inp is converted by:
utl_i18n.string_to_raw (p_in_val, 'AL32UTF8');
Because the string must be converted to RAW in the character set
AL32UTF8, a new package called UTL_I18N is used. Unlike DOTK,
DBMS_CRYPTO does not accept character variables as parameters. Another
point to note is that you do not have to pad the character value to
make the length a multiple of 16, as you did with the DOTK package.
The function (or procedure) pads it automatically.
KEY | The encryption key is specified here. The key must be of
appropriate length based on the algorithm used.
TYP | The type of encryption and padding used is specified in this
parameter. For example, if you want to use the AES 256-bit algorithm,
Cipher Block Chaining, and PKCS#5 padding, you would use the built-in
constants here as:
typ => dbms_crypto.encrypt_aes256 + dbms_crypto.chain_cbc + dbms_crypto.pad_pkcs5
The ENCRYPT function returns the
encrypted value in RAW, which can be converted into strings using
utl_i18n.raw_to_char (l_enc_val, 'AL32UTF8')
which is the reverse of the casting to RAW.
The opposite of encryption is decryption,
provided by the function (and overloaded as a procedure) DECRYPT,
which accepts analogous parameters. Using this new package, you
can build sophisticated security models inside your database
applications.
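As a quick illustration, here is a minimal encrypt/decrypt round trip
(the key and input values are illustrative only; in practice the key
should be generated with DBMS_CRYPTO.RANDOMBYTES and stored securely):
DECLARE
  l_typ PLS_INTEGER := DBMS_CRYPTO.encrypt_aes256
                     + DBMS_CRYPTO.chain_cbc
                     + DBMS_CRYPTO.pad_pkcs5;
  l_key RAW(32)     := UTL_RAW.cast_to_raw('0123456789abcdef0123456789abcdef'); -- 32-byte demo key
  l_in  RAW(2000)   := UTL_I18N.string_to_raw('Sensitive data', 'AL32UTF8');
  l_enc RAW(2000);
  l_dec RAW(2000);
BEGIN
  l_enc := DBMS_CRYPTO.encrypt(src => l_in,  typ => l_typ, key => l_key);
  l_dec := DBMS_CRYPTO.decrypt(src => l_enc, typ => l_typ, key => l_key);
  DBMS_OUTPUT.put_line(UTL_I18N.raw_to_char(l_dec, 'AL32UTF8'));
END;
/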
Oracle File
Copies
Oracle10G's DBMS_FILE_TRANSFER PL/SQL package provides
administrators with a mechanism to copy binary files between
Oracle databases without using OS commands or FTP. The
transfer package can be executed locally to transfer files to
another database server or can be executed remotely to transfer
files between two remote databases. Currently, the only
files that can be copied using this mechanism are DataPump dump
sets and tablespace data files. In addition, the file size
must be a multiple of 512 bytes and less than 2 terabytes.
Using the file transfer package in conjunction with Oracle's
transportable tablespace feature allows administrators to totally
automate tablespace data transfers from one database to
another. The process to unplug tablespace data files
from the source database, copy the files to the destination server
and plug the tablespace data files into the target database can
now be executed on a recurring basis by batch jobs initiated by
DBMS_JOB, OEM, cron, AT and third-party schedulers. The transferred
files created on the target platforms are owned by the Oracle account
and can be accessed by all database processes. For long copy
operations, progress is displayed in the V$SESSION_LONGOPS view.
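A minimal sketch of a local copy (the directory objects and file name
are hypothetical; DBMS_FILE_TRANSFER.PUT_FILE and GET_FILE handle the
remote cases over a database link):
CREATE DIRECTORY src_dir AS '/u02/dpdump';
CREATE DIRECTORY dst_dir AS '/u03/dpdump';

BEGIN
  DBMS_FILE_TRANSFER.copy_file(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'expdp_full.dmp',   -- hypothetical dump file
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'expdp_full.dmp');
END;
/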
Redo Log File Size Advisor
Describing the process of determining the size of a database's redo
logfiles as "somewhat error-prone" is like stating that the Titanic
sprang a small leak. Administrators must balance the performance
implications of redo logfiles that are too small against the recovery
implications of redo logfiles that are too large. Oracle10g comes to
the rescue with another new advisor, the Redo Logfile Size Advisor.
The advisor suggests the smallest online redo logfile size based on the
current FAST_START_MTTR_TARGET parameter and workload statistics. You
just need to run:
select optimal_logfile_size from v$instance_recovery;
Initialization Parameters
In previous releases of Oracle, all parameters were considered equally
important. This made the administration and tuning of the database
very difficult, because database administrators needed to become
familiar with over 200 parameters. Oracle 10g introduces two classes
of parameter: basic and advanced. In most cases, you need only set up
the basic parameters for an Oracle 10g instance.
These basic parameters include:
- cluster_database
- compatible
- control_files
- db_block_size
- db_create_file_dest
- db_create_online_log_dest_n
- db_domain
- db_name
- db_recovery_file_dest
- db_recovery_file_dest_size
- instance_number
- job_queue_processes
- log_archive_dest_n
- log_archive_dest_state_n
- nls_language
- nls_territory
- open_cursors
- pga_aggregate_target
- processes
- remote_listener
- remote_login_passwordfile
- rollback_segments
- sessions
- sga_target
- shared_servers
- star_transformation_enabled
- undo_management
- undo_tablespace
The following is an example of the parameter file generated by the
DBCA utility:
###########################################
# Archive
###########################################
log_archive_dest_1='LOCATION=/u03/arch/grid'
log_archive_format=%t_%s_%r.dbf
###########################################
# Cache and I/O
###########################################
db_block_size=8192
db_cache_size=25165824
db_file_multiblock_read_count=16
###########################################
# Cursors and Library Cache
###########################################
open_cursors=300
###########################################
# Database Identification
###########################################
db_domain=world
db_name=grid
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/u01/app/oracle/admin/grid/bdump
core_dump_dest=/u01/app/oracle/admin/grid/cdump
user_dump_dest=/u01/app/oracle/admin/grid/udump
timed_statistics=TRUE
###########################################
# File Configuration
###########################################
control_files=("/u02/ctl/grid/control01.ctl",
"/u02/ctl/grid/control02.ctl")
###########################################
# Job Queues
###########################################
job_queue_processes=10
###########################################
# Compatibility
###########################################
compatible=10.1.0.0.0
###########################################
# Optimizer
###########################################
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
###########################################
# Pools
###########################################
java_pool_size=104857600
large_pool_size=8388608
shared_pool_size=104857600
###########################################
# Processes and Sessions
###########################################
processes=150
###########################################
# Redo Log and Recovery
###########################################
fast_start_mttr_target=300
###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=EXCLUSIVE
###########################################
# Sort, Hash Joins, Bitmap Indexes
###########################################
pga_aggregate_target=25165824
sort_area_size=524288
###########################################
# System Managed Undo and Rollback Segments
###########################################
undo_management=AUTO
undo_retention=10800
undo_tablespace=UNDOTBS1
In Oracle9i (9.2.0.3) there are 258 parameters in the v$parameter
view. In Oracle 10g (10.1.0.0) there are more than 250 parameters in
the v$parameter view. 233 parameters appear in both Oracle9i Release 2
and Oracle 10g Release 1.
Old Parameters
Twenty-five of the 258 parameters no longer exist in Oracle 10g's
v$parameter view.
New Parameters
There are twenty more new parameters in Oracle 10g Release 1.
V$SQLSTATS Performance View
Before we discuss the new V$SQLSTATS view, let's review some tuning
information. V$SQLAREA is one of the best SQL tuning views. I use the
two queries below to identify poorly performing SQL. I take the
traditional "top down" tuning approach and start tuning the highest
resource-consuming SQL identified by the scripts below.
The following query identifies the SQL responsible for the most disk
reads:
SELECT disk_reads, executions,
disk_reads/executions, hash_value, sql_text
FROM v$sqlarea
WHERE disk_reads > 5000
ORDER BY disk_reads;
The following query identifies the SQL responsible for the most buffer
gets:
SELECT buffer_gets, executions,
buffer_gets/executions, hash_value, sql_text
FROM v$sqlarea
WHERE buffer_gets > 100000
ORDER BY buffer_gets;
You can create a more readable report in SQL*Plus by inserting report
breaks between the output lines. To generate the report breaks, issue
one of the following statements before running the query:
BREAK ON disk_reads SKIP 2   -- for the disk read report
BREAK ON buffer_gets SKIP 2  -- for the buffer get report
It's common
knowledge that poorly performing SQL is responsible for the
majority of database performance problems. The first query
returns SQL statements responsible for generating disk reads
greater than 5,000 while the second query returns SQL statements
responsible for generating buffer reads greater than 100,000.
These are good numbers to start with and you can adjust them
according to the size of the system you are tuning. You'll
notice that I divide the number of disk and buffer reads by the
number of statement executions. If a statement is generating
1,000,000 disk reads but is executed 500,000 times, it probably
doesn't need tuning. Heavy disk reads per statement execution
usually means a lack of proper indexing. Heavy buffer reads
usually means the exact opposite - indexes are being used when
they shouldn't be.
But the SQL_TEXT column in V$SQLAREA does not provide the entire text
of the SQL statement. That's why I include the HASH_VALUE column in
the report. I can use that value to dump the entire SQL statement from
V$SQLTEXT using the statement below (where xxxxxxxxx is the value in
the HASH_VALUE column from the V$SQLAREA reports above):
SELECT sql_text FROM v$sqltext WHERE hash_value = xxxxxxxxx ORDER BY piece;
Oracle 10G R2
provides a new view called V$SQLSTATS that contains a
combination of columns that appear in V$SQL and V$SQLAREA. The
benefits that V$SQLSTATS provides are as follows:
- Since
V$SQLSTATS contains the entire text of the SQL statement AND
its associated performance statistics, we are no longer
required to access both the V$SQLTEXT and V$SQLAREA to obtain
the information we need.
- Oracle
states that V$SQLSTATS is faster and more scalable.
- The data in
V$SQLAREA has a tendency to get its contents flushed out just
when you need to get additional information from it. The
V$SQLSTATS view provides users with a longer access window.
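For instance, the disk-read report above could be rewritten against
V$SQLSTATS (a sketch; SQL_FULLTEXT is a CLOB holding the complete
statement, so no separate lookup in V$SQLTEXT is needed):
SELECT sql_id, disk_reads, executions,
       disk_reads/executions, sql_fulltext
FROM v$sqlstats
WHERE disk_reads > 5000
ORDER BY disk_reads;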
FlashBack Command
This feature allows you to view the state of your database at
a specified prior point in time. Oracle does this by keeping
copies of all modified data blocks in flashback logs. The
Flashback logs are written in the Flash Recovery Area; a
directory specified by a new parameter db_recovery_file_dest.
Suppose you deleted/modified the configuration information for
your application. Instead of performing a recovery operation
on this database (and having the end users screaming while the
application is offline), you can just ask the database to “put
the table back the way it was 5 minutes ago”.
Oracle automatically creates and manages Flashback Logs within
the Flash Recovery Area. Since the Flash Recovery Area
is configured with a space quota, the Flashback Logs are
subject to those disk space restrictions. The size of
Flashback Logs can vary considerably, depending on the
read/write ratio of database changes during a given
flashback-logging interval. A copy of the block changes is
written to the Flashback Log. If, over the course of a day,
10% of the database blocks are updated, then the size of
Flashback Logs for 24 hours is 1/10th the size of your
database.
Flashback Database can be used on both the primary and standby
database to quickly revert the databases to an earlier
point-in-time to back out user errors. Alternatively, if the
administrator decides to failover to a standby database, but
those user-errors were already applied to the standby database
(say, because Real Time Apply was enabled), the administrator
may simply flashback the standby database to a safe point in
time. Finally, the administrator has the added option not to
use the Real Time Apply feature at one or more standby
databases, and instead delay the application of redo data on
those standby databases by a configurable amount of time,
which provides a window of protection from such user errors or
corruptions. The performance overhead of enabling Flashback
Database is less than 2%.
First of all, you will need to set up the DB in Flashback Database
mode:
1- Set up the DB in archive log mode.
2- Then set the parameters DB_RECOVERY_FILE_DEST (location of the
Flash Recovery Area) and DB_RECOVERY_FILE_DEST_SIZE (maximum amount of
space allocable for the flash recovery area). How big the flashback
area ought to be is contingent on quite a few factors, such as the
size of the datafiles, redo logs and controlfiles. You also have to be
aware of the mean frequency and number of your block changes, whether
you store backups only on disk or on disk and tape, and whether you
use a redundancy-based retention policy or a recovery window-based
retention policy, etc.
NOTE - In RAC environments, the Flash Recovery Area must be stored in
the cluster's shared storage: ASM, NAS, etc.
3- Then enable Flashback:
shutdown;
startup mount exclusive;
alter system set db_flashback_retention_target=4320;
-- flashback to be retained for three days (specified in minutes)
alter system set DB_RECOVERY_FILE_DEST_SIZE=536870912;
alter system set DB_RECOVERY_FILE_DEST =
'C:\oracle\RMAN_Backup\Flash_Recovery_Area';
alter database flashback on;
alter database open;
To be able to use Flashback Table, you will need the following
parameters:
undo_management=auto
undo_retention=<a number greater than zero>
undo_tablespace=<the name of your undo tablespace>
Check DB Status:
SELECT flashback_on, log_mode FROM v$database;
So, that enables flashback for our database.
We can query v$flashback_database_log to see what's going on:
set linesize 200
select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
             8292258 18-JUN-10             4320        8192000                        0
We see that oldest SCN that we can flashback to is SCN
8292258. In other words this is our baseline SCN. The entire
technology of flashback database is being implemented from
this SCN. We can easily convert the SCN to a timestamp, if we
are interested in seeing the time from which flashback
database stands enabled.
select scn_to_timestamp(8292258) from dual;
SCN_TO_TIMESTAMP(8292258)
---------------------------------
18-JUN-10
10.49.28.000000000 AM
Types Of FlashBack Recoveries
There are seven basic types of FlashBack recovery; these are discussed
below in detail:
1- Flashback Query
2- Flashback Version Query
3- Flashback Transaction Query
4- Flashback Table
5- Flashback Drop (Recycle Bin)
6- Flashback Database
7- Flashback Query Functions
Object Level | Scenario                                    | Flashback Technology       | Traditional Recovery
Database     | Drop User                                   | FLASHBACK DATABASE         | Point-In-Time Recovery
Database     | Truncate Table                              | FLASHBACK DATABASE         | Point-In-Time Recovery
Database     | Batch job errors out leaving a number of    | FLASHBACK DATABASE         | Database Point-In-Time Recovery
             | tables partially updated                    |                            |
Table        | Drop Table                                  | FLASHBACK DROP             | Point-In-Time Recovery
Table        | Update without the proper 'where' clause    | FLASHBACK TABLE            | Point-In-Time Recovery
Table        | Recover deleted data or undo incorrect      | FLASHBACK QUERY or         | Tablespace Point-In-Time Recovery
             | changes, even after they are committed      | FLASHBACK TABLE            |
Table        | Comparing current data against the data at  | FLASHBACK QUERY            | (The space intentionally left blank.)
             | some time in the past                       |                            |
Transaction  | Batch job runs twice, but not really sure   | FLASHBACK VERSIONS QUERY & | Database Point-In-Time Recovery
             | of the objects affected                     | FLASHBACK TRANSACTION QUERY|
1) Flashback Query:
You perform a Flashback Query using a SELECT statement with an AS OF
clause. You use a Flashback Query to retrieve data as it existed at
some time in the past. The query explicitly references a past time
using a timestamp or SCN, and returns committed data that was current
at that point in time. The black magic that makes this possible is the
combination of UNDO tablespaces and automatic UNDO management.
Example
This example uses a Flashback Query to examine the state of a table at
a previous time. Suppose, for instance, that a DBA discovers at 12:30
PM that data for employee JOHN had been deleted from the employee
table, and the DBA knows that at 9:30 AM the data for JOHN was
correctly stored in the database. The DBA can use a Flashback Query to
examine the contents of the table at 9:30, to find out what data had
been lost. If appropriate, the DBA can then re-insert the lost data
into the database.
The following query retrieves the state of the employee record
for JOHN at 9:30AM, April 4, 2003:
SELECT * FROM employee
AS OF TIMESTAMP
TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name
= 'JOHN';
This INSERT then restores John's information to the employee table:
INSERT INTO employee
(SELECT * FROM employee AS OF TIMESTAMP
TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name = 'JOHN');
2) Flashback Version Query
The Flashback Versions Query is a powerful tool for the DBA to
run analysis and answer the question, 'How did this happen?'
Not only can the DBA run manual analysis, but this is a
powerful tool for the application's developer as well. You can
build customized applications for auditing purposes. Now
everyone really is accountable for his or her actions. Various
elements for this are shown below :
VERSIONS_XID - The transaction id that created this version of
the row
VERSIONS_OPERATION - The action that created this version of
the row (such as delete, insert, and update)
VERSIONS_STARTSCN - The SCN in which this row version first
occurred
VERSIONS_ENDSCN - The SCN in which this row version was
changed.
E.g.: we use the DEPT table in the SCOTT schema and update deptno 10
to 11, then to 12, then to 13 (with a commit after every update). Thus
we have run three updates, so a query like
select to_char(versions_starttime, 'DD-MON-YY HH24:MI:SS') versions_starttime,
       to_char(versions_endtime, 'DD-MON-YY HH24:MI:SS') versions_endtime,
       versions_xid,
       versions_operation,
       deptno
from dept versions between timestamp minvalue and maxvalue
order by versions_starttime;
will return something like:

VERSIONS_STARTTIME    VERSIONS_ENDTIME      VERSIONS_XID     V     DEPTNO
--------------------- --------------------- ---------------- - ----------
01-DEC-03 03.57.12 PM 01-DEC-03 03.57.30 PM 0002002800000C61 I         10
01-DEC-03 03.57.30 PM 01-DEC-03 03.57.39 PM 000A000A00000029 U         11
01-DEC-03 03.57.39 PM 01-DEC-03 03.57.55 PM 000A000B00000029 U         12
We can automatically generate the undo statement (as the SYS user)
using:
SELECT UNDO_SQL
FROM
FLASHBACK_TRANSACTION_QUERY
WHERE XID =
'04001A00B90A0000';
-- output: update "SCOTT"."DEPT" set "DEPTNO" = '12' where ROWID
= 'AAAMicAAEAAAAA/AAA';
3) Flashback Transaction Query
A Flashback Transaction Query is a query on the view
FLASHBACK_TRANSACTION_QUERY. You use a Flashback Transaction Query to
obtain transaction information, including SQL code that you can use to
undo each of the changes made by the transaction. It uses the XID from
above and shows the complete transaction steps for that XID, e.g.:
SELECT xid, start_scn, commit_scn, operation, logon_user, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('000200030000002D');
XID              START_SCN COMMIT_SCN OPERATION LOGON_USER UNDO_SQL
---------------- --------- ---------- --------- ---------- ---------------------------
000200030000002D    195243     195244 DELETE    HR         insert into "HR"."EMP"
                                                           ("EMPNO","EMPNAME","SALARY")
                                                           values ('111','Mike','655');
000200030000002D    195243     195244 INSERT    HR         delete from "HR"."DEPT"
                                                           where ROWID = 'AAAKD4AABAAAJ3BAAB';
000200030000002D    195243     195244 UPDATE    HR         update "HR"."EMP" set
                                                           "SALARY" = '555' where
                                                           ROWID = 'AAAKD2AABAAAJ29AAA';
000200030000002D    195243     113565 BEGIN     HR
4) Flashback Table
Just as Flashback Query helps retrieve rows of a table, FLASHBACK
TABLE helps restore the state of a table to a certain point in time,
provided no change to the table structure has occurred since then. It
all depends on the undo_retention parameter.
You can see a list of objects that can be recovered by using the
following query:
drop table BANK;
show recyclebin

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
ACCOUNT          BIN$GJBH/nI3QIuQZWOapqJXHg==$0 TABLE        2007-10-17:09:42:45
CUSTOMER         BIN$qXjGDja/S2iCMedZ7wu3vg==$0 TABLE        2007-10-17:09:42:46
BANK             BIN$MFryFUanRRS2EVQD/hl9iQ==$0 TABLE        2007-11-09:09:42:45
or
select object_name, original_name, operation, type
from user_recyclebin;

OBJECT_NAME                    ORIGINAL_NAME                    OPERATION TYPE
------------------------------ -------------------------------- --------- ---------
BIN$MFryFUanRRS2EVQD/hl9iQ==$0 BANK                             DROP      TABLE
BIN$qXjGDja/S2iCMedZ7wu3vg==$0 CUSTOMER                         DROP      TABLE
BIN$GJBH/nI3QIuQZWOapqJXHg==$0 ACCOUNT                          DROP      TABLE
BIN$2LBk+8W3T92RhXpBb6Y4PA==$0 UN_BANK_BANKCODECENTERID         DROP      INDEX
The FLASHBACK TABLE command uses the underlying flashback query
technology to put the table back the way it was, provided no database
integrity constraints would be violated. In addition to being able to
simply put a table back the way it was in the past, the FLASHBACK
TABLE command also allows you to undrop a database table. Example:
The table BANK is
gone but note the presence of the new table BIN$MFryFUanRRS2EVQD/hl9iQ==$0. Here's what happened: The dropped table BANK,
instead of completely disappearing, was renamed to a
system-defined name. It stays in the same tablespace, with the
same structure as that of the original table. If there are indexes
or triggers defined on the table, they are renamed too, using the
same naming convention used by the table. Any dependent sources
such as procedures are invalidated; the triggers and indexes of
the original table are instead placed on the renamed table BIN$MFryFUanRRS2EVQD/hl9iQ==$0, preserving the complete object structure of the
dropped table.
The table and its associated objects
are placed in a logical container known as the "recycle bin,"
which is similar to the one in your PC. However, the objects are
not moved from the tablespace they were in earlier; they still
occupy the space there. The recycle bin is merely a logical
structure that catalogs the dropped objects. Use the following
command from the SQL*Plus prompt to see its content (you'll need
SQL*Plus 10.1 to do this):
SQL> show recyclebin
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
ACCOUNT BIN$GJBH/nI3QIuQZWOapqJXHg==$0 TABLE 2007-10-17:09:42:45
CUSTOMER BIN$qXjGDja/S2iCMedZ7wu3vg==$0 TABLE 2007-10-17:09:42:46
BANK BIN$MFryFUanRRS2EVQD/hl9iQ==$0 TABLE 2007-11-09:09:42:45
This shows the original name of the table,
BANK, as well as the new name in the recycle bin, which has the
same name as the new table we saw created after the drop. (Note:
the exact name may differ by platform.) To reinstate the table,
all you have to do is use the FLASHBACK TABLE command:
SQL> FLASHBACK TABLE BANK TO BEFORE DROP;
FLASHBACK COMPLETE.
SQL> SELECT * FROM TAB;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BANK TABLE
Voila! The table is reinstated
effortlessly. If you check the recycle bin now, it will be
empty.
The following simple
command will take us to the table state at the specified
timestamp. eg
FLASHBACK TABLE Employee
TO TIMESTAMP
('13-SEP-04 8:50:58','DD-MON-YY HH24: MI: SS');
5) Flashback Drop (Recycle Bin)
Dropping of objects by accident has always been a problem for
users and DBAs alike. Users soon realize their mistake but then
it's too late and historically there is no easy way to recover
those dropped tables, indexes, constraints, triggers, etc.
Flashback Drop provides a safety net when dropping objects in
Oracle Database 10g. When a user drops a table, Oracle
automatically places it into the Recycle Bin.
What is the Recycle Bin?
The Recycle Bin is a virtual container where all dropped objects
reside. Underneath the covers, the objects are occupying the
same space as when they were created. If table EMP was created
in the USERS tablespace, the dropped table EMP remains in the
USERS tablespace. Dropped tables and any associated objects such
as indexes, constraints, nested tables, and other dependent objects
are not moved; they are simply renamed with a prefix of BIN$. You can
continue to access the data in a dropped table or even use Flashback
Query against it.
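For instance, using the hypothetical recycle-bin name shown earlier
(the name must be quoted because of its special characters):
SELECT COUNT(*) FROM "BIN$MFryFUanRRS2EVQD/hl9iQ==$0";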
FLASHBACK TABLE dept_test
TO BEFORE DROP;
Remember, placing tables in the recycle
bin does not free up space in the original tablespace. To free
the space, you need to purge the bin using:
PURGE RECYCLEBIN;
But what if you want to drop the table
completely, without needing a flashback feature? In that case, you
can drop it permanently using:
DROP TABLE RECYCLETEST
PURGE;
This command will not rename the table to the
recycle bin name; rather, it will be deleted permanently, as it
would have been pre-10g.
Managing the Recycle Bin
If the tables are not really dropped in this process, and therefore do
not release their tablespace, what happens when the dropped objects
take up all of that space? The answer is simple: that situation does
not even arise.
even arise. When a tablespace is completely filled up with
recycle bin data such that the datafiles have to extend to make
room for more data, the tablespace is said to be under "space
pressure." In that scenario, objects are automatically purged
from the recycle bin in a first-in-first-out manner. The
dependent objects (such as indexes) are removed before a table
is removed. Similarly, space
pressure can occur with user quotas as defined for a particular
tablespace. The tablespace may have enough free space, but the
user may be running out of his or her allotted portion of it. In
such situations, Oracle automatically purges objects belonging
to that user in that tablespace. In addition, there are several ways you can
manually control the recycle bin. If you want to purge the
specific table named TEST from the recycle bin after its drop,
you could issue
PURGE TABLE TEST;
or using its recycle bin name:
PURGE TABLE
"BIN$04LhcpndanfgMAAAAAANPw==$0";
This command will remove table TEST and all dependent objects such
as indexes, constraints, and so on from the recycle bin, saving
some space. If, however, you want to permanently drop an index
from the recycle bin, you can do so using:
purge index
in_test1_01;
which will remove the index only,
leaving the copy of the table in the recycle bin. Sometimes it might be useful to purge at a higher
level. For instance, you may want to purge all the objects in
recycle bin in a tablespace USERS. You would issue:
PURGE TABLESPACE USERS;
You may want to purge only the recycle bin for a particular user
in that tablespace. This approach could come handy in data
warehouse-type environments where users create and drop many
transient tables. You could modify the command above to limit the
purge to a specific user only:
PURGE TABLESPACE USERS USER SCOTT;
A user such as SCOTT would clear his own recycle bin with
PURGE RECYCLEBIN;
You as a DBA can purge all the objects in any tablespace using
PURGE DBA_RECYCLEBIN;
As you can see, the recycle bin can be managed in a variety of
different ways to meet your specific needs.
6) Flashback Database
We have talked about object-level recovery before this; now let's talk
about what happens if something goes wrong with the whole database.
Flashback Database quickly rewinds the complete database to a previous
time, to correct any logical data corruption. It can be used with both
RMAN and SQL*Plus. Some of the options are:
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12); -- recover to 5 minutes ago
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
We can also "mark the DB" at a point to restore to:
CREATE RESTORE POINT BEFORE_MY_CHANGES;
And then use the following to recover:
FLASHBACK DATABASE TO RESTORE POINT BEFORE_MY_CHANGES;
After the system comes back with FLASHBACK COMPLETE, open the database
with RESETLOGS.
E.g.:
-- Create a dummy table.
CREATE TABLE flashback_database_test (id NUMBER(10));
-- Flashback 5 minutes.
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12);
ALTER DATABASE OPEN RESETLOGS;
-- Check that the table is gone.
DESC flashback_database_test
7) Flashback Query Functions
The TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP functions have been added to
SQL and PL/SQL to simplify flashback operations, e.g.:
DECLARE
  l_scn       NUMBER;
  l_timestamp TIMESTAMP;
BEGIN
  l_scn       := TIMESTAMP_TO_SCN(SYSTIMESTAMP - 1/24);
  l_timestamp := SCN_TO_TIMESTAMP(l_scn);
END;
/
Limits
• There is one little catch you need to be aware of: it doesn't work
on the SYSTEM tablespace.
• As far as the performance overhead of enabling Flashback Database is
concerned, it is less than 2%.
Turning Flashback OFF
Shutdown;
startup mount exclusive;
alter database flashback off;
alter database open;
More Information:
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10734/toc.htm
http://www.oracle.com/technology/obe/obe10gdb/ha/flashback/flashback.htm
-- Oracle By Example (OBE).
Restore to the Point
Imagine this scenario: you need to create a variety of test
scenarios for your applications. For each scenario, you put
together the test data, and after running each test, you need
to modify the data to bring back the pretest values.
Or you need to update your DB from the vendor to modify the
database structure, modify data, and so on, as part of the
application upgrade process. Most upgrades go smoothly;
however, when an upgrade fails, you need to rollback all the
changes.
It's possible to perform these actions and reinstate the
database to a certain point in time using a simple command:
flashback database
In Oracle Database 10g Release 2 the functionality is now
enhanced significantly by the ability to name a specific point
in time, called a restore point.
Demonstration (assume that the DB is in archive log mode):
Enable flashback:
startup mount;
alter system set db_recovery_file_dest_size = 2G;
alter system set db_recovery_file_dest = '/u02/flashbackarea/acmeprd';
In flashback mode,
the database creates flashback log files, which record the old
images of the data after a change is made. These files are kept in
the location specified by the db_recovery_file_dest parameter, up
to the size specified by the db_recovery_file_dest_size parameter,
which in this case is set to 2GB.
Then enable flashback logging with:
alter database flashback on;
alter database open;
Then create a restore point named qa_gold:
create restore point qa_gold;
You can monitor your restore points with:
select name, scn, time, guarantee_flashback_database
from v$restore_point;
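The GUARANTEE_FLASHBACK_DATABASE column above reflects a related 10g
Release 2 option: a guaranteed restore point, whose flashback logs are
never aged out of the flash recovery area (the name below is
illustrative):
create restore point qa_gold_keep guarantee flashback database;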
Now you can start running your tests. Once that is done, rolling back
all your changes is as simple as:
shutdown immediate;
startup mount;
flashback database to restore point qa_gold;
That's it; the database is now "rewound" to the restore point
named qa_gold. There was no need to back up the database and
perform a point-in-time recovery.
Another Example:
For this, I am going to create a table S and then truncate it. But
before truncating the table I need to find out the SCN and/or
timestamp to which I will revert after truncating my table.
create table s as select * from tab;
select current_scn, scn_to_timestamp(current_scn) from
v$database;
CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
----------- -----------------------------
     591023 10-MAY-07 11.53.52.000000000 AM
truncate table s;
shutdown immediate
startup mount
flashback database to scn 591023;
alter database open resetlogs;
select * from s where rownum=1;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
ICOL$                          TABLE            4
select count(*) from s;
COUNT(*)
----------
3339
So, that's it. It's that simple. On the other hand, if you aren't
satisfied with what you have got after flashing back, you can simply
undo the results of the entire flashback operation by issuing the
RECOVER DATABASE command. This will perform a complete recovery by
applying all of the changes from the archived logs, restoring the
database to its current state. Or, in case you feel you haven't
flashed back far enough, you can run the FLASHBACK DATABASE command
once again to go back further.
Case Insensitive Searching
In Oracle Database 10g, Oracle provides case-insensitive and
accent-insensitive options for linguistic sorts.
The following example shows a GENERIC_BASELETTER query. First
create a table called test5:
CREATE TABLE
test5(product VARCHAR2(20));
INSERT INTO test5
VALUES('DATABASE');
INSERT INTO test5
VALUES('dätäbase');
INSERT INTO test5
VALUES('database');
INSERT INTO test5
VALUES('Database');
Set NLS_COMP to ANSI to perform a linguistic sort based on the
value of NLS_SORT:
ALTER SESSION SET
NLS_COMP=ANSI;
Set NLS_SORT to GENERIC_BASELETTER:
ALTER SESSION SET
NLS_SORT=GENERIC_BASELETTER;
Again select database from test5:
SELECT * FROM test5
WHERE product='database';
PRODUCT
--------------------
DATABASE
dätäbase
database
Database
Note that all of the rows of test5 are selected.
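Oracle 10g also adds case-insensitive (_CI) and accent-insensitive
(_AI) variants of the linguistic sorts. If, for example, you want to
ignore case but still distinguish accents, you should be able to use:
ALTER SESSION SET NLS_SORT=BINARY_CI;
SELECT * FROM test5 WHERE product='database';
With this setting the query should return DATABASE, database and
Database, but not the accented row dätäbase.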
Querying Changes to a Table
Thanks to the Flashback Versions Query feature, Oracle 10g can present
the changes made to data between two points in time easily and
efficiently.
In this example, I have used a bank's foreign currency management
application. The database has a table called RATES that records
exchange rates at specific times.
SQL> desc rates
Name Null? Type
----------------- -------- ------------
CURRENCY VARCHAR2(4)
RATE NUMBER(15,10)
This table shows the exchange rate of US$
against various other currencies as shown in the CURRENCY column.
In the financial services industry, exchange rates are not merely
updated when changed; rather, they are recorded in a history.
Up until now, the only option was to create a rate history table
to store the rate changes, and then query that table to see if a
history is available. Another option was to record the start and
end times of the applicability of the particular exchange rate in
the RATES table itself. When the change occurred, the END_TIME
column in the existing row was updated to SYSDATE and a new row
was inserted with the new rate with the END_TIME as NULL.
In Oracle Database 10g, however, the Flashback Versions Query
feature obviates the need to maintain a history table or store
start and end times. Rather, using this feature, you can get the
value of a row as of a specific time in the past with no
additional setup.
For example, say that the DBA, in the course of normal business,
updates the rate several times—or even deletes a row and reinserts
it:
insert into rates values
('EURO',1.1012);
commit;
update rates set rate =
1.1014;
commit;
update rates set rate =
1.1013;
commit;
delete rates;
commit;
insert into rates values
('EURO',1.1016);
commit;
update rates set rate =
1.1011;
commit;
After this set of activities, the DBA would get the current
committed value of RATE column by
SQL> select * from rates;

CURR       RATE
---- ----------
EURO     1.1011
This output shows the current value of the RATE, not all the
changes that have occurred since the first time the row was
created. Thus using Flashback Query, you can find out the value at
a given point in time; but we are more interested in building an
audit trail of the changes—somewhat like recording changes through
a camcorder, not just as a series of snapshots taken at a certain
point.
The following query shows the changes made to the table:
select versions_starttime,
versions_endtime, versions_xid,
versions_operation, rate
from rates versions between
timestamp minvalue and maxvalue
order by VERSIONS_STARTTIME;
VERSIONS_STARTTIME    VERSIONS_ENDTIME      VERSIONS_XID     V       RATE
--------------------- --------------------- ---------------- - ----------
01-DEC-03 03.57.12 PM 01-DEC-03 03.57.30 PM 0002002800000C61 I     1.1012
01-DEC-03 03.57.30 PM 01-DEC-03 03.57.39 PM 000A000A00000029 U     1.1014
01-DEC-03 03.57.39 PM 01-DEC-03 03.57.55 PM 000A000B00000029 U     1.1013
01-DEC-03 03.57.55 PM                       000A000C00000029 D     1.1013
01-DEC-03 03.58.07 PM 01-DEC-03 03.58.17 PM 000A000D00000029 I     1.1016
01-DEC-03 03.58.17 PM                       000A000E00000029 U     1.1011
Note that all the changes to the row are shown here, even when the
row was deleted and reinserted. The VERSION_OPERATION column shows
what operation (Insert/Update/Delete) was performed on the row.
This was done without any need of a history table or additional
columns.
The column versions_xid shows the identifier of the transaction
that changed the row. More details about the transaction can be
found from the view FLASHBACK_TRANSACTION_QUERY, where the column
XID shows the transaction id. For instance, using the VERSIONS_XID
value 000A000D00000029 from above, the UNDO_SQL value shows the
actual statement.
SELECT UNDO_SQL
FROM
FLASHBACK_TRANSACTION_QUERY
WHERE XID =
'000A000D00000029';
UNDO_SQL
----------------------------------------------------------------------------
insert into
"ANANDA"."RATES"("CURRENCY","RATE") values ('EURO','1.1013');
In addition to the actual statement, this view also shows the
timestamp and SCN of commit and the SCN and timestamp at the start
of the query, among other information.
Finding Out Changes During a Period
Now, let's see how we can use the information effectively. Suppose
we want to find out the value of the RATE column at 3:57:54 PM. We
can issue:
select rate, versions_starttime, versions_endtime
from rates versions between timestamp
     to_date('12/1/2003 15:57:54','mm/dd/yyyy hh24:mi:ss')
 and to_date('12/1/2003 16:57:55','mm/dd/yyyy hh24:mi:ss');
      RATE VERSIONS_STARTTIME     VERSIONS_ENDTIME
---------- ---------------------- ----------------------
    1.1011

This query is similar to the flashback queries. In the above example,
the start and end times are null, indicating that the rate did not
change during the time period; the row's single version simply spans
it. You could also use the SCN to find the value of a
version in the past. The SCN numbers can be obtained from the
pseudo-columns VERSIONS_STARTSCN and VERSIONS_ENDSCN. Here is an
example:
select rate, versions_starttime, versions_endtime
from rates versions
between scn 1000 and
1001;
Using the keywords MINVALUE and MAXVALUE, all the changes that are
available from the undo segments are displayed. You can even give a
specific date or SCN value as one of the end points of the range and
the other as the literal MAXVALUE or MINVALUE. For instance, here is a
query that tells us the changes from 3:57:52 PM onwards only, not the
complete range:
select versions_starttime, versions_endtime, versions_xid,
       versions_operation, rate
from rates versions between timestamp
     to_date('12/1/2003 15:57:52', 'mm/dd/yyyy hh24:mi:ss')
 and maxvalue
order by versions_starttime;
VERSIONS_STARTTIME    VERSIONS_ENDTIME      VERSIONS_XID     V       RATE
--------------------- --------------------- ---------------- - ----------
01-DEC-03 03.57.55 PM                       000A000C00000029 D     1.1013
01-DEC-03 03.58.07 PM 01-DEC-03 03.58.17 PM 000A000D00000029 I     1.1016
01-DEC-03 03.58.17 PM                       000A000E00000029 U     1.1011
Estimate Table and Index Size
Suppose we are asked to create an index on the columns booking_id and
cust_name of the table BOOKINGS_HIST. How much space does the proposed
index need? All you have to do is execute the following PL/SQL script:
declare
l_used_bytes number;
l_alloc_bytes number;
begin
dbms_space.create_index_cost (
ddl => 'create index in_bookings_hist_01 on bookings_hist '||
'(booking_id, cust_name) tablespace users',
used_bytes => l_used_bytes,
alloc_bytes => l_alloc_bytes
);
dbms_output.put_line ('Used Bytes = '||l_used_bytes);
dbms_output.put_line ('Allocated Bytes = '||l_alloc_bytes);
end;
/
The output is:
Used Bytes = 7501128
Allocated Bytes = 12582912
You should be aware of two important caveats, however.
First, this process applies only to tablespaces with SEGMENT SPACE
MANAGEMENT AUTO turned on. Second, the package calculates the
estimated size of the index from the statistics on the table.
Hence it's very important to have relatively fresh statistics on
the tables. But beware: the absence of statistics on the table
will not result in an error in the use of the package, but will
yield a wrong result.
Suppose there is a table named BOOKINGS_HIST, with an average row size
of 30 bytes, 30,000 rows, and a PCTFREE parameter of 20. What if you
wanted to increase PCTFREE to 30: by what amount will the table
increase in size? Because 30 is 10 percentage points more than 20,
will the size go up by 10%? Instead of asking your psychic, ask the
procedure CREATE_TABLE_COST inside the package DBMS_SPACE. Here is how
you can estimate the size:
declare
l_used_bytes number;
l_alloc_bytes number;
begin
dbms_space.create_table_cost (
tablespace_name => 'USERS',
avg_row_size => 30,
row_count => 30000,
pct_free => 20,
used_bytes => l_used_bytes,
alloc_bytes => l_alloc_bytes
);
dbms_output.put_line('Used: '||l_used_bytes);
dbms_output.put_line('Allocated: '||l_alloc_bytes);
end;
/
The output is:
Used: 1261568
Allocated: 2097152
Changing the table's PCT_FREE parameter to 30 from 20, by
specifying
pct_free => 30
we get the output:
Used: 1441792
Allocated: 2097152
Note how the used space has increased from 1,261,568 to 1,441,792,
because the higher PCTFREE setting leaves less room in each data block
for user data. The increase is about 14%, not 10% as one might expect.
Using this package you can easily calculate the impact of parameters
such as PCTFREE on the size of the table, or of moving the table to a
different tablespace.
Improvements to Bulk Binds and Collections
You can use collections to improve the performance of SQL
operations executed iteratively by using bulk binds. Bulk binds
reduce the number of context switches between the PL/SQL engine
and the SQL engine. Two PL/SQL language constructs implement bulk
binds: FORALL and BULK COLLECT INTO.
The syntax for the FORALL statement is:
FORALL bulk_index IN
  [ lower_bound..upper_bound
  | INDICES OF collection_variable [BETWEEN lower_bound AND upper_bound]
  | VALUES OF collection_variable ]
  [SAVE EXCEPTIONS]
  sql_statement;
Bulk_index can be used only in the sql_statement and only as a
collection index (subscript). When PL/SQL processes this
statement, the whole collection, instead of each individual
collection element, is sent to the database server for processing.
To delete all the accounts in the collection inactives from the
table ledger, do this:
FORALL i IN inactives.FIRST..inactives.LAST
  DELETE FROM ledger WHERE acct_no = inactives(i);
Before 10g, FORALL will iterate through each integer between
low_value and high_value, using that integer to identify an
element in all collections that are bound into the DML statement
with the index_variable. If no element exists at a particular
index value, Oracle Database raises an exception, as you can see
in here:
DECLARE
  TYPE list_of_names_t IS TABLE OF VARCHAR2 (32767)
    INDEX BY PLS_INTEGER;
  happyfamily list_of_names_t;
BEGIN
  happyfamily (1) := 'Eli';
  happyfamily (2) := 'Chris';
  happyfamily (3) := 'Veva';
  happyfamily (5) := 'Steven';
  FORALL indx IN happyfamily.FIRST .. happyfamily.LAST
    INSERT INTO first_names
    VALUES (happyfamily (indx));
END;
/
DECLARE
*
ERROR at line 1:
ORA-22160: element at index
[4] does not exist
With Oracle10g, if there are non-consecutive index values due to
deletions, you will need to use the INDICES OF syntax to skip over
the deleted elements:
DECLARE
TYPE list_of_names_t
IS TABLE OF VARCHAR2 (32767)
INDEX BY PLS_INTEGER;
happyfamily list_of_names_t;
BEGIN
happyfamily (1) := 'Eli';
happyfamily (2) := 'Chris';
happyfamily (3) := 'Veva';
happyfamily (5) := 'Steven';
FORALL indx IN
INDICES OF happyfamily
INSERT INTO first_names
VALUES
(happyfamily (indx));
END;
/
That is an example of the simplest way to apply INDICES OF:
"self- reference" the same collection used within the DML
statement, to easily avoid errors due to sparseness in that
collection.
With Oracle10g, if you are interested in the values of a sparse
collection of integers instead of the indices, you will need to
use the VALUES OF syntax:
FORALL i IN VALUES OF inactives_list
  -- inactives_list is a collection of index values from
  -- the inactives table which are earmarked for deletion
  DELETE FROM ledger WHERE acct_no = inactives(i);
These new INDICES OF and VALUES OF keywords allow you to specify a
subset of rows in a driving collection that will be used in the
FORALL statement. To match the row numbers in the data collection
with the row numbers in the driving collection, use the INDICES OF
clause. To match the row numbers in the data collection with
the values found in the defined rows of the driving collection,
use the VALUES OF clause.
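FORALL covers the DML side of bulk binding; the companion construct,
BULK COLLECT INTO, fetches a whole result set into a collection in a
single context switch. A minimal sketch, reusing the ledger table
assumed in the examples above:
DECLARE
  TYPE acct_tab_t IS TABLE OF ledger.acct_no%TYPE;
  l_accts acct_tab_t;
BEGIN
  -- One round trip fetches every account number into the collection.
  SELECT acct_no BULK COLLECT INTO l_accts FROM ledger;
  DBMS_OUTPUT.put_line('Fetched '||l_accts.COUNT||' accounts in one round trip');
END;
/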
There are several functions that can be used to manipulate
collections. Most of these are new to Oracle10g; only CAST and
MULTISET are available in earlier releases. COLLECT, POWERMULTISET,
and POWERMULTISET_BY_CARDINALITY are only valid in a SQL statement;
they cannot be used, for example, in a PL/SQL assignment.
The CAST function works together with the COLLECT and MULTISET
functions. MULTISET was available prior to Oracle10g and operates
on a subquery.
COLLECT is new to Oracle10g and operates on a column in a SQL
statement:
CREATE TYPE email_list_t AS
TABLE OF VARCHAR2(64);
-- COLLECT operates on a column
SELECT
CAST(COLLECT(cust_email)AS email_list_t)
FROM oe.customers;
-- which is equivalent to
SELECT CAST(MULTISET(SELECT
cust_email
FROM oe.customers)
AS email_list_t)
FROM dual;
Examples of the other nested table functions, operators, and
expressions are demonstrated as follows:
DECLARE
  TYPE nested_type IS TABLE OF NUMBER;
  nt1 nested_type := nested_type(1,2,3);
  nt2 nested_type := nested_type(3,2,1);
  nt3 nested_type := nested_type(2,3,1,3);
  nt4 nested_type := nested_type(1,2,4);
  answer nested_type;
BEGIN
  answer := nt1 MULTISET UNION nt4;               -- (1,2,3,1,2,4)
  answer := nt1 MULTISET UNION nt3;               -- (1,2,3,2,3,1,3)
  answer := nt1 MULTISET UNION DISTINCT nt3;      -- (1,2,3)
  answer := nt2 MULTISET INTERSECT nt3;           -- (3,2,1)
  answer := nt2 MULTISET INTERSECT DISTINCT nt3;  -- (3,2,1)
  answer := nt3 MULTISET EXCEPT nt2;              -- (3)
  answer := nt3 MULTISET EXCEPT DISTINCT nt2;     -- ()
  answer := SET(nt3);                             -- (2,3,1)
  IF (nt1 IS A SET) AND (nt3 IS NOT A SET) THEN
    dbms_output.put_line('nt1 has unique elements');
    dbms_output.put_line('but nt3 does not');
  END IF;
  IF (nt3 MULTISET EXCEPT DISTINCT nt2) IS EMPTY THEN
    dbms_output.put_line('empty set');
  END IF;
  IF 3 MEMBER OF (nt3 MULTISET EXCEPT nt2) THEN
    dbms_output.put_line('3 is in the answer set');
  END IF;
  IF nt1 SUBMULTISET nt3 THEN
    dbms_output.put_line('nt1 is a subset of nt3');
  END IF;
  IF SET(nt3) IN (nt1,nt2,nt3) THEN
    dbms_output.put_line('expression is IN the list of nested tables');
  END IF;
END;
/
New collection functions:

Function                          | Return value | Description
=                                 | BOOLEAN      | Compares two nested tables and returns TRUE if they have the same named type and cardinality and the elements are equal.
<> or !=                          | BOOLEAN      | Compares two nested tables and returns FALSE if they differ in named type, cardinality, or equality of elements.
[NOT] IN ( )                      | BOOLEAN      | Returns TRUE [FALSE] if the nested table to the left of IN exists in the list of nested tables in the parentheses.
CARDINALITY(x)                    | NUMBER       | Returns the number of elements in varray or nested table x. Returns NULL if the collection is atomically NULL (not initialized).
COLLECT (Oracle10g)               | NESTED TABLE | Used in conjunction with CAST to map a column to a collection.
MULTISET                          | NESTED TABLE | Used in conjunction with CAST to map a subquery to a collection.
x MULTISET EXCEPT [DISTINCT] y    | NESTED TABLE | Performs a MINUS set operation on nested tables x and y, returning a nested table whose elements are in x but not in y. x, y, and the returned nested table must all be of the same type. The DISTINCT keyword forces the elimination of duplicates from the returned nested table.
x MULTISET INTERSECT [DISTINCT] y | NESTED TABLE | Performs an INTERSECT set operation on nested tables x and y, returning a nested table whose elements are in both x and y. x, y, and the returned nested table must all be of the same type. The DISTINCT keyword forces the elimination of duplicates from the returned nested table.
x MULTISET UNION [DISTINCT] y     | NESTED TABLE | Performs a UNION set operation on nested tables x and y, returning a nested table whose elements include all those in x as well as those in y. x, y, and the returned nested table must all be of the same type. The DISTINCT keyword forces the elimination of duplicates from the returned nested table.
SET(x)                            | NESTED TABLE | Returns nested table x without duplicate elements.
x IS [NOT] A SET                  | BOOLEAN      | Returns TRUE [FALSE] if the nested table x is composed of unique elements.
x IS [NOT] EMPTY                  | BOOLEAN      | Returns TRUE [FALSE] if the nested table x is empty.
e IS [NOT] MEMBER [OF] x          | BOOLEAN      | Returns TRUE [FALSE] if expression e is a member of the nested table x.
y [NOT] SUBMULTISET [OF] x        | BOOLEAN      | Returns TRUE [FALSE] if the nested table y contains only elements that are also in nested table x.
Compile Time Warnings and Conditional Compilation
PL/SQL's optimizing compiler can improve runtime performance
dramatically, imposing a relatively slight overhead at compile
time. Fortunately, the benefits of optimization apply both to
interpreted and natively compiled PL/SQL, because
optimizations are applied by analyzing patterns in source
code.
The optimizing compiler is enabled by default. However, you
may wish to alter its behavior, either by lowering its
aggressiveness or by disabling it entirely.
For example, if, in the course of normal operations, your
system must perform recompilation of many lines of code, or if
an application generates many lines of dynamically executed
PL/SQL, the overhead of optimization may be unacceptable. Keep
in mind, though, Oracle's tests show that the optimizer
doubles the runtime performance of computationally intensive
PL/SQL.
This new feature examines code and makes internal adjustments
to it, depending on the level used and the type of code it
sees.
Level 0 compiles without optimization.
Level 1 applies some optimization while trying to minimize compile
time.
Level 2, the default, applies the most optimization for the best
runtime performance.
Use the session level statements to control this new
optimizing compiler:
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 1;
or simply compile with the setting using:
ALTER PROCEDURE myproc COMPILE PLSQL_OPTIMIZE_LEVEL = 1;
Oracle can now produce compile-time warnings when code is ambiguous or
inefficient by setting the PLSQL_WARNINGS parameter at either instance
or session level. The categories ALL, SEVERE, INFORMATIONAL and
PERFORMANCE can be used to alter the type of warnings that are
produced.
Severe: Messages for
conditions that might cause unexpected behavior or wrong
results, such as aliasing problems with parameters.
Performance: Messages
for conditions that might cause performance problems, such as
passing a VARCHAR2 value to a NUMBER column in an INSERT
statement.
Informational:
Messages for conditions that do not have an effect on
performance or correctness, but that you might want to change
to make the code more maintainable, such as dead code that can
never be executed.
The keyword All is a
shorthand way to refer to all warning messages.
Examples of their usage include:
-- Instance and session level.
ALTER SYSTEM SET
PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION SET
PLSQL_WARNINGS='DISABLE:PERFORMANCE';
-- Recompile with extra checking.
ALTER PROCEDURE hello
COMPILE PLSQL_WARNINGS='ENABLE:PERFORMANCE';
-- Set multiple values.
ALTER SESSION SET
PLSQL_WARNINGS='ENABLE:SEVERE','DISABLE:PERFORMANCE','DISABLE:INFORMATIONAL';
-- Use the DBMS_WARNING package instead.
EXEC
DBMS_WARNING.SET_WARNING_SETTING_STRING('ENABLE:ALL'
,'SESSION');
The current settings associated with each object can be
displayed using the [USER|DBA|ALL]_PLSQL_OBJECT_SETTINGS
views.
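For example, to check the current compiler settings of a single program unit (assuming a procedure named HELLO), you could query:
SELECT name, plsql_optimize_level, plsql_warnings
FROM user_plsql_object_settings
WHERE name = 'HELLO';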
ALTER SYSTEM SET PLSQL_WARNINGS='ENABLE:ALL';                       -- For debugging during development.
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:PERFORMANCE';              -- To focus on one aspect.
ALTER PROCEDURE hello COMPILE PLSQL_WARNINGS='ENABLE:PERFORMANCE';  -- Recompile with extra checking.
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';                     -- To turn off all warnings.
-- We want to hear about 'severe' warnings, don't want to hear about 'performance'
-- warnings, and want PLW-06002 warnings to produce errors that halt compilation.
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE','DISABLE:PERFORMANCE','ERROR:06002';
To see a typical example of the warning output, try:
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
CREATE OR REPLACE PROCEDURE test_warnings AS
  l_dummy VARCHAR2(10) := '1';
BEGIN
  IF 1=1 THEN
    SELECT '2' INTO l_dummy FROM dual;
  ELSE
    RAISE_APPLICATION_ERROR(-20000, 'l_dummy != 1!');
  END IF;
END;
/
SP2-0804: Procedure created with compilation warnings

SHOW ERRORS

LINE/COL ERROR
-------- ---------------------------
9/5      PLW-06002: Unreachable code
The errors can be queried using the %_ERRORS views.
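For instance, a minimal sketch against USER_ERRORS for the procedure above:
SELECT line, position, text
FROM user_errors
WHERE name = 'TEST_WARNINGS'
AND type = 'PROCEDURE';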
Another example:
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE','ENABLE:PERFORMANCE','ENABLE:INFORMATIONAL';

CREATE OR REPLACE PROCEDURE my_proc(p_date_info IN OUT VARCHAR2) AS
  v_return    BOOLEAN;
  first_test  TY_TEST;
  second_test TY_TEST;
BEGIN
  first_test  := TY_TEST(test(1, SYSDATE));
  second_test := TY_TEST(test(1, SYSDATE));
  v_return    := first_test = second_test;
  p_date_info := 'The date is '||SYSDATE;
  IF v_return THEN
    dbms_output.put_line('The two are the same.');
  ELSE
    dbms_output.put_line('The two are not the same.');
  END IF;
END;
/

show err

Errors for PROCEDURE MY_PROC:

LINE/COL ERROR
-------- -----------------------------------------------------
2/2      PLW-07203: parameter 'P_DATE_INFO' may benefit from use of the NOCOPY compiler hint
Conditional Compilation
Conditional compilation allows PL/SQL code to be tailored to
specific environments by selectively altering the source code
based on compiler directives. It is considered a new feature
of Oracle 10g Release 2, but is available in Oracle 10g
Release 1 (10.1.0.4.0).
Compiler flags are identified by the "$$"
prefix, while conditional control is provided by the $IF-$THEN-$ELSE
syntax.
$IF boolean_static_expression $THEN text
[ $ELSIF boolean_static_expression $THEN text ]
[ $ELSE text ]
$END
As an example, let's assume that all application debugging is performed by calling a procedure called DEBUG. Conditional compilation can be used to provide an on/off switch for the debug, as well as influencing the debug information that is produced. The following procedure implements a variety of debug behavior using conditional compilation.
CREATE OR REPLACE PROCEDURE debug (p_text IN VARCHAR2) AS
$IF $$debug_on $THEN
l_text VARCHAR2(32767);
$END
BEGIN
$IF $$debug_on $THEN
$IF DBMS_DB_VERSION.VER_LE_10_1 $THEN
l_text := SUBSTR(p_text, 1 ,233);
$ELSE
l_text := p_text;
$END
$IF $$show_date $THEN
DBMS_OUTPUT.put_line(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || ': ' || l_text);
$ELSE
DBMS_OUTPUT.put_line(p_text);
$END
$ELSE
NULL;
$END
END debug;
/
The debug_on flag acts as an on/off switch, as a value of FALSE will result in an empty procedure. Assuming debug is enabled, the DBMS_DB_VERSION package is used to determine if the input text should be truncated to prevent errors in DBMS_OUTPUT. If the code is running on a Release 2 server this truncation is not necessary due to the enhancements in the DBMS_OUTPUT package. The show_date flag is used to determine if a date prefix should be added to the debug message.
Once the procedure is compiled the complete source is stored in the database, including the conditional code directives.
SET PAGESIZE 30
SELECT text
FROM user_source
WHERE name = 'DEBUG'
AND type = 'PROCEDURE';
TEXT
----------------------------------------------------------------------------------------------------
PROCEDURE debug (p_text IN VARCHAR2) AS
$IF $$debug_on $THEN
l_text VARCHAR2(32767);
$END
BEGIN
$IF $$debug_on $THEN
$IF DBMS_DB_VERSION.VER_LE_10_1 $THEN
l_text := SUBSTR(p_text, 1 ,233);
$ELSE
l_text := p_text;
$END
$IF $$show_date $THEN
DBMS_OUTPUT.put_line(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || ': ' || l_text);
$ELSE
DBMS_OUTPUT.put_line(p_text);
$END
$ELSE
NULL;
$END
END debug;
The PRINT_POST_PROCESSED_SOURCE procedure of the DBMS_PREPROCESSOR package displays the post-processed source.
SET SERVEROUTPUT ON SIZE UNLIMITED
BEGIN
DBMS_PREPROCESSOR.print_post_processed_source (
object_type => 'PROCEDURE',
schema_name => 'TEST',
object_name => 'DEBUG');
END;
/
PROCEDURE debug (p_text IN VARCHAR2) AS
BEGIN
NULL;
END debug;
As expected, the lack of compile flags results in an empty debug procedure. Setting the debug_on and show_date compiler flags results in debug messages printed with a date prefix.
ALTER PROCEDURE debug COMPILE PLSQL_CCFLAGS = 'debug_on:TRUE, show_date:TRUE' REUSE SETTINGS;
BEGIN
DBMS_PREPROCESSOR.print_post_processed_source (
object_type => 'PROCEDURE',
schema_name => 'TEST',
object_name => 'DEBUG');
END;
/
PROCEDURE debug (p_text IN VARCHAR2) AS
l_text VARCHAR2(32767);
BEGIN
l_text := p_text;
DBMS_OUTPUT.put_line(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS') || ': ' || l_text);
END debug;
Switching off the show_date compiler flag results in debug messages printed without a date prefix.
ALTER PROCEDURE debug COMPILE PLSQL_CCFLAGS = 'debug_on:TRUE, show_date:FALSE' REUSE SETTINGS;
SET SERVEROUTPUT ON SIZE UNLIMITED
BEGIN
DBMS_PREPROCESSOR.print_post_processed_source (
object_type => 'PROCEDURE',
schema_name => 'TEST',
object_name => 'DEBUG');
END;
/
PROCEDURE debug (p_text IN VARCHAR2) AS
l_text VARCHAR2(32767);
BEGIN
l_text := p_text;
DBMS_OUTPUT.put_line(l_text);
END debug;
The result of conditional compilation is the removal of unnecessary code. PL/SQL is an interpreted language, so removing unnecessary code may produce performance improvements in some circumstances.
Single-Set Aggregates in DML Returning Clause
This feature allows the use of single-set aggregate functions (such as SUM and AVG) in the RETURNING clause of DML statements. It can result in significant performance gains in transactions that process many rows of the same table, such as batch processes. The DML statements that can use single-set aggregates in their RETURNING clauses are INSERT, UPDATE, and DELETE.
The purpose of the RETURNING clause is to return the rows affected by the INSERT, UPDATE, or DELETE statement. The RETURNING clause can only be used with single tables, materialized views, and regular views based on a single table. When the target of the INSERT is a single row, the RETURNING clause can retrieve column expressions using the affected row, its rowid, and REFs to the affected row. Single-set aggregates can only be used when the RETURNING clause returns a single row; they cannot be combined with simple expressions in the same RETURNING clause, and they cannot contain the DISTINCT keyword.
An example INSERT using the RETURNING clause and a single-set
aggregate would be:
set serveroutput on
variable tot_sal number
begin
  INSERT INTO emp
  select * from emp
  RETURNING sum(sal) INTO :tot_sal;
  dbms_output.put_line('Total Company Payroll now: ' || to_char(:tot_sal,'$999,999.00'));
end;
/
An example UPDATE using the RETURNING clause and a single-set
aggregate is shown below.
variable tot_sal number
begin
  update emp set sal = sal*1.1
  RETURNING sum(sal) INTO :tot_sal;
  dbms_output.put_line('Total Company Payroll now '||to_char(:tot_sal,'$999,999.00'));
end;
/
An example DELETE using a subquery in the WHERE statement and
the RETURNING clause with a single-set aggregate would be:
variable tot_sal number
begin
  delete emp a
  where a.rowid > (select min(x.rowid) from emp x
                   where x.empno = a.empno)
  RETURNING sum(a.sal) INTO :tot_sal;
  dbms_output.put_line('Total Company Payroll now '||to_char(:tot_sal,'$999,999.00'));
end;
/
Online Redefinition
In highly available systems, it is occasionally necessary to redefine large "hot" tables to improve the performance of queries or DML performed against them. The database provides a mechanism to redefine tables online. This mechanism provides a significant increase in availability compared to traditional methods of redefining tables, which require the table to be taken offline.
When a table is redefined online, it remains accessible to read/write operations during much of the redefinition process. The DBA controls when to switch over to the redefined table; at that point the table is briefly locked in exclusive mode. Online table redefinition enables you to:
- Modify the storage parameters of the table
- Move the table to a different tablespace in the same schema
- Add support for parallel queries
- Add or drop partitioning support
- Re-create the table to reduce fragmentation
- Change the organization of a normal (heap-organized) table to an index-organized table and vice versa
- Add or drop a column
Steps for Online Redefinition of Tables
This example illustrates online redefinition of a non-partitioned table emp, with columns empno, name, salary and phone.
The schema name is u1 and the table is redefined as follows:
* The column salary is multiplied by a factor of 1.10 and renamed to sal
* The column phone is dropped
* A new column deptno is added, and a user-defined function u1.emp_dept is used to determine the value of deptno
* The redefined table is partitioned by range on empno
* The index emp_idx2 is placed in tablespace tbs_2
1. Choose one of the following two methods of redefinition:
* The first method of redefinition is to use
the primary keys or pseudo-primary keys to perform the
redefinition. Pseudo-primary keys are unique keys with all
component columns having NOT NULL constraints. For this method,
the versions of the tables before and after redefinition should
have the same primary key columns. This is the preferred and
default method of redefinition.
* The second method of redefinition is to use
rowids. For this method, the table to be redefined should not be
an index organized table. Also, in this method of redefinition, a
hidden column named M_ROW$$ is added to the post-redefined version
of the table and it is recommended that this column be marked as
unused or dropped after the redefinition is completed.
2. Verify that the table can be online redefined by invoking the
DBMS_REDEFINITION.CAN_REDEF_TABLE() procedure and use the
OPTIONS_FLAG parameter to specify the method of redefinition to be
used. If the table is not a candidate for online redefinition,
then this procedure raises an error indicating why the table
cannot be online redefined.
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('HR','ADMIN_EMP', DBMS_REDEFINITION.CONS_USE_PK);
END;
/
3. Create an empty interim
table (in the same schema as the table to be redefined) with all
of the desired attributes. If columns are to be dropped, do not
include them in the definition of the interim table. If a column
is to be added, then add the column definition to the interim
table. It is possible to perform table redefinition in parallel.
If you specify a degree of parallelism on both of the tables and
you ensure that parallel execution is enabled for the session, the
database will use parallel execution whenever possible to perform
the redefinition. You can use the PARALLEL clause of the ALTER
SESSION statement to enable parallel execution.
CREATE TABLE INT_EMP
(empno NUMBER,
name VARCHAR2(100),
sal NUMBER,
deptno NUMBER DEFAULT 10)
PARTITION BY RANGE(empno)
(PARTITION emp1000 VALUES LESS THAN (1000) TABLESPACE tbs_1,
PARTITION emp2000 VALUES LESS THAN (2000) TABLESPACE tbs_2);
4. Start the redefinition process by calling DBMS_REDEFINITION.START_REDEF_TABLE(), providing the following:
* The schema name
* The table to be redefined
* The interim table name
* The column mapping. If the column mapping information is not supplied, then it is assumed that all the columns (with their names unchanged) are to be included in the interim table. If the column mapping is supplied, then only those columns specified explicitly in the column mapping are considered
* The redefinition method to use: primary key, a unique index, or rowid
* Optionally, the columns to be used in ordering rows
If the method of redefinition is not specified, then the default
method of redefinition using primary keys is assumed. You can
optionally specify the ORDERBY_COLS parameter to specify how rows
should be ordered during the initial instantiation of the interim
table.
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMP', 'INT_EMP',
    'empno empno, name name, salary*1.10 sal, emp_dept(empno) deptno');
END;
/
Manually register a changed index (if needed):
BEGIN
  DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT('HR', 'EMP', 'INT_EMP',
    DBMS_REDEFINITION.CONS_INDEX, 'HR', 'EMP_IDX2', 'INT_EMP_IDX2');
END;
/
5. You have two methods for creating (cloning) dependent objects
such as triggers, indexes, grants, and constraints on the interim
table. Method 1 is the most automatic and preferred method, but
there may be times that you would choose to use method 2.
* Method 1: Automatically Creating Dependent
Objects
Use the COPY_TABLE_DEPENDENTS
procedure to automatically create dependent objects such as
triggers, indexes, grants, and constraints on the interim table.
This procedure also registers the dependent objects. Registering
the dependent objects enables the identities of these objects and
their cloned counterparts to be automatically swapped later as
part of the redefinition completion process. The result is that
when the redefinition is completed, the names of the dependent
objects will be the same as the names of the original dependent
objects.
You can discover if errors occurred while copying dependent objects by checking the NUM_ERRORS output variable. If the IGNORE_ERRORS parameter is set to TRUE, the COPY_TABLE_DEPENDENTS procedure continues cloning dependent objects even if an error is encountered when creating an object. The errors can later be viewed by querying the DBA_REDEFINITION_ERRORS view. Reasons for errors include a lack of system resources or a change in the logical structure of the table.
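A quick way to inspect those errors (a minimal sketch, assuming the standard columns of the view):
SELECT object_name, base_table_name, ddl_txt
FROM dba_redefinition_errors;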
If IGNORE_ERRORS is set to FALSE,
the COPY_TABLE_DEPENDENTS procedure stops cloning objects as soon
as any error is encountered.
After you correct any errors you
can attempt again to clone the failing object or objects by
reexecuting the COPY_TABLE_DEPENDENTS procedure. Optionally you
can create the objects manually and then register them as
explained in method 2.
The COPY_TABLE_DEPENDENTS procedure
can be used multiple times as necessary. If an object has already
been successfully cloned, it will ignore the operation.
* Method 2: Manually Creating Dependent
Objects
You can manually create dependent
objects on the interim table.
Note: In previous releases you were required to manually create the triggers, indexes, grants, and constraints on the interim table, and there may still be situations where you want to or must do so. In such cases, any referential constraints involving the interim table (that is, the interim table is either a parent or a child table of the referential constraint) must be created disabled. Until the redefinition process is either completed or aborted, any trigger defined on the interim table will not execute.
Use the REGISTER_DEPENDENT_OBJECT
procedure after you create dependent objects manually. You can
also use the COPY_TABLE_DEPENDENTS procedure to do the
registration. Note that the COPY_TABLE_DEPENDENTS procedure does
not clone objects that are registered manually.
You would also use the
REGISTER_DEPENDENT_OBJECT procedure if the COPY_TABLE_DEPENDENTS
procedure failed to copy a dependent object and manual
intervention is required.
You can query the
DBA_REDEFINITION_OBJECTS view to determine which dependent objects
are registered. This view shows dependent objects that were
registered explicitly with the REGISTER_DEPENDENT_OBJECT procedure
or implicitly with the COPY_TABLE_DEPENDENTS procedure. Only
current information is shown in the view.
The UNREGISTER_DEPENDENT_OBJECT
procedure can be used to unregister a dependent object on the
table being redefined and on the interim table.
DECLARE
  no_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('HR', 'EMP', 'INT_EMP',
    1, TRUE, TRUE, TRUE, FALSE, no_errors);
END;
/
6. Execute the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure to complete the redefinition of the table. During this procedure, the original table is locked in exclusive mode for a very short time, independent of the amount of data in the original table. However, FINISH_REDEF_TABLE will wait for all pending DML that was initiated before it was invoked to commit before completing the redefinition. As a result of this procedure, the following occur:
- The original table is redefined such that it has all the attributes, indexes, constraints, grants and triggers of the interim table.
- The referential constraints involving the interim table now involve the post-redefined table and are enabled.
- Dependent objects that were registered, either explicitly using REGISTER_DEPENDENT_OBJECT or implicitly using COPY_TABLE_DEPENDENTS, are renamed automatically.
Optionally, synchronize the interim table int_emp.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR',
'EMP', 'INT_EMP');
END;
/
Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMP', 'INT_EMP');
END;
/
7. If the redefinition was done using rowids, the post-redefined
table will have a hidden column (M_ROW$$) and it is recommended that
the user set this hidden column to unused as follows:
ALTER TABLE table_name SET UNUSED (M_ROW$$)
After the redefinition process has been started by calling
START_REDEF_TABLE() and before FINISH_REDEF_TABLE() has been called,
it is possible that a large number of DML statements have been
executed on the original table. If you know this is the case, it is
recommended that you periodically synchronize the interim table with
the original table. This is done by calling the
DBMS_REDEFINITION.SYNC_INTERIM_TABLE() procedure. Calling this
procedure reduces the time taken by FINISH_REDEF_TABLE() to complete
the redefinition process.
The small amount of time that the original table is locked during
FINISH_REDEF_TABLE() is independent of whether SYNC_INTERIM_TABLE()
has been called.
In the event that an error is raised during the redefinition
process, or if you choose to terminate the redefinition process,
call DBMS_REDEFINITION.ABORT_REDEF_TABLE(). This procedure drops
temporary logs and tables associated with the redefinition process.
After this procedure is called, you can drop the interim table and
its associated objects.
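A hedged sketch of aborting the example redefinition above:
BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE('HR', 'EMP', 'INT_EMP');
END;
/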
8. Finally drop the interim table
DROP TABLE int_emp;
Restrictions for Online
Redefinition of Tables
* If the table is to be redefined using primary
key or pseudo-primary keys (unique keys or constraints with all
component columns having not null constraints), then the table to be
redefined must have the same primary key or pseudo-primary key
columns. If the table is to be redefined using rowids, then the
table must not be an index-organized table.
* Tables that are replicated in an n-way master
configuration can be redefined, but horizontal subsetting (subset of
rows in the table), vertical subsetting (subset of columns in the
table), and column transformations are not allowed.
* The overflow table of an index-organized table
cannot be online redefined.
* Tables with user-defined types (objects, REFs,
collections, typed tables) cannot be online redefined.
* Tables with BFILE columns cannot be online
redefined.
* Tables with LONG columns can be online
redefined, but those columns must be converted to CLOBS. Tables with
LONG RAW columns must be converted to BLOBS. Tables with LOB columns
are acceptable.
* The table to be redefined cannot be part of a
cluster.
* Tables in the SYS and SYSTEM schema cannot be
online redefined.
* Temporary tables cannot be redefined.
* A subset of rows in the table cannot be
redefined.
* Only simple deterministic expressions,
sequences, and SYSDATE can be used when mapping the columns in the
interim table to those of the original table. For example,
subqueries are not allowed.
* If new columns (which are not instantiated with
existing data for the original table) are being added as part of the
redefinition, then they must not be declared NOT NULL until the
redefinition is complete.
* There cannot be any referential constraints
between the table being redefined and the interim table.
* Table redefinition cannot be done NOLOGGING.
* Tables with materialized view logs defined on
them cannot be online redefined.
Example of Reorganizing an
Individual Partition
This example illustrates reorganizing a single partition, emp2000, of the partitioned emp table, moving it to a new tablespace called tbs_3.
Step 1. Create a non-partitioned interim table int_emp
CREATE TABLE int_emp
(empno NUMBER,
name VARCHAR2(100),
sal NUMBER,
deptno NUMBER DEFAULT 10)
TABLESPACE tbs_3;
Step 2. Start the redefinition process
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMP', 'INT_EMP', NULL,
    DBMS_REDEFINITION.CONS_USE_ROWID, '', 'emp2000');
END;
/
Step 3. Optionally, synchronize the interim table int_emp
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('HR', 'EMP', 'INT_EMP', 'emp2000');
END;
/
Step 4. Complete the redefinition
BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMP', 'INT_EMP', 'emp2000');
END;
/
Step 5. Drop the interim table
DROP TABLE int_emp;
Online Table & Index Operations
The table below describes the reorganization operations that are possible using the ONLINE clause in the SQL CREATE/ALTER INDEX and TABLE statements.

Table Move Online
ALTER TABLE emp MOVE ONLINE;
- Parallel operations not supported
- compressed

Index Create Online
CREATE INDEX emp.ename_idx ON emp(ename) ONLINE;
- Parallel operations supported
- Partitions supported
- All index types except cluster
- Supported index types: IOT secondary, reverse key, functional, bitmap, key compressed

Index Rebuild Online
ALTER INDEX emp.ename_idx REBUILD ONLINE;
- Parallel operations supported
- Partitions supported
- All index types except cluster
- Supported index types: IOT secondary, reverse key, functional, bitmap, key compressed

Index Coalesce Online
ALTER INDEX emp.ename_idx COALESCE;
- Parallel operations supported
- Partitions supported
- All index types
Reclaiming Unused Space
During the lifetime of the database, many updates and deletes to
the data can occur, and this may result in space inside the database
not being used efficiently. In Oracle Database 10g, this space can
be reclaimed for tables, index organized tables, indexes, partitions
and materialized views, provided the objects are stored in
tablespaces using automatic segment space management.
The space is reclaimed online, and in-place, thus eliminating
expensive database downtime and additional storage requirements. The
space is retrieved by adding the clauses SHRINK SPACE to the ALTER
TABLE, ALTER INDEX, ALTER MATERIALIZED VIEW and ALTER MATERIALIZED
VIEW LOG commands.
Whilst the space is being reclaimed, indexes are maintained and are
completely usable once the process has finished. An optional CASCADE
clause provides the ability to reclaim space for all dependent
objects. Therefore reclaiming space on the table, would also reclaim
space on all the indexes on that table.
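A minimal sketch, assuming a table emp stored in an ASSM tablespace (row movement must be enabled before shrinking):
ALTER TABLE emp ENABLE ROW MOVEMENT;
ALTER TABLE emp SHRINK SPACE COMPACT;  -- compact rows only; defer moving the high water mark
ALTER TABLE emp SHRINK SPACE CASCADE;  -- full shrink, including dependent indexes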
After 10g Installation
--As SYSTEM:
Exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_dictionary_stats;
--During workload
Exec dbms_stats.gather_schema_stats('SYS',gather_fixed=>TRUE) ;
-- Set Default tablespace for all the users
ALTER DATABASE DEFAULT TABLESPACE <tsname>;
-- Use the row timestamp for inserts/updates/deletes; it shows the date of modification
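A minimal sketch, assuming the row timestamp here refers to the ORA_ROWSCN pseudocolumn introduced in 10g (create the table with ROWDEPENDENCIES for row-level SCN tracking):
CREATE TABLE emp_scn (empno NUMBER, ename VARCHAR2(30)) ROWDEPENDENCIES;
INSERT INTO emp_scn VALUES (1, 'SCOTT');
COMMIT;
-- Approximate modification time of each row, derived from its commit SCN
SELECT empno, SCN_TO_TIMESTAMP(ORA_ROWSCN) mod_time FROM emp_scn;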
When to Collect Dictionary Statistics
Oracle recommends the following strategy with regard to analyzing
the data dictionary in Oracle Database 10g:
1. Analyze normal data dictionary objects (not fixed dictionary
objects) using the same interval that you currently use when
analyzing other objects. Use gather_database_stats,
gather_schema_stats, or gather_dictionary_stats to perform this
action. Here is an example:
Exec dbms_stats.gather_dictionary_stats;
2. Analyze fixed objects only once, unless the workload footprint changes. Generally, use the dbms_stats.gather_fixed_objects_stats supplied procedure when connected as SYS or any other SYSDBA privileged user. Here is an example:
Exec dbms_stats.gather_fixed_objects_stats;
3. Collect specific Statistics for a Table (if needed)
Exec dbms_stats.gather_table_stats('my_user','my_tab');
or
Exec dbms_stats.gather_table_stats('my_user','my_tab',
degree=>dbms_stats.auto_degree);
- Shrink tables from which rows have been heavily deleted
Improvements on 10g r2
Enhanced COMMIT (10gr2)
When a session commits, the redo log buffer is flushed to the
online redo logs on disk. This process ensures that transactions
can be replayed from the redo logs if necessary when recovery is
performed on the database. Sometimes, however, you may want to trade off the guaranteed ability to recover for better performance. With Oracle Database 10g Release 2, you now have control over how the redo stream is written to the online log files. You can control this behavior while issuing the commit statement itself, or simply change the default behavior of the database.
Let's see how the commit statement works. After a transaction, when you issue COMMIT, you can add a clause:
COMMIT WRITE <option>
where the <option> is what influences the redo stream. The option WAIT is the default behavior. For instance, you can issue:
COMMIT WRITE WAIT;
This command has the same effect as COMMIT itself. The commit does not return control to the user until the redo stream is written to the online redo log files.
If you don't want it to wait, you could issue:
COMMIT WRITE NOWAIT;
In this case, the control immediately returns
to the session, even before the redo streams are written to the
online redo logs.
When a commit is issued, the Log Writer process writes the redo
stream to the online redo logs. If you are making a series of
transactions, such as in a batch processing environment, you may
not want it to commit so frequently. Of course, the best course of
action is to change the application to reduce the number of
commits; but that may be easier said than done. In that case, you
could simply issue the following commit statement:
COMMIT WRITE BATCH;
This command will make the commit write the
redo streams to the log file in batches, instead of at each
commit. You can use this technique to reduce log-buffer flushing
in a frequent-commit environment.
If you want to write the log buffer immediately, you would issue:
COMMIT WRITE IMMEDIATE;
If you want a specific commit behavior to be
the default for a database, you could issue the following
statement.
ALTER SYSTEM SET COMMIT_WRITE = NOWAIT;
This command will make this behavior the default across the database. You can also set it at session level:
ALTER SESSION SET COMMIT_WRITE = NOWAIT;
As with any parameter, the system-level setting applies by default, if set. A session-level setting takes precedence over the system setting, and finally the clause given on the COMMIT statement itself, if present, takes precedence over both.
This option is not available for distributed
transactions.
The following code examples show the enhanced commit processing in
action. First we define a table for the code to populate.
CREATE TABLE commit_test (
  id          NUMBER(10),
  description VARCHAR2(50),
  CONSTRAINT commit_test_pk PRIMARY KEY (id)
);
Next we see the variations of the WRITE clause in action. The code
truncates the table and measures the time taken to populate it
with a commit for each insert. This process is repeated for each
variant of the WRITE clause. All the times are measured in
hundredths of a second.
SET SERVEROUTPUT ON
DECLARE
  PROCEDURE do_loop (p_type IN VARCHAR2) AS
    l_start NUMBER;
    l_loops NUMBER := 1000;
  BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE commit_test';
    l_start := DBMS_UTILITY.get_time;
    FOR i IN 1 .. l_loops LOOP
      INSERT INTO commit_test (id, description)
      VALUES (i, 'Description for ' || i);
      CASE p_type
        WHEN 'WAIT'      THEN COMMIT WRITE WAIT;
        WHEN 'NOWAIT'    THEN COMMIT WRITE NOWAIT;
        WHEN 'BATCH'     THEN COMMIT WRITE BATCH;
        WHEN 'IMMEDIATE' THEN COMMIT WRITE IMMEDIATE;
      END CASE;
    END LOOP;
    DBMS_OUTPUT.put_line(RPAD('COMMIT WRITE ' || p_type, 30) || ': ' ||
                         (DBMS_UTILITY.get_time - l_start));
  END;
BEGIN
  do_loop('WAIT');
  do_loop('NOWAIT');
  do_loop('BATCH');
  do_loop('IMMEDIATE');
END;
/
COMMIT WRITE WAIT             : 129
COMMIT WRITE NOWAIT           : 86
COMMIT WRITE BATCH            : 128
COMMIT WRITE IMMEDIATE        : 128
Next we see the variations of the COMMIT_WRITE parameter in
action. This example follows the format of the previous example,
but the COMMIT_WRITE parameter is altered for each run and a
standard commit is issued.
SET SERVEROUTPUT ON
DECLARE
  PROCEDURE do_loop (p_type IN VARCHAR2) AS
    l_start NUMBER;
    l_loops NUMBER := 1000;
  BEGIN
    EXECUTE IMMEDIATE 'ALTER SESSION SET COMMIT_WRITE=''' || p_type || '''';
    EXECUTE IMMEDIATE 'TRUNCATE TABLE commit_test';
    l_start := DBMS_UTILITY.get_time;
    FOR i IN 1 .. l_loops LOOP
      INSERT INTO commit_test (id, description)
      VALUES (i, 'Description for ' || i);
      COMMIT;
    END LOOP;
    DBMS_OUTPUT.put_line(RPAD('COMMIT_WRITE=' || p_type, 30) || ': ' ||
                         (DBMS_UTILITY.get_time - l_start));
  END;
BEGIN
  do_loop('WAIT');
  do_loop('NOWAIT');
  do_loop('BATCH');
  do_loop('IMMEDIATE');
  do_loop('BATCH,WAIT');
  do_loop('BATCH,NOWAIT');
  do_loop('IMMEDIATE,WAIT');
  do_loop('IMMEDIATE,NOWAIT');
END;
/
COMMIT_WRITE=WAIT             : 141
COMMIT_WRITE=NOWAIT           : 90
COMMIT_WRITE=BATCH            : 78
COMMIT_WRITE=IMMEDIATE        : 94
COMMIT_WRITE=BATCH,WAIT       : 139
COMMIT_WRITE=BATCH,NOWAIT     : 78
COMMIT_WRITE=IMMEDIATE,WAIT   : 133
COMMIT_WRITE=IMMEDIATE,NOWAIT : 87
Catch the Error and Move On: Error Logging Clause (10gr2)
Suppose that you are inserting 1,000,000 rows from one table into another and the insert fails because of a primary or foreign key violation: the whole statement is rolled back. How many times have you had that problem?? Now you can log those errors in another table and move on!!!!
Suppose you are trying to insert records from table ACCOUNTS_NY
into the table ACCOUNTS. The table ACCOUNTS has a primary key on
ACC_NO column. It's possible that some rows in ACCOUNTS_NY may
violate that primary key. Try using a conventional insert
statement:
insert into accounts
select * from accounts_ny;
insert into accounts
*
ERROR at line 1:
ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated
None of the records from the table
ACCOUNTS_NY will be loaded. Now, try the same with error logging
turned on. First, you need to create a table to hold the records
rejected by the DML statement. Let's call that table ERR_ACCOUNTS.
exec dbms_errlog.CREATE_ERROR_LOG ('ACCOUNTS','ERR_ACCOUNTS');
Next, execute the earlier statement with the
error-logging clause.
insert into accounts
select * from accounts_ny
log errors into err_accounts
reject limit 200;
-- reject limit unlimited; will log all the errors and never abort the statement
Note that the table ACCOUNTS_NY contains 10
rows yet only six rows were inserted; the other four rows were
rejected due to some error. To find out what it was, query the
ERR_ACCOUNTS table.
select ORA_ERR_NUMBER$, ORA_ERR_MESG$, ACC_NO
from err_accounts;
ORA_ERR_NUMBER$ ORA_ERR_MESG$ ACC_NO
--------------- -------------------------------------------------------- ------
1 ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated 9997
1 ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated 9998
1 ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated 9999
1 ORA-00001: unique constraint (ARUP.PK_ACCOUNTS) violated 10000
Note the columns ORA_ERR_NUMBER$, which shows the Oracle error number encountered during the DML statement execution, and ORA_ERR_MESG$, which shows the error message. In this case you can see that four records were rejected because they violated the primary key constraint PK_ACCOUNTS. The table also captures all the columns of table ACCOUNTS, including the column ACC_NO. Looking at the rejected records, note that these account numbers already exist in the table; hence the records were rejected with the ORA-00001 error. Without the error-logging clause, the whole statement would have failed, with no records rejected. Through this clause, only the invalid records were rejected; all others were successful.
The syntax for the error logging clause is the same for INSERT,
UPDATE, MERGE and DELETE statements.
LOG ERRORS [INTO [schema.]table] [('simple_expression')] [REJECT LIMIT integer|UNLIMITED]
The optional INTO clause allows you to specify the name of the error logging table. If you omit this clause, the first 25 characters of the base table name are used along with the "ERR$_" prefix.
The simple_expression is used to specify a tag that makes the
errors easier to identify. This might be a string or any function
whose result is converted to a string.
The REJECT LIMIT is used to specify the maximum number of errors before the statement fails. The default value is 0 and the maximum value is the keyword UNLIMITED. For parallel DML operations, the reject limit is applied to each parallel server.
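For instance, a minimal sketch reusing the ACCOUNTS example above, tagging the run so its errors are easy to find later:
UPDATE accounts
SET acc_no = acc_no + 1
LOG ERRORS INTO err_accounts ('UPDATE_RUN_1') REJECT LIMIT UNLIMITED;

-- Later, find the errors from that particular run by its tag
SELECT ora_err_number$, ora_err_mesg$
FROM err_accounts
WHERE ora_err_tag$ = 'UPDATE_RUN_1';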
Restrictions
The DML error logging functionality is not invoked when:
- Deferred constraints are violated.
- Direct-path INSERT or MERGE operations
raise unique constraint or index violations.
- UPDATE or MERGE operations raise a
unique constraint or index violation.
In addition, the tracking of errors in LONG,
LOB and object types is not supported, although a table containing
these columns can be the target of error logging.
UNDO_RETENTION parameter (10gr2)
The UNDO_RETENTION parameter specifies the amount of time in
seconds that Oracle attempts to keep undo data available.
Setting this parameter to the appropriate value could be
described as more of an art than a science.
Set it too low and you are wasting disk space. In addition,
you aren't taking advantage of being able to flashback your
data to as far back as the disk space allocated to the undo
tablespace allows. Set it too high and you are in danger of
running out of freespace in the undo tablespace.
10G R2 comes to the rescue! The database now collects undo
usage statistics, identifies the amount of disk space
allocated to the undo tablespace and uses that information to
tune the undo retention time period to provide maximum undo
data retention. Administrators can determine the current
retention time period by querying the TUNED_UNDORETENTION
column of the V$UNDOSTAT view.
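A quick check of the value the database has tuned the retention to (a minimal sketch):
SELECT TO_CHAR(begin_time, 'DD-MON HH24:MI') begin_time,
       tuned_undoretention
FROM v$undostat
ORDER BY begin_time;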
You can also use the new Undo Advisor (Undo Management) under the Advisor Central option in OEM. It shows a graph of all the possible values; when you click on the graph, it changes the UNDO_RETENTION and/or the undo tablespace size. It is very good for "what-if" analysis.
Unlimited DBMS Output (10gr2)
In Oracle Database 10g Release 2, the old restriction on the maximum size of the DBMS_OUTPUT buffer (previously capped at 1,000,000 bytes) has been lifted: the maximum output can now be as much as required. You can set it to "unlimited" by simply issuing
set serveroutput on
In Oracle Database 10g Release 2, the command shows the following result:
show serveroutput
serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED
The default value is UNLIMITED.
Another inconvenience was the maximum size of a line displayed by dbms_output. The following is a typical error message for lines longer than 255 bytes.
ERROR at line 1:
ORA-20000: ORU-10028: line length overflow, limit of 255 chars per line
ORA-06512: at "SYS.DBMS_OUTPUT", line 35
ORA-06512: at "SYS.DBMS_OUTPUT", line 115
ORA-06512: at line 2
In Oracle Database 10g Release 2, the lines can be of any length.
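A minimal sketch demonstrating the lifted line limit in Release 2:
SET SERVEROUTPUT ON
BEGIN
  -- A 500-character line: this would have raised ORU-10028 in earlier releases
  DBMS_OUTPUT.put_line(RPAD('x', 500, 'x'));
END;
/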
TRANSPORT AWR DATA (10g r2)
This new feature provides the ability to move and consolidate AWR (Automatic Workload Repository) data across databases. Rather than impact the production system by analyzing the performance data on it, you may now "export" the AWR data for any period of time and import it into another database for analysis. This is accomplished via three new routines:
• DBMS_SWRF_INTERNAL.AWR_EXTRACT
• DBMS_SWRF_INTERNAL.AWR_LOAD
• DBMS_SWRF_INTERNAL.MOVE_TO_AWR
The extract routine allows you to specify a begin and end
snapshot period to unload and creates a file in the file
system. This file can then be loaded into the target
database via the load routine and then moved into the actual
AWR tables using the MOVE_TO_AWR routine.
Example:
A new package, DBMS_SWRF_INTERNAL, has been provided in Oracle Database 10g Release 2 for this purpose. To download the data into a DataPump dumpfile, you would use the procedure AWR_EXTRACT:
1 begin
2 DBMS_SWRF_INTERNAL.AWR_EXTRACT (
3 dmpfile => 'awr_data.dmp',
4 dmpdir => 'TMP_DIR',
5 bid => 302,
6 eid => 305
7 );
8* end;
Let's examine the lines in more detail.
Line | Description
3 | The name of the target file for the data. This is a DataPump export file. If no filename is given, the default value awrdat.dmp is used.
4 | The directory object where the dumpfile is written. In this case, you may have defined a directory TMP_DIR as /tmp.
5 | The snapshot ID of the beginning snapshot of the period.
6 | The end snapshot ID. Here you are exporting the snapshots between 302 and 305.
Now you can take the dumpfile awr_data.dmp to the new location and load it using another procedure in the same package, AWR_LOAD:
1 begin
2 DBMS_SWRF_INTERNAL.AWR_LOAD (
3 SCHNAME => 'ARUP',
4 dmpfile => 'awr_data',
5 dmpdir => 'TMP_DIR'
6 );
7* end;
In this code, you are loading the contents of the dumpfile awr_data.dmp from the directory specified by the directory object TMP_DIR. When loading the AWR data, it is not loaded into the SYS schema directly; rather, it's staged in a different schema first. The schema name is given in the parameter SCHNAME, as shown in line 3. After staging, the data is moved into the SYS schema:
1 begin
2 DBMS_SWRF_INTERNAL.MOVE_TO_AWR (
3 SCHNAME => 'ARUP'
4 );
5* end;
Here you are moving the AWR data from the schema ARUP to SYS. Moving AWR data to a different database, as mentioned above, has a lot of benefits and uses. You can analyze the data in a different database without affecting production too much. In addition, you can build a central repository of AWR data collected from multiple databases.
All these loading steps have been placed into a single file, awrload.sql, located in the $ORACLE_HOME/rdbms/admin directory. Similarly, the script awrextr.sql contains all the steps for the extraction process.
Improvements in Oracle Data Guard (10gr2)
# Automatic deletion of applied archive logs: Once primary database archived logs are applied to a logical standby database, they are deleted automatically without DBA intervention. This makes it easier to maintain both primary and logical standby databases. Physical standby databases have had this functionality since Oracle 10g Release 1, by using the Flash Recovery Area option.
# No downtime required: The primary database is no longer required to shut down or be put in a QUIESCING state, as we can create the logical standby database from a hot backup of the primary database, just like a physical standby database.
# Online upgrades: A lot of DBAs have dreamed about this for a long time: just like IBM's DB2 or Microsoft SQL Server, the DBA is no longer required to shut down the primary database to upgrade from Oracle 10g Release 2 with the Data Guard option. First, upgrade the logical standby database to the next release, test and validate the upgrade, do a role reversal by switching over to the upgraded database, and then finally upgrade the old primary database.
# New datatypes supported: I always used to hesitate whenever I thought of logical standby databases, as some of my databases never met the prerequisite conditions. In 10g Release 2, Oracle supports most of the datatypes, such as NCLOB, LONG, LONG RAW, BINARY_FLOAT, BINARY_DOUBLE, and IOTs.
The Power of Tuning
Oracle 10g brings many new features through which one can easily tune bad SQL and diagnose database performance issues, using database metrics, active session history, and the time model views.
The following query fetches the top SQL statements by time spent on CPU, non-I/O waits, and user I/O:
select ash.sql_id,
       sum(decode(ash.session_state,'ON CPU',1,0)) "CPU",
       sum(decode(ash.session_state,'WAITING',1,0)) -
       sum(decode(ash.session_state,'WAITING',
                  decode(en.wait_class,'User I/O',1,0),0)) "WAIT",
       sum(decode(ash.session_state,'WAITING',
                  decode(en.wait_class,'User I/O',1,0),0)) "IO",
       sum(decode(ash.session_state,'ON CPU',1,1)) "TOTAL"
from v$active_session_history ash, v$event_name en
where ash.sql_id is not null
and en.event# = ash.event#
group by ash.sql_id
order by sum(decode(ash.session_state,'ON CPU',1,1)) desc;
SQL_ID               CPU       WAIT         IO      TOTAL
------------- ---------- ---------- ---------- ----------
bqts5m5y267ct          0          0         20         20
4gd6b1r53yt88          0         16          1         17
35rqnp0hn3p3j          0         13          0         13
3shtm7x3a54qu          0          0          8          8
0hf43mhpx086p          0          0          4          4
Use V$SQL or V$SQLAREA to get the sql_text for one of the above sql_ids.
SELECT sql_text FROM v$sqlarea WHERE sql_id = 'bqts5m5y267ct';
dbms_xplan.display_awr can be used to extract the sql plan for this sql_id.
SELECT * FROM table(dbms_xplan.display_awr('bqts5m5y267ct'));
Capture Bind Variables
There is one very useful feature in Oracle 10g to catch the value of bind variables without a trace: the view V$SQL_BIND_CAPTURE. This view gives you information about the values bound to a specific sql_id, along with the capture time. When we use this view with V$SESSION we can capture the values bound to the currently running SQL statements. This view is populated only when the STATISTICS_LEVEL parameter is set to ALL or TYPICAL; bind capturing is disabled when the STATISTICS_LEVEL initialization parameter is set to BASIC. Here is an example of the usage:
col sid format a4
col username format a15
col name format a15
col value_string format a10
break on sql_text
col value format a10

select distinct sq.sql_text, spc.name, substr(spc.value_string,1,15) value
from v$sql_bind_capture spc, v$session s, v$sql sq
where s.sql_hash_value = spc.hash_value
and s.sql_address = spc.address
and sq.sql_id = s.sql_id
-- and spc.was_captured = 'YES'
-- and s.type <> 'BACKGROUND'
-- and s.status = 'ACTIVE'
-- and username not like 'LAWSON%'
order by 1, 2;
Another one:
col sid format a4
col username format a5
col sql_hash_value format 99999999
col sqlid format a14
col sql_child_number format 9
col name format a4
col value_string format a8
col last_captured format a9

select s.sid, s.username, sq.sql_text, s.sql_hash_value, s.sql_id,
       s.sql_child_number, spc.name, spc.value_string, last_captured
from v$sql_bind_capture spc, v$session s, v$sql sq
where s.sql_hash_value = spc.hash_value
and s.sql_address = spc.address
and sq.sql_id = s.sql_id
and s.sid = %SID
-- and spc.was_captured = 'YES'
-- and s.type <> 'BACKGROUND'
-- and s.status = 'ACTIVE';
Dynamic RMAN Views for Past and Current Jobs
In Oracle Database 10g Release 2, the new dynamic views provided for RMAN jobs make it extremely easy to peek into these activities, current as well as past.
The first new view, V$RMAN_BACKUP_JOB_DETAILS, records the history
of all backups. In addition to revealing simple details like how
long the backup took, the view shows a slew of other details that
are important to analyze after the fact. Let's examine some
important ones and how they can help you analyze RMAN sessions.
Let's say that you want to know more or less everything about that history: how many RMAN jobs have been issued, the status of each job, what time they started and completed, what types of jobs they were, and so on. You would issue a query as follows:
col STATUS format a9
col hrs format 999.99
select SESSION_KEY, INPUT_TYPE, STATUS,
       to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
       to_char(END_TIME,'mm/dd/yy hh24:mi') end_time,
       elapsed_seconds/3600 hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;
The output may resemble the one shown below:

SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME          HRS
----------- ------------- --------- -------------- -------------- ------
          1 DATAFILE FULL COMPLETED 03/25/05 00:48 03/25/05 00:48    .00
          4 DB FULL       COMPLETED 03/27/05 02:09 03/27/05 02:11    .04
          7 DB FULL       FAILED    03/27/05 02:18 03/27/05 02:24    .10
The SESSION_KEY column is the key to the other views showing other relevant information. (More on that in a moment.)
The columns START_TIME and END_TIME are fairly intuitive.
The column ELAPSED_SECONDS shows the elapsed time in seconds, which
I have converted to hour format for easy reading.
The STATUS column shows the status of the RMAN jobs. When the job is
in progress, the status column shows RUNNING.
Another important piece of information recorded is the rate of the
backup produced and how fast data was read and written by the
process. This information helps you diagnose any slowness in the
RMAN jobs.
col ins format a10
col outs format a10
select SESSION_KEY, OPTIMIZED, COMPRESSION_RATIO,
       INPUT_BYTES_PER_SEC_DISPLAY ins,
       OUTPUT_BYTES_PER_SEC_DISPLAY outs,
       TIME_TAKEN_DISPLAY
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;
SESSION_KEY OPT COMPRESSION_RATIO INS        OUTS       TIME_TAKEN
----------- --- ----------------- ---------- ---------- ----------
          1 NO         2.23776224 3.33M      1.49M      00:00:06
          4 NO         1.31065794 6.92M      5.28M      00:02:16
          7 NO         1.32363058 3.68M      2.78M      00:06:00
Note how the time is displayed in a human-readable hours:minutes:seconds format. The columns INS and OUTS display the data input and output per second, in an easier-to-read format such as M for megabytes. In the above example, you can see that the job marked by session key 4 saw a read rate of 6.92MB/s and a write rate of 5.28MB/s. You can now examine the output for several RMAN executions and look for a pattern emerging from them. This pattern analysis will help you identify any potential bottlenecks revealed through variations.
The backup information can also be filtered by backup type. The new view V$RMAN_BACKUP_TYPE shows the types of backups RMAN performs and how the output can be organized.
select * from
V$RMAN_BACKUP_TYPE;
WEIGHT INPUT_TYPE
---------- -------------
1 BACKUPSET
2 SPFILE
3 CONTROLFILE
4 ARCHIVELOG
5 DATAFILE INCR
6 DATAFILE FULL
7 DB INCR
8 RECVR AREA
9 DB FULL
The object type weight determines how the records in the view are
ordered.
Another very useful view shows the RMAN output. Say you have run an RMAN job via a shell script but something failed. Sure, you have an output file that records the RMAN output; but unfortunately, you have lost it. What can you do? Fortunately, the new view V$RMAN_OUTPUT records the output from RMAN jobs for later viewing. This view is useful for scripted RMAN jobs as well as ad-hoc jobs.
select output
from v$rman_output
where session_key = 4
order by recid;
OUTPUT
----------------------------------------------------------------------
connected to target database: TEST (DBID=1849323268)
Starting backup at 27-MAR-05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=201 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/TEST/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/TEST/sysaux01.dbf
input datafile fno=00002 name=/u01/app/oracle/oradata/TEST/undotbs01.dbf
input datafile fno=00004 name=/u01/app/oracle/oradata/TEST/users01.dbf
input datafile fno=00005 name=/u01/app/oracle/oradata/TEST/accdata01.dbf
channel ORA_DISK_1: starting piece 1 at 27-MAR-05
channel ORA_DISK_1: finished piece 1 at 27-MAR-05
piece handle=/u01/app/oracle/product/10.2.0/db_1/dbs/07ggc7qr_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:46
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ORA_DISK_1: starting piece 1 at 27-MAR-05
channel ORA_DISK_1: finished piece 1 at 27-MAR-05
piece handle=/u01/app/oracle/product/10.2.0/db_1/dbs/08ggc7u6_1_1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 27-MAR-05
As you can see, the entire output from the RMAN job is captured here. This is an in-memory view and is cleared when the instance is shut down. If you want the RMAN output to be persistent, the rows can be copied to a permanent table. The column SESSION_KEY shows the records associated with the RMAN jobs shown in the view V$RMAN_BACKUP_JOB_DETAILS. Now you will never lose the output from the RMAN job again.
Via Oracle Enterprise Manager you can utilize the new views to
create a new backup report. This report provides an instantaneous
overview of backup operations that have been made in your
enterprise. You can filter the data by backup type and status.