Oracle 12c New Features
Pluggable Databases
There is a radical change in the core database architecture through
the introduction of Container Databases, also called CDBs, and
Pluggable Databases (PDBs).
The memory and processes are owned by the Container Database.
The container holds the metadata, while the PDBs hold the user
data. You can create up to 253 PDBs, including the seed PDB.
A Container Database (CDB) comprises one or more Pluggable
Databases (PDBs).
CDBs are databases that contain common elements shared with PDBs.
PDBs are much like traditional databases in prior releases.

A CDB owns in common:
- Control files and SPFILE
- Online and archived redo logs
- Backup sets and image copies
A CDB has one SYSTEM, SYSAUX, UNDO, and TEMP tablespace
Oracle-supplied data dictionary objects, users, and roles are
shared globally between the CDB and all PDBs

PDBs also own local objects
- PDBs have a local SYSTEM and SYSAUX tablespace
- PDBs may have their own local TEMP tablespace (12.2
feature)
- PDBs can own one or more application schemas: Local
tablespaces and Local users and roles
- All PDBs share the same SGA and PGA
- All PDBs share the same background processes
In a large setup, it is common to see 20 or 30 different
instances running in a production environment. With this many
instances, maintenance is a nightmare, as all of them have to be
separately
•Upgraded
•Patched
•Monitored
•Tuned
•RAC enabled
•Adjusted
•Backed up and
•Data Guarded.
With the Pluggable Databases feature, you only have to do all this for
ONE single instance. Without this feature, prior to 12c, you would
have to create separate schemas, and there is always a security
threat, however much isolation you build into it.
You can pull out (or "unplug") a container from one CDB and place
it (or, "plug" it) into another CDB. This is why a container is
also known as a Pluggable Database (PDB).
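As a minimal sketch of the unplug/plug flow (the PDB name and XML manifest path are illustrative, and the commands are run from the root container):
-- On the source CDB: close the PDB and unplug it into an XML manifest
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/u01/app/oracle/pdb1.xml';
DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;
-- On the target CDB: plug it in using the manifest, then open it
CREATE PLUGGABLE DATABASE pdb1 USING '/u01/app/oracle/pdb1.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb1 OPEN;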
For all practical purposes from the perspective of the clients,
the PDBs are just regular databases.
PDBs do not have an instance associated with them, eliminating
this overhead. Let's examine the setup with an example. Suppose
you have a CDB (container database - the real database that houses
other PDBs) called CONA which has a PDB called PDB1.
If you check the Oracle instance, you will see that there is only
one - that of the CDB. Let's check the ORACLE_SID first:
$ echo $ORACLE_SID
CONA
That's it. There is just one SID. Next, let's check for the
processes, specifically the very important one known as "pmon":
$ ps -aef|grep pmon
oracle 7672 7323 0 11:18 pts/2 00:00:00 grep pmon
oracle 12519 1 0 Feb19 ? 00:00:00 asm_pmon_+ASM
oracle 21390 1 0 Feb19 ? 00:00:00 ora_pmon_CONA
As you can see, the only instance running is CONA (the CDB)
besides, of course, the ASM instance. There is no instance for the
PDB named PDB1. You can create as many of these PDBs as you like on
this CDB called CONA; there will be no additional instances.
PDBs are simply hosted on the CDBs. So in effect these PDBs are
like virtual machines running on a physical machine in a virtual
machine context.
Since the CDB is the only real database, all the physical database
components, such as the Automatic Diagnostic Repository (ADR), are
associated with it. Let's check the ADR using the ADRCI command
line utility:
$ adrci
ADRCI: Release 12.1.0.1.0 - Production on Sun Feb 24
12:18:12 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.
All rights reserved.
ADR base = "/u02/oradb"
adrci> show homes
ADR Homes:
diag/rdbms/cona/CONA
As you see from the output, there is only one ADR home - that for
CONA (the CDB)
You can check the containers (or PDBs) created in a database in a
view named V$PDBS, which is brand new in Oracle Database 12c.
select con_id, dbid, name from v$pdbs;

    CON_ID       DBID NAME
---------- ---------- ------------------------------
         2 4050437773 PDB$SEED
         3 3315520345 PDB1
         4 3874438771 PDB2
         5 3924689769 PDB3
Note how the DBIDs are also different for each PDB. There are two
striking oddities in this output:
- There is no CON_ID of 1. The answer is simple - there is
a special container called the "root" container, known as
CDB$Root that is created to hold the metadata. This container
has the CON_ID of 1.
- There is a PDB called PDB$SEED, which is something we
didn't create.
There are new built-in functions to identify PDBs from their
details without querying the V$PDBS view. Here is an example of how
to identify the container ID from the name:
select con_name_to_id('PDB2') from dual;
And, here is how you can get the container ID from the DBID:
select con_dbid_to_id(3924689769) from dual;
Another cool feature is that you can allocate a CPU percentage to
each PDB.
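A hedged sketch of this, using a DBMS_RESOURCE_MANAGER CDB resource plan to assign CPU shares and a utilization limit per PDB (the plan name, PDB name and percentages are illustrative):
-- Run from the root container (CDB$ROOT)
BEGIN
  DBMS_RESOURCE_MANAGER.create_pending_area;
  DBMS_RESOURCE_MANAGER.create_cdb_plan(
    plan    => 'cona_cdb_plan',
    comment => 'CDB plan for CONA');
  -- Give PDB1 three shares and cap it at 70% CPU utilization
  DBMS_RESOURCE_MANAGER.create_cdb_plan_directive(
    plan               => 'cona_cdb_plan',
    pluggable_database => 'PDB1',
    shares             => 3,
    utilization_limit  => 70);
  DBMS_RESOURCE_MANAGER.validate_pending_area;
  DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/
ALTER SYSTEM SET resource_manager_plan = 'cona_cdb_plan';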
Online rename and relocation of an active data file
Unlike in previous releases, a data file migration or rename in
Oracle Database 12c no longer requires a number of steps, i.e.
putting the tablespace in READ ONLY mode followed by taking the
data file offline.
In 12c R1, a data file can be renamed or moved online simply using
the ALTER DATABASE MOVE DATAFILE SQL statement. While the data
file is being transferred, the end user can perform queries, DML
and DDL tasks. Additionally, data files can be migrated between
storage types, e.g. from non-ASM to ASM and vice versa.
Rename a data file:
ALTER DATABASE MOVE DATAFILE '/u00/data/users01.dbf' TO
'/u00/data/users_01.dbf';
Migrate a data file from non-ASM to ASM:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO
'+DG_DATA';
Migrate a data file from one ASM disk group to another:
ALTER DATABASE MOVE DATAFILE
'+DG_DATA/DBNAME/DATAFILE/users_01.dbf' TO '+DG_DATA_02';
Overwrite the data file with the same name, if it exists at the
new location:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO
'/u00/data_new/users_01.dbf' REUSE;
Copy the file to a new location whilst retaining the old copy in
the old location:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO
'/u00/data_new/users_01.dbf' KEEP;
You can monitor the progress while a data file is being moved by
querying the v$session_longops dynamic view.
Additionally, you can refer to the alert.log of the database,
where Oracle writes the details about the action taking place.
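For example, a simple progress check along these lines can be run from another session while the move is in flight (the filter on unfinished operations is generic rather than specific to the move operation):
SELECT sid, serial#, opname, sofar, totalwork,
       ROUND(sofar/totalwork*100, 2) AS pct_done
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar <> totalwork;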
The container database (CDB) cannot move files that belong to a
pluggable database. The following query displays all the datafiles
for the CDB and the PDBs.
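A minimal version of such a query, run as a common user from the root container with the PDBs open so that the CDB_ view covers all containers:
SELECT con_id, file_id, file_name
  FROM cdb_data_files
 ORDER BY con_id, file_id;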
Not surprisingly, the ALTER DATABASE MOVE DATAFILE syntax does not
work for temporary files. That is not a major problem, as temporary
files can be created and removed quite simply.
Things to know before using this command
- Data Guard: move operations on the primary and standby are
independent of each other.
- Flashback: flashback operations retain the new location of the
data file, irrespective of the flashback time.
- OS: old files are not deleted on Windows.
Online Migration of Table Partition or Sub-Partition
Migration of a table partition or sub-partition to a different
tablespace no longer requires a complex procedure in Oracle 12c.
In a similar way to how a heap (non-partition) table online
migration was achieved in the previous releases, a table partition
or sub-partition can be moved to a different tablespace online or
offline.
When an ONLINE clause is specified, all DML operations can be
performed without any interruption on the partition|sub-partition
which is involved in the procedure. In contrast, no DML operations
are allowed if the partition|sub-partition is moved offline.
Here are some working examples:
ALTER TABLE &table_name MOVE PARTITION|SUBPARTITION &partition_name
  TABLESPACE &tablespace_name [ONLINE];

ALTER TABLE &table_name MOVE PARTITION|SUBPARTITION &partition_name
  TABLESPACE &tablespace_name UPDATE INDEXES [ONLINE];
The first example moves a table partition or sub-partition to a
new tablespace, offline by default. The second example moves a
table partition or sub-partition while maintaining any local/global
indexes on the table.
Additionally, no DML operation is interrupted when the ONLINE
clause is specified.
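For instance, using the emp_part table created later in this article, a fully online move of one partition might look like this (the target tablespace name is illustrative):
ALTER TABLE emp_part MOVE PARTITION p1
  TABLESPACE users
  UPDATE INDEXES ONLINE;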
Important notes:
- The UPDATE INDEXES clause prevents any local/global
indexes on the table from becoming unusable.
- Table online migration restrictions apply here too.
- There is a locking mechanism involved to complete the
procedure; it might also lead to performance degradation and
can generate a lot of redo, depending upon the size of the
partition or sub-partition.
In addition to the operations listed above, the following operations
are now non-blocking, provided the ONLINE keyword is used.
ALTER TABLE table-name DROP CONSTRAINT constraint-name ONLINE;
ALTER TABLE table-name SET UNUSED (column-list) ONLINE;
DROP INDEX index-name ONLINE;
ALTER INDEX index-name UNUSABLE ONLINE;
ALTER TABLE table_name MOVE PARTITION partition-name
ONLINE ...;
ALTER TABLE table_name MOVE SUBPARTITION partition-name
ONLINE ...;
The following operations are non-blocking without the ONLINE
keyword.
ALTER INDEX index-name INVISIBLE;
ALTER INDEX index-name VISIBLE;
Invisible columns
In Oracle 11g R1, Oracle introduced a couple of good enhancements
in the form of invisible indexes and virtual columns. Taking this
forward, the invisible column concept has been introduced in
Oracle 12c.
In the previous releases, to hide important data –columns from
being displayed in the generic queries– you needed to create a
view hiding the required information or apply some sort of
security conditions.
In 12c, you can now have an invisible column in a table. When a
column is defined as invisible, it won't appear in generic
queries unless the column is explicitly referred to in the SQL
statement or condition, nor will it show up in the output of the
DESCRIBE command. It is pretty easy to add or modify a column to be
invisible and vice versa:
CREATE TABLE emp (eno number(6), name varchar2(40), sal
number(9) INVISIBLE);
desc emp
Name             Null?    Type
---------------- -------- ------------
ENO                       NUMBER(6)
NAME                      VARCHAR2(40)
ALTER TABLE emp MODIFY (sal visible);

desc emp
Name             Null?    Type
---------------- -------- ------------
ENO                       NUMBER(6)
NAME                      VARCHAR2(40)
SAL                       NUMBER(9)
You must explicitly refer to the invisible column name in the
INSERT statement to insert data into invisible columns. A
virtual column or partition column can be defined as invisible
too. However, temporary tables, external tables and cluster tables
do not support invisible columns.
If a user specifically selects the invisible column (i.e. select
salary,…) the column WILL be displayed in the output (you have to
know it’s there).
In SQL*Plus, use SET COLINVISIBLE ON to display the invisible
column when using the DESCRIBE command
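A short sketch, assuming the SAL column is still invisible as in the original CREATE TABLE above:
-- The invisible column must be named explicitly to populate or select it
INSERT INTO emp (eno, name, sal) VALUES (100, 'SCOTT', 5000);
SELECT eno, name, sal FROM emp;  -- SAL is returned when referenced explicitly
SELECT * FROM emp;               -- SAL is omitted from SELECT *
-- In SQL*Plus, expose invisible columns in DESCRIBE output
SET COLINVISIBLE ON
DESC emp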
Multiple indexes on the same column
Pre Oracle 12c, you couldn't create multiple indexes on the
same column or set of columns in any form. For example, if you
had an index on column {a} or columns {a,b}, you couldn't create
another index on the same column or set of columns in the same
order.
In 12c, you can have multiple indexes on the same column or set of
columns as long as the index type is different. However, only one
type of index is usable/visible at a given time. In order to test
the invisible indexes, you need to set
optimizer_use_invisible_indexes=true.
Here's an example:
CREATE INDEX emp_ind1 ON EMP(ENO,ENAME);
CREATE BITMAP INDEX emp_ind2 ON EMP(ENO,ENAME) INVISIBLE;
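To switch which index the optimizer uses, you can swap their visibility, or allow invisible indexes to be considered for the current session:
ALTER INDEX emp_ind1 INVISIBLE;
ALTER INDEX emp_ind2 VISIBLE;
-- Alternatively, let the optimizer consider invisible indexes in this session
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;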
DDL logging
You can now log DDL actions into xml and log files. This is
very useful for knowing when a drop or create command was
executed and by whom.
The ENABLE_DDL_LOGGING initialization parameter must be configured
in order to turn on this feature. The parameter can be set at the
database or session level.
When this parameter is enabled, all DDL commands are logged in an
xml and a log file under the
$ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location. An xml file
contains information such as the DDL command, IP address,
timestamp, etc.
This helps to identify when a user or table was dropped or when a
DDL statement was triggered.
To enable DDL logging
ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;
The following DDL statements are likely to be recorded in the
xml/log file:
- CREATE|ALTER|DROP|TRUNCATE TABLE
- DROP USER
- CREATE|ALTER|DROP PACKAGE|FUNCTION|VIEW|SYNONYM|SEQUENCE
The information will be saved under:
/u01/app/oracle/diag/rdbms/orcl/orcl/log/ddl/log.xml
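A quick sketch of the feature in action (the table name is illustrative):
ALTER SESSION SET ENABLE_DDL_LOGGING = TRUE;
CREATE TABLE ddl_log_demo (id NUMBER);
DROP TABLE ddl_log_demo;
-- Both statements are then recorded in the log.xml file under the
-- $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/log/ddl location noted above.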
Temporary Undo
Each Oracle database contains a set of system-related tablespaces,
such as SYSTEM, SYSAUX, UNDO and TEMP, each used for different
purposes within the Oracle database.
Pre Oracle 12c, undo records generated by temporary tables used to
be stored in the undo tablespace, much like the undo records of
general/persistent tables.
However, with the temporary undo feature in 12c R1, the temporary
undo records can now be stored in the temporary tablespace instead
of the undo tablespace.
The prime benefits of temporary undo include: reduced undo
tablespace usage and less redo generation, as the information won't
be logged in the redo logs.
You have the flexibility to enable the temporary undo option
either at the session level or at the database level.
To be able to use the new feature, the following needs to be set:
- The COMPATIBLE parameter must be set to 12.0.0 or higher
- Enable the TEMP_UNDO_ENABLED initialization parameter
- Since the temporary undo records are now stored in a temporary
tablespace, you need to create the temporary tablespace with
sufficient space
- At the session level, you can use: ALTER SESSION SET
TEMP_UNDO_ENABLED=TRUE;
Query temporary undo information
The dictionary views listed below are used to view/query the
information/statistics about the temporary undo data:
- V$TEMPUNDOSTAT
- DBA_HIST_UNDOSTAT
- V$UNDOSTAT
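For example, a quick look at temporary undo activity might be as follows (the BEGIN_TIME ordering column is assumed to mirror V$UNDOSTAT):
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
-- ... run some DML against a global temporary table, then:
SELECT * FROM v$tempundostat ORDER BY begin_time;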
To disable the feature, you simply need to set the following:
ALTER SYSTEM|SESSION SET TEMP_UNDO_ENABLED=FALSE;
Backup-specific user privilege
In 11g R2, the SYSASM privilege was introduced to perform
ASM-specific operations. Similarly, a backup- and recovery-specific
privilege, SYSBACKUP, has been introduced in 12c to execute backup
and recovery commands in Recovery Manager (RMAN). Therefore, you
can create a local user in the database and grant it the SYSBACKUP
privilege to perform any backup and recovery related tasks in RMAN
without being granted the SYSDBA privilege.
$ ./rman target "username/password as SYSBACKUP"
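For example, a dedicated backup user could be created and granted the privilege like this (the user name and password are illustrative):
CREATE USER bkp_admin IDENTIFIED BY bkp_password;
GRANT SYSBACKUP TO bkp_admin;
-- Then connect to RMAN with the new privilege:
-- $ rman target "bkp_admin/bkp_password as SYSBACKUP"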
Table partition maintenance enhancements
Adding multiple new partitions
Before Oracle 12c, it was only possible to add one new partition
at a time to an existing partitioned table. To add more than one
new partition, you had to execute an individual ALTER TABLE ADD
PARTITION statement for every new partition.
Oracle 12c provides the flexibility to add multiple new partitions
using a single ALTER TABLE ADD PARTITION command. The
following example explains how to add multiple new partitions to
an existing partitioned table:
CREATE TABLE emp_part (eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
 PARTITION p2 VALUES LESS THAN (20000),
 PARTITION p3 VALUES LESS THAN (30000));
Now let's add a couple of new partitions:
ALTER TABLE emp_part ADD
  PARTITION p4 VALUES LESS THAN (35000),
  PARTITION p5 VALUES LESS THAN (40000);
In the same way, you can add multiple new partitions to list and
system partitioned tables, provided that the MAXVALUE partition
doesn't exist.
How to drop and truncate multiple partitions/sub-partitions
As part of data maintenance, you typically use either the drop or
truncate partition maintenance task on a partitioned table.
Pre 12c R1, it was only possible to drop or truncate one partition
at a time on an existing partitioned table. With Oracle 12c,
multiple partitions or sub-partitions can be dropped or truncated
using a single ALTER TABLE table_name {DROP|TRUNCATE}
PARTITIONS command.
The following example explains how to drop or truncate multiple
partitions on an existing partitioned table:
ALTER TABLE emp_part DROP PARTITIONS p4,p5;
ALTER TABLE emp_part TRUNCATE PARTITIONS p4,p5;
To keep indexes up-to-date, use the UPDATE INDEXES or UPDATE
GLOBAL INDEXES clause, shown below:
ALTER TABLE emp_part DROP PARTITIONS p4,p5 UPDATE GLOBAL
INDEXES;
ALTER TABLE emp_part TRUNCATE PARTITIONS p4,p5 UPDATE
GLOBAL INDEXES;
If you truncate or drop a partition without the UPDATE GLOBAL
INDEXES clause, you can query the column ORPHANED_ENTRIES
in the USER_INDEXES or USER_IND_PARTITIONS
dictionary views to find out whether the index contains any stale
entries.
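For example:
SELECT index_name, orphaned_entries
  FROM user_indexes
 WHERE table_name = 'EMP_PART';

SELECT index_name, partition_name, orphaned_entries
  FROM user_ind_partitions
 WHERE index_name IN (SELECT index_name
                        FROM user_indexes
                       WHERE table_name = 'EMP_PART');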
Splitting a single partition into multiple new partitions
The new enhanced SPLIT PARTITION clause in 12c will let
you split a particular partition or sub-partition into multiple
new partitions using a single command. The following example
explains how to split a partition into multiple new partitions:
CREATE TABLE emp_part (eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
 PARTITION p2 VALUES LESS THAN (20000),
 PARTITION p_max VALUES LESS THAN (MAXVALUE));
ALTER TABLE emp_part SPLIT PARTITION p_max INTO
  (PARTITION p3 VALUES LESS THAN (25000),
   PARTITION p4 VALUES LESS THAN (30000),
   PARTITION p_max);
Merge multiple partitions into one partition
You can merge multiple partitions into a single partition using a
single ALTER TABLE ... MERGE PARTITIONS statement:
CREATE TABLE emp_part (eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
 PARTITION p2 VALUES LESS THAN (20000),
 PARTITION p3 VALUES LESS THAN (30000),
 PARTITION p4 VALUES LESS THAN (40000),
 PARTITION p5 VALUES LESS THAN (50000),
 PARTITION p_max VALUES LESS THAN (MAXVALUE));
ALTER TABLE emp_part MERGE PARTITIONS p3,p4,p5 INTO
PARTITION p_merge;
If the partitions to be merged are contiguous in the range, you can
use the following syntax:
ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION
p_merge;
Database upgrade improvements
Whenever a new Oracle version is announced, the immediate
challenge that every DBA confronts is the upgrade process. In this
section, I will explain the two new improvements introduced for
upgrading to 12c.
Pre-upgrade script
A new and much improved pre-upgrade information script,
preupgrd.sql, replaces the legacy utlu[121]s.sql script in 12c R1.
Apart from the pre-upgrade checks, the script is capable of
addressing various issues – in the form of fixup scripts – that are
raised during the pre/post upgrade process.
The fixup scripts that are generated can be executed to resolve
the problems at different levels, for example, pre-upgrade and
post upgrade. When upgrading the database manually, the script
must be executed manually before initiating the actual upgrade
procedure. However, when the Database Upgrade Assistant (DBUA)
tool is used to perform a database upgrade, it automatically
executes the pre-upgrade scripts as part of the upgrade procedure
and will prompt you to execute the fixup scripts in case of any
errors that are reported.
The following example demonstrates how to execute the scripts:
@$ORACLE_12GHOME/rdbms/admin/preupgrd.sql
The above script generates a log file and a
[pre/post]upgrade_fixup.sql script. All these files are
located under the $ORACLE_BASE/cfgtoollogs directory. Before you
continue with the real upgrade procedure, you should run through
the recommendations mentioned in the log file and execute the
scripts to fix any issues.
Note: Ensure you copy the preupgrd.sql and utluppkg.sql scripts
from the 12c Oracle home/rdbms/admin directory to the current
Oracle database/rdbms/admin location.
Parallel-upgrade utility
The database upgrade duration is directly proportional to the
number of components that are configured on the database, rather
than the database size. In previous releases, there was no direct
option or workaround available to run the upgrade process in
parallel to quickly complete the overall upgrade procedure.
The catctl.pl (parallel-upgrade utility) that replaces the legacy
catupgrd.sql script in 12c R1 comes with an option to run the
upgrade procedure in parallel mode to reduce the overall time
required to complete the procedure.
The following procedure explains how to initiate the parallel
(with 3 processes) upgrade utility; you need to run this after you
STARTUP the database in UPGRADE mode:
cd $ORACLE_12_HOME/perl/bin
$ ./perl catctl.pl -n 3 catupgrd.sql
The above two steps need to be run explicitly when a database is
upgraded manually. However, the DBUA incorporates both of these new
changes.
Restore/Recover data files over the network
You can now restore or recover a data file, control file, spfile,
tablespace or entire database between primary and standby
databases using a SERVICE name. This is particularly useful to
synchronize the primary and standby databases.
When there is a significant gap between the primary and standby
database, you no longer require the complex roll-forward procedure
to fill it. RMAN is able to perform standby recovery by getting the
incremental backups over the network and applying them to the
physical standby database. In addition, you can directly copy the
required data files from the standby site to the primary site using
the SERVICE name, e.g. when a data file or tablespace is lost on
the primary database, without actually restoring the data files
from a backup set.
The following procedure demonstrates how to perform a roll forward
using the new features to synchronize the standby database with
its primary database:
On the physical standby database:
./rman target "username/password@standby_db_tns as SYSBACKUP"
RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns
USING COMPRESSED BACKUPSET;
The above example uses the primary_db_tns connect string defined
on the standby database, connects to the primary database,
performs an incremental backup, transfers these incremental
backups to the standby destination, and then applies the files to
the standby database to synchronize it. However, you need to
ensure you have configured primary_db_tns on the standby side to
point to the primary database.
In the following example, I will demonstrate a scenario to restore
a lost data file on the primary database by fetching the data file
from the standby database:
On the primary database:
./rman target "username/password@primary_db_tns as SYSBACKUP"
RMAN> RESTORE DATAFILE
'+DG_DISKGROUP/DBNAME/DATAFILE/filename' FROM SERVICE
standby_db_tns;
Data Pump enhancements
There are quite a few useful additions, such as converting a view
into a table during export and turning off logging during import.
Wildcards in TRANSPORT_DATAFILES
When using the TRANSPORT_DATAFILES parameter, the datafile
definition can now use wildcards in the file name.
- * : Zero to many characters.
- ? : Exactly one character.
Wildcards are not allowed in directory names, only in file names,
and if a wildcard matches a file that is not part of the transport
set, an error is produced.
# This
transport_datafiles=/my/path/file10.dbf,/my/path/file11.dbf,/my/path/file12.dbf
# becomes one of these alternatives.
transport_datafiles=/my/path/file*.dbf
transport_datafiles=/my/path/file1*.dbf
transport_datafiles=/my/path/file1?.dbf
New Substitution Variables for File Names
Multiple files are
generated by parallel exports, so each file needs to have a unique
name. This is achieved using substitution variables. In previous
releases the only substitution variable available was "%U", which
generated a two digit number from 01-99. Oracle 12.2 includes
additional substitution variables.
The following
substitution variables are only available for export operations.
- %d, %D : The current day of
the month in DD format.
- %m, %M : The current month in
MM format.
- %y, %Y : The current year in
YYYY format.
- %t, %T : The current date in
YYYYMMDD format.
The following
substitution variables are available for both export and import
operations.
- %U : Unchanged from previous
releases. A two digit number incremented between 01-99.
- %l, %L : This starts off
similar to "%U", producing a two digit number between 01-99, but
it can extend up to 2147483646, so the resulting file name is
not a fixed length.
The following example
shows the usage of two of the new substitution variables in a
parallel export. The output is edited to reduce its size.
$ expdp system/OraPasswd1@pdb1 schemas=OE directory=TEST_DIR dumpfile=OE_%T_%L.dmp logfile=expdpOE.log parallel=4
Export: Release 12.2.0.1.0 - Production on Wed Mar 22 16:04:32 2017
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@pdb1 schemas=OE directory=TEST_DIR dumpfile=OE_%T_%L.dmp logfile=expdpOE.log parallel=4
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
.
.
.
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/tmp/OE_20170322_01.dmp
/tmp/OE_20170322_02.dmp
/tmp/OE_20170322_03.dmp
/tmp/OE_20170322_04.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" completed with 1 error(s) at Wed Mar 22 16:05:41 2017 elapsed 0 00:01:07
Turn off redo log generation
The new TRANSFORM option introduced in Data Pump import provides
the flexibility to turn off redo generation for objects during the
course of the import. When the DISABLE_ARCHIVE_LOGGING value is
specified with the TRANSFORM option, redo generation for the
objects in question is turned off for the entire import duration.
This feature provides great relief when importing large tables,
reducing excessive redo generation, which results in quicker
imports. This attribute applies to tables and indexes.
This example demonstrates this feature:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Export view as table
With the new VIEWS_AS_TABLES option, you can unload view data
into a table. The following example shows how to unload view data
into a table during export:
$ ./expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log
views_as_tables=my_view:my_table
Change Table Compression at Import
The TABLE_COMPRESSION_CLAUSE clause of the TRANSFORM parameter
allows the table compression characteristics of the tables in an
import to be altered on the fly.
TRANSFORM=TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]
The allowable values for the TABLE_COMPRESSION_CLAUSE include the
following.
- NONE : The table compression clause is omitted, so the
table takes on the compression characteristics of the
tablespace.
- NOCOMPRESS : Disables table compression.
- COMPRESS : Enables basic table compression.
- ROW STORE COMPRESS BASIC : Same as COMPRESS.
- ROW STORE COMPRESS ADVANCED : Enables advanced
compression, also known as OLTP compression.
- COLUMN STORE COMPRESS FOR QUERY : Hybrid Columnar
Compression (HCC) available in Exadata and ZFS storage
appliances.
- COLUMN STORE COMPRESS FOR ARCHIVE : Hybrid Columnar
Compression (HCC) available in Exadata and ZFS storage
appliances.
Compression clauses that contain whitespace must be enclosed by
single or double quotes.
An example of its use is shown below.
$ impdp system@db1 directory=test dumpfile=emp
logfile=impdp.log remap_schema=scott:test
transform=table_compression_clause:compress
Transportable Database
The TRANSPORTABLE option can now be combined with the FULL option
to transport a whole database.
$ expdp system@db1 full=Y transportable=always version=12
directory=TEMP_DIR dumpfile=orcl.dmp logfile=expdporcl.log
Dumpfile Compression Options
As part of the Advanced Compression option, you can specify the
COMPRESSION_ALGORITHM parameter to determine the level of
compression of the export dumpfile. This is not related to table
compression discussed previously.
COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]
The meanings of the available values are described below.
- BASIC : The same compression algorithm used in previous
versions. Provides good compression without severely
impacting performance.
- LOW : For use when reduced CPU utilisation is a priority
over compression ratio.
- MEDIUM : The recommended option. Similar characteristics
to BASIC, but uses a different algorithm.
- HIGH : Maximum available compression, but more CPU
intensive.
An example of its use is shown below.
$ expdp scott@db1 tables=emp directory=test dumpfile=emp
logfile=emp.log compression=all compression_algorithm=medium
Data Pump enhancements (12.2)
Parameter File Information to Log File
The contents of the parameter file specified by the PARFILE
parameter are written to the log file, but not echoed to the
screen.
Create the following parameter file and run an export using it.
cat > /tmp/parfile.txt <<EOF
USERID=test/test@pdb1
schemas=TEST
directory=TEST_DIR
dumpfile=TEST.dmp
logfile=expdpTEST.log
EOF
expdp parfile=/tmp/parfile.txt
If we check the top of the resulting log file we can
see the parameter file contents.
$ head -15 /tmp/expdpTEST.log
;;;
Export: Release 12.2.0.1.0 - Production on Mon Aug 21
19:45:00 2017
Copyright (c) 1982, 2017, Oracle and/or its
affiliates. All rights reserved.
;;;
Connected to: Oracle Database 12c Enterprise Edition
Release 12.2.0.1.0 - 64bit Production
;;;
**************************************************************************
;;; Parfile values:
;;; parfile: logfile=expdpTEST.log
;;; parfile: dumpfile=TEST.dmp
;;; parfile: directory=TEST_DIR
;;; parfile: schemas=TEST
;;; parfile: userid=test/********@pdb1
;;;
**************************************************************************
Starting "TEST"."SYS_EXPORT_SCHEMA_01":
test/********@pdb1 parfile=/tmp/parfile.txt
$
REMAP_DIRECTORY
The REMAP_DIRECTORY parameter is a variant of the REMAP_DATAFILE
parameter, which allows you to remap the paths of multiple datafile
references in a single shot. The REMAP_DIRECTORY and REMAP_DATAFILE
parameters are mutually exclusive.
The basic syntax is a search and replace, with the documentation
recommending the inclusion of terminators and double quotes. Since
these would have to be escaped at the command line, it's easier to
view the setting as it would appear in a parameter file.
REMAP_DIRECTORY="'/u02/oradata/cdb1/pdb1/':'/u02/oradata/cdb3/pdb3/'"
You can see it in the
context of a full parameter file below.
cat > /tmp/parfile.txt <<EOF
USERID=system/OraPasswd1@pdb1
schemas=TEST
directory=TEST_DIR
dumpfile=TEST.dmp
logfile=impdpTEST.log
remap_directory="'/u02/oradata/cdb1/pdb1/':'/u02/oradata/cdb3/pdb3/'"
EOF
impdp parfile=/tmp/parfile.txt
This parameter is not present in the impdp -help usage information.
Changes on Statistics
Gathering Statistics Automatically
Oracle is now able to gather statistics during some bulk
operations, making it unnecessary to gather statistics
subsequently. This means extra table scans for statistics
collection are unnecessary and there is less chance that you
will forget to gather statistics after bulk loads. This works
for:
- CREATE TABLE ... AS SELECT (CTAS)
- INSERT INTO ... SELECT
- Partitioned Tables (A bulk load into an empty
partitioned table will trigger the gathering of global
statistics, but no partition-level statistics)
Example:
CREATE TABLE tab1 AS
SELECT level AS id,
       'Description of ' || level AS description
FROM   dual
CONNECT BY level <= 1000;
COLUMN table_name FORMAT A20
SELECT table_name,
num_rows
FROM user_tables
WHERE table_name = 'TAB1';
TABLE_NAME             NUM_ROWS
-------------------- ----------
TAB1                       1000
Restrictions
There are a number of restrictions associated with online
statistics gathering. They are not gathered for:
- Index statistics or histograms. If those are
required, they must be gathered in a separate operation. The
following call will gather missing statistics, but will
not re-gather table or column statistics unless the
existing statistics are already stale.
- EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TAB1',
options => 'GATHER AUTO');
- Inserts into non-empty segments; only bulk loads into
empty segments qualify.
- Tables in built-in schemas. Only those in
user-defined schemas.
- Nested, index-organized or external tables.
- Global temporary tables using the ON COMMIT DELETE
ROWS clause.
- Tables with virtual columns.
- Tables if the PUBLISH preference is set to FALSE
for DBMS_STATS.
- Tables with locked statistics.
- Partitioned tables using incremental statistics,
where the insert is not explicitly referencing a partition
using the PARTITION clause.
- Tables loaded using multitable inserts.
Gathering statistics
concurrently on multiple tables
In previous Oracle database releases, whenever you executed a
DBMS_STATS procedure to gather table, index, schema or database
level statistics, Oracle collected stats on one table at a time. If
the table was big enough, increasing the parallelism was
recommended.
With 12c R1, you can now collect stats on multiple tables,
partitions and sub partitions concurrently. Before you start
using it, you must set the following at the database level to
enable the feature:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='DEFAULT_MAINTENANCE_PLAN';
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=4;
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
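You can confirm the preference took effect with DBMS_STATS.GET_PREFS:
SELECT DBMS_STATS.get_prefs('CONCURRENT') AS concurrent_setting FROM dual;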
Optimizer Statistics Advisor
AUTO_STATS_ADVISOR_TASK
The Optimizer Statistics Advisor runs a task called
AUTO_STATS_ADVISOR_TASK during the maintenance window, so by
default you don't need to do anything to start seeing advice about
your database statistics.
Like other advisors, we use the DBA_ADVISOR_EXECUTIONS view to
identify the executions of interest.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER= pdb1;
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
SET LINESIZE 150
COLUMN task_name FORMAT A25
COLUMN execution_name FORMAT A20
COLUMN execution_end FORMAT A20
COLUMN execution_type FORMAT A20
SELECT task_name,
execution_name,
execution_start,
execution_end,
execution_type,
status
FROM dba_advisor_executions
WHERE task_name = 'AUTO_STATS_ADVISOR_TASK'
AND execution_end >= SYSDATE-2
ORDER BY 3;
TASK_NAME EXECUTION_NAME EXECUTION_START EXECUTION_END EXECUTION_TYPE STATUS
------------------------- -------------------- -------------------- -------------------- -------------------- -----------
AUTO_STATS_ADVISOR_TASK EXEC_42 23-AUG-2017 23:00:15 23-AUG-2017 23:00:20 STATISTICS COMPLETED
AUTO_STATS_ADVISOR_TASK EXEC_52 24-AUG-2017 23:00:27 24-AUG-2017 23:00:40 STATISTICS COMPLETED
SQL>
We can use this
information to view the advice from the automatic runs, as shown
below.
Generate a Report
Since the Optimizer Statistics Advisor uses the normal advisor
framework, the results of the advisor executions can be viewed in
the {CDB|DBA|ALL|USER}_ADVISOR_% views, but there is an easier way
to get the results. An Optimizer Statistics Advisor report is
generated using the REPORT_ADVISOR_TASK function in the DBMS_STATS
package. It accepts the following parameters.
- TASK_NAME : Name of the task.
- EXECUTION_NAME : Name of the execution. When not specified the
latest execution for the task is used. Default NULL.
- TYPE : Allowable values are TEXT, HTML or XML. Default TEXT.
- SECTION : Allowable values are SUMMARY, FINDINGS, ERRORS or ALL.
Combinations are possible using "+" or "-" qualifiers. Default ALL.
- LEVEL : Allowable values are BASIC, TYPICAL, ALL or SHOW_HIDDEN.
Combinations are possible using "+" or "-" qualifiers. Default
TYPICAL.
The following query displays the default report for the latest
execution of the AUTO_STATS_ADVISOR_TASK task.
SET LINESIZE 200
SET LONG 1000000
SET PAGESIZE 0
SET LONGCHUNKSIZE 100000
SELECT DBMS_STATS.report_advisor_task('AUTO_STATS_ADVISOR_TASK') AS REPORT
FROM dual;
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : AUTO_STATS_ADVISOR_TASK
Execution Name : EXEC_52
Created : 01-26-17 02:43:30
Last Modified : 08-24-17 11:00:40
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_52 of task AUTO_STATS_ADVISOR_TASK, the Statistics Advisor
has 1 finding(s). The findings are related to the following rules:
AVOIDSTALESTATS. Please refer to the finding section for detailed information.
-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with stale statistics.
Schema:
CTXSYS
Objects:
DR$DBO
Recommendation: Regather statistics on objects with stale statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
----------------------------------------------------
-------------------------------------------------------------------------------
SQL>
Implement Advisor Recommendations
The Optimizer Statistics Advisor recommendations can be implemented
directly using the IMPLEMENT_ADVISOR_TASK function in the
DBMS_STATS package. It accepts the following parameters.
- TASK_NAME : Name of the task.
- EXECUTION_NAME : Name of the execution. When not specified the
latest execution for the task is used. Default NULL.
- LEVEL : Allowable values are ALL or TYPICAL. Default TYPICAL.
The following code
implements the recommendations and displays the result.
VARIABLE v_result CLOB
-- Implement recommendations.
DECLARE
l_task_name VARCHAR2(32767);
BEGIN
l_task_name := 'AUTO_STATS_ADVISOR_TASK';
:v_result := DBMS_STATS.implement_advisor_task(l_task_name);
END;
/
-- Display results.
SET LONG 10000
SELECT XMLTYPE(:v_result) AS results
FROM dual;
RESULTS
--------------------------------------------------------------------------------
<implementation_results>
<rule NAME="AVOIDSTALESTATS">
<implemented>yes</implemented>
</rule>
</implementation_results>
SQL>
We can see the
statistics have just been analyzed.
COLUMN last_analyzed FORMAT A20
COLUMN stale_stats FORMAT A15
SELECT last_analyzed,
stale_stats
FROM dba_tab_statistics
WHERE owner = 'CTXSYS'
AND table_name = 'DR$DBO';
LAST_ANALYZED STALE_STATS
-------------------- ---------------
25-AUG-2017 16:11:31 NO
SQL>
Alternatively the Optimizer Statistics Advisor recommendations can
be turned into a script using the SCRIPT_ADVISOR_TASK function. It
accepts the following parameters.
- TASK_NAME : Name of the task.
- EXECUTION_NAME : Name of the execution. When not specified the
latest execution for the task is used. Default NULL.
- DIR_NAME : If the directory name is specified, the function
returns the script as a CLOB and writes it to a file in the
specified directory. If not specified, the script is only present
in the return value of the function. Default NULL.
- LEVEL : Allowable values are ALL or TYPICAL. Default TYPICAL.
The following query displays the default script for the latest
execution of the AUTO_STATS_ADVISOR_TASK task.
SET LINESIZE 200
SET LONG 1000000
SET PAGESIZE 0
SET LONGCHUNKSIZE 100000
SELECT DBMS_STATS.script_advisor_task('AUTO_STATS_ADVISOR_TASK')
FROM dual;
-- Script generated for the recommendations from execution EXEC_52
-- in the statistics advisor task AUTO_STATS_ADVISOR_TASK
-- Script version 12.2
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USECONCURRENT
-- Rule Description: Use Concurrent preference for Statistics Collection
-- No scripts will be provided for the rule USEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule COMPLETEAUTOJOB.Please check the report for more details.
-- No scripts will be provided for the rule MAINTAINSTATSHISTORY.Please check the report for more details.
-- No scripts will be provided for the rule TURNONSQLPLANDIRECTIVE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDSETPROCEDURES.Please check the report for more details.
-- No scripts will be provided for the rule USEDEFAULTPARAMS.Please check the report for more details.
-- No scripts will be provided for the rule USEGATHERSCHEMASTATS.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDINEFFICIENTSTATSOPRSEQ.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDUNNECESSARYSTATSCOLLECTION.Please check the report for more details.
-- No scripts will be provided for the rule GATHERSTATSAFTERBULKDML.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDDROPRECREATE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDOUTOFRANGE.Please check the report for more details.
-- No scripts will be provided for the rule AVOIDANALYZETABLE.Please check the report for more details.
-- Scripts for rule USEDEFAULTPREFERENCE
-- Rule Description: Use Default Preference for Stats Collection
-- Set global preferenes to default values.
-- Scripts for rule USEDEFAULTOBJECTPREFERENCE
-- Rule Description: Use Default Object Preference for statistics collection
-- Setting object-level preferences to default values
-- setting CASCADE to default value for object level preference
-- setting ESTIMATE_PERCENT to default value for object level preference
-- setting METHOD_OPT to default value for object level preference
-- setting GRANULARITY to default value for object level preference
-- setting NO_INVALIDATE to default value for object level preference
-- Scripts for rule USEINCREMENTAL
-- Rule Description: Statistics should be maintained incrementally when it is beneficial
-- Turn on the incremental option for those objects for which using incremental is helpful.
-- Scripts for rule UNLOCKNONVOLATILETABLE
-- Rule Description: Statistics for objects with non-volatile should not be locked
-- Unlock statistics for objects that are not volatile.
-- Scripts for rule LOCKVOLATILETABLE
-- Rule Description: Statistics for objects with volatile data should be locked
-- Lock statistics for volatile objects.
-- Scripts for rule NOTUSEINCREMENTAL
-- Rule Description: Statistics should not be maintained incrementally when it is not beneficial
-- Turn off incremental option for those objects for which using incremental is not helpful.
-- Scripts for rule USEAUTODEGREE
-- Rule Description: Use Auto Degree for statistics collection
-- Turn on auto degree for those objects for which using auto degree is helpful.
-- Scripts for rule AVOIDSTALESTATS
-- Rule Description: Avoid objects with stale or no statistics
-- Gather statistics for those objcts that are missing or have no statistics.
-- Scripts for rule MAINTAINSTATSCONSISTENCY
-- Rule Description: Statistics of dependent objects should be consistent
-- Gather statistics for those objcts that are missing or have no statistics.
declare
obj_filter_list dbms_stats.ObjectTab;
obj_filter dbms_stats.ObjectElem;
obj_cnt number := 0;
begin
obj_filter_list := dbms_stats.ObjectTab();
obj_filter.ownname := 'CTXSYS';
obj_filter.objtype := 'TABLE';
obj_filter.objname := 'DR$DBO';
obj_filter_list.extend();
obj_cnt := obj_cnt + 1;
obj_filter_list(obj_cnt) := obj_filter;
dbms_stats.gather_database_stats(
obj_filter_list=>obj_filter_list);
end;
/
SQL>
Manually Execute the Advisor
There are a number of procedures
and functions in the DBMS_STATS package that
allow the advisor to be run manually, some of which are
demonstrated below.
We create and populate a
table, making sure there are no statistics. This will give us
something for the advisor to find.
CONN test/test@pdb1
DROP TABLE t1 PURGE;
CREATE TABLE t1 (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT t1_pk PRIMARY KEY (id)
);
CREATE INDEX t1_desc ON t1(description);
INSERT INTO t1
SELECT level,
'Description for ' || level
FROM dual
CONNECT BY level <= 10000;
COMMIT;
EXEC DBMS_STATS.delete_table_stats('TEST', 'T1');
We create a new task and execute it using the CREATE_ADVISOR_TASK
and EXECUTE_ADVISOR_TASK functions respectively. These optionally
accept TASK_NAME and EXECUTION_NAME parameters. If they aren't
specified, system-generated names are created. We then generate the
report associated with this task.
CONN / AS SYSDBA
ALTER SESSION SET CONTAINER=pdb1;
SET SERVEROUTPUT ON
DECLARE
l_task_name VARCHAR2(32767);
l_exec_name VARCHAR2(32767);
l_report CLOB;
BEGIN
l_task_name := DBMS_STATS.create_advisor_task;
l_exec_name := DBMS_STATS.execute_advisor_task(task_name => l_task_name);
l_report := DBMS_STATS.report_advisor_task(
task_name => l_task_name,
execution_name => l_exec_name);
DBMS_OUTPUT.put_line(l_report);
END;
/
GENERAL INFORMATION
-------------------------------------------------------------------------------
Task Name : TASK_16
Execution Name : EXEC_75
Created : 08-26-17 09:42:24
Last Modified : 08-26-17 09:42:35
-------------------------------------------------------------------------------
SUMMARY
-------------------------------------------------------------------------------
For execution EXEC_75 of task
TASK_16, the Statistics Advisor has 2
finding(s). The findings are related to the following rules: USEDEFAULTPARAMS,
AVOIDSTALESTATS. Please refer to the finding section for detailed
information.
-------------------------------------------------------------------------------
FINDINGS
-------------------------------------------------------------------------------
Rule Name:
UseDefaultParams
Rule Description: Use Default Parameters in Statistics Collection Procedures
Finding: There are 1 statistics operation(s) using
nondefault parameters.
Operation:
delete_table_stats(ownname=>'TEST', tabname=>'T1', cascade_parts=>TRUE, cascade_columns=>TRUE, cascade_indexes=>TRUE, stattype=>'ALL');
Recommendation: Use default parameters for statistics
operations.
Example:
-- Gathering statistics for 'SH' schema using all default parameter values:
BEGIN
dbms_stats.gather_schema_stats('SH'); END;
-- Also the non default parameters can be overriden by setting
'PREFERENCE_OVERRIDES_PARAMETER' preference.
-- Overriding non default parameters and preferences for all tables in the
system and to use dbms_stats for gathering statistics:
begin
dbms_stats.set_global_prefs('PREFERENCE_OVERRIDES_PARAMETER', 'TRUE');
end;
-- Overriding non default parameters and
preferences for 'SH.SALES':
begin dbms_stats.set_table_prefs('SH','SALES',
'PREFERENCE_OVERRIDES_PARAMETER', 'TRUE'); end;
Rationale: Using default parameter values for statistics gathering operations
is more efficient.
----------------------------------------------------
Rule Name: AvoidStaleStats
Rule Description: Avoid objects with stale or no statistics
Finding: There are 1 object(s) with no statistics.
Schema:
TEST
Objects:
T1
Recommendation: Gather Statistics on those objects with no statistics.
Example:
-- Gathering statistics for tables with stale or no statistics in schema, SH:
exec
dbms_stats.gather_schema_stats('SH', options => 'GATHER AUTO')
Rationale: Stale statistics or no statistics will result in bad plans.
----------------------------------------------------
-------------------------------------------------------------------------------
PL/SQL procedure successfully completed.
SQL>
We can see the advisor
has noticed the table with no statistics. We could now implement
the recommendations as we have done previously.
Real-time ADDM analysis
Analyzing past and current database health through a set of
automatic diagnostic tools such as AWR, ASH and ADDM is part of
every DBA's life. Though each individual tool can be used at
various levels to measure the database's overall health and
performance, none of them can be used when the database is
unresponsive or totally hung.
When you encounter an unresponsive database or a hung state, and
you have Oracle Enterprise Manager 12c Cloud Control configured,
you can still diagnose serious performance issues. This gives you a
good picture of what's currently going on in the database, and
might also provide a remedy to resolve the issue.
The following step-by-step procedure demonstrates how to analyze
the situation in Oracle EM 12c Cloud Control:
- Select Database Home → Performance → Emergency Monitoring. This
shows the top blocking sessions in the Hang Analysis table.
- Select Database Home → Performance → Real-Time ADDM to perform
Real-Time ADDM analysis.
- After collecting the performance data, click on the Findings tab
to get an interactive summary of all the findings.
Real-time ADDM reports
are generated using the "rtaddmrpt.sql" and "rtaddmrpti.sql"
scripts in the "$ORACLE_HOME/rdbms/admin" directory.
The "rtaddmrpt.sql"
script assumes the report is for the current database and
instance, and uses a reporting period of the -60 mins to current
time (SYSDATE). This sounds a little like a conventional ADDM
report, but the information is sourced from the DBA_HIST_REPORTS
view,
not the conventional ADDM analysis. An example of running the
"rtaddmrpt.sql" script is shown below. Only the report_id
and report_name
required
user input.
@rtaddmrpt
Current Database
~~~~~~~~~~~~~~~~
845557242
Instances in this Report reposistory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
845557242 1
Default to current database
Using database id: 845557242
Enter begin time for report:
-- Valid input formats:
-- To specify absolute begin time:
-- [MM/DD[/YY]] HH24:MI[:SS]
-- Examples: 02/23/03 14:30:15
-- 02/23 14:30:15
-- 14:30:15
-- 14:30
-- To specify relative begin time: (start with '-' sign)
-- -[HH24:]MI
-- Examples: -1:15 (SYSDATE - 1 Hr 15 Mins)
-- -25 (SYSDATE - 25 Mins)
Default to -60 mins
Report begin time specified:
Enter duration in minutes starting from begin time:
Defaults to SYSDATE - begin_time
Press Enter to analyze till current time
Report duration specified:
Using 20/07/2015 09:39:41 as report begin time
Using 20/07/2015 10:39:41 as report end time
Report ids in this workload repository.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
845557242 1922 20/07/2015 10:34:29 High Load 20.16
Select a report id from the list. If the report id list is empty,
please select a different begin time and end time.
Enter value for report_id: 1922
Report id specified : 1922
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is rtaddmrpt_0720_1039.html. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: /tmp/rtaddmrpt_0720_1039.html
Using the report name /tmp/rtaddmrpt_0720_1039.html
... Removed HTML Output ...
Report written to /tmp/rtaddmrpt_0720_1039.html
The "rtaddmrpti.sql"
script allows you to manually select the database and reporting
period. The items in bold required user input.
SQL> @rtaddmrpti
Instances in this Report reposistory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Db Id Inst Num
---------- --------
845557242 1
Default to current database
Enter value for dbid:
Using database id: 845557242
Enter begin time for report:
-- Valid input formats:
-- To specify absolute begin time:
-- [MM/DD[/YY]] HH24:MI[:SS]
-- Examples: 02/23/03 14:30:15
-- 02/23 14:30:15
-- 14:30:15
-- 14:30
-- To specify relative begin time: (start with '-' sign)
-- -[HH24:]MI
-- Examples: -1:15 (SYSDATE - 1 Hr 15 Mins)
-- -25 (SYSDATE - 25 Mins)
Default to -60 mins
Enter value for begin_time: -5
Report begin time specified: -5
Enter duration in minutes starting from begin time:
Defaults to SYSDATE - begin_time
Press Enter to analyze till current time
Enter value for duration:
Report duration specified:
Using 20/07/2015 10:29:46 as report begin time
Using 20/07/2015 10:34:47 as report end time
Report ids in this workload repository.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DBID REPORT_ID TIME trigger_cause impact
---------- --------- -------------------- ------------------------- ----------
845557242 1922 20/07/2015 10:34:29 High Load 20.16
Select a report id from the list. If the report id list is empty,
please select a different begin time and end time.
Enter value for report_id: 1922
Report id specified : 1922
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is rtaddmrpt_0720_1034.html. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: /tmp/rtaddmrpt_0720_1034.html
Using the report name /tmp/rtaddmrpt_0720_1034.html
... Removed HTML Output ...
Report written to /tmp/rtaddmrpt_0720_1034.html
DBMS_ADDM.REAL_TIME_ADDM_REPORT Function
The DBMS_ADDM package contains a function called
REAL_TIME_ADDM_REPORT, which looks promising at first glance.
Unfortunately, it doesn't return a formatted real-time ADDM report,
just some data in XML format. You can see the output using the
following query.
SET LONG 1000000 LONGCHUNKSIZE 1000000
SET LINESIZE 1000 PAGESIZE 0
SET TRIM ON TRIMSPOOL ON
SET ECHO OFF FEEDBACK OFF
SELECT DBMS_ADDM.real_time_addm_report FROM dual;
Truncate table CASCADE
In previous releases, there was no direct option to truncate a
master table while it was referenced by child tables containing
records. The TRUNCATE TABLE with CASCADE option in 12c truncates
the rows in the master table and automatically issues a recursive
truncate against the child tables too, provided the foreign keys
are defined with ON DELETE CASCADE. There is no cap on the number
of recursive levels; the truncate applies to all child, grandchild,
great-grandchild tables and so on.
This enhancement removes the prerequisite of truncating all child
records before truncating a master table. The new CASCADE clause
can also be applied to table partitions and sub-partitions.
-- Create the test tables using ON DELETE CASCADE
DROP TABLE t3 PURGE;
DROP TABLE t2 PURGE;
DROP TABLE t1 PURGE;
CREATE TABLE t1 (
  id          NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t1_pk PRIMARY KEY (id)
);

CREATE TABLE t2 (
  id          NUMBER,
  t1_id       NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t2_pk PRIMARY KEY (id),
  CONSTRAINT t2_t1_fk FOREIGN KEY (t1_id)
    REFERENCES t1 (id) ON DELETE CASCADE
);

CREATE TABLE t3 (
  id          NUMBER,
  t2_id       NUMBER,
  description VARCHAR2(50),
  CONSTRAINT t3_pk PRIMARY KEY (id),
  CONSTRAINT t3_t2_fk FOREIGN KEY (t2_id)
    REFERENCES t2 (id) ON DELETE CASCADE
);
Since the foreign key
columns are optional, rows in the dependent tables can have a null
value and not be part of the relationship.
We'll take advantage of this and insert some data for each table.
Notice that T2 and T3 both have a row that relates back to their
respective parent and a row with a null value in the foreign key
column.
-- Insert a data into each table.
INSERT INTO t1 VALUES (1, 't1 ONE');
INSERT INTO t2 VALUES (1, 1, 't2 ONE');
INSERT INTO t2 VALUES (2, NULL, 't2 TWO');
INSERT INTO t3 VALUES (1, 1, 't3 ONE');
INSERT INTO t3 VALUES (2, NULL, 't3 TWO');
COMMIT;
-- Check the contents of the tables.
SELECT (SELECT COUNT(*) FROM t1) AS t1_count,
       (SELECT COUNT(*) FROM t2) AS t2_count,
       (SELECT COUNT(*) FROM t3) AS t3_count
FROM   dual;

  T1_COUNT   T2_COUNT   T3_COUNT
---------- ---------- ----------
         1          2          2
DELETE ... [CASCADE]
The presence of the ON DELETE CASCADE relationships allows us to
delete from any of the tables, with any dependent child records
deleted automatically. The CASCADE keyword in the following delete
example is not really necessary, but it's good to use it to remind
any other developers that you are expecting a recursive delete.
DELETE FROM t1 CASCADE;
-- Check the contents of the tables.
SELECT (SELECT COUNT(*) FROM t1) AS t1_count,
       (SELECT COUNT(*) FROM t2) AS t2_count,
       (SELECT COUNT(*) FROM t3) AS t3_count
FROM   dual;

  T1_COUNT   T2_COUNT   T3_COUNT
---------- ---------- ----------
         0          1          1
Notice the rows with
null values in the foreign key columns are not deleted, as
strictly speaking they were not orphaned by the initial deletion.
TRUNCATE TABLE <table_name> CASCADE;
TRUNCATE TABLE <table_name> PARTITION <partition_name> CASCADE;
The TRUNCATE TABLE ... CASCADE command
succeeds and recursively truncates all the dependent tables.
Both the DELETE CASCADE and the TRUNCATE CASCADE fail if
any of the relationships in the hierarchy are not defined with
the ON DELETE CASCADE clause.
TRUNCATE TABLE t1 CASCADE;
-- Check the contents of the tables.
SELECT (SELECT COUNT(*) FROM t1) AS t1_count,
       (SELECT COUNT(*) FROM t2) AS t2_count,
       (SELECT COUNT(*) FROM t3) AS t3_count
FROM   dual;

  T1_COUNT   T2_COUNT   T3_COUNT
---------- ---------- ----------
         0          0          0
ROW limiting for Top-N result queries
In previous releases, various indirect approaches had to be used to
fetch Top-N query results for the top or bottom rows.
In 12c, retrieving Top-N query results is simplified and
straightforward with the new FETCH FIRST | NEXT | PERCENT clauses.
The syntax is:
[ OFFSET offset { ROW | ROWS } ]
[ FETCH { FIRST | NEXT } [ { rowcount | percent PERCENT } ]
  { ROW | ROWS } { ONLY | WITH TIES } ]
For this example we will use the following table:
DROP TABLE rownum_order_test;
CREATE TABLE rownum_order_test (
val NUMBER);
INSERT ALL
INTO rownum_order_test
INTO rownum_order_test
SELECT level
FROM dual
CONNECT BY level <= 10;
COMMIT;
The following query shows we have 20 rows with 10 distinct values.
SELECT val FROM rownum_order_test ORDER BY val;
VAL
----------
1
1
2
2
3
3
4
4
5
5
6
6
7
7
8
8
9
9
10
10
The classic Top-N query becomes very simple. The example below
returns the 5 largest values from an ordered set. Using the ONLY
clause limits the number of rows returned to the exact number
requested.
SELECT val
FROM rownum_order_test
ORDER BY val DESC
FETCH FIRST 5 ROWS ONLY;
VAL
----------
10
10
9
9
8
Using the WITH TIES clause may result in more rows being returned
if multiple rows match the value of the Nth row. In this case the
5th row has the value "8", but there are two rows that tie for 5th
place, so both are returned.
SELECT val
FROM rownum_order_test
ORDER BY val DESC
FETCH FIRST 5 ROWS WITH TIES;
VAL
----------
10
10
9
9
8
8
In addition to limiting by row count, the row limiting clause also
allows us to limit by percentage of rows. The following query
returns the bottom 20% of rows.
SELECT val
FROM rownum_order_test
ORDER BY val
FETCH FIRST 20 PERCENT ROWS ONLY;
VAL
----------
1
1
2
2
Paging Through Data
Paging through an ordered resultset was a little annoying using
the classic Top-N query approach, as it required two Top-N
queries, one nested inside the other. For example, if we wanted
the second block of 4 rows we might do the following.
SELECT val
FROM   (SELECT val, rownum AS rnum
        FROM   (SELECT val
                FROM   rownum_order_test
                ORDER BY val)
        WHERE  rownum <= 8)
WHERE  rnum >= 5;
VAL
----------
3
3
4
4
With the row limiting clause we can achieve the same result using
the following query.
SELECT val
FROM rownum_order_test
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 4 ROWS ONLY;
VAL
----------
3
3
4
4
The starting point for the FETCH is OFFSET+1.
The OFFSET is always based on a number of rows, but this can be
combined with a FETCH using a PERCENT.
SELECT val
FROM   rownum_order_test
ORDER BY val
OFFSET 4 ROWS FETCH NEXT 20 PERCENT ROWS ONLY;
VAL
----------
3
3
4
4
Not surprisingly, the offset, rowcount and percent can use bind
variables.
VARIABLE v_offset NUMBER;
VARIABLE v_next NUMBER;
BEGIN
:v_offset := 4;
:v_next := 4;
END;
/
SELECT val
FROM rownum_order_test
ORDER BY val
OFFSET :v_offset ROWS FETCH NEXT :v_next ROWS ONLY;
VAL
----------
3
3
4
4
More Examples:
The following example also fetches all rows that tie with the Nth
row. For example, if the 10th row has a salary of 5000 and other
employees have the same salary, they will also be fetched when the
WITH TIES clause is specified.
SELECT eno,ename,sal FROM emp ORDER BY sal DESC FETCH FIRST 10 ROWS WITH TIES;
The following example limits the fetch to the top 10 percent of
salaries in the EMP table:
SELECT eno,ename,sal FROM emp ORDER BY sal DESC FETCH FIRST 10 PERCENT ROWS ONLY;
The following example offsets the first 5 rows and will display
the next 5 rows from the table:
SELECT eno,ename,sal FROM emp ORDER BY SAL DESC OFFSET
5 ROWS FETCH NEXT 5 ROWS ONLY;
All these limits can also be used within PL/SQL blocks, for example:
DECLARE
  TYPE sal_tab_t IS TABLE OF emp.sal%TYPE;
  sal_v sal_tab_t;
BEGIN
  SELECT sal BULK COLLECT INTO sal_v
  FROM   emp
  FETCH FIRST 100 ROWS ONLY;
END;
/
Miscellaneous
SQL*Plus enhancements
Implicit Results on SQL*Plus
SQL*Plus in 12c can return the results from an implicit cursor of a
PL/SQL block without binding it to a ref cursor on the client. The
new DBMS_SQL.RETURN_RESULT procedure returns and formats the results
of a SELECT statement executed within the PL/SQL block. The
following code describes the usage:
CREATE PROCEDURE mp1
as
res1 sys_refcursor;
BEGIN
open res1 for
SELECT eno,ename,sal FROM emp;
dbms_sql.return_result(res1);
END;
/
execute mp1;
When the procedure is executed, it returns the formatted rows to
SQL*Plus.
Display invisible columns
When columns are defined as invisible, they are not displayed when
you describe the table structure. However, you can display
information about the invisible columns by setting the following at
the SQL*Plus prompt:
SET COLINVISIBLE ON|OFF
The above setting is only valid for the DESCRIBE command. It has no
effect on SELECT statement results for the invisible columns.
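As a minimal sketch of the behaviour (the table name invis_demo is illustrative, not from the original text):
CREATE TABLE invis_demo (
  id       NUMBER,
  hidden_c NUMBER INVISIBLE
);
-- DESCRIBE hides the column until COLINVISIBLE is enabled.
SET COLINVISIBLE ON
DESC invis_demo
-- An invisible column must be named explicitly to appear in query results.
SELECT id, hidden_c FROM invis_demo;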
Session level sequences
A new SESSION level sequence can now be created in 12c to support
session level sequence values. These sequences are most useful and
suitable for global temporary tables that have session level
existence.
Session level sequences produce a unique range of values that is
limited to the session, not shared across sessions. Once the
session ends, the state of the session sequence also goes away.
The following example shows how to create a session level sequence:
CREATE SEQUENCE my_seq START WITH 1 INCREMENT BY 1 SESSION;
ALTER SEQUENCE my_seq GLOBAL|SESSION;
The CACHE, NOCACHE, ORDER or NOORDER clauses are ignored for
SESSION level sequences.
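As a minimal sketch of how a session sequence pairs with a global temporary table (the object names gtt_demo and gtt_seq are illustrative):
CREATE GLOBAL TEMPORARY TABLE gtt_demo (
  id  NUMBER,
  val VARCHAR2(30)
) ON COMMIT PRESERVE ROWS;

CREATE SEQUENCE gtt_seq SESSION;

-- Each session gets its own private progression of values.
INSERT INTO gtt_demo VALUES (gtt_seq.NEXTVAL, 'First row in this session');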
WITH clause improvements
In 12c, you can define PL/SQL functions and procedures directly in
the WITH clause of a SQL statement, which can run faster than
calling equivalent stored program units. The following example
demonstrates how to declare a function within the WITH clause:
WITH
  FUNCTION get_double (p_num NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_num * 2;
  END;
SELECT get_double(10) FROM dual;
/
Although you can't use this form of the WITH clause directly inside
a PL/SQL unit, it can be referenced through dynamic SQL within that
PL/SQL unit.
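A minimal sketch of that workaround (function and variable names are illustrative; requires SET SERVEROUTPUT ON to see the output):
DECLARE
  l_result NUMBER;
BEGIN
  EXECUTE IMMEDIATE q'[
    WITH
      FUNCTION get_double (p_num NUMBER) RETURN NUMBER IS
      BEGIN
        RETURN p_num * 2;
      END;
    SELECT get_double(21) FROM dual]'
    INTO l_result;
  DBMS_OUTPUT.put_line(l_result);   -- prints 42
END;
/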
Approx Count
The function provides an approximate distinct count, as the name
implies. It performs much faster on large data sets, and the result
is close to (but not necessarily exactly the same as) a
COUNT(DISTINCT).
This example uses the new APPROX_COUNT_DISTINCT function to get an
approximate number of distinct values from the same table.
SELECT APPROX_COUNT_DISTINCT(first_name)
FROM student;
Extended data types
Prior to Oracle 12c, regardless of the character semantics used,
the maximum size of VARCHAR2, NVARCHAR2 and RAW columns in a
database was as follows.
VARCHAR2 : 4000 bytes
NVARCHAR2 : 4000 bytes
RAW : 2000 bytes
With the introduction of Extended Data Types, Oracle 12c
optionally increases these maximum sizes.
VARCHAR2 : 32767 bytes
NVARCHAR2 : 32767 bytes
RAW : 32767 bytes
The following procedure needs to be run to enable the extended data
types:
- Shutdown the database
- Restart the database in UPGRADE mode
- Modify the parameter: ALTER SYSTEM SET
MAX_STRING_SIZE=EXTENDED;
- Execute utl32k.sql as sysdba : SQL>
@?/rdbms/admin/utl32k.sql
- Shutdown the database
- Restart the database in READ WRITE mode
In contrast to explicit LOB data types, extended data type columns
in ASSM-managed tablespaces are stored as SecureFiles LOBs, and in
non-ASSM tablespaces they are stored as BasicFiles LOBs.
Note: Once modified, you can't change the setting back to
STANDARD.
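As a small illustration, a minimal sketch assuming MAX_STRING_SIZE has already been set to EXTENDED as described above (the table name extended_demo is illustrative):
CREATE TABLE extended_demo (
  long_text VARCHAR2(32767)
);

-- A value larger than the old 4000-byte limit is now accepted.
INSERT INTO extended_demo VALUES (RPAD('X', 10000, 'X'));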
RMAN 12c New
Features
Recover a Table or a Partition
Oracle database backups are mainly categorized into two types:
logical and physical. Each backup type has its own pros and cons.
In previous releases it was not feasible to restore a table or
partition directly from existing physical backups; to restore a
particular object you had to have a logical backup.
With 12c R1, you can recover a particular table or partition to a
point in time or SCN from RMAN backups in the event of a table drop
or truncate, without having to manually restore and recover a full
auxiliary database yourself.
When a table or partition recovery is initiated via RMAN, the
following actions are performed:
- The backup sets required to recover the table/partition are identified
- An auxiliary database is temporarily configured to a point in time
  as part of the recovery process
- The required tables/partitions are then exported to a dump file
  using Data Pump
- Optionally, the tables/partitions are imported into the source database
- Optionally, the table can be renamed during recovery
An example of a table point-in-time recovery via RMAN (ensure you
already have a full database backup from earlier):
RMAN> connect target "username/password as SYSBACKUP";
RMAN> RECOVER TABLE username.tablename UNTIL TIME '2015-07-02 09:33:00' (or UNTIL SCN xyz)
        AUXILIARY DESTINATION '/u01/tablerecovery'
        DATAPUMP DESTINATION '/u01/dpump'
        DUMP FILE 'tablename.dmp'
        NOTABLEIMPORT;   -- this option avoids importing the table automatically
You can also restore a table to another table:
RMAN> RECOVER TABLE username.tablename UNTIL TIME '2015-07-02 09:33:00' (or UNTIL SCN xyz)
        AUXILIARY DESTINATION '/u01/tablerecovery'
        REMAP TABLE 'username.tablename':'username.new_table_name';   -- the table can be renamed with this option
Important notes:
- Ensure sufficient free space is available under the /u01 filesystem
  for the auxiliary database and for the Data Pump file
- A full database backup must exist, or at least backups of the
  SYSTEM-related tablespaces
The following limitations/restrictions apply to table/partition
recovery in RMAN:
- SYS user tables/partitions can't be recovered
- Tables/partitions stored in the SYSAUX and SYSTEM tablespaces
  can't be recovered
- Recovery of a table is not possible when the REMAP option is used
  to recover a table that contains NOT NULL constraints
How does table recovery work?
RMAN uses previously taken database backups to recover a table or
table partitions to a specified time. You need to provide the
following inputs to RMAN:
- The table or partition names to be recovered
- The point in time until which the recovery should be done
- Whether the recovered tables should be imported back into the
  original database
These are the steps executed "behind the scenes":
- RMAN determines the correct backup based on your inputs
- RMAN creates the auxiliary instance
- RMAN restores the controlfile
- RMAN restores the files required to obtain the older image of the
  table. This might include the SYSTEM, SYSAUX and UNDO tablespaces,
  plus the tablespace that holds the required table
- RMAN recovers the table or table partitions in the auxiliary
  instance until the specified time
- RMAN creates a Data Pump export dump file containing the recovered
  objects
- RMAN imports the recovered objects into the original database
Recovery Point-in-Time Options
You can recover a table or table partition to a past point
in time by specifying one of the following three clauses:
UNTIL SCN – The system change number (SCN)
UNTIL TIME – Time in NLS_DATE_FORMAT environment variable
UNTIL SEQUENCE – The log sequence number and thread number
select current_scn from v$database;
RMAN> recover table advaitd.t1 until scn 1243251
auxiliary destination '/u01/app/oracle/oradata/aux';
Let's see what this command does exactly.
Step 1) The command creates and mounts a new ad hoc dummy instance.
It uses the controlfile autobackup to restore the controlfile used
to mount the dummy instance.
Step 2) It then restores the controlfile for the auxiliary instance.
The memory script generated by RMAN shows the commands for restoring
the controlfile.
Step 3) Next it restores the tablespaces required to perform
tablespace PITR.
Step 4) It recovers the datafiles up to the required SCN.
Step 5) It creates an SPFILE for the auxiliary instance and mounts
the database again.
Step 6) It restores the USERS tablespace where the table belongs.
Step 7) It recovers the auxiliary database up to the required SCN.
Step 8) It creates a directory for the Data Pump export and exports
the required table.
Step 9) It imports the table into the original database.
Step 10) Finally, RMAN cleans up and removes the auxiliary instance.
More Examples:
Recover until SCN
RECOVER TABLE 'SCOTT'.'EMP'
UNTIL SCN 1759715
AUXILIARY DESTINATION '/u01/aux'
REMAP TABLE 'SCOTT'.'EMP':'REC_EMP';
Recover and Export it to an export File:
RECOVER TABLE 'SCOTT'.'EMP'
UNTIL SCN 1759715
AUXILIARY DESTINATION '/u01/aux'
DATAPUMP DESTINATION '/u01/exp'
DUMP FILE 'scott_emp_prev.dmp'
NOTABLEIMPORT;
Backup & Recovery
Entire CDB with all PDBs
BACKUP DATABASE PLUS ARCHIVELOG;
RESTORE DATABASE;
RECOVER DATABASE;
Just the root container (CDB$ROOT):
BACKUP DATABASE ROOT;
RESTORE DATABASE ROOT;
RECOVER DATABASE ROOT;
PDBs:
BACKUP PLUGGABLE DATABASE sales, hr;
RESTORE PLUGGABLE DATABASE 'pdb$seed', sales, hr;
RECOVER PLUGGABLE DATABASE 'pdb$seed', sales, hr;
Command Line Improvements
Up until version 12c, the RMAN command line parser was capable
of parsing certain DML and DDL statements. To do this, we used
the "sql" command from the RMAN command line, like this.
RMAN> sql 'create table altdotoracle (col1 number)';
RMAN> sql 'insert into altdotoracle values (1)';
However, one operation we could not do was SELECT. The RMAN
parser lacked the facility to return arrays of data to the
screen. This shouldn’t be surprising, considering that RMAN
is intended to be used to run backup and restore operations.
RMAN> sql 'select * from altdotoracle';
The statement does not error, but no data is returned. Now,
in version 12c, we get two new benefits in this area. First,
we no longer need to use the ‘sql’ command. For example…
RMAN> create table altdotoracle (col1 number);
RMAN> insert into altdotoracle values (1);
Additionally, the RMAN parser in 12c allows us to run SELECT
commands, as well as DESCRIBE commands.
RMAN> select * from altdotoracle;
RMAN> describe altdotoracle;
Multisection Image
Copies and Incremental Backups
In previous releases it was only possible to perform multisection
backups using conventional backup sets. In Oracle 12c, it is also
possible to use multisection backups for image copies and
incremental backups. In all three cases, multisection backups are
triggered by the addition of the SECTION SIZE clause of the BACKUP
command, which indicates the size of the fragment to be processed by
each slave. If the file size is smaller than the SECTION SIZE value,
a multisection backup of that file is not performed.
# Backup set.
BACKUP SECTION SIZE 400M DATABASE;
# Image copy.
BACKUP AS COPY SECTION SIZE 400M DATABASE;
# Incremental and incrementally updated image copy.
BACKUP INCREMENTAL LEVEL 1 SECTION SIZE 400M DATABASE;
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'mydb_incr_backup' SECTION SIZE 400M DATABASE;
Network-Enabled RESTORE
Restore and recovery operations can be performed directly over a
network without the need to manually transfer files. This is done
using the FROM SERVICE clause, which is used to specify an entry in
the "tnsnames.ora" file pointing to the service the data should be
sourced from. The documentation discusses two scenarios where this
might be useful.
Using a file from a physical standby database to restore a
missing/damaged file in the primary database.
RESTORE DATAFILE '/u01/oradata/primary/hr.dbf' FROM SERVICE standby_tns;
Refreshing a physical
standby database from a primary database. This performs an
incremental backup of the primary database and uses it to refresh
the standby database.
RECOVER DATABASE FROM SERVICE primary_db;
There are some additional post-recovery steps to take when doing a
network refresh of a standby database; these are described in the
Data Guard documentation.
RMAN -
Duplication Enhancements
Active Database
Duplication using Backup Sets
In previous releases, active duplicates were performed using
implicit image copy backups, transferred directly to the destination
server. From 12.1 it is also possible to perform active duplicates
using backup sets by including the USING BACKUPSET clause. Compared
to image copy backups, the unused block compression associated with
a backup set can greatly reduce the amount of data pulled across the
network for databases containing lots of unused blocks. Moreover,
auxiliary channels from the destination site are used to PULL the
backups over the network, as opposed to the PUSH method used prior
to 12c.
With Oracle Database 12c, a "pull" (or restore) process is based
on backup sets. A connection is first established with the source
database. The auxiliary instance then retrieves the required
database files from the source database as backup sets. A restore
operation is performed from the auxiliary instance. Therefore,
fewer resources are used on the source database.
Based on the DUPLICATE clauses, RMAN dynamically determines which
process to use ('push' or 'pull'). This ensures that existing
customized scripts continue to function.
- When you specify USING BACKUPSET, RMAN uses the 'pull' method
  (recommended, as it is faster).
- When you specify SET ENCRYPTION before the DUPLICATE command,
RMAN automatically uses the ‘pull’ method and creates backup
sets. The backups sent to the destination are encrypted.
- The SECTION SIZE clause divides data files into subsections
that are restored in parallel across multiple channels on the
auxiliary database. For an effective use of parallelization,
allocate more AUXILIARY channels.
- With the USING COMPRESSED BACKUPSET clause, the files are
transferred as compressed backup sets. RMAN uses unused block
compression while creating backups, thus reducing the size of
backups that are transported over the network.
The example below
performs an active duplicate of a source database (cdb1) to a new
destination database (cdb2) using backup sets rather than image
copy backups.
DUPLICATE DATABASE TO cdb2
FROM ACTIVE DATABASE
USING BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Active Database
Duplication and Parallelism (Multisection)
Active database duplications can take advantage of the multisection
backup functionality introduced in Oracle 12c, whether using image
copies or backup sets. Including the SECTION SIZE clause indicates
multisection backups should be used.
There must be multiple channels available for multisection backups
to work, so you will either need to configure persistent channel
parallelism using CONFIGURE DEVICE TYPE ... PARALLELISM, or set the
parallelism for the current operation by issuing multiple
ALLOCATE CHANNEL commands.
The example below
performs an active duplicate of a source database (cdb1) to a new
destination database (cdb2) using multisection backups.
CONFIGURE DEVICE TYPE disk PARALLELISM 4;
DUPLICATE DATABASE TO cdb2
FROM ACTIVE DATABASE
USING BACKUPSET
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK
SECTION SIZE 400M;
Multitenant
Considerations
All the examples shown
previously involve multitenant databases, but there are some extra
considerations when you are using the multitenant architecture.
If you are building an
"initSID.ora" file from scratch, you must remember to include the
following parameter.
enable_pluggable_database=TRUE
The previous examples
didn't have to do this as the SPFILE was created as a copy of the
source SPFILE, which already contained this parameter setting.
The DUPLICATE command includes some additional clauses related to
the multitenant option.
Adding the PLUGGABLE DATABASE clause allows you to specify which
pluggable databases should be included in the duplication. The
following example creates a new container database (cdb2), but it
only contains two pluggable databases (pdb1 and pdb2). The third
pluggable database (pdb3) is not included in the clone.
DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb1, pdb2
FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
The resulting clone
contains the following PDBs.
SELECT name FROM v$pdbs;
NAME
------------------------------
PDB$SEED
PDB1
PDB2
SQL>
Using the SKIP PLUGGABLE DATABASE clause will create a duplicate CDB
with all the PDBs except those in the list. The following example
creates a container database (cdb2) with a single pluggable database
(pdb3). The other two pluggable databases (pdb1 and pdb2) are
excluded from the clone.
DUPLICATE DATABASE TO cdb2 SKIP PLUGGABLE DATABASE pdb1, pdb2
FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
The resulting clone
contains the following PDBs.
SELECT name FROM v$pdbs;
NAME
------------------------------
PDB$SEED
PDB3
SQL>
You can also limit the tablespaces that are included in a PDB using
the TABLESPACE clause. If we connect to the source container
database (cdb1) and check the tablespaces in the pdb1 pluggable
database we see the following.
CONN sys/Password1@cdb1 AS SYSDBA
ALTER SESSION SET CONTAINER = pdb1;
SELECT tablespace_name FROM dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TEST_TS
SQL>
Next, we perform a duplicate of the whole of the pdb2 pluggable
database, but just the TEST_TS tablespace in the pdb1 pluggable
database.
DUPLICATE DATABASE TO cdb2 PLUGGABLE DATABASE pdb2 TABLESPACE pdb1:test_ts
FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Checking the completed
clone reveals both the pdb1 and pdb2 pluggable databases are
present, but the pdb1 pluggable database does not include the USERS
tablespace.
CONN sys/Password1@cdb2 AS SYSDBA
SELECT name FROM v$pdbs;
NAME
------------------------------
PDB$SEED
PDB1
PDB2
SQL>
ALTER SESSION SET CONTAINER = pdb1;
SELECT tablespace_name FROM dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
TEST_TS
SQL>
Clones always contain a fully functional CDB and functional PDBs.
Even when we just ask for the TEST_TS tablespace in pdb1, we also
get the SYSTEM, SYSAUX and TEMP tablespaces in that PDB. The
TABLESPACE clause can be used on its own, without the PLUGGABLE
DATABASE clause, if no full PDBs are to be duplicated, as shown in
the sketch below.
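As a minimal, untested sketch (not one of the source examples), reusing the same file name conversion clauses as the earlier commands, a duplicate of only one named tablespace with no full PDBs might look like this:
DUPLICATE DATABASE TO cdb2 TABLESPACE pdb1:test_ts
  FROM ACTIVE DATABASE
  SPFILE
   parameter_value_convert ('cdb1','cdb2')
   set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
   set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
   set db_name='cdb2'
  NOFILENAMECHECK;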
The SKIP TABLESPACE clause allows you to exclude specific
tablespaces, rather than use the inclusion approach. The following
example clones all the pluggable databases, but excludes the TEST_TS
tablespace from pdb1 during the duplicate.
DUPLICATE DATABASE TO cdb2 SKIP TABLESPACE pdb1:test_ts
FROM ACTIVE DATABASE
SPFILE
parameter_value_convert ('cdb1','cdb2')
set db_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set log_file_name_convert='/u01/app/oracle/oradata/cdb1/','/u01/app/oracle/oradata/cdb2/'
set audit_file_dest='/u01/app/oracle/admin/cdb2/adump'
set core_dump_dest='/u01/app/oracle/admin/cdb2/cdump'
set control_files='/u01/app/oracle/oradata/cdb2/control01.ctl','/u01/app/oracle/oradata/cdb2/control02.ctl','/u01/app/oracle/oradata/cdb2/control03.ctl'
set db_name='cdb2'
NOFILENAMECHECK;
Not surprisingly, the resulting clone contains all the pluggable
databases, but the pdb1 pluggable database is missing the TEST_TS
tablespace.
CONN sys/Password1@cdb2 AS SYSDBA
SELECT name FROM v$pdbs;
NAME
------------------------------
PDB$SEED
PDB1
PDB2
PDB3
SQL>
ALTER SESSION SET CONTAINER = pdb1;
SELECT tablespace_name FROM dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
More information on this feature:
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=481234170644834&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=1987193.1&_afrWindowMode=0&_adf.ctrl-state=pm1r15dy_53
More examples here:
https://www.oracle.com/technetwork/es/articles/database-performance/active-duplication-rman-2549972-esa.html
DEFAULT
Values for Table Columns (Identity Columns)
DEFAULT Values Using Sequences
In Oracle 12c a table column can be created as "identity". As a
result, the column implicitly becomes mandatory, and a sequence is
automatically created and associated with the table. Then
(depending on how exactly the identity is defined) the sequence is
automatically used to produce values for the identity column when
new rows are inserted.
Identity Column Type –Options
•ALWAYS –Forces the use of the identity. If an insert statement
references the identity column, an error is produced
•BY DEFAULT – Allows using the identity if the column isn't
referenced in the insert statement. If the column is referenced,
the specified value will be used in place of the identity
•BY DEFAULT ON NULL –Allows the identity to be used if the
identity column is referenced, but a value of NULL is specified
Identity Column Type –Restrictions
•You can specify only one identity column per table
•When specifying identity clause, you must specify a numeric data
type for datatype in the column definition clause
•When specifying identity clause, you cannot specify the DEFAULT
clause in the column definition clause
•When specifying identity clause, the NOT NULL constraint is
implicitly specified
•CREATE TABLE AS SELECT will not inherit the identity property on
a column
In the following example you can see the effect of specifying a
sequence as the default value for a column. The default value is
only used when the column is not referenced by the insert.
This behavior can be modified using the ON NULL clause described
in the next section.
The fact we can use both the NEXTVAL and CURRVAL pseudocolumns
gives us the ability to auto-populate master-detail relationships,
as shown below.
CREATE SEQUENCE master_seq;
CREATE SEQUENCE detail_seq;
CREATE TABLE master (
  id          NUMBER DEFAULT master_seq.NEXTVAL,
  description VARCHAR2(30)
);

CREATE TABLE detail (
  id          NUMBER DEFAULT detail_seq.NEXTVAL,
  master_id   NUMBER DEFAULT master_seq.CURRVAL,
  description VARCHAR2(30)
);
INSERT INTO master (description) VALUES ('Master 1');
INSERT INTO detail (description) VALUES ('Detail 1');
INSERT INTO detail (description) VALUES ('Detail 2');
INSERT INTO master (description) VALUES ('Master 2');
INSERT INTO detail (description) VALUES ('Detail 3');
INSERT INTO detail (description) VALUES ('Detail 4');
INSERT INTO master (description) VALUES ('DESCRIPTION only');
INSERT INTO master (id, description) VALUES (999, 'ID=999 and DESCRIPTION');
INSERT INTO master (id, description) VALUES (NULL, 'ID=NULL and DESCRIPTION');
SELECT * FROM master;
        ID DESCRIPTION
---------- ------------------------------
         1 Master 1
         2 Master 2
         3 DESCRIPTION only
       999 ID=999 and DESCRIPTION
           ID=NULL and DESCRIPTION
SELECT * FROM detail;
        ID  MASTER_ID DESCRIPTION
---------- ---------- ------------------------------
         1          1 Detail 1
         2          1 Detail 2
         3          2 Detail 3
         4          2 Detail 4
Of course, this would only make sense if you could guarantee the
inserts into the detail table would always immediately follow the
insert into the master table, which would prevent you from using
bulk-bind operations.
A few things to remember about using sequence pseudocolumns as
defaults include:
- If the sequence is dropped after table creation,
subsequent inserts will error.
- Sequences used as default values are always stored in the
data dictionary with fully qualified names. Normal name
resolution rules are used to determine the sequence owner,
including expansion of private and public synonyms.
- As with any use of a sequence, gaps in the sequence of numbers
  can occur for a number of reasons. For example, a sequence number
  may be requested and not used, a statement including a sequence
  may be rolled back, or the database may be shut down and cached
  sequence values lost.
DEFAULT Values On Explicit NULLs
In the previous section we saw default values are only used when a
column is not referenced in an insert statement. If the column is
referenced, even when supplying the value NULL, the default value
is not used.
Oracle 12c allows you to modify this behaviour using the ON NULL
clause in the default definition.
The following example compares the default action of the DEFAULT
clause, with that of DEFAULT ON NULL. The example uses sequences
to populate two columns, one using the standard DEFAULT clause,
the other using the DEFAULT ON NULL clause.
CREATE SEQUENCE default_seq;
CREATE SEQUENCE default_on_null_seq;
CREATE TABLE t2 (
  col1        NUMBER DEFAULT default_seq.NEXTVAL,
  col2        NUMBER DEFAULT ON NULL default_on_null_seq.NEXTVAL,
  col3        NUMBER DEFAULT ON NULL 55,
  description VARCHAR2(30)
);
INSERT INTO t2 (description) VALUES ('DESCRIPTION only');
INSERT INTO t2 (col1, col2, col3, description) VALUES (999, 999, 11, '999,999,DESCRIPTION');
INSERT INTO t2 (col1, col2, description) VALUES (NULL, NULL, 'NULL,NULL,DESCRIPTION');
SELECT * FROM t2;
      COL1       COL2 COL3 DESCRIPTION
---------- ---------- ---- ------------------------------
         1          1   55 DESCRIPTION only
       999        999   11 999,999,DESCRIPTION
                    2   55 NULL,NULL,DESCRIPTION
Notice the difference in the way the explicit NULL is handled.
The GENERATED ALWAYS Option
An identity column can behave in one of three different ways. In the
first one – when the column is defined with the GENERATED ALWAYS AS
IDENTITY option (or simply GENERATED AS IDENTITY) – the sequence is
always used, and specifying explicit values is forbidden. Let’s use
this option to implement our task:
create table project_assignments (
  assignment_id integer GENERATED AS IDENTITY
                constraint project_assignments_pk primary key,
  person_id     integer not null,
  project_id    integer not null
);
We’ll insert now two rows into the PROJECT_ASSIGNMENTS table,
without specifying the ASSIGNMENT_ID column, and we’ll see that the
rows get a unique ASSIGNMENT_ID value:
insert into project_assignments (person_id,project_id) values
(101,1);
insert into project_assignments (person_id,project_id)
values (102,2);
select * from project_assignments ;
ASSIGNMENT_ID  PERSON_ID PROJECT_ID
------------- ---------- ----------
            1        101          1
            2        102          2
If we try to specify an ASSIGNMENT_ID value explicitly, we’ll get
the following exception:
insert into project_assignments
(assignment_id,person_id,project_id) values (3,103,3);
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity
column
The GENERATED BY DEFAULT Option
If we do want the ability to specify the ASSIGNMENT_ID explicitly,
and let Oracle assign a value automatically only when we do not
specify it, we can use instead the GENERATED BY DEFAULT AS IDENTITY
option. Let’s change the ASSIGNMENT_ID column to behave in this way:
alter table project_assignments modify ( assignment_id generated
BY DEFAULT as identity );
Retrying the last INSERT statement will succeed now:
insert into project_assignments
(assignment_id,person_id,project_id) values (3,103,3);
Watch out: this can cause a problem, because the next time the
identity is used, if the column is a primary key, we will get an
error, since the next value generated by the sequence will also be 3.
insert into project_assignments (person_id,project_id) values
(104,4);
ERROR at line 1:
ORA-00001: unique constraint (DEMO5.PROJECT_ASSIGNMENTS_PK)
violated
Another attempt will succeed, as the sequence has already been
incremented:
insert into project_assignments (person_id,project_id) values
(104,4);
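One way to avoid the clash (a sketch, not from the original text) is to resynchronise the identity sequence with the existing data; START WITH LIMIT VALUE is only valid in ALTER TABLE ... MODIFY:
alter table project_assignments modify
  ( assignment_id generated by default as identity (start with limit value) );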
NOTE:
You can specify that an IDENTITY column starts with a specific value
and increments by another number. Example:
create table t2
( x number generated by default as identity (start with 42 increment by 1000),
  y number
);
insert into t2 (x,y) values (1,100);
insert into t2 (y) values (200);
         X          Y
---------- ----------
         1        100
        42        200
The GENERATED BY DEFAULT ON NULL Option
Now, what will happen if we try to set ASSIGNMENT_ID to NULL
during INSERT? An identity column is always mandatory – the
column is defined as NOT NULL as part of the identity definition
(just like it happens when we define a column as PRIMARY KEY) –
and since we defined ASSIGNMENT_ID as GENERATED BY DEFAULT AS
IDENTITY, the following statement will simply try to insert NULL
to ASSIGNMENT_ID, and will fail:
insert into project_assignments
(assignment_id,person_id,project_id) values (null,201,1);
ERROR at line 1:
ORA-01400: cannot insert NULL into
("DEMO5"."PROJECT_ASSIGNMENTS"."ASSIGNMENT_ID")
We can use the third form of the identity clause – GENERATED BY
DEFAULT ON NULL AS IDENTITY – which means that Oracle will
generate values from the sequence whenever the identity column
is not specified explicitly or it is specified explicitly with
NULL.
Let’s change the ASSIGNMENT_ID to behave in this way:
alter table project_assignments modify ( assignment_id generated
BY DEFAULT ON NULL as identity );
Retrying the last INSERT statement will succeed now:
insert into project_assignments
(assignment_id,person_id,project_id) values (null,201,1);
select * from project_assignments;
ASSIGNMENT_ID  PERSON_ID PROJECT_ID
------------- ---------- ----------
            1        101          1
            2        102          2
            3        103          3
            4        104          4
            5        201          1
Note: DEFAULT ON NULL is actually an independent feature
(new in 12c) – it can be used in the definition of any column,
not only identity columns.
USER_TABLES has a new column – HAS_IDENTITY – that contains
YES if the table has an identity column and NO if not.
select table_name,HAS_IDENTITY from user_tables;
TABLE_NAME                     HAS
------------------------------ ---
TOAD_PLAN_TABLE                NO
TEST                           NO
T2                             YES
T1                             NO
PROJECT_ASSIGNMENTS            YES
USER_TAB_COLUMNS has two new relevant columns: IDENTITY_COLUMN
and DEFAULT_ON_NULL:
select column_name,data_type,nullable,
column_id,IDENTITY_COLUMN,DEFAULT_ON_NULL
from user_tab_columns
where table_name = 'PROJECT_ASSIGNMENTS'
order by column_id;
COLUMN_NAME   DATA_TYPE  NUL  COLUMN_ID IDENTITY_ DEFAULT_O
------------- ---------- --- ---------- --------- ---------
ASSIGNMENT_ID NUMBER     N            1 YES       YES
PERSON_ID     NUMBER     N            2 NO        NO
PROJECT_ID    NUMBER     N            3 NO        NO
SQL Text Expansion
Here is another small but very useful new feature in Oracle
Database 12c - SQL Text Expansion. It will come in handy in
two cases:
- You are asked to tune what looks like a simple query -
maybe a two table join with simple predicates. But it
turns out the two tables are each views of views of views and
so on... In other words, you've been asked to 'tune' a 15 page
query, not a two liner.
- You are asked to take a look at a query against tables
with VPD (virtual private database) policies. In other
words, you have no idea what you are trying to 'tune'.
A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package
makes seeing what the "real" SQL is quite easy. For example - take
the common view ALL_USERS - we can now:
variable x clob
begin
dbms_utility.expand_sql_text
( input_sql_text => 'select * from
all_users',
output_sql_text => :x );
end;
/
print x
X
--------------------------------------------------------------------------------
SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID"
"USER_ID","A1"."CREATED" "CREAT
ED","A1"."COMMON" "COMMON" FROM (SELECT "A4"."NAME"
"USERNAME","A4"."USER#" "US
ER_ID","A4"."CTIME"
"CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "
COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$"
"A3","SYS"."TS$" "A2" WHERE "A4"."DA
TATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND
"A4"."TYPE#"=1) "A1"
Now it is easy to see what query is really being executed at
runtime - regardless of how many views of views you might
have. You can see the expanded text - and that will probably
lead you to the conclusion that maybe that 27 table join to 25
tables you don't even care about might be better written as a two
table join.
SQL*Loader
Express Mode
create table test(
text1 varchar2(10),
counter number
);
more test.dat
time,1
test,2
trim,3
twin,4
In Oracle 12c SQL*Loader can be executed in a new mode – Express
Mode. In this mode we do not use a control file, and many defaults
are used implicitly. We can override many of these defaults by
specifying explicit parameters at the command-line level.
The only parameter that is mandatory in this mode is the TABLE
parameter. In our case, all we need to do is execute SQL*Loader from
C:\Data (where the test.dat file is located) and specify TABLE=test.
Now, let's execute SQL*Loader:
C:\Data>sqlldr demo5 table=test
Note that we are specifying the schema where the load has to happen,
followed by the table name. You have to keep the test.dat file in
the same directory from which you are triggering the load. When you
enter this command, Oracle will prompt you for the password for the
schema. Enter the password and voilà - you have completed the load.
For example, let's say we want to load another file into a PEOPLE
table, with the following differences from the original data file:
the file name is my_people.txt, the field delimiter is a semicolon,
and the fields may contain semicolons in their content, in which
case they are enclosed by double quotes.
We can still use Express Mode, like this:
C:\Data>sqlldr demo5/demo5 table=people data=my_people.txt
terminated_by=';' optionally_enclosed_by='\"'
A log file is generated for future use, including:
- The control file
- The CREATE EXTERNAL TABLE statement
- The INSERT statement
External
Table Updates
Oracle 11g Release 2 introduced the PREPROCESSOR clause to identify
a directory object and script used to process the files before they
are read by the external table. This feature was backported to 11gR1
(11.1.0.7). The PREPROCESSOR clause is especially useful for reading
compressed files, since they are unzipped and piped straight into
the external table process without ever having to be unzipped on the
file system.
CREATE OR REPLACE DIRECTORY exec_dir AS '/bin';
CREATE TABLE countries_ext (
country_code VARCHAR2(5),
country_name VARCHAR2(50),
country_language VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_tab_data
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
PREPROCESSOR exec_dir:'zcat'
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
country_code CHAR(5),
country_name CHAR(50),
country_language CHAR(50)
)
)
LOCATION ('Countries1.txt.gz','Countries2.txt.gz')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
The EXECUTE privilege must be granted on the directory containing
the executable or script.
If you need to specify command line parameters, you should create a
script to perform the action, then call that in the PREPROCESSOR
clause. For example, assuming we had an executable file in the
executable directory called "my_unzip.sh" with the following
contents.
#!/bin/bash
/bin/gunzip -c $1
We could use the following in the PREPROCESSOR clause.
PREPROCESSOR exec_dir:'my_unzip.sh'
Direct NFS (DNFS) Support
External tables now support the use of Direct NFS (DNFS) to improve
performance when the data being loaded resides on NFS filers. DNFS
is used by default for files over 1G in size. Explicit control is
possible using the DNFS_ENABLE, DNFS_DISABLE and DNFS_READBUFFERS
parameters, with the DNFS_READBUFFERS parameter defaulting to 4.
Setup
The examples in this article require the SCOTT schema. If you are
working in a multitenant environment, edit the
"$ORACLE_HOME/rdbms/admin/utlsampl.sql" script, amending the
connection to the SCOTT schema as necessary, then run the script
when connected to the relevant PDB as a privileged user.
conn sys@pdb1 as sysdba
@?/rdbms/admin/utlsampl.sql
Create a CSV file to load later. This should be placed in a suitable
NFS mount point. In this case a copy is also placed in the "/tmp"
directory so as not to cause confusion when discussing non-NFS
related functionality.
CONN scott/tiger@pdb1
ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
-- Create data file.
SET PAGESIZE 0 LINESIZE 100 TRIMSPOOL ON FEEDBACK OFF
SPOOL /nfs/EMP.dat.tmp
SELECT empno || ',' || ename || ',' || job || ',' || mgr || ',' ||
hiredate || ',' || sal || ',' || comm || ',' || deptno
FROM emp;
SPOOL OFF
SET PAGESIZE 14 FEEDBACK ON
-- Clean up the file.
HOST cat /nfs/EMP.dat.tmp | grep '[0-9]\{4\}' > /nfs/EMP.dat
HOST cp /nfs/EMP.dat /tmp/EMP.dat
Create directory objects to allow the SCOTT user to load the data
file from both locations.
CONN sys@pdb1 AS SYSDBA
-- Create a directory pointing to an NFS location.
CREATE OR REPLACE DIRECTORY nfs_dir AS '/nfs/';
GRANT READ,WRITE ON DIRECTORY nfs_dir TO scott;
-- Create a directory pointing to a non-NFS location.
CREATE OR REPLACE DIRECTORY tmp_dir AS '/tmp/';
GRANT READ,WRITE ON DIRECTORY tmp_dir TO scott;
CONN scott/tiger@pdb1
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY NFS_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
DNFS_ENABLE
DNFS_READBUFFERS 10
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
empno,
ename,
job,
mgr,
hiredate CHAR(21) DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS",
sal,
comm,
deptno
)
)
LOCATION ('EMP.dat')
)
PARALLEL 1
REJECT LIMIT UNLIMITED;
SELECT * FROM emp_ext;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
14 rows selected.
ORACLE_LOADER Access Driver Enhancements
A number of minor usability improvements have been made to the
ORACLE_LOADER access driver to make external table creation simpler.
The LOCATION clause now accepts wildcards. An "*" matches multiple
characters, while a "?" matches a single character.
LOCATION ('emp_ext*.dmp')
LOCATION ('emp_ext?.dmp')
The BADFILE, DISCARDFILE, and LOGFILE clauses can now be specified
using only a directory object.
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
BADFILE TMP_DIR
LOGFILE TMP_DIR
DISCARDFILE TMP_DIR
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
(
empno,
ename,
job,
mgr,
hiredate CHAR(21) DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS",
sal,
comm,
deptno
)
)
LOCATION ('EMP.dat')
);
SELECT * FROM emp_ext;
Files using CSV (comma-separated values) format can use the
simplified FIELDS CSV clause. The default settings for this are
shown below, but they can be modified (see the access parameters
documentation for the full syntax).
FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
Assuming you are using a typical CSV format, you can ignore the
optional clauses and stick to the basic FIELDS CSV clause.
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
FIELDS CSV
MISSING FIELD VALUES ARE NULL
(
empno,
ename,
job,
mgr,
hiredate CHAR(21) DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS",
sal,
comm,
deptno
)
)
LOCATION ('EMP.dat')
);
SELECT * FROM emp_ext;
A default datetime format can be specified for all datetime fields
using the DATE_FORMAT clause. A separate default mask can be
specified for DATE, TIME, TIME WITH TIME ZONE, TIMESTAMP, and
TIMESTAMP WITH TIME ZONE fields.
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS"
MISSING FIELD VALUES ARE NULL
)
LOCATION ('EMP.dat')
);
SELECT * FROM emp_ext;
A default NULLIF can be specified that applies to all character
fields. A field-specific NULLIF overrides the default NULLIF. The
NO NULLIF clause can be used against a field to prevent the default
NULLIF applying to it. Both forms are shown in the following
example.
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
NULLIF = "NONE"
(
empno,
ename,
job,
mgr NO NULLIF,
hiredate CHAR(21) DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS",
sal NULLIF job="PRESIDENT",
comm,
deptno
)
)
LOCATION ('EMP.dat')
);
SELECT * FROM emp_ext;
The ALL FIELDS OVERRIDE clause indicates that all fields are present
in the data file and their order matches the external table column
order. This means the field list only needs to contain columns that
require a specific definition.
DROP TABLE emp_ext;
CREATE TABLE emp_ext (
EMPNO NUMBER(4),
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
ALL FIELDS OVERRIDE
MISSING FIELD VALUES ARE NULL
(
hiredate CHAR(21) DATE_FORMAT DATE MASK "DD-MON-YYYY HH24:MI:SS"
)
)
LOCATION ('EMP.dat')
);
SELECT * FROM emp_ext;
Easy data export
CREATE TABLE customers_dump
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY HOME_DIR
LOCATION ('customers.dmp')
)
AS SELECT * FROM customers;
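The resulting dump file can then be read back, for example in another schema or database, through an ORACLE_DATAPUMP external table. A hedged sketch is shown below; the column list is illustrative and must match the real CUSTOMERS table:
CREATE TABLE customers_ext (
  customer_id NUMBER,
  cust_name   VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY HOME_DIR
  LOCATION ('customers.dmp')
);

SELECT * FROM customers_ext;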
ONLINE Move of a Table (12.2)
In 12.2 the ONLINE option allows DML against the table while the
move runs. In the past you needed to use DBMS_REDEFINITION. You can
also use this statement to move indexes and partitions. Here are
some examples:
-- Create table.
DROP TABLE TOMOVE PURGE;
CREATE TABLE TOMOVE (
  id           NUMBER CONSTRAINT TOMOVE_pk PRIMARY KEY,
  description  VARCHAR2(50),
  created_date DATE
);
-- Populate it.
INSERT INTO TOMOVE
SELECT level,
       'Description for ' || level,
       CASE
         WHEN MOD(level,2) = 0 THEN TO_DATE('01/07/2015', 'DD/MM/YYYY')
         ELSE TO_DATE('01/07/2016', 'DD/MM/YYYY')
       END
FROM   dual
CONNECT BY level <= 100;
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'TOMOVE');
We can now move the table using the ONLINE keyword.
-- Basic move.
ALTER TABLE TOMOVE MOVE ONLINE TABLESPACE users;
-- Change table compression.
ALTER TABLE TOMOVE MOVE ONLINE TABLESPACE users COMPRESS UPDATE
INDEXES;
ALTER TABLE TOMOVE MOVE ONLINE TABLESPACE users NOCOMPRESS
UPDATE INDEXES;
-- Change storage parameters.
ALTER TABLE TOMOVE MOVE ONLINE STORAGE (PCTINCREASE 0);
More Examples:
alter index T1_IDX rebuild online;
alter table test.T1 move partition P4 online update indexes;
-- Ability to move a table or partition from one segment to another online
ALTER TABLE sales MOVE ONLINE
  TABLESPACE ts_data
  COMPRESS
  INCLUDING ROWS WHERE status = 'OPEN';
DBMS_REDEFINITION is still available, and it was simplified in 12c
for some use cases: one procedure (REDEF_TABLE) does all the work,
but it is less flexible.
Moving an entire table, changing table, index and LOB attributes:
begin
DBMS_REDEFINITION.REDEF_TABLE (
uname=> 'TEST',
tname=> 'T1',
table_compression_type=> 'ROW STORE COMPRESS ADVANCED',
table_part_tablespace=> 'TEST_COMP',
index_key_compression_type=> 'COMPRESS ADVANCED LOW',
index_tablespace=> 'TEST_IDX_COMP',
lob_compression_type=> 'COMPRESS HIGH',
lob_tablespace=> 'TEST_COMP',
lob_store_as=> 'SECUREFILE');
end;
NOTE: Table and index owner (TEST) needs quota privilege on new
tablespace TEST_COMP.
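For example, the quota mentioned in the note could be granted like this (a sketch; the tablespace names follow the example above):
ALTER USER test QUOTA UNLIMITED ON test_comp;
ALTER USER test QUOTA UNLIMITED ON test_idx_comp;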
DBMS_REDEFINITION restrictions
•Same schema
•LONG and LONG RAW columns cannot be redefined
–cannot be moved manually either (alter table ..
move)
–LONG columns must be converted to CLOBS
–LONG RAW columns must be converted to BLOBS
–needs application testing
•NOTE: BFILE columns can be redefined online starting on 12.2
Ability to convert a non-partitioned table to a partitioned table
online
ALTER TABLE sales MODIFY
PARTITION BY LIST (region)
(partition p1 values ('USA'),
partition p2 values ('Germany'),
partition p3 values ('Japan'),
partition p4 values (DEFAULT))
UPDATE INDEXES ONLINE;
Approx Counting
Approximate Query Processing provides approximate distinct counts
and approximate percentile aggregation. This allows faster
processing of large data sets by using approximation instead of
exact aggregation.
Since this is an approximation, it is not guaranteed to be
completely accurate; however, in most cases it is very close and
acceptable considering the large performance boost it provides.
Instead of (100% accurate – 12.1.0.2):
select count(distinct(LOG_RECORD_ID)) from
SEAWARE.JOBS_ERROR_LOG;
Use this for speed (97% accurate – 12.1.0.2):
select approx_count_distinct(LOG_RECORD_ID) from
SEAWARE.JOBS_ERROR_LOG;
Oracle will approximate the total to within 97% or so of the actual
result. Also, the Explain Plan will show a change from SORT GROUP BY
to SORT AGGREGATE APPROX.
Other Approximate Functions (see the sketch after this list):
APPROX_COUNT_DISTINCT_DETAIL
APPROX_COUNT_DISTINCT_AGG
TO_APPROX_COUNT_DISTINCT
APPROX_MEDIAN
APPROX_PERCENTILE
APPROX_PERCENTILE_DETAIL
APPROX_PERCENTILE_AGG
TO_APPROX_PERCENTILE
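For instance, a couple of hedged sketches using the same table as above (12.2 syntax; results are approximations):
SELECT APPROX_MEDIAN(LOG_RECORD_ID) FROM SEAWARE.JOBS_ERROR_LOG;

SELECT APPROX_PERCENTILE(0.95) WITHIN GROUP (ORDER BY LOG_RECORD_ID)
FROM   SEAWARE.JOBS_ERROR_LOG;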
Read Object Privilege
As you might know, if you grant the SELECT privilege to a user, that
user can lock your data (with SELECT ... FOR UPDATE) even though
they cannot update it:
Grant access on the test table to the read-only user.
GRANT SELECT ON test.tab1 TO read_only_user;
Issue the following SELECT ... FOR UPDATE query in a session
connected to the read-only user. Do not issue a commit or rollback
after it and keep the session open while you work in a separate
session.
CONN read_only_user/read_only_user@pdb1
SELECT *
FROM test.tab1
FOR UPDATE;
ID DESCRIPTION
---------- --------------------
1 Description of 1
2 Description of 2
3 Description of 3
4 Description of 4
5 Description of 5
SQL>
While connected as the schema owner, attempt to update one of the
rows. You will see it hang, waiting for the read-only session to
commit or rollback and thereby release the lock.
CONN test/test@pdb1
UPDATE tab1
SET id = id
WHERE id = 1;
Issue a commit in the read-only user session and you will see the
update complete in the schema owner session. Remember to commit the
update as well once it completes.
This is exactly the problem with using the SELECT object privilege
for read-only users. Oracle 12c introduces the new READ object
privilege to avoid this situation.
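A minimal sketch of switching the read-only user over to the new privilege (object names follow the example above):
-- As the owner or a privileged user:
REVOKE SELECT ON test.tab1 FROM read_only_user;
GRANT READ ON test.tab1 TO read_only_user;

-- A subsequent SELECT ... FOR UPDATE by read_only_user now fails with an
-- insufficient privileges error, while plain SELECT queries still work.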
SQL*Plus Enhancements
PDB Support
SQL*Plus includes support for the multitenant architecture. The SHOW
command displays information about PDBs.
SQL> SHOW CON_ID
CON_ID
------------------------------
1
SQL> SHOW CON_NAME
CON_NAME
------------------------------
CDB$ROOT
SQL> SHOW PDBS
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
Command History
Before we can use the HISTORY command we have to make sure it is
turned on. In the example below we attempt to use the HISTORY command
with no options and we are told the HISTORY command is not enabled,
so we enable it as instructed.
SQL> HISTORY
SP2-1650: History is off, use "SET HIST[ORY] ON" to enable History.
SQL> SET HISTORY ON
SQL> HISTORY
SP2-1651: History list is empty.
SQL>
Checking the help for the HISTORY command, we get the following usage.
SQL> HELP HISTORY
HISTORY
-------
Stores, lists, executes, edits of the commands
entered during the current SQL*Plus session.
HIST[ORY] [N {RUN | EDIT | DEL[ETE]}] | [CLEAR]
N is the entry number listed in the history list.
Use this number to recall, edit or delete the command.
Example:
HIST 3 RUN - will run the 3rd entry from the list.
HIST[ORY] without any option will list all entries in the list.
SQL>
Let's put some
statements in the history and try a few commands.
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
SELECT SYSDATE FROM dual;
SELECT 'Banana' FROM dual;
SQL> HISTORY
1 ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
2 SELECT SYSDATE FROM dual;
3 SELECT 'Banana' FROM dual;
SQL>
SQL> HISTORY 2 RUN
SYSDATE
--------------------
22-APR-2017 13:49:41
SQL>
SQL> HISTORY 1 DELETE
SQL> HISTORY
1 SELECT SYSDATE FROM dual;
2 SELECT 'Banana' FROM dual;
MARKUP CSV
The MARKUP option now includes a CSV option.
MARK[UP] {HTML html_option|CSV csv_option}
html_option:
{ON|OFF} [HEAD text] [BODY text]
[TABLE {ON|OFF}] [ENTMAP {ON|OFF}]
[SPOOL {ON|OFF}] [PRE[FORMAT] {ON|OFF}]
csv_option:
{ON|OFF} [DELIM[ITER] {c}]
[QUOTE {ON|OFF}]
Here's a quick example
of it.
SET MARKUP CSV ON
SELECT level AS ID,
'Description of ' || level AS description
FROM dual
CONNECT BY level <= 5
ORDER BY 1;
"ID","DESCRIPTION"
1,"Description of 1"
2,"Description of 2"
3,"Description of 3"
4,"Description of 4"
5,"Description of 5"
SQLcl
The SQLcl tool is now shipped with the database. This is an
alternative to SQL*Plus that you might want to try.
$ sql test/test@pdb1
SQLcl: Release 12.2.0.1.0 RC on Sat Apr 22 15:59:49 2017
Copyright (c) 1982, 2017, Oracle. All rights reserved.
Last Successful login time: Sat Apr 22 2017 15:59:50 +01:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>
You can read more about SQLcl in its own documentation.
DBMS_CREDENTIAL
Oracle Database 12c enables enhanced security for extproc by
authenticating it against a user-supplied credential. This new
feature allows the creation of a user credential and links it with a
PL/SQL library object. Whenever an application calls an external
procedure, the extproc process authenticates the connection before
loading the shared library.
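For context, a hedged sketch of how a credential might be attached to a PL/SQL library for extproc; the library name, path and credential name are made up, and the credential itself would be created with DBMS_CREDENTIAL as shown below:
-- Hypothetical external library secured with a credential.
CREATE OR REPLACE LIBRARY ext_demo_lib AS '/u01/app/oracle/extproc/libdemo.so'
CREDENTIAL extproc_cred;
/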
In Oracle 12c the credential related sub-programs of the
DBMS_SCHEDULER package have been deprecated and replaced by the new
DBMS_CREDENTIAL package. From a usage perspective it feels similar.
The DBMS_CREDENTIAL package is used to configure the credentials.
Let's explore this a little.
Create Credentials
Credentials are database objects that hold a username/password pair
for authenticating local and remote external jobs. They are created
using the CREATE_CREDENTIAL procedure. The procedure also allows you
to specify the Windows domain for remote external jobs executed
against a Windows server. Credentials are owned by SYS.
BEGIN
  -- Basic credential.
  DBMS_CREDENTIAL.create_credential(
    credential_name => 'UNIX_CREDENTIAL',
    username        => 'user12',
    password        => 'password12');

  -- Credential including Windows domain.
  DBMS_CREDENTIAL.create_credential(
    credential_name => 'WINDOWS_CREDENTIAL',
    username        => 'user12',
    password        => 'password12',
    windows_domain  => 'localdomain');
END;
/
Information about credentials is displayed using the
[DBA|ALL|USER]_CREDENTIALS views.
COLUMN credential_name FORMAT A25
COLUMN username FORMAT A20
COLUMN windows_domain FORMAT A20
SELECT credential_name, username,
windows_domain, enabled
FROM user_credentials
ORDER BY credential_name;
CREDENTIAL_NAME           USERNAME             WINDOWS_DOMAIN       ENABL
------------------------- -------------------- -------------------- -----
UNIX_CREDENTIAL           user12                                    TRUE
WINDOWS_CREDENTIAL        user12               LOCALDOMAIN          TRUE
Enable/Disable Credentials
Credentials are enabled and disabled using the ENABLE_CREDENTIAL and
DISABLE_CREDENTIAL procedures respectively.
-- Disable credential.
EXEC DBMS_CREDENTIAL.disable_credential('UNIX_CREDENTIAL');
-- Enable credential.
EXEC DBMS_CREDENTIAL.enable_credential('UNIX_CREDENTIAL');
Update Credentials
The UPDATE_CREDENTIAL procedure allows you to modify attributes of a
credential.
BEGIN
  DBMS_CREDENTIAL.update_credential(
    credential_name => 'UNIX_CREDENTIAL',
    attribute       => 'username',
    value           => 'user2');

  DBMS_CREDENTIAL.update_credential(
    credential_name => 'UNIX_CREDENTIAL',
    attribute       => 'password',
    value           => 'password21');
END;
/
Drop Credentials
Credentials are dropped using the DROP_CREDENTIAL procedure.
EXEC DBMS_CREDENTIAL.drop_credential('UNIX_CREDENTIAL');
EXEC DBMS_CREDENTIAL.drop_credential('WINDOWS_CREDENTIAL');
BE AWARE!
The documentation notes that passwords are stored obfuscated and are
not displayed in the *_SCHEDULER_CREDENTIALS views. However, as the
SYS user you can extract a password with the
DBMS_ISCHED.GET_CREDENTIAL_PASSWORD function.
SELECT u.name CREDENTIAL_OWNER, O.NAME CREDENTIAL_NAME,
C.USERNAME,
DBMS_ISCHED.GET_CREDENTIAL_PASSWORD(O.NAME, u.name)
pwd
FROM SYS.SCHEDULER$_CREDENTIAL C, SYS.OBJ$ O, SYS.USER$ U
WHERE U.USER# = O.OWNER#
AND C.OBJ# = O.OBJ# ;
CREDENTIAL_OWNER     CREDENTIAL_NAME                USERNAME             PWD
-------------------- ------------------------------ -------------------- ----------
TESTUSR              ORACLE_CRED                    oracle               os_password
Scheduler
Oracle improved the Scheduler by adding three script job types:
EXTERNAL_SCRIPT, SQL_SCRIPT and BACKUP_SCRIPT. These are used with
the DBMS_CREDENTIAL package that we explained before.
Let's see some examples:
EXTERNAL_SCRIPT
The JOB_TYPE of EXTERNAL_SCRIPT is similar to an EXECUTABLE job,
except that it allows you to build a script on the fly and run it as
the JOB_ACTION, as if it were an existing shell script.
It can also be used to run an existing script by specifying the full
file system path to the script in the JOB_ACTION parameter.
On Windows servers the script is run using "cmd.exe". On UNIX/Linux
servers the script is run using "sh", unless the first line of the
script indicates a specific shell using a shebang, for example
"#!/bin/bash".
-- Create a job with a shell script defined in-line.
SET SERVEROUTPUT ON
DECLARE
  l_job_name VARCHAR2(30);
  l_script   VARCHAR2(32767);
BEGIN
  l_job_name := DBMS_SCHEDULER.generate_job_name;
  DBMS_OUTPUT.put_line('JOB_NAME=' || l_job_name);

  l_script := '#!/bin/bash
export PATH=$PATH:/bin
ls /home/oracle > /tmp/test.txt';

  DBMS_SCHEDULER.create_job(
    job_name        => l_job_name,
    job_type        => 'EXTERNAL_SCRIPT',
    job_action      => l_script,
    credential_name => 'UNIX_CREDENTIAL',
    enabled         => TRUE);
END;
/
JOB_NAME=JOB$_290
-- Check the status of the job.
COLUMN job_name FORMAT A20
SELECT job_name, status, error#
FROM user_scheduler_job_run_details
ORDER BY job_name;
JOB_NAME             STATUS                             ERROR#
-------------------- ------------------------------ ----------
JOB$_290             SUCCEEDED                               0
A more realistic example is to execute an already created script:
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'test_script_job',
    job_type        => 'EXECUTABLE',
    job_action      => '/home/oracle/test_script.sh',
    start_date      => SYSDATE,
    credential_name => 'UNIX_CREDENTIAL',
    enabled         => FALSE,
    repeat_interval => NULL);
END;
/
SQL_SCRIPT
The JOB_TYPE of SQL_SCRIPT runs the specified job as a SQL*Plus
script, which means you get access to all the SQL*Plus functionality
without having to manually shell out to the operating system and
initiate SQL*Plus.
-- Create a credential so SQL*Plus connects to the correct database user.
-- Notice the service is included in the username.
BEGIN
  DBMS_CREDENTIAL.create_credential(
    credential_name => 'DB_CREDENTIAL',
    username        => 'test@pdb1',
    password        => 'test12');
END;
/
-- Create a job with a SQL*Plus script defined in-line,
-- using a credential, rather than an explicit connect.
SET SERVEROUTPUT ON
DECLARE
  l_job_name VARCHAR2(30);
  l_script   VARCHAR2(32767);
BEGIN
  l_job_name := DBMS_SCHEDULER.generate_job_name;
  DBMS_OUTPUT.put_line('JOB_NAME=' || l_job_name);

  l_script := 'SPOOL /tmp/test.lst
SELECT SYSDATE, USER FROM dual;
SPOOL OFF';

  -- Create the job in disabled mode.
  -- The credential supplied is the OS user that should run SQL*Plus.
  DBMS_SCHEDULER.create_job(
    job_name        => l_job_name,
    job_type        => 'SQL_SCRIPT',
    job_action      => l_script,
    credential_name => 'UNIX_CREDENTIAL',
    enabled         => FALSE);

  -- Set the credential used for SQL*Plus to connect to the database.
  DBMS_SCHEDULER.set_attribute(
    name      => l_job_name,
    attribute => 'connect_credential_name',
    value     => 'DB_CREDENTIAL');

  -- Enable the job.
  DBMS_SCHEDULER.enable(l_job_name);
END;
/
BACKUP_SCRIPT
The JOB_TYPE of BACKUP_SCRIPT runs the specified job as an RMAN
script, without having to manually shell out to the operating system
to run RMAN. The JOB_ACTION can specify an in-line backup script, or
a full path to a backup script on the database server file system.
-- Create a job with an RMAN script defined in-line,
-- including an explicit connect.
SET SERVEROUTPUT ON
DECLARE
l_job_name VARCHAR2(30);
l_script VARCHAR2(32767);
BEGIN
l_job_name := DBMS_SCHEDULER.generate_job_name;
DBMS_OUTPUT.put_line('JOB_NAME=' || l_job_name);
-- Define the backup script.
l_script := 'connect target /
run {
backup archivelog all delete all input;
}';
  DBMS_SCHEDULER.create_job(
    job_name        => l_job_name,
    job_type        => 'BACKUP_SCRIPT',
    job_action      => l_script,
    credential_name => 'UNIX_CREDENTIAL',
    enabled         => TRUE);
END;
/
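As noted above, JOB_ACTION can instead point at an existing RMAN script on the database server; a brief sketch (the script path and job name are hypothetical):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'nightly_arch_backup',
    job_type        => 'BACKUP_SCRIPT',
    job_action      => '/home/oracle/scripts/backup_arch.rman',
    credential_name => 'UNIX_CREDENTIAL',
    enabled         => TRUE);
END;
/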
Scheduler View Changes
The most important change to the scheduler views is the addition of
the following columns to the [DBA | ALL |
USER]_SCHEDULER_JOB_RUN_DETAILS views.
ERRORS
OUTPUT
BINARY_ERRORS
BINARY_OUTPUT
These are extremely useful for diagnosing problems during job runs,
especially in dynamically created script jobs. They also save you
from having to use the DBMS_SCHEDULER.GET_FILE procedure to return
the stdout/stderr from external jobs, as shown below.
SQL> SELECT output FROM all_scheduler_job_run_details WHERE
job_name = 'JOB$_338';
CAST and TO_* Conversion Functions
Setup
The following table is
used by the examples in this article.
CREATE TABLE t1 (
data VARCHAR2(20)
);
INSERT INTO t1 VALUES ('11111');
INSERT INTO t1 VALUES ('01-JAN-2016');
INSERT INTO t1 VALUES ('AAAAA');
COMMIT;
CAST and TO_* Conversion
Functions
In previous database versions, a failure during data type conversion
resulted in an error.
SELECT TO_NUMBER(data)
FROM t1;
ERROR:
ORA-01722: invalid number
no rows selected
SQL>
In Oracle Database 12.2 the CAST function and several of the TO_*
functions have been amended to include error handling functionality,
allowing them to return a default value in the event of a
conversion error.
SELECT TO_NUMBER(data DEFAULT -1 ON CONVERSION ERROR)
FROM t1;
TO_NUMBER(DATADEFAULT-1ONCONVERSIONERROR)
-----------------------------------------
11111
-1
-1
SQL>
SELECT TO_DATE(data DEFAULT '01-JAN-2000' ON CONVERSION ERROR, 'DD-MON-YYYY' )
FROM t1;
TO_DATE(D
---------
01-JAN-00
01-JAN-16
01-JAN-00
SQL>
SELECT CAST(data AS TIMESTAMP DEFAULT NULL ON CONVERSION ERROR, 'DD-MON-YYYY')
FROM t1;
CAST(DATAASTIMESTAMPDEFAULTNULLONCONVERSIONERROR,'DD-MON-YYYY')
---------------------------------------------------------------------------
01-JAN-16 12.00.00.000000000 AM
SQL>
VALIDATE_CONVERSION
Function
The VALIDATE_CONVERSION function is used to test if a conversion will
be successful, making it possible to exclude data that would cause a
problem during an operation. The function returns the value "1" if
the conversion is successful and "0" if it fails.
SELECT data
FROM t1
WHERE VALIDATE_CONVERSION(data AS NUMBER) = 1;
DATA
--------------------
11111
SQL>
SELECT data
FROM t1
WHERE VALIDATE_CONVERSION(data AS DATE, 'DD-MON-YYYY') = 1;
DATA
--------------------
01-JAN-2016
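Because the function returns 1 or 0 it can be used to filter problem rows out of a data-cleansing operation; a small sketch creating a numeric-only copy of the test data (the T1_NUMERIC table name is an assumption):
CREATE TABLE t1_numeric AS
SELECT TO_NUMBER(data) AS data
FROM t1
WHERE VALIDATE_CONVERSION(data AS NUMBER) = 1;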
Manage SGA and PGA in
Multitenant
Inheritance
Inheritance here means that the value of a particular parameter set
in the root (CDB) is inherited by one or more PDBs. Some parameters
can be changed at the PDB level to override what is inherited from
the CDB. To identify which parameters can be modified for a PDB,
check the ISPDB_MODIFIABLE column in the V$SYSTEM_PARAMETER view. If
ISPDB_MODIFIABLE is TRUE, the parameter inherits its value from the
CDB until it is changed with an ALTER SYSTEM SET command in the PDB.
Now, let's work on changing parameters for a PDB. In order to do
this, you need to navigate into the PDB using the ALTER SESSION SET
CONTAINER command.
SQL> ALTER SESSION SET CONTAINER=bcpdb1;
The following parameters can be set at the PDB level:
- DB_CACHE_SIZE : The minimum buffer cache size for the PDB.
- SHARED_POOL_SIZE : The minimum shared pool size for the PDB.
- PGA_AGGREGATE_LIMIT : The maximum PGA size for the PDB.
- PGA_AGGREGATE_TARGET : The target PGA size for the PDB.
- SGA_MIN_SIZE : The minimum SGA size for the PDB.
- SGA_TARGET : The maximum SGA size for the PDB.
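A short sketch of setting a couple of these values for a PDB, assuming the bcpdb1 PDB used above; the sizes are purely illustrative and are subject to the limits described below:
ALTER SESSION SET CONTAINER = bcpdb1;

ALTER SYSTEM SET sga_target = 1G SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_limit = 2G SCOPE = BOTH;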
There are a number of restrictions regarding what values can be used,
which are explained in the documentation.
To summarize:
- The NONCDB_COMPATIBLE parameter is set to FALSE in the root container.
- The MEMORY_TARGET parameter is unset or set to "0" in the root container.
- The individual parameters have a variety of maximum limits to prevent
you from over-allocating memory within the PDB and the instance
generally. If you attempt to set an incorrect value an error will be
produced.
Prior to Oracle 12c R1 there was no option to limit and control the
PGA size. Although you set a target size with the
PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase
or reduce the size of the PGA dynamically based on the workload and
requirements.
In 12c, with automatic PGA management in use, you can set a hard
limit on the PGA with the new PGA_AGGREGATE_LIMIT parameter, avoiding
excessive PGA usage.
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; -- disables the hard limit
Important note: when the PGA limit is exceeded, Oracle automatically
terminates or aborts the session or process that holds the most
untenable PGA memory.
Monitoring Memory Usage for PDBs
Oracle now provides views to monitor the resource (CPU, I/O,
parallel execution, memory) usage of PDBs. Each view contains
similar information, but for different retention periods.
V$RSRCPDBMETRIC : A single row per PDB, holding the last of the 1
minute samples.
V$RSRCPDBMETRIC_HISTORY : 61 rows per PDB, holding the last 60
minutes worth of samples from the V$RSRCPDBMETRIC view.
V$RSRC_PDB : Cumulative statistics since the CDB resource plan was
set.
DBA_HIST_RSRC_PDB_METRIC : AWR snapshots, retained based on the AWR
retention period. Access to the AWR views requires additional
licensing, so be warned.
The following queries are examples of their usage.
CONN / AS SYSDBA
SET LINESIZE 150
COLUMN pdb_name FORMAT A10
COLUMN begin_time FORMAT A26
COLUMN end_time FORMAT A26
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YYYY HH24:MI:SS';
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD-MON-YYYY
HH24:MI:SS.FF';
-- Last sample per PDB.
SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM v$rsrcpdbmetric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
ORDER BY p.pdb_name;
-- Last hours samples for PDB1
SELECT r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM v$rsrcpdbmetric_history r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;
-- Check you are licensed to do this before trying!
-- All AWR snapshot information for PDB1.
SELECT r.snap_id,
r.con_id,
p.pdb_name,
r.begin_time,
r.end_time,
r.sga_bytes,
r.pga_bytes,
r.buffer_cache_bytes,
r.shared_pool_bytes
FROM dba_hist_rsrc_pdb_metric r,
cdb_pdbs p
WHERE r.con_id = p.con_id
AND p.pdb_name = 'PDB1'
ORDER BY r.begin_time;
TABLE
Operator with Locally Defined Types in PL/SQL
Setup
The examples in this article follow the same pattern. They populate
a collection, then immediately select from the collection using the
TABLE operator and print out the contents. The code merely
demonstrates that the local collection is accessible using the
TABLE operator.
CREATE TABLE DEPT (
DEPTNO NUMBER(2) CONSTRAINT PK_DEPT PRIMARY KEY,
DNAME VARCHAR2(14),
LOC VARCHAR2(13)
) ;
CREATE TABLE EMP (
EMPNO NUMBER(4) CONSTRAINT PK_EMP PRIMARY KEY,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7,2),
COMM NUMBER(7,2),
DEPTNO NUMBER(2) CONSTRAINT FK_DEPTNO REFERENCES
DEPT
);
INSERT INTO DEPT VALUES (10,'ACCOUNTING','NEW YORK');
INSERT INTO DEPT VALUES (20,'RESEARCH','DALLAS');
INSERT INTO DEPT VALUES (30,'SALES','CHICAGO');
INSERT INTO DEPT VALUES (40,'OPERATIONS','BOSTON');
INSERT INTO EMP VALUES
(7369,'SMITH','CLERK',7902,to_date('17-12-1980','dd-mm-yyyy'),800,NULL,20);
INSERT INTO EMP VALUES
(7499,'ALLEN','SALESMAN',7698,to_date('20-2-1981','dd-mm-yyyy'),1600,300,30);
INSERT INTO EMP VALUES
(7521,'WARD','SALESMAN',7698,to_date('22-2-1981','dd-mm-yyyy'),1250,500,30);
INSERT INTO EMP VALUES
(7566,'JONES','MANAGER',7839,to_date('2-4-1981','dd-mm-yyyy'),2975,NULL,20);
INSERT INTO EMP VALUES
(7654,'MARTIN','SALESMAN',7698,to_date('28-9-1981','dd-mm-yyyy'),1250,1400,30);
INSERT INTO EMP VALUES
(7698,'BLAKE','MANAGER',7839,to_date('1-5-1981','dd-mm-yyyy'),2850,NULL,30);
INSERT INTO EMP VALUES
(7782,'CLARK','MANAGER',7839,to_date('9-6-1981','dd-mm-yyyy'),2450,NULL,10);
INSERT INTO EMP VALUES
(7788,'SCOTT','ANALYST',7566,to_date('13-JUL-87','dd-mm-rr')-85,3000,NULL,20);
INSERT INTO EMP VALUES
(7839,'KING','PRESIDENT',NULL,to_date('17-11-1981','dd-mm-yyyy'),5000,NULL,10);
INSERT INTO EMP VALUES
(7844,'TURNER','SALESMAN',7698,to_date('8-9-1981','dd-mm-yyyy'),1500,0,30);
INSERT INTO EMP VALUES
(7876,'ADAMS','CLERK',7788,to_date('13-JUL-87',
'dd-mm-rr')-51,1100,NULL,20);
INSERT INTO EMP VALUES
(7900,'JAMES','CLERK',7698,to_date('3-12-1981','dd-mm-yyyy'),950,NULL,30);
INSERT INTO EMP VALUES
(7902,'FORD','ANALYST',7566,to_date('3-12-1981','dd-mm-yyyy'),3000,NULL,20);
INSERT INTO EMP VALUES
(7934,'MILLER','CLERK',7782,to_date('23-1-1982','dd-mm-yyyy'),1300,NULL,10);
COMMIT;
Associative Array
Example
This example uses an
associative array based on a %ROWTYPE
defined
in the package specification.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_tab IS TABLE OF emp%ROWTYPE
INDEX BY BINARY_INTEGER;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT *
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.empno || ' : ' || cur_rec.ename);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782 : CLARK
7839 : KING
7934 : MILLER
PL/SQL procedure successfully completed.
SQL>
Nested Table Example
This example uses a
nested table based on a %ROWTYPE
defined in the
package specification.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_tab IS TABLE OF emp%ROWTYPE;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT *
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.empno || ' : ' || cur_rec.ename);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782 : CLARK
7839 : KING
7934 : MILLER
PL/SQL procedure successfully completed.
SQL>
Varray Example
This example uses a
varray based on a %ROWTYPE
defined in the
package specification.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_tab IS VARRAY(10) OF emp%ROWTYPE;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT *
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.empno || ' : ' || cur_rec.ename);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782 : CLARK
7839 : KING
7934 : MILLER
PL/SQL procedure successfully completed.
SQL>
Supported Types
The previous examples all use a %ROWTYPE of a schema table for the
collection type. It is also possible to use %TYPE, a scalar and a
local record type.
The following example uses an associative array based on a %TYPE.
Notice the column is referenced as COLUMN_VALUE.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_tab IS TABLE OF emp.empno%TYPE
INDEX BY BINARY_INTEGER;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT empno
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.column_value);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782
7839
7934
PL/SQL procedure successfully completed.
SQL>
The following example uses an associative array based on a scalar.
Notice the column is referenced as COLUMN_VALUE.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_tab IS TABLE OF NUMBER
INDEX BY BINARY_INTEGER;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT empno
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.column_value);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782
7839
7934
PL/SQL procedure successfully completed.
SQL>
The following example uses an associative array based on a local
record type.
CREATE OR REPLACE PACKAGE test_api AS
TYPE t_row IS RECORD (
empno NUMBER(4),
ename VARCHAR2(10)
);
TYPE t_tab IS TABLE OF t_row
INDEX BY BINARY_INTEGER;
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT empno, ename
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.empno || ' : ' || cur_rec.ename);
END LOOP;
END;
END;
/
SET SERVEROUTPUT ON
EXEC test_api.test1;
Loop Through Collection
7782 : CLARK
7839 : KING
7934 : MILLER
PL/SQL procedure successfully completed.
SQL>
Location of Type
Definition
For this new functionality to work, the type must be defined in the
package specification. The examples below show how alternative
locations fail.
The following example
moves the type definition to the package body, which results in a
compilation failure.
CREATE OR REPLACE PACKAGE test_api AS
PROCEDURE test1;
END;
/
CREATE OR REPLACE PACKAGE BODY test_api AS
TYPE t_tab IS TABLE OF emp%ROWTYPE
INDEX BY BINARY_INTEGER;
PROCEDURE test1 IS
l_tab1 t_tab;
BEGIN
SELECT *
BULK COLLECT INTO l_tab1
FROM emp
WHERE deptno = 10;
DBMS_OUTPUT.put_line('Loop Through Collection');
FOR cur_rec IN (SELECT *
FROM TABLE(l_tab1))
LOOP
DBMS_OUTPUT.put_line(cur_rec.empno || ' : ' || cur_rec.ename);
END LOOP;
END;
END;
/
Warning: Package Body created with compilation errors.
SQL> show errors
Errors for PACKAGE BODY TEST_API:
LINE/COL ERROR
-------- -----------------------------------------------------------------
15/21 PL/SQL: SQL Statement ignored
16/28 PL/SQL: ORA-22905: cannot access rows from a non-nested table
item
16/34 PLS-00382: expression is of wrong type
18/7 PL/SQL: Statement ignored
18/28 PLS-00364: loop index variable 'CUR_REC' use is invalid
Partitioned
External Tables
Setup
SET MARKUP CSV ON QUOTE ON
SET TRIMSPOOL ON LINESIZE 1000 FEEDBACK OFF PAGESIZE 0
SPOOL C:\temp\gbr1.txt
SELECT 'GBR',
object_id,
owner,
object_name
FROM dba_objects
WHERE object_id <= 2000
AND rownum <= 1000;
SPOOL OFF
SPOOL C:\temp\gbr2.txt
SELECT 'GBR',
object_id,
owner,
object_name
FROM dba_objects
WHERE object_id BETWEEN 2000 AND 3999
AND rownum <= 1000;
SPOOL OFF
SPOOL C:\temp\ire1.txt
SELECT 'IRE',
object_id,
owner,
object_name
FROM dba_objects
WHERE object_id <= 2000
AND rownum <= 1000;
SPOOL OFF
SPOOL C:\temp\ire2.txt
SELECT 'IRE',
object_id,
owner,
object_name
FROM dba_objects
WHERE object_id BETWEEN 2000 AND 3999
AND rownum <= 1000;
SPOOL OFF
SET MARKUP CSV OFF
SET TRIMSPOOL ON LINESIZE 1000 FEEDBACK OFF PAGESIZE 14
CREATE OR REPLACE DIRECTORY tmp_dir1 AS 'C:\temp\';
CREATE OR REPLACE DIRECTORY tmp_dir2 AS 'C:\temp\';
Partitioned External
Tables
The following example
creates a list partitioned external table based on the CSV files
we created previously. Each partition can have a separate location
definition, which can optionally include a directory definition.
If a location is not defined the partition will be seen as empty.
DROP TABLE part_tab_ext;
CREATE TABLE part_tab_ext (
country_code VARCHAR2(3),
object_id NUMBER,
owner VARCHAR2(128),
object_name VARCHAR2(128)
)
ORGANIZATION EXTERNAL (
TYPE oracle_loader
DEFAULT DIRECTORY tmp_dir1
ACCESS PARAMETERS (
FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
MISSING FIELD VALUES ARE NULL (
country_code,
object_id,
owner,
object_name
)
)
)
REJECT LIMIT UNLIMITED
PARTITION BY LIST (country_code) (
PARTITION part_gbr VALUES ('GBR') LOCATION ('gbr1.txt', 'gbr2.txt'),
PARTITION part_usa VALUES ('IRE') DEFAULT DIRECTORY tmp_dir2 LOCATION ('ire1.txt', 'ire2.txt'),
PARTITION part_others VALUES ('XXX')
);
Querying the external
table shows the files are being read correctly.
SELECT country_code, COUNT(*)
FROM part_tab_ext
GROUP BY country_code
ORDER BY country_code;
COU COUNT(*)
--- ----------
GBR 2000
IRE 2000
If we gather statistics
we can see how the rows were spread between the partitions.
EXEC DBMS_STATS.gather_table_stats(USER, 'part_tab_ext');
SET LINESIZE 120
COLUMN table_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN high_value FORMAT A20
SELECT table_name,
partition_name,
high_value,
num_rows
FROM user_tab_partitions
WHERE table_name = 'PART_TAB_EXT'
ORDER BY 1, 2;
TABLE_NAME PARTITION_NAME HIGH_VALUE NUM_ROWS
-------------------- -------------------- -------------------- ----------
PART_TAB_EXT PART_GBR 'GBR' 2000
PART_TAB_EXT PART_OTHERS 'XXX' 0
PART_TAB_EXT PART_USA 'IRE' 2000
Subpartitioned External
Tables
The following example
creates a list-range subpartitioned external table based on the
CSV files we created previously. Each subpartition can have a
separate location definition, which can optionally include a
directory definition at partition or subpartition level.
DROP TABLE subpart_tab_ext;
CREATE TABLE subpart_tab_ext (
country_code VARCHAR2(3),
object_id NUMBER,
owner VARCHAR2(128),
object_name VARCHAR2(128)
)
ORGANIZATION EXTERNAL (
TYPE oracle_loader
DEFAULT DIRECTORY tmp_dir1
ACCESS PARAMETERS (
FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
MISSING FIELD VALUES ARE NULL (
country_code,
object_id,
owner,
object_name
)
)
)
REJECT LIMIT UNLIMITED
PARTITION BY LIST (country_code)
SUBPARTITION BY RANGE (object_id) (
PARTITION part_gbr VALUES ('GBR') (
SUBPARTITION subpart_gbr_le_2000 VALUES LESS THAN (2000) LOCATION ('gbr1.txt'),
SUBPARTITION subpart_gbr_gt_2000 VALUES LESS THAN (MAXVALUE) DEFAULT DIRECTORY tmp_dir2 LOCATION ('gbr2.txt')
),
PARTITION part_ire VALUES ('IRE') DEFAULT DIRECTORY tmp_dir2 (
SUBPARTITION subpart_ire_le_2000 VALUES LESS THAN (2000) LOCATION ('ire1.txt'),
SUBPARTITION subpart_ire_gt_2000 VALUES LESS THAN (MAXVALUE) LOCATION ('ire2.txt')
)
);
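Querying the subpartitioned external table works the same way as before; a quick check, which should mirror the earlier counts if the same CSV files are used:
SELECT country_code, COUNT(*)
FROM subpart_tab_ext
GROUP BY country_code
ORDER BY country_code;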
Improvements
over Partition Operations
Automatic List Partitioning
Automatic list partitioning was introduced in Oracle Database 12c
Release 2 (12.2) to solve the problem of how to handle new distinct
values of the list partitioning key.
Your company currently deals with customers from USA, UK and Ireland
and you want to partition your orders table based on the country.
You achieve this as follows.
CREATE TABLE orders (
  id           NUMBER,
  country_code VARCHAR2(5),
  customer_id  NUMBER,
  order_date   DATE,
  order_total  NUMBER(8,2),
  CONSTRAINT orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code)
(
  PARTITION part_usa VALUES ('USA'),
  PARTITION part_uk_and_ireland VALUES ('GBR', 'IRL')
);

INSERT INTO orders VALUES (1, 'USA', 10, SYSDATE, 10200.93);
INSERT INTO orders VALUES (2, 'USA', 11, SYSDATE, 948.22);
INSERT INTO orders VALUES (3, 'GBR', 22, SYSDATE, 300.83);
INSERT INTO orders VALUES (4, 'IRL', 43, SYSDATE, 978.43);
Automatic list partitioning creates a partition for any new distinct
value of the list partitioning key. We can enable automatic list
partitioning on the existing table using the ALTER TABLE command.
ALTER TABLE orders SET PARTITIONING AUTOMATIC;
Alternatively we could recreate the table using the AUTOMATIC
keyword.
CREATE TABLE orders (
  id           NUMBER,
  country_code VARCHAR2(5),
  customer_id  NUMBER,
  order_date   DATE,
  order_total  NUMBER(8,2),
  CONSTRAINT orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code) AUTOMATIC
(
  PARTITION part_usa VALUES ('USA'),
  PARTITION part_uk_and_ireland VALUES ('GBR', 'IRL')
);

INSERT INTO orders VALUES (1, 'USA', 10, SYSDATE, 10200.93);
INSERT INTO orders VALUES (2, 'USA', 11, SYSDATE, 948.22);
INSERT INTO orders VALUES (3, 'GBR', 22, SYSDATE, 300.83);
INSERT INTO orders VALUES (4, 'IRL', 43, SYSDATE, 978.43);
Without automatic list partitioning, an insert for a country with no
matching partition would fail because the partition key does not map
to any existing partition. Once automatic list partitioning is
enabled we can successfully insert such a new order.
INSERT INTO orders VALUES (5, 'BGR', 96, SYSDATE, 2178.43);
We can see a new partition has been created to hold the new order by
querying the {CDB|DBA|ALL|USER}_TAB_PARTITIONS view.
EXEC DBMS_STATS.gather_table_stats(USER, 'orders', cascade => TRUE);
SET LINESIZE 100
COLUMN table_name FORMAT A30
COLUMN partition_name FORMAT A30
COLUMN high_value FORMAT A15
SELECT table_name,
partition_name,
high_value,
num_rows
FROM user_tab_partitions
ORDER BY 1, 2;
TABLE_NAME                     PARTITION_NAME                 HIGH_VALUE        NUM_ROWS
------------------------------ ------------------------------ --------------- ----------
ORDERS                         PART_UK_AND_IRELAND            'GBR', 'IRL'             2
ORDERS                         PART_USA                       'USA'                    2
ORDERS                         SYS_P549                       'BGR'                    1
We can see automatic list partitioning is enabled by querying the
AUTOLIST column in the {CDB|DBA|ALL|USER}_PART_TABLES view.
COLUMN table_name FORMAT A30
COLUMN autolist FORMAT A8
SELECT table_name,
autolist
FROM user_part_tables;
TABLE_NAME                     AUTOLIST
------------------------------ --------
ORDERS                         YES
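If needed, automatic list partitioning can be switched off again, after which new distinct values would once more be rejected; a minimal sketch:
ALTER TABLE orders SET PARTITIONING MANUAL;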
More Changes
- Partial Indexes for Partitioned Tables
- Partition Maintenance Operations on Multiple Partitions
- Other Partitioning Enhancements (12.1)
- Create Table for Exchange With a Partitioned Table in Oracle Database 12c Release 2 (12.2)
- Filtered Partition Maintenance Operations in Oracle Database 12c Release 2 (12.2)
- Multi-Column List Partitioning in Oracle Database 12c Release 2 (12.2)
- Online SPLIT PARTITION and SPLIT SUBPARTITION in Oracle Database 12c Release 2 (12.2)
- All Partitioning Enhancements in 12.2
- Read-Only Partitions and Subpartitions in Oracle Database 12c Release 2 (12.2)
More New Features
- SQL*Loader Enhancements in Oracle Database 12c Release 1 (12.1)
- Table Enhancements in Oracle Database 12c Release 1 (12.1)
- Temporal Validity in Oracle Database 12c Release 1 (12.1)
- Code Based Access Control (CBAC) : Granting Roles to PL/SQL Program Units in Oracle Database 12c Release 1 (12.1)
- Control Invoker Rights Privileges for PL/SQL Code in Oracle Database 12c Release 1 (12.1) (INHERIT [ANY] PRIVILEGES)
- Control Invoker Rights Privileges in Views in Oracle Database 12c Release 1 (12.1) (BEQUEATH CURRENT_USER)
- Data Guard Physical Standby Setup Using the Data Guard Broker in Oracle Database 12c Release 1
- DBMS_COMPRESSION Enhancements in Oracle Database 12c Release 1 (12.1.0.2)
- DBMS_CREDENTIAL : Persist Database and OS Credentials in Oracle Database 12c Release 1 (12.1)
- Fine-Grained Access to Network Services Enhancements in Oracle Database 12c Release 1
- Implicit Statement Results in Oracle Database 12c Release 1 (12.1) (DBMS_SQL.RETURN_RESULT and DBMS_SQL.GET_NEXT_RESULT)
- In-Memory Column Store in Oracle Database 12c Release 1 (12.1.0.2)
- LATERAL Inline Views, CROSS APPLY and OUTER APPLY Joins in Oracle Database 12c Release 1 (12.1)
- PL/SQL White Lists Using the ACCESSIBLE BY Clause in Oracle Database 12c Release 1 (12.1)
- Heat Map, Information Lifecycle Management (ILM) and Automatic Data Optimization (ADO) in Oracle Database 12c Release 2 (12.2)
- Index Usage Tracking (DBA_INDEX_USAGE, V$INDEX_USAGE_INFO) in Oracle Database 12c Release 2 (12.2)
- LISTAGG Function Enhancements in Oracle Database 12c Release 2 (12.2)
- Override External Table Parameters From a Query in Oracle Database 12c Release 2 (12.2)
- SecureFiles : Large Object (LOB) Enhancements in Oracle Database 12c (12.1 and 12.2)