Oracle 8i New Features

- Service Registration for Net 8i
- Database Triggers
- NOCOPY Hint
- Online Index Rebuild and Coalesce
- Bitmap Indexes
- Reverse Key Indexes
- Function-Based Indexes
- Multiple Archival Redo Logs
- Log Miner
- Fine-Grained Access
- Transportable Tablespaces
- Partitioned Tables
- Partitioned Indexes
- Actions over Partitioned Objects
- Cluster Tables
- Index Organized Tables (IOT)
- Temporary Tables
- Move Tables between Tablespaces
- Drop Columns
- Random Generator
- Detect Block Corruption
- Export with the WHERE Clause
- Simplified Dynamic SQL
- Locally-Managed Tablespaces
- Limited Random Scan
- Profiler
- Bulk Binds (FORALL and BULK COLLECT)
- Autonomous Transactions
- Materialized Views
- Auto Conversion of Literals in Bind Variables
- Using CUBE and ROLLUP and Grouping Extensions

Service Registration for Net 8i
In Oracle8i there is a mechanism to register one or more instances with the TNS listener. In Oracle 7.x and 8.0.x the instances served by a listener are listed in the listener.ora file. This was a static listing: the listener had to be restarted whenever a new service was added, and because the listener never checked the state of an instance, it could end up serving instances that were down.
With this new feature, you no longer have to set the SID_LIST_LISTENER parameter explicitly in the listener.ora file (except for external procedures).
PMON dynamically registers a database service with the listener, but this takes at least 30 seconds to 1 minute to take place. If you have set SID_LIST_LISTENER in the listener.ora, the listener will initially show a single service handler, and when PMON eventually registers the database service dynamically it will create another service handler.
Oracle 8i performs dynamic SID registration by using the INITsid.ORA parameters INSTANCE_NAME and SERVICE_NAMES.
If these parameters are set, the instance will automatically try to register the value given in SERVICE_NAMES with the listener. If no LOCAL_LISTENER is defined in the init.ora, it attempts to contact the listener at the default port of 1521.
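A minimal init.ora sketch of this setup (the instance name, service name, and alias are illustrative assumptions, not taken from the original):

```text
# init.ora fragment -- dynamic registration parameters
instance_name  = ORCL
service_names  = orcl.world
# Optional: name a TNS alias so PMON registers with a non-default listener
local_listener = listener_a
```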

In Oracle 8 a typical listener configuration file would look like the following (host, port, SID and path values are illustrative):

LISTENER =
  (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = ORCL)
      (ORACLE_HOME = /u01/app/oracle/product/8.0.5)))

However, in Oracle 8i the SID_LIST_LISTENER may not include the list of instances that the listener has to serve, i.e. we would simply have the address part of the listener as below:

LISTENER =
  (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))

The issue is that with dynamic registration the instance registers only with the listener named by LOCAL_LISTENER or, if that parameter is not set, with a listener on the default port of 1521; services may therefore fail to appear under a listener on a non-default address.

To correct the issue you have two choices:

1) Do not use dynamic SID registration. Comment out INSTANCE_NAME and SERVICE_NAMES in the init.ora and bounce the instance. At this point you are using regular SID values the same way you did before 8i, together with the regular listener.ora method.

2) Include LOCAL_LISTENER in the init.ora to point the registration at the correct listener. For example (host and port are illustrative), in the init.ora:

    local_listener = listener_a

and in the tnsnames.ora:

    listener_a =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1522))

If both static service registration information (via the listener.ora file) and dynamic service registration information (registered by the instance) exist, "DEDICATED SERVER" entries appear twice for the same instance.

You can also try removing/commenting out the following parameters from the $ORACLE_HOME/network/admin/sqlnet.ora file:

Access the following notes by entering the note number in the search box above and clicking Search -
Note: 69546.1 - Oracle8i - Instance Registration to Listeners --> the best one, with a step-by-step explanation
Note: 76531.1 - Service Name Usage in Net8i
Note: 76636.1 - Service Registration in Net 8i with examples

Database Triggers
Prior to Oracle 8i, database triggers could be applied to tables only; essentially, they were table triggers. Oracle 8i introduces eight new database triggers, which extend beyond this limitation: users can now trap DATABASE and DDL events. Table 1 lists these triggers.

Table 1. New Oracle8i Triggers

Trigger Event                 Executes Before/After   Trigger Description
----------------------------  ----------------------  ------------------------------------------------------
STARTUP (database event)      After                   Executes when the database is started
SHUTDOWN (database event)     Before                  Executes when the database is shut down
SERVERERROR (database event)  After                   Executes when a server-side error occurs
LOGON (database event)        After                   Executes when a session connects to the database
LOGOFF (database event)       Before                  Executes when a session disconnects from the database
CREATE (DDL event)            Before or after         Executes when a database object is created; can apply to the schema or to the entire database
ALTER (DDL event)             Before or after         Executes when a database object is altered; can apply to the schema or to the entire database
DROP (DDL event)              Before or after         Executes when a database object is dropped; can apply to the schema or to the entire database

In addition, the trigger can also be defined at a DATABASE or SCHEMA level. This is supplied with the ON clause.
  ... keyword DATABASE  specifies that the trigger is being defined on the entire database.
  ... keyword SCHEMA specifies that the trigger is being defined on the current schema.

 Before a trigger can be created, the user SYS must run the SQL script "DBMSSTDX.SQL". The CREATE TRIGGER / CREATE ANY TRIGGER privilege is required to create a trigger in your own / any schema.
 In addition to the preceding privileges, to create a trigger on DATABASE one must have the ADMINISTER DATABASE TRIGGER system privilege.

Examples of the ON clause (trigger names are illustrative, bodies omitted):

   CREATE OR REPLACE TRIGGER startup_trigger
   AFTER STARTUP ON DATABASE ...

   CREATE OR REPLACE TRIGGER audit_create_trigger
   AFTER CREATE ON SCHEMA ...

Event attribute functions
 One can obtain certain event-specific attributes when a trigger is fired. These attributes can be used as standalone functions. Some examples ...
 1.  How to get the name of the login user
   ATTRIBUTE : login_user
   EXAMPLE   :  SELECT sys.login_user FROM dual;

 2. Check if a particular error has occurred.
   ATTRIBUTE : is_servererror
   EXAMPLE   :  IF (is_servererror(942))
                THEN INSERT INTO event_table
                VALUES ('table not present!!');
                END IF;
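As a sketch, the attribute function above can be used inside a SERVERERROR trigger; the trigger name and the single-column event_table are assumptions, not from the original:

```sql
CREATE OR REPLACE TRIGGER log_942_trigger
AFTER SERVERERROR ON DATABASE          -- only AFTER is valid for SERVERERROR
BEGIN
   -- is_servererror returns TRUE if the given error is on the error stack
   IF (is_servererror(942)) THEN
      INSERT INTO event_table VALUES ('table not present!!');
   END IF;
END;
/
```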

    The following example creates a logon statistics table and a LOGON and LOGOFF database trigger to capture the time when a user connects/disconnects to/from the database.

CREATE TABLE session_logon_statistics
(user_logged VARCHAR2(30),
start_time DATE,
end_time DATE);

CREATE OR REPLACE TRIGGER logon_log_trigger
AFTER LOGON ON DATABASE
BEGIN
   INSERT INTO session_logon_statistics (user_logged, start_time)
   VALUES (USER, SYSDATE);
END;
/

CREATE OR REPLACE TRIGGER logoff_log_trigger
BEFORE LOGOFF ON DATABASE
BEGIN
   UPDATE session_logon_statistics SET end_time = SYSDATE
    WHERE user_logged = USER
      AND end_time IS NULL;
END;
/

The following script retrieves the information from the session_logon_statistics table.
SELECT user_logged,
       TO_CHAR(start_time, 'MM/DD/YYYY HH24:MI:SS') "START TIME",
       TO_CHAR(end_time, 'MM/DD/YYYY HH24:MI:SS') "END TIME"
  FROM session_logon_statistics
 ORDER BY user_logged, start_time;

 - DDL triggers are supported only for the following database objects:   cluster, function, index, package, procedure, role, sequence, synonym, table, tablespace, trigger, type, view, and user.
 - Only AFTER triggers are relevant for LOGON, STARTUP, and SERVERERROR.
 - Only BEFORE triggers are relevant for LOGOFF and SHUTDOWN.
 - AFTER STARTUP and BEFORE SHUTDOWN triggers apply only to DATABASE.

Online Index Rebuilds and Coalesce
Example:   alter index my_indx REBUILD ONLINE;
Example:   alter index my_indx COALESCE;

Bitmap Indexes
- A bitmap index works well on columns with low cardinality (marital status, account status)
- The search is much faster!!
- Benefits data warehousing applications with large amounts of data and ad hoc queries.
- Not good for OLTP applications with large numbers of transactions modifying data.
    CREATE BITMAP INDEX employee_idx ON employee (status);

Reverse Key Indexes
Some problems can occur when the values for a PK are assigned chronologically in sequence: inserts and deletes concentrate on the same B*tree leaf blocks, making the index work hard.
The answer is the reverse-key index, which reverses the bytes of each key so that different index leaf blocks are used even when the key sequence is chronological.

Creating a reverse key index.
CREATE INDEX employee_idx ON  employee (ID) REVERSE;

To make an existing index a reverse key index use:
    ALTER INDEX employee_idx REBUILD REVERSE;

Function Based Indexes
Dramatically improves the speed of queries whose predicates contain formulas or function calls.

The following index:
  CREATE INDEX idx ON table_1 (a + b * (c - 1), a, b);
 then Oracle can use it when processing queries such as this:
  SELECT a FROM table_1 WHERE a + b * (c - 1) < 100;

 The following index:
  CREATE INDEX uppercase_idx ON emp (UPPER(empname));
 can facilitate processing queries such as this:
  SELECT * FROM emp WHERE UPPER(empname) = 'RICHARD';

1. The following initialization parameters must be defined:
 - COMPATIBLE must be set to 8.1.0.0.0 or a greater value
 - QUERY_REWRITE_ENABLED must be set to TRUE
 - QUERY_REWRITE_INTEGRITY must be set to TRUSTED

2. The user creating the index must have the QUERY REWRITE privilege (or GLOBAL QUERY REWRITE to create such indexes in other schemas).

3. To use a function-based index:
        - Cost Based Optimizer (CBO) must be used to optimize the query (e.g. the table is analyzed after the index is created, or a hint is supplied)
        - The query must be guaranteed not to need any NULL values from the indexed expression, since NULL values are not stored in indexes.

4. Rule-based optimization never uses function-based indexes. Thus the optimizer_mode should NOT be set to RULE. If set to CHOOSE then the table must be analyzed.

5. If any user-written function is used in a function-based index, it must have been declared with the DETERMINISTIC keyword to indicate that the function will always return the same output return value for any given set of input argument values, now and in the future.

Multiple Remote Archival of Redo Logs
A database can specify up to 5 archival destinations with the new (dynamic) initialization parameters LOG_ARCHIVE_DEST_n.
Each destination may be a local location (physical disk) or a remote standby database.

LOG_ARCHIVE_DEST_1="location=/u01/arch813_1 mandatory"
LOG_ARCHIVE_DEST_2="location=/u02/arch813_2 optional reopen=60"
LOG_ARCHIVE_DEST_3="service=uksn117_jb reopen=60"

Log Miner
In 8i, Oracle introduces the DBMS_LOGMNR and DBMS_LOGMNR_D packages.
- View contents of the transaction log files
- Very useful for auditing transactions
- Can be used to 'undo' erroneous transactions
- Detects LOGICAL corruption

First, ensure that UTL_FILE_DIR in your init.ora is set to a directory on which you have read/write permissions. Next, create a data dictionary file. This allows the column and table names to be recognizable. As SYS, perform the following:

exec dbms_logmnr_d.build(
   dictionary_filename => 'dictionary.ora',
   dictionary_location => 'pathname');

Now you're ready to specify the logs to analyze. You may read a log for a time range or an SCN range. We'll take the simplest case and read the whole log. The dbms_logmnr.new option replaces previously specified log files with the one given; you can include additional logs by specifying the log name with the dbms_logmnr.addfile option. Still as SYS, execute the following:

exec dbms_logmnr.add_logfile(
   logfilename => 'pathname/redofilename',
   options => dbms_logmnr.new);

Finally, perform the following:

exec dbms_logmnr.start_logmnr(
   dictfilename => 'pathname/dictionary.ora');

Now you can query the view v$logmnr_contents. It will show the original and the undo SQL. Issuing exec dbms_logmnr.end_logmnr will empty the v$ view.
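A sketch of querying the results (the segment name is an assumption; the columns are standard V$LOGMNR_CONTENTS columns):

```sql
-- Show the redo and undo SQL generated against the EMP table
SELECT scn, username, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP';
```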

Fine-grained access control
- Provides row-level security
- Users can be restricted to, for example, only the rows created by them
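Row-level security is implemented through security policies in the DBMS_RLS package. A minimal sketch (schema, table, column, function and policy names are assumptions) restricting users to rows they created:

```sql
-- Policy function: returns the predicate appended to every query on the table
CREATE OR REPLACE FUNCTION owner_policy (schema_name VARCHAR2, table_name VARCHAR2)
RETURN VARCHAR2 IS
BEGIN
   RETURN 'created_by = USER';
END;
/

BEGIN
   DBMS_RLS.ADD_POLICY(
      object_schema   => 'SCOTT',
      object_name     => 'ORDERS',
      policy_name     => 'orders_owner_policy',
      function_schema => 'SCOTT',
      policy_function => 'owner_policy');
END;
/
```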

Transportable Tablespaces
1- Convert tablespace to read-only
    alter tablespace &tablespace_name read only;
2- Export the tablespace metadata
    Connect as "sys as sysdba"
    exp file=tablespEXP.dmp transport_tablespace=y tablespaces=xxxxx
3- Copy the tablespace's datafiles (and the export dump file) to the target database
4- Convert the source tablespace back to read-write if required
    alter tablespace &tablespace_name read write;
5- Import the tablespace metadata into the new database
    imp file=tablespEXP.dmp transport_tablespace=y datafiles='xxxxx.dbf'

Partitioning Tables
A table can be divided into partitions based on ranges of key values. Each partition can be operated on independently, and queries can access just the relevant portion of the table.
    - Administration and availability benefits (reduced downtime)
    - Performance improvements (disk striping)


- Range Partitioning
 create table dept (deptno number(2), dept_name varchar2(30))
  PARTITION BY RANGE (deptno)
   (PARTITION d1 VALUES LESS THAN (10),          -- partition names and limits are illustrative
    PARTITION d2 VALUES LESS THAN (20),
    PARTITION d3 VALUES LESS THAN (MAXVALUE));

Hash type partitioning
This type of partitioning allows a better mastering of the distribution of the data in the different partitions. These partitions  are defined with the help of a hashing function offered by Oracle. This function is then applied to a list of columns.

CREATE TABLE emp_hpart(
  empno NUMBER(4) NOT NULL,
  ename VARCHAR2(10),
  sal NUMBER(7,2))
PARTITION BY HASH (empno)            -- the hash key column is illustrative
PARTITIONS 4
STORE IN (data01, data02, data03, data04);

This partitioning method is recommended in the following cases:
    - Impossible to have a criteria for the distribution of data
    - Difficult to anticipate the quantity of data for a given partition
    - Hard  to balance the load in each partition

This partitioning method requires that the number of partitions allocated be a power of  2 in order to ensure a uniform distribution of data in the different partitions.
Specific storage clauses cannot be specified for each partition. These clauses are inherited from  the tablespace in which the partitions reside.
The administration of these HASH type partitions is similar to that of the RANGE ones except for DROP, SPLIT and MERGE.  The ADD and COALESCE commands are used to add and drop partitions.

Composite type partitioning
This method combines the two partitioning methods described previously: the first level is based on the RANGE method and the second, finer-grained level is based on a HASHing function. The benefit obtained is the following:
    The first level sorts the data on a logical basis while the second balances its distribution among the sub-partitions.

CREATE TABLE emp_composite(
   empno NUMBER(4) NOT NULL,
   ename VARCHAR2(10),
   sal NUMBER(6))
PARTITION BY RANGE (empno)
SUBPARTITION BY HASH (sal) SUBPARTITIONS 4
  (PARTITION p1 VALUES LESS THAN (1000),         -- range limits are illustrative
   PARTITION p2 VALUES LESS THAN (2000),
   PARTITION p3 VALUES LESS THAN (MAXVALUE));

This statement defines a RANGE type first-level partitioning based on a NUMBER type column (EMPNO). The second level, HASH type partitioning, also based on a NUMBER type column (SAL), divides each of these partitions into 4 sub-partitions.
At  the RANGE level the partitioning is a logical sort . The data is then physically stored in the sub-partitions(HASH). The storage at this level can be visualised  in the DBA_TAB_SUBPARTITIONS view.

Partitioning Indexes
Oracle offers 2 main types of indexes: Local (easier to use) and Global
Local indexes are equi-partitioned with the table they are based on. That means that for each index entry residing in partition A there is a row of the table which also resides in that same partition A.
A local index linked to a table has the following properties:
    - Same number of partitions/sub-partitions
    - Same partition limits
    - Same partitioning keys

The index partitioning is automatically maintained along with that of the associated table when partitions/subpartitions are added, dropped, split, or merged.
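For example, a LOCAL index needs no partition specification of its own, since it inherits the table's partitioning; a sketch against the partitioned test table used later in this section (index name assumed):

```sql
-- One index partition is created automatically per table partition
CREATE INDEX test_local_idx ON test (empno) LOCAL;
```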

Global partitioned indexes are more flexible than local partitioned indexes, in that they are not constrained by the partition key on the underlying table. Global partitioned indexes are defined by their own ranges and degrees that do not need to match that of the related table. When creating this type of index it is necessary to define, in a precise manner, the types and limits for the partitions.
While this does offer a great deal of flexibility, it is not without cost. Since global partitioned indexes are not directly related to any single table partition, operations that affect any partition in the table can render all global partitioned indexes unusable. This applies in particular to partition maintenance operations such as DROP, TRUNCATE, MOVE, SPLIT and EXCHANGE.

Three index types are supported in the two groups mentioned above:
    - Local prefixed
    - Local non-prefixed
    - Global prefixed

Actions on partitions.
Consider the following Test table:

CREATE TABLE test(
  empno NUMBER(4) NOT NULL,
  ename VARCHAR2(10),
  sal NUMBER(7,2))
PARTITION BY RANGE (empno)
 (partition emp_p1 VALUES LESS THAN (50),
  partition emp_p2 VALUES LESS THAN (100),
  partition emp_p3 VALUES LESS THAN (150),
  partition emp_p4 VALUES LESS THAN (200)) ;

1. Moving partitions.
Allows the transfer of a table partition from one tablespace to another. The status of the index partitions tied to this partition becomes 'unusable'. In the case of a global index, the whole index will have to be rebuilt.

ALTER TABLE test MOVE PARTITION emp_p1 tablespace tbs1 ;
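After the move, the unusable index partitions can be rebuilt; a sketch (index and partition names are assumptions):

```sql
-- Local index: only the partition tied to the moved table partition is affected
ALTER INDEX test_local_idx REBUILD PARTITION emp_p1;
-- Global partitioned index: every partition must be rebuilt
ALTER INDEX test_global_idx REBUILD PARTITION g1;
ALTER INDEX test_global_idx REBUILD PARTITION g2;
```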

2. Adding partitions.
Allows you to add an extra partition beyond the last partition, as long as the upper limit of the last partition is not MAXVALUE; should it be, adding a partition is impossible. SPLIT enables you to add intermediate partitions: it cuts an existing partition in two, making two distinct partitions. Applied to the last partition, SPLIT allows you to add an extra partition beyond the upper limit.
The following command adds an extra partition at the end of the TEST table:

ALTER TABLE test ADD PARTITION emp_p5 values less than (250) ;

In the case of indexes, partitions can only be added to global indexes. The upper limit of a global index always being MAXVALUE implies that SPLIT is the only possible command.

3. Splitting partitions.
Separates the contents of a partition into two distinct partitions. The associated index partitions, global and local, become 'unusable'.

ALTER TABLE test SPLIT PARTITION emp_p1 AT (25)
   INTO (PARTITION emp_p11, PARTITION emp_p12) ;

In the previous example, the emp_p1 partition is divided into two distinct partitions, emp_p11 and emp_p12, redefined on the ranges 1-24 and 25-49. This functionality also works on global indexes.
The SPLIT command can't be used on HASH type partitions, ADD has to be used instead.
Consider the emp_hpart table described previously, distributed on 4 partitions P1,P2,P3,P4:

PARTITION_NAME  PARTITION_POSITION TABLESPACE_NAME
--------------- ------------------ ---------------
P1                               1 DATA01
P2                               2 DATA02
P3                               3 DATA03
P4                               4 DATA04

--------------- --------------- --------------- ----------- ------------
EMP_HPART       P1              DATA01                    8           21
EMP_HPART       P2              DATA02                    9           21
EMP_HPART       P3              DATA03                   10            2
EMP_HPART       P4              DATA04                   11            2

The contents of the first partition P1 is the following:

EMPNO      ENAME                SAL
---------- ---------- ----------
       303 TEST            10302
       306 TEST            10305
       310 TEST            10309
       325 TEST            10324
       326 TEST            10325
       332 TEST            10331
       338 TEST            10337
       341 TEST            10340
       343 TEST            10342
       347 TEST            10346
       348 TEST            10347
       349 TEST            10348
       350 TEST            10349
       360 TEST            10359
       361 TEST            10360
       365 TEST            10364
       368 TEST            10367
       370 TEST            10369
       373 TEST            10372
       380 TEST            10379
       385 TEST            10384
       386 TEST            10385
       396 TEST            10395
       400 TEST            10399
       401 TEST            10400

25 rows selected.

The following command will add a new partition P5 and redistribute the rows of partition P1 between P1 and P5:

ALTER TABLE emp_hpart ADD PARTITION P5 TABLESPACE users ;

--------------- --------------- --------------- ----------- ------------
EMP_HPART       P1              DATA01                    8            2
EMP_HPART       P2              DATA02                    9           21
EMP_HPART       P3              DATA03                   10            2
EMP_HPART       P4              DATA04                   11            2
EMP_HPART       P5              USERS                     5         1292

SQL>select * from emp_hpart partition(P1);

     EMPNO ENAME             SAL
---------- ---------- ----------
       306 TEST            10305
       310 TEST            10309
       325 TEST            10324
       326 TEST            10325
       338 TEST            10337
       341 TEST            10340
       343 TEST            10342
       347 TEST            10346
       350 TEST            10349
       370 TEST            10369
       373 TEST            10372
       385 TEST            10384
       400 TEST            10399
       401 TEST            10400

14 rows selected.

SQL> select * from emp_hpart partition(P5);

     EMPNO ENAME             SAL
---------- ---------- ----------
       303 TEST            10302
       332 TEST            10331
       348 TEST            10347
       349 TEST            10348
       360 TEST            10359
       361 TEST            10360
       365 TEST            10364
       368 TEST            10367
       380 TEST            10379
       386 TEST            10385
       396 TEST            10395

11 rows selected.

Notice that partition P1 has been erased and recreated.

4. Dropping partitions.
Allows the removal of a table or global index partition. The DROP of a table partition causes the status of all of the partitions of a global index to become 'unusable'; a complete rebuild of the index is needed to reset this status.

ALTER TABLE test DROP PARTITION emp_p2 ;

This functionality is not available for HASH partitioned tables, which have to use the COALESCE command instead.

5. Truncating partitions.
Discards all the rows of a table partition while the allocated storage may be preserved. This option is not available for indexes. Local index partitions are automatically kept up to date by Oracle. In the case of global indexes, the status of all of the partitions becomes 'unusable'.

ALTER TABLE test TRUNCATE PARTITION emp_p1 ;

6. Merging partitions.
Enables adjacent partitions to be coalesced into one. The resulting partition inherits the upper boundary of the higher of the two partitions.

ALTER TABLE test MERGE PARTITIONS emp_p1, emp_p2 INTO PARTITION emp_p1emp_p2 ;

This option is not available on HASH type partitioned tables where COALESCE has to be  used instead.

7. Partition exchanges
Allows the conversion of a non-partitioned table into a partition, as well as the reverse: the conversion of a partition into a non-partitioned table. This option is particularly useful for migrating V7 partitioned views into table partitions. Consider the following partitioned view based on 4 tables: less50, less100, less150, less200.

CREATE VIEW test_view AS
        SELECT * FROM less50
        UNION ALL
        SELECT * FROM less100
        UNION ALL
        SELECT * FROM less150
        UNION ALL
        SELECT * FROM less200 ;

An empty partitioned table needs to be created, on the same schema as the underlying tables of the partitioned views:

CREATE TABLE new_test(
  empno NUMBER(4) NOT NULL,
  ename VARCHAR2(10),
  sal NUMBER(7,2))
PARTITION BY RANGE (empno)
 (partition emp_p1 VALUES LESS THAN (50),
  partition emp_p2 VALUES LESS THAN (100),
  partition emp_p3 VALUES LESS THAN (150),
  partition emp_p4 VALUES LESS THAN (200));

Then each underlying table of the view needs to be transferred into the corresponding partition of the partitioned table:

ALTER TABLE new_test EXCHANGE PARTITION emp_p1 WITH TABLE less50 ;

(and similarly for the other partitions and tables)

This takes a very short time as the updates only take place in the Oracle dictionary; there is no physical movement of the segments. The structures of the tables to swap, partitioned as well as non-partitioned, must be identical in terms of column types, sizes and number of columns.

8. Coalescing partitions.
New 8.1.5 functionality dedicated to HASH type partitions. It tells the Oracle kernel to choose a HASH partition, redistribute its rows over the remaining partitions, and then drop the selected partition. Look again at the emp_hpart table:

--------------- ------------------ ---------------
P1                               1 DATA01
P2                               2 DATA02
P3                               3 DATA03
P4                               4 DATA04

--------------- --------------- --------------- ----------- ------------
EMP_HPART       P1              DATA01                    8           21
EMP_HPART       P2              DATA02                    9           21
EMP_HPART       P3              DATA03                   10            2
EMP_HPART       P4              DATA04                   11            2

ALTER TABLE emp_hpart COALESCE PARTITION ;


--------------- --------------- --------------- ----------- ------------
EMP_HPART       P1              DATA01                    8           21
EMP_HPART       P2              DATA02                    9            2
EMP_HPART       P3              DATA03                   10            2

Notice that Oracle has redistributed the rows of partitions P2 and P4 into a recreated P2 partition and dropped partition P4.

Cluster Tables
A cluster is a group of tables that share the same data blocks because they share common columns and are often used together.

Index Organized Tables (IOT)
- They are similar to a table, but all the data is stored in the index
- IOTs are very useful whenever data is always accessed via the PK index (unique values).
- Tables not accessed via the PK value are not good candidates for an IOT; neither are tables whose PK values are updated, or tables that have frequent insertions
- Cannot include LOB or Object Type columns
- You can create secondary Indexes (8.1.6)

CREATE TABLE emp_iot(                 -- table name is illustrative
  empno NUMBER(4) PRIMARY KEY,
  ename VARCHAR2(10),
  sal NUMBER(7,2))
ORGANIZATION INDEX
INCLUDING empno OVERFLOW
PARTITION BY RANGE (empno)
 (partition emp_p1 VALUES LESS THAN (50) TABLESPACE data01,
  partition emp_p2 VALUES LESS THAN (100) TABLESPACE data02,
  partition emp_p3 VALUES LESS THAN (150) TABLESPACE data03,
  partition emp_p4 VALUES LESS THAN (MAXVALUE) tablespace data04);

Each IOT partition generates 2 segments: a data (index) partition and an overflow partition. The overflow partition contains the column data from the ename column onwards, including that very column.

Temporary Tables
- Excellent for Temporary results
- Indexes, triggers and views can be created on these tables (the table MUST be empty)
- Data exists only for the duration of a transaction or session
- Only visible within a single transaction or session
- No redo log info is generated
- Created in a user's temporary tablespaces
- A TRUNCATE command truncates data in its own session only

  CREATE GLOBAL TEMPORARY TABLE tablename ( columns )
     ON COMMIT DELETE ROWS;      -- or ON COMMIT PRESERVE ROWS

  The default option is to delete rows on commit (transaction-specific data); ON COMMIT PRESERVE ROWS keeps the rows for the duration of the session.
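A short sketch (table and column names are assumptions) showing session-specific temporary data:

```sql
CREATE GLOBAL TEMPORARY TABLE temp_results (
   id     NUMBER,
   result VARCHAR2(100))
ON COMMIT PRESERVE ROWS;    -- rows survive commits, vanish at session end

INSERT INTO temp_results VALUES (1, 'first pass');
COMMIT;
-- The row is still visible in this session, but no other session can see it
SELECT * FROM temp_results;
```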

Move a table to a new tablespace

Prior to Oracle 8i, the two most common methods of doing this were to EXPORT, DROP and then IMPORT the table back into the database, or to issue a CREATE TABLE ... AS SELECT to create a new copy of the table, then drop the original and rename the new one to the name of the original table.

With the ALTER TABLE <tablename> MOVE syntax it is now possible to reorganize the table while ensuring that security information is retained, and the table remains available for queries while the MOVE is being performed!! If your table was created on the ACCOUNTING tablespace, for example, and you wanted to move it to the new ACCOUNTING2 tablespace you just created on a new set of disks, you can issue the following command:
    ALTER TABLE Customers MOVE TABLESPACE Accounting2;
If you simply wanted to re-organize the table and keep it on the same tablespace, you can do that too by issuing the command:
    ALTER TABLE Customers MOVE TABLESPACE Accounting;
In either case, Oracle will rebuild the table in the target tablespace, allow users to query the existing table while the move is taking place, and keep the existing permissions assigned to the table. The table's indexes, however, are marked UNUSABLE by the move and must be rebuilt (ALTER INDEX ... REBUILD) after the operation completes. Optionally, you can also specify storage clause characteristics to change the extent sizes and other parameters (with the exception of FREELISTS or FREELIST GROUPS) if you find that the previous storage parameters are no longer compatible with the table access and storage patterns.
Of course, there is a downside as well. Because the old copy of the table is kept until the move completes to allow queries to take place, this means that you will need sufficient disk space for both the old and new versions of the table - typically double what was required before.
Note that the MOVE option of the ALTER TABLE syntax only applies to non-partitioned or index-organized tables. If you want to move partitions or sub-partitions to another tablespace, or reorganize them, you need to issue the ALTER TABLE ... MOVE PARTITION or ALTER TABLE ... MOVE SUBPARTITION commands for each partition or sub-partition.

Drop table columns

    ALTER TABLE employees SET UNUSED COLUMN fax_number;  --> flagged as UNUSED and physically dropped later
    ALTER TABLE employees DROP COLUMN fax_number;
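Columns flagged as UNUSED can be removed physically later, typically during off-hours; a sketch:

```sql
-- Physically remove all columns previously marked UNUSED
ALTER TABLE employees DROP UNUSED COLUMNS;
-- List tables that still have unused columns
SELECT * FROM dba_unused_col_tabs;
```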

Random Number Generator (DBMS_RANDOM)

The DBMS_RANDOM package is implemented by accessing a random-number generator internal to Oracle. This package is not installed by default and must be created by executing the ?/rdbms/admin/dbmsrand.sql script.
This package contains the programs INITIALIZE, SEED, RANDOM and TERMINATE.
For more random number generation options, the cryptographic toolkit package  (DBMS_CRYPTO_TOOLKIT) should be used.  The DBMS_CRYPTO_TOOLKIT package is created by dbmsoctk.sql.
The first step when using DBMS_RANDOM to generate random numbers is to initialize the generator using the INITIALIZE procedure. The parameter for this procedure identifies the seed number to be used as a basis for the generator. Using at least a five-digit value is recommended to ensure a well-randomized pattern. This seed value can be changed programmatically anytime thereafter by calling the SEED procedure directly.

1. Installation (if necessary): As user SYS, execute the scripts dbmsoctk.sql, prvtoctk.plb, dbmsrand.sql

2. Initialize random number generator with seed.  The seed should be at least 5 digits according to the package comments to ensure randomness.
        SQL> exec dbms_random.initialize (12345);
        PL/SQL procedure successfully completed.

3. Generate random number.
   Example 1:
       SQL> set serveroutput on
       SQL> declare
                random_number binary_integer;
            begin
                random_number := dbms_random.random;
                dbms_output.put_line(random_number);
            end;
            /
       PL/SQL procedure successfully completed.

   Example 2: (from Oracle Cryptographic Toolkit Programmer's Guide Rel 2.0.4)
        DECLARE
            i BINARY_INTEGER;
        BEGIN
            i := dbms_random.random;
            INSERT INTO some_table VALUES(i);
        END;
        /

4. Terminate the random number generator to release memory.
        SQL> exec dbms_random.terminate;
        PL/SQL procedure successfully completed.

5. Change the random number generator seed after initial initialization.
        SQL> exec dbms_random.seed (12346);
        PL/SQL procedure successfully completed.

6. Warnings
If dbms_random.terminate has been executed after a successful dbms_random.random call, then subsequent executions of dbms_random.random may return neither a result nor an error. If dbms_random.random is run before executing dbms_random.initialize, an exception will be raised, and "ORA-06510: PL/SQL: unhandled user-defined exception" results if the exception is not handled in the PL/SQL block. It is not currently possible to use the return value of the random function directly in a SQL statement.
         SQL> exec dbms_random.initialize (12345);
         PL/SQL procedure successfully completed.

        SQL> insert into x values(dbms_random.random);
             insert into x values(dbms_random.random)
             ERROR at line 1:
             ORA-06571: Function RANDOM does not guarantee not to update the database.

Generating random numbers - a simple method?
select trunc((to_number(substr(to_char(to_number(to_char(sysdate,'sssss'))
                     /86399),-7,7))/10000000)*32767) random
  from dual;

Block Repair Facility (DBMS_REPAIR)

In Oracle 8.1, Oracle has made it easier for DBAs to find and repair block corruption without losing the entire corrupted database object. The DBMS_REPAIR package detects lost data, while making the object usable even when block corruption occurs. While some data (the corrupted data) may be lost, this feature enables you to skip the corrupted blocks without errors, thus making recovery from block corruption much easier. Additional reporting features are provided with this package. Prior to the introduction of this package, the recovery of an object with corrupted blocks was a manually intensive process.
The dbmsrepr.sql script creates the DBMS_REPAIR package and is run by default when the database is created. By default, DBMS_REPAIR is accessible only to the SYS user.
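A minimal sketch of how the package might be used (the schema and object names here are illustrative; exact defaults are in the package documentation):

```sql
-- Create the repair table that records corrupt blocks.
BEGIN
   DBMS_REPAIR.ADMIN_TABLES(
       table_name => 'REPAIR_TABLE',
       table_type => DBMS_REPAIR.REPAIR_TABLE,
       action     => DBMS_REPAIR.CREATE_ACTION);
END;
/

-- Check a table for corrupt blocks.
DECLARE
   num_corrupt BINARY_INTEGER;
BEGIN
   DBMS_REPAIR.CHECK_OBJECT(
       schema_name   => 'SCOTT',
       object_name   => 'EMP',
       corrupt_count => num_corrupt);
   DBMS_OUTPUT.PUT_LINE('Corrupt blocks found: ' || num_corrupt);
END;
/

-- Allow subsequent scans to skip the corrupt blocks instead of raising ORA-1578.
EXEC DBMS_REPAIR.SKIP_CORRUPT_BLOCKS('SCOTT', 'EMP');
```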

Auto Conversion of Literals in Bind Variables
If in v$sqlarea you notice that many statements are identical except for the literal values in their WHERE clauses, you can set CURSOR_SHARING=FORCE to make Oracle share the cursors. (8.1.6)
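For example (shown at session level; the parameter can also be set in init.ora or with ALTER SYSTEM):

```sql
ALTER SESSION SET CURSOR_SHARING = FORCE;

-- These two statements can now share one cursor: Oracle replaces the
-- literals with system-generated bind variables (e.g. :SYS_B_0).
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;
```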

Export with the WHERE clause
8i introduced the use of the WHERE clause as an export parameter; it is appended to the SELECT statement that export uses to retrieve the table data.
Use the QUERY parameter of the export utility to pass the WHERE clause. As the following examples show, using the QUERY parameter can get messy because of how different operating systems treat quotes on the command line.
If you want to export multiple tables and use the QUERY parameter in the export, the WHERE clause must be applicable to all of the tables.

UNIX syntax:
- Example: 
Export table data from DEMO.EMP that only includes FRAN like last names
exp demo/demo query=\"where lname like \'FRAN%\'\" tables=emp  file=exp.dmp log=exp.log

In the Utilities Guide we are warned that we may need to use the UNIX escape character ('\') to make some UNIX reserved characters literal rather than having them interpreted by the shell.  The example given is:
query=\"where JOB = \'SALESMAN\' and salary \< 1600\"

Note that the ", ', and < are all UNIX reserved characters and thus need to be escaped.

However, an important addendum is that this only applies when giving export options at the command line.  If you are using a parfile these characters need not be escaped; in fact, escaping them will cause an error.  The same option in a parfile would look like:
query="where JOB = 'SALESMAN' and salary < 1600"

Note the need for the double quotes, as this is still a single UNIX argument and spaces are not otherwise allowed.  The double quotes are necessary to tell UNIX this is a string literal.

If there are no single quotes in the query string you may use single quotes or double quotes to denote the literal.  Some examples of valid arguments, given a table FOO(BAR NUMBER):
From the command line:
query=\'where bar = 2\'
query=\"where bar = 2\"
(note that here we may use single or double quotes)

query=\'where bar \< 2\' (note the extra \ to allow the use of the < as a literal)

These three in a parfile would simply be (respectively):
query='where bar=2'
query="where bar=2"
query='where bar < 2'

WINDOWS NT / 2000 and NETWARE syntax:
Under WindowsNT or Windows 2000, as well as NetWare, the following syntax applies. Example:
EXP demo/demo  tables=test  file=exp1.dmp  query="""where enum >12345"""

NOTE:  There should be NO spaces surrounding or between the double quotes in the statement above.
If you are going to use a 'PARFILE' parameter instead, then use the following syntax in the parfile:

query="where enum > 12345"

Since the QUERY parameter contains blanks, Windows NT or Windows 2000 requires that the entire WHERE clause be placed in three double quotes (""") to escape the blanks and reserved characters that the operating system would otherwise interpret.  If using the 'PARFILE' parameter, you will only need a single set of double quotes (").


Simplified Native Dynamic SQL
Native Dynamic SQL is one of the most significant new PL/SQL features in Oracle 8i. Prior to Oracle 8i, DDL (Data Definition Language) statements and dynamic DML (Data Manipulation Language) statements could only be executed using the DBMS_SQL package. Oracle 8i introduces the EXECUTE IMMEDIATE command, which provides a much simpler way of creating and executing DDL statements, dynamic SQL, and dynamic PL/SQL. Only one statement is needed, rather than the multiple statements that the DBMS_SQL package required.

Native Dynamic SQL allows developers to write much more compact code, which is easier to read and maintain. This code executes faster because it's embedded into the PL/SQL engine; it's not external to PL/SQL DBMS_SQL package calls. According to Oracle, Native Dynamic SQL provides 30 to 60 percent performance improvements over DBMS_SQL.

The EXECUTE IMMEDIATE command accepts any SQL statement except SELECT ones that retrieve multiple rows. It is important to note that it could accept bind variables with a USING clause, thereby improving code performance significantly by allowing you to reuse parsed SQL statements from an SGA (System Global Area) pool.

In my example, Customer data is divided by region for performance reasons and stored in four tables with an identical structure: Customer_West, Customer_East, Customer_North, and Customer_South. The Create_Customer stored procedure is used to insert records with customer data in one of the tables determined by the first parameter passed to the procedure.

Here is a simplified version of the Create_Customer stored procedure written before Oracle 8i with the DBMS_SQL package:

CREATE OR REPLACE PROCEDURE Create_Customer
(Table_Name VARCHAR2,
Customer_ID INTEGER,
Customer_Lastname VARCHAR2,
Customer_Firstname VARCHAR2,
Customer_Address VARCHAR2,
Customer_City VARCHAR2,
Customer_State VARCHAR2,
Customer_Zip VARCHAR2,
Customer_Phone VARCHAR2) IS

csr_handle INTEGER;
cnt_rows INTEGER;
cSQL_Statement VARCHAR2(200);

BEGIN
   cSQL_Statement := 'INSERT INTO ' || LTRIM(RTRIM(Table_Name)) || ' VALUES(:Id, :Last, :First, :Address, :City, :State, :Zip, :Phone)';

   -- Step 1: open cursor.
   csr_handle := DBMS_SQL.OPEN_CURSOR;

   -- Step 2: parse cursor.
   DBMS_SQL.PARSE(csr_handle, cSQL_Statement, DBMS_SQL.NATIVE);

   -- Step 3: bind values to the variables.
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':Id',      Customer_ID);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':Last',    Customer_Lastname);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':First',   Customer_Firstname);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':Address', Customer_Address);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':City',    Customer_City);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':State',   Customer_State);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':Zip',     Customer_Zip);
   DBMS_SQL.BIND_VARIABLE(csr_handle, ':Phone',   Customer_Phone);

   -- Step 4: execute cursor.
   cnt_rows := DBMS_SQL.EXECUTE(csr_handle);

   -- Step 5: close cursor.
   DBMS_SQL.CLOSE_CURSOR(csr_handle);

EXCEPTION
   WHEN OTHERS THEN
      IF DBMS_SQL.IS_OPEN(csr_handle) THEN
         DBMS_SQL.CLOSE_CURSOR(csr_handle);
      END IF;
      RAISE_APPLICATION_ERROR(-20001,
          'Error in procedure Create_Customer.');
END Create_Customer;

Here is the same procedure rewritten in Oracle 8i with Native Dynamic SQL:

CREATE OR REPLACE PROCEDURE Create_Customer
(Table_Name VARCHAR2,
Customer_ID INTEGER,
Customer_Lastname VARCHAR2,
Customer_Firstname VARCHAR2,
Customer_Address VARCHAR2,
Customer_City VARCHAR2,
Customer_State VARCHAR2,
Customer_Zip VARCHAR2,
Customer_Phone VARCHAR2) IS

cSQL_Statement VARCHAR2(200);

BEGIN
   cSQL_Statement := 'INSERT INTO ' || LTRIM(RTRIM(Table_Name)) || ' VALUES(:Id, :Last, :First, :Address, :City, :State, :Zip, :Phone)';

   EXECUTE IMMEDIATE cSQL_Statement
      USING Customer_ID, Customer_Lastname, Customer_Firstname,
      Customer_Address, Customer_City, Customer_State, Customer_Zip,
      Customer_Phone;

EXCEPTION
   WHEN OTHERS THEN
      RAISE_APPLICATION_ERROR(-20001,
         'Error in procedure Create_Customer.');
END Create_Customer;

As you can see, this syntax is much more concise and easy to read.

Example1: Executing a single-row select with a bind variable, fetching into a single define variable:

declare
  str varchar2(200);
  val varchar2(20);
  ret varchar2(20);
begin
  str := 'select msg from msg where msg = :b1';
  val := 'Hello';
  execute immediate str into ret using val;
  dbms_output.put_line('Value fetched from table: '||ret);
end;
/

Value fetched from table: Hello

Example2: Executing a delete with a returning clause. In this case the corresponding bind variable must have the OUT mode specified. Note also the use of sql%rowcount to find the number of rows deleted. All sql% attributes valid for static SQL are also valid for native dynamic SQL:

declare
  str varchar2(200);
  val varchar2(20);
  ret varchar2(20);
begin
  val := 'Bye';
  str := 'delete from msg where msg = :b1 returning msg into :b2';
  execute immediate str using val, out ret;
  dbms_output.put_line('Deleted '||sql%rowcount||
                       ' row(s) with value: '||ret);
end;
/

Deleted 1 row(s) with value: Bye

From 8.1.6 the EXECUTE IMMEDIATE statement can also be written as:

  execute immediate str using val returning into ret;

Locally-Managed Tablespaces

Prior to Oracle8i (8.1), there was only one method Oracle used to manage space within a tablespace. This method is called "dictionary-managed".
As of Oracle8.1, there is an additional method which can be used to manage space within a tablespace. This new method is called "locally-managed", and space is managed within the tablespace's own data files. Note that the default method is still "dictionary-managed", unless you specify otherwise.
This is accomplished by means of an internal bitmap within the tablespace that tracks the free and used space. Each bit represents one or more blocks; bits are set or cleared to indicate whether each extent is allocated or free.

When creating the tablespace, include the EXTENT MANAGEMENT LOCAL clause in the CREATE TABLESPACE statement. (To specify a locally-managed SYSTEM tablespace, you can specify this clause in the CREATE DATABASE statement. This will require that all of the rollback segments also be locally-managed, and of a uniform size).
Along with this clause, you must also choose one of the two extent allocation methods:
AUTOALLOCATE: Oracle calculates appropriate INITIAL and NEXT sizes for the tablespace's extents (in units of 64K, 1M, 16M, or 64M, as determined by the requested object size). The goal is to best utilize the space while simplifying the management of the extents. Each bit in the tablespace's bitmap represents a 64K extent, regardless of the calculated size. (This is the default.)
UNIFORM SIZE n: This allows you to specify a constant extent size of n bytes, regardless of what is specified by the INITIAL and NEXT clauses of the objects to be created in the tablespace. If you do not specify the size clause, it defaults to 1 MB. Each bit in the tablespace's bitmap represents an extent of the specified size. Note that tablespaces using the old dictionary-based space management can coexist with ones using the new locally-managed space management functionality.
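A minimal sketch of the two forms (the tablespace names and file paths are illustrative):

```sql
-- System-managed extent sizes (the AUTOALLOCATE default):
CREATE TABLESPACE data_auto
    DATAFILE '/u01/oradata/data_auto01.dbf' SIZE 100M
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- A constant 1M extent size for every object in the tablespace:
CREATE TABLESPACE data_uni
    DATAFILE '/u01/oradata/data_uni01.dbf' SIZE 100M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```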
Benefits of locally-managed tablespaces:
* Improved concurrency and speed of space operations, because space allocations and deallocations predominantly modify locally managed resources (bitmaps stored in the data file headers) rather than requiring centrally managed resources such as enqueues
* Improved performance, because recursive operations that are sometimes required during dictionary-managed space allocation are eliminated
* Readable standby databases are allowed, because locally managed temporary tablespaces (used, for example, for sorts) do not generate any undo or redo
* Simplified space allocation: when the AUTOALLOCATE clause is specified, an appropriate extent size is automatically selected
* Reduced user reliance on the data dictionary, because the necessary information is stored in file headers and bitmap blocks

Prerequisites for migration:
  - Migrate all other dictionary-managed tablespaces to locally managed before migrating the SYSTEM tablespace
  - The COMPATIBLE parameter must be higher than

  1. The tablespace cannot be SYSTEM, offline, already locally managed, or temporary (ORA-3245).
       --> to migrate a temporary tablespace to local management, drop and recreate the tablespace with the CREATE TEMPORARY TABLESPACE command.
  2. The tablespace must exist (ORA-959).

  3. There must be enough free space in the tablespace to accommodate the bitmaps.
  If there are any problems with the locally-managed scheme, users can still return to dictionary management using the following procedure:
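Both directions are handled by the DBMS_SPACE_ADMIN package; a sketch (the tablespace name is illustrative):

```sql
-- Convert an existing dictionary-managed tablespace to locally managed:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');

-- Return a locally-managed tablespace to dictionary management:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_FROM_LOCAL('USERS');
```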


Profiler

An Oracle 8i PL/SQL programmer develops a large number of packages, so the need to identify and solve performance problems becomes critical. Oracle 8i provides a profiler that analyzes PL/SQL code and locates bottlenecks. The DBMS_PROFILER package is an API that provides the capability to gather statistics related to the execution of a PL/SQL program and identify performance problems. The DBMS_PROFILER package is not created by default with the database;
you have to generate it with Oracle's ProfLoad.sql script. This script has to be executed by the SYS user and access has to be granted to PUBLIC. This script calls two other scripts: Dbmspbp.sql and Prvtpbp.plb. There are three steps in a typical profiler session:

  1.  Start the profiler to collect statistics for a session.
  2.  Execute PL/SQL program unit to generate profiling data.
  3.  Stop the profiler.
To use the DBMS_PROFILER package, database tables and other structures must be created through the Proftab.sql script. There are three tables to be created: plsql_profiler_runs, plsql_profiler_units, and plsql_profiler_data.
For further information related to the DBMS_PROFILER package, please refer  to the Oracle8i Server On-Line Documentation or the Dbmspbp.sql script comments.
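The three steps above might look like this in SQL*Plus (the procedure name my_proc is hypothetical):

```sql
-- 1. Start collecting statistics for this session.
EXEC DBMS_PROFILER.START_PROFILER('test run 1');

-- 2. Execute the PL/SQL program unit to be profiled.
EXEC my_proc;

-- 3. Stop the profiler; the collected data is flushed to the profiler tables.
EXEC DBMS_PROFILER.STOP_PROFILER;

-- Query the collected statistics (tables created by Proftab.sql).
SELECT u.unit_name, d.line#, d.total_occur, d.total_time
  FROM plsql_profiler_units u, plsql_profiler_data d
 WHERE u.runid = d.runid
   AND u.unit_number = d.unit_number;
```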

  Bulk Binds
  Oracle 8i introduces new PL/SQL FORALL and BULK COLLECT statements to support bulk binds.

The FORALL statement is specifically used for processing DML (INSERT, DELETE, and UPDATE) statements to improve performance by reducing the overhead of SQL processing. The PL/SQL interpreter executes all procedural statements. However, all SQL statements in the PL/SQL block are sent to the SQL engine, which parses and executes them. The PL/SQL-to-SQL context switch adds some overhead, which can become significant when SQL statements are nested in loops. Look at this code:

-- Assume PL/SQL tables Invoice_Id_Tab, Invoice_Date_Tab,
-- and Invoice_Amount_Tab have been populated
-- with invoice information (Invoice ID, Date and Amount)
-- that needs to be inserted to the Invoice table.

FOR nCount IN 1..10000 LOOP
     INSERT INTO Invoices (Invoice_Id, Invoice_Date, Invoice_Amount)
          VALUES (Invoice_Id_Tab(nCount), Invoice_Date_Tab(nCount), Invoice_Amount_Tab(nCount));
END LOOP;

Here 10,000 rows are created in the Invoice table and each INSERT is done individually, which results in 10,000 context switches between PL/SQL and SQL engines. Oracle 8i eliminates all but one of these context switches by passing the entire PL/SQL table to the SQL engine in one step:

FORALL nCount IN 1..10000
     INSERT INTO Invoices (Invoice_Id, Invoice_Date, Invoice_Amount)
         VALUES (Invoice_Id_Tab(nCount), Invoice_Date_Tab(nCount), Invoice_Amount_Tab(nCount));

The FORALL statement has a structure similar to FOR LOOP with a range. However, it doesn't contain an END LOOP statement and it cannot contain any statements other than the index, lower and upper bound, and actual SQL statement (which refers to the index). The range specified by lower and upper bounds (in my example, it's 1 to 10,000) must be contiguous and all the elements within that range must exist, otherwise an ORA-22160 exception will be raised.
The FORALL clause is used for DML statements. The equivalent statement for a bulk fetch is the BULK COLLECT clause, which can be used as a part of SELECT INTO, FETCH INTO, or RETURNING INTO clauses:

SELECT Invoice_Id, Invoice_Date, Invoice_Amount
     BULK COLLECT INTO Invoice_Id_Tab, Invoice_Date_Tab, Invoice_Amount_Tab
     FROM Invoice;

The BULK COLLECT clause can be used for both explicit (FETCH INTO) and implicit (SELECT INTO) cursors. It fetches the data into the collection (PL/SQL table, varray) starting with element 1 and overwrites subsequent elements until it retrieves all the rows. If the collection is a varray, it has to be declared large enough to accommodate all fetched rows, otherwise an ORA-22160 exception will be raised. The bulk bind features allow users to increase performance and reduce the overhead of SQL processing by operating on multiple rows in a single DML statement. The entire collection - not just one collection element at a time - is passed back and forth between the PL/SQL and SQL engines. According to Oracle, internal benchmark tests showed a 30 percent performance improvement as a result of using these new features.
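Putting BULK COLLECT and FORALL together, a minimal sketch (the Invoices_Archive table is hypothetical; the other names follow the examples above):

```sql
DECLARE
   TYPE t_id_tab     IS TABLE OF Invoices.Invoice_Id%TYPE;
   TYPE t_date_tab   IS TABLE OF Invoices.Invoice_Date%TYPE;
   TYPE t_amount_tab IS TABLE OF Invoices.Invoice_Amount%TYPE;
   Invoice_Id_Tab     t_id_tab;
   Invoice_Date_Tab   t_date_tab;
   Invoice_Amount_Tab t_amount_tab;
BEGIN
   -- One context switch to fetch all rows.
   SELECT Invoice_Id, Invoice_Date, Invoice_Amount
     BULK COLLECT INTO Invoice_Id_Tab, Invoice_Date_Tab, Invoice_Amount_Tab
     FROM Invoices;

   -- One context switch to insert them into a (hypothetical) archive table.
   FORALL i IN 1..Invoice_Id_Tab.COUNT
      INSERT INTO Invoices_Archive (Invoice_Id, Invoice_Date, Invoice_Amount)
           VALUES (Invoice_Id_Tab(i), Invoice_Date_Tab(i), Invoice_Amount_Tab(i));
END;
/
```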

  Autonomous Transactions

Because Oracle is a transactional database, every INSERT, DELETE, and UPDATE operation takes place in the context of a transaction, which is either committed to the database or rolled back as a whole. Prior to Oracle 8i, there was no way to commit an individual SQL operation separately from the whole transaction. Oracle 8i introduces the AUTONOMOUS_TRANSACTION pragma (a compiler directive), enabling PL/SQL program units to maintain their own transaction states.

An autonomous transaction starts within the context of another transaction, known as a parent transaction, but it is independent of it. This feature allows developers to handle transactions with more ease and finer granularity. Nested transactions can be committed or rolled back without affecting the parent one. Here is an example of a Create_Invoice procedure with AUTONOMOUS_TRANSACTION pragma:

CREATE OR REPLACE PROCEDURE Create_Invoice
     (Inv_Id INTEGER,
      Inv_Amount NUMBER) IS
     PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
     INSERT INTO Invoice (Invoice_Id, Invoice_Date, Invoice_Amount)
          VALUES  (Inv_Id, SYSDATE, Inv_Amount);
     COMMIT;
END Create_Invoice;

The next PL/SQL block tries to create two new rows in the Invoice table: one directly via the INSERT command, and one via the Create_Invoice stored procedure:

BEGIN
     INSERT INTO Invoice (Invoice_Id, Invoice_Date, Invoice_Amount)
          VALUES (1, SYSDATE, 25.50);
     Create_Invoice(2, 30.20);
     ROLLBACK;
END;
/

  As a result, the invoice with ID = 2 for $30.20 was created; the invoice with ID = 1 for $25.50 was not. Without the AUTONOMOUS_TRANSACTION pragma, both rows would have been committed to the database, because the COMMIT command in the Create_Invoice procedure would have applied to the whole transaction.

  Sample Table Scans

This allows the user to perform a limited, random scan of the table to determine sampled information, rather than performing a full table scan.
  SELECT region, AVG(amount)
  FROM sales SAMPLE(5)
  GROUP BY region;
This will perform a random sample of five percent of the rows in the SALES table and return the average of the AMOUNTs found.
A variation on this is to specify that rather than sampling rows, we randomly sample whole blocks at a time. e.g.
  SELECT region, AVG(amount)
  FROM sales SAMPLE BLOCK (5)
  GROUP BY region;
This will sample five percent of all (formatted) blocks in the table, and use all of the rows found in these blocks for the sample.
It should be noted that any query involving SAMPLE will ignore any indexes on the table.


Restrictions:
  1. Sample table scans are restricted to the cost-based optimizer only.
  2. Sample table scans can only be performed on local tables.
  3. This form of access is limited to single-table queries; joins cannot use the SAMPLE syntax and return a parse error.
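Since the cost-based optimizer is required, make sure the table has statistics before using SAMPLE; for example:

```sql
-- Gather statistics so the cost-based optimizer is used for SALES.
ANALYZE TABLE sales COMPUTE STATISTICS;
```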

Materialized views

In simplest terms, this is a summary table that may be updated when the fact table changes. The SQL compiler is aware of the view and will transparently rewrite queries to take advantage. Materialized views are generic objects  that are used to summarize, precompute, replicate, and/or distribute data. Materialized views are used to increase the speed of these types of queries. They improve query performance by precalculating expensive join and aggregation operations on the database prior to execution and storing these results in the database.
The query optimizer can automatically use a materialized view.  It recognizes when an existing materialized view can and should be used to satisfy a request and transparently rewrites the request to use the materialized view (query rewrite).
Materialized views can be partitioned if necessary.
Indexes can be created on materialized views.
The optimizer directs the query to the materialized view (where the data is precomputed) and not to the underlying detail tables. The type of materialized view that is created determines how it can be refreshed and used by query rewrite.
A materialized view can be used to replicate data and is considered a snapshot. The terms "snapshot" and "materialized view" are now synonymous.
Use the "CREATE MATERIALIZED VIEW" statement to create and/or populate a materialized view. Oracle provides packages, DBMS_MVIEW and DBMS_OLAP, to assist in manipulating, designing and evaluating materialized views.
There are two init.ora parameters that must be set to nonzero values: JOB_QUEUE_PROCESSES and JOB_QUEUE_INTERVAL. This allows the SNP background processes to start.
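For example, in init.ora (the values are illustrative):

```
job_queue_processes = 2
job_queue_interval  = 60
```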

Summary of Materialized Views
  - consume storage space
  - must be refreshed when the data in their master tables changes
  - improve the performance of SQL execution with query rewrites
  - are transparent to SQL applications and users
  - can be accessed directly in a SELECT statement
  - may (depending on the types of refresh), be accessed directly in an INSERT, UPDATE, or DELETE statement

The basic syntax is as follows:
        CREATE MATERIALIZED VIEW <name>
        BUILD <build_clause_body>
        REFRESH <refresh_clause_body>
        [ENABLE QUERY REWRITE]
        AS SELECT <select_clause_body>

  CREATE MATERIALIZED VIEW scott.store_dept_sal
    BUILD IMMEDIATE
    REFRESH COMPLETE
    ENABLE QUERY REWRITE
    AS
    SELECT d.dname, SUM(sal) AS tot_sum
        FROM scott.dept d, scott.emp e
        WHERE d.deptno = e.deptno
        GROUP BY d.dname;

The BUILD option
  - The BUILD IMMEDIATE option of the CREATE MATERIALIZED VIEW will populate the materialized view immediately
  - The BUILD DEFERRED option will populate the materialized view later
  - If a user-defined materialized view already exists, you can use the PREBUILT option
  - Unless PREBUILT, the materialized view requires and occupies storage space.
  - If you don't know how much space the materialized view will require, use the DBMS_OLAP.ESTIMATE_SIZE procedure to estimate the number of bytes required to store the materialized view.
  - If you use the BUILD IMMEDIATE clause, the materialized view definition is added to the data dictionary, and then the SELECT expression is evaluated and the results are stored in the materialized view. This may take a while depending on the data accessed in SELECT statement.
  - If you use the BUILD DEFERRED clause, the materialized view is created without data.  It can be populated subsequently using the DBMS_MVIEW.REFRESH procedure.

How to Perform the Refresh
  - There are four options which specify how you want your materialized views to be refreshed: FORCE, COMPLETE, FAST, or NEVER.
  - COMPLETE will truncate all existing data and completely rebuild all data from the base tables used in the original create command of the materialized view.
  - FAST only applies changes made since the last refresh. There are 2 types of fast refresh:
          1) Using materialized view logs - changes to base tables are placed in a log and then applied to the materialized view.
          2) Using ROWID range - a materialized view can be refreshed using fast refresh after a SQL*Loader direct path load, based on the ROWIDs of the new rows.
  - FORCE first determines if fast refresh is possible and applies it if it is; otherwise, it performs a COMPLETE refresh.
  - NEVER suppresses refresh of the materialized view.

Types of Refresh
Along with the 4 refresh types, there are 2 modes used to keep the materialized views synchronized with their base tables. These modes are either a manual or automatic refresh.

Automatic Refresh
 - is performed on a commit - when changes to the base table are committed then the materialized view is updated.
 - specified by time - the refresh is scheduled. In order to use this option, the JOB_QUEUE_PROCESSES parameter must be set for the instance.

Manual Refresh
  These refreshes can be performed by using the DBMS_MVIEW package. This package includes the procedures REFRESH, REFRESH_DEPENDENT, and REFRESH_ALL_MVIEWS.
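For example, to refresh the materialized view created earlier ('C' requests a complete refresh; 'F' would request a fast refresh):

```sql
EXEC DBMS_MVIEW.REFRESH('scott.store_dept_sal', 'C');
```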

Query Rewrite
- Query rewrite translates a SQL statement defined in terms of tables or views into a statement accessing one or more materialized views that are defined on those (detail) tables.
- Query rewrite is available with cost-based optimization.  Oracle optimizes the input query with and without rewrite and selects the least costly alternative.
- Query rewrites are transparent to the end user or application and require no reference to the materialized view in the SQL statement.
- When a materialized view is defined, it will not automatically be used by the query rewrite facility unless the ENABLE QUERY REWRITE clause is specified.

 The ALTER MATERIALIZED VIEW statement can be used to change this value.
  - If BUILD DEFERRED is used when the materialized view is created, it is disabled for use by query rewrite until the first REFRESH, after which it will be automatically enabled, if the ENABLE QUERY REWRITE clause has been specified (as in previous example).
  - Materialized views can be added or dropped without invalidating the SQL in the application code (like indexes).
  - If bind variables are used in a query, the query will NOT be rewritten to use materialized views even if query rewrite is enabled.
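A sketch of letting the optimizer use the materialized view created earlier (rewrite must be enabled for the session, and the view must have been created with ENABLE QUERY REWRITE):

```sql
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;

-- With rewrite enabled, the optimizer can transparently answer this query
-- from scott.store_dept_sal instead of joining the detail tables.
SELECT d.dname, SUM(sal)
    FROM scott.dept d, scott.emp e
    WHERE d.deptno = e.deptno
    GROUP BY d.dname;
```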

Fast Refreshes

  - A materialized view log is a table associated with the master table of a materialized view and is created using the CREATE MATERIALIZED VIEW LOG SQL command.
  - A materialized view log is located in the same schema as the master table (one materialized view log per master table).
  - When changes are made to data in a master table, Oracle stores rows describing those changes in the materialized view log and then uses that log to refresh materialized view(s) based on that master table (a fast refresh).
  - If there is no materialized view log, Oracle will re-execute the materialized view query to refresh the materialized view (a complete refresh).
  - The alert log and trace file should be consulted to check that no errors  have occurred during refreshes.
  - If a refresh fails, the user has to explicitly invoke the refresh procedure using the DBMS_MVIEW package. Until this occurs, the view will not be refreshed automatically at commit time.

In this example, a materialized view has been created which contains aggregates on a single table. Because the materialized view log has been created, the materialized view is fast refreshable. Whenever DML is applied against the fact table, the changes will be reflected in the materialized view when the commit is issued.

    CREATE MATERIALIZED VIEW LOG ON fact
        WITH ROWID (store_key, time_key, dollar_sales, unit_sales)
        INCLUDING NEW VALUES;

    CREATE MATERIALIZED VIEW sum_fact   -- view name assumed for the example
        REFRESH FAST ON COMMIT
        AS
        SELECT f.store_key, f.time_key, COUNT(*) AS count_grp, SUM(f.dollar_sales) AS sum_dollar_sales,
               COUNT(f.dollar_sales) AS count_dollar_sales, SUM(f.unit_sales) AS sum_unit_sales,
               COUNT(f.unit_sales) AS count_unit_sales
          FROM fact f
          GROUP BY f.store_key, f.time_key;

  Using CUBE and ROLLUP Extensions
The use of data warehouses and data marts requiring OLAP tools is increasing. To support this trend, Oracle provides analytical applications with the CUBE and ROLLUP extensions to the SQL GROUP BY clause.

ROLLUP is an extension, or option, available for the GROUP BY clause. It allows the user to compute subtotals and totals for groups of data, and it is highly efficient, adding little overhead to the SELECT statement. ROLLUP is used to calculate multiple levels of subtotals, including a grand total, across a specified group of dimensions. ROLLUP takes an ordered list of grouping columns as arguments. It creates subtotals which roll up from the most detailed level to a grand total, governed by the grouping list specified in the ROLLUP clause.
ROLLUP will create subtotals at n+1 levels (extra one for grand total), where n is the number of grouping columns.  For example, if you are grouping on date, region, and division, the result set will contain rows at four aggregation levels.
select deptno, job, mgr, sum(sal) from emp
  group by rollup (deptno, job, mgr);

      DEPTNO JOB             MGR   SUM(SAL)
          10 CLERK          7782       1300
          10 CLERK                     1300
          10 MANAGER        7839       2450
          10 MANAGER                   2450
          10 PRESIDENT                 5000
          10 PRESIDENT                 5000
          10                           8750
          20 ANALYST        7566       6000
          20 ANALYST                   6000
          20 CLERK          7788       1100
          20 CLERK          7902        800
          20 CLERK                     1900
          20 MANAGER        7839       2975
          20 MANAGER                   2975
          20                          10875
          30 CLERK          7698        950
          30 CLERK                      950
          30 MANAGER        7839       2850
          30 MANAGER                   2850
          30 SALESMAN       7698       5600
          30 SALESMAN                  5600
          30                           9400
                                      29025

Multi-Dimensional Analysis
Often we like to see our data summarized by multiple categories or "dimensions" especially with decision support systems. Common dimensions include data such as date or time, geography, product ID, and division.  This information is usually presented in a "cross-tabular" style format. For example, we may need to know the total sales for each product at increasing aggregation levels from city to state to region by year. To visualize multi-dimensional data we can use the analogy of a data "cube," or, a space where facts are stored at the intersection of n dimensions.

CUBE is an extension to the GROUP BY clause and is used in a SELECT statement to calculate subtotals for all possible combinations of a group of dimensions, in addition to a grand total. Syntax:
    CUBE (grouping_column_reference_list)

For the specified set of grouping columns, CUBE creates subtotals for all possible combinations of them.  CUBE generates all the subtotals that could be calculated for a data cube with the given dimensions. For n columns specified in a CUBE, there will be 2^n combinations of subtotals returned. Example:
  select deptno, job, mgr, sum(sal) from emp
  group by cube (deptno, job, mgr);

      DEPTNO JOB             MGR   SUM(SAL)
          10 CLERK          7782       1300
          10 CLERK                     1300
          10 MANAGER        7839       2450
          10 MANAGER                   2450
          10 PRESIDENT                 5000
          10 PRESIDENT                 5000
          10                7782       1300
          10                7839       2450
          10                           5000
          10                           8750
          20 ANALYST        7566       6000
          20 ANALYST                   6000
          20 CLERK          7788       1100
          20 CLERK          7902        800
          20 CLERK                     1900
          20 MANAGER        7839       2975
          20 MANAGER                   2975
          20                7566       6000
          20                7788       1100
          20                7839       2975
          20                7902        800
          20                          10875
          30 CLERK          7698        950
          30 CLERK                      950
          30 MANAGER        7839       2850
          30 MANAGER                   2850
          30 SALESMAN       7698       5600
          30 SALESMAN                  5600
          30                7698       6550
          30                7839       2850
          30                           9400
             ANALYST        7566       6000
             ANALYST                   6000
             CLERK          7698        950
             CLERK          7782       1300
             CLERK          7788       1100
             CLERK          7902        800
             CLERK                     4150
             MANAGER        7839       8275
             MANAGER                   8275
             PRESIDENT                 5000
             PRESIDENT                 5000
             SALESMAN       7698       5600
             SALESMAN                  5600
                            7566       6000                         
                            7698       6550
                            7782       1300
                            7788       1100
                            7839       8275
                            7902        800

Note that NULL values returned by ROLLUP and CUBE may indicate that the row is a subtotal.

The GROUPING Function
Oracle 8i introduces a new function called GROUPING. GROUPING takes a single column as an argument and returns 1 when it encounters a NULL value created by a ROLLUP or CUBE operation. In other words, if GROUPING determines that the NULL value belongs to a subtotal row, it returns 1. For any other value, including a stored NULL, it returns 0. Syntax:
  SELECT ...  [GROUPING(dimension_column)...]  ...
    GROUP BY ...    {CUBE | ROLLUP}     

Example using ROLLUP
select deptno, job, mgr, sum(sal),
grouping (deptno) as D,
grouping (job) as J,
grouping (mgr) as M
  from emp
group by rollup (deptno, job, mgr)

    DEPTNO JOB          MGR SUM(SAL)   D   J   M
  ------ ---------- ----- -------- --- --- ---
      10 CLERK       7782     1300   0   0   0
      10 CLERK                1300   0   0   1
      10 MANAGER     7839     2450   0   0   0
      10 MANAGER              2450   0   0   1
      10 PRESIDENT            5000   0   0   0
      10 PRESIDENT            5000   0   0   1
      10                      8750   0   1   1
      20 ANALYST     7566     6000   0   0   0
      20 ANALYST              6000   0   0   1
      20 CLERK       7788     1100   0   0   0
      20 CLERK       7902      800   0   0   0
      20 CLERK                1900   0   0   1
      20 MANAGER     7839     2975   0   0   0
      20 MANAGER              2975   0   0   1
      20                     10875   0   1   1
      30 CLERK       7698      950   0   0   0
      30 CLERK                 950   0   0   1
      30 MANAGER     7839     2850   0   0   0
      30 MANAGER              2850   0   0   1
      30 SALESMAN    7698     5600   0   0   0
      30 SALESMAN             5600   0   0   1
      30                      9400   0   1   1
                             29025   1   1   1

Example using CUBE

  SQL> select deptno, job, mgr, sum(sal),
    2  grouping (deptno) as D,
    3  grouping (job) as J,
    4  grouping (mgr) as M
    5  from emp group by cube (deptno, job, mgr)
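Conceptually, ROLLUP(a, b, c) aggregates over the grouping sets {(a, b, c), (a, b), (a), ()}, while CUBE(a, b, c) aggregates over every subset of the columns. This is why ROLLUP over n columns adds n + 1 levels of totals while CUBE produces 2**n. The following is a minimal Python sketch of how those grouping sets are generated; it is an illustration of the idea only, not Oracle code, and the function names are invented for this example:

```python
from itertools import combinations

def rollup_sets(cols):
    # ROLLUP produces the prefixes of the column list, longest first,
    # ending with the empty set (the grand total row).
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube_sets(cols):
    # CUBE produces every subset of the column list.
    sets = []
    for r in range(len(cols), -1, -1):
        sets.extend(combinations(cols, r))
    return sets

# ROLLUP (deptno, job, mgr): 4 grouping sets (n + 1 for n columns)
print(rollup_sets(["deptno", "job", "mgr"]))
# CUBE (deptno, job, mgr): 8 grouping sets (2**n for n columns)
print(cube_sets(["deptno", "job", "mgr"]))
```

Each grouping set corresponds to one "level" of subtotal rows in the output above; the columns missing from a set are the ones that appear as NULL in those rows.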

Example: Using DECODE for labels

  SQL> SELECT decode(grouping(deptno), 1, 'All Depts', Deptno)
    2      as Deptno,
  3     decode(grouping(job), 1, 'All Jobs', Job) as
    4       Job, sum(sal) AS Total_Sal
    5*    from emp group by CUBE(Deptno, Job)

Materialized views are useful not only to optimize queries, but also to replicate data; they have essentially replaced snapshots. Since materialized views often contain summary or aggregate data, the CUBE and ROLLUP extensions to the GROUP BY clause are often useful for materialized views. The CUBE and ROLLUP extensions, together with the new GROUPING function, also give us the ability to obtain summary data directly through SQL, and are quite useful outside the context of materialized views. These new features may be used together or independently.

NOCOPY Hint
NOCOPY is an optional hint that tells the PL/SQL compiler not to incur the overhead of making a copy of a parameter, but to pass a reference instead. Take advantage of NOCOPY when you want to avoid the cost of the temporary storage that OUT and IN OUT parameters normally require. NOCOPY can improve the performance of programs whose IN OUT or OUT parameters pass large data structures such as collections. As you might expect, these potential gains come with a trade-off: if a program terminates with an unhandled exception, you cannot trust the values in a NOCOPY actual parameter.
If you pass a variable with NOCOPY, a pointer is passed.  For an IN OUT parameter, this means that when you modify the parameter its value is IMMEDIATELY modified; we do not copy to a temporary variable.  Any modification you make to it in the routine is made to the actual parameter right then -- not when the procedure successfully completes.
Here is an example:
procedure p( x in out NOCOPY number ) is
begin
     x := 55;
     raise program_error;
end;

Then you call it like this:
declare
    y number := 5;
begin
    p( y );
exception when others then
    dbms_output.put_line( 'y=' || y );
end;

that will print 55 -- not 5 (it will print 5 if you do not use NOCOPY since the values are ONLY copied upon SUCCESS).

It can be even more tricky with OUT parameters. OUT parameters are ALWAYS modified -- even if the procedure you called never touches them.  Simply change P to be:
procedure p( x out NOCOPY number ) is
begin
     raise program_error;
end;

that doesn't modify x at all, now if you execute:
declare
    y number := 0;
begin
    p( y );
exception when others then
    dbms_output.put_line( 'y=' || y );
end;

that'll print "y="  -- y will be set to NULL.

So, you cannot just go in and put NOCOPY everywhere as this side effect can change the existing behaviour of your code.  NOCOPY is ONLY useful on truly large objects like tables and collections.  It doesn't make nearly as much sense to use NOCOPY on a simple NUMBER/DATE type or small strings as it does on these large variables (doesn't make sense on a LOB either as the lob locator itself is small).

In summary:
IN parameters  = REFERENCE. passed by reference (pointer)
OUT parameters = VALUE.     copy on output (temporary set up, filled in and copied back on success)
IN OUT         = VALUE.     copy to temporary, copy from temporary (passed by value)
Use NOCOPY and all are passed by reference.
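The copy-in/copy-out behavior summarized above can be sketched outside of PL/SQL. The following Python illustration (the `Box`, `p_copy`, and `p_nocopy` names are invented for this example) contrasts the default semantics, where work happens on a temporary copy that is written back only on success, with NOCOPY semantics, where the caller's variable is mutated immediately:

```python
class Box:
    """Holds a single value, standing in for a caller's PL/SQL variable."""
    def __init__(self, value):
        self.value = value

def p_copy(box):
    # Default OUT / IN OUT semantics: work on a temporary copy and
    # write it back only if the call completes without an exception.
    temp = box.value
    temp = 55
    raise RuntimeError("program_error")
    box.value = temp  # never reached: copy-back happens only on success

def p_nocopy(box):
    # NOCOPY semantics: modify the caller's variable immediately.
    box.value = 55
    raise RuntimeError("program_error")

y = Box(5)
try:
    p_copy(y)
except RuntimeError:
    pass
print(y.value)   # 5: the temporary copy was discarded on failure

y = Box(5)
try:
    p_nocopy(y)
except RuntimeError:
    pass
print(y.value)   # 55: the change was already applied when the exception fired
```

This is exactly why a NOCOPY actual parameter cannot be trusted after an unhandled exception: there is no temporary copy left to discard.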
