New Features for Logical Standby in Oracle Database 10g

Many of the issues faced by customers implementing Logical standby databases in 9i have been either completely solved or, at the very least, improved upon considerably in Oracle Database 10g.  The main issues are usually the work required to create a logical standby database, the lack of support for certain types of data and tables, the lag time for a Logical standby database to catch up with the Primary database, and the general monitoring and management the DBA can and must do.  The following sections discuss these subjects and how Data Guard has improved them in Oracle Database 10g.

The first thing a potential logical standby customer runs into is the complexity and fragility of creating the logical standby database.  Not only is there a lot of up-front work to be done, there are also many steps that must be followed to the letter for SQL Apply to function correctly.  In addition, to set up a logical standby you had to either use a current cold backup of your primary database or attempt to quiesce the production database in order to get a consistent view of the transactions.  The fact that a number of customers, in spite of these drawbacks, are and have been willing to undertake this adventure is a solid testimonial to the value of the Logical Standby technology and to the functionality of the current 9i version.
So, what does Data Guard SQL Apply in Oracle Database 10g bring to the table to make the life of the customer easier and the procedure of setting up a logical standby database more bulletproof?

The ability to set up your logical standby without bringing down the Production database or even affecting the work of its users, and without the dependency on the Resource Manager that using a hot backup of the primary database previously entailed.

How does it work?

Quite simply, in Oracle Database 10g a logical standby starts its life as a pseudo physical standby until it has reached the point where there are no in-flight transactions, and then morphs into a full logical standby.
You still have to make sure the primary database can support a logical standby database that meets your needs and is ready to do so.  This includes checking for unsupported tables (far fewer now), verifying that the tables can be uniquely identified, and turning on FORCE LOGGING, SUPPLEMENTAL LOGGING and, of course, ARCHIVELOG mode.  If you don't have a password file you need to create one now, as it will be required on both sides of your configuration.  The following are the general steps required to create a Logical Standby database in Oracle Database 10g.
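These pre-flight checks can be sketched as a few commands on the primary (a minimal sketch; the queries only report, while the ALTER statements change the database):

```sql
-- Tables SQL Apply cannot maintain
SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported;

-- Tables without a primary key or non-null unique index
SELECT owner, table_name, bad_column FROM dba_logstdby_not_unique;

-- Logging prerequisites
ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
```

The database must, of course, already be running in ARCHIVELOG mode.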

1. Take an online backup of your primary database. (For that matter, you can take ANY backup of your Primary database, even last Sunday's, as long as you have the archive logs available to bring it up to date automatically!)

2. Create a logical standby control file.  SQL Apply has its own control file now.  The syntax is the same as for a physical standby with the keyword 'PHYSICAL' replaced by 'LOGICAL': ALTER DATABASE CREATE LOGICAL STANDBY CONTROL FILE AS 'LOGSTNDBY.CTL'; It is as simple as that.  Many of the manual steps from 9i are automated and hidden behind this one simple command.  The dictionary build is started automatically, the start and end SCNs of the build are stored in the control file, and the control file is marked as a Physical/Logical control file, making the transformation to a full logical standby that much easier later.

3. Copy the backup files, the standby control file and the initialization parameter file (if you are using an SPFILE you need to create a text PFILE first) over to the standby system.  You will notice that I did not mention copying the archive logs.  That's because normal gap handling will fetch them automatically; they just need to be on disk and still listed in the primary database's control file.

4. Set up the minimum initialization parameters to start redo transport from the primary to the standby and redo reception at the standby side.  At a minimum this means the LOG_ARCHIVE_DEST_n parameters on the primary, and the STANDBY_ARCHIVE_DEST, DB_FILE_NAME_CONVERT, LOG_FILE_NAME_CONVERT, FAL_SERVER, FAL_CLIENT and PARALLEL_MAX_SERVERS parameters on the standby.  You will also have to modify the CONTROL_FILES parameter to point to your logical standby control file, as well as any other parameters with pathnames in them, as usual.  You MUST create a password file for the standby now, otherwise redo transport will not be able to function.  As usual, you must also have a listener for this standby up and running and the appropriate TNSNAMES definitions on both systems.
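A minimal sketch of the parameter files involved; the service names, instance names and paths used here (chicago, boston) are purely illustrative, not requirements:

```
# Primary init.ora excerpt
log_archive_dest_2='SERVICE=boston_stby'

# Standby init.ora excerpt
control_files='/u01/oradata/boston/LOGSTNDBY.CTL'
standby_archive_dest='/u01/oradata/boston/arch'
db_file_name_convert='/chicago/','/boston/'
log_file_name_convert='/chicago/','/boston/'
fal_server='chicago_prim'
fal_client='boston_stby'
parallel_max_servers=9
```

Remember that both TNS names (chicago_prim, boston_stby in this sketch) must resolve on both systems.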

5. Start and mount the standby using STARTUP MOUNT.  At this point you have a running pseudo physical standby.  To start redo shipping from the primary, perform a log switch on the primary and start managed recovery on the standby with ALTER DATABASE RECOVER MANAGED STANDBY DATABASE; and wait for it to complete.  The MRP knows this is actually a logical standby from the SCNs stored in the control file and will perform point-in-time recovery up to the dictionary build end SCN and stop.  If it is missing any archive logs (gaps) that it needs to bring the standby up to this SCN, it will use the FAL gap resolution mechanism to fetch them automatically from the primary as usual.  That's why they have to be on disk and still in the control file.  If they are not, you will have to bring them over yourself and do some manual recovery of the standby first.

6. At this point you have recovered your pseudo physical standby up to the point where the dictionary is present and all in-flight transactions are resolved and committed.  You are ready to 'morph' your standby into its full Logical self!  This is a single activate command: ALTER DATABASE ACTIVATE STANDBY DATABASE; Since you set up the pathname conversion parameters above, the data and log files will be correctly modified to their current location.

7. All that's left to do now is to complete the transformation by resetting the database name and id using the DBNEWID utility.  This part of the procedure is the same as in 9i and includes a couple of shutdowns and OPEN RESETLOGS startups, running the DBNEWID utility, fixing the parameter file (changing the database name, for example) and, of course, recreating the password file.  You might also want to use this point to create the missing temporary data files.

8. You are now running a bona fide logical standby.  Redo should be shipping again from the primary, so start SQL Apply with ALTER DATABASE START LOGICAL STANDBY APPLY; without the 'INITIAL' keyword.  SQL Apply knows where to start from the same SCN in the control file that the MRP used.

Of course, this is just an overview of the steps required to get a logical standby up and running.  As you can see there are fewer steps, no downtime for the Primary, and the procedure is much more bulletproof.  For a complete, detailed look at the procedure refer, as usual, to the appropriate chapter of the documentation.

Secure Switchover
The act of switching roles between a Primary database and either a Physical or Logical standby database is pretty much the same today for the DBA but the work that goes on behind the scenes is quite different. 
With a Physical standby the users have to be logged off the database and the final bit of redo is sent over to the standby notifying it that a switchover has been started.  Once received and applied at the Physical standby the MRP (apply engine) stops and the standby is ready to become a Primary database.
A Logical standby database must do much more work, since the current Primary is not yet a 'Logical' standby database and has no real knowledge of what the current Logical standby looks like.  A new dictionary build must be performed on the new Primary and that redo sent to the old Primary, now the new Logical standby.  Until the redo containing the new dictionary is received at the new Logical standby, it cannot apply the redo coming in from what is now the Primary database.  This leaves a very small window in which some redo might be sent to the standby that cannot be recovered in the event that the new Primary fails.  It is generally a very small window, but a window just the same.  Data Guard in Oracle Database 10g provides a new PREPARE command that closes this window completely.  Now a switchover is performed as follows:
On the current Primary database you first issue the PREPARE command:

    ALTER DATABASE PREPARE TO SWITCHOVER TO LOGICAL STANDBY;

This informs the primary that it will become a logical standby and can start receiving redo from the current logical standby.  Once that is done, the second PREPARE command is executed on the logical standby that is to become the Primary database.  This tells the standby to start the dictionary build and ship the redo to the current primary in preparation for the switchover:

    ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;

When this command completes and you are returned to the SQL prompt, the dictionary has been built and safely sent to the current Primary database.  Now you can proceed with the normal switchover commands to complete the operation.

On the current Primary database:

    ALTER DATABASE COMMIT TO SWITCHOVER TO LOGICAL STANDBY;

On the Logical standby database:

    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

And the process is complete. Finally, start the SQL Apply engine on the new Logical standby with:

    ALTER DATABASE START LOGICAL STANDBY APPLY;

and the dictionary will be processed and all new redo applied correctly.

It should be noted that if there is a physical standby in the configuration, the switchover should be performed between the Primary and the Physical standby, since the Logical standby will not know the difference between the two and will continue to operate normally.  If the switchover is done between the Primary and the Logical standby, the Physical standby becomes a cascaded standby of the new Logical standby database, and not a Physical standby of the new Primary.

Monitoring the Logical Standby Database
Setting up a Logical standby requires knowing which tables will not be supported at the standby side.  In the 9i version of Data Guard the DBA_LOGSTDBY_UNSUPPORTED view did not always explain exactly what was wrong with a certain table.  For example, a table that is an index-organized table would show up as unsupported, but upon examining the columns in the table you would not see any unsupported data types.  A new column, called ATTRIBUTES, has been added to the view to fully explain what is wrong with the table.  In this manner you will see immediately that an unsupported table uses table compression or is an unsupported IOT, without having to look at the columns.

SELECT DISTINCT table_name, attributes
  FROM dba_logstdby_unsupported
  WHERE owner = 'HR';

TABLE_NAME   ATTRIBUTES
------------ ---------------------
COUNTRIES    Index Organized Table
DEPARTMENTS  Table Compression
EMPLOYEES    Table Compression
JOBS         Table Compression
JOB_HISTORY  Table Compression
LOCATIONS    Table Compression
REGIONS      Table Compression

Another important function that a DBA performs on a regular basis is monitoring the progress of the SQL Apply engine through the archive logs and transactions.  This is important to ensure that the Logical standby is keeping up to date and has not run into something that must be manually resolved, a new data file on the Primary for example.

The DBA_LOGSTDBY_LOG view has been updated to show which archive logs have been completely applied and can be safely deleted if necessary.
SELECT thread#, sequence#, applied
  FROM dba_logstdby_log
  ORDER BY sequence#;

   THREAD#  SEQUENCE# APPLIED
---------- ---------- -------
         1         48 YES
         1         49 CURRENT
         1         50 CURRENT
         1         51 CURRENT
         1         52 CURRENT
         1         53 CURRENT

The DBA_LOGSTDBY_PROGRESS view has also been updated to include more information on the current progress of SQL Apply through the redo stream.  Using the new columns you can see more detail on the progress of the SQL Apply service in your standby database; all of the columns are of the NUMBER data type.
SELECT applied_scn, applied_thread#, newest_scn, newest_thread#
FROM dba_logstdby_progress;

APPLIED_SCN APPLIED_THREAD# NEWEST_SCN NEWEST_THREAD#
----------- --------------- ---------- --------------
     961263               1     961263              1


When using real-time apply, the applied values may be greater than the newest values. This is expected and normal behavior.

Managing SQL Apply
Finally, managing your Logical standby database, though necessary, does not have to be complex, whether it is deciding which tables in the Logical standby database will be maintained, bypassing the standby guard to add other objects (new tables, or indexes on current tables), or restarting the apply engine after a failed transaction.  The latter two have been transformed from running a package into simple-to-use SQL commands.
Previously, the wildcard behavior of the SKIP procedure did not allow some tables to be skipped without also skipping others: because the underscore is itself a single-character wildcard in a LIKE pattern, an attempt to skip a table called MI_DATA would also skip MINDATA. With the following additional options you can better control what is skipped.

·         use_like – whether a wildcard (LIKE) pattern match should be performed. The default is TRUE; set it to FALSE to match the name literally.
·         esc – specifies the escape character being used in the pattern matching.
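Using the MI_DATA example above, a call might look like the following sketch (check the PL/SQL packages reference for the full DBMS_LOGSTDBY.SKIP signature):

```sql
-- Skip DML for HR.MI_DATA only; the backslash escapes the underscore
-- so that MINDATA no longer matches the pattern
BEGIN
  DBMS_LOGSTDBY.SKIP(stmt        => 'DML',
                     schema_name => 'HR',
                     object_name => 'MI\_DATA',
                     esc         => '\');
END;
/
```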

Bypassing the standby guard (the security that prevents users from changing the standby database or, at least, the objects maintained by SQL Apply) used to require executing a package to allow your session to perform modifications to the Logical standby database.  Now you can turn the guard off and back on with a simple ALTER DATABASE command.
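A sketch of what that looks like (the index shown is purely illustrative):

```sql
ALTER DATABASE GUARD NONE;   -- guard off, changes allowed
CREATE INDEX hr.emp_name_idx ON hr.employees (last_name);
ALTER DATABASE GUARD ALL;    -- guard back on
```

The GUARD clause also accepts STANDBY, which protects only the objects maintained by SQL Apply while leaving other objects open for change.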

And last, but definitely not least, restarting SQL Apply after it has stopped due to some problem that has since been corrected no longer requires finding the failed transaction's id and executing yet another package to force the transaction to be skipped when SQL Apply restarts.  Now this functionality is merely a few extra words on the start command.
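Those few extra words are the SKIP FAILED TRANSACTION clause:

```sql
ALTER DATABASE START LOGICAL STANDBY APPLY SKIP FAILED TRANSACTION;
```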

Be very careful when skipping transactions. Generally speaking, skipping a DDL operation is fine as long as you are able to reproduce it manually, but if you skip a DML operation you may make your Logical standby unusable.