Creating a Logical Standby DB with RMAN
Benefits
Even with all the new features that Oracle 11g adds
for physical standby databases (e.g. Real Time Apply),
a logical standby database still offers some attractive
alternatives for offloading reporting workloads, protecting against
physical block corruption, patching, and even disaster
recovery:
- Effective System Resource Use. A
logical standby database can function as an independent
production database that still accepts updated data and
transactions from the primary database.
However, unlike a physical standby database, a logical
standby’s physical structure may be different (e.g.
different tablespaces or datafiles). In addition, change data
may be selectively applied to particular tables and indexes.
It’s therefore possible for a logical standby database to offer
data from a combination of sources: tables that are perfectly
synchronized with the primary database, plus tables whose
changed data has been excluded from application to the logical
standby database, and even tables which exist only in
the logical standby database.
- Primary Database Workload
Reduction. Since a logical standby database will
capture change data for all supported datatypes – more on this
later! – this also means that it’s possible to
build materialized views, materialized view logs, and
even additional indexes on existing production tables
that would be prohibitively expensive to maintain against the
same tables on the primary database. Logical standby databases
are therefore excellent targets for data warehouse, data mart,
or decision support applications that only need
to read production data, but that could also utilize
additional constructs (e.g. bitmap indexes) for
speedier query processing. Logical standby databases are also
excellent alternatives to databases created using basic
replication because they’re much easier to maintain.
- Increased Data Protection. Because
a logical standby database’s datafiles aren’t an exact physical
copy of the primary database’s datafiles, it offers protection
against physical block corruption on the primary
database because redo is applied using SQL Apply methods. And
just as with a physical standby database, it’s also possible to
delay DML that’s been generated on the primary database from
being applied against the logical standby database by
implementing delay features.
- Patching. Because
it can assume the role of the primary database during a
switchover operation, a logical standby database can facilitate
extremely quick Oracle Database patching and upgrade operations.
The patches are first applied to the original logical standby
database, which then assumes the primary database role after
switchover is complete. Once the original primary database is
converted to a logical standby database, it can be upgraded as
well.
Limitations
of Logical Standby Databases
While logical standby databases have some excellent benefits, they
do have some definite drawbacks as well, including restrictions on
unsupported objects and datatypes, limited support for DDL and DCL commands, and
what I believe are some glaring implications for failover situations:
Unsupported Objects. A logical standby
database cannot accept updates for the following objects:
•Any tables or
sequences owned by SYS
•Any tables that use table compression
•Any tables that underlie
a materialized view
•Global temporary tables (GTTs)
Unsupported Data Types. In addition, any
tables containing columns of the following
unsupported data types cannot be updated in
a logical standby:
•Datatypes BFILE, ROWID,
and UROWID
•Any user-defined TYPEs
•Multimedia data types like Oracle Spatial, ORDDICOM, and Oracle Text
•Collections (e.g. nested tables, VARRAYs)
•SecureFile LOBs
•OBJECT RELATIONAL XMLTypes
•BINARY XML
Starting the Process
We are now going to focus on how to set up a Logical Standby
database using RMAN. We are assuming that we already have a Physical
Standby.
We are assuming the following configuration is already in place:
Primary hostname – PRIMARY_HOST
Standby hostname – SECONDARY_HOST
TNS alias for Primary – PROD
TNS alias for standby – STDBY
Primary DB_NAME parameter = PROD
Primary DB_UNIQUE_NAME parameter = PROD
Standby DB_NAME parameter = PROD
Standby DB_UNIQUE_NAME parameter = STDBY
1-
Determine if the Data Types and Storage Attributes for tables are
compatible with a Logical Standby Database
Before setting up a logical
standby database, ensure the logical standby database can maintain
the data types and tables in your primary database.
This is done on the primary database. As we mentioned before,
Logical standby databases do not support the following data types:
•BFILE, ROWID, and UROWID
•User-defined types
•Collections (including VARRAYs and nested tables)
•XMLType
•Encrypted columns
•Multimedia data types (including Spatial, Image, and Context)
To find which tables cannot be maintained on the logical
standby database, run the following query on the
primary:
SELECT DISTINCT
OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
ORDER BY OWNER,TABLE_NAME;
As an example, to view the column names and data types which are
not supported for one of the tables listed in the previous
query, run the following select:
col DATA_TYPE format a20
col COLUMN_NAME format a20
SELECT
COLUMN_NAME,DATA_TYPE
FROM
DBA_LOGSTDBY_UNSUPPORTED
WHERE OWNER='OE' AND TABLE_NAME = 'CUSTOMERS';
2- Ensure Table Rows in the
Primary Database Can Be Uniquely Identified
The physical organization in a logical standby database is
different from that of the primary database, even though the
logical standby database is created from a backup copy of the
primary database. Thus, ROWIDs contained in the redo records
generated by the primary database cannot be used to identify the
corresponding row in the logical standby database.
Oracle uses primary-key or
unique-key supplemental logging to logically identify a modified
row in the logical standby database. When database-wide
primary-key and unique-key supplemental logging is enabled, each
UPDATE statement also writes the column values necessary in the
redo log to uniquely identify the modified row in the logical
standby database.
- If a table has a primary
key defined, then the primary key is logged as part of the
UPDATE statement to identify the modified row.
- In the absence of a
primary key, the shortest non-null unique key is logged as part
of the UPDATE statement to identify the modified row.
- In the absence of both a
primary key and a non-null unique key, all columns of bounded
size are logged as part of the UPDATE statement to identify the
modified row. In other words, all columns except those with the
following types are logged: LONG, LOB, LONG RAW, object type,
and collections.
Oracle recommends that you
add a primary key or a non-null unique index to tables in the
primary database, whenever possible, to ensure that SQL Apply
can efficiently apply redo data updates to the logical standby
database.
Find tables without a unique
logical identifier in the primary database:
SELECT OWNER,
TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE
WHERE (OWNER, TABLE_NAME) NOT IN (SELECT DISTINCT OWNER,
TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED)
AND BAD_COLUMN = 'Y';
If your application ensures the rows in a table are unique, you
can create a disabled primary key RELY constraint on
the table.
This avoids the overhead of maintaining an enforced primary key on
the primary database.
SQL> ALTER TABLE
TableA ADD PRIMARY KEY (id, name) RELY DISABLE;
When you specify the RELY
constraint, the system will assume that rows are unique. Because
you are telling the system to rely on the information, but are
not validating it on every modification done to the table, you
must be careful to select columns for the disabled RELY
constraint that will uniquely identify each row in the table. If
such uniqueness is not present, then SQL Apply will not
correctly maintain the table.
To improve the performance of SQL Apply, add an index to the
columns that uniquely identify the row on the logical standby
database. Failure to do so results in full table scans during
UPDATE or DELETE statements carried out on the table by SQL
Apply.
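For example, once the logical standby is open, you could build such an index there; a hedged sketch (the index name is illustrative, and the data guard must be lowered for the session first):
SQL> ALTER SESSION DISABLE GUARD;
SQL> CREATE INDEX tablea_apply_idx ON TableA (id, name);
SQL> ALTER SESSION ENABLE GUARD;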
3- Stop Redo Apply on the Physical Standby Database
Once the candidate physical standby database has
been created, we will need to halt all redo apply processes.
This ensures that no further change data is applied to the
candidate physical standby until we complete the creation of the LogMiner dictionary in an upcoming step:
SQL>
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
4-
Adjust Initialization Parameters for the Logical Standby Database
in Primary
We recommend performing these changes if you plan to perform
switchovers.
If you plan to transition the primary database to
the logical standby role, then you must also modify the
following parameters, so that no parameters need to
change after a role transition:
Change the VALID_FOR attribute in the
original LOG_ARCHIVE_DEST_1 destination to archive redo
data only from the online redo log and not from the standby redo
log.
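A hedged sketch of that change on the primary (the archive location is illustrative, mirroring the LOG_ARCHIVE_DEST_3 example):
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_1='LOCATION=/u01/app/oracle/oradata/archivelog/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD';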
Include the LOG_ARCHIVE_DEST_3 destination on the primary
database.
This parameter only takes effect when the primary database is
transitioned to the logical standby role.
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_3='LOCATION=/u01/app/oracle/oradata/archivelog/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=STDBY';
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_STATE_3=ENABLE;
5- Reduce the
possibility of ORA-01555 (snapshot too old) during LogMiner
dictionary creation in Primary
SQL> ALTER SYSTEM SET
undo_retention=3600;
6-
Build a Log Miner Dictionary in Primary
A LogMiner dictionary must be built into the redo data so
that the LogMiner component of SQL Apply can properly interpret
changes it sees in the redo. The DBMS_LOGSTDBY.BUILD procedure waits
for all existing transactions to complete. Long-running transactions
executed on the primary database will affect the timeliness of this
command.
A LogMiner dictionary is required so that the logical
standby database’s Log Apply Services can make sense of
incoming change vectors and translate them into appropriate DML, DDL
and DCL statements.
Executing procedure DBMS_LOGSTDBY.BUILD creates the
LogMiner dictionary, enables supplemental logging, and
waits for any existing transactions on the primary
database to complete. Execute the following connected to Primary:
SQL> EXECUTE
DBMS_LOGSTDBY.BUILD;
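To confirm that the build enabled primary-key and unique-key supplemental logging, you can query V$DATABASE on the primary; both columns should return YES:
SQL> SELECT supplemental_log_data_pk, supplemental_log_data_ui
FROM v$database;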
7-
Convert the Physical Standby Database to a Logical Standby Database
(execute on the Secondary DB)
At this point, the candidate physical standby database is ready to
be converted to a logical standby database.
I’ll complete its transformation with a few simple SQL commands
issued against the candidate physical standby database:
SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY STDBY;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
For db_name, specify a database name that is different from the
primary database to identify the new logical standby database.
If you are using a server parameter file (spfile) at the time you
issue this statement, then the database will update the file with
appropriate information about the new logical standby database.
If you are not using an spfile, then the database issues a message
reminding you to set the name of the DB_NAME parameter
after shutting down the database.
The redo logs will be applied until the LogMiner dictionary is
found in the log files. That could take several minutes. If a
dictionary build is not successfully performed on the primary
database, this command will never complete. You can cancel the SQL
statement by issuing an:
ALTER DATABASE RECOVER MANAGED
STANDBY DATABASE CANCEL;
statement from another SQL session.
8-
Adjust Initialization Parameters in the Secondary Database
You need to modify the LOG_ARCHIVE_DEST_n parameters
because, unlike physical standby databases, logical standby
databases are open databases that generate redo data and have
multiple log files (online redo log files, archived redo log files,
and standby redo log files). It is good practice to specify separate
local destinations for:
- Archived redo log files that store redo data generated by the
logical standby database. This is configured as the
LOG_ARCHIVE_DEST_1=LOCATION=/arch1/standbyDB destination.
- Archived redo log files that store redo data received from the
primary database. This is configured as the
LOG_ARCHIVE_DEST_3=LOCATION=/arch2/standbyDB destination.
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/arch1/STDBY/
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STDBY'
scope=both;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=PROD ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PROD' scope=both;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='LOCATION=/arch2/STDBY/
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=STDBY'
scope=both;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE scope=both;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE scope=both;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE scope=both;
If database compatibility is set to 11.1 or later, you can also use
the fast recovery area to store the remote archived logs.
To do this, set the following parameters (assuming you have already
appropriately set DB_RECOVERY_FILE_DEST and
DB_RECOVERY_FILE_DEST_SIZE):
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STDBY' scope=both;
SQL> ALTER SYSTEM SET
LOG_ARCHIVE_DEST_3='LOCATION=USE_DB_RECOVERY_FILE_DEST
VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=STDBY' scope=both;
9- Open
the Logical Standby Database and activate it as a logical standby
by starting SQL Apply processes in the Secondary Database
SQL> shutdown immediate;
SQL> startup mount;
SQL> ALTER
DATABASE OPEN RESETLOGS;
Issue the following statement to begin applying redo data to the
logical standby database:
SQL> ALTER DATABASE
START LOGICAL STANDBY APPLY IMMEDIATE;
Because this is the first time the database is being opened, the
database's global name is adjusted automatically to match the
new DB_NAME initialization parameter.
The conversion of the physical standby database to a logical standby
database happens in two phases:
- As part of the ALTER DATABASE
RECOVER TO LOGICAL STANDBY statement (unless you have specified
the KEEP IDENTITY clause), the DBID of the database is changed.
- As part of the first successful invocation of the ALTER DATABASE
START LOGICAL STANDBY APPLY statement, the control file is
updated to make it consistent with that of the newly created logical
standby database.
Once you have successfully invoked the ALTER DATABASE START
LOGICAL STANDBY APPLY statement, you should take a full backup
of the logical standby database, because the backups taken from the
primary database cannot be used to restore the logical standby
database.
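Since this guide relies on RMAN, a minimal sketch of that backup, taken while connected to the logical standby (channel and retention settings omitted):
RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;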
10-
Verify Logical Standby Database Configuration and Performance.
At this point, it’s crucial to verify that archived redo logs are
getting transmitted to and applied at the new logical standby
database.
You can use the following SQL examples; alternatively, you could
simply monitor the logical standby database’s alert log to ensure
that redo is indeed being transmitted to and applied against that
database.
# Check if the archived redo logs were registered:
SQL> SELECT sequence#, first_time, next_time, dict_begin,
dict_end FROM dba_logstdby_log;
# Force a log file switch at the primary database:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
# Query DBA_LOGSTDBY_LOG to see if the logical standby received the
new archived redo logs:
SQL> SELECT sequence#, first_time, next_time, dict_begin,
dict_end FROM dba_logstdby_log;
# Query V$LOGSTDBY_STATS to see if the logical standby is applying
the transactions:
SQL> SELECT name, value FROM v$logstdby_stats WHERE name =
'coordinator state';
# Check ongoing progress of SQL Apply against the logical standby
from V$LOGSTDBY_PROGRESS:
SQL> SELECT sid, serial#, spid, type, high_scn, applied_scn,
latest_scn FROM v$logstdby_progress;
11-
Enroll the Logical Standby Database with Data Guard Broker:
$ dgmgrl
DGMGRL> connect SYS
DGMGRL> ADD DATABASE STDBY AS CONNECT
IDENTIFIER IS 'STDBY';
DGMGRL> ENABLE DATABASE STDBY;
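After enabling the database, you can verify that the broker sees the new member and that its state is healthy:
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE STDBY;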