Cloning an 11g Database with RMAN


You can create a duplicate database using the RMAN duplicate command. The duplicate database will have a different DBID from the source database.

Starting with Oracle 11g there are now two ways to clone a database:
1. Active Database Duplication (duplicate target database for standby from active database command)
Active Database duplication involves copying the live running database transparently using RMAN over the network to the auxiliary destination and creating the duplicate database.

2. Backup-based Duplication
The backup-based duplication requires that the backed-up files be manually copied to (or otherwise made accessible from) the destination server.
All RMAN needs here is an accessible location containing the backup pieces of a previously executed backup; RMAN reads those backup pieces and restores the spfile, controlfile, datafiles and archivelogs to perform the duplicate operation.

The main difference between the two is that you do not need pre-existing RMAN backups and archivelog copies in the first case. In both cases the duplication work is performed by the auxiliary channel, and this channel corresponds to a server session on the auxiliary instance on the auxiliary (destination) host.
Active database duplication is typically slower because it uses the network to transport the data blocks instead of accessing existing RMAN backupsets.
RMAN carries out the following steps as part of the duplication process:
    a. Creates a control file for the duplicate database
    b. Restarts the auxiliary instance and mounts the duplicate control file
    c. Creates the duplicate datafiles and recovers them with incremental backups and archived redo logs
    d. Opens the duplicate database with the RESETLOGS option
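
In their simplest (sketch) forms, which appear in full later in this document, the two approaches look like the following, where dup1 is just a placeholder name for the duplicate instance:
Active database duplication (no pre-existing backups needed):
RMAN> DUPLICATE TARGET DATABASE TO dup1 FROM ACTIVE DATABASE;
Backup-based duplication (RMAN restores from existing backup pieces):
RMAN> DUPLICATE TARGET DATABASE TO dup1;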

More information
Check Status of DB
Enable Flashback
Setup DG Broker
Perform actions with DG Broker
Re-Enable Disabled Databases
Fast Start Failover


Before we start, let's review some definitions from Oracle's documentation:

RMAN Options:
NOFILENAMECHECK
Prevents RMAN from checking whether the source database datafiles and online redo logs files share the same names as the duplicated files. This option is necessary when you are creating a duplicate database in a different host that has the same disk configuration, directory structure, and filenames as the host of the source database. If duplicating a database on the same host as the source database, then make sure that NOFILENAMECHECK is not set.
The NOFILENAMECHECK option is required when the standby and primary datafiles and online redo logs have identical filenames. Thus, if you want the duplicate database filenames to be the same as the source database filenames, and if the databases are in different hosts, then you must specify NOFILENAMECHECK
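For instance, a minimal example (dup1 is just a placeholder name; fuller examples appear later in this document) for duplicating to a different host that keeps the same directory structure and filenames as the source:
RMAN> DUPLICATE TARGET DATABASE TO dup1 FROM ACTIVE DATABASE SPFILE NOFILENAMECHECK;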


SPFILE ... SET 'string_pattern'
Sets the specified initialization parameters to the specified values. You can use SET to set the LOG_FILE_NAME_CONVERT parameter for the online redo logs.
It copies the server parameter file from the source database to the operating system-specific default location for this file on the standby database.
RMAN uses the server parameter file to start the auxiliary instance for standby database creation. Any remaining options of the DUPLICATE command are processed after the database instance is started with the server parameter file.
If you execute DUPLICATE with the SPFILE clause, then the auxiliary instance must already be started with a text-based initialization parameter file. In this case, the only required parameter in the temporary initialization parameter file is DB_NAME, which can be set to any arbitrary value. RMAN copies the binary server parameter file, modifies the parameters based on the settings in the SPFILE clause, and then restarts the standby instance with the server parameter file. When you specify SPFILE, RMAN never uses the temporary text-based initialization parameter file to start the instance.
If FROM ACTIVE DATABASE is specified on DUPLICATE, then a server parameter file must be in use by the source database instance. If FROM ACTIVE DATABASE is not specified on DUPLICATE, then RMAN restores a backup of the server parameter file to the standby database.


DORECOVER:
Specifies that RMAN should recover the standby database after creating it. If you specify an untilClause, then RMAN recovers to the specified SCN or time and leaves the database mounted.
RMAN leaves the standby database mounted after media recovery is complete, but does not place the standby database in manual or managed recovery mode. After RMAN creates the standby database, you must resolve any gap sequence before placing it in manual or managed recovery mode, or opening it in read-only mode.
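As a sketch (the full run block appears later in this section), DORECOVER is simply appended to the DUPLICATE ... FOR STANDBY command:
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER;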


PARAMETER_VALUE_CONVERT:
Replaces the first string with the second string in all matching initialization parameter values. Note that DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT are exceptions to this rule and are not affected.
You can use PARAMETER_VALUE_CONVERT to set a collection of initialization parameter values and avoid explicitly setting them all. For example, if the source database uses disk group +ALPHA while the standby database will use +BETA, then you could modify all parameters that refer to these disk groups by specifying SPFILE PARAMETER_VALUE_CONVERT ('+ALPHA','+BETA').


DB_FILE_NAME_CONVERT 'string_pattern'
Specifies a rule for creating the filenames for duplicate datafiles and tempfiles. Note that DB_FILE_NAME_CONVERT specified on the DUPLICATE command overrides the initialization parameter DB_FILE_NAME_CONVERT if it is set in the initialization parameter file.

Example:
DUPLICATE TARGET DATABASE TO dup1
  FROM ACTIVE DATABASE
  DB_FILE_NAME_CONVERT '/disk1','/disk2'
  SPFILE
    PARAMETER_VALUE_CONVERT '/disk1', '/disk2'
    SET LOG_FILE_NAME_CONVERT '/disk1','/disk2'
    SET SGA_MAX_SIZE '200M'
    SET SGA_TARGET '125M';

The PARAMETER_VALUE_CONVERT option substitutes /disk2 for /disk1 in all initialization parameters that specify filenames (with the exception of DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT). The SET LOG_FILE_NAME_CONVERT clause substitutes /disk2 for /disk1 in the filenames of the online redo logs of the duplicate database. The DB_FILE_NAME_CONVERT option replaces /disk1 with /disk2 in the names of the duplicate datafiles and tempfiles.

Example with Path Changes:
run {
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    dorecover
    spfile
        parameter_value_convert 'FGUARD','STB_QAFGUARD'
        SET DB_UNIQUE_NAME='STB_QAFGUARD'
        set db_file_name_convert='/opt/oracle/oradata/FGUARD/','/opt/oracle/oradata/STB_QAFGUARD/'
        set log_file_name_convert='/opt/oracle/oradata/FGUARD/','/opt/oracle/oradata/STB_QAFGUARD/'
        set control_files='/u01/app/oracle/oradata/STB_QAFGUARD/control01.ctl','/u01/app/oracle/oradata/STB_QAFGUARD/control02.ctl'
        set log_archive_max_processes='5'
        set fal_client='STB_QAFGUARD'
        set fal_server='FGUARD'
        set standby_file_management='AUTO'
        set log_archive_config='dg_config=(FGUARD,STB_QAFGUARD)'
        set log_archive_dest_2='service=FGUARD ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=FGUARD'
     ;
     }


Active Database Duplication
Oracle 11g RMAN introduces the FROM ACTIVE DATABASE capability to the DUPLICATE FOR STANDBY command. This alleviates the previous need for interim storage on both the Primary and Standby systems, and the limitation of single stream network traffic.
Now when taking the backup of the Primary database you can simultaneously create and restore the standby database over the network in parallel streams. Apart from some simple Oracle Net setup, and creating a couple of directories and an interim password file, the whole standby creation can be done in one RMAN script.
RMAN will automatically copy the server parameter file to the standby host, start the auxiliary instance with the server parameter file, restore a backup control file, and copy all necessary database files and archived redo logs over the network to the standby host.
The database files are copied from the source to a destination or AUXILIARY instance via an inter-instance network connection. RMAN then uses a “memory script” (one that is contained only in memory) to complete recovery and open the database.
Password files are copied to the destination. The destination must have the same SYS user password as the source. In other words, at the beginning of the active database duplication process, both databases (source and destination) must have password files.
When creating a standby database, the password file from the primary database overwrites the current (temporary) password file on the standby database. When you use the command line and are not duplicating for a standby database, you need to use the PASSWORD clause together with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command.
The primary and standby databases must be using the same version of Oracle Database.
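As a quick optional check that both sides run the same version, you can query V$INSTANCE on each host:
SQL> select instance_name, version from v$instance;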



Configuration
The steps described here are to configure the standby database for maximum performance mode, which is the default data protection mode.

Details about this example:
Primary hostname – PRIMARY_HOST
Standby hostname – SECONDARY_HOST

TNS alias for Primary – PROD
TNS alias for standby – STDBY

Primary DB_NAME Parameter=PROD     Primary DB_UNIQUE_NAME Parameter= PROD
Standby DB_NAME Parameter=PROD     Standby DB_UNIQUE_NAME Parameter= STDBY

NOTE =
The DB_NAME of the standby database will be the same as that of the primary, but it must have a different DB_UNIQUE_NAME value.
The DB_UNIQUE_NAME values of the primary and standby database should be used in the DG_CONFIG setting of the LOG_ARCHIVE_CONFIG parameter.
For this example, the standby database will have the DB_UNIQUE_NAME value "STDBY".
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PROD,STDBY)';
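
To confirm the current values on either database, you can run:
SQL> select name, db_unique_name from v$database;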


In this step we want to be sure that the primary database is configured correctly to support a physical standby database.
You only need to perform these preparatory tasks once. After you complete these steps, the database is prepared to serve as the primary database for one or more standby databases. You should perform the following steps:
1. Put the Primary in archivelog mode and enable FORCE LOGGING on the Primary (if these steps are not already in place):
Determine whether FORCE LOGGING is enabled on the Primary. If it is not, enable it with the FORCE LOGGING mode.
This statement may take some time to complete, because it waits for all unlogged direct write I/O to finish. Use SQL*Plus to execute the following commands:
SELECT force_logging FROM v$database;
ALTER DATABASE FORCE LOGGING;
SELECT force_logging FROM v$database;

Note:
We are assuming that the Primary DB is already in archivelog mode. If that is not the case, you need to perform the following steps on the Primary box:
archive log list;
shutdown immediate
startup mount;
alter database archivelog;
alter database open;



2. Create the Oracle Password file for the Auxiliary Instance
Data Guard uses Oracle Net sessions to transport redo data and control messages between the members of a Data Guard configuration. These redo transport sessions are authenticated using either the Secure Sockets Layer (SSL) protocol or a remote login password file.

Create a password file under $ORACLE_HOME/dbs (if one is not already there). Remember that the password must be the same as the one in the primary database's password file. I have found that the ignorecase parameter is essential in 11g when putting one of these configurations together.
$ orapwd file=orapwPROD password=password ignorecase=y entries=25

Copy the password file from the $ORACLE_HOME/dbs directory on the primary server to $ORACLE_HOME/dbs on the standby server, renaming it to match the standby SID (orapwSTDBY in this example).
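For example, a sketch assuming ssh connectivity and the same ORACLE_HOME path on both servers:
$ scp $ORACLE_HOME/dbs/orapwPROD oracle@SECONDARY_HOST:$ORACLE_HOME/dbs/orapwSTDBY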


3. Update listener.ora on Standby machine and tnsnames.ora in both machines
Update listener.ora on Standby machine by adding the following to the SID_LIST_LISTENER section

    (SID_DESC =
      (GLOBAL_DBNAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = STDBY)
    )

Set up the standby database's listener with a static reference to the standby database instance. Your listener.ora file will look like:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = STDBY)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = STDBY)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /home/oracle/app/oracle
LOGGING_LISTENER = OFF



Stop and restart the Listener on the Standby Machine
lsnrctl stop
lsnrctl start

Also, update tnsnames.ora on Standby as well as Primary site with the alias to PROD and STDBY
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = PRIMARY_HOST)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )  

STDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = SECONDARY_HOST)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STDBY)
    )
  )  


4. Enable Role Transition on Primary DB
If you decide to run Data Guard in maximum protection or maximum availability mode, the transactions are copied over to the standby site by the LGWR and are written to the standby logfiles; this is why we create standby logfiles that will be used not locally, but by the standby. We also recommend creating the standby logfiles so that, after a possible switchover, the old primary can work properly as a standby under any protection mode.
Configure the primary database to receive redo data by adding the standby logfiles to the primary. You can use the following lines, after validating that the directory paths reflect your environment.
NOTE: Always create one more standby log group than there are online log groups, and make them the same size as the online log groups.

To check the current size you can use:
select bytes from v$log;

To Check the current Location you can use:
select group#,type,member from v$logfile order by group#;

These standby logfiles will be automatically created on the standby DB once we run the RMAN commands. When standby redo logs are created, their members are listed in V$LOGFILE, but their groups are listed in V$STANDBY_LOG.
set echo on
ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/PROD/stdby_redo01.log' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/PROD/stdby_redo02.log' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/PROD/stdby_redo03.log' SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/PROD/stdby_redo04.log' SIZE 50M;
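
After adding them, you can verify the standby log groups and their sizes with the following queries (group numbers will vary in your environment):
select group#, thread#, bytes from v$standby_log;
select group#, type, member from v$logfile where type='STANDBY';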


5. Create init.ora on Standby
Create an init.ora on the Standby machine with just a single line which is the db_name parameter
echo DB_NAME='STDBY' > $ORACLE_HOME/dbs/initSTDBY.ora


6. Create directories on secondary server
On the standby system, go to the /u01/app/oracle/admin directory. Create a directory with a name that matches your physical standby SID; if you plan to use a different name, you will need to use the *_file_name_convert parameters in your RMAN script.
cd $ORACLE_BASE/admin
mkdir STDBY
mkdir STDBY/adump
mkdir STDBY/bdump
mkdir STDBY/cdump
mkdir STDBY/udump
mkdir STDBY/dpdump
mkdir STDBY/pfile
mkdir STDBY/scripts
mkdir -p $ORACLE_BASE/oradata/STDBY
mkdir -p $ORACLE_BASE/diag/rdbms/STDBY/trace
mkdir -p $ORACLE_BASE/diag/rdbms/STDBY/cdump

Note: Depending on how you configured your existing primary database, you may also need to create a similar directory in your fast recovery area (e.g. $ORACLE_BASE/flash_recovery_area or $ORACLE_BASE/fast_recovery_area):
mkdir -p $ORACLE_BASE/flash_recovery_area/STDBY


7. Start the Standby Instance in NOMOUNT Mode
On the standby system, set the ORACLE_SID environment variable and start the instance in NOMOUNT mode with the text initialization parameter file created earlier:
export ORACLE_SID=STDBY
sqlplus "sys as sysdba"
startup nomount;
or
startup nomount pfile=$ORACLE_HOME/dbs/initSTDBY.ora


If this is a Windows Machine, then create the Database Service
% set ORACLE_SID=STDBY
% set ORACLE_HOME=E:\opt\app\oracle\product\11.1.0\db_1
% oradim -NEW -SID STDBY


8. On the primary system, set/modify some init.ora parameters:
With this command, we set the location for the archived logs generated by Oracle and specify the database for which that destination is valid:
SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD' scope=both;

Here we enable that archive destination:
SQL> alter system set LOG_ARCHIVE_DEST_STATE_1=enable scope=both;

This defines the list of valid DB_UNIQUE_NAME parameters for your DG configuration
SQL> alter system set log_archive_config = 'dg_config=(PROD,STDBY)' scope=both;

This is the main parameter for DG, defining the redo transport destination and method:
SQL> alter system set log_archive_dest_2 = 'service=STDBY LGWR ASYNC valid_for=(online_logfile,primary_role) db_unique_name=STDBY' scope=both;

Used only for physical standbys. It automatically adds/drops datafiles on the standby when they are added/dropped in the prod DB.
SQL> alter system set standby_file_management = 'AUTO' scope=both;
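
Optionally, you can confirm the new settings before moving on:
SQL> show parameter log_archive_config
SQL> show parameter log_archive_dest_2
SQL> show parameter standby_file_management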



9. Execute RMAN
On the primary system, ensure the ORACLE_SID environment variable is set to your primary database. Then start RMAN and connect to the source database as the target connection. The duplicate database instance will be specified in the AUXILIARY connection.
You can invoke the RMAN client on any host as long as that host has connectivity and you can connect to all of the required database instances. If the auxiliary instance requires a text-based initialization parameter file (pfile) then this file must exist and it must reside on the same host that runs the RMAN client application.

% rman target / auxiliary sys@STDBY
or
% rman target sys/oracle@PROD auxiliary sys/oracle@STDBY
or
% rman
RMAN> connect target sys@PROD
RMAN> connect auxiliary sys@STDBY

RMAN> run {
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    spfile
        parameter_value_convert 'PROD','STDBY'
        set db_unique_name='STDBY'
        set db_file_name_convert='/PROD/','/STDBY/'
        set log_file_name_convert='/PROD/','/STDBY/'
        set control_files='/home/oracle/app/oracle/flash_recovery_area/STDBY/control02.ctl','/home/oracle/app/oracle/oradata/STDBY/control01.ctl'
        set log_archive_max_processes='5'
        set fal_client='STDBY'
        set fal_server='PROD'
        set standby_file_management='AUTO'
        set log_archive_config='dg_config=(PROD,STDBY)'
        SET LOG_ARCHIVE_DEST_2='service=PROD LGWR ASYNC VALID_FOR=(online_logfile,primary_role) db_unique_name=PROD'
     ;
     }

A brief explanation of the individual clauses was given in the RMAN Options section earlier in this document.

These simple RMAN commands now do all the work you used to do manually to create the standby.
RMAN performs a live backup of the PRIMARY and a live restore of the STANDBY without any interim storage.
Once it finishes, you will have a fully functioning physical standby DB that is ready to receive redo. Of course, it will not yet be receiving or applying redo.

During this process, you can safely ignore the following message on the alert.log file:
PING[ARC2]: Heartbeat failed to connect to standby 'STDBY'. Error is 16058



More Examples:

In its simplest form, the following command will create the duplicate DB with the SAME structure as the Primary DB:
RMAN> DUPLICATE TARGET DATABASE TO STDBY FROM ACTIVE DATABASE;

The following example will perform a conversion of the DB Files:
RMAN> DUPLICATE TARGET DATABASE TO STDBY FROM ACTIVE DATABASE DB_FILE_NAME_CONVERT '/opt/oradata/PROD','/opt/oradata/STDBY';

The following example will perform a conversion of the parameter values, DB Files and Log Files:
RMAN> DUPLICATE TARGET DATABASE TO STDBY FROM ACTIVE DATABASE
  SPFILE
    PARAMETER_VALUE_CONVERT 'PROD','STDBY'
    SET DB_FILE_NAME_CONVERT 'PROD','STDBY'
    SET LOG_FILE_NAME_CONVERT 'PROD','STDBY';

In this example, we rename the datafiles one by one (having previously queried them with select name, file# from v$datafile), and we also change some init.ora parameters on the auxiliary DB:
run {
set newname for datafile '/u01/app/oracle/oradata/PROD/users01.dbf' to '/u01/app/oracle/oradata/STDBY/users01.dbf';
set newname for datafile '/u01/app/oracle/oradata/PROD/undotbs01.dbf' to '/u01/app/oracle/oradata/STDBY/undotbs01.dbf';
set newname for datafile '/u01/app/oracle/oradata/PROD/sysaux01.dbf' to '/u01/app/oracle/oradata/STDBY/sysaux01.dbf';
set newname for datafile '/u01/app/oracle/oradata/PROD/system01.dbf' to '/u01/app/oracle/oradata/STDBY/system01.dbf';
duplicate target database to STDBY from active database
db_file_name_convert '/u01/app/oracle/oradata/PROD' , '/u01/app/oracle/oradata/STDBY'
spfile parameter_value_convert = '/u01/app/oracle/admin/PROD' , '/u01/app/oracle/admin/STDBY'
set log_file_name_convert = '/u01/app/oracle/oradata/PROD','/u01/app/oracle/oradata/STDBY'
set audit_file_dest='/u01/app/oracle/admin/STDBY/adump'
set log_archive_dest_1=''
set memory_target='183001600'
set control_files='/u01/app/oracle/oradata/STDBY/control01.ctl','/u01/app/oracle/oradata/STDBY/control02.ctl','/u01/app/oracle/oradata/STDBY/control03.ctl'
set db_recovery_file_dest_size = '2294967296';
}


In the following example, we will create the auxiliary database as of a past point in time. The only difference comes at the end, during the execution of the duplicate command: you must use the UNTIL TIME clause to create an auxiliary database as of a past point in time.
RMAN> duplicate target database to STDBY
spfile
nofilenamecheck
until time 'sysdate-1';


10. Change some init.ora in Primary
Finally the last configuration change to perform in PROD
alter system set fal_server='STDBY' scope=both;
alter system set fal_client='PROD' scope=both;

11. Start the Apply Process on the Secondary Database and perform a log switch
In your Standby DB, execute the following (we will be using Real Time Apply):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Execute a Log Switch on Primary:
alter system switch logfile;

Check the Auxiliary DB
sqlplus "sys as sysdba"
archive log list;

select name from v$datafile;
select name,open_mode from v$database;
show sga




Backup-based Duplication

All RMAN needs here is an accessible location containing the backup pieces of a previously executed backup; RMAN reads those backup pieces and restores the spfile, controlfile, datafiles and archivelogs to perform the duplicate operation.

Details about this example:
Primary hostname – PRIMARY_HOST
Standby hostname – SECONDARY_HOST

TNS alias for Primary – PROD
TNS alias for standby – STDBY

Primary DB_NAME Parameter=PROD     Primary DB_UNIQUE_NAME Parameter= PROD
Standby DB_NAME Parameter=PROD     Standby DB_UNIQUE_NAME Parameter= STDBY



This is very simple to perform:

1) Create a backup of the source DB (if you do not have one already):
$ rman target=/
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

or

RMAN> run{
allocate channel c1 type disk;
allocate channel c2 type disk;
backup as compressed backupset format '/oracle_backup/%d_%u' database plus archivelog
include current controlfile;
release channel c1;
release channel c2;
}

2) Make the backup pieces available for the duplicate operation.
++ If the duplicate is going to happen on a different server, move the backup pieces to the new server using tools such as ftp or scp (see the example below this list).
++ If the duplicate is going to happen on the same server as the target, then you can either leave them in the location where the backup was taken or copy them to the required location.
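
For example, a sketch assuming the backup was written to /oracle_backup (as in the format string above) and that the same directory exists on the standby server:
$ scp /oracle_backup/* oracle@SECONDARY_HOST:/oracle_backup/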


3) Create a password file for the auxiliary instance.
On Unix, copy the password file from the target database to $ORACLE_HOME/dbs and rename it to match the auxiliary SID.
On Windows, copy the password file from the target database to %ORACLE_HOME%\database and rename it to match the auxiliary SID.


4) Create an initialization parameter file for the auxiliary instance with only one parameter, DB_NAME.
Since we are duplicating the database onto a separate server with the same filesystem layout as the original, we don't need to convert the file names.
In this case, the PFILE is called "initSTDBY.ora" and is placed in the "$ORACLE_HOME/dbs" directory.

echo DB_NAME='STDBY' > $ORACLE_HOME/dbs/initSTDBY.ora

# Convert file names to allow for different directory structure if necessary
#DB_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/PROD/,/u01/app/oracle/oradata/STDBY/)
#LOG_FILE_NAME_CONVERT=(/u01/app/oracle/oradata/PROD/,/u02/app/oracle/oradata/STDBY/)


5) Set up the tnsnames.ora file
Add the appropriate entries to the "tnsnames.ora" file in the "$ORACLE_HOME/network/admin" directory on the duplicate server to allow connections to the target database.
PROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = PRIMARY_HOST)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PROD)
    )
  )  

STDBY =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = SECONDARY_HOST)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STDBY)
    )
  )  


6) Create directories (if needed) on Auxiliary Server
On the auxiliary DB, create a directory with a name that matches your SID:
cd $ORACLE_BASE/admin
mkdir STDBY
mkdir STDBY/adump
mkdir STDBY/bdump
mkdir STDBY/cdump
mkdir STDBY/udump
mkdir STDBY/dpdump
mkdir STDBY/pfile
mkdir STDBY/scripts
mkdir -p $ORACLE_BASE/oradata/STDBY
mkdir -p $ORACLE_BASE/diag/rdbms/STDBY/trace
mkdir -p $ORACLE_BASE/diag/rdbms/STDBY/cdump


7) Copy the backup files from the source database to the destination server (optional)
This can be done either by copying them to the matching location on the destination server or by placing them on a shared drive. If you are copying the files, running something like the following from the destination server will pull them from the source (adjust hostnames and paths to your environment):
scp -r oracle@source_server:/u01/app/oracle/fast_recovery_area/FGUARD/archivelog /u01/app/oracle/fast_recovery_area/FGUARD


8) Now start the auxiliary instance in NOMOUNT mode.

Unix Example
Just set the environment variables and start the instance.
% export ORACLE_SID=STDBY
% export ORACLE_HOME=/home/oracle/ora11g
% export PATH=$ORACLE_HOME/bin:$PATH
% sqlplus "/as sysdba"
SQL > startup nomount


Windows Example
Create a service and then set the necessary environment variables and start the instance.
% oradim -new -sid STDBY
set ORACLE_SID=STDBY
set ORACLE_HOME=D:\Orahome\Ora11gr2
set PATH=D:\Orahome\Ora11gr2\bin;%PATH%
% sqlplus "/as sysdba"
SQL > startup nomount


9) Connect to the auxiliary instance from RMAN and perform the rman duplicate as follows:

For the duplication to work we must connect to the duplicate database (AUXILIARY), but depending on the type of duplication we are doing, we may optionally connect to the original database (TARGET) and/or the recovery catalog (CATALOG).
$ export ORACLE_SID=STDBY
$ rman AUXILIARY /
$ rman TARGET sys/password@PROD AUXILIARY /
$ rman CATALOG rman/password@CATALOG AUXILIARY /
$ rman TARGET sys/password@PROD CATALOG rman/password@CATALOG AUXILIARY /

RMAN > DUPLICATE DATABASE TO STDBY
UNTIL TIME "TO_DATE('29-MAY-2010 14:16:42','DD-MON-YYYY HH24:MI:SS')"
SPFILE
set control_files='D:\STDBY\c1.ctl'
set db_file_name_convert='D:\Database\Ora11g2\ora11g2','d:\STDBY'
set log_file_name_convert='D:\Database\Ora11g2\ora11g2','d:\STDBY'
BACKUP LOCATION 'D:\dup11r2'
;

Here, the use of the BACKUP LOCATION clause identifies this duplication as backup-based, with no target connection and no recovery catalog.

More Examples:
# Backup files are in matching location to that on the source server.
# Duplicate database to TARGET's current state.
DUPLICATE TARGET DATABASE TO FGUARD SPFILE NOFILENAMECHECK;


# Duplicate database to TARGET's state 4 days ago.
DUPLICATE TARGET DATABASE TO FGUARD UNTIL TIME 'SYSDATE-4' SPFILE NOFILENAMECHECK;


# Backup files are in a different location to that on the source server.
# Duplicate database to the most recent state possible using the provided backups.
# Works with just an AUXILIARY connection only.
DUPLICATE DATABASE TO FGUARD SPFILE BACKUP LOCATION '/source/app/oracle/fast_recovery_area/FGUARD' NOFILENAMECHECK;



Check Status of DBs
Here are several scripts to monitor the status of the databases.
You will be executing them frequently to check that the databases are in sync and that there are no gaps between them.

On Primary Database:
select protection_mode, protection_level, database_role, switchover_status  from v$database;

PROTECTION_MODE        PROTECTION_LEVEL   DATABASE_ROLE  SWITCHOVER_STATUS
-------------------- ------------------   -------------- ------------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PRIMARY        TO STANDBY


On Secondary Database:
select protection_mode, protection_level, database_role, switchover_status  from v$database;

PROTECTION_MODE        PROTECTION_LEVEL   DATABASE_ROLE     SWITCHOVER_STATUS
-------------------- ------------------   ----------------- ------------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PHYSICAL STANDBY  NOT ALLOWED


On Primary Database:
alter system switch logfile;
archive log list;

On Secondary Database:
SELECT sequence#, first_time, next_time, applied FROM v$archived_log ORDER BY sequence#;

 SEQUENCE# FIRST_TIM NEXT_TIME APPLIED
---------- --------- --------- ---------
       480 07-FEB-12 07-FEB-12 YES
       481 07-FEB-12 07-FEB-12 YES
       482 07-FEB-12 07-FEB-12 YES
       483 07-FEB-12 07-FEB-12 YES
       484 07-FEB-12 07-FEB-12 YES
       485 07-FEB-12 07-FEB-12 IN-MEMORY

Start up Standby if DB is closed:

shutdown immediate;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Open Standby in Read-Only Mode:
alter database recover managed standby database cancel;
alter database open read only;


From Read-Only to Standby
shutdown immediate;
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Nice Script to check that both databases are in sync
CLEAR SCREEN
ARCHIVE LOG LIST;
col host_name for a10
col db_unique_name for a10
col name for a10
select NAME,DB_UNIQUE_NAME,HOST_NAME,OPEN_MODE,DATABASE_ROLE from v$database, v$instance ;


Verify that the Physical Standby Database is Performing Correctly
Once you create the physical standby database and set up redo transport services, you may want to verify database modifications are being successfully transmitted from the primary database to the standby database. To see that redo data is being received on the standby database, you should first identify the existing archived redo log files on the standby database, force a log switch and archive a few online redo log files on the primary database, and then check the standby database again. The following steps show how to perform these tasks.
1. On the standby database, identify the existing archived redo log files by querying the V$ARCHIVED_LOG view.
SELECT sequence#, first_time, next_time, applied
FROM v$archived_log
ORDER BY sequence#;

2. On the primary database, issue a number of ALTER SYSTEM SWITCH LOGFILE statements to archive a number of redo log files.
alter system switch logfile;
alter system switch logfile;
archive log list;

3. On the standby database, re-query the V$ARCHIVED_LOG view to verify the redo data was received and applied on the standby database.
SELECT sequence#, first_time, next_time, applied
FROM v$archived_log
ORDER BY sequence#;



Enable Flashback on Both Servers

This operation will allow us to perform switchover and switchback very simply.

This feature allows you to view the state of your database at a specified prior point in time. Oracle does this by keeping copies of all modified data blocks in flashback logs. The Flashback logs are written in the Flash Recovery Area; a directory specified by a new parameter db_recovery_file_dest.
Suppose you deleted/modified the configuration information for your application. Instead of performing a recovery operation on this database (and having the end users screaming while the application is offline), you can just ask the database to “put the table back the way it was 5 minutes ago”.  
Oracle automatically creates and manages Flashback Logs within the Flash Recovery Area.
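
Before enabling Flashback, it is worth confirming that the Flash Recovery Area is configured (an optional sanity check):
SQL> show parameter db_recovery_file_dest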

On The Primary Database:
SQL> alter system set db_flashback_retention_target=60 scope=both;
SQL> shutdown immediate;

On The Standby Database:
SQL> alter system set db_flashback_retention_target=60 scope=both;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database flashback on;
SQL> select name,database_role,flashback_on from v$database;
NAME       DATABASE_ROLE    FLASHBACK_ON
---------- ---------------- ------------------
PROD       PHYSICAL STANDBY YES


On The Primary Database:
SQL> startup mount
SQL> alter database flashback on;
SQL> alter database open;
SQL> select name,database_role,flashback_on from v$database;
NAME       DATABASE_ROLE    FLASHBACK_ON
---------- ---------------- ------------------
PROD       PRIMARY          YES



Set up the Data Guard Broker
If you want to use the Data Guard Broker, you will first need to configure and start the Data Guard Monitor process (DMON) on both sites:
connect sys@PROD as sysdba
alter system set DG_BROKER_START=false scope=both;
!
mkdir /home/oracle/DG
alter system set DG_BROKER_CONFIG_FILE1='/home/oracle/DG/My_Broker_DG1.dat' scope=both;
alter system set DG_BROKER_CONFIG_FILE2='/home/oracle/DG/My_Broker_DG2.dat' scope=both;

connect sys@STDBY as sysdba
alter system set DG_BROKER_START=false scope=both;
!
mkdir /home/oracle/DG
alter system set DG_BROKER_CONFIG_FILE1='/home/oracle/DG/My_Broker_DG1.dat' scope=both;
alter system set DG_BROKER_CONFIG_FILE2='/home/oracle/DG/My_Broker_DG2.dat' scope=both;


As you can see, we defined the location of the DG Broker Files.
If you need to change the location of the files the steps are below
1- stop the broker using the DG_BROKER_START parameter by setting this to false (on all nodes in the cluster)
2- change the DG file destination parameters
3- copy the files to the new location
4- then re-start the broker, again set the DG_BROKER_START parameter to true (on all nodes in the cluster)

Now we are ready to start the Broker, so you need to perform the following step in both sites:
connect sys@PROD as sysdba
alter system set DG_BROKER_START=true scope=both;

connect sys@STDBY as sysdba
alter system set DG_BROKER_START=true scope=both;

The next step is to configure the LISTENER.ORA.
The broker uses the Oracle Net Services to make connections to the databases, setup both redo transport and archive gap resolution and perform role transitions. We need to create a special static entry in the listener.ora file for each database in the broker configuration, this entry makes it possible for the broker to connect to an idle instance using a remote SYSDBA connection and perform the necessary startup.

GLOBAL_DBNAME should be set to <<db_unique_name>>_DGMGRL.<<db_domain>> in listener.ora on all instances of both primary and standby.
This is important otherwise you'll have TNS-12154 error during switchover operation.

Add those lines to BOTH listener.ora files, so they will look like:
SID_LIST_LISTENER =
 (SID_LIST =
   (SID_DESC =
     (GLOBAL_DBNAME = PROD_DGMGRL)
     (ORACLE_HOME= /home/oracle/app/oracle/product/11.2.0/dbhome_2)
     (SID_NAME = PROD)
   )
  )

SID_LIST_LISTENER =
 (SID_LIST =
   (SID_DESC =
     (GLOBAL_DBNAME = STDBY_DGMGRL)
     (ORACLE_HOME= /home/oracle/app/oracle/product/11.2.0/dbhome_2)
     (SID_NAME = STDBY)
   )
  )

And Re-start the Listener on all instances


Now we can enter data guard broker (dgmgrl) on any of the servers.
Connect as user SYS using the appropriate password. The SQL*Plus-style '/ as sysdba' connection cannot be used with the broker.
At any time, you can use the help command in dgmgrl to get the command list.
$ dgmgrl
DGMGRL> connect sys

Now we create the configuration file:
DGMGRL> create configuration orcl_dg as primary database is PROD connect identifier is PROD;

Two configuration files appear in the location specified by the DG_BROKER_CONFIG_FILEn parameters ($ORACLE_HOME/dbs by default; /home/oracle/DG in this example); they are actually multiplexed copies of each other.
This provides redundancy in production environments.

Now add physical standby instance STDBY
DGMGRL> add database STDBY as connect identifier is STDBY maintained as physical;

Let's check its configuration:
DGMGRL> show configuration
Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    PROD  - Primary database
    STDBY - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED


Now we can enable the newly created configuration
DGMGRL> enable configuration
DGMGRL> show configuration
Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    PROD  - Primary database
    STDBY - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS


Performing actions with the Data Guard Broker Manager

There are a number of commands that you can use to change the state of the database

Turn off/on the redo transport service for all standby databases   
On Primary
    DGMGRL> edit database PROD set state=transport-off;
    DGMGRL> edit database PROD set state=transport-on;

Turn off/on the apply state   
On Standby
    DGMGRL> edit database STDBY set state=apply-off;
    DGMGRL> edit database STDBY set state=apply-on;

Put a database into a real-time query mode   
On Standby
    DGMGRL> edit database STDBY set state=apply-off;
    sql> alter database open read only;
    DGMGRL> edit database STDBY set state=apply-on;

Change the protection mode   
On Primary
# Choose what level of protection you require
sql> alter database set standby database to maximize performance;
sql> alter database set standby database to maximize availability;
sql> alter database set standby database to maximize protection;

Check the current configuration mode:
DGMGRL> show database PROD LogXptMode


Maximum Protection with Logical Standby Database
dgmgrl> edit database PROD set property LogXptMode=SYNC;
dgmgrl> edit database STDBY set property LogXptMode=SYNC;
dgmgrl> edit configuration set protection mode as maxprotection;
dgmgrl> show configuration;


Change of Protection Level and Transport Method
dgmgrl> connect sys@PROD
dgmgrl> show configuration verbose;
dgmgrl> show database verbose STDBY ;

Maximum Protection: here two standby databases are recommended. The changes are always made on both Primary and Standby in case of a later SWITCHOVER.
dgmgrl> edit database PROD set property LogXptMode=SYNC;
dgmgrl> edit database STDBY set property LogXptMode=SYNC;
dgmgrl> edit configuration set protection mode as maxprotection;
dgmgrl> show configuration;

Maximum Availability
dgmgrl> edit database PROD set property LogXptMode=SYNC;
dgmgrl> edit database STDBY set property LogXptMode=SYNC;
dgmgrl> edit configuration set protection mode as maxavailability;
dgmgrl> show configuration;

Maximum Performance with LGWR-Transport
If there was a higher protection level beforehand, it must be lowered to Maximum Performance first:
dgmgrl> edit configuration set protection mode as maxperformance;
dgmgrl> edit database PROD set property LogXptMode=ASYNC;
dgmgrl> edit database STDBY set property LogXptMode=ASYNC;
dgmgrl> show configuration;

Maximum Performance with ARCH-Transport 
dgmgrl> edit configuration set protection mode as maxperformance;
dgmgrl> edit database PROD set property LogXptMode=ARCH;
dgmgrl> edit database STDBY set property LogXptMode=ARCH;
dgmgrl> show configuration;



Quick List of Equivalent Broker Commands to 'ALTER SYSTEM':
SQL> alter database recover managed standby database cancel;
DGMGRL> edit database 'STDBY' set state='LOG-APPLY-OFF';

SQL> alter database recover managed standby database disconnect;
DGMGRL> edit database 'STDBY' set state='ONLINE';

SQL> alter system set log_archive_max_processes=4;
DGMGRL> edit database 'PROD' set property 'LogArchiveMaxProcesses'=4;

SQL> alter system set log_archive_dest_state_2='enable' scope=both;
DGMGRL> edit database 'STDBY' set property 'LogShipping'='ON';

SQL> alter system set log_archive_dest_state_2='defer' scope=both;
DGMGRL> edit database 'STDBY' set property 'LogShipping'='OFF';
DGMGRL> edit database 'PROD' set state='LOG-TRANSPORT-OFF';
This will defer log shipping to all standby databases



Add the Logical Standby to the Broker Configuration if that has been configured:


$ dgmgrl

dgmgrl> connect sys@PROD
dgmgrl> show configuration;
dgmgrl> add database RECO as connect identifier is RECO maintained as logical;
dgmgrl> enable database RECO;
dgmgrl> show configuration;



Switchover
Switchover: Primary and Standby exchange their roles
dgmgrl> connect sys@PROD
dgmgrl> switchover to STDBY;

After it completes, issue startup mount at old primary site, which is now new standby.
DGMGRL> show configuration

Configuration - orcl_dg
Protection Mode: MaxAvailability
Databases:

STDBY - Primary database
PROD  - Physical standby database

Fast-Start Failover: DISABLED
Configuration Status
SUCCESS

You can switch roles back by issuing a new switchover command from dgmgrl
DGMGRL> switchover to PROD;
And again issue startup mount on the previous primary after it has been brought down, if necessary.


Failover
Failover: the Primary is lost, inaccessible, or hit by some other type of fault, and we need to bring the former standby up as the new primary.
If the old primary comes back alive after the failover, it will need to be either flashed back to the SCN at which the failover took place, or completely re-built from the new primary by duplication.
dgmgrl>failover to STDBY;

Performing failover NOW, please wait...

Failover succeeded, new primary is "STDBY"

DGMGRL> show configuration
Configuration - orcl_dg

Protection Mode: MaxAvailability
Databases:

STDBY - Primary database
Warning: ORA-16629: database reports a different protection level from the protection mode

PROD - Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated

Fast-Start Failover: DISABLED
Configuration Status:
WARNING



Re-Enabling Disabled Databases
To restore your original disaster-recovery solution after switchover to a logical standby database or after failover to any standby database, you may need to perform additional steps.

Databases that have been disabled after a role transition are not removed from the broker configuration, but they are disabled in the sense that the databases are no longer managed by the broker.

To re-enable broker management of these databases, you must reinstate or re-create the databases using one of the following procedures:
* If a database can be reinstated, the database will show the following status after a complete failover:
ORA-16661: the standby database needs to be reinstated
Reinstate the database using the DGMGRL REINSTATE DATABASE command or the reinstate option in Enterprise Manager

* If a database must be re-created from a copy of the new primary database, it will have the following status:
ORA-16795: the standby database needs to be re-created
Re-create the standby database from a copy of the primary database and then re-enable it

You can use the broker's REINSTATE command to re-enable the failed primary database after performing a complete failover to either a physical, snapshot, or logical standby database. You can also use the broker's REINSTATE command to re-enable any physical standby databases that were not the target of the failover operation but were disabled during a complete failover to a physical standby database.

IMPORTANT NOTE:
For the REINSTATE command to succeed, Flashback Database must have been enabled on the original primary database prior to the failover, and there must be sufficient flashback logs on that database. In addition, the database to be reinstated and the new primary database must have network connectivity.
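
A quick check you can run on the database to be reinstated (the same query used in the Flashback section above):
SQL> select name, database_role, flashback_on from v$database;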



To reinstate a database:
1 - Restart the database to the mounted state
2 - Connect to the new primary database
3 - Use Enterprise Manager or DGMGRL to reinstate the database

Reinstatement Using DGMGRL
Issue the following command while connected to any database in the broker configuration, except the database that is to be reinstated:
DGMGRL> REINSTATE DATABASE db_unique_name;
The newly reinstated standby database will begin serving as standby database to the new primary database.

Reinstatement Using Enterprise Manager
On the Data Guard Overview page, click the Database must be reinstated link. This brings up the General Properties page that provides a Reinstate button. After you click the Reinstate button, Enterprise Manager begins reinstating the database.
When the process is complete, the database will be enabled as a standby database to the new primary database, and Enterprise Manager displays the Data Guard Overview page.

When reinstating a failed primary database, the broker re-enables it as a standby database of the same type (physical or logical standby database) as the old standby database. When reinstating physical standby databases that were disabled during a failover, the broker re-enables them as physical standby databases to the new primary database.




Fast-Start Failover

Fast-start failover allows the broker to automatically fail over to a previously chosen standby database in the event of loss of the primary database. Fast-start failover quickly and reliably fails over the target standby database to the primary database role, without requiring the DBA to perform any manual steps to invoke the failover. Fast-start failover can be used only in a broker configuration and can be configured only through DGMGRL or Enterprise Manager.


Either maximum availability mode or maximum performance mode can be used with fast-start failover. Maximum availability mode provides an automatic failover environment guaranteed to lose no data. Maximum performance mode provides an automatic failover environment guaranteed to lose no more than the amount of data (in seconds) specified by the FastStartFailoverLagLimit configuration property. This property indicates the maximum amount of data loss that is permissible in order for an automatic failover to occur. It is only used when fast-start failover is enabled and the configuration is operating in maximum performance mode.
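
For example, a sketch of setting this property (the 30-second value matches the Lag Limit shown in the SHOW FAST_START FAILOVER output later in this section); it is only honored in maximum performance mode:
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;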

FSFO is a very nice feature that plays a big role in Oracle's Maximum Availability Architecture.

We are assuming that the Flashback Database feature is enabled on both the primary and the standby databases.

Since it is certainly possible that more than one physical standby database could exist in a Data Guard configuration, the first thing we need to establish is which physical standby database should be paired with the primary database in case a fast-start failover is initiated. In this example, I'll do that by setting a value for the FastStartFailoverTarget property via the DGMGRL utility. Note that I've also chosen the primary database as the fast-start failover target for the selected physical standby database:

DGMGRL> EDIT DATABASE PROD SET PROPERTY FastStartFailoverTarget = 'STDBY';
DGMGRL> EDIT DATABASE STDBY SET PROPERTY FastStartFailoverTarget = 'PROD';

Next, we establish how long the Fast-Start Failover Observer should wait until it decides that the primary database is unreachable by setting a value of 180 seconds for the FastStartFailoverThreshold parameter:
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = '180';

Then we will setup the Log Mode and Protection Mode with the following commands:
DGMGRL> EDIT DATABASE 'PROD' SET PROPERTY LogXptMode=SYNC;
DGMGRL> EDIT DATABASE 'STDBY' SET PROPERTY LogXptMode=SYNC;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;

Now we are ready to enable Fast-Start Failover:
DGMGRL> enable fast_start failover

If you check the configuration at this moment you may find the following warning: ORA-16608: one or more databases have warnings.
If you check the status of one of the databases, you can see this warning: ORA-16819: Fast-Start Failover observer not started

Finally, we can start the observer. Ideally the observer should be started on a third host, separate from the primary and standby, so that if the primary host goes down the observer can still detect the outage and initiate the specified actions.
In this case we will start it from the Standby server:
DGMGRL> connect sys@STDBY
DGMGRL> start observer

The session will seem to hang because the observer runs in the foreground (pressing Ctrl-C stops the observer). To keep it running in the background instead, create and run the command with nohup:
nohup dgmgrl sys/oracle@STDBY "start observer file='$ORACLE_HOME/dbs/fsfo.dat'" -logfile $HOME/observer.log &
This will start the observer process and write its output to a logfile called 'observer.log' under the $HOME directory.

And another way to start it:
dgmgrl -logfile $HOME/observer.log sys/oracle@STDBY "start observer" &

Now that the basic fast-start failover configuration is completed, we can confirm its status with the SHOW FAST_START FAILOVER command:
DGMGRL> show fast_start failover
Fast-Start Failover: ENABLED
  Threshold:         180 seconds
  Target:            PROD
  Observer:          STDBY
  Lag Limit:         30 seconds
  Shutdown Primary:  TRUE
  Auto-reinstate:    TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES
Oracle Error Conditions:     (none)