The following documentation provides instructions for building an Oracle 12c database with ASM on Red Hat. 12c comes with a number of new features and architecture changes, Flex ASM and Container Databases to name just two. This guide shows how to install a standard clustered database of the type used in 11g; it does not make use of Flex ASM or Container Databases.
Further reference documentation about Oracle 12c and Oracle Linux is detailed in the Other References section below.
You must also have sufficient disk space in the software installation locations to store the Oracle software, as described in the following table.

Location | Amount | Purpose
---|---|---
Grid home directory | At least 8 GB | Software installation of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM)
Grid home directory | 100 GB is recommended | Additional disk space for the associated log files and patches
Oracle base of the Oracle Grid Infrastructure installation owner (Grid user) | At least 10 GB | Oracle Clusterware and Oracle ASM log files and diagnostic collections generated by Trace File Analyzer (TFA) Collector
Oracle home | At least 6.4 GB | Oracle Database software binaries
First Step: Configure Operating System
Second Step: Install Grid Infrastructure
Third Step: Install the Database Software
Fourth Step: Create ASM Disk Group For DATA and FRA
Fifth Step: Create the Database
Extra Information
Useful Links
Ensure the Oracle Linux version is 6 or higher.
- cat /etc/*release*
Ensure the Oracle Linux Kernel version for both nodes is 2.6.32 or higher.
- uname -r
Set the kernel parameters for all nodes as follows. NOTE: If the current value for any parameter is higher than the value listed below, do not change the value of that parameter.
Add the following lines to the "/etc/sysctl.conf" file, or to a file called "/etc/sysctl.d/98-oracle.conf".
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824
# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2
# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2
# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
Add the following lines to a file called "/etc/security/limits.d/oracle-database-server-12cR2-preinstall.conf".
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 134217728
oracle soft memlock 134217728
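The preinstall limits file above only covers the oracle user. Since this guide uses a separate grid user for the Grid Infrastructure, you will probably want equivalent entries for that account as well; a minimal sketch, assuming the same values are appropriate:
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728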
Run one of the following commands to apply the new kernel parameters, depending on which file you edited:
/sbin/sysctl -p
/sbin/sysctl -p /etc/sysctl.d/98-oracle.conf
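To confirm the new values are active, you can query a few of them directly; for example:
/sbin/sysctl fs.file-max kernel.sem kernel.shmmax net.ipv4.ip_local_port_range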
Upgrade Packages
On each node
yum upgrade
yum install oracleasm-support
Disable SELinux
On each node
Set SELinux to either disabled or permissive mode.
vi /etc/selinux/config
SELINUX=disabled
Stop and Disable the Firewall
On each node
You can re-enable the firewall after the installation, but you must then open the necessary ports.
systemctl stop firewalld.service
systemctl disable firewalld.service
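If you re-enable firewalld after the installation, at minimum the listener port must be reachable from clients and from the other node; a sketch assuming the default port 1521 (add 5500/tcp if you use EM Express):
systemctl start firewalld.service
systemctl enable firewalld.service
firewall-cmd --permanent --add-port=1521/tcp
firewall-cmd --reload
The private interconnect interface should not be firewalled between the nodes.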
Create Users and Groups on each node
oracle:x:1000:oracle
onboard:x:1001:
asmdba:x:1002:
oinstall:x:54321:oracle
dba:x:54322:oracle
oper:x:54323:
backupdba:x:54324:
dgdba:x:54325:
kmdba:x:54326:
racdba:x:54330:
groupadd -g 1003 asmadmin
groupadd -g 1004 asmoper
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 505 asmdba
useradd -u 1000 -g oinstall -G dba,asmdba,oper oracle
passwd oracle
useradd -u 1002 -g oinstall -G asmadmin,asmoper,asmdba grid
passwd grid
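You can confirm the users and their group memberships on each node before continuing; for example:
id oracle
id grid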
Configure Bash Profile of Users
Connect as the oracle user on node 1 and:
vi .bash_profile
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=wfrac0081.us.oracle.com
export ORACLE_UNQNAME=RACTEST
export ORACLE_BASE=/oracle/db/12.2.0.1
export DB_HOME=$ORACLE_BASE/db_home
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=RACTEST1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias home='cd /oracle/db/12.2.0.1/db_home'
Connect as the oracle user on node 2 and:
vi .bash_profile
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=wfrac0082.us.oracle.com
export ORACLE_UNQNAME=RACTEST
export ORACLE_BASE=/oracle/db/12.2.0.1
export DB_HOME=$ORACLE_BASE/db_home
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=RACTEST2
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias home='cd /oracle/db/12.2.0.1/db_home'
Connect as the grid user on node 1 and edit the .bash_profile:
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=wfrac0081.us.oracle.com
export ORACLE_UNQNAME=RACTEST
export ORACLE_BASE=/oracle/gridbase/12.2.0.1
export GRID_BASE=/oracle/gridbase/12.2.0.1
export GRID_HOME=/oracle/grid/12.2.0.1/grid_home
export ORACLE_HOME=/oracle/grid/12.2.0.1/grid_home
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias home='cd /oracle/grid/12.2.0.1/grid_home'
Connect as the grid user on node 2 and edit the .bash_profile:
# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=wfrac0082.us.oracle.com
export ORACLE_UNQNAME=RACTEST
export ORACLE_BASE=/oracle/gridbase/12.2.0.1
export GRID_BASE=/oracle/gridbase/12.2.0.1
export GRID_HOME=/oracle/grid/12.2.0.1/grid_home
export ORACLE_HOME=/oracle/grid/12.2.0.1/grid_home
export ORACLE_SID=+ASM2
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias home='cd /oracle/grid/12.2.0.1/grid_home'
Create Directories on each Node as root
mkdir -p /oracle/grid/12.2.0.1/grid_home
mkdir -p /oracle/grid/12.2.0.1/log
mkdir -p /oracle/gridbase/12.2.0.1/
mkdir -p /oracle/db/12.2.0.1/db_home
chown -R oracle.oinstall /oracle/
chown -R grid.oinstall /oracle/grid*
chmod -R 775 /oracle/
Configure ASM Disks on each Node as root
# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
Create a partition on each device with fdisk. The final layout will be:
ASM1 --> /dev/xvdb 53.6 GB
ASM2 --> /dev/xvdc 53.6 GB
DATA --> /dev/xvdd 214.7 GB
fdisk /dev/xvdb
fdisk /dev/xvdc
fdisk /dev/xvdd
oracleasm createdisk ASM1 /dev/xvdb1
oracleasm createdisk ASM2 /dev/xvdc1
oracleasm createdisk DATA /dev/xvdd1
oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM1"
Instantiating disk "ASM2"
Instantiating disk "DATA"
oracleasm listdisks
ASM1
ASM2
DATA
The disks must be visible on each node with the oracleasm listdisks command.
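The createdisk commands only need to be run on one node; on the other node a rescan is enough to make the labelled disks visible. For example, as root on the second node:
oracleasm scandisks
oracleasm listdisks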
ll /dev/oracleasm/disks/
total 0
brw-rw----. 1 grid asmadmin 202, 17 Jun 23 12:46 ASM1
brw-rw----. 1 grid asmadmin 202, 33 Jun 23 12:46 ASM2
brw-rw----. 1 grid asmadmin 202, 49 Jun 23 12:46 DATA
oracleasm querydisk -d DATA
Disk "DATA" is a valid ASM disk on device [202,49]
oracleasm querydisk -d ASM1
Disk "ASM1" is a valid ASM disk on device [202,17]
oracleasm querydisk -d ASM2
Disk "ASM2" is a valid ASM disk on device [202,33]
How to Configure, Create, Scan, List, Query, Rename, Delete OracleASM disk in Linux:
https://www.2daygeek.com/create-scan-list-query-rename-delete-configure-oracleasm-disk-linux/
Configure Passwordless SSH Connection (recommended)
You can configure the passwordless SSH connection manually. The grid installation can also do this for you, but sometimes it fails with an error.
Create SSH Key on First Node as Grid User:
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
cd /home/grid/.ssh/
cat id_dsa.pub >> authorized_keys
In the .ssh directory, you should see the id_dsa.pub key that you have created, and the file authorized_keys.
scp authorized_keys node2:/home/grid/.ssh/
Your output should be similar to the following, where xxx represents parts of a valid IP address:
[grid@node1 .ssh]$ scp authorized_keys node2:/home/grid/.ssh/
The authenticity of host 'node2 (xxx.xxx.173.152)' can't be established.
DSA key fingerprint is 7e:60:60:ae:40:40:d1:a6:f7:4e:zz:me:a7:48:ae:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,xxx.xxx.173.152' (dsa) to the list of known hosts.
grid@node2's password:
authorized_keys                100%  828   7.5MB/s   00:00
Using SSH, log in to the node where you copied the authorized_keys file. Then change to the .ssh directory, and using the cat command, add the DSA key for the second node to the authorized_keys file, pressing Enter when you are prompted for a password, so that passwordless SSH is set up:
[grid@node1 .ssh]$ ssh node2
[grid@node2 grid]$ cd .ssh
[grid@node2 .ssh]$ cat id_dsa.pub >> authorized_keys
Repeat these steps from each node to each other member node in the cluster.
When you have added keys from each cluster node member to the authorized_keys file on the last node you want to have as a cluster node member, then use scp to copy the authorized_keys file with the keys from all nodes back to each cluster node member, overwriting the existing version on the other nodes.
To confirm that you have all nodes in the authorized_keys file, enter the command more authorized_keys, and determine if there is a DSA key for each member node. The file lists the type of key (ssh-dsa), followed by the key, and then followed by the user and server. For example:
ssh-dsa AAAABBBB . . . = grid@node1
ssh-dsa AAAABBBB . . . = grid@node2
The grid user's ~/.ssh/authorized_keys file on every node must contain the contents from all of the ~/.ssh/id_dsa.pub files that you generated on all cluster nodes.
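Before moving on, it is worth confirming that passwordless SSH works in every direction without prompting; a quick check as the grid user from each node, using the host names in this guide:
for host in wfrac0081 wfrac0082; do ssh $host hostname; done
Each command should print the remote host name without asking for a password or a host-key confirmation.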
Oracle Linux 7 uses chrony for NTP synchronization. You can put your NTP server information in the /etc/chrony.conf file.
On each node as root:
vi /etc/chrony.conf
server ntp.yourdomain
systemctl restart chronyd.service
systemctl enable chronyd.service
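You can verify that each node is synchronising against your NTP server; for example:
chronyc sources -v
chronyc tracking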
ifconfig
eth0: inet 10.22.91.164 netmask 255.255.224.0 broadcast 10.22.95.255
eth1: inet 192.168.0.81 netmask 255.255.255.0 broadcast 192.168.0.255
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
        inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
cat /etc/resolv.conf
nameserver 10.209.76.198
nameserver 10.209.76.197
Configure Hosts File
Your public IP addresses and your virtual IP addresses must be on the same subnet.
On each Node as root:
vi /etc/hosts
#Public
10.22.91.164 wfrac0081.us.oracle.com wfrac0081
10.22.91.165 wfrac0082.us.oracle.com wfrac0082
#Virtual
10.22.91.166 wfrac0081-vip.us.oracle.com wfrac0081-vip
10.22.91.167 wfrac0082-vip.us.oracle.com wfrac0082-vip
#Private
192.168.0.81 wfrac0081-priv.us.oracle.com wfrac0081-priv
192.168.0.82 wfrac0082-priv.us.oracle.com wfrac0082-priv
#SCAN
10.22.91.168 wfrac008182-r.us.oracle.com wfrac008182-r
10.22.91.169 wfrac008182-r.us.oracle.com wfrac008182-r
10.22.91.170 wfrac008182-r.us.oracle.com wfrac008182-r
DNS Register (optional)
If you do not want a warning at the last step of the installation, add the SCAN alias to your DNS server. Also, configure the DNS information for the nodes in the /etc/resolv.conf file.
Preinstallation Package Installation
On each node as root:
yum install oracle-database-server-12cR2-preinstall
reboot
Copy and Unzip Grid Files as root
cp linuxx64_12201_grid_home.zip /oracle/grid/12.2.0.1/grid_home
cd /oracle/grid/12.2.0.1/grid_home
chown grid.oinstall linuxx64_12201_grid_home.zip
su - grid
cd /oracle/grid/12.2.0.1/grid_home
unzip linuxx64_12201_grid_home.zip
Install the following package from the grid home as the "root" user on all nodes.
su -
# Local node.
cd /oracle/grid/12.2.0.1/grid_home/cv/rpm
rpm -Uvh cvuqdisk*
# Remote node.
scp ./cvuqdisk* root@node2:/tmp
ssh root@node2 rpm -Uvh /tmp/cvuqdisk*
exit
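Optionally, before launching the installer, you can run the Cluster Verification Utility that ships in the unzipped grid home to catch remaining configuration problems early; a sketch as the grid user, using the node names from this guide:
su - grid
cd /oracle/grid/12.2.0.1/grid_home
./runcluvfy.sh stage -pre crsinst -n wfrac0081,wfrac0082 -verbose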
Display Config and Start Grid Installation
su -
export DISPLAY=:0.0
xhost +
su - grid
export DISPLAY=:0.0
xhost +
cd $GRID_HOME
./gridSetup.sh
The GIMR was introduced with Oracle 12c. Every Oracle Standalone Cluster and Oracle Domain Services Cluster contains a Grid Infrastructure Management Repository (GIMR), also known as the Management Database (MGMTDB).
The Grid Infrastructure Management Repository (GIMR) is a multitenant database with a pluggable database (PDB) for the GIMR of each cluster. The GIMR stores operational information about the cluster, such as Cluster Health Monitor data.
You have to give at least 40 GB of space to the GIMR data. I gave 100 GB on this system and used a separate disk area for the GIMR database.
On the next screen you can answer No or Yes to creating a separate disk group for the GIMR data.
If NO:
Select the "No" option, as we don't want to create a separate disk group for the GIMR in this case. Click the "Next" button:
Set the redundancy to "External", click the "Change Discovery Path" button and set the path to "/dev/oracleasm/disks/*". Return to the main screen and select your disks. In this example I selected all of them, but you can select just one; later, using ASMCA, you can create more disk groups from the remaining disks.
Uncheck the "Configure Oracle ASM Filter Driver" option, then click the "Next" button.
If YES
Select YES
Change the Name to GRID, click on "Change Discovery Path" and select /dev/oracleasm/disks/*
Then all the disks will be displayed:
Enter the credentials and click the "Next" button.
Accept the default IPMI option by clicking the "Next" button.
Don't register with EM. Click the "Next" button.
Define the proper groups:
Accept the default inventory directory (it was defined on the .bash_profile) by clicking the "Next" button.
If you want the root scripts to run automatically, enter the relevant credentials. I prefer to run them manually. Click the "Next" button.
Wait while the prerequisite checks complete. If you have any issues use the "Fix & Check Again" button.
The "Fix & Check Again" button creates fix scripts to resolve warnings.
You have to run these scripts on both nodes to resolve the warnings.
Once possible fixes are complete, check the "Ignore All" checkbox and click the "Next" button. It is likely the "Physical Memory" and "Network Time Protocol (NTP)" tests will fail for this type of installation. This is OK.
If you are happy with the summary information, click the "Install" button.
When prompted, run the configuration scripts on each node. Run them one node at a time.
This process takes some time.
When this is done, you can check the grid status as the grid user with the crs_stat -t and crsctl stat res -t commands.
crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM.lsnr ora....er.type ONLINE ONLINE wfrac0081
ora.DATA.dg ora....up.type ONLINE ONLINE wfrac0081
ora....ER.lsnr ora....er.type ONLINE ONLINE wfrac0081
ora....AF.lsnr ora....er.type OFFLINE OFFLINE
ora....N1.lsnr ora....er.type ONLINE ONLINE wfrac0082
ora....N2.lsnr ora....er.type ONLINE ONLINE wfrac0081
ora....N3.lsnr ora....er.type ONLINE ONLINE wfrac0081
ora.MGMTLSNR ora....nr.type ONLINE ONLINE wfrac0081
ora.asm ora.asm.type ONLINE ONLINE wfrac0081
ora.chad ora.chad.type ONLINE ONLINE wfrac0081
ora.cvu ora.cvu.type ONLINE ONLINE wfrac0081
ora.mgmtdb ora....db.type ONLINE ONLINE wfrac0081
ora....network ora....rk.type ONLINE ONLINE wfrac0081
ora.ons ora.ons.type ONLINE ONLINE wfrac0081
ora.proxy_advm ora....vm.type OFFLINE OFFLINE
ora.qosmserver ora....er.type ONLINE ONLINE wfrac0081
ora.scan1.vip ora....ip.type ONLINE ONLINE wfrac0082
ora.scan2.vip ora....ip.type ONLINE ONLINE wfrac0081
ora.scan3.vip ora....ip.type ONLINE ONLINE wfrac0081
ora....81.lsnr application ONLINE ONLINE wfrac0081
ora....081.ons application ONLINE ONLINE wfrac0081
ora....081.vip ora....t1.type ONLINE ONLINE wfrac0081
ora....82.lsnr application ONLINE ONLINE wfrac0082
ora....082.ons application ONLINE ONLINE wfrac0082
ora....082.vip ora....t1.type ONLINE ONLINE wfrac0082
crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.DATA.dg
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.chad
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.net1.network
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.ons
ONLINE ONLINE wfrac0081 STABLE
ONLINE ONLINE wfrac0082 STABLE
ora.proxy_advm
OFFLINE OFFLINE wfrac0081 STABLE
OFFLINE OFFLINE wfrac0082 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE wfrac0082 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE wfrac0081 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE wfrac0081 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE wfrac0081 169.254.10.200 192.1
68.0.81,STABLE
ora.asm
1 ONLINE ONLINE wfrac0081 Started,STABLE
2 ONLINE ONLINE wfrac0082 Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE wfrac0081 STABLE
ora.mgmtdb
1 ONLINE ONLINE wfrac0081 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE wfrac0081 STABLE
ora.scan1.vip
1 ONLINE ONLINE wfrac0082 STABLE
ora.scan2.vip
1 ONLINE ONLINE wfrac0081 STABLE
ora.scan3.vip
1 ONLINE ONLINE wfrac0081 STABLE
ora.wfrac0081.vip
1 ONLINE ONLINE wfrac0081 STABLE
ora.wfrac0082.vip
1 ONLINE ONLINE wfrac0082 STABLE
--------------------------------------------------------------------------------
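In addition to crsctl, individual components can be checked with srvctl from the grid home; a few example checks as the grid user:
srvctl status asm
srvctl status nodeapps
srvctl config scan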
Unzip the database file under /tmp/ and give ownership to the oracle user.
mkdir /tmp/database
chown oracle.oinstall -R /tmp/database/
cp linuxx64_12201_database.zip /tmp/database/
cd /tmp/database/
unzip linuxx64_12201_database.zip
cd database
./runInstaller
Select the "Install database software only" option, then click the "Next" button.
Accept the "Oracle Real Application Clusters database installation" option by clicking the "Next" button.
Make sure both nodes are selected. Also set up SSH connectivity if needed.
Then click the "Next" button.
Select the "Enterprise Edition" option, then click the "Next" button.
Define the locations, then click the "Next" button.
Select the desired operating system groups, then click the "Next" button.
Wait for the prerequisite check to complete. If there are any problems either click the "Fix & Check Again" button, or check the "Ignore All" checkbox and click the "Next" button.
If you are happy with the summary information, click the "Install" button.
Execute the scripts as root when prompted:
cd /oracle/db/12.2.0.1/db_home/
sh root.sh
Close the installer when done.
If you need to create more disk groups, follow these steps (shown as an example; I did not use it in this installation):
Login as the grid user and start asmca from /oracle/grid/12.2.0.1/grid_home/bin/asmca
Create the 'DATA' disk group with External Redundancy by selecting appropriate candidate disks
Create two 'REDO' disk groups - REDO1 and REDO2 - with External Redundancy by selecting at least one candidate disk per REDO disk group
Create the 'FRA' disk group with External Redundancy by selecting appropriate candidate disks
Create the 'TEMP' disk group with External Redundancy by selecting appropriate candidate disks
Verify all required disk groups and click Exit to close the ASMCA utility
Change ASM striping to fine-grained for the REDO, TEMP and FRA disk groups as the grid user using the commands below.
SQL> ALTER DISKGROUP REDO1 ALTER TEMPLATE onlinelog ATTRIBUTES (fine);
SQL> ALTER DISKGROUP REDO2 ALTER TEMPLATE onlinelog ATTRIBUTES (fine);
SQL> ALTER DISKGROUP TEMP ALTER TEMPLATE tempfile ATTRIBUTES (fine);
SQL> ALTER DISKGROUP FRA ALTER TEMPLATE onlinelog ATTRIBUTES (fine);
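You can confirm the change by querying the disk group templates from the ASM instance; a minimal check as the grid user, assuming the disk group names above:
sqlplus / as sysasm
SQL> SELECT dg.name, t.name, t.stripe FROM v$asm_template t, v$asm_diskgroup dg WHERE dg.group_number = t.group_number AND dg.name IN ('REDO1','REDO2','TEMP','FRA') AND t.name IN ('ONLINELOG','TEMPFILE');
The STRIPE column should show FINE for the templates you changed.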
The following steps are applicable for node 1 of your cluster environment, unless otherwise specified:
cd $ORACLE_HOME
dbca
In the Select Database Operation window, select Create Database and click Next
Select the "Advanced Configuration" option.
Review the Defaults and select Next
In the Select List of nodes window select nodes and click Next
In the Specify Database Identification Details window, enter the global database name and SID prefix (RACTEST in this guide) and click Next
In the Storage Options window, select the datafiles storage location, unselect Use Oracle-Managed Files (OMF) and click Next:
Accept the Warning:
In the Select Fast Recovery Option window, select Specify Fast Recovery Area location and define the size. Then click Next
In the Select Oracle Data Vault Config Option window, select default values and click Next
In the Specify Configuration Options window, enter the required SGA and PGA values and click Next
In the Specify Management Options window, select "Run Cluster Verification periodically" and click Next
In the Specify Database User Credentials window, enter the passwords and click Next
In the Select Database Creation Options window, click on Customize Storage Locations.
Then click on Redo Log Groups and increase their size to at least 800 MB.
Also click on each Redo Log File to define its redo disk group location and multiplex each one of them. Click on "Update All" and "Apply". Then click Next.
In the Summary window, click Finish to create database
Click Close on the Finish window after the database creation is complete. The RAC database creation is now complete.
1. Create the extra Disk Groups
- . oraenv
- ORACLE_SID = [oracle] ? +ASM1
- sqlplus / as sysasm
- SQL> CREATE DISKGROUP DATA01 EXTERNAL REDUNDANCY DISK '/dev/asm-data0*' ATTRIBUTE 'COMPATIBLE.ASM'='12.1.0.0.0','COMPATIBLE.RDBMS'='12.1.0.0.0','AU_SIZE'='4M';
- SQL> CREATE DISKGROUP FRA01 EXTERNAL REDUNDANCY DISK '/dev/asm-fra0*' ATTRIBUTE 'COMPATIBLE.ASM'='12.1.0.0.0','COMPATIBLE.RDBMS'='12.1.0.0.0','AU_SIZE'='4M';
- SQL> CREATE DISKGROUP REDO01 EXTERNAL REDUNDANCY DISK '/dev/asm-redo0[1,3,5,7]' ATTRIBUTE 'COMPATIBLE.ASM'='12.1.0.0.0','COMPATIBLE.RDBMS'='12.1.0.0.0','AU_SIZE'='4M';
- SQL> CREATE DISKGROUP REDO02 EXTERNAL REDUNDANCY DISK '/dev/asm-redo0[2,4,6,8]' ATTRIBUTE 'COMPATIBLE.ASM'='12.1.0.0.0','COMPATIBLE.RDBMS'='12.1.0.0.0','AU_SIZE'='4M';
2. Update parameter file with new Diskgroups
- SQL> ALTER SYSTEM SET ASM_DISKGROUPS='DATA01','FRA01','REDO01','REDO02' SCOPE=BOTH SID='*';
- SQL> ALTER DISKGROUP VOTE01 SET ATTRIBUTE 'compatible.rdbms' = '12.1.0.0.0';
- SQL> quit
- srvctl start diskgroup -g DATA01
- srvctl start diskgroup -g FRA01
- srvctl start diskgroup -g REDO01
- srvctl start diskgroup -g REDO02
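To confirm the new disk groups are mounted on both nodes, you can check them with srvctl, matching the syntax used above:
- srvctl status diskgroup -g DATA01
- srvctl status diskgroup -g FRA01
- srvctl status diskgroup -g REDO01
- srvctl status diskgroup -g REDO02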
3. Configure ASM to use huge pages
- . oraenv
- ORACLE_SID = [oracle] ? +ASM1
- sqlplus / as sysasm
- SQL> alter system set memory_max_target=0 scope=spfile sid='*';
- SQL> alter system set memory_target=0 scope=spfile sid='*';
- SQL> alter system set sga_target=1088M scope=spfile sid='*';
- SQL> alter system set use_large_pages='ONLY' scope=spfile sid='*';
- SQL> quit
- sudo /u01/app/12.1.0/grid/bin/crsctl stop cluster -all
- sudo /u01/app/12.1.0/grid/bin/crsctl start cluster -all
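The use_large_pages='ONLY' setting only works if the operating system actually has enough huge pages reserved. A rough sketch of the OS side as root on each node, assuming 2 MB huge pages; the value 2048 below is only a placeholder and must be sized to cover the SGAs of all instances on the node:
grep Huge /proc/meminfo
echo "vm.nr_hugepages = 2048" >> /etc/sysctl.conf
sysctl -p
Then restart the cluster as shown above so the instances can attach to the huge pages.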
1. Log on as Oracle
2. Update the oratab
- ssh node1
- . oraenv
- vi /etc/oratab
- Add the following line
- MYRACDB01:/u01/app/oracle/product/12.1.0/dbhome_1:N
- ssh node2
- . oraenv
- vi /etc/oratab
- Add the following line
- MYRACDB02:/u01/app/oracle/product/12.1.0/dbhome_1:N
3. Set some database options and rename the spfile in ASM
- . oraenv
- ORACLE_SID = [oracle] ? RACTEST1
- sqlplus / as sysdba
- SQL> alter database flashback on;
- SQL> alter database force logging;
- SQL> alter system set cursor_sharing=force scope=spfile sid='*';
- SQL> alter system set control_file_record_keep_time=14 scope=spfile sid='*';
- SQL> alter system set use_large_pages='ONLY' scope=spfile sid='*';
- SQL> exec dbms_auto_task_admin.disable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL);
- SQL> alter system set undo_retention=54000 scope=spfile sid='*';
- SQL> alter system set inmemory_query='DISABLE' scope=spfile sid='*';
- SQL> quit
- srvctl stop cluster -all
- ssh myhost1
- cd $ORACLE_HOME/dbs
- mv initRACTEST1.ora /u01/app/oracle/admin/RACTEST/pfile/initRACTEST1.ora
- ln -s /u01/app/oracle/admin/RACTEST/pfile/initRACTEST1.ora
- ssh myhost2
- cd $ORACLE_HOME/dbs
- mv initRACTEST2.ora /u01/app/oracle/admin/RACTEST/pfile/initRACTEST2.ora
- ln -s /u01/app/oracle/admin/RACTEST/pfile/initRACTEST2.ora
- srvctl start cluster -all
- rman
- RMAN> connect target /
- RMAN> configure controlfile autobackup on;
- RMAN> quit
4. Implement Dead Connection Detection (DCD) on the databases.
- On each node perform the following
- cd $TNS_ADMIN
- vi sqlnet.ora
- Add the following line to the sqlnet.ora file.
- SQLNET.EXPIRE_TIME = 10
The following steps should be performed once the database has been built to ensure that the environment conforms to Oracle's best practices and does not fall foul of internal audit.
1. Remove the GRANT EXECUTE TO PUBLIC privilege from potentially harmful packages.
- sqlplus '/ as sysdba'
- SQL> revoke execute on utl_file from public;
- SQL> revoke execute on dbms_random from public;
- SQL> revoke execute on utl_http from public;
- SQL> revoke execute on utl_smtp from public;
- SQL> revoke execute on utl_tcp from public;
- SQL> exit
2. Ensure the following user-ids' passwords, where applicable, have been changed on ASM and the Database:
- SYS
- SYSTEM
- DBSNMP
- ASMSNMP
3. Adjust the database DEFAULT TABLESPACE settings so that the SYSTEM tablespace is not used inappropriately.
- sqlplus '/ as sysdba'
- SQL> alter database default tablespace users;
- SQL> alter user sys default tablespace system;
- SQL> alter user system default tablespace system;
4. Correct any users that already have SYSTEM as their DEFAULT TABLESPACE
- sqlplus '/ as sysdba'
- SQL> select username, default_tablespace from dba_users where default_tablespace = 'SYSTEM';
- SQL> alter user <user name> default tablespace <non-SYSTEM tablespace>;
- SQL> exit
5. Lock and expire potentially harmful built-in user-ids.
- sqlplus '/ as sysdba'
- SQL> alter user &user_id password expire account lock;
- Repeat for each user as required.
- SQL> exit
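After locking and expiring accounts, you can review which Oracle-supplied accounts remain open; a quick check using the ORACLE_MAINTAINED column available in 12c:
- sqlplus '/ as sysdba'
- SQL> select username, account_status from dba_users where oracle_maintained = 'Y' and account_status = 'OPEN';
- SQL> exit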