Chapter 4. Oracle RAC 12c Release 2 Configuration
4.1. Installing Oracle Grid Infrastructure (Required for ASM)
The installation of Oracle Grid Infrastructure for Oracle RAC 12c Release 2 is required for the use of Oracle ASM. Prior to the installation of the Oracle Grid Infrastructure, ensure that the prerequisites from the following sections have been met.
The reference environment uses /u01/app/12.2.0/grid as the Grid home. The owner is set to grid and the group is set to oinstall.
The following commands create the Grid home directory and set the appropriate permissions:
On each node within the Oracle RAC environment, as the root user:
# mkdir --parents /u01/app/12.2.0/grid
# chown --recursive grid.oinstall /u01
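Since the directories must exist on every node, the same commands can optionally be run once from node one with a simple loop. This is a sketch only; the node names below are hypothetical and passwordless root SSH between the nodes is assumed:
# for node in racnode1 racnode2 racnode3; do
    ssh root@${node} "mkdir --parents /u01/app/12.2.0/grid && chown --recursive grid.oinstall /u01"
  done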
The following steps are intended only for node one of the Oracle RAC Database environment unless otherwise specified.
- Download the Oracle Grid Infrastructure software files [9] from the Oracle Software Delivery Cloud.
- Change the ownership and permissions of the downloaded file, move the file to the Grid home, and install the unzip package to unpack the file.
# cd <grid_download_location>
# chown grid.oinstall V840012-01.zip
# mv V840012-01.zip /u01/app/12.2.0/grid
# yum install unzip
- ssh as the grid user with the -Y option, change directory into the Grid home /u01/app/12.2.0/grid, and unzip the downloaded zip file.
$ ssh -Y grid@<hostname>
$ cd /u01/app/12.2.0/grid
$ unzip -q V840012-01.zip
- As the grid user, start the OUI via the command:
$ /u01/app/12.2.0/grid/gridSetup.sh
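If gridSetup.sh fails with a DISPLAY error, the X11 forwarding established by ssh -Y can be confirmed first from the grid user's session. This is a minimal check; xdpyinfo is provided by the xorg-x11-utils package, and the DISPLAY value shown is only an example — any non-empty forwarded display indicates forwarding is working:
$ echo $DISPLAY
localhost:10.0
$ xdpyinfo >/dev/null && echo "X11 forwarding OK"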
Note: Ensure to issue ssh with the -Y option as the grid user from the client server. Otherwise, a DISPLAY error may occur.
- Within the Configuration Option window, select Configure Oracle Grid Infrastructure for a New Cluster and select Next.

Within the Cluster Configuration window, select Configure an Oracle Standalone Cluster and select Next.

Within the Grid Plug and Play window, enter the Cluster Name, SCAN Name, and SCAN Port and select Next.

Within the Cluster Node Information window, click the Add button to add each node within the Oracle RAC Database cluster and click OK. Each node within the Oracle RAC cluster requires the public hostname and VIP information.

Within the same Cluster Node Information window, select the SSH Connectivity button to set up passwordless SSH connectivity by entering the OS Password credentials for the grid user and clicking Setup. Once a dialog box returns with 'Successfully established passwordless SSH connectivity between the selected nodes', click OK and click Next to continue.
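Passwordless SSH can also be verified manually from node one before continuing; no password prompt should appear. The hostname below is hypothetical:
$ ssh grid@racnode2 hostname
racnode2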
- Within the Network Interface Usage window, select the Interface Name bond0 to be set as the Interface Type Public, and the Interfaces em3 and em4 to be set as the Interface Type ASM & Private. Any other interfaces should be set to Do Not Use. Select Next and continue.
- Within the Storage Option window, select Configure ASM using block devices.
Within the Grid Infrastructure Management window, select Yes to create a GIMR ASM diskgroup.

Within the Create ASM Disk Group window, provide the following:
- Disk group name, i.e. OCRVOTE
- Redundancy Level
  - External - redundancy is provided by the storage system RAID and not by Oracle ASM
  - Normal - provides two-way mirroring by Oracle ASM, thus providing two copies of every data extent
  - High - provides three-way mirroring by Oracle ASM, thus enduring the loss of two ASM disks within different failure groups
- Disks to be assigned to the Disk group, i.e. /dev/mapper/ocrvote1p1, /dev/mapper/ocrvote2p1, /dev/mapper/ocrvote3p1
Note: This reference environment uses Normal redundancy.
- Allocation Unit (AU) Size set to 4MB
A 4MB AU size is used to decrease the number of extents Oracle needs to manage. With fewer extents to manage, CPU utilization and memory consumption are reduced, thus improving performance. The AU size varies depending on the type of Oracle workload, I/O size per transaction, and overall disk group size. There is no single best AU size, but a good starting point is 4 MB. Please visit Oracle’s documentation [10] for more information.
To display the appropriate candidate disks, click on the Change Discovery Path button and enter as the Disk Discovery Path one of the following as appropriate:
For device mapper devices, type: /dev/mapper/*

Click Next once complete within the Create ASM Disk Group window.
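If no candidate disks are displayed even with the correct discovery path, it can help to confirm as the root user that the devices exist and are visible on every node. The device names below are the ones used by this reference environment:
# ls -l /dev/mapper/ocrvote*p1
# multipath -ll | grep -i ocrvote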
Within the GIMR Data Disk Group window, enter the Disk group name, select the appropriate Redundancy level, and select the disk. This reference architecture uses External redundancy and the disk labeled /dev/mapper/gimr1.

- Within the ASM Password window, specify the password for the SYS and ASMSNMP user accounts, click Next.
- Within the Failure Isolation window, enter the Intelligent Platform Management Interface (IPMI) information, or select Do not use IPMI. This reference environment does not use IPMI.
- Within the Management Options window, ensure the Register with Enterprise Manager (EM) Cloud Control is unchecked, click Next.
Within the Operating System Groups window, select the appropriate OS groups and click Next. The values as created and assigned within this reference environment are as follows:
- Oracle ASM Administrator Group – ASMADMIN
- Oracle ASM DBA Group – ASMDBA
- Oracle ASM Operator Group – ASMOPER

Within the Installation Location window, specify the appropriate Oracle base and software locations and click Next. The values set by this reference environment are as follows:
- Oracle base: /u01/app/grid
- Software location: /u01/app/12.2.0/grid
Within the Create Inventory window, specify the inventory directory and click Next. The value set by this reference environment is as follows:
- Inventory Directory: /u01/app/oraInventory
- Within the Root script execution configuration window, select the check box labeled Automatically run configuration scripts and enter the root user credentials. Specifying the root user credentials in order to run specific configuration scripts automatically at the end of the installation is optional. For the purposes of this reference environment, the root credentials are given in order to speed up the Oracle Grid Infrastructure installation process. Click Next.
- Within the Prerequisite Checks window, review the status and ensure there are no errors prior to continuing the installation. Initially, the cvuqdisk package needs to be installed. Select the Fix & Check Again button. Follow the instructions in the Oracle OUI to run the runfixup.sh script. The following check errors are common and may be ignored if verified.
- /dev/shm mounted as a temporary file system - This is related to a bug (Oracle DOC ID: 2065603.1) where the installer is looking for /dev/shm to be located in /etc/fstab. Within Red Hat Enterprise Linux 7, tmpfs is mounted on /dev/shm by default.
- Network Time Protocol (NTP) - This task verifies cluster time synchronization on clusters. Manually verify that ntpd is running on all nodes within the Oracle RAC cluster. If NTP is properly running and configured, this error can be safely ignored.
- Device Checks for ASM - This task verifies that the specified devices meet the requirements for ASM. In this particular case, the check reports that the /dev/mapper/ocrvote* devices are not shared across nodes. However, it can be confirmed with multipath -ll that they are shared, so this error can be safely ignored. The sketch after this list shows how these conditions can be confirmed manually.
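These conditions can be confirmed quickly from the command line on each node before dismissing the warnings:
# mount | grep /dev/shm
# systemctl is-active ntpd
# multipath -ll | grep -i ocrvote
The mount command should show tmpfs mounted on /dev/shm, systemctl should report active, and matching WWIDs in the multipath -ll output across all nodes confirm the ocrvote devices are indeed shared.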
- Within the Summary window, review all the information provided, and select Install to start the installation.
9: Oracle Database 12c Release 2 - V840012-01.zip from http://edelivery.oracle.com
10: Oracle ASM Extents - https://docs.oracle.com/database/121/OSTMG/GUID-1E5C4FAD-087F-4598-B959-E66670804C4F.htm
4.2. Installing Oracle 12c R2 Database Software
Prior to the installation of Oracle RAC 12c Release 2, ensure that the prerequisites from the following sections have been met.
The reference environment uses /u01/app/oracle as the Oracle base. The owner is set to oracle and the group is set to oinstall.
The following commands create the Oracle base directory and set the appropriate permissions:
As the root user, on node one:
# mkdir --parents /u01/app/oracle
# mkdir --parents /u01/app/oracle-software
# chown --recursive oracle.oinstall /u01/app/oracle
# chown --recursive oracle.oinstall /u01/app/oracle-software
On all other Oracle RAC Database nodes:
# mkdir --parents /u01/app/oracle
# chown --recursive oracle.oinstall /u01/app/oracle
The following steps are intended only for node one of the Oracle RAC Database environment unless otherwise specified.
- Download the Oracle Database software files [9] from the Oracle Software Delivery Cloud.
- As the root user, change the ownership and permissions of the downloaded file, move the file to the software staging location, and install the unzip package to unpack the file.
# cd <oracle_download_location>
# chown oracle.oinstall V839960-01.zip
# mv V839960-01.zip /u01/app/oracle-software
# yum install unzip
- ssh as the oracle user with the -Y option, change directory into /u01/app/oracle-software, and unzip the downloaded zip file.
$ ssh -Y oracle@<hostname>
$ cd /u01/app/oracle-software
$ unzip -q V839960-01.zip
- As the oracle user, start the OUI via the command:
$ /u01/app/oracle-software/database/runInstaller
Note: Ensure to issue ssh with the -Y option as the oracle user from the client server. Otherwise, a DISPLAY error may occur.
- Within the Configure Security Updates window, provide the My Oracle Support email address for the latest security issues information. Otherwise, uncheck I wish to receive security updates via My Oracle Support and click Next.
Within the Installation Option window, select Install database software only and click Next.

Within the Database Installation Options window, select Oracle Real Application Clusters database installation as the type of database installation being performed and click Next.

Within the Nodes Selection window, ensure all nodes for the Oracle RAC database cluster are checked and click on the SSH Connectivity button. Within the OS Password dialog box, enter the password for the oracle user and click Setup. Once a dialog box returns with Successfully established passwordless SSH connectivity between the selected nodes, click OK and Next to continue.
- Within the Database Edition window, select the appropriate database edition and click Next. For the purposes of this reference environment, Enterprise Edition is the edition of choice.
Within the Installation Location window, select the appropriate Oracle base and software location and click Next. For the purposes of this reference environment, the following values are set:
- Oracle base: /u01/app/oracle
- Software Location: /u01/app/oracle/product/12.2.0/dbhome_1
Within the Operating System Groups window, select the appropriate OS groups and click Next. For the purposes of this reference environment, the following values are set as:
- Database Administrator group – DBA
- Database Operator group – OPER
- Database Backup and Recovery group – BACKUPDBA
- Data Guard Administrative group – DGDBA
- Encryption Key Management Administrative group – KMDBA
- Oracle Real Application Cluster Administration group – RACDBA

Within the Prerequisite Checks window, review the status and ensure there are no errors prior to continuing the installation.
The following check errors are common and may be ignored if verified.
- /dev/shm mounted as a temporary file system - This is related to a bug (Oracle DOC ID: 2065603.1) where the installer is looking for /dev/shm to be located in /etc/fstab. Within Red Hat Enterprise Linux 7, tmpfs is mounted on /dev/shm by default.
- Clock Synchronization - This task checks to see if the NTP daemon or service is running. Manually verify that all nodes across the Oracle RAC Database cluster are running the ntpd service. If so, this error can be safely ignored.
- Maximum locked shared memory check - This task checks if memlock is set within the /etc/security/limits.conf file and is only a warning. Setting memlock allows the oracle user to lock a certain amount of memory in physical RAM that is not swapped out. The value is expressed in kilobytes and is important from the Oracle perspective because it provides the oracle user permission to use huge pages. This warning can be safely ignored at the moment of installation as memlock is configured later during the setup of huge pages. More information regarding huge pages can be found in Section 4.5, “Enabling HugePages”. Both conditions can be confirmed manually as shown in the sketch after this list.
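A quick way to check both items from the command line on each node (ulimit -l reports the memlock limit in kilobytes):
# systemctl is-active ntpd
# su - oracle -c 'ulimit -Sl; ulimit -Hl'
The two ulimit values are the soft and hard memlock limits currently in effect for the oracle user.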
- Within the Summary window, review all the information provided, and select Install to start the installation.
Once the installation completes, execute the scripts within the Execute Configuration scripts window. As the root user on each Oracle node, run the following:
# /u01/app/oracle/product/12.2.0/dbhome_1/root.sh
Note: In the example above, /u01/app/oracle/product/12.2.0/dbhome_1 is the Oracle home directory.
- Click OK within the Execute Configuration scripts window.
- Within the Finish window, verify the installation was successful and click Close.
4.3. Creating ASM Diskgroups via the ASM Configuration Assistant (ASMCA)
Prior to the creation of an Oracle RAC database, create the Database (DATA), Fast Recovery Area (FRA), and Redo Logs Oracle ASM disk groups via Oracle’s ASM Configuration Assistant (ASMCA).
The following steps should be done on node one of the Oracle RAC Database cluster environment.
- ssh with the -Y option as the grid user is required prior to running asmca. As the grid user, start asmca via the following command:
$ /u01/app/12.2.0/grid/bin/asmca
Note: /u01/app/12.2.0/grid is the Grid home directory.
- Via the asmca application, select Disk Groups and click Create.
Within the Create Disk Group window, provide the following:
- A name for the disk group, i.e. DATA
- Redundancy level for the disk group, i.e. External Redundancy
- Selection of the disks to be added to the disk group, i.e. /dev/mapper/fra1
- Select an AU Size, i.e. 4 MB
To display the appropriate eligible disks, click on the Change Discovery Path button and enter as the Disk Discovery Path one of the following as appropriate:
For Device Mapper devices, type: /dev/mapper/*
Click the OK button once the steps above are complete.

- Repeat the above steps to configure additional disk groups. It is recommended, though not required, to create a separate disk group for the Redo logs.
- Once all the disk groups are created, click the Exit button from the main ASM Configuration Assistant window. Click Yes when asked to confirm quitting the application.
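After exiting, the newly created disk groups can be confirmed from the command line as the grid user; the lsdg output includes the state, redundancy type, and AU size of each disk group. This assumes the grid user's environment points at the local ASM instance (e.g. ORACLE_SID set to +ASM1):
$ /u01/app/12.2.0/grid/bin/asmcmd lsdg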
4.4. Creating Pluggable Databases using Database Configuration Assistant (DBCA)
With the introduction of Oracle Database 12c, Oracle introduced the Multitenant architecture. The Multitenant architecture provides the ability to consolidate multiple databases, known as pluggable databases (PDBs), into a single container database (CDB). It provides advantages [11] that include easier management and monitoring of the physical database, fewer patches and upgrades, performance metrics consolidated into one CDB, and sizing one SGA instead of multiple SGAs. While using the Multitenant architecture is optional, this reference architecture focuses on describing the step-by-step procedure of taking advantage of it. When creating an Oracle database, the recommended method is the usage of the dbca utility. Prior to getting into the details of installing a container database (CDB) and deploying pluggable databases (PDB), an overview of the key concepts of the Multitenant architecture is provided.
Container [11] – is a collection of schemas, objects, and related structures in a multitenant container database (CDB) that appears logically to an application as a separate database. Within a CDB, each container has a unique ID and name.
A CDB consists of two types of containers: the root container and all the pluggable databases that attach to a CDB.
Root container [11] – also known as the root, is a collection of schemas, schema objects, and nonschema objects to which all PDBs belong. Every CDB has one and only one root container, which stores the system metadata required to manage PDBs (no user data is stored in the root container). All PDBs belong to the root. The name of the root container is CDB$ROOT.
PDB [11] – is a user-created set of schemas, objects, and related structures that appears logically to an application as a separate database. Every PDB is owned by SYS, which is a common user in the CDB, regardless of which user created the CDB.
For more information on Oracle’s Multitenant architecture, visit Oracle’s documentation [11].
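To make these concepts concrete, once the CDB built in the steps below is running, its containers can be listed from SQL*Plus. The output below is a sketch; the PDB names derive from the orclpdb prefix and the two PDBs used by this reference environment:
$ sqlplus / as sysdba
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB1                       READ WRITE NO
         4 ORCLPDB2                       READ WRITE NO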
The following section describes the step-by-step procedure to create a container database (CDB) that holds two pluggable databases (PDB) thus taking advantage of Oracle’s Multitenant architecture.
The following steps should be done on node one of the Oracle RAC Database cluster environment.
- ssh with the -Y option as the oracle user is required prior to running dbca. As the oracle user, run the dbca utility via the command:
$ /u01/app/oracle/product/12.2.0/dbhome_1/bin/dbca
Note: In the example above, /u01/app/oracle/product/12.2.0/dbhome_1 is the Oracle home directory.
- Within the Database Operations window, select Create a database radio button and click Next.
- Within the Creation Mode window, select Advanced Mode radio button and click Next.
- Within the Deployment Type window, select the Database Type as Oracle Real Application Clusters (RAC) database, the Configuration type as either Admin or Policy managed, and select the Custom Database radio button. Click Next. This reference environment uses the Admin managed configuration type. More information can be found at: Using Server Pools with Oracle RAC
- Within the Nodes Selection window, ensure all the nodes within the Oracle RAC cluster are selected and click Next.
Within the Database Identification window, set a global database name and Oracle System Identifier (SID), i.e. cdb. Check the check box that reads Create as Container Database. Select the number of PDBs to install and provide a PDB Name Prefix, i.e. orclpdb and click Next. This reference environment creates two PDBs.

Within the Storage Option window, select the Use following for the database storage attributes radio button. Change the Database file storage type to Automatic Storage Management (ASM). Within the Database file location, select the Browse button and pick the database disk group, i.e. +DATA. Select Multiplex redo logs and control files and enter the name of the redo log disk group (if created previously), i.e. +REDODG.
Note: Oracle-Managed Files (OMF) are used within the reference environment; however, they are not required.

Within the Fast Recovery Option window, check the checkbox labeled Specify Fast Recovery Area, and select the Browse button to pick the disk group that is to be assigned for the Fast Recovery Area, i.e. +FRADG. Enter an appropriate size based upon the size of the disk group.
- Within the Database Options window, select the database components to install. This reference environment kept the defaults. Once selected, click Next.
- Within the Configuration Options window, ensure Use Automatic Shared Memory Management is selected, and use the scroll bar or enter the appropriate SGA and PGA values for the environment. For the remaining tabs (Sizing, Character sets, Connection mode), the defaults are used.
- Within the Management Options window, check or uncheck Run Cluster Verification Utility (CVU) checks periodically and modify the Enterprise Manager database port (if needed), or deselect Configure Enterprise Manager (EM) Database Express if not being used. This reference architecture uses the defaults and selects Next.
- Within the User Credentials window, enter the credentials for the different administrative users and click Next.
- Within the Creation Option window, ensure the Create database checkbox is selected. This reference architecture uses the defaults for all other options, but these may be customized to fit an environment’s requirements.
Within the Prerequisite Checks window, review the status and ensure there are no errors prior to continuing the installation.
The following check errors are common and may be ignored if verified.
- /dev/shm mounted as a temporary file system - This is related to a bug (Oracle DOC ID: 2065603.1) where the installer is looking for /dev/shm to be located in /etc/fstab. Within Red Hat Enterprise Linux 7, tmpfs is mounted on /dev/shm by default.
- Within the Summary window, review the Summary and click Finish to start the database creation.
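Once dbca completes, the RAC database can be verified from the command line as the oracle user. This is a quick check that assumes the global database name cdb used earlier in this section:
$ /u01/app/oracle/product/12.2.0/dbhome_1/bin/srvctl status database -db cdb
Output similar to 'Instance cdb1 is running on node <hostname>' should be reported for each node in the cluster.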
4.5. Enabling HugePages
Transparent Huge Pages (THP) are implemented within Red Hat Enterprise Linux 7 to improve memory management by removing many of the difficulties of manually managing huge pages, dynamically allocating huge pages as needed. Red Hat Enterprise Linux 7, by default, uses transparent huge pages, also known as anonymous huge pages. Unlike static huge pages, no additional configuration is needed to use them. Huge pages can boost application performance by increasing the chance a program has quick access to a memory page. Unlike traditional huge pages, transparent huge pages can be swapped out (as smaller 4kB pages) when virtual memory cleanup is required. Unfortunately, Oracle Databases do not take advantage of transparent huge pages for interprocess communication. In fact, My Oracle Support [12] states to disable THP due to unexpected performance issues or delays when THP is found to be enabled. To reap the benefit of huge pages for an Oracle database, it is required to allocate static huge pages and disable THP. Due to the complexity of properly configuring huge pages, it is recommended to copy the bash shell script found within Appendix C, Huge Pages Script and run the script once the database is up and running. The reasoning behind allocating huge pages once the database is up and running is to provide a proper number of pages to handle the running shared memory segments. The steps are as follows:
On node one within the Oracle RAC environment,
- Copy the bash script found within Appendix C, Huge Pages Script and save it as huge_pages_settings.sh
- As the root user, ensure the huge_pages_settings.sh is executable by running:
# chmod +x huge_pages_settings.sh
- As the root user, ensure the bc package is installed:
# yum install bc
- As the root user, execute the huge_pages_settings.sh script as follows:
# /path/to/huge_pages_settings.sh
Recommended setting within the kernel boot command line: hugepages = <value>
Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle soft memlock <value>
Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf: oracle hard memlock <value>
- On each node within the Oracle RAC Database cluster,
Add the number of hugepages provided by the script to the kernel boot command line within /etc/default/grub as follows:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="nofb splash=quiet crashkernel=auto rd.lvm.lv=myvg/root rd.lvm.lv=myvg/swap rd.lvm.lv=myvg/usr rhgb quiet transparent_hugepage=never hugepages=<value-provided-by-script>"
GRUB_DISABLE_RECOVERY="true"
Note: Allocating the number of huge pages within the kernel boot command line is the most reliable method due to memory not yet becoming fragmented. [13]
For the grub changes to take effect, run the command:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-693.1.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-693.1.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-f9650ab62cd449b8b2a02d39ac73881e
Found initrd image: /boot/initramfs-0-rescue-f9650ab62cd449b8b2a02d39ac73881e.img
done
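The path above applies to BIOS-based systems; on UEFI-based Red Hat Enterprise Linux 7 systems, the grub configuration is regenerated at a different location:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg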
Oracle requires setting the soft and hard limits to memlock. Setting memlock allows the oracle user to lock a certain amount of memory from physical RAM that isn’t swapped out. The value is expressed in kilobytes and is important from the Oracle perspective because it provides the oracle user permission to use huge pages. This value should be slightly larger than the largest SGA size of any of the Oracle Database instances installed in an Oracle environment. To set memlock, add within /etc/security/limits.d/99-grid-oracle-limits.conf the following:
oracle soft memlock <value-provided-by-script>
oracle hard memlock <value-provided-by-script>
Reboot each node to ensure the huge pages setting takes effect properly.
Verify the value provided by the huge_pages_settings.sh matches the total number of huge pages available on the node(s) with the following command:
# cat /proc/meminfo | grep -i hugepages_total
HugePages_Total: <value-provided-by-script>
Verify the current status of the transparent huge pages is set to never via the command:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
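For a fuller picture once the database instances are started, all of the huge page counters can be reviewed together; HugePages_Free should decrease, and HugePages_Rsvd reflects SGA pages that are reserved but not yet touched:
# grep -i huge /proc/meminfo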
12: ALERT: Disable Transparent HugePages on SLES11,RHEL6,OEL6 and UEK2 Kernels (DOC ID: 1557478.1)
13: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
