Chapter 3. Reference Architecture Configuration Details
This reference architecture focuses on the deployment of Oracle RAC Database 12c Release 2 with Oracle Automatic Storage Management (ASM) on Red Hat Enterprise Linux 7 x86_64. The configuration is intended to provide a comprehensive Red Hat | Oracle solution. The key solution components covered within this reference architecture consist of:
- Red Hat Enterprise Linux 7
- Oracle Grid Infrastructure 12c Release 2
- Oracle RAC Database 12c Release 2 Software Installation
- Deploying an Oracle RAC Database 12c Release 2 with iSCSI disks
- Enabling Security-Enhanced Linux (SELinux)
- Configuring Device Mapper Multipathing
- Using udev rules instead of Oracle ASMLib or Oracle ASM Filter Driver
3.1. Setting OS Hostname
A unique host name is required for the installation of Oracle RAC Database 12c Release 2. The host names within this reference environment are oracle1.e2e.bos.redhat.com and oracle2.e2e.bos.redhat.com.
To set a hostname for a server, use the hostnamectl command. An example of setting the oracle1.e2e.bos.redhat.com hostname is shown below.
# hostnamectl set-hostname oracle1.e2e.bos.redhat.com
Verify the status:
# hostnamectl status
Static hostname: oracle1.e2e.bos.redhat.com
Icon name: computer-server
Chassis: server
Machine ID: f9650ab62cd449b8b2a02d39ac73881e
Boot ID: 4b4edc0eb2d8418080d86e343433067f
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.3:GA:server
Kernel: Linux 3.10.0-514.el7.x86_64
Architecture: x86-64
3.2. Network Configuration
The network configuration focuses on the proper setup of public and private network interfaces along with the DNS configuration for the Single Client Access Name (SCAN). The public bonded network interface provides an Oracle environment with high availability in case of a network interface failure. The High Availability Internet Protocol (HAIP) provides the private network interfaces with failover and load balancing across each private network interface. SCAN provides the Oracle RAC Database 12c Release 2 environment a single name that can be used by any client trying to access an Oracle RAC Database within the cluster.
3.2.1. Configuring /etc/resolv.conf file
The resolver is a set of routines in the C library that provides access to the Internet Domain Name System (DNS). The resolver configuration file contains information that is read by the resolver routines the first time they are invoked by a process. The file is designed to be human readable and contains a list of keywords with values that provide various types of resolver information[3]. The /etc/resolv.conf file for this reference environment consists of two configuration options: nameserver and search. The search option is used to search for a host name that is part of a particular domain. The nameserver option is the IP address of the name server the system oracle1 must query. If more than one nameserver is listed, the resolver library queries them in order. An example of the /etc/resolv.conf file used on the reference environment is shown below.
# cat /etc/resolv.conf
# Generated by NetworkManager
search e2e.bos.redhat.com
nameserver 10.19.114.2
3: Linux man pages - man resolv.conf
3.2.2. Public Network Configuration
The public network configuration consists of two network interfaces bonded together to provide high availability. The example below shows how to bond physical interfaces em1 and em2 with a bond device labeled bond0.
The usage of NetworkManager is optional.
Check the status of Network Manager:
# systemctl status NetworkManager.service
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2017-07-18 02:29:13 UTC; 2 days ago
Docs: man:NetworkManager(8)
Main PID: 2038 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─2038 /usr/sbin/NetworkManager --no-daemon
└─2140 /sbin/dhclient -d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-em1.pid -lf /var/lib/NetworkManager/dhclie...
On each node within the environment:
Create a channel bonding interface:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
BONDING_MASTER=yes
BOOTPROTO=static
IPADDR=10.19.114.44
PREFIX=23
GATEWAY=10.19.115.254
BONDING_OPTS="mode=1 miimon=100 primary=em1"
ONBOOT=yes
On each node within the environment:
Create em1 and em2 as slave interfaces:
# cat /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
HWADDR="44:a8:42:af:58:66"
ONBOOT=yes
IPV6INIT=no
PEERROUTES=yes
SLAVE=yes
BOOTPROTO="none"
MASTER=bond0

# cat /etc/sysconfig/network-scripts/ifcfg-em2
DEVICE=em2
HWADDR="44:a8:42:af:58:67"
ONBOOT=yes
IPV6INIT=no
PEERROUTES=yes
SLAVE=yes
BOOTPROTO="none"
MASTER=bond0
On each node within the environment:
Restart the network service
# systemctl restart network.service
To ensure NetworkManager is aware of the changes issue the command:
# nmcli con reload
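To confirm that the bond came up with the expected mode and active slave, the bonding state can be read from the /proc interface; this is an optional check, not part of the original procedure. Look for the Currently Active Slave entry, which under normal operation should list em1, the configured primary.
# cat /proc/net/bonding/bond0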
If for some reason there are issues getting bond0 to properly enslave the different interfaces, reboot the host.
Once the bond0 device is configured on each node, use the ping command to verify connectivity as follows: On node one labeled oracle1,
# ping 10.19.114.48
PING 10.19.114.48 (10.19.114.48) 56(84) bytes of data.
64 bytes from 10.19.114.48: icmp_seq=1 ttl=64 time=0.417 ms
On node two labeled oracle2,
# ping 10.19.114.44
PING 10.19.114.44 (10.19.114.44) 56(84) bytes of data.
64 bytes from 10.19.114.44: icmp_seq=1 ttl=64 time=0.417 ms
Please ensure DNS entries exist that resolve each public IP address to the appropriate host name. This reference architecture resolves the following IP addresses to the following hosts:
Table 3.1. Public IP & Hostname
| IP | Hostname |
| 10.19.114.44 | oracle1.e2e.bos.redhat.com |
| 10.19.114.48 | oracle2.e2e.bos.redhat.com |
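For reference, the corresponding forward entries in the DNS zone would look similar to the lines below. This is only a sketch in the same format as the SCAN and VIP entries shown later in this section, and assumes the e2e.bos.redhat.com zone is served by the name server at 10.19.114.2.
oracle1 IN A 10.19.114.44
oracle2 IN A 10.19.114.48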
3.2.3. Configure SCAN via DNS
SCAN provides a single name that clients can use to connect to a particular Oracle database. The main benefit of SCAN is the ability to keep a client connection string the same even if changes occur within the Oracle RAC Database environment, such as the adding or removing of nodes within the cluster. This works because every client connection sends a request to the SCAN Listener, which then reroutes the traffic to an available VIP Listener within the Oracle RAC cluster to establish a database connection. The setup of SCAN requires the creation of a single name, no longer than 15 characters in length not including the domain suffix, resolving to three IP addresses using a round-robin algorithm from the DNS server. SCAN must reside in the same subnet as the public network within the Oracle RAC Database cluster and be resolvable without the domain suffix. Within the reference environment, the domain is e2e.bos.redhat.com and the SCAN name is db-oracle-scan.
An example DNS entry for the SCAN is as follows:
db-oracle-scan IN A 10.19.115.60
IN A 10.19.115.73
IN A 10.19.115.75
An example of the DNS entry for the SCAN to enable reverse lookups is as follows:
60 IN PTR db-oracle-scan.e2e.bos.redhat.com
73 IN PTR db-oracle-scan.e2e.bos.redhat.com
75 IN PTR db-oracle-scan.e2e.bos.redhat.com
On each node within the Oracle RAC cluster, verify the SCAN configuration within the DNS server is setup properly using the nslookup and host command as follows:
# nslookup db-oracle-scan
Server:    10.19.114.2
Address:   10.19.114.2#53

Name: db-oracle-scan.e2e.bos.redhat.com
Address: 10.19.115.75
Name: db-oracle-scan.e2e.bos.redhat.com
Address: 10.19.115.60
Name: db-oracle-scan.e2e.bos.redhat.com
Address: 10.19.115.73
nslookup requires the package bind-utils to be installed.
On each node within the Oracle RAC cluster, verify the SCAN configuration reverse lookup is setup properly using the nslookup and host command as follows:
# host db-oracle-scan
db-oracle-scan.e2e.bos.redhat.com has address 10.19.115.73
db-oracle-scan.e2e.bos.redhat.com has address 10.19.115.75
db-oracle-scan.e2e.bos.redhat.com has address 10.19.115.60
# nslookup 10.19.115.75
Server:    10.19.114.2
Address:   10.19.114.2#53

75.115.19.10.in-addr.arpa  name = db-oracle-scan.e2e.bos.redhat.com.
Repeat the above step for the reverse lookup on the remaining IP addresses used for the SCAN.
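As a convenience, the remaining reverse lookups can be checked in a single pass with a small loop; this is simply a sketch that wraps the nslookup command shown above.
# for ip in 10.19.115.60 10.19.115.73 10.19.115.75; do nslookup $ip; done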
The reference environment resolves the following IP address to the following host name:
Table 3.2. Scan IP & Hostname
| IP | Hostname |
| 10.19.115.60 | db-oracle-scan.e2e.bos.redhat.com |
| 10.19.115.73 | db-oracle-scan.e2e.bos.redhat.com |
| 10.19.115.75 | db-oracle-scan.e2e.bos.redhat.com |
For more information on SCAN, please refer to Oracle's documentation.
3.2.4. Configure Virtual IP (VIP) via DNS
The virtual IP is an IP address assigned to each node within an Oracle RAC Database environment, with the IP address residing in the public subnet. During the installation of the Oracle Grid Infrastructure, each VIP Listener registers with every SCAN Listener. This is because when a client sends a request, the SCAN Listener routes the incoming traffic to one of the VIP Listeners within the Oracle RAC Database cluster. If a client connection string uses the VIP to talk directly to the VIP Listener (as done in prior versions), every time changes to the Oracle RAC Database environment are made, such as adding or removing nodes within the cluster, the client connection string would require updating. Due to this, Oracle recommends always using the SCAN for the client connection string.
An example DNS entry for the VIPs is as follows:
oracle1-vip IN A 10.19.115.40
oracle2-vip IN A 10.19.115.41
On each node within the Oracle RAC cluster, verify the VIP address for oracle1-vip and oracle2-vip within the DNS server is setup properly using the nslookup and host command. An example of checking oracle1-vip can be seen below.
# nslookup oracle1-vip
Server:    10.19.114.2
Address:   10.19.114.2#53

Name: oracle1-vip.e2e.bos.redhat.com
Address: 10.19.115.40
# host oracle1-vip oracle1-vip.e2e.bos.redhat.com has address 10.19.115.40
An example of the DNS entry for the VIPs to enable reverse lookups is as follows:
40 IN PTR oracle1-vip.e2e.bos.redhat.com
41 IN PTR oracle2-vip.e2e.bos.redhat.com
On each node within the Oracle RAC Database cluster, verify the VIP address reverse lookup for both VIP addresses (10.19.115.40 and 10.19.115.41) is setup properly using the nslookup and host command. An example is shown using VIP address 10.19.115.40 below.
# nslookup 10.19.115.40
Server:    10.19.114.2
Address:   10.19.114.2#53

40.115.19.10.in-addr.arpa  name = oracle1-vip.e2e.bos.redhat.com.
# host 10.19.115.40
40.115.19.10.in-addr.arpa domain name pointer oracle1-vip.e2e.bos.redhat.com.
An attempt to ping the VIP address or VIP host name should return a Destination Host Unreachable response, since the VIPs are not brought online until the Oracle Grid Infrastructure is installed. This reference environment resolves the following virtual IP addresses to the following host names:
Table 3.3. Virtual IP & Hostnames
| IP | Hostname |
| 10.19.115.40 | oracle1-vip.e2e.bos.redhat.com |
| 10.19.115.41 | oracle2-vip.e2e.bos.redhat.com |
3.2.5. Private Network Configuration
The private network configuration consists of two network interfaces em3 and em4. The private network provides interconnect communication between all the nodes in the cluster. This is accomplished via Oracle’s Redundant Interconnect, also known as Highly Available Internet Protocol (HAIP), that allows the Oracle Grid Infrastructure to activate and load balance traffic on up to four Ethernet devices for private interconnect communication. The example below shows how to setup physical interfaces em3 and em4 to be used with HAIP.
On each node, set em3 and em4 for private interconnect traffic. An example below:
# cat /etc/sysconfig/network-scripts/ifcfg-em3
DEVICE=em3
HWADDR="44:a8:42:af:58:68"
ONBOOT=yes
IPV6INIT=no
BOOTPROTO=static
IPADDR=192.11.1.51
PREFIX=24
MTU=9000

# cat /etc/sysconfig/network-scripts/ifcfg-em4
DEVICE=em4
HWADDR="44:a8:42:af:58:69"
ONBOOT=yes
IPV6INIT=no
BOOTPROTO=static
IPADDR=192.12.1.51
PREFIX=24
MTU=9000
If any of the interfaces were UP while the configuration files were being changed, bring them down with ifdown before reloading.
Reload the connection configuration using:
# nmcli con reload
Ensure all private Ethernet interfaces are set to different subnets on each node. If different subnets are not used and connectivity is lost, this can cause a node reboot within the cluster. For the reference environment, subnets 192.11.1.0/24 and 192.12.1.0/24 are used on each node within the Oracle RAC Database cluster.
Verify connectivity on each node using the ping command.
On node one labeled oracle1,
# ping 192.11.1.51
# ping 192.12.1.51
On node two labeled oracle2,
# ping 192.11.1.50
# ping 192.12.1.50
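Because the private interfaces are configured with MTU=9000, it can also be worthwhile to confirm that jumbo frames pass end to end. The following optional check, run from oracle1 and assuming the switch ports are configured for jumbo frames, sends a ping with the don't-fragment flag and an 8972-byte payload (9000 bytes minus 28 bytes of IP and ICMP headers); it only succeeds if the full path supports the larger MTU.
# ping -M do -s 8972 -c 3 192.11.1.51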
Table 3.4. Private IP, Ethernet Interfaces, & Host
| IP | Ethernet Interface | Host |
| 192.11.1.50 | em3 | oracle1.e2e.bos.redhat.com |
| 192.12.1.50 | em4 | oracle1.e2e.bos.redhat.com |
| 192.11.1.51 | em3 | oracle2.e2e.bos.redhat.com |
| 192.12.1.51 | em4 | oracle2.e2e.bos.redhat.com |
3.2.6. iSCSI Network Configuration
The following section only applies to environments taking advantage of iSCSI storage. If not using an iSCSI storage array, please skip to the following section Section 3.3, “OS Configuration”.
The iSCSI network configuration consists of two network interfaces em3 and em4. Set em3 and em4 for iSCSI traffic. An example below:
# cat /etc/sysconfig/network-scripts/ifcfg-em3
DEVICE=em3
HWADDR="44:a8:42:af:52:61"
ONBOOT=yes
IPV6INIT=no
BOOTPROTO=static
IPADDR=172.17.114.250
PREFIX=24
MTU=9000

# cat /etc/sysconfig/network-scripts/ifcfg-em4
DEVICE=em4
HWADDR="44:a8:42:af:52:62"
ONBOOT=yes
IPV6INIT=no
BOOTPROTO=static
IPADDR=172.17.114.251
PREFIX=24
MTU=9000
It is recommended to take advantage of Jumbo Frames for iSCSI storage. Ensure that the iSCSI switches have Jumbo Frames enabled.
Stop and start the network interfaces:
# ifdown em3; ifdown em4; ifup em3; ifup em4
Verify connectivity on each node using the ping command.
# ping <EqualLogic_Group_IP>
3.2.6.1. iSCSI Switch and Dell EqualLogic Recommendations
Regarding the Dell EqualLogic PS Array, the following are recommendations to achieve optimal performance.
- Create an isolated network for iSCSI traffic, e.g. VLANs
- A trunk between the switches that equals the total amount of bandwidth available on the EqualLogic PS Array
- Enable Rapid Spanning Tree Protocol (RSTP) on the iSCSI switches
- Enable PortFast within the switch ports on the iSCSI switches
- Enable Flow Control within the switch ports on the iSCSI switches
- Disable unicast storm control within the switch ports on the iSCSI switches
- Enable Jumbo Frames on the iSCSI switches
3.3. OS Configuration
3.3.1. Red Hat Subscription Manager
The subscription-manager command registers a system to the Red Hat Network (RHN) and manages the subscription entitlements for a system. The --help option specifies on the command line to query the command for the available options. If the --help option is issued along with a command directive, then options available for the specific command directive are listed.
To use Red Hat Subscription Management for providing packages to a system, the system must first register with the service. In order to register a system, use the subscription-manager command and pass the register command directive. If the --username and --password options are specified, then the command does not prompt for the Red Hat Network authentication credentials.
An example of registering a system using subscription-manager is shown below.
# subscription-manager register --username [User] --password '[Password]' The system has been registered with id: abcd1234-ab12-ab12-ab12-481ba8187f60
After a system is registered, it must be attached to an entitlement pool. For the purposes of this reference environment, the Red Hat Enterprise Linux Server pool is chosen. To identify and subscribe to the Red Hat Enterprise Linux Server entitlement pool, the following command directives are required.
# subscription-manager list --available | grep -A8 "Red Hat Enterprise Linux Server"
Subscription Name: Red Hat Enterprise Linux Server, Standard (8 sockets)
                   (Unlimited guests)
Provides:          Red Hat Beta
                   Oracle Java (for RHEL Server)
                   Red Hat Enterprise Linux Server
                   Red Hat Software Collections Beta (for RHEL Server)
SKU:               RH0186633
Contract:          10541483
Pool ID:           <poolid>
Available:         47
Suggested:         1
Service Level:     STANDARD
Service Type:      L1-L3

# subscription-manager attach --pool <pool_id>
Successfully attached a subscription for: Red Hat Enterprise Linux Server, Standard (8 sockets) (Unlimited guests)
The Red Hat Enterprise Linux optional repository is part of subscribing to the Red Hat Enterprise Linux Server entitlement pool; however, it is disabled by default. Enable the optional repository via the subscription-manager command.
The following step is required in order to install the compat-libstdc++-33 package, which is required for a successful Oracle Database 12c Release 2 installation on Red Hat Enterprise Linux 7, and to install the custom tuned profile labeled tuned-profiles-oracle. The packages are only available in the rhel-7-server-optional-rpms repository.
# subscription-manager repos --enable=rhel-7-server-optional-rpms
Repo 'rhel-7-server-optional-rpms' is enabled for this system.
For more information on the use of Red Hat Subscription Manager, please visit the Red Hat Subscription Management documentation[4].
3.3.2. Chrony Configuration
chronyd is a daemon for synchronisation of the system clock. It can synchronise the clock with NTP servers, reference clocks (e.g. a GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network[5].
5: chronyd - chronyd daemon man page – man chronyd (8)
In order to configure the chronyd daemon, on each node follow the instructions below.
If not installed, install chrony via yum as follows:
# yum install chrony
Edit the /etc/chrony.conf file with a text editor such as vi.

# vi /etc/chrony.conf

Locate the following public server pool section, and modify to include the appropriate servers. For the purposes of this reference environment, only one server is used, but three is recommended. The iburst option is added to speed up the time that it takes to properly sync with the servers.

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 10.5.26.10 iburst

Save all the changes within the /etc/chrony.conf file.

Start the chronyd daemon via the command:

# systemctl start chronyd.service

Ensure that the chronyd daemon is started when the host is booted.

# systemctl enable chronyd.service

Verify the chronyd daemon status.

# systemctl status chronyd.service
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2018-01-08 17:16:07 UTC; 2 months 26 days ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 815 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 732 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 754 (chronyd)
   CGroup: /system.slice/chronyd.service
           └─754 /usr/sbin/chronyd
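Optionally, the chronyc utility can be used to confirm that the configured time source is reachable and that the system clock is being steered; the exact output varies per environment.
# chronyc sources
# chronyc tracking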
3.3.3. Oracle Database 12c Release 2 Package Requirements
A specific set of packages is required to properly deploy Oracle RAC Database 12c Release 2 on Red Hat Enterprise Linux 7. The number of installed packages required varies depending on whether a default or minimal installation of Red Hat Enterprise Linux 7 (x86_64) is performed. For the purposes of this reference environment, a minimal Red Hat Enterprise Linux 7 installation is performed to reduce the number of installed packages. A sample kickstart file has been provided within Appendix H, Sample Kickstart File. Red Hat Enterprise Linux 7 installation requires the following group packages:
Table 3.5. Group Packages
| Required Group Packages | |
| @Base | @Core |
Oracle Grid Infrastructure and Oracle Database 12c Release 2 require the following x86_64 RPM packages:
Table 3.6. Required Packages
| Required Packages | |||
| binutils | libX11 | compat-libcap1 | libXau |
| compat-libstdc++-33 | libaio | gcc | libaio-devel |
| gcc-c++ | libdmx | glibc-devel | glibc |
| ksh | make | libgcc | sysstat |
| libstdc++ | xorg-x11-utils | libstdc++-devel | xorg-x11-xauth |
| libXext | libXv | libXtst | libXi |
| libxcb | libXt | libXmu | libXxf86misc |
| libXxf86dga | libXxf86vm | nfs-utils | smartmontools |
After the installation of Red Hat Enterprise Linux 7 is completed, create a file, req-rpm.txt, that contains the name of each RPM package listed above on a separate line. For simplicity, this req-rpm.txt file is included in Appendix D, Oracle Database Package Requirements Text File.
Within each node:
Use the yum package manager to install the packages and any of their dependencies with the following command:
# yum install `awk '{print $1}' ./req-rpm.txt`
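To confirm that every package from the list installed successfully, the same file can be fed to rpm. This is only a quick sanity-check sketch; any package reported as not installed should be revisited before continuing.
# for p in $(awk '{print $1}' ./req-rpm.txt); do rpm -q $p; done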
A minimum installation of Red Hat Enterprise Linux 7 does not install the X Window System server package, but only the required X11 client libraries. In order to run the Oracle Universal Installer (OUI), a system with the X Window System server package installed is required.
Using a system with X Window System installed, ssh into each Oracle RAC Database server with the -Y option to ensure trusted X11 forwarding is set. The command is as follows:
# ssh -Y oracle1.e2e.bos.redhat.com
Alternatively, if a system with the X Window System server package is unavailable, install the X Window System server package directly on node one of the Oracle RAC cluster.
3.3.4. Configuring Security-Enhanced Linux (SELinux)
SELinux is an implementation of a mandatory access control (MAC) mechanism developed by the National Security Agency (NSA). The purpose of SELinux is to apply rules on files and processes based on defined policies. When policies are appropriately defined, a system running SELinux enhances application security by determining if an action from a particular process should be granted thus protecting against vulnerabilities within a system. The implementation of Red Hat Enterprise Linux 7 enables SELinux by default and appropriately sets it to the default setting of ENFORCING.
It is highly recommended that SELinux be kept in ENFORCING mode when running Oracle RAC Database 12c Release 2.
Verify that SELinux is running and set to ENFORCING:
As the root user on each node,
# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28
If the system is running in PERMISSIVE or DISABLED mode, modify the /etc/selinux/config file and set SELinux to enforcing as shown below.
SELINUX=enforcing
The modification of the /etc/selinux/config file takes effect after a reboot. To change the setting of SELinux immediately without a reboot, run the following command:
# setenforce 1
For more information on Security Enhanced Linux, please visit the Red Hat Enterprise Linux 7 Security Enhanced Linux User Guide
3.3.5. Configuring Firewall Settings
Firewall access and restrictions play a critical role in securing your Oracle RAC Database 12c Release 2 environment. It is not uncommon for corporations to be running hardware-based firewalls to protect their corporate networks. Due to this, enabling a host-based firewall may not be required. However, this reference environment demonstrates how to successfully implement firewall settings for an Oracle RAC Database environment.
The firewall rules described below only apply to the public network. Oracle recommends that the private network not have any firewall rules, as this can cause issues with the installation of the Oracle RAC Database as well as disruption of the Oracle RAC Database private interconnect. It is highly recommended that the private network be isolated and communicate only between nodes locally.
Red Hat Enterprise Linux 7 introduces the use of firewalld, a dynamic firewall daemon, instead of the traditional iptables service. firewalld works by assigning network zones, which assign a level of trust to a network and its associated connections and interfaces[6]. The key difference and advantage of firewalld over the iptables service is that it does not require flushing of old firewall rules to apply the new firewall rules; firewalld changes the settings during runtime without losing existing connections[6]. With the implementation of firewalld, the iptables service configuration file /etc/sysconfig/iptables does not exist.
It is recommended that the firewall settings be configured to permit access to the Oracle RAC Database network ports only from authorized database or database-management clients. For example, in order to allow access to a specific database client with an IP address of 10.19.142.54 and to make requests to the database server via SQL*Net using Oracle's TNS (Transparent Network Substrate) Listener (default port of 1521), the following permanent firewall rule within the public zone must be added to the firewalld configuration.
On all nodes within the Oracle RAC cluster unless otherwise specified,
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.19.142.54" port protocol="tcp" port="1521" accept'
success
Likewise, if a particular database client with an IP address of 10.19.142.54 required access to the web-based EM Express that uses the default port of 5500, the following firewall rich rule must be added using the firewall-cmd command.
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.19.142.54" port protocol="tcp" port="5500" accept'
success
Ensure the firewall allows all traffic from the private network by accepting all traffic (trusted) from the private Ethernet interfaces em3 and em4 from all nodes within the Oracle RAC cluster. It is highly recommended that the private network be isolated and communicate only between nodes locally.
# firewall-cmd --permanent --zone=trusted --change-interface=em3
success
# firewall-cmd --permanent --zone=trusted --change-interface=em4
success
The following rules are added to satisfy the Oracle Installer's prerequisites. Once the Oracle installation is complete, these rules can be removed. Steps for removal are shown upon the completion of the Oracle RAC Database installation in Section 6.1, "Removal of firewalld Trusted Source Address".
On node one of the Oracle RAC cluster, add the public source IP of all remaining nodes. This reference environment only adds the public IP of node two of the Oracle RAC cluster as it is a two-node Oracle RAC Database environment.
# firewall-cmd --permanent --zone=trusted --add-source=10.19.114.48/23
On node two of the Oracle RAC cluster, add the public source IP of all remaining nodes. This reference environment only adds the public IP of node one of the Oracle RAC cluster as it is a two-node Oracle RAC Database environment.
# firewall-cmd --permanent --zone=trusted --add-source=10.19.114.44/23
Once the rules have been added, run the following command to activate:
# systemctl restart firewalld.service
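Alternatively, the permanent rules can be loaded into the runtime configuration without restarting the daemon:
# firewall-cmd --reload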
To verify that ports 1521 and 5500 have been added and that the database client with IP address 10.19.142.54 has been properly added, run the following command:
# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: bond0 em1 em2
  sources:
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
        rule family="ipv4" source address="10.19.142.54" port port="1521" protocol="tcp" accept
        rule family="ipv4" source address="10.19.142.54" port port="5500" protocol="tcp" accept
To verify the firewall rules being applied to the trusted zone for the private Ethernet interfaces, and temporarily for the source public IP run the following command:
Example of oracle1
# firewall-cmd --zone=trusted --list-all
trusted (active)
  interfaces: em3 em4
  sources: 10.19.114.48/23
  services:
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:
3.3.6. Modifying Kernel Parameters
The following sections regarding virtual memory, shared memory, semaphores, network ports, I/O synchronous requests, file handles, and kernel panic on OOPS parameters provide a detailed explanation of what these parameters are and their effect in an Oracle deployment. It is recommended to read carefully each parameter for a better understanding on how to tweak a specific environment for a particular workload.
The recommended values listed are to be used as a starting point when setting these parameters; there is no "one-size-fits-all" approach to performance tuning.
Each section provides the manual steps to tweak the parameters. With that said, if looking to set the parameters immediately, Section 3.4.6, "Optimizing Database Storage using Automatic System Tuning" covers setting the parameters using the oracle tuned profile.
3.3.7. Setting Virtual Memory
Tuning virtual memory requires the modification of five kernel parameters that affect the rate that virtual memory is used within Oracle RAC databases.
A brief description[7] and recommended settings for the virtual memory parameters, as well as the definition of dirty data, are described below.
SWAPPINESS[7] - Starting with Red Hat Enterprise Linux 6.4 and above, the definition of swappiness has changed. Swappiness is defined as a value from 0 to 100 that controls the degree to which the system favors anonymous memory or the page cache. A high value improves file-system performance, while aggressively swapping less active processes out of memory. A low value avoids swapping processes out of memory, which usually decreases latency, at the cost of I/O performance. The default value is 60.
Since Red Hat Enterprise Linux 6.4, setting swappiness to 0 even more aggressively avoids swapping out, which increases the risk of out-of-memory (OOM) killing under strong memory and I/O pressure. To achieve the same behavior as in versions prior to Red Hat Enterprise Linux 6.4, in which the recommendation was to set swappiness to 0, set swappiness to a value between 1 and 20. The recommended swappiness for Red Hat Enterprise Linux 6.4 or higher running Oracle databases is now a value between 1 and 20.
DIRTY DATA – Dirty data is data that has been modified and held in the page cache for performance benefits. Once the data is flushed to disk, the data is clean.
DIRTY_RATIO[7] – Contains, as a percentage of total system memory, the number of pages at which a process that is generating disk writes will itself start writing out dirty data. The default value is 20. The recommended value is between 40 and 80. The reasoning behind increasing the value from the standard Oracle recommendation of 15 to a value between 40 and 80 is that dirty ratio defines the maximum percentage of total memory that can be filled with dirty pages before user processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do more writes. All processes are blocked for writes when this occurs due to synchronous I/O, not just the processes that filled the write buffers. This can cause what is perceived as unfair behavior where a single process can hog all the I/O on a system. As the value of dirty_ratio is increased, it is less likely that all processes will be blocked due to synchronous I/O; however, this allows for more data to be sitting in memory that has yet to be written to disk.
DIRTY_BACKGROUND_RATIO[7] – Contains, as a percentage of total system memory, the number of pages at which the background write back daemon will start writing out dirty data. The Oracle recommended value is 3.
As an example, with dirty_background_ratio set to 3 and dirty_ratio set to 80, the background write back daemon starts writing out the dirty data asynchronously when it hits the 3% threshold; however, none of that data is written synchronously until dirty_ratio is 80% full, which is when all processes become blocked for writes.
DIRTY_EXPIRE_CENTISECS[7] - Defines when dirty in-memory data is old enough to be eligible for writeout. The default value is 3000, expressed in hundredths of a second. The Oracle recommended value is 500.
DIRTY_WRITEBACK_CENTISECS[7] - Defines the interval at which writes of dirty in-memory data are written out to disk. The default value is 500, expressed in hundredths of a second. The Oracle recommended value is 100.
Create a file labeled 98-oracle-kernel.conf within /etc/sysctl.d/
# vi 98-oracle-kernel.conf
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
For the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
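To confirm the new values are active, the parameters can be queried back with sysctl; the values returned should match the entries in the file above.
# sysctl vm.swappiness vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs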
A full listing of all the kernel parameters modified within the /etc/sysctl.d/98-oracle-kernel.conf file can be found at Appendix E, Kernel Parameters (98-oracle-kernel.conf).
7: RHEL7 Kernel Documentation (requires package kernel-doc to be installed) - /usr/share/doc/kernel-doc-3.10.0/Documentation/sysctl/vm.txt
3.3.9. Setting Semaphores (SEMMSL, SEMMNI, SEMMNS)
Red Hat Enterprise Linux 7 provides semaphores for synchronization of information between processes. The kernel parameter sem is composed of four parameters:
SEMMSL – is defined as the maximum number of semaphores per semaphore set
SEMMNI – is defined as the maximum number of semaphore sets for the entire system
SEMMNS – is defined as the total number of semaphores for the entire system
SEMOPM – is defined as the maximum number of semaphore operations that can be performed per semop system call.
SEMMNS is calculated by SEMMSL * SEMMNI
The following line is required within the /etc/sysctl.d/98-oracle-kernel.conf file to provide default values for semaphores for Oracle:
kernel.sem = 250 32000 100 128
For the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
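The active semaphore limits can be confirmed with either of the following commands, both of which simply read back the values set above.
# sysctl kernel.sem
# ipcs -ls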
The values above are sufficient for most environments and no tweaking should be necessary. However, the following describes how these values can be optimized and should be set when the defaults don’t suffice.
Example errors:
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpcreates
It is recommended to first use the default values and adjust only when deemed necessary.
Semaphores are used by Oracle for internal locking of SGA structures. Sizing of semaphores directly depends on only the PROCESSES parameter of the instance(s) running on the system. The number of semaphores to be defined in a set should be set to a value that minimizes the waste of semaphores.
For example, say our environment consists of two Oracle instances with PROCESSES set to 300 for database one and 600 for database two. With SEMMSL set at 250 (the default), the first database requires 2 sets: the first set provides 250 semaphores, but an additional 50 semaphores are required, so an additional SEMMSL set is needed, wasting 200 semaphores. The second instance requires 3 sets: sets one and two provide 250 semaphores each for a total of 500, but an additional 100 semaphores are required, so an additional SEMMSL set is added, wasting 150 semaphores. A better value of SEMMSL in this particular case would be 150. With SEMMSL set at 150, the first database requires two sets (wasting zero semaphores), and the second instance requires four sets (wasting zero semaphores). This is an ideal example, and most likely some semaphore wastage is expected and okay, as semaphores in general consume small amounts of memory. As more databases are created in an environment, these calculations may get complicated. In the end, the goal is to limit semaphore waste.
Regarding SEMMNI, it should be set high enough so that a sufficient number of semaphore sets is available on the system. Using the values of SEMMNS and SEMMSL, one can determine the maximum SEMMNI required; round up to the nearest power of 2.
SEMMNI = SEMMNS/SEMMSL
Oracle requires twice the value of the PROCESSES init.ora parameter in semaphores (the SEMMNS value) on startup of the database, and half of those semaphores are then released. To properly size SEMMNS, one must know the sum of all PROCESSES values set across all instances on the host. SEMMNS is best set no lower than the SEMMNI*SEMMSL value (this is how the default of 32000 relates to the default values of 250 and 128, since 250*128 = 32000).
In the default configuration, this works out to SEMMNS/SEMMSL = 32000/250 = 128, which matches the default SEMMNI of 128.
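As a worked illustration of the sizing rule using the two instances from the earlier example (PROCESSES of 300 and 600), the number of semaphores required at database startup would be twice their sum; this is only an illustration, not a recommended final value.
# echo "2 * (300 + 600)" | bc
1800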
3.3.10. Ephemeral Network Ports
Oracle recommends that the ephemeral default port range be set starting at 9000 to 65500. This ensures that all well known ports used by Oracle and other applications are avoided. To set the ephemeral port range, modify the /etc/sysctl.d/98-oracle-kernel.conf file and add the following line:
net.ipv4.ip_local_port_range = 9000 65500
For the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
3.3.11. Optimizing Network Settings
Optimizing the network settings for the default and maximum buffers for the application sockets in Oracle is done by setting static sizes to RMEM and WMEM. The RMEM parameter represents the receive buffer size, while the WMEM represents the send buffer size. The values recommended by Oracle are configured within the /etc/sysctl.d/98-oracle-kernel.conf file.
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
For the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
3.3.12. Setting NOZEROCONF
On each node within the Oracle RAC Database cluster, set the value of NOZEROCONF to yes within the /etc/sysconfig/network file. Setting NOZEROCONF ensures that the route 169.254.0.0/16 is not added to the routing table.
NOZEROCONF=yes
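Once the network service has been restarted, the routing table can be checked to confirm that the link-local route is no longer present; the command below should return no output when NOZEROCONF is in effect.
# ip route show | grep 169.254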
3.3.13. Disabling the avahi-daemon Service
From the Red Hat Customer Portal article: https://access.redhat.com/solutions/25463,
The Avahi website defines Avahi as: 'a system which facilitates service discovery on a local network. This helps to plug the laptop or computer into a network and instantly be able to view other people who you can chat with, find printers to print to, or find files being shared…'
The avahi-daemon service (on by default on Red Hat Enterprise Linux) can interfere with Oracle RAC's multicast heartbeat, causing the application-layer interface to assume it has been disconnected on a node and reboot the node. It is not recommended to remove the package due to its many dependencies; the avahi libraries are used by many packages on a system.
On each node within the Oracle RAC Database cluster, stop and disable the avahi services by running the following commands:
# systemctl stop avahi-dnsconfd
# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon, but it can still be activated by:
  avahi-daemon.socket
To keep the avahi services off persistently across reboots, on each node run the following:
# systemctl disable avahi-dnsconfd
# systemctl disable avahi-daemon
rm '/etc/systemd/system/dbus-org.freedesktop.Avahi.service'
rm '/etc/systemd/system/multi-user.target.wants/avahi-daemon.service'
rm '/etc/systemd/system/sockets.target.wants/avahi-daemon.socket'
3.3.14. Increasing Asynchronous I/O Requests
The kernel parameter FS.AIO-MAX-NR sets the maximum number of concurrent asynchronous I/O requests. Oracle recommends setting the value to 1048576. In order to set FS.AIO-MAX-NR to 1048576, modify the /etc/sysctl.d/98-oracle-kernel.conf file on each node of the Oracle RAC cluster as follows:
fs.aio-max-nr = 1048576
In order for the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
3.3.15. Increasing File Handles
The kernel parameter FS.FILE-MAX sets the maximum number of open file handles assigned to the Red Hat Enterprise Linux 7 operating system. Oracle recommends that for each Oracle RAC database instance found within a system, 512*PROCESSES be allocated in addition to the open file handles already assigned to the Red Hat Enterprise Linux 7 operating system. PROCESSES within a database instance refers to the maximum number of processes that can be concurrently connected to the Oracle RAC database by the oracle user. The default value for PROCESSES is 2560 for Oracle RAC Database 12c Release 2. To properly calculate the FS.FILE-MAX for a system, first identify the current FS.FILE-MAX allocated to the system via the following command:
# sysctl fs.file-max
Next, add all the PROCESSES together from each Oracle RAC database instance found within the system and multiply by 512 as seen in the following command.
# echo "512 * 2560" | bc
To determine the current PROCESSES value, log into each Oracle RAC database instance and run the following command below. Since no Oracle RAC database has yet been created within this reference environment, the default value of 2560 PROCESSES is used.
SQL> show parameter processes;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
processes                            integer     2560
Finally, add the current FS.FILE-MAX value with the new value found from multiplying 512*PROCESSES to attain the new FS.FILE-MAX value.
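Putting the pieces together, the arithmetic can be scripted as shown in the sketch below; it assumes a single database instance using the default of 2560 PROCESSES, so adjust the sum accordingly for additional instances.
# CURRENT=$(sysctl -n fs.file-max)
# echo "$CURRENT + (512 * 2560)" | bc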
While the value of the FS.FILE-MAX parameter varies upon every environment, this reference environment uses the default value within Red Hat Enterprise Linux 7.4 (9784283). Oracle recommends a value no smaller than 6815744. In order to modify the value of FS.FILE-MAX, add to the /etc/sysctl.d/98-oracle-kernel.conf file as follows:
fs.file-max = <value>
In order for the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
It is recommended to revisit the FS.FILE-MAX value if the PROCESSES value is increased for any Oracle RAC databases created.
A full listing of all the kernel parameters modified within the 98-oracle-kernel.conf file can be found at Appendix E, Kernel Parameters (98-oracle-kernel.conf)
3.3.16. Reverse Path Filtering
Red Hat Enterprise Linux 7 defaults to the use of strict reverse path filtering. Strict mode is the default in order to prevent IP spoofing from distributed denial-of-service (DDoS) attacks. However, having strict mode enabled on the private interconnect of an Oracle RAC database cluster may cause disruption of interconnect communication. It is recommended to set RP_FILTER from strict mode to loose mode. Loosening the security on the private Ethernet interfaces should not be a concern, as best practices recommend an isolated private network that can only communicate between nodes, specifically for Oracle's private interconnect.
To satisfy the Oracle Installer prerequisite, add the following modifications to the /etc/sysctl.d/98-oracle-kernel.conf on each node of the Oracle RAC cluster as follows:
net.ipv4.conf.em3.rp_filter = 2
net.ipv4.conf.em4.rp_filter = 2
The complete 98-oracle-kernel.conf file, located within the /etc/sysctl.d/ directory, can be found within Appendix E, Kernel Parameters (98-oracle-kernel.conf).
In order for the changes to take effect immediately, run the following command on each node of the Oracle RAC cluster:
# sysctl -p /etc/sysctl.d/98-oracle-kernel.conf
[Output Abbreviated ...]
net.ipv4.conf.em3.rp_filter = 2
net.ipv4.conf.em4.rp_filter = 2
3.3.17. User Accounts & Groups
Prior to the installation of Oracle RAC Database 12c Release 2, Oracle recommends the creation of a grid user for the Oracle Grid Infrastructure and an oracle user for the Oracle RAC Database software installed on the system.
For the purposes of this reference environment, the Oracle Grid Infrastructure owner is the user grid and the Oracle Database software owner is the user oracle. Each user is designated different groups to handle specific roles based on the software installed. However, the creation of separate users requires that both the oracle user and the grid user have a common primary group, the Oracle central inventory group (OINSTALL).
The following are the recommended system groups created for the installation of the Oracle RAC Database and part of the oracle user.
OSDBA group (DBA) – determines OS user accounts with DBA privileges
OSOPER group (OPER) – an optional group created to assign limited DBA privileges (SYSOPER privilege) to particular OS user accounts
OSBACKUPDBA group (BACKUPDBA) – an optional group created to assign limited administrative privileges (SYSBACKUP privilege) to a user for database backup and recovery
OSDGDBA group (DGDBA) – an optional group created to assign limited administrative privileges (SYSDG privilege) to a user for administering and monitoring Oracle Data Guard
OSKMDBA group (KMDBA) – an optional group created to assign limited administrative privileges (SYSKM privilege) to a user for encryption key management when using Oracle Wallet Manager
OSRACDBA group (RACDBA privilege) - grants the SYSRAC privileges to perform administrative tasks on an Oracle RAC cluster.
The following are the recommended system groups created for the installation of the Oracle Grid Infrastructure and part of the grid user:
OSDBA group (ASMDBA privilege) – provides administrative access to Oracle ASM instances
OSASM group (ASMADMIN privilege) – provides administrative access for storage files via the SYSASM privilege
OSOPER group (ASMOPER privilege) – an optional group created to assign limited DBA privileges with regards to ASM to particular OS user accounts
OSRACDBA group (RACDBA privilege) - grants the SYSRAC privileges to perform administrative tasks on an Oracle RAC cluster.
RACDBA group is still used even within the Oracle Database Standalone server.
As the root user on each node, create the following user accounts, groups, and group assignments using consistent UID and GID assignments across your organization:
# groupadd --gid 54321 oinstall
# groupadd --gid 54322 dba
# groupadd --gid 54323 asmdba
# groupadd --gid 54324 asmoper
# groupadd --gid 54325 asmadmin
# groupadd --gid 54326 oper
# groupadd --gid 54327 backupdba
# groupadd --gid 54328 dgdba
# groupadd --gid 54329 kmdba
# groupadd --gid 54330 racdba
# useradd --uid 54321 --gid oinstall --groups dba,oper,asmdba,racdba,\
> backupdba,dgdba,kmdba oracle
# passwd oracle
# useradd --uid 54322 --gid oinstall --groups dba,asmadmin,asmdba,asmoper,\
> racdba grid
# passwd grid
Verify the oracle and grid user on each Oracle RAC database cluster node correctly displays the appropriate primary and supplementary groups via the commands:
# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(asmdba),54326(oper),54327(backupdba),54328(dgdba),54329(kmdba),54330(racdba)
# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(asmdba),54324(asmoper),54325(asmadmin),54330(racdba)
3.3.18. Setting Shell Limits for the Grid and Oracle User
Oracle recommends the following settings for the soft and hard limits for the number of open file descriptors (nofile), number of processes (nproc), and size of the stack segment (stack) allowed by each user respectively. The purpose of setting these limits is to prevent a system wide crash that could be caused if an application, such as Oracle, were allowed to exhaust all of the OS resources under an extremely heavy workload.
On each node, create a file labeled 99-grid-oracle-limits.conf within /etc/security/limits.d/ as follows:
# touch /etc/security/limits.d/99-grid-oracle-limits.conf
The reason that the /etc/security/limits.conf file is not directly modified is due to the order in which limit files are read in the system. After reading the /etc/security/limits.conf file, files within the /etc/security/limits.d/ directory are read. If two files contain the same entry, the entry read last takes precedence. For more information visit the Red Hat Article: "What order are the limit files in the limits.d directory read in?"[8]
Within the /etc/security/limits.d/99-grid-oracle-limits.conf file, add the following soft and hard limits for the oracle and grid user:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
Due to Bug 1597142116, the soft limit of nproc is not adjusted at runtime by the Oracle database. Due to this, if the nproc limit is reached, the Oracle database may become unstable and not be able to fork additional processes. A high enough value for the maximum number of concurrent threads for the given workload must be set, or use the hard limit value of 16384 as done above if in doubt.
Modifications made to the 99-grid-oracle-limits.conf file take effect immediately. However, please ensure that any previously logged in oracle or grid user sessions (if any) are logged out and logged back in for the changes to take effect.
8: What order are limits files in the limits.d directory read in?
As the root user on each node, create a shell script labeled oracle-grid.sh within /etc/profile.d/ to create the ulimits for the oracle and grid user. The contents of the oracle-grid.sh script:
#Setting the appropriate ulimits for oracle and grid user
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
While the ulimit values can be set directly within the /etc/profile file, it is recommended to create a custom shell script within /etc/profile.d instead. The oracle-grid.sh script can be downloaded from the Appendix I, Configuration Files
As oracle and grid user, verify the ULIMIT values by running the following command:
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 385878
max locked memory       (kbytes, -l) 14854144
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
3.4. Storage Configuration
The following storage configuration section describes the best practices for setting up iSCSI CHAP Authentication, configuring host access to volumes, device mapper multipath, the use of udev rules for ASM disk management, and the use of the tuned package for optimal performance.
3.4.1. iSCSI CHAP Authentication
This section applies to users taking advantage of iSCSI storage. If not using iSCSI storage, please skip to section Section 3.4.3, “Device Mapper Multipath”.
For security purposes, CHAP (Challenge-Handshake Authentication Protocol) is used to validate the identity of the node(s) connecting to it. The process includes creating a secret username and password to authenticate on each node(s). The details on enabling CHAP within the iSCSI storage itself may vary depending on the vendor. Within the Dell EqualLogic PS Array the steps are as follows:
- Within the left navigation bar, select Group Configuration
- Within the right pane, select the iSCSI tab
- Within the Local CHAP Accounts section select Add
- Within the popup dialog box, enter the appropriate credentials and select OK.
Once the CHAP user is created within the iSCSI storage array, the following steps are required for each Oracle node(s).
Install the iscsi-initiator-utils package:
# yum install iscsi-initiator-utils
Modify the /etc/iscsi/iscsid.conf file with the CHAP credentials. An example below only shows the sections modified within CHAP Settings.
# *************
# CHAP Settings
# *************

# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
node.session.auth.authmethod = CHAP

# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
node.session.auth.username = <username>
node.session.auth.password = <password>

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
discovery.sendtargets.auth.username = <username>
discovery.sendtargets.auth.password = <password>
Start the iSCSI service and enable it persistently across reboots
# systemctl start iscsid.service
# systemctl enable iscsid.service
Verify the iSCSI service started
# systemctl status iscsid.service
3.4.2. Configuring Host Access to Volumes
The following section provides steps in connecting the Dell EqualLogic iSCSI volumes to be used for the Oracle installation.
As the root user on each Oracle node,
Verify Ethernet devices em3 and em4 can ping the Dell EqualLogic group IP.
# ping -I em3 <EqualLogic_Group_IP>
# ping -I em4 <EqualLogic_Group_IP>
Create an iSCSI interface (iface) for each storage NIC. While an interface can have any name, for easy identification purposes the ifaces are labeled iem3 and iem4.
# iscsiadm -m iface -I iem3 --op=new
New interface iem3 added
# iscsiadm -m iface -I iem4 --op=new
New interface iem4 added
Associate the iSCSI interface to the corresponding Ethernet device
# iscsiadm -m iface -I iem3 --op=update -n iface.net_ifacename -v em3
iem3 updated.
# iscsiadm -m iface -I iem4 --op=update -n iface.net_ifacename -v em4
iem4 updated.
Verify the iSCSI interface configuration
# iscsiadm -m iface
iem3 tcp,<empty>,<empty>,em3,<empty>
iem4 tcp,<empty>,<empty>,em4,<empty>
Discover the iSCSI targets
# iscsiadm -m discovery -t st -p <EqualLogic_Group_IP> -I iem3 -I iem4
Log in to the iSCSI targets
# iscsiadm -m node --login all
Verify the iSCSI sessions are logged in
# iscsiadm -m session
3.4.3. Device Mapper Multipath
Device mapper multipath provides the ability to aggregate multiple I/O paths to a newly created device mapper mapping to achieve high availability, I/O load balancing, and persistent naming. The following procedures provide the best practices to installing and configuring device mapper multipath devices.
Ensure Oracle RAC database volumes are accessible via the operating system on all nodes within the Oracle RAC Database cluster prior to continuing with the section below.
The following instructions are required on each node within the Oracle RAC Database 12c cluster.
As the root user, install the device-mapper-multipath package using the yum package manager.
# yum install device-mapper-multipath
Create the multipath.conf file in /etc/
# mpathconf --enable
Capture the scsi id of the local disk(s) on the system. This example assumes the local disk is located within /dev/sda.
# /usr/lib/udev/scsi_id --whitelisted --replace-whitespace \
--device=/dev/sda
3600508b1001030353434363646301200
Modify the blacklist section at the bottom of the /etc/multipath.conf file to include the scsi id of the local disk on the system. Once complete, save the changes made to the multipath.conf file.
Note: Notice how the wwid matches the value found in the previous step.
blacklist {
       wwid 3600508b1001030353434363646301200
       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
       devnode "^hd[a-z]"
}
Start the multipath daemon.
# systemctl start multipathd.service
Enable the multipath daemon to ensure it is started upon boot time.
# systemctl enable multipathd.service
Identify the dm- device, size, and WWID of each device mapper volume for Oracle data disks and recovery disks. In this example, volume mpathb is identified via the following command:
# multipath -ll
Figure 3.1. Multipath Device (mpathb)

Figure 3.1, “Multipath Device (mpathb)” properly identifies the current multipath alias name, size, WWID, and dm device. This information is required for the application of a custom alias to each volume as shown in step 9.
The default values used by device-mapper-multipath can be seen using the command multipathd show config. Below is an example of the default output.
defaults {
    verbosity 2
    polling_interval 5
    max_polling_interval 20
    reassign_maps "yes"
    multipath_dir "/lib64/multipath"
    path_selector "service-time 0"
    path_grouping_policy "failover"
    uid_attribute "ID_SERIAL"
    prio "const"
    prio_args ""
    features "0"
    path_checker "directio"
    alias_prefix "mpath"
    failback "manual"
    rr_min_io 1000
    rr_min_io_rq 1
    max_fds 1048576
    rr_weight "uniform"
    queue_without_daemon "no"
    flush_on_last_del "no"
    user_friendly_names "yes"
    fast_io_fail_tmo 5
    bindings_file "/etc/multipath/bindings"
    wwids_file /etc/multipath/wwids
    log_checker_err always
    find_multipaths yes
    retain_attached_hw_handler no
    detect_prio no
    detect_path_checker no
    hw_str_match no
    force_sync no
    deferred_remove no
    ignore_new_boot_devs no
    skip_kpartx no
    config_dir "/etc/multipath/conf.d"
    delay_watch_checks no
    delay_wait_checks no
    retrigger_tries 3
    retrigger_delay 10
    missing_uev_wait_timeout 30
    new_bindings_in_boot no
    remove_retries 0
    disable_changed_wwids no
}
Note: The standard options can be customized to better fit the storage array capabilities. Check with your storage vendor for details.
Uncomment the multipath section found within the /etc/multipath.conf file and create an alias for each device mapper volume in order to enable persistent naming of those volumes. Once complete, save the changes made to the multipath.conf file. The output should resemble the example below. For reference, refer to the Oracle data volumes created for this reference environment displayed in Table 2.6, “Oracle OCR, Voting Disk, & Data File Sizes for Reference Architecture”.
multipaths {
    multipath {
        wwid 3600c0ff000d7e7a899d8515101000000
        alias db1
    }
    multipath {
        wwid 3600c0ff000dabfe5a7d8515101000000
        alias db2
    }
    multipath {
        wwid 3600c0ff000d7e7a8dbd8515101000000
        alias fra
    }
    multipath {
        wwid 3600c0ff000dabfe5f4d8515101000000
        alias redo
    }
    multipath {
        wwid 3600c0ff000dabfe596a0f65101000000
        alias ocrvote1
    }
    multipath {
        wwid 3600c0ff000dabfe5a2a0f65101000000
        alias ocrvote2
    }
    multipath {
        wwid 3600c0ff000dabfe5b4a0f6510100000
        alias ocrvote3
    }
    multipath {
        wwid 3600c0ff000dacff5f4d8515101000000
        alias gimr1
    }
}
Restart the device mapper multipath daemon.
# systemctl restart multipathd.service
Verify the device mapper paths and aliases are displayed properly. Below is an example of one device mapper device labeled fra.
# multipath -ll fra
fra (3600c0ff000d7e7a89e85ac5101000000) dm-10 EQL,100E-00
size=186G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| |- 3:0:0:3 sdd 8:48  active ready running
| |- 3:0:1:3 sdh 8:112 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 3:0:2:3 sdl 8:176 active ready running
  |- 3:0:3:3 sdp 8:240 active ready running
3.4.5. Configuring Oracle ASM Disks
The configuration of Oracle ASM requires the use of either udev rules, Oracle ASMLib or Oracle ASM Filter Driver.
The following table provides key considerations between udev rules, Oracle ASMLib and Oracle ASM Filter Driver (ASMFD).
Table 3.7. Oracle ASM Key Considerations
| Technology | Pros | Cons |
| udev rules | No proprietary user space utilities; native to OS; standard device manager on Linux distributions; same performance as Oracle ASMLib and ASMFD | Cannot stop an accidental I/O write done by a program or user error |
| Oracle ASMLib | No pros, as it is slowly being deprecated in favor of ASMFD | Requires additional kernel module on OS; disks managed by Oracle instead of native OS; errors loading Oracle ASMLib may cause loss of access to Oracle ASM disks until the module can be reloaded; no performance benefit over native udev rules |
| Oracle ASM Filter Driver | Filters out all non-Oracle I/Os that may cause accidental overwrites to managed disks | Requires additional kernel module on OS; disks managed by Oracle instead of native OS; errors loading ASMFD may cause loss of access to Oracle ASM disks until the module can be reloaded; no performance benefit over native udev rules |
This reference architecture takes advantage of Red Hat’s native device manager udev rules as the method of choice for configuring Oracle ASM disks. For more information on Oracle ASM Filter Driver and installation method, visit: Administering Oracle ASM Filter Driver
3.4.5.1. Oracle ASMLib and Oracle ASM Filter Driver Alternative: Configuring udev Rules
This section focuses on the best practices of using Red Hat’s native udev rules to set up the appropriate permissions for each device mapper disk.
On the first node of the Oracle RAC 12c Release 2 cluster, as the root user, identify the Device Mapper Universally Unique Identifier (DM_UUID) for each device mapper volume. The example below shows the DM_UUID for the partitions of the volumes labeled db1p1, db2p1, fra1, redo1, ocrvote1p1, ocrvote2p1, ocrvote3p1, gimr1.
# for i in ocrvote1p1 ocrvote2p1 ocrvote3p1 db1p1 db2p1 fra1 redo1 gimr1; do \
> printf "%s %s\n" "$i" "$(udevadm info --query=all --name=/dev/mapper/$i | \
> grep -i dm_uuid)"; done
ocrvote1p1 E: DM_UUID=part1-mpath-3600c0ff000dabfe596a0f65101000000
ocrvote2p1 E: DM_UUID=part1-mpath-3600c0ff000dabfe5a2a0f65101000000
ocrvote3p1 E: DM_UUID=part1-mpath-3600c0ff000dabfe5b4a0f6510100000
db1p1 E: DM_UUID=part1-mpath-3600c0ff000d7e7a899d8515101000000
db2p1 E: DM_UUID=part1-mpath-3600c0ff000dabfe5a7d8515101000000
fra1 E: DM_UUID=part1-mpath-3600c0ff000d7e7a8dbd8515101000000
redo1 E: DM_UUID=part1-mpath-3600c0ff000dabfe5f4d8515101000000
gimr1 E: DM_UUID=part1-mpath-3600c0ff000dacff5f4d8515101000000
- Create a file labeled 99-oracle-asmdevices.rules within /etc/udev/rules.d/
Within the 99-oracle-asmdevices.rules file, create rules for each device similar to the example below:
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath- 3600c0ff000dabfe5f4d8515101000000",OWNER="grid",GROUP="asmadmin",MODE="06 60"To understand the rule above, it can be read as follows: If any
dm-device (dm-*) matches the DM_UUID ofpart1-mpath- 3600c0ff000dabfe5f4d8515101000000, assign thatdm-device to be owned by thegriduser and part of theasmadmingroup with the permission mode set to 0660.- Save the file labeled 99-oracle-asmdevices.rules
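Below is a sketch of what the complete 99-oracle-asmdevices.rules file might look like for this reference environment, built from the DM_UUID values identified earlier; the authoritative copy is the one included in Appendix G.
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dabfe596a0f65101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dabfe5a2a0f65101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dabfe5b4a0f6510100000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000d7e7a899d8515101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dabfe5a7d8515101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000d7e7a8dbd8515101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dabfe5f4d8515101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"
KERNEL=="dm-*",ENV{DM_UUID}=="part1-mpath-3600c0ff000dacff5f4d8515101000000",OWNER="grid",GROUP="asmadmin",MODE="0660"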
Copy the 99-oracle-asmdevices.rules file to each node within the Oracle RAC Database cluster using the scp command and enter the appropriate password credentials for the other nodes. The example below shows how to copy the file to node two of the Oracle RAC Database 12c cluster.
# scp /etc/udev/rules.d/99-oracle-asmdevices.rules oracle2:/etc/udev/rules.d/
root@oracle2's password:
99-oracle-asmdevices.rules    100%  834   0.8KB/s   00:00
On each node within the Oracle RAC Database cluster, locate the dm- device for each Oracle-related partition. An example of how to find the dm- device for each partition is to run the following command:
# for i in db1p1 db2p1 fra1 redo1 ocrvote1p1 ocrvote2p1 ocrvote3p1 gimr1; do printf "%s %s\n" "$i" "$(ls -l /dev/mapper/$i)"; done
db1p1 lrwxrwxrwx. 1 root root 8 May 20 20:39 /dev/mapper/db1p1 -> ../dm-11
db2p1 lrwxrwxrwx. 1 root root 8 May 20 20:39 /dev/mapper/db2p1 -> ../dm-12
fra1 lrwxrwxrwx. 1 root root 8 May 20 20:39 /dev/mapper/fra1 -> ../dm-13
redo1 lrwxrwxrwx. 1 root root 8 May 20 20:39 /dev/mapper/redo1 -> ../dm-14
ocrvote1p1 lrwxrwxrwx. 1 root root 8 Jan 28 12:11 /dev/mapper/ocrvote1p1 -> ../dm-18
ocrvote2p1 lrwxrwxrwx. 1 root root 8 Jan 28 12:11 /dev/mapper/ocrvote2p1 -> ../dm-19
ocrvote3p1 lrwxrwxrwx. 1 root root 8 Jan 28 12:11 /dev/mapper/ocrvote3p1 -> ../dm-20
gimr1 lrwxrwxrwx. 1 root root 8 Jan 28 12:11 /dev/mapper/gimr1 -> ../dm-21
On each node within the Oracle RAC Database cluster, apply and test the rules created within the 99-oracle-asmdevices.rules file by running a udevadm test on each device. The example below demonstrates a udevadm test on dm-11.
# udevadm test /sys/block/dm-11
[ ... Output Abbreviated ... ]
udevadm_test: DM_NAME=db1p1
udevadm_test: DM_UUID=part1-mpath-3600c0ff000d7e7a86485ac5101000000
udevadm_test: DM_SUSPENDED=0
udevadm_test: DEVLINKS=/dev/mapper/db1p1 /dev/disk/by-id/dm-name-db1p1 /dev/disk/by-id/dm-uuid-part1-mpath-3600c0ff000d7e7a86485ac5101000000 /dev/block/253:11
udevadm_test: ID_FS_TYPE=oracleasm
Confirm each device has the desired permissions. The example below shows db1p1 → dm-11, with the owner set to grid and the group set to asmadmin.
# ls -lh /dev/dm-11
brw-rw----. 1 grid asmadmin 253, 11 Jun  6 20:59 /dev/dm-11
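To spot-check every Oracle ASM device in one pass rather than one dm- device at a time, a loop similar to the sketch below can be used; it assumes the device mapper aliases defined earlier in this chapter.
# for i in db1p1 db2p1 fra1 redo1 ocrvote1p1 ocrvote2p1 ocrvote3p1 gimr1; do ls -lh $(readlink -f /dev/mapper/$i); done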
Note: If the desired permissions are not visible, reboot the particular node of the Oracle RAC Database cluster.
Note: For simplicity, this 99-oracle-asmdevices.rules file is included in Appendix G, 99-oracle-asmdevices.rules.
3.4.6. Optimizing Database Storage using Automatic System Tuning
The tuned package in Red Hat Enterprise Linux 7 is recommended for automatically tuning the system for common workloads through the use of profiles. Each profile is tailored for a different workload scenario, such as throughput performance, balanced, or high network throughput.
To simplify the tuning process for Oracle databases, a custom oracle profile is provided by the tuned-profiles-oracle package, which resides in the rhel-7-server-optional-rpms repository. The tuned-profiles-oracle profile uses the throughput-performance profile as its foundation, additionally sets the different parameters mentioned in previous sections of this reference architecture, and disables Transparent HugePages (THP) for Oracle database workload environments.
For more information on why THP is disabled, see Section 4.5, “Enabling HugePages”. Table 3.8, “Tuned Profile Comparison” provides details between the balanced profile, the throughput-performance profile, and the custom tuned-profiles-oracle profile.
Table 3.8. Tuned Profile Comparison
| Tuned Parameters | balanced | throughput-performance | tuned-profiles-oracle |
| I/O Elevator | deadline | deadline | deadline |
| CPU governor | OnDemand | performance | performance |
| kernel.sched_min_granularity_ns | auto-scaling | 10ms | 10ms |
| kernel.sched_wake_up_granularity_ns | 3ms | 15ms | 15ms |
| disk read-ahead | 128 KB | 4096 KB | 4096 KB |
| vm.dirty_ratio | 20% | 40% | 80%* |
| File-system barrier | on | on | on |
| Transparent HugePages | on | on | off |
| vm.dirty_background_ratio | 10% | 10% | 3%* |
| vm.swappiness | 60% | 10% | 1%* |
| energy_perf_bias | normal | performance | performance |
| min_perf_pct (intel_pstate_only) | auto-scaling | auto-scaling | auto-scaling |
| tcp_rmem_default | auto-scaling | auto-scaling | 262144* |
| tcp_wmem_default | auto-scaling | auto-scaling | 262144* |
| udp_mem(pages) | auto-scaling | auto-scaling | auto-scaling |
| vm.dirty_expire_centisecs | - | - | 500* |
| vm.dirty_writeback_centisecs | - | - | 100* |
| kernel.shmmax | - | - | 4398046511104* |
| kernel.shmall | - | - | 1073741824* |
| kernel.sem | - | - | 250 32000 100 128* |
| fs.file-max | - | - | 6815744* |
| fs.aio-max-nr | - | - | 1048576* |
| ip_local_port_range | - | - | 9000 65500* |
| tcp_rmem_max | - | - | 4194304* |
| tcp_wmem_max | - | - | 1048576* |
| kernel.panic_on_oops | - | - | 1* |
* The values expressed within the tuned-profiles-oracle profile are subject to change. They are meant to be used as starting points and may require changes for the specific environment being tuned in order to achieve optimal performance of the Oracle Database environment.
The following procedures provide the steps that are required to install, enable, and select the tuned-profiles-oracle profile.
On each node within the Oracle RAC Database cluster, as the root user, perform the following steps.
Install the tuned package via the yum package manager.
# yum install tuned
Enable tuned to ensure it is started at boot time.
# systemctl enable tuned.service
Start the tuned service.
# systemctl start tuned.service
Ensure that the rhel-7-server-optional-rpms repository is available; otherwise, enable it via:
# subscription-manager repos --enable=rhel-7-server-optional-rpms
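To confirm the repository is now enabled, list the enabled repositories; the exact repository label may differ depending on the attached subscription.
# yum repolist enabled | grep optional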
Install the tuned-profiles-oracle package.
# yum install tuned-profiles-oracle
Activate the tuned-profiles-oracle profile.
# tuned-adm profile oracle
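The active profile can be confirmed before continuing.
# tuned-adm active
Current active profile: oracle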
Verify that THP is now disabled via:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
Disable transparent huge pages persistently across reboots by adding transparent_hugepage=never to the kernel boot command line in the /etc/default/grub file, within the GRUB_CMDLINE_LINUX entry, as shown below:
# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=myvg/swap rd.lvm.lv=myvg/usr vconsole.font=latarcyrheb-sun16 rd.lvm.lv=myvg/root crashkernel=auto vconsole.keymap=us rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
For the grub changes to take effect, run the following:
# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-41c535c189b842eea5a8c20cbd9bff26
Found initrd image: /boot/initramfs-0-rescue-41c535c189b842eea5a8c20cbd9bff26.img
done
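On systems that boot via UEFI, the generated configuration file typically resides on the EFI system partition instead; a hedged example of the equivalent command is shown below, assuming the default Red Hat Enterprise Linux 7 UEFI layout.
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg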
If at any point a revert to the original settings is required, with persistence across reboots, the following commands can be run:
# systemctl stop tuned.service
# systemctl disable tuned.service
Even when reverting to the original settings, it is recommended to keep transparent huge pages disabled within the /etc/default/grub file.
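If a different baseline is preferred rather than disabling tuned entirely, another profile can simply be selected instead; for example, the stock throughput-performance profile.
# tuned-adm profile throughput-performance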
3.4.6.1. Customizing the tuned-profiles-oracle profile
The purpose of the tuned-profiles-oracle profile is to provide a starting baseline for an Oracle Database deployment. When further customization is required, the following section describes how to modify the profile's settings to meet custom criteria.
In order to modify the existing tuned-profiles-oracle profile, changes to the tuned.conf file within /usr/lib/tuned/oracle are required. Because of changes since Red Hat Enterprise Linux 7.0, the following recommendations apply when running Red Hat Enterprise Linux 7.1 or higher.
The following parameters are commented out because a default installation of Red Hat Enterprise Linux 7.1 or higher already uses equal or higher values. The list includes:
#kernel.shmmax = 4398046511104
#kernel.shmall = 1073741824
#kernel.shmmni = 4096
#fs.file-max = 6815744
#kernel.panic_on_oops = 1
The following parameter, relating to reverse path filtering, is left out of the tuned-profiles-oracle profile. It can be added manually as shown in Section 3.3.16, “Reverse Path Filtering”, or the parameter may be added to the tuned.conf file.
Example of tuned.conf with rp_filter
#
# tuned configuration
#
[main]
summary=Optimize for Oracle RDBMS
include=throughput-performance

[sysctl]
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
#kernel.shmmax = 4398046511104
#kernel.shmall = 1073741824
#kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
#fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
#kernel.panic_on_oops = 1
net.ipv4.conf.em3.rp_filter = 2
net.ipv4.conf.em4.rp_filter = 2

[vm]
transparent_hugepages=never
The specific Ethernet devices that provide the private interconnect (em3 and em4 in this reference environment) are added.
As mentioned earlier, all these values are starting points and may require additional adjustments to meet an environment’s requirements.
Restart the tuned service for the changes to take effect.
# systemctl restart tuned.service
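After the restart, it may be worth spot-checking that the modified sysctl values are in effect; the parameters below correspond to the example tuned.conf shown above.
# sysctl vm.swappiness vm.dirty_ratio net.ipv4.conf.em3.rp_filter
vm.swappiness = 1
vm.dirty_ratio = 80
net.ipv4.conf.em3.rp_filter = 2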
If the tuned package is used to set up the kernel parameters, ensure the remaining prerequisites are completed, starting with Section 3.3.17, “User Accounts & Groups”.
