Chapter 4. Oracle 12c Release 2 Configuration

4.1. Installing Oracle Grid Infrastructure (Required for ASM)

The installation of the Oracle Grid Infrastructure for Oracle 12c Release 2 is required for the use of Oracle ASM. Prior to the installation of the Oracle Grid Infrastructure, ensure that the prerequisites from the following sections have been met:

Note

The reference environment uses /u01/app/12.2.0/grid as the Grid home. The owner is set to grid and the group is set to oinstall.

The following commands create the Grid home directory and set the appropriate permissions:

As the root user,

# mkdir --parents /u01/app/12.2.0/grid
# chown --recursive grid.oinstall /u01
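
To optionally confirm that the Grid home is owned by the grid user and the oinstall group before proceeding:

# ls -ld /u01/app/12.2.0/grid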
  1. Download the Oracle Grid Infrastructure software files9 from the Oracle Software Delivery Cloud
  2. Change the ownership and permissions of the downloaded file, move the file to the Grid home, and install the unzip package to extract the file.

    # cd <grid_download_location>
    # chown grid.oinstall V840012-01.zip
    # mv V840012-01.zip /u01/app/12.2.0/grid
    # yum install unzip
  3. ssh as the grid user with the -Y option, change directory into the Grid home /u01/app/12.2.0/grid, and unzip the downloaded zip file.

    ssh -Y grid@<hostname>
    $ cd /u01/app/12.2.0/grid
    $ unzip -q V840012-01.zip
  4. As the grid user, start the OUI via the command:

    $ /u01/app/12.2.0/grid/gridSetup.sh
    Note

    Ensure that ssh is issued with the -Y option as the grid user from the client server. Otherwise, a DISPLAY error may occur.

  5. Within the Configuration Option window, select Configure Oracle Grid Infrastructure for a Standalone Server (Oracle Restart) and select Next.

  6. Within the Create ASM Disk Group window, provide the following:

    • Disk group name, i.e. DATA
    • Redundancy Level

      • External - redundancy provided by the storage system RAID, and not by Oracle ASM
      • Normal - provides two-way mirroring by Oracle ASM, thus providing two copies of every data extent.
      • High - provides three-way mirroring by Oracle ASM, thus tolerating the loss of two ASM disks within different failure groups.
    • Disks to be assigned to the Disk group, i.e. /dev/mapper/db1p1, /dev/mapper/db2p1

      Note

      This reference environment uses Normal redundancy.

    • Allocation Unit (AU) Size set to 4MB

      • A 4MB AU size is used to decrease the number of extents Oracle needs to manage. With fewer extents to manage, CPU utilization and memory consumption are reduced, thus improving performance. The AU size varies depending on the type of Oracle workload, I/O size per transaction, and overall disk group size. There is no "best" AU size, but a good starting point is 4 MB. Please visit Oracle’s documentation10 for more information.

        To display the appropriate candidate disks, click the Change Discovery Path button and enter the appropriate Disk Discovery Path from the following:

    • For device mapper devices, type: /dev/mapper/*

  7. Click Next once complete within the Create ASM Disk Group window.
  8. Within the ASM Password window, specify the password for the SYS and ASMSNMP user accounts, click Next.
  9. Within the Management Options window, ensure the Register with Enterprise Manager (EM) Cloud Control is unchecked, click Next.
  10. Within the Operating System Groups window, select the appropriate OS groups and click Next. The values as created and assigned within this reference environment are as follows:

    • Oracle ASM Administrator Group – ASMADMIN
    • Oracle ASM DBA Group – ASMDBA
    • Oracle ASM Operator Group – ASMOPER

  11. Within the Installation Location window, specify the appropriate Oracle base and software locations and click Next. The values set by this reference environment are as follows:

    • Oracle base: /u01/app/12.2.0
    • Software location: /u01/app/12.2.0/grid
  12. Within the Create Inventory window, specify the inventory directory and click Next. The values set by this reference environment are as follows:

    • Inventory Directory - /u01/app/oraInventory
  13. Within the Root script execution configuration window, select the check box labeled Automatically run configuration scripts and enter the root user credentials. Providing the root user credentials so that the configuration scripts run automatically at the end of the installation is optional. For the purposes of this reference environment, the root credentials are given in order to speed up the Oracle Grid Infrastructure installation process. Click Next.

  14. Within the Prerequisite Checks window, review the status and ensure there are no errors prior to continuing the installation. For failures with a status set to Fixable, select the Fix & Check Again button, which provides a runfixup.sh script generated by the OUI. If Automatically run configuration scripts was selected in the previous step, the OUI uses the root credentials and runs the runfixup.sh script automatically. Otherwise, as root, run /tmp/GridSetupActions_<timestamp>/CVU_<grid_version>_grid/runfixup.sh and click the Check Again button once runfixup.sh has finished.
  15. Within the Summary window, review all the information provided, and select Install to start the installation.
  16. During the installation process, within the Oracle Grid Infrastructure pop-up window, select Yes to allow the installer to run as the root user to execute the configuration scripts.
  17. Within the Finish window, verify the installation was successful and click Close.
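
    Outside of the installer, the state of the Oracle Restart stack and of the DATA disk group can optionally be confirmed from the command line. The following is a minimal sketch, run as the grid user, assuming the Grid home of this reference environment and the default +ASM SID of a standalone configuration:

    $ export ORACLE_HOME=/u01/app/12.2.0/grid
    $ export ORACLE_SID=+ASM
    $ $ORACLE_HOME/bin/crsctl stat res -t
    $ $ORACLE_HOME/bin/asmcmd lsdg

    The lsdg output also lists the allocation unit size of each disk group, confirming the 4 MB AU size selected earlier.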

9: Oracle Database 12c Release 2 - V840012-01.zip from http://edelivery.oracle.com

10: Oracle ASM Extents - https://docs.oracle.com/database/121/OSTMG/GUID-1E5C4FAD-087F-4598-B959-E66670804C4F.htm

4.2. Installing Oracle 12c R2 Database Software

Prior to the installation of the Oracle 12c Release 2 Database software, ensure that the prerequisites from the following sections have been met:

Note

The reference environment uses /u01/app/oracle as the Oracle base. The owner is set to oracle and the group is set to oinstall.

The following commands create the Oracle base directory and set the appropriate permissions:

As the root user,

# mkdir --parents /u01/app/oracle
# mkdir --parents /u01/app/oracle-software
# chown --recursive oracle.oinstall /u01/app/oracle
# chown --recursive oracle.oinstall /u01/app/oracle-software
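
The chown commands above assume the oracle user and the oinstall group already exist, as created during the prerequisite steps. An optional check of the user and its group memberships:

# id oracle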
  1. Download the Oracle Database software files9 from the Oracle Software Delivery Cloud
  2. Change the ownership and permissions of the downloaded file, move the file to /u01/app/oracle-software, and install the unzip package to extract the file.

    # cd <oracle_download_location>
    # chown oracle.oinstall V839960-01.zip
    # mv V839960-01.zip /u01/app/oracle-software
  3. ssh as the oracle user with the -Y option, change directory into /u01/app/oracle-software, and unzip the downloaded zip file.

    ssh -Y oracle@<hostname>
    $ cd /u01/app/oracle-software
    $ unzip -q V839960-01.zip
  4. As the oracle user, start the OUI via the command:

    $ /u01/app/oracle-software/database/runInstaller
    Note

    Ensure that ssh is issued with the -Y option as the oracle user from the client server. Otherwise, a DISPLAY error may occur.

  5. Within the Configure Security Updates window, provide the My Oracle Support email address to receive information about the latest security issues. Otherwise, uncheck I wish to receive security updates via My Oracle Support and click Next.
  6. Within the Installation Option window, select Install database software only and click Next.

  7. Within the Database Installation Options window, select Single Instance database installation as the type of database installation being performed and click Next.

  8. Within the Database Edition window, select the appropriate database edition and click Next. For the purposes of this reference environment, Enterprise Edition is the edition of choice.
  9. Within the Installation Location window, select the appropriate Oracle base and software location and click Next. For the purposes of this reference environment, the following values are set:

    • Oracle base - /u01/app/oracle
    • Software Location - /u01/app/oracle/product/12.2.0/dbhome_1
  10. Within the Operating System Groups window, select the appropriate OS groups and click Next. For the purposes of this reference environment, the following values are set as:

    • Database Administrator group – DBA
    • Database Operator group – OPER
    • Database Backup and Recovery group – BACKUPDBA
    • Data Guard Administrative group – DGDBA
    • Encryption Key Management Administrative group – KMDBA
    • Oracle Real Application Cluster Administration group - RACDBA

  11. Within the Summary window, review all the information provided, and select Install to start the installation.
  12. Once the installation completes, execute the scripts within the Execute Configuration scripts window. As the root user, run the following:

    # /u01/app/oracle/product/12.2.0/dbhome_1/root.sh
    Note

    In the example above, /u01/app/oracle/product/12.2.0/dbhome_1 is the Oracle home directory.

  13. Click OK within the Execute Configuration scripts window.
  14. Within the Finish window, verify the installation was successful and click Close.
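
    As an optional check once the installer closes, the release of the newly installed database software can be confirmed from the command line. A minimal sketch, run as the oracle user and assuming the Oracle home of this reference environment:

    $ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
    $ $ORACLE_HOME/bin/sqlplus -V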

4.3. Creating ASM Diskgroups via the ASM Configuration Assistant (ASMCA)

Prior to the creation of an Oracle database, create the Fast Recovery Area (FRA) and Redo Logs Oracle ASM diskgroups via Oracle’s ASM Configuration Assistant (ASMCA).

  1. ssh with the -Y option as the grid user is required prior to running asmca.
  2. As the grid user, start asmca via the following command:

    $ /u01/app/12.2.0/grid/bin/asmca
    Note

    /u01/app/12.2.0/grid is the Grid home directory.

  3. Via the asmca application, select Disk Groups and click Create.

  4. Within the Create Disk Group window, provide the following:

    • A name for the disk group, i.e. FRADG
    • Redundancy level for the disk group, i.e. External Redundancy
    • Selection of the disks to be added to the disk group, i.e. /dev/mapper/fra1
    • Select an AU Size, i.e. 4 MB
      Note

      To display the appropriate eligible disks, click the Change Discovery Path button and enter the appropriate Disk Discovery Path from the following:

      • For Device Mapper devices, type: /dev/mapper/*

  5. Click the OK button once the steps above are complete.

  6. Repeat the above steps to configure additional disk groups. It is recommended, though not required, to create a separate disk group for the redo logs. A command-line alternative for creating disk groups is sketched after this list.
  7. Once all the disk groups are created, click the Exit button from the main ASM Configuration Assistant window. Click Yes when asked to confirm quitting the application.
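
For environments where running the graphical asmca tool is inconvenient, an equivalent disk group can also be created with SQL against the ASM instance. The following is a minimal sketch only, using the example names and disk from this section, run as the grid user, and assuming the Grid home of this reference environment, the +ASM SID, and a disk discovery string that includes /dev/mapper/*:

    $ export ORACLE_HOME=/u01/app/12.2.0/grid
    $ export ORACLE_SID=+ASM
    $ $ORACLE_HOME/bin/sqlplus / as sysasm

    SQL> CREATE DISKGROUP FRADG EXTERNAL REDUNDANCY
         DISK '/dev/mapper/fra1'
         ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '12.2';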

4.4. Creating Pluggable Databases using Database Configuration Assistant (DBCA)

With the introduction of Oracle Database 12c, Oracle introduced the Multitenant architecture. The Multitenant architecture provides the ability to consolidate multiple databases, known as pluggable databases (PDBs), into a single container database (CDB). It provides advantages11 that include easier management and monitoring of the physical database, fewer patches and upgrades, performance metrics consolidated into one CDB, and sizing one SGA instead of multiple SGAs. While using the Multitenant architecture is optional, this reference architecture focuses on describing the step-by-step procedure of taking advantage of it. When creating an Oracle database, the recommended method is the usage of the dbca utility. Prior to getting into the details of installing a container database (CDB) and deploying pluggable databases (PDB), an overview of the key concepts of the Multitenant architecture is provided.

Container11 – is a collection of schemas, objects, and related structures in a multitenant container database (CDB) that appears logically to an application as a separate database. Within a CDB, each container has a unique ID and name.

A CDB consists of two types of containers: the root container and all the pluggable databases that attach to a CDB.

Root container11 – also known as the root, is a collection of schemas, schema objects, and nonschema objects to which all PDBs belong. Every CDB has one and only one root container, which stores the system metadata required to manage PDBs (no user data is stored in the root container). All PDBs belong to the root. The name of the root container is CDB$ROOT.

PDB11 – is a user-created set of schemas, objects, and related structures that appears logically to an application as a separate database. Every PDB is owned by SYS, which is a common user in the CDB, regardless of which user created the PDB.

For more information on Oracle’s Multitenant architecture, visit Oracle’s documentation11.

11: https://docs.oracle.com/database/122/ADMIN/overview-of-managing-a-multitenant-environment.htm#ADMIN13507
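
As a concrete illustration of these containers, the root and its PDBs can be listed from a SQL*Plus session once the CDB described below has been created. A brief sketch, assuming the Oracle home of this reference environment and the example SID cdb chosen later in this section:

    $ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
    $ export ORACLE_SID=cdb
    $ $ORACLE_HOME/bin/sqlplus / as sysdba

    SQL> SELECT con_id, name FROM v$containers;
    SQL> SHOW PDBS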

The following section describes the step-by-step procedure to create a container database (CDB) that holds two pluggable databases (PDB) thus taking advantage of Oracle’s Multitenant architecture.

  1. ssh with the -Y option as the oracle user prior to running dbca.
  2. As the oracle user, run the dbca utility via the command:

    $ /u01/app/oracle/product/12.2.0/dbhome_1/bin/dbca
    Note

    In the example above, /u01/app/oracle/product/12.2.0/dbhome_1 is the Oracle home directory.

  3. Within the Database Operations window, select Create a database radio button and click Next.
  4. Within the Creation Mode window, select Advanced Mode radio button and click Next.
  5. Within the Database Template window, select Database Type as Oracle Single Instance database and Custom Database radio button. Click Next.
  6. Within the Database Identification window, set a global database name and Oracle System Identifier (SID), i.e. cdb. Check the check box that reads Create as Container Database. Select the number of PDBs to install and provide a PDB Name Prefix, i.e. orclpdb and click Next. This reference environment creates two PDBs.

  7. Within the Storage Option window, select the Use following for the database storage attributes radio button. Change the Database file storage type: to Automatic Storage Management (ASM). For the Database file location:, select the Browse button and pick the database disk group, i.e. +DATA. Select Multiplex redo logs and control files and enter the name of the redo log disk group (if created previously), i.e. +REDODG.

    Note

    Oracle-Managed Files (OMF) are used within the reference environment; however, they are not required.

  8. Within the Fast Recovery Option window, check the checkbox labeled Specify Fast Recovery Area, and select the Browse button to pick the diskgroup that is to be assigned for Fast Recovery Area, i.e. +FRADG. Enter an appropriate size based upon the size of the disk group.

  9. Within the Network Configuration window, ensure the LISTENER is checked and click Next.
  10. Within the Database Options window, select the database components to install. This reference environment kept the defaults. Once selected, click Next.
  11. Within the Configuration Options window, ensure Use Automatic Shared Memory Management is selected, and use the scroll bar or enter the appropriate SGA and PGA values for the environment. For the remaining tabs, Sizing, Character sets, and Connection mode, the defaults are used.
  12. Within the Management Options window, modify the Enterprise Manager database port or deselect Configure Enterprise Manager (EM) Database Express if it is not being used. This reference architecture uses the defaults and selects Next.
  13. Within the User Credentials window, enter the credentials for the different administrative users and click Next.
  14. Within the Creation Option window, ensure the Create database checkbox is selected. This reference architecture uses the defaults for all other options, but these may be customized to fit an environment’s requirements.
  15. Within the Summary window, review the Summary and click Finish to start the database creation.
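
    After the creation completes, it is often useful to confirm that the PDBs are open and, optionally, to have them reopen automatically whenever the CDB restarts. A minimal sketch from a SQL*Plus session connected as SYSDBA to the CDB, using the example names above:

    SQL> SELECT name, open_mode FROM v$pdbs;
    SQL> ALTER PLUGGABLE DATABASE ALL OPEN;
    SQL> ALTER PLUGGABLE DATABASE ALL SAVE STATE;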

4.5. Enabling HugePages

Transparent Huge Pages (THP) are implemented within Red Hat Enterprise Linux 7 to improve memory management by dynamically allocating huge pages as needed, removing many of the difficulties of managing huge pages manually. Red Hat Enterprise Linux 7, by default, uses transparent huge pages, also known as anonymous huge pages. Unlike static huge pages, no additional configuration is needed to use them. Huge pages can boost application performance by increasing the chance a program has quick access to a memory page. Unlike traditional huge pages, transparent huge pages can be swapped out (as smaller 4kB pages) when virtual memory clean up is required. Unfortunately, Oracle Databases do not take advantage of transparent huge pages for interprocess communication. In fact, My Oracle Support 12 states to disable THP due to unexpected performance issues or delays when THP is found to be enabled. To reap the benefit of huge pages for an Oracle database, it is required to allocate static huge pages and disable THP. Due to the complexity of properly configuring huge pages, it is recommended to copy the bash shell script found within Appendix C, Huge Pages Script and run the script once the database is up and running. The reasoning behind allocating huge pages once the database is up and running is to provide a proper number of pages to handle the running shared memory segments. The steps are as follows:
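
As a rough illustration of what the script computes, the number of static huge pages needed is essentially the amount of shared memory (SGA) to be backed by huge pages divided by the huge page size. The small bash fragment below uses an assumed 16 GB SGA for the example only; the Appendix C script derives the real numbers from the running shared memory segments:

# Hugepagesize is reported in kB in /proc/meminfo (typically 2048 kB on x86_64)
HPG_SZ=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
# Assumed 16 GB SGA for illustration, expressed in kB
SGA_KB=$((16 * 1024 * 1024))
echo "hugepages needed: $(( SGA_KB / HPG_SZ ))"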

  1. Copy the bash script found within Appendix C, Huge Pages Script and save it as huge_pages_settings.sh
  2. As the root user, ensure the huge_pages_settings.sh is executable by running:

    # chmod +x huge_pages_settings.sh
  3. As the root user, ensure the bc package is installed

    # yum install bc
  4. As the root user, execute the huge_pages_settings.sh script as follows:

    # /path/to/huge_pages_settings.sh
    Recommended setting within the kernel boot command line: hugepages = <value>
    Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf:
    oracle soft memlock <value>
    Recommended setting within /etc/security/limits.d/99-grid-oracle-limits.conf:
    oracle hard memlock <value>
  5. Add the number of hugepages provided by the script to the kernel boot command line within /etc/default/grub as follows:

    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_DEFAULT=saved
    GRUB_DISABLE_SUBMENU=true
    GRUB_TERMINAL_OUTPUT="console"
    GRUB_CMDLINE_LINUX="nofb splash=quiet crashkernel=auto rd.lvm.lv=myvg/root rd.lvm.lv=myvg/swap rd.lvm.lv=myvg/usr rhgb quiet transparent_hugepage=never hugepages=<value-provided-by-script>"
    GRUB_DISABLE_RECOVERY="true"
    Note

    Allocating the number of huge pages within the kernel boot command line is the most reliable method due to memory not yet becoming fragmented.13

  6. For the grub changes to take effect, run the command:

    # grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-3.10.0-693.1.1.el7.x86_64
    Found initrd image: /boot/initramfs-3.10.0-693.1.1.el7.x86_64.img
    Found linux image: /boot/vmlinuz-3.10.0-514.el7.x86_64
    Found initrd image: /boot/initramfs-3.10.0-514.el7.x86_64.img
    Found linux image: /boot/vmlinuz-0-rescue-f9650ab62cd449b8b2a02d39ac73881e
    Found initrd image: /boot/initramfs-0-rescue-f9650ab62cd449b8b2a02d39ac73881e.img
    done
  7. Oracle requires setting the soft and hard limits for memlock. Setting memlock allows the oracle user to lock a certain amount of memory from physical RAM so that it is not swapped out. The value is expressed in kilobytes and is important from the Oracle perspective because it provides the oracle user permission to use huge pages. This value should be slightly larger than the largest SGA size of any of the Oracle Database instances installed in an Oracle environment. An illustrative check of the resulting limit and of huge page usage is sketched after this list. To set memlock, add within /etc/security/limits.d/99-grid-oracle-limits.conf the following:

    oracle soft memlock <value-provided-by-script>
    oracle hard memlock <value-provided-by-script>

    Reboot the system to ensure the huge pages setting takes effect properly.

  8. Verify the value provided by the huge_pages_settings.sh matches the total number of huge pages available on the node(s) with the following command:

    # cat /proc/meminfo | grep -i hugepages_total
    HugePages_Total: <value-provided-by-script>
  9. Verify the current status of the transparent huge pages is set to never via the command:

    # cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]
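
    Once the database instance is running with the settings above in place, the huge page reservation and the memlock limit can be checked with commands such as the following (an illustrative sketch; output omitted):

    # grep -i hugepages /proc/meminfo
    $ ulimit -l

    The grep output shows HugePages_Total along with the free and reserved counts, while ulimit -l, run as the oracle user, reports the locked memory limit in kilobytes.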

12: ALERT: Disable Transparent HugePages on SLES11,RHEL6,OEL6 and UEK2 Kernels (DOC ID: 1557478.1)

13: https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt