26.2. Installing Oracle 10gR2 Cluster Ready Services (CRS) with MPIO
There is a bug when installing Oracle 10gR2 CRS services using MPIO (Multipath I/O). The bug has been reported to Oracle; its number and description are:
BUG 5005148 - CANNOT USE BLOCK DEVICES IN VOTING DISK AND OCR DURING 10G RAC INSTALLATION
This has been fixed in Oracle 11g.
Before attempting to install CRS, ensure that all firewalls are disabled:
service iptables stop
chkconfig iptables off
There are workarounds to successfully install CRS using MPIO: disable MPIO during the installation, and after the VIPCA portion of the CRS installation is complete, turn MPIO back on and restart the CRS services. In the following examples, multipath user-friendly names are used for ease of manageability. In this example, Oracle 10gR2 CRS uses internal redundancy for the OCR and Voting disks.
Read /usr/share/doc/device-mapper-multipath-0.4.5/Multipath-usage.txt for more information on configuring multipath.
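The friendly names used below (ocr1, vote1, and so on) come from alias entries in /etc/multipath.conf. As a rough sketch only (the WWIDs are the ones shown in the multipath -l output in this example; substitute the WWIDs of your own storage):

```
defaults {
        user_friendly_names yes
}
multipaths {
        multipath {
                wwid    36006016054141500beac25dcc436dc11
                alias   ocr1
        }
        multipath {
                wwid    3600601605414150040a48bf2c436dc11
                alias   vote1
        }
        # ...one multipath { } stanza per OCR/Voting device
}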
First, determine which block devices are associated with the MPIO devices for your OCR and Voting disks, and save the output to a log file.
# multipath -l > multipath.log
ocr1 (36006016054141500beac25dcc436dc11)
[size=1 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:1:0 sdc 8:32  [active]
 \_ 2:0:1:0 sdn 8:208 [active]
ocr2 (36006016054141500a898dae7c436dc11)
[size=2 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:1:1 sdd 8:48  [active]
 \_ 2:0:1:1 sdo 8:224 [active]
vote1 (3600601605414150040a48bf2c436dc11)
[size=3 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:1:2 sde 8:64  [active]
 \_ 2:0:1:2 sdp 8:240 [active]
vote2 (36006016054141500c0fc0fffc436dc11)
[size=4 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:1:4 sdg 8:96  [active]
 \_ 2:0:1:4 sdr 65:16 [active]
vote3 (36006016054141500340b8309c536dc11)
[size=5 GB][features="0"][hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:1:5 sdh 8:112 [active]
 \_ 2:0:1:5 sds 65:32 [active]
Delete raw links to Multipath Devices on all nodes:
rm -f /dev/raw/*
Shut down multipath:
multipath -F    # flushes (removes) all multipath device maps
Looking at the multipath.log output file, you can see which block devices will be used for Oracle CRS. To be sure you have the correct disk on all nodes, look at /proc/partitions. In this installation example, each device has a unique size, so it is easier to map. Now link the block devices to the raw devices on all nodes by modifying /etc/sysconfig/rawdevices. Run ll on each device to get the major and minor numbers for the block device, as seen below.
# ll /dev/sdc1
brw-rw---- 1 root disk 8, 33  Oct 1 12:46 /dev/sdc1   #ocr1
# ll /dev/sdd1
brw-rw---- 1 root disk 8, 49  Oct 1 12:47 /dev/sdd1   #ocr2
# ll /dev/sde1
brw-rw---- 1 root disk 8, 65  Oct 1 12:47 /dev/sde1   #vote1
# ll /dev/sdg1
brw-rw---- 1 root disk 8, 97  Oct 1 12:48 /dev/sdg1   #vote2
# ll /dev/sdh1
brw-rw---- 1 root disk 8, 113 Oct 1 12:49 /dev/sdh1   #vote3
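As an alternative to reading the major and minor numbers out of ll output by hand, stat can print them directly. A minimal sketch, assuming GNU coreutils stat (the majmin helper name is ours, not part of the procedure; %t and %T are stat's device major and minor fields, printed in hex):

```shell
# Print "major minor" in decimal for a device node.
# stat -c '%t %T' prints the device major and minor numbers in hex.
majmin() {
    local hex maj min
    hex=$(stat -c '%t %T' "$1")
    maj=${hex%% *}
    min=${hex##* }
    printf '%d %d\n' "0x$maj" "0x$min"
}

majmin /dev/null    # /dev/null is character device 1, 3 on Linux
```

On the example system, `majmin /dev/sdc1` would print `8 33`, matching the ll output above.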
Change the configuration file /etc/sysconfig/rawdevices using vi or your favorite editor and add the following block devices.
$ vi /etc/sysconfig/rawdevices
/dev/raw/raw1 8 33
/dev/raw/raw2 8 49
/dev/raw/raw3 8 65
/dev/raw/raw4 8 97
/dev/raw/raw5 8 113
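Each line in this file pairs a raw device node with a block device's major and minor numbers. As a rough sketch of how the rawdevices init script consumes this format (here we only echo the raw(8) bindings instead of executing them, and the sample file path is hypothetical):

```shell
# Build a sample file in the rawdevices format (hypothetical scratch path).
cat > /tmp/rawdevices.sample <<'EOF'
/dev/raw/raw1 8 33
/dev/raw/raw2 8 49
EOF

# Parse it the way the init script would, printing each raw(8) call
# instead of running it.
while read rawdev major minor; do
    case "$rawdev" in
        \#*|'') continue ;;   # skip comments and blank lines
    esac
    echo raw "$rawdev" "$major" "$minor"
done < /tmp/rawdevices.sample | tee /tmp/raw_bindings.txt
```

Running the real init script performs the equivalent `raw /dev/raw/rawN major minor` bindings.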
On the other nodes, run blockdev so the partition tables are read again and the partition devices are created:
# blockdev --rereadpt /dev/sdc
# blockdev --rereadpt /dev/sdd
# blockdev --rereadpt /dev/sde
# blockdev --rereadpt /dev/sdg
# blockdev --rereadpt /dev/sdh
Double-check on the other nodes that the major and minor numbers for the block devices match. In the example below, the first number, 8, is the major number and the second number, 113, is the minor number.
$ ll /dev/sdh1
brw-rw---- 1 root disk 8, 113 Oct 1 13:14 /dev/sdh1
Copy /etc/sysconfig/rawdevices to the other nodes, then restart the rawdevices service. You will see the created raw devices in /dev/raw.
# /etc/rc5.d/S56rawdevices start
Assigning devices:
/dev/raw/raw1 --> 8 33
/dev/raw/raw1: bound to major 8, minor 33
/dev/raw/raw2 --> 8 49
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3 --> 8 65
/dev/raw/raw3: bound to major 8, minor 65
/dev/raw/raw4 --> 8 97
/dev/raw/raw4: bound to major 8, minor 97
/dev/raw/raw5 --> 8 113
/dev/raw/raw5: bound to major 8, minor 113
done
Now check that the permissions are set for the Oracle installation.
# ll /dev/raw
total 0
crwxrwxrwx 1 oracle dba 162, 1 Oct 1 13:13 raw1
crwxrwxrwx 1 oracle dba 162, 2 Oct 1 13:13 raw2
crwxrwxrwx 1 oracle dba 162, 3 Oct 1 13:13 raw3
crwxrwxrwx 1 oracle dba 162, 4 Oct 1 13:13 raw4
crwxrwxrwx 1 oracle dba 162, 5 Oct 1 13:13 raw5
Now you can commence an Oracle Clusterware installation.
Note
If, during the installation process, Oracle reports that the raw devices are busy, then multipath was left on on one of the nodes.
After the VIPCA portion of the Oracle CRS installation is complete, comment out the raw devices you created and copy /etc/sysconfig/rawdevices to the other nodes:
#/dev/raw/raw1 8 33
#/dev/raw/raw2 8 49
#/dev/raw/raw3 8 65
#/dev/raw/raw4 8 97
#/dev/raw/raw5 8 113
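The commenting-out can also be done with sed. A hedged sketch, shown here against a scratch copy rather than the live file (the /tmp path is hypothetical; adapt it before using this for real):

```shell
# Build a scratch copy of the rawdevices entries (hypothetical path).
cat > /tmp/rawdevices.demo <<'EOF'
/dev/raw/raw1 8 33
/dev/raw/raw2 8 49
EOF

# Prefix every /dev/raw/rawN line with '#' to comment it out;
# '&' in sed's replacement stands for the matched text.
sed -i 's|^/dev/raw/raw|#&|' /tmp/rawdevices.demo
```

After the edit, every binding line in the file begins with `#`, which the rawdevices init script skips.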
Check to be sure the Oracle Notification Service (ONS) has started on all nodes.
# ps -ef | grep ons
oracle   20058     1  0 13:49 ?        00:00:00 /ora/crs/opmn/bin/ons -d
oracle   20059 20058  0 13:49 ?        00:00:00 /ora/crs/opmn/bin/ons -d
root     20812 28286  0 13:50 pts/2    00:00:00 grep ons
The installation is now complete. Now you can turn off Oracle CRS and start using MPIO.
Note
It is best to do this process one node at a time; no reboots are needed.
Shut down the Oracle CRS processes:
# /ora/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Delete the raw devices created for Oracle CRS installation.
$ rm -f /dev/raw/*
Turn on Multipath I/O (MPIO) again.
$ multipath
Create a simple script in /etc/rc5.d/ (ordered so that it starts after MPIO but before CRS) to bind the raw devices using MPIO.
num="1"
for i in `ls /dev/mpath/ocr?p? | sort`
do
    raw /dev/raw/raw${num} $i
    let "num = $num + 1"
done
num="3"
for i in `ls /dev/mpath/vote?p? | sort`
do
    raw /dev/raw/raw${num} $i
    let "num = $num + 1"
done
Then run the script that you just created (in this example, /etc/rc5.d/S57local):
# /etc/rc5.d/S57local start
/dev/raw/raw1: bound to major 253, minor 25
/dev/raw/raw2: bound to major 253, minor 21
/dev/raw/raw3: bound to major 253, minor 22
/dev/raw/raw4: bound to major 253, minor 23
/dev/raw/raw5: bound to major 253, minor 24
Verify the correct permissions are all set.
# ll /dev/raw
total 0
crwxrwxrwx 1 oracle dba 162, 1 Oct 1 13:58 raw1
crwxrwxrwx 1 oracle dba 162, 2 Oct 1 13:58 raw2
crwxrwxrwx 1 oracle dba 162, 3 Oct 1 13:58 raw3
crwxrwxrwx 1 oracle dba 162, 4 Oct 1 13:58 raw4
crwxrwxrwx 1 oracle dba 162, 5 Oct 1 13:58 raw5
Restart CRS and be sure the ONS processes all start. This ensures Oracle CRS is functional on this node.
# /ora/crs/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
Repeat the process for the rest of the nodes. The Oracle CRS installation using Multipath I/O is now complete.
Once complete, you can check the status of the nodes:
# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------------------
ora....t08.gsd application    ONLINE    ONLINE    et-virt08
ora....t08.ons application    ONLINE    ONLINE    et-virt08
ora....t08.vip application    ONLINE    ONLINE    et-virt08
ora....t09.gsd application    ONLINE    ONLINE    et-virt09
ora....t09.ons application    ONLINE    ONLINE    et-virt09
ora....t09.vip application    ONLINE    ONLINE    et-virt09
ora....t10.gsd application    ONLINE    ONLINE    et-virt10
ora....t10.ons application    ONLINE    ONLINE    et-virt10
ora....t10.vip application    ONLINE    ONLINE    et-virt10
ora....t11.gsd application    ONLINE    ONLINE    et-virt11
ora....t11.ons application    ONLINE    ONLINE    et-virt11
ora....t11.vip application    ONLINE    ONLINE    et-virt11
#