SAN Storage Persistently acquires /dev/sda label
I have a Fibre Channel connected SAN attached to a host, and specific partitions on that host (say /dev/sda4, 5 and 6) are used by my guest machines.
The guests can't access those partitions whenever I attach the SAN storage, because the storage LUNs take the first device names (say /dev/sda to /dev/sde), which pushes the host's own disk to another name, say /dev/sdf (with partitions 1, 2, ... 6).
How can I fix this prior to implementation of Multipathing?
Responses
Every time the server reboots there is a good chance the /dev/sd* names of devices attached via FC will get shuffled.
Using multipath is the recommended and easy way of solving this, as you will then only need to map, for example, /dev/mpathf.
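(If you go the multipath route on RHEL 6, getting a basic setup in place is roughly the following; mpathconf only exists on RHEL 6, on RHEL 5 you copy in a sample multipath.conf and start the multipathd service by hand:
# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# multipath -ll
Adjust for your distro/version.)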
But there is a "non-recommended" way too :)
[I DID NOT TEST THIS SO THERE MAY BE ERROR / TYPO, IF THIS EATS YOUR DATA / DOG IT IS NOT MY FAULT. ]
do:
# ls -l /dev/disk/by-path
And you will get something like this:
lrwxrwxrwx 1 root root 9 Jul 21 13:52 pci-0000:0b:00.1-fc-0x50060e80122d2911-lun-205 -> ../../sdf
Create a new file /etc/udev/rules.d/55-persistent-disk.rules and add a rule like the following. Note: RESULT is compared against the output of the PROGRAM, and scsi_id prints the device WWID rather than the by-path name, so the WWID is what goes there:
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="<WWID of that LUN>", SYMLINK+="lun205"
With a little luck you should get something like /dev/lun205 as a persistent name.
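A quick way to get the WWID and dry-run the rule without rebooting (untested here; sdf is just the example device from above, udevadm test is the RHEL 6 command and on RHEL 5 it was udevtest):
# /lib/udev/scsi_id --whitelisted --replace-whitespace /dev/sdf
# udevadm test /sys/block/sdf 2>&1 | grep lun205
# ls -l /dev/lun205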
I can recommend another multipath solution:
device {
    vendor                "HITACHI"
    product               "DF600F*"
    path_grouping_policy  group_by_prio
    prio_callout          "/sbin/mpath_prio_hds_modular %d"
    path_checker          readsector0
    getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
}
In that example, the getuid_callout could be modified to run a script that "renames" your devices. For example, in my environment we call a Perl script that generates a name based on the array's serial number plus the LUN ID, e.g. HDS02_34d9 (our array names are not very creative ;-)
The HDS02 part is derived from a substring in the middle of the WWN, and the LUN ID is actually pretty simple.
I can't recall specifically which portion of the WWN is the serial number, but the following example is close enough:
HDS02="050e2"
HDS05="00e13"
360003baccc75000 050e2 09270007 34d9 => HDS02_34d9
3600d02300000000 00e13 955cc375 7803 => HDS05_7803
and we would replace the getuid_callout with
getuid_callout "/sbin/scsi_id_custom_rhel6 -g -u -s /block/%n"
So - now when I look at my multipath -ll -v2 output, I would see something like
HDS02_34d9 (360003baccc75000050e20927000734d9) [size=10 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
\_ 2:0:0:6 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 3:0:0:6 sdc 8:32 [active][ready]
HDS05_7803 (3600d0230000000000e13955cc3757803) [size=50 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [prio=1][active]
\_ 2:0:1:4 sdd 8:48 [active][ready]
\_ round-robin 0 [prio=1][enabled]
\_ 3:0:1:4 sde 8:64 [active][ready]
and depending on whether you're using RHEL 5 or 6, you will then see a matching entry in /dev/mpath (only RHEL 5) and /dev/mapper (in both).
ls /dev/mapper/HDS*
HDS02_34d9 HDS05_7803
NOTE: I put this email together without actual access to a system with multipath, but I will review it again later and put some real output in the examples.
The syntax of the getuid_callout is different between RHEL 5 and 6 (it takes different options).
I recommend looking at the DM Multipath manual, as it provides some good examples and advice.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/DM_Multipath/index.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/DM_Multipath/index.html
-
Did you update your configuration of the guests to use /dev/lun3 instead of, for example, /dev/sdd?
-
As you already have 2 paths you should really use the multipath software, as James and I recommended.
I wrote the udev stuff for fun and giggles, but it should not be used in production.
Multipath is definitely your best option. I (personally) do not use udev for storage as it is not dynamic (i.e. you have to update the rules every time you add a LUN).
I can provide the getuid_callout script that we use if that would help.
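In the meantime, here is a very rough bash sketch of the idea (NOT our real script; the substring offsets and the HDS02/HDS05 serial values are just the made-up ones from my example above, and it assumes it is called with the same "-g -u -s /block/%n" arguments as scsi_id):
#!/bin/bash
# Illustrative getuid_callout wrapper - adjust offsets/serials for your arrays.
dev="${@: -1}"                                   # last argument is /block/<name>
wwid=$(/sbin/scsi_id -g -u -s "$dev") || exit 1  # real WWID from the real scsi_id
serial=${wwid:16:5}                              # made-up serial position in the WWN
lunid=${wwid: -4}                                # last 4 hex digits = LUN ID in this example
case "$serial" in
  050e2) array="HDS02" ;;
  00e13) array="HDS05" ;;
  *)     echo "$wwid"; exit 0 ;;                 # unknown array: fall back to the plain WWID
esac
echo "${array}_${lunid}"
Whatever name this prints is what multipath will use as the map name, so it has to be unique and stable per LUN.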
EDIT: (NEVER MIND - I do believe this would work, but it does not give you multipath.)
I wonder if you could map the actual by-path device,
/dev/disk/by-path/pci-0000:2e:00.0-fc-0x50060e801603da64-lun-0
like this in your guest's XML under /etc/libvirt/qemu/:
<disk type='block' device='lun' sgio='unfiltered'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-path/pci-0000:2e:00.0-fc-0x50060e801603da64-lun-0'/>
<target dev='vdb' bus='scsi'/>
<address type='drive' controller='4' bus='0' target='0' unit='0'/>
</disk>
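If you try that, I would edit the domain with virsh rather than touching the file directly, and then double-check what the guest is really pointed at (the guest name "myguest" is just an example; domblklist needs a reasonably recent libvirt):
# virsh edit myguest
# virsh domblklist myguest
# virsh dumpxml myguest | grep 'source dev'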
I recommend reviewing HP's recommendations for your SAN attached multipath. I found the following:
NOTE: getuid_callout syntax is different between RHEL 5 and 6
device {
    vendor                "HP"
    product               "P2000 G3 FC|P2000G3 FC/iSCSI"
    path_grouping_policy  group_by_prio
    getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
    path_checker          tur
    path_selector         "round-robin 0"
    prio_callout          "/sbin/mpath_prio_alua /dev/%n"
    rr_weight             uniform
    failback              immediate
    hardware_handler      "0"
    no_path_retry         18
    rr_min_io             100
}
http://h50146.www5.hp.com/products/software/oe/linux/mainstream/support/doc/option/fibre/pdfs/c02020121.pdf
You can use the following doc to create an "alias" for each device,
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Configuration_Example_-_Oracle_HA_on_Cluster_Suite/multipath_configuration.html
I agree on the Multipath option - I do that with my Hitachi VSP disk array.
Just to be clear:
While multipath allows you to set different persistent names to be used, the underlying /dev/sd* names will still differ between reboots. That doesn't matter, though, because in your setup you do NOT use those names - you use the multipath names.
Note: the configuration of multipath.conf varies slightly between RHEL 5 and RHEL 6 due to the options of the scsi_id command. You didn't mention which you have.
FOR RHEL 6 (note: the getuid_callout line below should all be on one line, so that "--whitelisted" is immediately followed by "--device"):
# Devices definition for Hitachi VSP disk array
devices {
device {
vendor "(HITACHI|HP)"
product "OPEN-.*"
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
features "1 queue_if_no_path"
hardware_handler "0"
path_selector "round-robin 0"
path_grouping_policy multibus
rr_weight uniform
rr_min_io 100
path_checker tur
prio const
}
}
--OR--
FOR RHEL5:
# Devices definition for Hitachi VSP disk array
devices {
device {
vendor "HITACHI"
product "OPEN-V.*"
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
features "0"
hardware_handler "0"
path_grouping_policy multibus
failback immediate
rr_weight uniform
rr_min_io 1000
path_checker tur
}
}
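With either version it's worth checking by hand that the getuid_callout really returns the WWID you expect, and then reloading multipathd after editing multipath.conf. Roughly (sdb is just an example path device):
# /lib/udev/scsi_id --whitelisted --device=/dev/sdb     (RHEL 6 syntax)
# /sbin/scsi_id -g -u -s /block/sdb                     (RHEL 5 syntax)
# service multipathd reload
# multipath -r
# multipath -ll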
I define unique multipath names based on the last 4 characters of the wwid found above, in a multipaths section in multipath.conf.
There will be an existing "#multipaths {" line followed by other commented-out lines ending in "#}".
Leave that as is, and insert before the next section (which starts with a commented-out "#devices {") a new multipaths section with the hldev naming SIMILAR (i.e. NOT exactly the same) to the following:
===============================================================
# Added Hitachi VSP array LDEV devices persistent naming.
# Aliases defined will be created in /dev/mpath & /dev/mapper (RHEL5) or just /dev/mapper (RHEL6).
multipaths {
    # Aliases for Hitachi LDEVs used in VGs
    # Note: No requirement to be in a VG - that is just usual here
    #
    # DEFINITION for, say, VGTEST
    multipath {
        wwid  360060e8006d142000000d14200003040
        alias hldev3040
    }
    # End of definition for VGTEST
}
# End of multipaths
===============================================================
In the above, the "multipath {" block (wwid, alias, terminated by "}") should be created for each LDEV (LUN) that is being presented from the Hitachi VSP disk array.
The last characters of the wwid and the alias are the CU:LDEV numbers from the LDEV ID on the Hitachi, so the EXAMPLE device wwid and alias above ending in 3040 are for the LDEV with ID 00:30:40 on the Hitachi.
The actual numbers used for those 4 positions would NOT be 3040, but rather the ones from the LDEVs being presented to YOUR server by YOUR array.
In multipath -l -v2 output I'd then see, similar to what others have written, a map named "hldev3040" with component disks such as /dev/sdi and /dev/sdr, which on the next boot might be /dev/sda and /dev/sdk, but the /dev/mapper device would be hldev3040 each time. I therefore use /dev/mapper/hldev3040 when referring to the disk and ignore the component /dev/sd* names.
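Once the alias shows up, everything else just refers to the /dev/mapper name, e.g. (hldev3040 being the example alias above):
# multipath -ll hldev3040
# pvcreate /dev/mapper/hldev3040
# vgcreate VGTEST /dev/mapper/hldev3040
and the /dev/sd* component names are free to move around on every boot.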
As a side note:
If you're using LVM, it scans disks on boot and will find them regardless of their names, because it reads the header off the disks themselves.
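You can see that with something like:
# pvs -o pv_name,pv_uuid,vg_name
The PV is identified by the UUID in its on-disk label, so the vg_name stays attached to the right disk even when the pv_name column shows a different /dev/sd* device after a reboot.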
When Oracle abruptly dropped HP-UX support, we had to move our Oracle servers off HP-UX on HP Itanium, so we used VMware vSphere ESXi to host Red Hat Enterprise Linux 6.4 (in Dec 2012/Jan 2013).
For the ESXi host and VMs we had to come up with best practices (NUMA changes, huge pages/large pages, reserving all RAM for Production or all of the SGA for non-Production VMs, etc.).
For I/O we used 4 paravirtual SCSI controllers to spread out the I/O, since RHEL kernels behave differently if the kernel thinks there are a lot of ways to buffer/spread out I/O.
Imagine our surprise when the golden-image VM was all ready, but after a reboot the LUNs -> datastores -> Red Hat disk partitions /dev/sd[a-p] (data warehouse VM) or /dev/sd[a-l] (OLTP VM) would bounce around. Fortunately we caught this before the DBAs made any changes or reformatted the attached disks.
What we discovered (always use lsscsi and inspect the boot messages): on a given controller all disks started in the desired sequence, but at boot time the SCSI controllers (pvSCSI, actually) would get handled randomly, in whatever order the controller was seen by the kernel.
Here's what we saw before we did the reboot testing..
[root@myOLTPVM .audit]# lsscsi
[1:0:0:0]  cd/dvd  NECVMWar VMware IDE CDR10  1.00  /dev/sr0
[2:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sda
[2:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdb
[2:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdc
[3:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sdd
[4:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sde
[4:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdf
[4:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdg
[4:0:3:0]  disk    VMware   Virtual disk      1.0   /dev/sdh
[5:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sdi
[5:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdj
[5:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdk
[5:0:3:0]  disk    VMware   Virtual disk      1.0   /dev/sdl
[root@myOLTPVM .audit]#
Unfortunately, apart from 1:0:0:0, the order was random after each reboot.
--> So we wanted (for a non-production OLTP VM):
3 drives on controller "2" (Red Hat / Oracle / swap file systems)
1 drive on controller "3" (FRA)
4 drives on controller "4" (DATA LUNs)
4 drives on controller "5" (REDO LUNs)
Note: the 9 drives on "3", "4" and "5" were presented to Oracle ASM, which was the goal of this effort.
My original strategy was to map the /dev/sd? to
/dev/asm-OLTP-FRA-1,
/dev/asm-OLTP-DATA-[1 to 4], and
/dev/asm-OLTP-REDO-[1 to 4]
But as another writer said, you need to use KERNEL=="sd*1", BUS=="scsi" to make this work.
That meant I needed a scsi_id command to get the identifier, and I needed to use the RESULT field in my udev rules.d file so the device always gets my desired name no matter what drive letter it was given:
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c298fe60266f4d0c62f576aad2ae", NAME="asm-OLTP-DATA-QA-1", OWNER="oracle", GROUP="dba", MODE="0660"
================================================================
It's easy to explain it now, but back then it meant digging through logs across multiple reboots, etc.
I wanted to see more than just the drive size -- I wanted to see the real SCSI ID presented by the RH kernel (from the VM, from ESXi, from the LUN on the SAN), since I needed a unique identifier and couldn't rely on the /dev/sd* names.
Are you familiar with "scsi_id"?
SCSI_ID(8)              Linux Administrator's Manual              SCSI_ID(8)

NAME
       scsi_id - retrieve and generate a unique SCSI identifier

SYNOPSIS
       scsi_id [options]

DESCRIPTION
       scsi_id queries a SCSI device via the SCSI INQUIRY vital product data
       (VPD) page 0x80 or 0x83 and uses the resulting data to generate a
       value that is unique across all SCSI devices that properly support
       page 0x80 or page 0x83.

       If a result is generated it is sent to standard output, and the
       program exits with a zero value. If no identifier is output, the
       program exits with a non-zero value.

       scsi_id is primarily for use by other utilities such as udev that
       require a unique SCSI identifier.
....
So with /etc/udev/rules.d and the right tools, we got our answer to lock in the /dev/asm-... names no matter what order the SCSI controllers were discovered in at boot time:
- lsscsi
- fdisk -l, and
- the final piece we needed: scsi_id, used in the following script (note this is one single line that I pasted into my SSH window):
for dsk in `ls -l /dev/sd[b-z]* | awk '{print $10}'`; do sid=`scsi_id -gud $dsk`; size=`fdisk -l $dsk | grep 'Disk /dev/sd' | awk '{print $3}'`; echo $dsk $sid $size; done
In action:
[root@myOLTPVM .audit]# for dsk in `ls -l /dev/sd[b-z]* | awk '{print $10}'`; do sid=`scsi_id -gud $dsk`; size=`fdisk -l $dsk | grep 'Disk /dev/sd' | awk '{print $3}'`; echo $dsk $sid $size; done
/dev/sdb 36000c2944d5e7aae91b64a91b7df384f 34.4
/dev/sdb1 34.4
/dev/sdc 36000c294bcadaeeb72ebd84e45eb1355 62.3
/dev/sdc1 62.3
/dev/sdd 36000c2901df4fa1d15634529bb4496c8 750.3
/dev/sde 36000c29930c36f60da64dc810c7db556 20.2
/dev/sdf 36000c29d9d76f9e4f558c9cb6d1ab7b1 20.2
/dev/sdg 36000c2959cbf38d76b4d2af06bdb163e 20.2
/dev/sdh 36000c2932cf9ca5c0bd8d56d67ef09a2 20.2
/dev/sdi 36000c298fe60266f4d0c62f576aad2ae 73.8
/dev/sdj 36000c291d59748b7482ddc0b2804f254 73.8
/dev/sdk 36000c29c21c334dad03386c42486b403 73.8
/dev/sdl 36000c290ad2b5ff35380c481831f71ad 73.8
[root@myOLTPVM .audit]#
[root@myOLTPVM .audit]# lsscsi
[1:0:0:0]  cd/dvd  NECVMWar VMware IDE CDR10  1.00  /dev/sr0
[2:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sda
[2:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdb
[2:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdc
[3:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sdd
[4:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sde
[4:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdf
[4:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdg
[4:0:3:0]  disk    VMware   Virtual disk      1.0   /dev/sdh
[5:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sdi
[5:0:1:0]  disk    VMware   Virtual disk      1.0   /dev/sdj
[5:0:2:0]  disk    VMware   Virtual disk      1.0   /dev/sdk
[5:0:3:0]  disk    VMware   Virtual disk      1.0   /dev/sdl
[root@myOLTPVM .audit]#
ACTUAL FILE STORED IN /etc/udev/rules.d/99-oracle-asmdevices.rules:
[root@myOLTPVM /etc/udev/rules.d]# more 99-oracle-asmdevices.rules
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c298fe60266f4d0c62f576aad2ae", NAME="asm-OLTP-DATA-QA-1", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c291d59748b7482ddc0b2804f254", NAME="asm-OLTP-DATA-QA-2", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c29c21c334dad03386c42486b403", NAME="asm-OLTP-DATA-QA-3", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c290ad2b5ff35380c481831f71ad", NAME="asm-OLTP-DATA-QA-4", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c2901df4fa1d15634529bb4496c8", NAME="asm-OLTP-FRA-QA-1", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c29930c36f60da64dc810c7db556", NAME="asm-OLTP-REDO-QA-1", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c29d9d76f9e4f558c9cb6d1ab7b1", NAME="asm-OLTP-REDO-QA-2", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c2959cbf38d76b4d2af06bdb163e", NAME="asm-OLTP-REDO-QA-3", OWNER="oracle", GROUP="dba", MODE="0660"
#
KERNEL=="sd*1", BUS=="scsi", PROGRAM="/sbin/scsi_id -g -u -d /dev/%P", RESULT=="36000c2932cf9ca5c0bd8d56d67ef09a2", NAME="asm-OLTP-REDO-QA-4", OWNER="oracle", GROUP="dba", MODE="0660"
[root@myOLTPVM /etc/udev/rules.d]#
Of course, if you are in the udev modification game, you have to run the following to see the changes:
udevadm control --reload-rules
start_udev
I hope that helps...
By the way, I ran the command that was mentioned above: "ls -l /dev/disk/by-path" today to see what the results were:
[root@myOLTPVM .audit]# ls -l /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root  9 Mar 24 14:32 pci-0000:00:07.1-scsi-1:0:0:0 -> ../../sr0
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:03:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Mar 24 14:32 pci-0000:03:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Mar 24 14:32 pci-0000:03:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root  9 Mar 24 14:32 pci-0000:03:00.0-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Mar 24 14:32 pci-0000:03:00.0-scsi-0:0:1:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:03:00.0-scsi-0:0:2:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Mar 24 14:32 pci-0000:03:00.0-scsi-0:0:2:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:04:00.0-scsi-0:0:0:0 -> ../../sdd
lrwxrwxrwx 1 root root 23 Jul 18 15:45 pci-0000:04:00.0-scsi-0:0:0:0-part1 -> ../../asm-OLTP-FRA-QA-1
lrwxrwxrwx 1 root root  9 Mar 24 14:32 pci-0000:0c:00.0-scsi-0:0:0:0 -> ../../sde
lrwxrwxrwx 1 root root 24 Jul 18 15:45 pci-0000:0c:00.0-scsi-0:0:0:0-part1 -> ../../asm-OLTP-REDO-QA-1
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:0c:00.0-scsi-0:0:1:0 -> ../../sdf
lrwxrwxrwx 1 root root 24 Jul 18 15:45 pci-0000:0c:00.0-scsi-0:0:1:0-part1 -> ../../asm-OLTP-REDO-QA-2
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:0c:00.0-scsi-0:0:2:0 -> ../../sdg
lrwxrwxrwx 1 root root 24 Jul 18 15:45 pci-0000:0c:00.0-scsi-0:0:2:0-part1 -> ../../asm-OLTP-REDO-QA-3
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:0c:00.0-scsi-0:0:3:0 -> ../../sdh
lrwxrwxrwx 1 root root 24 Jul 18 15:45 pci-0000:0c:00.0-scsi-0:0:3:0-part1 -> ../../asm-OLTP-REDO-QA-4
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:1b:00.0-scsi-0:0:0:0 -> ../../sdi
lrwxrwxrwx 1 root root 24 Jul 18 15:58 pci-0000:1b:00.0-scsi-0:0:0:0-part1 -> ../../asm-OLTP-DATA-QA-1
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:1b:00.0-scsi-0:0:1:0 -> ../../sdj
lrwxrwxrwx 1 root root 24 Jul 18 15:58 pci-0000:1b:00.0-scsi-0:0:1:0-part1 -> ../../asm-OLTP-DATA-QA-2
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:1b:00.0-scsi-0:0:2:0 -> ../../sdk
lrwxrwxrwx 1 root root 24 Jul 18 15:58 pci-0000:1b:00.0-scsi-0:0:2:0-part1 -> ../../asm-OLTP-DATA-QA-3
lrwxrwxrwx 1 root root  9 Mar 24 14:19 pci-0000:1b:00.0-scsi-0:0:3:0 -> ../../sdl
lrwxrwxrwx 1 root root 24 Jul 18 15:58 pci-0000:1b:00.0-scsi-0:0:3:0-part1 -> ../../asm-OLTP-DATA-QA-4
[root@myOLTPVM .audit]#
==============================================================
And the final "comfy chair" command result of fdisk -l:
[root@myOLTPVM .audit]# fdisk -l
Disk /dev/sda: 64.4 GB, 64424509440 bytes
64 heads, 32 sectors/track, 61440 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00025d60

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           2         501      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2             502       61440    62401536   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/sdb: 34.4 GB, 34359738368 bytes
255 heads, 63 sectors/track, 4177 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0004093b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        4178    33553408   8e  Linux LVM

Disk /dev/sdc: 62.3 GB, 62277025792 bytes
255 heads, 63 sectors/track, 7571 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006727b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        7572    60816384   8e  Linux LVM
Disk /dev/sde: 20.2 GB, 20229295104 bytes
64 heads, 32 sectors/track, 19292 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaf7e536b

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       19292    19754944   83  Linux

Disk /dev/sdd: 750.3 GB, 750320049152 bytes
255 heads, 63 sectors/track, 91221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0eaf9745

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       91221   732732618+  83  Linux

Disk /dev/sdf: 20.2 GB, 20229295104 bytes
64 heads, 32 sectors/track, 19292 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf23f0d50

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1       19292    19754944   83  Linux

Disk /dev/sdh: 20.2 GB, 20229295104 bytes
64 heads, 32 sectors/track, 19292 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x69a63198

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1       19292    19754944   83  Linux
Disk /dev/sdi: 73.8 GB, 73766063104 bytes
255 heads, 63 sectors/track, 8968 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x34684ff9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1               1        8968    72035396   83  Linux

Disk /dev/sdj: 73.8 GB, 73766063104 bytes
255 heads, 63 sectors/track, 8968 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x895bae0e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1               1        8968    72035396   83  Linux

Disk /dev/sdg: 20.2 GB, 20229295104 bytes
64 heads, 32 sectors/track, 19292 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x74ae1a56

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1       19292    19754944   83  Linux

Disk /dev/sdk: 73.8 GB, 73766063104 bytes
255 heads, 63 sectors/track, 8968 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd50053e8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               1        8968    72035396   83  Linux

Disk /dev/sdl: 73.8 GB, 73766063104 bytes
255 heads, 63 sectors/track, 8968 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x525d412b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdl1               1        8968    72035396   83  Linux
Disk /dev/mapper/vg_swap-logvol_swap: 34.4 GB, 34355544064 bytes
255 heads, 63 sectors/track, 4176 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_root-logvol_root: 63.9 GB, 63896027136 bytes
255 heads, 63 sectors/track, 7768 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_app_oracle-logvol_app_oracle: 62.3 GB, 62272831488 bytes
255 heads, 63 sectors/track, 7570 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@myOLTPVM .audit]#
Good luck.
Note about the last post: there were a lot of steps to create our Oracle ASM disk-group devices that ultimately mapped to our LUNs, including the use of offsets during fdisk partition creation for performance, etc.
This was especially important to remember twenty-five months later, when we used our SAN's LUN Migration to expand the LUNs (to a new location, but with the same WWN etc.) presented to ESXi, and then the datastore, then the VM disk, and then the Red Hat kernel.
Hint: the sg3_utils rpm on Red Hat helps you know when the kernel has really seen the new, increased disk size before you run any of the commands, so that fdisk -l shows the correct size. That stumped us for a week.
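For a resized LUN, something along these lines makes the kernel pick up the new size so fdisk -l shows the right thing (sdX is a placeholder; rescan-scsi-bus.sh from the sg3_utils package is also handy for newly presented LUNs):
# echo 1 > /sys/block/sdX/device/rescan
# fdisk -l /dev/sdX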
All the best.
We had to do the udev setup for ASM in our Oracle RAC/GRID environment mainly because each of our RAC/GRID nodes only had a single path to the storage, so multipath wasn't really useful. Also, using udev let us ensure we saw the same /dev/asm* device names on both nodes.
If you have more than one path to the same storage, multipath would be the way to go.
Thinking back on it now, I guess we could have used multipath instead of udev for persistent naming, even though it would only show one active path for each device. (The downside being that if you looked at it later without knowing you only ever had one path, you'd think your other path had disappeared.)
P.S. It wasn't my choice to do the single path so don't tell me that isn't optimal. Apparently the powers that be assumed since we had redundant nodes each with a path to the array there was no need to have redundant paths on each of the nodes.