Questions on utilizing UUIDs and UDEV for RHEL VMs that live on SAN

I have some questions based on research here on the Red Hat Customer Portal. They stem from an issue with one of our SANs that caused a number of data disks to shuffle out of order on a RHEL 5.11 server that houses Oracle and Oracle ASM. We couldn't fix the issue because the UUIDs disappeared from the partitions on that SAN, and we had to fall back to backup tape.

I'll also state that I've had no real training in SAN technology, other than jumping in, shadowing those who have more experience and knowledge than me, and asking lots of questions. 99% of the folks I shadow only know Windows and have no clue about Linux, so I have to be the expert.

First, I don't believe I need to do anything special on the RHEL VM side for the OS to access data mapped from the SAN, correct? I also don't believe there is a need for multipathing, but how do I even know if it is enabled, or whether it's needed at all?
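From what I've read, something like the following should show whether multipathing is even installed or active on a host (RHEL 5/6-style commands; I'm assuming a standard install):

```
# Is the multipath package installed at all?
rpm -q device-mapper-multipath

# Is the daemon running?
service multipathd status

# List configured multipath devices; empty output generally
# means multipathing is not in use on this host
multipath -ll
```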

Second, since the RHEL VMs use SAN disk, it seems wiser to reference disks by UUID than by, say, /dev/sdX, correct? I suspect that is one reason the disks on that RHEL VM were scrambled: the /dev/sdX names no longer matched anything on the SAN. Right now I'm in the process of converting the entries in /etc/fstab to UUIDs, but I want to be certain about this.
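To convert an entry, I'm grabbing the UUID with blkid and swapping it into the fstab line, roughly like this (the device name and UUID below are made-up examples, not from the real server):

```shell
# On the real system, blkid reports the filesystem UUID, e.g.:
#   blkid /dev/sdb1
#   /dev/sdb1: UUID="3e6be9de-8139-4ad5-9106-a43f08d823a6" TYPE="ext3"
# Here we simulate that with a made-up UUID and rewrite the fstab line.
uuid="3e6be9de-8139-4ad5-9106-a43f08d823a6"
line="/dev/sdb1 /u01 ext3 defaults 0 0"

# Replace the unstable /dev/sdX name with the persistent UUID= reference
newline=$(echo "$line" | sed "s|^/dev/sdb1|UUID=$uuid|")
echo "$newline"
```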

And third, we have some RHEL servers that use Oracle ASM, and it looks like entries have to be made in udev. I'm reading through this previous thread, trying to make sense of it:

https://access.redhat.com/discussions/876913

Right now I have a RHEL 5.11 system we are trying to convert to RHEL 6 which uses Oracle ASM to manage disks that live on the SAN. I'm just trying to figure out the best way to set this up correctly. I'm sure I'll have more questions.

thanks

Responses

Hi Christopher,

Can you clarify which hypervisor the RHEL VM is running on? If you are using VMware (and udev) you may need to enable disk.EnableUUID so the guest can correctly read the UUIDs of its disks.
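For reference, the option goes into the VM's advanced configuration (the .vmx file, or via the vSphere client) while the VM is powered off; the exact spelling matters, so verify it against your vSphere documentation:

```
disk.EnableUUID = "TRUE"
```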

Are the LUNs mapped through directly to the VM or are they mapped to the host machine then presented to the VM?

If the disks are ASM disks, they should not be added to /etc/fstab; ASM disks aren't mounted through that facility, they are configured and mounted through ASM. What ASM does is basically 'mark' the header of the partition (or full disk) as ASM, then scan for disks that have this header and act on the result.
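With ASMLib, for example, that marking step is done with createdisk (the device name here is a hypothetical placeholder):

```
# Stamp an ASM header onto a partition so ASM can find it by label
/etc/init.d/oracleasm createdisk DATA0 /dev/sdb1

# Later, rescan and list whatever carries an ASM header
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
```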

There are a few core things that you need to confirm:
1. Are you using ASMlib or udev?
2. If you are using udev, are you able to locate any ASM specific rules for your configuration? (in /etc/udev/rules.d)
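As a sketch only: an ASM udev rule on RHEL 6 typically matches a persistent attribute of the disk and fixes its ownership and name. The serial number, symlink name, and user/group below are placeholders for your environment (RHEL 5 udev syntax and tooling differ, so check before reusing this there):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules  (hypothetical example)
# Match the partition by its SCSI serial and hand it to the oracle user.
# Query the real serial with: udevadm info --query=all --name=/dev/sdb1
KERNEL=="sd?1", ENV{ID_SERIAL}=="<your-disk-serial>", SYMLINK+="oracleasm/asm-data0", OWNER="oracle", GROUP="dba", MODE="0660"
```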
3. Do you have access to the oracleasm tool on the server for commands such as:

oracleasm listdisks
oracleasm scandisks

The hypervisor is VMware (managed through vCenter); we are running v6 in production and v5.5 in our development environment. From the raw device mapping thread I figured out how to set disk.EnableUUID for the RHEL servers.

As for the LUNs, we are not using RDM; the .vmdk files live on the LUNs and are presented to the VMs as virtual disks.

As for the ASM disks, they were mounted via /etc/fstab, but they were referenced as /dev/sd?? rather than by UUID, which I think is where I went wrong.

  1. I have the rpms for ASMlib installed, so I'm pretty sure I was not using udev.

  2. Not using udev at this time.

  3. Yes, I have access to oracleasm via /etc/init.d where I've run commands in the past like:

oracleasm start 
oracleasm restart
oracleasm querydisks
oracleasm status
oracleasm listdisks

Christopher,

If the disks are backed by standard VMDK files you don't need to concern yourself with multipathing etc. as this should be taken care of below the VM (ie. multipathing will be configured at the ESXi hypervisor level). You will however need to enable additional VM options if these disks are shared between nodes (ie. in a RAC cluster).
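For completeness, the shared-disk case is handled with VMX options along these lines; the controller/slot number is an example, and this only applies to clustered setups such as RAC, so check VMware's multi-writer documentation before using it:

```
disk.EnableUUID = "TRUE"
scsi1:0.sharing = "multi-writer"
```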

Can you show what the full mount line was for the ASM disks in /etc/fstab? I've never seen ASM disks mounted in fstab, so this is definitely new to me.

If the disks are marked as ASM (regardless of the device they are presented as) the following commands should find/show them:

oracleasm scandisks
oracleasm listdisks

Do these commands provide any results?

Can you also check the following to confirm the ASM kernel module is loaded:

lsmod | grep asm

We don't have a RAC cluster, just Oracle running on two RHEL servers. One is where backup files are placed for easy access.

[root@server init.d]# ./oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@server init.d]# ./oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@server init.d]# ./oracleasm listdisks
DATA0
DATA1
DATA2
DATA3
DATA4
DATA5
FRA1
FRA2
[root@server init.d]# lsmod | grep asm
oracleasm              83880  1
[root@server init.d]#

The weird thing is that this server holds the database, and I'm not sure now which disks oracleasm is referring to; here is the layout from df -haT. On the server we had the issue with, I know the SAN-backed storage was managed by oracleasm, but right now I'm uncertain.

[root@server init.d]# df -haT
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
              ext3     44G   20G   22G  48% /
proc          proc       0     0     0   -  /proc
sysfs        sysfs       0     0     0   -  /sys
devpts      devpts       0     0     0   -  /dev/pts
/dev/mapper/VolGroup00-LogVol02
              ext3    9.6G  452M  8.6G   5% /tmp
/dev/mapper/VolGroup00-LogVol04
              ext3    9.7G  718M  8.5G   8% /var
/dev/mapper/VolGroup00-LogVol05
              ext3    9.6G  170M  8.9G   2% /var/log/audit
/dev/mapper/VolGroup00-LogVol01
              ext3    9.6G  1.3G  7.8G  15% /home
/dev/mapper/VolGroup00-LogVol03
              ext3     15G  2.6G   11G  20% /usr
/dev/sda1     ext3    251M   43M  195M  19% /boot
tmpfs        tmpfs     12G  6.1G  5.7G  52% /dev/shm
none   binfmt_misc       0     0     0   -  /proc/sys/fs/binfmt_misc
/dev/mapper/VolGroup01-LogVol00
              ext3     98G   50G   43G  54% /orainstall
/dev/mapper/VolGroup02-LogVol00
              ext3     74G  180M   70G   1% /datapump
oracleasmfs
       oracleasmfs       0     0     0   -  /dev/oracleasm

I also want to be certain about this: it's better to use UUIDs than to point at devices under /dev and /dev/mapper, correct? Especially with all of this living on SAN storage.
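For what it's worth, I've been looking at the persistent names udev maintains, which survive a device-name reshuffle unlike /dev/sdX:

```
# Persistent, reshuffle-proof identifiers maintained by udev
ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/

# blkid prints the filesystem UUID to use in /etc/fstab
blkid /dev/sda1
```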

thanks for the help so far.

Disclaimer: I am not a DBA, and my only ASM systems are still on RHEL 5. But in my experience, you will not see the Oracle ASM-managed volumes at all in 'df' output - and you will not see regular Linux OS volumes with the Oracle ASM tools. They are two entirely separate worlds, and need to be managed separately.

On my ASM-using servers, I can see all partitions via 'cat /proc/partitions', then I have to look at both "pvdisplay" and "asmcmd lsdsk" to see which /dev/sdxx devices (or /dev/emcpowerxx devices, in our case) map to Linux LVM and which ones map to Oracle ASM control.
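In other words, the cross-check looks roughly like this (asmcmd must run as the Oracle grid/database owner with the ASM environment set):

```
# Every partition the kernel knows about
cat /proc/partitions

# Devices claimed by Linux LVM
pvdisplay

# Devices claimed by Oracle ASM (run as the oracle/grid user)
asmcmd lsdsk
```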

For monitoring used/free space, we need to use both 'df' and 'asmcmd lsdg'.

(on our new RHEL 7 systems, we have opted to ditch ASM entirely, and just use native LVM + xfs, since we are moving from a "large server / many DBs" model to "small VMs, one DB each". Thus, we will only need/use the Linux-native tools for basic disk management & monitoring).
