vg_owner check fails in lvm_by_vg.sh

    I have another cluster problem for you, this time on RHEL 5.5.

     

    I'm getting the following error:

     

    Oct  6 18:03:54 omzdbat13 clurgmgrd: [7691]: WARNING: vgshrapp02 should not be active
    Oct  6 18:03:54 omzdbat13 clurgmgrd: [7691]: WARNING: omzdbat13priv does not own vgshrapp02
    Oct  6 18:03:54 omzdbat13 clurgmgrd: [7691]: WARNING: Attempting shutdown of vgshrapp02
    Oct  6 18:03:54 omzdbat13 clurgmgrd[7691]: status on lvm "vgshrapp02" returned 1 (generic error)
    Oct  6 18:03:54 omzdbat13 clurgmgrd[7691]: Stopping service service:omzdbat13svc
     

    I discovered that this check is in lvm_by_vg.sh:

     

            vg_owner
            if [ $? -ne 1 ]; then
                    ocf_log err "WARNING: $OCF_RESKEY_vg_name should not be active"
                    ocf_log err "WARNING: $my_name does not own $OCF_RESKEY_vg_name"
                    ocf_log err "WARNING: Attempting shutdown of $OCF_RESKEY_vg_name"

     

    Here's the vg_owner routine:

     

    # vg_owner
    #
    # Returns:
    #    1 == We are the owner
    #    2 == We can claim it
    #    0 == Owned by someone else
    function vg_owner
    {
            local owner=`vgs -o tags --noheadings $OCF_RESKEY_vg_name`
            local my_name=$(local_node_name)

            if [ -z $my_name ]; then
                    ocf_log err "Unable to determine cluster node name"
                    return 0
            fi
     

            if [ -z $owner ]; then
                    # No-one owns this VG yet, so we can claim it
                    return 2
            fi

            if [ $owner != $my_name ]; then
                    if is_node_member_clustat $owner ; then
                            return 0
                    fi
                    return 2
            fi

            return 1
    }
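    The decision vg_owner makes can be exercised without a cluster by stubbing out `vgs`, `local_node_name` and `is_node_member_clustat`. This is only a sketch, not the shipped script: the stub names and the tag/node values (`MOCK_TAG`, `omzdbat14priv`) are made up for illustration, and the variables the original leaves unquoted are quoted here:

```shell
#!/bin/bash
# Sketch of the vg_owner decision with its dependencies mocked out so it
# runs without LVM or rgmanager. Tag and node names are hypothetical.

OCF_RESKEY_vg_name=vgshrapp02
MOCK_TAG="omzdbat13priv"    # hypothetical: tag left on the VG by the old owner
MOCK_NODE="omzdbat14priv"   # hypothetical: name of the node running the check

vgs() { echo "$MOCK_TAG"; }               # stand-in for: vgs -o tags --noheadings <vg>
local_node_name() { echo "$MOCK_NODE"; }
is_node_member_clustat() { return 0; }    # pretend the tag's owner is a live member

vg_owner() {
        local owner=$(vgs -o tags --noheadings "$OCF_RESKEY_vg_name")
        local my_name=$(local_node_name)

        [ -z "$my_name" ] && return 0
        [ -z "$owner" ] && return 2       # no tag yet: we can claim it
        if [ "$owner" != "$my_name" ]; then
                is_node_member_clustat "$owner" && return 0   # live owner elsewhere
                return 2                  # stale owner: we can claim it
        fi
        return 1                          # we are the owner
}

vg_owner
echo "rc=$?"    # prints rc=0: tag names another node that is still a member
```

    With the tag naming another node that clustat still counts as a member, the function returns 0, which is exactly the return code the status check treats as "should not be active".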
     

    I suspected it was failing the check on the tag field of the VG, so I added a tag like this:

     

    # vgchange --addtag omzdbat13priv vgshrapp02

     

    This allowed the service to start and stay up. But when I fail the service over to the other node in the cluster, that node reports the same failure.

     

    This script actually controls the value of the tags on the VG with functions like:

     

    function strip_tags

    function strip_and_add_tag

     

    So I suspect that these functions are not working correctly and are not applying the correct tag to the VG.
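    For comparison, the handoff these functions are expected to perform on a start is roughly "strip the old owner's tag, then add ours". The sketch below mocks `vgchange` so the sequence can be checked without LVM; the function bodies are my reconstruction for illustration, not the shipped lvm_by_vg.sh code:

```shell
#!/bin/bash
# Rough sketch of the tag handoff strip_tags/strip_and_add_tag are expected
# to perform, with vgchange mocked so it runs without LVM. This is a
# reconstruction for illustration, not the shipped script.

OCF_RESKEY_vg_name=vgshrapp02
VG_TAGS="omzdbat13priv"                  # hypothetical: tag left by the old owner

vgs_tags() { echo "$VG_TAGS"; }          # stand-in for: vgs -o tags --noheadings <vg>
local_node_name() { echo "omzdbat14priv"; }   # pretend we are the peer node

vgchange() {                             # mock: handles only --addtag / --deltag
        case "$1" in
                --addtag) VG_TAGS="$2" ;;
                --deltag) [ "$VG_TAGS" = "$2" ] && VG_TAGS="" ;;
        esac
}

strip_tags() {
        local tag
        for tag in $(vgs_tags); do
                vgchange --deltag "$tag" "$OCF_RESKEY_vg_name"
        done
}

strip_and_add_tag() {
        strip_tags
        vgchange --addtag "$(local_node_name)" "$OCF_RESKEY_vg_name"
}

strip_and_add_tag
echo "tags now: $(vgs_tags)"   # prints: tags now: omzdbat14priv
```

    If a real failover leaves the old node's tag in place instead of ending up like this, that would point at the stop path (or the strip step) never running against the VG.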

     

    Does anyone have any ideas about this one?

     

    Mark
