Anyone else having trouble using iSCSI?


I seem to be unable to use iSCSI direct-attached LUNs with VMs unless I log onto the hypervisor and manually add the 'qemu' user to the 'disk' group. This is not permanent, so it sucks, but it is enough for testing.

This is the error message I get:


Thread-1773::DEBUG::2012-11-16 13:29:01,455::vm::607::vm.Vm::(_startUnderlyingVm) vmId=`8b982672-1ca8-488b-a112-dcd0a7f8bc15`::_ongoingCreations released
Thread-1773::ERROR::2012-11-16 13:29:01,458::vm::631::vm.Vm::(_startUnderlyingVm) vmId=`8b982672-1ca8-488b-a112-dcd0a7f8bc15`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 597, in _startUnderlyingVm
File "/usr/share/vdsm/libvirtvm.py", line 1416, in _run
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2490, in createXML
libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/dev/mapper/1IET_00010002,if=none,id=drive-virtio-disk1,format=raw,serial=,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /dev/mapper/1IET_00010002: Permission denied

Thread-1773::DEBUG::2012-11-16 13:29:01,460::vm::969::vm.Vm::(setDownStatus) vmId=`8b982672-1ca8-488b-a112-dcd0a7f8bc15`::Changed state to Down: internal error Process exited while reading console log output: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/dev/mapper/1IET_00010002,if=none,id=drive-virtio-disk1,format=raw,serial=,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /dev/mapper/1IET_00010002: Permission denied

Thread-1778::DEBUG::2012-11-16 13:29:02,993::BindingXMLRPC::894::vds::(wrapper) client [10.10.10.10]::call vmGetStats with ('8b982672-1ca8-488b-a112-dcd0a7f8bc15',) {}
Thread-1778::DEBUG::2012-11-16 13:29:02,993::BindingXMLRPC::900::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'hash': '0', 'exitMessage': 'internal error Process exited while reading console log output: char device redirected to /dev/pts/1\nqemu-kvm: -drive file=/dev/mapper/1IET_00010002,if=none,id=drive-virtio-disk1,format=raw,serial=,cache=none,werror=stop,rerror=stop,aio=native: could not open disk image /dev/mapper/1IET_00010002: Permission denied\n', 'vmId': '8b982672-1ca8-488b-a112-dcd0a7f8bc15', 'timeOffset': '-1', 'exitCode': 1}]}
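
For anyone else hitting this: the quickest way to confirm it is a plain permissions problem is to look at the device node named in the error, on the hypervisor itself. Something like this (the device name is the one from my log; by default the dm node is owned root:disk with mode 0660, which is exactly why adding qemu to the disk group works around it):

ls -lL /dev/mapper/1IET_00010002    # should show something like brw-rw---- root disk
id qemu                             # by default qemu is not in the disk group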


However, this seems to be a bug the size of Antarctica, so I'd like to file one. Which product do I file RHEV bugs against in Red Hat Bugzilla? And don't say 'RHEV', because that product doesn't show up in the 'Enter new bug' screen... :(

Responses

Can you describe the environment exactly? Using a direct LUN over iSCSI works for me all the time, so I'd like to understand what exactly may be different.

Dan,


I have one machine as RHEVM and two machines as hypervisor nodes, running the latest available image from the beta, a build from 20121107, IIRC. The machines are connected using a single 1 Gb Ethernet interface.


As this is a mere demo setup, I have tgtd running on the RHEVM machine, exporting a couple of iSCSI LUNs to the hypervisors.
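
For reference, the tgtd export is nothing special. The relevant part of /etc/tgt/targets.conf looks roughly like this (the IQN and backing-store path here are placeholders, not my literal values):

<target iqn.2012-11.com.example:demo.lun1>
    backing-store /dev/vg_demo/lun1
    initiator-address 10.10.10.0/24
</target>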


I can build a VM on NFS (exported from the RHEVM machine) and start it up normally. If I attach an iSCSI LUN to the VM and start it, I get the failures above. If I log onto the hypervisor and add the qemu user to the disk group (usermod -a -G disk qemu), I can successfully start the VM with the attached iSCSI LUN.


What more information do you need? I'll happily provide it :)


Also, I cannot attach the iSCSI LUN to a running VM; does that work for you?


Is it possible you have newer, Red Hat-internal builds than I have?

The problem is also mentioned on the oVirt mailing list, so I'm not alone ;)


http://lists.ovirt.org/pipermail/users/2012-June/002653.html

Running the currently available RHN code, I just exported a 50 GB LUN from a RHEL 6.3 machine with tgtd and created a direct LUN disk on this target. It works perfectly out of the box. Moreover, this worked with beta 1 and beta 2 as well, so there must be a configuration issue on your side, Maxim.
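
In case you want to compare setups: a minimal runtime export of such a LUN with tgtadm looks roughly like this (target name and backing device are placeholders, not the exact ones I used):

tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-11.com.example:rhel63.lun1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg0/lun50g
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL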


Please open a support case, gather a log collection, and submit it.

That is odd, since the hypervisor nodes are booted from the hypervisor image and thus cannot be configured. Also, if I can fix this by adding the qemu user to the disk group, doesn't that suggest a problem in the hypervisor image? Anyway, I hope this is fixed before GA. My customers would be somewhat disappointed if it isn't. :P
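
In the meantime, a slightly less ugly workaround than the group hack might be a udev rule on the hypervisor that pins the ownership of the device node. Untested sketch; the DM name is the one from my error log and the rule file name is my own invention (on RHEV-H it would presumably need extra steps to survive a reboot):

# /etc/udev/rules.d/99-qemu-directlun.rules (hypothetical)
SUBSYSTEM=="block", KERNEL=="dm-*", ENV{DM_NAME}=="1IET_00010002", OWNER="qemu", GROUP="qemu", MODE="0660"

Then run 'udevadm trigger' (or reboot) to apply it.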


I'm anxious to discuss this with the Red Hat guys, but sadly, I cannot open a support request since we use partner NFRs, and I cannot open RHEV bugs in RHBZ either: RHEV is not a product choice when filing new bugs...

I just tried it again with the recently released updates. No go. One last thing I can think of: are you using the hypervisor image (RHEVH) or a plain RHEL 6 machine to host the VMs on?

I'm seeing this problem as well with a customer...
