Unable to connect to Storage: "Cannot connect server to Storage"
I have reinstalled with RHEV 3 GA but am not having much luck adding storage to my host via the GUI (using the IE browser). I can't work out what the problem is. The export directory I have chosen either cannot be connected to:
"Operation Cancelled •Cannot connect server to Storage""
or, if I use the directory I gave during the setup process of rhevm (rhev-setup), the error I get is:
"Create Operation failed. Domain OS-folder already exists in the system"
I was advised to delete the contents of the initial directory (the one given during rhev-setup), but this has not helped. All the new test directories I have tried simply cannot be connected to. I am using NFS, and I have checked the NFS config in /etc/exports to ensure the directories are all there, with rw permissions and shared with all IPs (*). Also, I have chowned everything to 36:36 so that the user and group are vdsm:kvm.
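For reference, the kind of export entry and ownership I am talking about looks roughly like this (the directory path here is just an illustration, not my actual one):
# /etc/exports entry: read-write, shared with all IPs
/exports/iso    *(rw)
# re-read the exports and hand the directory to vdsm:kvm (UID:GID 36:36)
exportfs -r
chown -R 36:36 /exports/iso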
Can anyone suggest anything that perhaps I have not tried?
Responses
OK, glad it all worked for you. For the future, please try to make it clear which host has which role. Also, there are some differences when dealing with RHEV-H and RHEL hosts, so letting us know up front which one you are using will also help.
Thanks
Dan
I tried to solve the same problem the original poster describes here and got the same unlucky result:
"Operation Cancelled •Cannot connect server to Storage""
Also, I tried to capture traffic to the NFS server's IP on port 2049 using tcpdump, and there was absolute silence. Then I examined my rhevm.log while the new domain was being created; the log excerpt appears after the tcpdump sketch below.
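The capture I ran was roughly this (the interface name eth0 is an assumption; adjust for your setup):
# watch for any NFS traffic between the RHEV host and the NFS server
tcpdump -i eth0 host desktop71.sandbox and port 2049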
2012-03-20 15:21:17,851 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (http-0.0.0.0-8443-5) START, ValidateStorageServerConnectionVDSCommand(vdsId = c11e2aac-71bd-11e1-84fa-5254005e49df, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: desktop71.sandbox:/iso };]), log id: 195da499
2012-03-20 15:21:17,982 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (http-0.0.0.0-8443-5) FINISH, ValidateStorageServerConnectionVDSCommand, return: {00000000-0000-0000-0000-000000000000=453}, log id: 195da499
2012-03-20 15:21:18,060 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (http-0.0.0.0-8443-5) The connection with details desktop71.sandbox:/iso failed because of error code 453 and error message is: the specified path does not exist or cannot be reached.
verify the path is correct, and for remote storage,
check the connection to your storage
2012-03-20 15:21:18,063 WARN [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (http-0.0.0.0-8443-5) CanDoAction of action AddStorageServerConnection failed. Reasons:ACTION_TYPE_FAILED_STORAGE_CONNECTION
2012-03-20 15:21:18,311 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (http-0.0.0.0-8443-1) Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-03-20 15:21:18,348 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (http-0.0.0.0-8443-1) START, DisconnectStorageServerVDSCommand(vdsId = c11e2aac-71bd-11e1-84fa-5254005e49df, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: desktop71.sandbox:/iso };]), log id: 7d521ae5
2012-03-20 15:21:20,778 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (http-0.0.0.0-8443-1) FINISH, DisconnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 7d521ae5
Then I mounted it manually, and the mount was successful.
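The manual test was along these lines (the mount point /mnt/test is just an example):
# mount the same export by hand from the RHEV host, then clean up
mkdir -p /mnt/test
mount -t nfs desktop71.sandbox:/iso /mnt/test
ls /mnt/test
umount /mnt/test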
Could anyone clarify what I missed, or offer another suggestion?
Thanks in advance for your attention.
I already set it up for another folder and re-exported it using "exportfs -r", and got the same:
2012-03-20 15:39:28,049 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (http-0.0.0.0-8443-1) The connection with details desktop71.sandbox:/exports/iso failed because of error code 453 and error message is: the specified path does not exist or cannot be reached.
verify the path is correct, and for remote storage,
check the connection to your storage
ls -lah /exports/iso/
total 8.0K
drwxr-xr-x. 2 36 kvm 4.0K Mar 20 12:45 .
drwxr-xr-x. 3 36 kvm 4.0K Mar 20 10:44 ..
-rw-r--r--. 1 36 kvm 0 Mar 20 12:45 rhel_asdfgasdfgbsrfgb.iso
What have I done wrong?
Let's do it from the start:
1. create a new directory somewhere, leave it empty
2. chown it to 36:36
3. add it to /etc/exports, as a simple share for now - *(rw) with no additions
4. make sure it can be mounted by a RHEV host
5. try to add it as an ISO domain
Do you get the same errors?
mkdir -p /tmp/iso
chown -R 36:36 /tmp/iso
cat /etc/exports
/tmp/iso *(rw)
## Execute from another host (using /mnt as the mount point)
mount desktop71.sandbox:/tmp/iso /mnt
umount /mnt
And the operation is cancelled again.
Then I checked 'rpcinfo -p' and did not find *nfs* listed there, despite having restarted it three minutes earlier. I ran "service nfs restart" again, and it then showed up in "rpcinfo". After these steps the ISO domain mounted successfully :)
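For anyone hitting the same thing, the check and the fix were roughly:
# verify the NFS service is actually registered with the portmapper
rpcinfo -p | grep nfs
# if nothing is listed, restart NFS and check again
service nfs restart
rpcinfo -p | grep nfs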
Thank you very much for your step-by-step hints :)
You're welcome :)
OK, at this point, we need to look a bit deeper. We would need to see what is going on in /var/log/vdsm/vdsm.log on the host that is used to set up the storage domain.
I suggest you ssh into the host before trying to add the domain and put a marker in the log, so that we don't need to parse too many log messages.
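One possible way to do that (the marker text is arbitrary):
# on the RHEV host, tag the log and then watch it while the domain is added
echo "==== adding ISO domain test $(date) ====" >> /var/log/vdsm/vdsm.log
tail -f /var/log/vdsm/vdsm.log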
I just ran into this same issue with a brand new RHEV 3.0 stack. I followed a lot of the directions in this post, but nothing helped. I do have another RHEV 3.0 stack that didn't have this problem, though, so I looked through its configs and stumbled onto the answer.
First I must note that I was able to connect to the NFS ISO domain on a separate box before performing the next step.
On the only host I have running, I went to /rhev/data-center/mnt/ and created a folder with the exact name of the NFS share I would be connecting. Then I chowned it 36:36 and chmodded it 777, both of which I determined to be the way my other working RHEV 3.0 stack was set up.
After this step I was able to connect the host through RHEVM as well as use the rhevm-iso-uploader tool.
I have not added another host to this setup yet, but when I do later this week we'll see if I encounter the same problem. Hope this helps.
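For illustration, assuming the export used earlier in this thread (desktop71.sandbox:/exports/iso), the steps were along these lines; VDSM typically names its mount points as server:path with the slashes in the path replaced by underscores, so check the exact name against a working setup:
# hypothetical example; match the directory name your working stack uses
mkdir /rhev/data-center/mnt/desktop71.sandbox:_exports_iso
chown 36:36 /rhev/data-center/mnt/desktop71.sandbox:_exports_iso
chmod 777 /rhev/data-center/mnt/desktop71.sandbox:_exports_iso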
When an NFS domain is used, RHEV first tries to mount it as the vdsm:kvm user and group (UID:GID 36:36), and then checks its contents and read/write permissions. If there is a UUID-like directory in there (meaning the mount is already in use by another RHEV storage domain), or the RHEV host cannot touch a file in the NFS mount, the procedure will fail.
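A rough manual approximation of those checks, reusing the example export from this thread (the mount point and test file name are arbitrary):
# mount as a test and confirm the vdsm user can read and write the export
mkdir -p /mnt/nfs-check
mount -t nfs desktop71.sandbox:/exports/iso /mnt/nfs-check
ls /mnt/nfs-check          # a UUID-named directory here means the export is already in use by a storage domain
sudo -u vdsm touch /mnt/nfs-check/write-test && sudo -u vdsm rm /mnt/nfs-check/write-test
umount /mnt/nfs-check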