Cannot add ISO domain

Hi All,

After battling with this issue for over a week, I am finally giving up on troubleshooting it on my own and sharing it here in the hope of finding a solution:

Scenario:

1 x DNS server (10.10.3.219) (VM on top of ESXi)

1 x RHEV-M server (10.10.3.220) (VM on top of ESXi)

1 x RHEV-H server (10.10.3.225) (physical server)

  1. Install RHEL 7.3 for RHEV-M, subscribe, attach the relevant repos, run yum update, install RHEV-M, and configure it using engine-setup... all done

  2. Access the RHEV-M administration portal... done

  3. Install RHEV-H on the physical server... done

  4. Access the RHEV-M portal and add RHEV-H as a host in the default data center... done

  5. Create a data storage domain (LUNs mapped on RHEV-H)... done

  6. Attach the ISO domain already created in step 1 to the default data center... errors

a. The RHEV-M portal says "Error while executing action Add Storage Connection: Network error during communication with the Host."
b. If I try to mount the NFS-shared ISO domain directory /var/lib/exports/iso manually on RHEV-H, it says "mount.nfs: Connection timed out"
c. I can ping both the RHEV-M and RHEV-H servers by IP address and FQDN, and that works fine, so I am assuming it's not a communication issue

[root@rhvh1 ~]# ping 10.10.3.220
PING 10.10.3.220 (10.10.3.220) 56(84) bytes of data.
64 bytes from 10.10.3.220: icmp_seq=1 ttl=64 time=0.334 ms
64 bytes from 10.10.3.220: icmp_seq=2 ttl=64 time=0.347 ms
64 bytes from 10.10.3.220: icmp_seq=3 ttl=64 time=0.271 ms
^C
--- 10.10.3.220 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.271/0.317/0.347/0.036 ms
[root@rhvh1 ~]#


[root@rhvh1 ~]# ping rhvm.mydomain.local
PING rhvm.mydomain.local (10.10.3.220) 56(84) bytes of data.
64 bytes from 10.10.3.220 (10.10.3.220): icmp_seq=1 ttl=64 time=0.342 ms
64 bytes from 10.10.3.220 (10.10.3.220): icmp_seq=2 ttl=64 time=0.266 ms
64 bytes from 10.10.3.220 (10.10.3.220): icmp_seq=3 ttl=64 time=0.215 ms
64 bytes from 10.10.3.220 (10.10.3.220): icmp_seq=4 ttl=64 time=0.230 ms
^C
--- rhvm.mydomain.local ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7007ms
rtt min/avg/max/mdev = 0.215/0.263/0.342/0.050 ms
[root@rhvh1 ~]#


[root@rhvm ~]# ping 10.10.3.225
PING 10.10.3.225 (10.10.3.225) 56(84) bytes of data.
64 bytes from 10.10.3.225: icmp_seq=1 ttl=64 time=0.354 ms
64 bytes from 10.10.3.225: icmp_seq=2 ttl=64 time=0.321 ms
64 bytes from 10.10.3.225: icmp_seq=3 ttl=64 time=0.305 ms
64 bytes from 10.10.3.225: icmp_seq=4 ttl=64 time=0.279 ms
^C
--- 10.10.3.225 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.279/0.314/0.354/0.034 ms
[root@rhvm ~]#
[root@rhvm ~]#


[root@rhvm ~]# ping rhvh1.mydomain.local
PING rhvh1.mydomain.local (10.10.3.225) 56(84) bytes of data.
64 bytes from 10.10.3.225 (10.10.3.225): icmp_seq=1 ttl=64 time=0.332 ms
64 bytes from 10.10.3.225 (10.10.3.225): icmp_seq=2 ttl=64 time=0.285 ms
64 bytes from 10.10.3.225 (10.10.3.225): icmp_seq=3 ttl=64 time=0.309 ms
64 bytes from 10.10.3.225 (10.10.3.225): icmp_seq=4 ttl=64 time=0.294 ms
64 bytes from 10.10.3.225 (10.10.3.225): icmp_seq=5 ttl=64 time=0.280 ms
^C
--- rhvh1.mydomain.local ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 8009ms
rtt min/avg/max/mdev = 0.280/0.300/0.332/0.018 ms
[root@rhvm ~]#
[root@rhvm ~]#
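Ping only exercises ICMP, so as an extra check here is a direct TCP probe of port 2049 (a sketch using bash's built-in /dev/tcp, so no extra tools are needed; the 5-second timeout is arbitrary):

# timeout 5 bash -c 'cat < /dev/null > /dev/tcp/10.10.3.220/2049' && echo open || echo closed/filtered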


[root@rhvh1 ~]#
[root@rhvh1 ~]# showmount -e 10.10.3.220
Export list for 10.10.3.220:
/home/iso 10.10.3.0/22
/var/lib/exports/iso (everyone)

[root@rhvh1 ~]#
[root@rhvh1 ~]#


[root@rhvh1 ~]#
[root@rhvh1 ~]# mount -t nfs 10.10.3.220:/var/lib/exports/iso /mnt
mount.nfs: Connection timed out

[root@rhvh1 ~]#
[root@rhvh1 ~]#
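To rule out something NFSv4-specific, I also plan to retry the mount with NFSv3 forced explicitly (same temporary mount point /mnt as above):

# mount -t nfs -o vers=3 10.10.3.220:/var/lib/exports/iso /mnt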


[root@rhvh1 ~]# cat /var/log/messages
Mar 7 14:55:12 rhvh1 kernel: NFS: nfs mount opts='vers=4,addr=10.10.3.220,clientaddr=10.10.3.225'
Mar 7 14:55:12 rhvh1 kernel: NFS: parsing nfs mount option 'vers=4'
Mar 7 14:55:12 rhvh1 kernel: NFS: parsing nfs mount option 'addr=10.10.3.220'
Mar 7 14:55:12 rhvh1 kernel: NFS: parsing nfs mount option 'clientaddr=10.10.3.225'
Mar 7 14:55:12 rhvh1 kernel: NFS: MNTPATH: '/var/lib/exports/iso'
Mar 7 14:55:12 rhvh1 kernel: --> nfs4_try_mount()
Mar 7 14:55:12 rhvh1 kernel: --> nfs4_create_server()
Mar 7 14:55:12 rhvh1 kernel: --> nfs4_init_server()
Mar 7 14:55:12 rhvh1 kernel: --> nfs4_set_client()
Mar 7 14:55:12 rhvh1 kernel: --> nfs_get_client(10.10.3.220,v4)
Mar 7 14:55:12 rhvh1 kernel: NFS: get client cookie (0xffff88204eb61400/0xffff88205dea7058)
Mar 7 14:55:12 rhvh1 kernel: nfs_create_rpc_client: cannot create RPC client. Error = -22
Mar 7 14:59:30 rhvh1 kernel: nfs_create_rpc_client: cannot create RPC client. Error = -110

Mar 7 14:59:30 rhvh1 kernel: --> nfs_put_client({1})
Mar 7 14:59:30 rhvh1 kernel: --> nfs_free_client(4)
Mar 7 14:59:30 rhvh1 kernel: NFS: releasing client cookie (0xffff88204eb61400/0xffff88205dea7058)
Mar 7 14:59:30 rhvh1 kernel: <-- nfs_free_client()
Mar 7 14:59:30 rhvh1 kernel: <-- nfs4_init_client() = xerror -110
Mar 7 14:59:30 rhvh1 kernel: <-- nfs4_set_client() = xerror -110
Mar 7 14:59:30 rhvh1 kernel: <-- nfs4_init_server() = -110
Mar 7 14:59:30 rhvh1 kernel: --> nfs_free_server()
Mar 7 14:59:30 rhvh1 kernel: <-- nfs_free_server()
Mar 7 14:59:30 rhvh1 kernel: <-- nfs4_create_server() = error -110
Mar 7 14:59:30 rhvh1 kernel: <-- nfs4_try_mount() = -110 [error]
Mar 7 15:00:01 rhvh1 systemd: Started Session 21 of user root.
Mar 7 15:00:01 rhvh1 systemd: Starting Session 21 of user root.
Mar 7 15:01:01 rhvh1 systemd: Started Session 22 of user root.
Mar 7 15:01:01 rhvh1 systemd: Starting Session 22 of user root.
Mar 7 15:03:04 rhvh1 kernel: perf: interrupt took too long (6323 > 6047), lowering kernel.perf_event_max_sample_rate to 31000
Mar 7 15:10:01 rhvh1 systemd: Started Session 23 of user root.
Mar 7 15:10:01 rhvh1 systemd: Starting Session 23 of user root.
[root@rhvh1 ~]#
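For reference, the kernel error numbers in the log above are standard errno values: -22 is EINVAL and -110 is ETIMEDOUT. If in doubt, they can be decoded with a one-liner such as:

# python -c 'import os; print(os.strerror(22)); print(os.strerror(110))'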

===========================================

[root@rhvm ~]# cat /var/log/messages
Mar 7 14:55:58 rhvm kernel: NFSD: laundromat service - starting
Mar 7 14:55:58 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 14:57:28 rhvm kernel: NFSD: laundromat service - starting
Mar 7 14:57:28 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 14:58:58 rhvm kernel: NFSD: laundromat service - starting
Mar 7 14:58:58 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:00:01 rhvm systemd: Started Session 34 of user root.
Mar 7 15:00:01 rhvm systemd: Starting Session 34 of user root.
Mar 7 15:00:28 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:00:28 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:01:01 rhvm systemd: Started Session 35 of user root.
Mar 7 15:01:01 rhvm systemd: Starting Session 35 of user root.
Mar 7 15:01:58 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:01:58 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:03:28 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:03:28 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:04:59 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:04:59 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:06:29 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:06:29 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:07:59 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:07:59 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:09:29 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:09:29 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:10:01 rhvm systemd: Started Session 36 of user root.
Mar 7 15:10:01 rhvm systemd: Starting Session 36 of user root.
Mar 7 15:10:59 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:10:59 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:12:29 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:12:29 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:13:59 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:13:59 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
Mar 7 15:15:29 rhvm kernel: NFSD: laundromat service - starting
Mar 7 15:15:29 rhvm kernel: NFSD: laundromat_main - sleeping for 90 seconds
[root@rhvm ~]#

Responses

According to your log, rhvh1 is attempting an NFSv4 mount, which normally means a TCP connection to port 2049. "showmount -e" might use older NFS protocol versions by default, which can mean UDP, so its success does not prove that the NFSv4 path works.
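To exercise the NFSv4/TCP path specifically, rather than what showmount tests, you could query the NFS service directly from rhvh1 (rpcinfo -t forces a TCP query of the given program and version):

# rpcinfo -t 10.10.3.220 nfs 4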

Is rhvm configured to accept NFS mount attempts through its software firewall? For example, if eth0 is the network interface used to communicate with rhvh1 and the system is otherwise well protected:

# nmcli c modify eth0 connection.zone trusted
or
# firewall-cmd --zone=trusted --add-interface=eth0
# firewall-cmd --zone=trusted --permanent --add-interface=eth0

Note that you'll need both firewall-cmd commands to make the change effective immediately and persistent, while nmcli does both by default.
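To see which zone actually applies to the interface and what that zone allows, something along these lines helps (the zone name here is just an example):

# firewall-cmd --get-active-zones
# firewall-cmd --zone=public --list-all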

Is the rpc.idmapd service (nfs-idmapd on RHEL 7) running on the NFS server (rhvm) host?

Exactly what is specified in the /etc/exports file on rhvm?

Older RHEV versions required the directory exported for use as an ISO domain to be accessible by user vdsm, group kvm (UID=36, GID=36). For NFSv4, correct user names matter, unlike with older NFS versions.
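If the ownership turns out to be wrong, it can be fixed along these lines (a sketch; adjust the path to match your export):

# chown -R 36:36 /var/lib/exports/iso
# chmod 0755 /var/lib/exports/iso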

Thanks for your reply.

Firewalls are disabled on both systems, and SELinux is set to disabled as well.

I verified that the rpc.idmapd service is running on the NFS server (rhvm) host.

Please see the output below for more in-depth info:

**[root@rhvm ~]# systemctl status firewalld**
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   **Active: inactive (dead)**
     Docs: man:firewalld(1)
[root@rhvm ~]#

[root@rhvh1 ~]#
**[root@rhvh1 ~]# systemctl status firewalld**
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   **Active: inactive (dead)**
     Docs: man:firewalld(1)
[root@rhvh1 ~]#

==========================================

[root@rhvm ~]#
**[root@rhvm ~]# systemctl status rpcidmapd**
● nfs-idmapd.service - NFSv4 ID-name mapping service
   Loaded: loaded (/usr/lib/systemd/system/nfs-idmapd.service; static; vendor preset: disabled)
   Active: active (running) since Tue 2017-03-07 22:34:40 PKT; 14min ago
  Process: 636 ExecStart=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
 Main PID: 647 (rpc.idmapd)
   CGroup: /system.slice/nfs-idmapd.service
           └─647 /usr/sbin/rpc.idmapd

Mar 07 22:34:40 rhvm.arcanainfo.local systemd[1]: Starting NFSv4 ID-name mapping service...
Mar 07 22:34:40 rhvm.arcanainfo.local systemd[1]: Started NFSv4 ID-name mapping service.
[root@rhvm ~]#
==================================

**[root@rhvm ~]# rpcinfo -p**
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp    662  status
    100005    1   udp    892  mountd
    100005    1   tcp    892  mountd
    100024    1   tcp    662  status
    100005    2   udp    892  mountd
    100005    2   tcp    892  mountd
    100005    3   udp    892  mountd
    100005    3   tcp    892  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  56356  nlockmgr
    100021    3   udp  56356  nlockmgr
    100021    4   udp  56356  nlockmgr
    100021    1   tcp  33347  nlockmgr
    100021    3   tcp  33347  nlockmgr
    100021    4   tcp  33347  nlockmgr
[root@rhvm ~]#
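For what it's worth, the same RPC view can also be queried remotely from the client, which additionally confirms that the portmapper on rhvm is reachable from rhvh1:

# rpcinfo -p 10.10.3.220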
==================================

[root@rhvm ~]#
**[root@rhvm ~]# cat /etc/services | grep 2049**
nfs             2049/tcp        nfsd shilp      # Network File System
nfs             2049/udp        nfsd shilp      # Network File System
nfs             2049/sctp       nfsd shilp      # Network File System
[root@rhvm ~]#
========================================

[root@rhvm ~]#
**[root@rhvm ~]# cat /etc/exports**


**/var/lib/exports/iso 10.10.3.0/22(rw,sync,no_root_squash)**

/home/iso 10.10.3.0/22(rw,sync,no_root_squash)

[root@rhvm ~]#
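After any change to /etc/exports, exportfs shows what is actually exported with the effective options (and "exportfs -ra" re-exports everything):

# exportfs -v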
=======================================

[root@rhvm ~]#
**[root@rhvm ~]# ls -la /var/lib/exports/iso/**
total 0
drwxr-xr-x. 3 vdsm kvm 50 Mar  6 12:23 .
drwxr-xr-x. 3 vdsm kvm 17 Mar  6 12:23 ..
**drwxr-xr-x. 4 vdsm kvm 34 Mar  6 12:23 b3f3a879-aeb6-432c-8f79-a1aa95229912**
[root@rhvm ~]#
===============================================
[root@rhvm ~]#
**[root@rhvm ~]# id vdsm**
**uid=36(vdsm) gid=36(kvm) groups=36(kvm)**
[root@rhvm ~]#
===============================================

There is a script, nfs-check.py, at https://github.com/oVirt/vdsm/tree/master/contrib. It's used to check whether the target is ready for an NFS connection from oVirt. Details are [here](http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/).

I've tried the above as well, but it fails without much useful detail:

[root@rhvh1 contrib]#
**[root@rhvh1 contrib]# python nfs-check.py 10.10.3.220:/var/lib/exports/iso**
Current hostname: rhvh1.mydomain.local - IP addr 10.10.3.225
Trying to /bin/mount -t nfs 10.10.3.220:/var/lib/exports/iso...
**Timeout, cannot mount the nfs! Please check the status of NFS service or/and the Firewall settings!**
[root@rhvh1 contrib]#
[root@rhvh1 contrib]#
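My next step is to capture traffic on rhvm while retrying the mount, to see whether the client's packets arrive at the server at all (the interface name eth0 is an assumption; substitute the real one):

# tcpdump -i eth0 -nn host 10.10.3.225 and port 2049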

Firewalld is clearly disabled, but there might still be old-style iptables rules installed. Please use this command to verify that no iptables rules are in effect:

# iptables -L -vn
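If anything unexpected shows up there, dumping the full rule set, including the nat table, gives the complete picture:

# iptables -S
# iptables -t nat -L -vn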