Adding a new hypervisor server to RHEV-M is failing

Hi,
In my self-hosted engine setup, I am using two HP DL360 servers.

Here are the setup details:

   On the 1st server:
         - I installed the RHV 4.1 packages on a bare-metal server.
         - Configured local NFS storage (ISO & Data) on this machine.
         - Ran the self-hosted engine setup script and configured RHEV-M successfully.
         - The Manager is up and running now.
         - I bonded eth0 and eth1 into bond0.
         - I bonded eth2 and eth3 into bond1.
         - bond0 is mapped to the ovirtmgmt bridge.
         - bond1 is mapped to ethbr0, a bridge I created myself (see the sketch below for roughly how this is wired).
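
For context, the bond-to-bridge wiring looks roughly like this in the network-scripts files. This is a sketch from memory rather than a copy from the box, and the bonding mode is an assumption:

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 is the same apart from DEVICE)
  DEVICE=eth0
  ONBOOT=yes
  MASTER=bond0
  SLAVE=yes

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  ONBOOT=yes
  BONDING_OPTS="mode=active-backup miimon=100"  # mode=active-backup is an assumption
  BRIDGE=ovirtmgmt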

The server is working fine.

  On the 2nd server:
        - I just installed the RHV 4.1 packages on a bare-metal server.

I tried to add my 2nd server through the RHV-M GUI, but it failed. I don't know what went wrong here.

I am attaching a screenshot from RHV-M here.

Please help me!!!

Thanks

Responses

Can the 2nd server see the (NFS) storage on the first server? From the 2nd server:

showmount -e  <nfs-server>
mount <nfs-server:/mount/point> /mnt
umount /mnt

If you can't mount the NFS export, check on the 1st server:
- Is the client's IP address allowed? (/etc/exports) See the example export line below.
- Are the packet filters open? (iptables -nL)
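
For reference, an /etc/exports entry has the shape below; the options (rw, sync, no_root_squash) are just example options, not a recommendation, and the subnet stands in for your hypervisor network:

<export/path> <client-or-subnet>(rw,sync,no_root_squash)

After editing /etc/exports, apply the change with exportfs -ra.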

For NFSv4, you should only need TCP 2049 open. For NFSv3, it's more complicated: rpcbind, mountd, statd and lockd all get involved, and some of them sit on floating ports by default (a sketch of pinning them down follows).
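
If you need to stay on NFSv3, the usual approach on RHEL 7 is to pin the floating daemons to fixed ports in /etc/sysconfig/nfs and then open those ports plus rpcbind. A rough sketch; the port numbers below are the commented-out RHEL defaults, so treat them as assumptions:

# /etc/sysconfig/nfs -- pin mountd, statd and lockd to fixed ports
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769

# restart NFS, then open rpcbind, nfsd and the pinned ports
systemctl restart nfs-server
iptables -I INPUT -p tcp -m multiport --dports 111,2049,892,662,32803 -j ACCEPT
iptables -I INPUT -p udp -m multiport --dports 111,2049,892,662,32769 -j ACCEPT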

Thanks for your reply, Marcus. showmount -e does list the exports, but mount is failing. While installing the self-hosted engine, I went with the default option of NFSv3.

Here is my output:
  [root@rhvHost141 ~]# mount -t nfs 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt
  mount.nfs: Connection timed out
  [root@rhvHost141 ~]# mount 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt
  mount.nfs: Connection timed out
  [root@rhvHost141 ~]#
  [root@rhvHost141 ~]# showmount -e 10.53.197.131
  Export list for 10.53.197.131:
  /RHV_STORAGE/RHVM_DATA     *
  /RHV_STORAGE/EXPORT_DOMAIN *
  /RHV_STORAGE/ISO_DOMAIN    *
  /RHV_STORAGE/DATA_DOMAIN   *
  [root@rhvHost141 ~]#
  [root@rhvHost141 ~]# iptables -nL
  Chain INPUT (policy ACCEPT)
  target     prot opt source               destination
  ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
  ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
  ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:54321
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:54322
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:111
  ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:111
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:22
  ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:161
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:9090
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:16514
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 2223
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 5900:6923
  ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 49152:49216
  ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:6081
  REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

  Chain FORWARD (policy ACCEPT)
  target     prot opt source               destination
  REJECT     all  --  0.0.0.0/0            0.0.0.0/0            PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

  Chain OUTPUT (policy ACCEPT)
  target     prot opt source               destination
  ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:6081
  [root@rhvHost141 ~]#

Can you please guide me?

Thanks

Hi there,

It looks like iptables is not letting NFS through. I would switch to NFSv4 and allow tcp/2049. On the host, update /etc/sysconfig/iptables, making sure the new rule lands above the final REJECT line (rules match in order, so anything after the REJECT is never reached):

-A INPUT -p tcp --dport 2049 -j ACCEPT

And reload:

systemctl reload iptables
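
If you want to test before editing the file, you can also insert the rule live; -I puts it at the top of the chain, ahead of the final REJECT (note that a live rule like this is lost the next time the iptables service restarts):

iptables -I INPUT -p tcp --dport 2049 -j ACCEPT
iptables -nL | grep 2049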

Then, from the client, test with NFSv4:

mount 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt

It should default to NFSv4, but if it doesn't, you can force it like so:

mount -o vers=4 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt
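
Once it mounts, you can confirm which version was actually negotiated:

nfsstat -m

The vers= field in the output shows the NFS version in use for each mount.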