Adding a new hypervisor host to RHV-M is failing
Hi,
In my self-hosted engine setup, I am using two HP DL360 servers.
Here are the setup details:
On 1st server:
- I installed the RHV-4.1 package on the bare-metal server.
- Configured local NFS storage (ISO & data domains) on this machine.
- Ran the self-hosted engine setup script and configured RHV-M successfully.
- The Manager is up and running now.
- I bonded eth0 and eth1 as bond0.
- I bonded eth2 and eth3 as bond1.
- bond0 is mapped to the ovirtmgmt bridge.
- bond1 is mapped to ethbr0, a bridge I created.
Server is working fine.
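For reference, on RHEL 7 the bonding layout described above corresponds to ifcfg files roughly like the following (a sketch only; the bond mode shown is an assumption, and in RHV these files are normally generated by VDSM when you configure the bond via "Setup Host Networks" in the UI rather than edited by hand):
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative)
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=ovirtmgmt
ONBOOT=yes
Each slave interface (eth0, eth1) then carries MASTER=bond0 and SLAVE=yes in its own ifcfg file.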
On 2nd server:
- I just installed the RHV-4.1 package on the bare-metal server.
I tried to add my 2nd server using the RHV-M GUI, but it failed. I don't know what went wrong here.
I am attaching a screenshot from RHV-M here.

Please help me!!!
Thanks
Responses
Can the 2nd server see the NFS storage on the first server? From the 2nd server:
showmount -e <nfs-server>
mount <nfs-server:/mount/point> /mnt
umount /mnt
If you can't mount the NFS export, check on the 1st server:
- Is the client's IP address allowed? (/etc/exports)
- Are packet filters open? (iptables -nL)
For NFSv4, you should only need port 2049. For NFSv3, it's complicated...
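For example, the export on the 1st server might look like this in /etc/exports (the path is the one used later in this thread; the subnet is illustrative, so adjust it to your network):
/RHV_STORAGE/DATA_DOMAIN 10.53.197.0/24(rw,sync,no_root_squash)
After editing, re-export with exportfs -ra and verify with exportfs -v. Also note that RHV expects the exported directory to be owned by vdsm:kvm (uid/gid 36:36); otherwise the storage domain attach can fail with permission errors.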
Hi there,
It looks like iptables is not allowing NFS through. I would switch to NFSv4 and allow tcp/2049. On the host, update /etc/sysconfig/iptables:
-A INPUT -p tcp --dport 2049 -j ACCEPT
And reload:
systemctl reload iptables
Then, from the client, test using the IPv4 address:
mount 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt
It should default to NFSv4, but if it doesn't, you can force it like so:
mount -o vers=4 10.53.197.131:/RHV_STORAGE/DATA_DOMAIN /mnt
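Once the mount succeeds, you can confirm which NFS version was actually negotiated, e.g.:
nfsstat -m
The vers= field in the output shows whether v3 or v4 was used (nfsstat is in the nfs-utils package). Remember to umount /mnt afterwards; this is only a connectivity test, and the actual storage domain attachment is done through RHV-M.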
