Unable to manage node

Latest response

Goal is to have a small Red Hat openstack setup using virtual machines as nodes.

The setup is an HP DL380 Gen9 running Ubuntu 18.04. The VMs are KVM, managed with virt-manager.

I currently have the undercloud installed on a VM running Red Hat Enterprise Linux Server 7.6. The undercloud was deployed using the default settings from the undercloud.conf file. The VM has two virtual interfaces, both on the "default" NAT network.

The node is a VM running Red Hat Enterprise Linux Server 7.6. It has two virtual network interfaces: one on the default NAT network, the other a host device (bond0: macvtap) with the IP address 172.16.0.37. I'm trying to manage the node via the dashboard, but the dashboard is unable to make an IPMI call.

(screenshot attached)

Responses

So it sounds like you're trying to put together a test environment on a single physical host (the HP DL380). You've already got the undercloud node configured. Now you need to create VMs for the other nodes. So first, create two additional VMs in virt-manager (both with blank disks) -- one node will be the Controller and the other will be the Compute.

NOTE: This is an unsupported configuration for testing purposes, and Red Hat does not provide any support for this type of environment. It's purely for learning.

Next, you need to control the power management of the virtual machines. For normal physical machines, you'd use IPMI or another vendor-specific power management interface. However, since you're using virtual machines, you need to replicate the IPMI functionality. Fortunately, there's a Python module called virtualbmc that you can install on the HP DL380:
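For example, assuming the two new VMs are named ctrl01 and compute01 in virt-manager (the names, ports, and credentials here are just illustrative), the setup might look something like this:

```shell
# Install virtualbmc on the physical host (package source may vary by distro)
sudo pip install virtualbmc

# Register each VM with its own IPMI listen port and credentials
vbmc add ctrl01 --port 6230 --username admin --password p@55w0rd
vbmc add compute01 --port 6231 --username admin --password p@55w0rd

# Start a virtual BMC for each VM and verify they're running
vbmc start ctrl01
vbmc start compute01
vbmc list
```

Each `vbmc` entry listens on its own UDP port, so one host IP address can front the power management for any number of VMs.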

Once you get this configured, you should be able to register each node in director using the IPMI address of the physical host (via the default network - 192.168.122.1, I'm guessing?) and the IPMI port you define for each VM in virtualbmc.
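As a sketch, the node registration file (e.g. instackenv.json) would then point each node's power management at the physical host's IP and the port you chose in virtualbmc -- all values below are placeholders, and the MAC address should be the one on the VM's provisioning NIC:

```json
{
  "nodes": [
    {
      "name": "ctrl01",
      "pm_type": "ipmi",
      "pm_addr": "192.168.122.1",
      "pm_port": "6230",
      "pm_user": "admin",
      "pm_password": "p@55w0rd",
      "mac": ["52:54:00:aa:bb:cc"]
    }
  ]
}
```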

A couple of other things to keep in mind:

  • If you used the default settings for the undercloud, you might need to set up another network for provisioning/ctlplane. The default provisioning/ctlplane network is 192.168.24.0/24. Make sure to specify the vnic the undercloud uses for this network using the local_interface param in the undercloud.conf file and update your config by re-running "openstack undercloud install".

  • Also make sure libvirt doesn't provide DHCP services on the provisioning/ctlplane network. The undercloud will control DHCP for this network for introspection and provisioning.
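One quick way to check whether libvirt is handing out DHCP on a given virtual network is to dump its XML and look for a `<dhcp>` element (the network name here is just an example):

```shell
# Show the network definition; a <dhcp> stanza means libvirt is serving DHCP
virsh net-dumpxml provisioning

# If a <dhcp> stanza is present, remove it interactively, then restart the network
virsh net-edit provisioning
virsh net-destroy provisioning && virsh net-start provisioning
```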

Hope this helps!

Thanks for the response. The network the physical device is on is a 172.16.0 network. I was initially setting up the undercloud using these settings:

[DEFAULT]
local_interface = ens8
local_ip = 172.16.0.34/24
undercloud_admin_host = 172.16.0.34
undercloud_public_host = 172.16.0.34

[ctlplane-subnet]
cidr = 172.16.0.0/24
dhcp_start = 172.16.0.37
dhcp_end = 172.16.0.49
inspection_iprange = 172.16.0.50,172.16.0.70
gateway = 172.16.0.2

but the install would either error out or hang. So I decided to try the default settings instead. Will this cause issues?

You can pretty much use any private IP range in the undercloud config, as long as you have a private network to accommodate it.

So the big problem is that you're using the macvtap interface. This will work for a public interface, but you'll need a private network specifically for the undercloud node to be able to provision and manage the other nodes (which is what the settings in the undercloud.conf are primarily for).

So first, you'll probably need to set up a new virtual network via virt-manager on the HP DL380 host. You should already have one virtual network ("default"), which I'm guessing is using 192.168.122.0/24 with DHCP enabled. So it's probably a good idea to create a new virtual network in virt-manager (let's call it "provisioning"). If need be, you can set the IP range to the undercloud default (192.168.24.0/24) or to any other unused address range (e.g. 192.168.200.0/24). The only other requirement is to make sure there is no DHCP on this virtual network -- the undercloud will manage DHCP.
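If you prefer the command line over the virt-manager wizard, the new network's libvirt XML (assuming the 192.168.200.0/24 example range and a free bridge name) might look like this -- note the deliberate absence of a `<dhcp>` element:

```xml
<!-- provisioning.xml: a NAT-less, DHCP-less virtual network for the ctlplane -->
<network>
  <name>provisioning</name>
  <bridge name='virbr2'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'/>
</network>
```

Save it as provisioning.xml and load it with `virsh net-define provisioning.xml`, then `virsh net-start provisioning` and `virsh net-autostart provisioning`.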

Next add a vNIC to the undercloud. Whatever interface this ends up being will be what you set local_interface to.

Finally, pick an unused IP address from the network that the macvtap interface is connected to. Set undercloud_public_host to this IP address. This will be your external interface.

Set all other params to IP addresses from the new "provisioning" virtual network.

So for example, let's say:

  • The provisioning network is 192.168.200.0/24 (And FYI, the physical host usually consumes the first IP address in this range)
  • The interface the undercloud uses to connect to the provisioning network is ens9
  • The external IP address is 172.16.0.34

The revised undercloud.conf params might look like this:

[DEFAULT]
local_interface = ens9
local_ip = 192.168.200.2/24 
undercloud_admin_host = 192.168.200.3
undercloud_public_host = 172.16.0.34

[ctlplane-subnet]
cidr = 192.168.200.0/24 
dhcp_start = 192.168.200.20
dhcp_end = 192.168.200.60
inspection_iprange = 192.168.200.200,192.168.200.240
gateway = 192.168.200.1

That should take care of the undercloud installation issues you faced.

Then the next step is to create two new VMs and configure their power management, which I mentioned in my previous post.

Note also that when you set up the power management, use either 192.168.200.1 or 192.168.122.1 to refer to the physical host, not an address reached via the macvtap interface. This is because macvtap doesn't let VM traffic pass through to the host.
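Once virtualbmc is running on the host, you can sanity-check the power management path from the undercloud VM with ipmitool before registering anything in director (the port and credentials are whatever you set in vbmc):

```shell
# Query power state of one VM through its virtual BMC on the physical host
ipmitool -I lanplus -H 192.168.122.1 -p 6230 -U admin -P p@55w0rd power status
```

If this reports the chassis power state, director's IPMI calls through the same address and port should work too.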

Hope this helps, but feel free to ask any further questions.