OpenStack 13 Floating IP not reachable & Cinder NetApp Integration


Hello folks,

We can't access the VM with the attached floating IP.
But we can ping/traceroute the router gateway from the project router.
The security group is set to allow everything (every protocol from everywhere) for both ingress and egress.

We use a separate VLAN for our floating IPs, and we configured it the same way as in our staging environment.

The router in the admin project (where the VM is located) has the router gateway with the IP 10.10.1.1, which is reachable via ping/traceroute from the director and other machines (such as the controller or compute nodes).

The VM gets the floating IP 10.10.1.130 (correctly associated), which is not pingable/traceroutable from anywhere.
The status of the floating IP is actually "Down" (for comparison, the status of the floating IPs in the staging environment is "Active").
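For reference, the floating IP status described above can also be checked from the CLI. This is a sketch, run from a shell with the overcloud credentials sourced; the floating IP identifier is a placeholder:

```shell
# List floating IPs and inspect the one that stays "Down".
# <fip-address-or-id> is a placeholder for 10.10.1.130's UUID or address.
if command -v openstack >/dev/null 2>&1; then
    openstack floating ip list || true
    # openstack floating ip show <fip-address-or-id> -c status -c port_id
else
    echo "(openstack CLI not available in this shell)"
fi
```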

It's important to know that our staging environment is running OSP12, while our production environment is running OSP13.

thanks
Br
Niko

Responses

Can the VM ping anything external to the router?

No, it was not possible. We have started a new deployment, adding this line to our deploy script: -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml

I should have also asked, can the VM ping the admin-project router? If so, and you can ping the admin-project router externally, it might be just a simple default routing problem on the VM's side. Might be a good idea to check the VM's routing table and see if the right default gateway (10.10.1.1) is assigned.
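The routing-table check suggested above could look like this inside the guest. A sketch; the gateway address and device name the guest actually uses are whatever Neutron handed out via DHCP:

```shell
# Inside the VM: the floating IP is NATed at the Neutron router, so the
# guest itself only needs a correct default route on its fixed network.
ip route show 2>/dev/null || echo "(run this inside the VM)"
# Expect a line like: default via <tenant-gateway> dev eth0
# If it is missing or wrong (device name is an assumption):
#   sudo ip route replace default via <tenant-gateway> dev eth0
```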

Yes, the VM can't ping it. We figured out that we must enable OVN, and that's where we are getting stuck. Our deployment script has these entries; we added the neutron OVN YAML and the deployment failed.

openstack overcloud deploy --templates \
-r /home/stack/templates/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/templates/neutron-ovs-dvr.yaml \
-e /home/stack/templates/network-environment.yaml \
-e /home/stack/templates/storage-environment-nfs.yaml \
-e /home/stack/templates/virt-who-service.yaml \
-e /home/stack/templates/webportal-name/cloud-zeus/enable-tls-zeus.yaml \
-e /home/stack/templates/webportal-name/cloud-zeus/cloudname-zeus.yaml \
-e /home/stack/templates/inject-trust-anchor.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml \
-e /home/stack/templates/extraconfig/pre_deploy/docker-certificate-resource-registry.yaml \
-e /home/stack/templates/ceilometer.yaml \
-e /home/stack/templates/overcloud_images.yaml \
-e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml \
-e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml \
-e /home/stack/templates/swift-environment.yaml \
-t 70 \
--libvirt-type kvm

When we start the deployment, we get this message:

Invalid default vxlan ("[u'vxlan']" is not an allowed value [geneve])

Checking this file: /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml

# A Heat environment that can be used to deploy OVN services with non HA OVN DB servers.
resource_registry:
  OS::TripleO::Docker::NeutronMl2PluginBase: ../../puppet/services/neutron-plugin-ml2-ovn.yaml
  OS::TripleO::Services::OVNController: ../../docker/services/ovn-controller.yaml
  OS::TripleO::Services::OVNDBs: ../../docker/services/pacemaker/ovn-dbs.yaml
  OS::TripleO::Services::OVNMetadataAgent: ../../docker/services/ovn-metadata.yaml
# Disabling Neutron services that overlap with OVN
  OS::TripleO::Services::NeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronOvsAgent: OS::Heat::None
  OS::TripleO::Services::NeutronL3Agent: OS::Heat::None
  OS::TripleO::Services::NeutronMetadataAgent: OS::Heat::None
  OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None
  OS::TripleO::Services::ComputeNeutronCorePlugin: OS::Heat::None

parameter_defaults:
  NeutronMechanismDrivers: ovn
  OVNVifType: ovs
  OVNNeutronSyncMode: log
  OVNQosDriver: ovn-qos
  OVNTunnelEncapType: geneve
  NeutronEnableDHCPAgent: false
  NeutronTypeDrivers: 'geneve,vlan,flat'
  NeutronNetworkType: 'geneve'
  NeutronServicePlugins: 'qos,ovn-router,trunk'
  NeutronVniRanges: ['1:65536', ]
  NeutronEnableDVR: true
  ControllerParameters:
    OVNCMSOptions: "enable-chassis-as-gw"

Should geneve be replaced with vxlan?

I'd stick with 'geneve' if using OVN. My guess though is there's a parameter set to 'vxlan' that should be 'geneve'. It might be in the two custom files that you've included:

/home/stack/templates/neutron-ovs-dvr.yaml (most likely this one)
/home/stack/templates/network-environment.yaml
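A quick way to hunt for the stray 'vxlan' setting in those files is a simple grep. A sketch, using the paths listed above:

```shell
# Search the custom environment files for any 'vxlan' occurrence; with the
# OVN mechanism driver, NeutronNetworkType/NeutronTunnelTypes must be geneve.
for f in /home/stack/templates/neutron-ovs-dvr.yaml \
         /home/stack/templates/network-environment.yaml; do
    if [ -f "$f" ]; then
        grep -Hn "vxlan" "$f" || true
    fi
done
```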

Daniel, I made the changes in network-environment.yaml and replaced vxlan with geneve. The deployment has completed successfully.

But now we are not able to add a router. In both Horizon and the CLI I get an unknown error, which is not really useful.

any idea?

thanks br Niko

What's the output of the error?

This is the output of the error. Poor:

(openstack) router create admin-router
NotFoundException: Unknown error

Not really useful.

We made the decision to redeploy from scratch, because after we added OVN we figured out that the service is listening on the wrong port. We don't know why; we used the default YAML files without any customization. We have deleted the stack. If the issue still exists, I will contact you again.

No prob. If it occurs again, it might be an idea to take a look at the neutron logs when you try to create the router.

Deployment is still running, but I get this error output. Any suggestions?

2018-09-14 09:07:41Z [Compute.2.NodeExtraConfig]: CREATE_FAILED  Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:41Z [Compute.2]: CREATE_FAILED  Resource CREATE failed: Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:42Z [Compute.2]: CREATE_FAILED  Error: resources[2].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:42Z [Compute]: UPDATE_FAILED  Resource CREATE failed: Error: resources[2].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:42Z [Compute]: CREATE_FAILED  resources.Compute: Resource CREATE failed: Error: resources[2].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:42Z [overcloud]: CREATE_FAILED  Resource CREATE failed: resources.Compute: Resource CREATE failed: Error: resources[2].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:42Z [overcloud.Compute.0.SshHostPubKey]: CREATE_COMPLETE  state changed
2018-09-14 09:07:57Z [overcloud.Compute.1.NodeExtraConfig]: CREATE_FAILED  Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:57Z [overcloud.Compute.1]: CREATE_FAILED  Resource CREATE failed: Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:57Z [overcloud.Compute.1]: CREATE_FAILED  Error: resources[1].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:57Z [overcloud.Compute]: UPDATE_FAILED  Resource CREATE failed: Error: resources[1].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:07:59Z [overcloud.Compute.1.SshHostPubKey]: CREATE_COMPLETE  state changed
2018-09-14 09:08:20Z [overcloud.Compute.3.SshHostPubKey]: CREATE_COMPLETE  state changed
2018-09-14 09:08:21Z [overcloud.Compute.3.NodeExtraConfig]: CREATE_FAILED  Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:08:21Z [overcloud.Compute.3]: CREATE_FAILED  Resource CREATE failed: Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:08:22Z [overcloud.Compute.3]: CREATE_FAILED  Error: resources[3].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:08:22Z [overcloud.Compute]: UPDATE_FAILED  Resource CREATE failed: Error: resources[3].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:09:51Z [overcloud.Controller.0.SshHostPubKey]: CREATE_COMPLETE  state changed
2018-09-14 09:09:51Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_COMPLETE  state changed
2018-09-14 09:09:51Z [overcloud.Controller.0]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-09-14 09:09:52Z [overcloud.Controller.0]: CREATE_COMPLETE  state changed
2018-09-14 09:10:01Z [overcloud.Controller.2.NodeExtraConfig]: CREATE_COMPLETE  state changed
2018-09-14 09:10:01Z [overcloud.Controller.2.SshHostPubKey]: CREATE_COMPLETE  state changed
2018-09-14 09:10:01Z [overcloud.Controller.2]: CREATE_COMPLETE  Stack CREATE completed successfully
2018-09-14 09:10:02Z [overcloud.Controller.2]: CREATE_COMPLETE  state changed
2018-09-14 09:11:20Z [overcloud.Compute.0.NodeExtraConfig]: CREATE_FAILED  Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:11:20Z [overcloud.Compute.0]: CREATE_FAILED  Resource CREATE failed: Error: resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:11:20Z [overcloud.Compute.0]: CREATE_FAILED  Error: resources[0].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
2018-09-14 09:11:20Z [overcloud.Compute]: UPDATE_FAILED  Resource CREATE failed: Error: resources[0].resources.NodeExtraConfig.resources.RHELRegistrationDeployment: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1

Looks like a registration error on the Compute node. If need be, check the /var/log/rhsm/rhsm.log file on the Compute node to see what happened.

Hi, I found the root cause of the issue. In roles_data.yaml it must be set that the compute nodes get the external interface. With OSP12 it worked without adding the interface, but for OSP13 it must be added.
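The change described above might look like the following fragment of roles_data.yaml. This is an illustrative sketch using the default TripleO network names, not the thread's actual file:

```yaml
# Sketch: with OVN + DVR on OSP13 the Compute role needs the External
# network so compute nodes can terminate floating-IP traffic locally.
- name: Compute
  description: Basic Compute Node role
  networks:
    - InternalApi
    - Tenant
    - Storage
    - External    # worked without this on OSP12; required here
```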

We added this line to our deployment script:

-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml \

The full script is now:

time openstack overcloud deploy --templates \
-r /home/stack/templates/roles_data.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-ovn-dvr-ha.yaml \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/templates/network-environment.yaml \
-e /home/stack/templates/storage-environment-nfs.yaml \
-e /home/stack/templates/cinder-netapp-config.yaml \
-e /home/stack/templates/virt-who-service.yaml \
-e /home/stack/templates/webportal-name/cloud-zeus/enable-tls-zeus.yaml \
-e /home/stack/templates/webportal-name/cloud-zeus/cloudname-zeus.yaml \
-e /home/stack/templates/inject-trust-anchor.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-dns.yaml \
-e /home/stack/templates/extraconfig/pre_deploy/docker-certificate-resource-registry.yaml \
-e /home/stack/templates/ceilometer.yaml \
-e /home/stack/templates/overcloud_images.yaml \
-e /home/stack/templates/rhel-registration/environment-rhel-registration.yaml \
-e /home/stack/templates/rhel-registration/rhel-registration-resource-registry.yaml \
-e /home/stack/templates/swift-environment.yaml \

The floating IP is reachable, and a router can also be added. The next step would be to implement Octavia.

Do you have any experience with OSP13 and Octavia?

Thanks br Niko

I don't have an extensive amount of experience with Octavia, but I can help find the answers you're looking for. Here are the Octavia docs to get you started.

Thanks Daniel. Before I start with Octavia, do you happen to know if there is a Glance NetApp integration? There is a Cinder NetApp YAML, but I could not find anything for Glance. Thanks.

Sure, so by default, glance uses swift as a backend (via GlanceBackend). To switch to NetApp NFS, the specific params you need to set are:

  • GlanceBackend - Set to file
  • GlanceNetappNfsEnabled - Set to true
  • NetappShareLocation - The share location on the Netapp appliance.
  • GlanceNfsOptions - The NFS options

This will create a NetApp NFS mount on any node with the glance-api composable service (i.e. the Controller nodes by default) at /var/lib/glance/images. Switching the backend to file will store the images as files on that mount.
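After deployment, a quick sanity check of the mount path described above could look like this. A sketch, intended to run on a controller node:

```shell
# On a node running glance-api (the controllers by default): confirm the
# NetApp share is mounted at glance's file-store path.
mount | grep /var/lib/glance/images || echo "(share not mounted on this host)"
```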

There's an example of the NFS config (including Netapp) in /usr/share/openstack-tripleo-heat-templates/environments/storage/glance-nfs.yaml.

You can also set GlanceBackend to cinder and use Cinder on Netapp to store glance images. But that might be a bit convoluted when you can just use Netapp directly.

Daniel, I have set GlanceNetappNfsEnabled: True. What should be set for NetappShareLocation: ''?

The IP, or the whole IP+share?

# title: Enable Glance NFS Backend
# description: |
#   Configure and include this environment to enable the use of an NFS
#   share as the backend for Glance.
parameter_defaults:
  # When using GlanceBackend 'file', Netapp mount NFS share for image storage.
  # Type: boolean
  GlanceNetappNfsEnabled: True

  # NFS mount options for image storage (when GlanceNfsEnabled is true)
  # Type: string
  GlanceNfsOptions: _netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0

  # NFS share to mount for image storage (when GlanceNfsEnabled is true)
  # Type: string
  GlanceNfsShare: '10.10.1.110:/osp_prod_nfs'

  # Netapp share to mount for image storage (when GlanceNetappNfsEnabled is true)
  # Type: string
  NetappShareLocation: ''

  # ******************************************************
  # Static parameters - these are values that must be
  # included in the environment but should not be changed.
  # ******************************************************
  # The short name of the Glance backend to use. Should be one of swift, rbd, cinder, or file
  # Type: string
  GlanceBackend: file

  # When using GlanceBackend 'file', mount NFS share for image storage.
  # Type: boolean
  GlanceNfsEnabled: True

  # *********************
  # End static parameters
  # *********************

As far as I know, yep, the whole IP and share. Basically the same as standard NFS shares.

Forget my answer. GlanceNetappNfsEnabled: True belongs with NetappShareLocation: 'x.x.x.x:/osp_prod_nfs_/glance', and GlanceNfsEnabled: False belongs with GlanceNfsOptions: _netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0 and GlanceNfsShare: ''.

The NFS share options are not needed: either NetApp or NFS, but not both.

I have removed the NFS share and have started an update deployment.

Did I miss something?

thanks Niko

So the output seems formatted incorrectly in the comment, but it formatted properly in my email notification. It looked fine to me and it seems you have the main params. Let me know if it works or not.

Unfortunately, it does not work with the main parameters.

Okay, that's problematic. What kind of error message did you get?

Hi Daniel, it doesn't work as expected. I have adapted the YAML (see below) and have just restarted a new deployment.

cinder-netapp-config.yaml

# *************************************************************************************
# DEPRECATED: Use tripleo-heat-templates/environments/storage/cinder-netapp-config.yaml
# instead.
# *************************************************************************************
# A Heat environment file which can be used to enable a
# a Cinder NetApp backend, configured via puppet
resource_registry:
  OS::TripleO::Services::CinderBackendNetApp: /home/stack/templates/puppet/services/cinder-backend-netapp.yaml

parameter_defaults:
  CinderEnableNetappBackend: true
  CinderNetappBackendName: 'tripleo_netapp'
  CinderNetappLogin: 'test'
  CinderNetappPassword: 'test12345'
  CinderNetappServerHostname: '10.10.10.224'
  CinderNetappServerPort: '80'
  CinderNetappSizeMultiplier: '1.2'
  CinderNetappStorageFamily: 'ontap_cluster'
  CinderNetappStorageProtocol: 'nfs'
  CinderNetappTransportType: 'http'
  CinderNetappVfiler: ''
  CinderNetappVolumeList: ''
  CinderNetappVserver: 'env0909'
  CinderNetappPartnerBackendName: ''
  CinderNetappNfsShares: '10.10.11.224:/test-cinder-vol/cinder'
  CinderNetappNfsSharesConfig: '/etc/cinder/shares.conf'
  CinderNetappNfsMountOptions: ''
  CinderNetappCopyOffloadToolPath: ''
  CinderNetappControllerIps: ''
  CinderNetappSaPassword: ''
  CinderNetappStoragePools: ''
  CinderNetappHostType: ''
  CinderNetappWebservicePath: '/devmgr/v2'
  CinderEnableIscsiBackend: false

glance-nfs.yaml

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Enable Glance NFS Backend
# description: |
#   Configure and include this environment to enable the use of an NFS
#   share as the backend for Glance.
parameter_defaults:
  # When using GlanceBackend 'file', Netapp mount NFS share for image storage.
  # Type: boolean
  GlanceNetappNfsEnabled: True

  # NFS mount options for image storage (when GlanceNfsEnabled is true)
  # Type: string
  GlanceNfsOptions: _netdev,bg,intr,context=system_u:object_r:glance_var_lib_t:s0

  # NFS share to mount for image storage (when GlanceNfsEnabled is true)
  # Type: string
  # GlanceNfsShare: ''

  # Netapp share to mount for image storage (when GlanceNetappNfsEnabled is true)
  # Type: string
  NetappShareLocation: '10.10.11.224:/test-cinder-vol2/glance'

  # ******************************************************
  # Static parameters - these are values that must be
  # included in the environment but should not be changed.
  # ******************************************************
  # The short name of the Glance backend to use. Should be one of swift, rbd, cinder, or file
  # Type: string
  GlanceBackend: cinder

  # When using GlanceBackend 'file', mount NFS share for image storage.
  # Type: boolean
  # GlanceNfsEnabled: True

  # *********************
  # End static parameters
  # *********************

So probably change this:

GlanceBackend: cinder

To this:

GlanceBackend: file

That way glance stores images directly on the NetApp mount instead of going through cinder.

Now, if the deployment fails, make a note of the stack that fails and post the error that appears. If need be, run the following command on the stack:

$ openstack stack failures list [stackname] --long

We'll try and identify what the issue is based on that.

As you can see, the stack is up, but I cannot upload images. I guess I need to change GlanceBackend as you described above.

The command openstack stack failures list overcloud --long does not work; I do not get any result.

(overcloud) [stack@hdhproddirector ~]$ cinder service-list
+------------------+--------------------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host                     | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+--------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | hdhprodctrl0             | nova | enabled | up    | 2018-09-24T13:11:11.000000 | -               |
| cinder-scheduler | hdhprodctrl1             | nova | enabled | up    | 2018-09-24T13:11:09.000000 | -               |
| cinder-scheduler | hdhprodctrl2             | nova | enabled | up    | 2018-09-24T13:11:09.000000 | -               |
| cinder-volume    | hostgroup@tripleo_netapp | nova | enabled | up    | 2018-09-24T13:11:11.000000 | -               |
+------------------+--------------------------+------+---------+-------+----------------------------+-----------------+
(openstack) service show f78824fc6612496b9f356779bffd8d7b
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image Service          |
| enabled     | True                             |
| id          | f78824fc6612496b9f356779bffd8d7b |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

Cool, we're making progress.

If it fails to upload an image with file, tail the Glance logs at /var/log/containers/glance/api.log and see what behaviour occurs.
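The log check above could be sketched like this; the path is from the reply, and the image file name in the retry command is a placeholder:

```shell
# On the controller running glance-api (containerized on OSP13): show the
# most recent API log lines while reproducing the failed upload.
log=/var/log/containers/glance/api.log
if [ -f "$log" ]; then
    tail -n 100 "$log"
else
    echo "(log not present on this host)"
fi
# Reproduce from a shell with overcloudrc sourced, e.g.:
#   openstack image create --disk-format qcow2 --container-format bare \
#       --file cirros.qcow2 test-image
```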

Also, the command openstack stack failures list overcloud --long only produces output if the overcloud deploy has failed. If your deployment completes without any stack failures, it won't show anything. It's more of a diagnostic tool for deployments that fail.

Hey,
Cool, we are really making progress. The deployment completed without issues; I can upload images and was also able to deploy an instance. It looks very good so far, so I guess we are done with it.

Now I want to implement a second Cinder backend. Can I create a new YAML with the other parameters, and can OpenStack implement this?

Next point: I want to implement "Configuration of Enhanced Instance Creation and Copy Offload with NetApp FAS for NFS", following this guide: http://netapp.github.io/openstack-deploy-ops-guide/ocata/content/glance.eic.configuration.html. Any experience with it?

I appreciate your support, Daniel.

Now I want to implement a second Cinder backend. Can I create a new YAML with the other parameters, and can OpenStack implement this?

I'm not sure if this is possible. I might have to check. I do know you used to be able to accomplish this using hieradata but am not sure if this still applies.
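For context on what "multiple backends" means at the cinder level: cinder itself takes an enabled_backends list in cinder.conf with one section per backend. The sketch below uses placeholder backend names, vservers, and addresses; on a TripleO deployment these values would be generated from heat parameters or hieradata rather than edited by hand:

```ini
# Sketch of a cinder.conf with two NetApp NFS backends (all values are
# placeholders; not taken from this environment).
[DEFAULT]
enabled_backends = netapp_nfs_1,netapp_nfs_2

[netapp_nfs_1]
volume_backend_name = netapp_nfs_1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = 10.10.10.224
netapp_vserver = vserver_a
nfs_shares_config = /etc/cinder/shares_1.conf

[netapp_nfs_2]
volume_backend_name = netapp_nfs_2
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = 10.10.10.225
netapp_vserver = vserver_b
nfs_shares_config = /etc/cinder/shares_2.conf
```

Each section's volume_backend_name can then be tied to a volume type (e.g. openstack volume type set --property volume_backend_name=netapp_nfs_1 some-type) so users can pick a backend at volume-create time.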

Next point: I want to implement "Configuration of Enhanced Instance Creation and Copy Offload with NetApp FAS for NFS", following this guide: http://netapp.github.io/openstack-deploy-ops-guide/ocata/content/glance.eic.configuration.html. Any experience with it?

Sadly, no, but I might be able to hunt down someone who has.

Hi Daniel,

I googled a bit and found some instructions for implementing multiple backends: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/pdf/netapp_back_end_guide_for_the_shared_file_system_service/Red_Hat_OpenStack_Platform-13-NetApp_Back_End_Guide_for_the_Shared_File_System_Service-en-US.pdf

I cannot imagine that it will be that simple with just a few lines. There must be a reference to the Cinder NetApp configuration above; from the lines in the documentation alone, I cannot see how the stack knows about the backends when only the cluster IP is provided.

Hi Daniel, multiple backends will not deploy yet, because we are facing other issues now: the floating IP can only be reached in the admin project, not in the other projects. Very strange behavior. We have OVN in place. Any idea? Br Niko

We also figured out that the first router we created works, but the second one does not. It makes no difference which project the router is in (admin or not). Very strange behavior.

Any major difference between the first router and the second?

Daniel, I have removed the projects, networks, etc. I created two new projects in exactly the same way and created instances in them. The first is reachable, the second is not. I am trying to troubleshoot; any assistance (troubleshooting mechanisms, etc.) is much appreciated. In case I forgot to mention it: we use OVN + DVR.
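Some starting points for comparing the two projects on the OVN side. A sketch of commonly used OVN commands, guarded so they no-op where the tools are absent; on OSP13 they may need to run inside the OVN containers on a controller:

```shell
# Compare what Neutron created with what landed in the OVN databases.
for cmd in "ovn-nbctl show" "ovn-sbctl show" "ovn-nbctl lr-list"; do
    echo "== $cmd =="
    $cmd 2>/dev/null || echo "(command unavailable on this host)"
done
# If the logical router for the broken project is missing from the
# northbound DB, or has no gateway chassis binding in the southbound DB,
# the Neutron/OVN sync is the place to dig.
```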
