Chapter 5. Managing Networking
5.1. Overview
This topic describes the management of the overall cluster network, including project isolation and outbound traffic control.
5.2. Managing Pod Networks
When your cluster is configured to use the ovs-multitenant SDN plugin, you can manage the separate pod overlay networks for projects using the administrator CLI.
5.2.1. Joining Project Networks
To join projects to an existing project network:
$ oc adm pod-network join-projects --to=<project1> <project2> <project3>
In the above example, all the pods and services in <project2> and <project3> can now access any pods and services in <project1> and vice versa. Services can be accessed either by IP or fully-qualified DNS name (<service>.<pod_namespace>.svc.cluster.local). For example, to access a service named db in a project myproject, use db.myproject.svc.cluster.local.
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
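For example, assuming the projects carry a hypothetical environment=test label, the following joins every matching project to <project1>:

$ oc adm pod-network join-projects --to=<project1> --selector='environment=test'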
To verify the networks you have joined together:
$ oc get netnamespaces
Then look at the NETID column. Projects in the same pod-network will have the same NetID.
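For illustration, the output might look similar to the following (the project names and NetID values here are hypothetical); myproject and yourproject share a NetID, so they are in the same pod network:

NAME          NETID
default       0
myproject     4530571
yourproject   4530571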
5.3. Isolating Project Networks
To isolate one or more project networks from the rest of the cluster, run:
$ oc adm pod-network isolate-projects <project1> <project2>
In the above example, all of the pods and services in <project1> and <project2> cannot access any pods and services from other non-global projects in the cluster, and vice versa.
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
5.3.1. Making Project Networks Global
To allow projects to access all pods and services in the cluster and vice versa:
$ oc adm pod-network make-projects-global <project1> <project2>
In the above example, all the pods and services in <project1> and <project2> can now access any pods and services in the cluster and vice versa.
Alternatively, instead of specifying specific project names, you can use the --selector=<project_selector> option.
5.4. Disabling Host Name Collision Prevention For Routes and Ingress Objects
In OpenShift Dedicated, host name collision prevention for routes and ingress objects is enabled by default. This means that users without the cluster-admin role can set the host name in a route or ingress object only on creation and cannot change it afterwards. However, you can relax this restriction on routes and ingress objects for some or all users.
Because OpenShift Dedicated uses the object creation timestamp to determine the oldest route or ingress object for a given host name, a route or ingress object can hijack a host name of a newer route if the older route changes its host name, or if an ingress object is introduced.
As an OpenShift Dedicated cluster administrator, you can edit the host name in a route even after creation. You can also create a role to allow specific users to do so:
$ oc create -f - <<EOF
apiVersion: v1
kind: ClusterRole
metadata:
  name: route-editor
rules:
- apiGroups:
  - route.openshift.io
  - ""
  resources:
  - routes/custom-host
  verbs:
  - update
EOF
You can then bind the new role to a user:
$ oc adm policy add-cluster-role-to-user route-editor user
You can also disable host name collision prevention for ingress objects. Doing so lets users without the cluster-admin role edit a host name for ingress objects after creation. This is useful for OpenShift Dedicated installations that depend upon Kubernetes behavior, including allowing the host names in ingress objects to be edited.
Add the following to the master-config.yaml file:

admissionConfig:
  pluginConfig:
    openshift.io/IngressAdmission:
      configuration:
        apiVersion: v1
        allowHostnameChanges: true
        kind: IngressAdmissionConfig
        location: ""

Restart the master services for the changes to take effect:
$ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
5.5. Controlling Egress Traffic
As a cluster administrator you can allocate a number of static IP addresses to a specific node at the host level. If an application developer needs a dedicated IP address for their application service, they can request one during the process they use to ask for firewall access. They can then deploy an egress router from the developer’s project, using a nodeSelector in the deployment configuration to ensure that the pod lands on the host with the pre-allocated static IP address.
The egress pod’s deployment declares one of the source IPs, the destination IP of the protected service, and a gateway IP to reach the destination. After the pod is deployed, you can create a service to access the egress router pod, then add that source IP to the corporate firewall. The developer then has the access information for the egress router service that was created in their project, for example, service.project.cluster.domainname.com.
When the developer needs to access the external, firewalled service, they can call out to the egress router pod’s service (service.project.cluster.domainname.com) in their application (for example, the JDBC connection information) rather than the actual protected service URL.
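For illustration, a JDBC connection string that targets the egress router service rather than the protected database host might look like the following (the service host name, port, and database name here are hypothetical):

jdbc:postgresql://service.project.cluster.domainname.com:5432/<database>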
You can also assign static IP addresses to projects, ensuring that all outgoing external connections from the specified project have recognizable origins. This is different from the default egress router, which is used to send traffic to specific destinations.
The egress router is not available for OpenShift Dedicated.
As an OpenShift Dedicated cluster administrator, you can control egress traffic in these ways:
- Firewall
- Using an egress firewall allows you to enforce acceptable outbound traffic policies, so that specific endpoints or IP ranges (subnets) are the only acceptable targets for the dynamic endpoints (pods within OpenShift Dedicated) to talk to.
5.5.1. Using an Egress Firewall to Limit Access to External Resources
As an OpenShift Dedicated cluster administrator, you can use egress firewall policy to limit the external addresses that some or all pods can access from within the cluster, so that:

- A pod can only talk to internal hosts, and cannot initiate connections to the public Internet.
- A pod can only talk to the public Internet, and cannot initiate connections to internal hosts (outside the cluster).
- A pod cannot reach specified internal subnets/hosts that it should have no reason to contact.
You can configure projects to have different egress policies. For example, you can allow <project A> access to a specified IP range while denying the same access to <project B>. Or you can restrict application developers from updating from (Python) pip mirrors, forcing updates to come only from approved sources.
Project administrators can neither create EgressNetworkPolicy objects, nor edit the ones you create in their project. There are also several other restrictions on where EgressNetworkPolicy can be created:

- The default project (and any other project that has been made global via oc adm pod-network make-projects-global) cannot have egress policy.
- If you merge two projects together (via oc adm pod-network join-projects), then you cannot use egress policy in any of the joined projects.
- No project may have more than one egress policy object.
Violating any of these restrictions results in broken egress policy for the project, and may cause all external network traffic to be dropped.
Use the oc command or the REST API to configure egress policy. You can use oc [create|replace|delete] to manipulate EgressNetworkPolicy objects. The api/swagger-spec/oapi-v1.json file has API-level details on how the objects actually work.
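For example, to list any existing EgressNetworkPolicy objects in the current project:

$ oc get egressnetworkpolicy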
To configure egress policy:
- Navigate to the project you want to affect.
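For example, switch to the project (where <project> is its name):

$ oc project <project>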
- Create a JSON file with the desired policy details. For example:

{
    "kind": "EgressNetworkPolicy",
    "apiVersion": "v1",
    "metadata": {
        "name": "default"
    },
    "spec": {
        "egress": [
            {
                "type": "Allow",
                "to": {
                    "cidrSelector": "1.2.3.0/24"
                }
            },
            {
                "type": "Allow",
                "to": {
                    "dnsName": "www.foo.com"
                }
            },
            {
                "type": "Deny",
                "to": {
                    "cidrSelector": "0.0.0.0/0"
                }
            }
        ]
    }
}

When the example above is added to a project, it allows traffic to IP range 1.2.3.0/24 and domain name www.foo.com, but denies access to all other external IP addresses. Traffic to other pods is not affected because the policy only applies to external traffic.

The rules in an EgressNetworkPolicy are checked in order, and the first one that matches takes effect. If the three rules in the above example were reversed, then traffic would not be allowed to 1.2.3.0/24 and www.foo.com because the 0.0.0.0/0 rule would be checked first, and it would match and deny all traffic.

Domain name updates are polled based on the TTL (time to live) value of the domain returned by the local non-authoritative servers. The pod should also resolve the domain from the same local nameservers when necessary; otherwise, the IP addresses for the domain perceived by the egress network policy controller and by the pod will differ, and the egress network policy may not be enforced as expected. Because the egress network policy controller and the pod poll the same local nameserver asynchronously, there is a race condition in which the pod may get the updated IP address before the egress controller does. Due to this current limitation, domain name usage in EgressNetworkPolicy is only recommended for domains with infrequent IP address changes.

- Use the JSON file to create an EgressNetworkPolicy object:
$ oc create -f <policy>.json
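If you later edit the JSON file, update the existing object with oc replace (the object name in the file must match the existing policy):

$ oc replace -f <policy>.json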
Exposing services by creating routes will ignore EgressNetworkPolicy. Egress network policy service endpoint filtering is done by kubeproxy on the node. When the router is involved, kubeproxy is bypassed and egress network policy enforcement is not applied. Administrators can prevent this bypass by limiting access to create routes.
5.6. Enabling Multicast
At this time, multicast is best used for low-bandwidth coordination or service discovery, not as a high-bandwidth solution.
Multicast traffic between OpenShift Dedicated pods is disabled by default. If you are using the ovs-multitenant or ovs-networkpolicy plugin, you can enable multicast on a per-project basis by setting an annotation on the project’s corresponding netnamespace object:
$ oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled=true

Disable multicast by removing the annotation:

$ oc annotate netnamespace <namespace> \
    netnamespace.network.openshift.io/multicast-enabled-

When using the ovs-multitenant plugin:
- In an isolated project, multicast packets sent by a pod will be delivered to all other pods in the project.
- If you have joined networks together, you will need to enable multicast in each project’s netnamespace in order for it to take effect in any of the projects. Multicast packets sent by a pod in a joined network will be delivered to all pods in all of the joined-together networks.
- To enable multicast in the default project, you must also enable it in the kube-service-catalog project and all other projects that have been made global. Global projects are not "global" for purposes of multicast; multicast packets sent by a pod in a global project will only be delivered to pods in other global projects, not to all pods in all projects. Likewise, pods in global projects will only receive multicast packets sent from pods in other global projects, not from all pods in all projects.
When using the ovs-networkpolicy plugin:

- Multicast packets sent by a pod will be delivered to all other pods in the project, regardless of NetworkPolicy objects. (Pods may be able to communicate over multicast even when they cannot communicate over unicast.)
- Multicast packets sent by a pod in one project will never be delivered to pods in any other project, even if there are NetworkPolicy objects allowing communication between the two projects.
5.7. Enabling NetworkPolicy
The ovs-subnet and ovs-multitenant plug-ins have their own legacy models of network isolation and do not support Kubernetes NetworkPolicy. However, NetworkPolicy support is available by using the ovs-networkpolicy plug-in.
Only the v1 NetworkPolicy features are available in OpenShift Dedicated. This means that egress policy types, IPBlock, and combining podSelector and namespaceSelector are not available in OpenShift Dedicated.
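As a minimal sketch of what the v1 feature set covers (the policy name and pod labels here are hypothetical), the following NetworkPolicy allows ingress to pods labeled role=db only from pods labeled role=frontend in the same project:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend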
5.8. Enabling HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) policy is a security enhancement that ensures only HTTPS traffic is allowed on the host. Any HTTP requests are dropped by default. This is useful for ensuring secure interactions with websites, or for offering a secure application for the user’s benefit.
When HSTS is enabled, it adds a Strict Transport Security header to HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. However, when HSTS is enabled, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Clients are not required to support this, and HSTS can be disabled by setting max-age=0.
HSTS works only with secure routes (either edge terminated or re-encrypt). The configuration is ineffective on HTTP or passthrough routes.
To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge terminated or re-encrypt route:
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload
Ensure there are no spaces and no other values in the parameters in the haproxy.router.openshift.io/hsts_header value. Only max-age is required.
The required max-age parameter indicates the length of time, in seconds, that the HSTS policy is in effect. The client updates max-age whenever a response with an HSTS header is received from the host. When max-age times out, the client discards the policy.
The optional includeSubDomains parameter tells the client that all subdomains of the host are to be treated the same as the host.
If max-age is greater than 0, the optional preload parameter allows external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites to only talk to over HTTPS, even before they have interacted with the site. Without preload set, they need to have talked to the site over HTTPS to get the header.
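As an alternative sketch (assuming an existing route named <route_name>), you can set the annotation directly instead of editing the route definition:

$ oc annotate route <route_name> \
    haproxy.router.openshift.io/hsts_header="max-age=31536000;includeSubDomains;preload"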
5.9. Troubleshooting Throughput Issues
Sometimes applications deployed through OpenShift Dedicated can cause network throughput issues such as unusually high latency between specific services.
Use the following methods to analyze performance issues if pod logs do not reveal any cause of the problem:
Use ping to check latency, and a packet analyzer, such as tcpdump, to analyze traffic between a pod and its node.
For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to/from a pod. Latency can occur in OpenShift Dedicated if a node interface is overloaded with traffic from other pods, storage devices, or the data plane.
$ tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> and host <podip 2>

where <podip> is the IP address for the pod. Run the following command to get the IP address of the pods:
# oc get pod <podname> -o wide
tcpdump generates a file at /tmp/dump.pcap containing all traffic between these two pods. Ideally, run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with:
# tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789
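Port 4789 is the UDP port used by the SDN’s VXLAN overlay, so this capture records the encapsulated pod traffic traveling between nodes. To inspect a saved capture afterward, read the file back with tcpdump:

$ tcpdump -r /tmp/dump.pcap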
- Use a bandwidth measuring tool, such as iperf, to measure streaming throughput and UDP throughput. Run the tool from the pods first, then from the nodes to attempt to locate any bottlenecks. The iperf3 tool is included as part of RHEL 7.
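As a minimal sketch (the server address is a placeholder), start iperf3 in server mode on one endpoint, then run the client from the other, first for TCP and then for UDP throughput:

$ iperf3 -s                        # run on the server side
$ iperf3 -c <server_ip>            # TCP throughput test from the client
$ iperf3 -c <server_ip> -u -b 1G   # UDP test at a 1 Gbit/s target rate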
