Chapter 5. Networking
5.1. Networking
5.1.1. Overview
Kubernetes ensures that pods are able to network with each other, and allocates each pod an IP address from an internal network. This ensures all containers within the pod behave as if they were on the same host. Giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration.
Creating links between pods is unnecessary, and it is not recommended that your pods talk to one another directly using their IP addresses. Instead, it is recommended that you create a service, then interact with the service.
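As a sketch of this recommendation, a minimal Service definition might look like the following; the name `backend`, the `app: backend` label, and the port values are hypothetical and should be replaced with values matching your pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # hypothetical service name
spec:
  selector:
    app: backend         # selects the pods backing this service
  ports:
    - port: 8080         # port exposed by the service
      targetPort: 8080   # port the pods actually listen on
```

Frontend pods then address the service (by its stable IP or DNS name) rather than individual pod IPs, which can change as pods are rescheduled.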
5.1.2. OpenShift Dedicated DNS
If you are running multiple services, such as frontend and backend services used by multiple pods, environment variables are created for user names, service IPs, and more so that the frontend pods can communicate with the backend services. If the service is deleted and recreated, a new IP address can be assigned to the service, which requires the frontend pods to be recreated to pick up the updated value of the service IP environment variable. Additionally, the backend service must be created before any of the frontend pods to ensure that the service IP is generated properly and can be provided to the frontend pods as an environment variable.
For this reason, OpenShift Dedicated has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. OpenShift Dedicated supports split DNS by running SkyDNS on the master that answers DNS queries for services. The master listens to port 53 by default.
When the node starts, the following message indicates the Kubelet is correctly resolved to the master:
```
I0308 19:51:03.118430 4484 node.go:197] Started Kubelet for node openshiftdev.local, server at 0.0.0.0:10250
I0308 19:51:03.118459 4484 node.go:199] Kubelet is setting 10.0.2.15 as a DNS nameserver for domain "local"
```
If the second message does not appear, the Kubernetes service may not be available.
On a node host, each container’s nameserver has the master name added to the front, and the default search domain for the container will be .<pod_namespace>.cluster.local. The container will then direct any nameserver queries to the master before any other nameservers on the node, which is the default behavior for Docker-formatted containers. The master will answer queries on the .cluster.local domain that have the following form:
Table 5.1. DNS Example Names
| Object Type | Example |
|---|---|
| Default | <pod_namespace>.cluster.local |
| Services | <service>.<pod_namespace>.svc.cluster.local |
| Endpoints | <name>.<namespace>.endpoints.cluster.local |
This avoids having to restart frontend pods to pick up new services, which would otherwise be necessary because a recreated service gets a new IP. It also removes the need to use environment variables, because pods can use the service DNS name instead. And because the DNS name does not change, you can reference database services as db.local in configuration files. Wildcard lookups are also supported: any lookup resolves to the service IP. Finally, it removes the need to create the backend service before any of the frontend pods, since the service name (and hence its DNS name) is established upfront.
This DNS structure also covers headless services, where a portal IP is not assigned to the service and the kube-proxy does not load-balance or provide routing for its endpoints. Service DNS can still be used and responds with multiple A records, one for each pod of the service, allowing the client to round-robin between each pod.
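The client-side round-robin pattern that these multiple A records enable can be sketched in Python. A headless service name such as `db.myproject.svc.cluster.local` is hypothetical here; the example resolves `localhost` instead so it can run outside a cluster:

```python
import itertools
import socket

def resolve_a_records(hostname):
    """Return all IPv4 addresses the resolver sees for hostname."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_STREAM)
    # Deduplicate while preserving resolver order.
    seen, addrs = set(), []
    for info in infos:
        ip = info[4][0]
        if ip not in seen:
            seen.add(ip)
            addrs.append(ip)
    return addrs

# Inside a cluster this would be a headless service name such as
# "db.myproject.svc.cluster.local" (hypothetical); here we use localhost.
addrs = resolve_a_records("localhost")
rr = itertools.cycle(addrs)  # simple client-side round-robin over the pods
next_backend = next(rr)
```

Each call to `next(rr)` yields the next address in turn, spreading connections across the pods behind the headless service.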
5.2. HAProxy Router Plug-in
5.2.1. Overview
A router is one way to get traffic into the cluster. The HAProxy Template Router plug-in is one of the available router plugins.
5.2.2. HAProxy Template Router
The template router has two components:
- a wrapper that watches endpoints and routes and triggers an HAProxy reload based on changes.
- a controller that builds the HAProxy configuration file based on routes and endpoints.
The HAProxy router uses version 1.5.18.
The controller and HAProxy are housed inside a pod, which is managed by a deployment configuration. The process of setting up the router is automated by the oc adm router command.
The controller watches the routes and endpoints for changes, as well as HAProxy’s health. When a change is detected, it builds a new haproxy-config file and reloads HAProxy. The haproxy-config file is constructed based on the router’s template file and information from OpenShift Dedicated.
The HAProxy template file can be customized as needed to support features that are not currently supported by OpenShift Dedicated. The HAProxy manual describes all of the features supported by HAProxy.
The following diagram illustrates how data flows from the master through the plug-in and finally into an HAProxy configuration:
Figure 5.1. HAProxy Router Data Flow

5.2.2.1. HAProxy Template Router Metrics
The HAProxy router exposes metrics in Prometheus format for consumption by external metrics collection and aggregation systems (e.g., Prometheus, statsd). Alternatively, the router can be configured to provide HAProxy CSV format metrics, or to provide no router metrics at all.
The metrics are collected from both the router controller and from HAProxy every 5 seconds. The router metrics counters start at zero when the router is deployed and increase over time. The HAProxy metrics counters are reset to zero every time haproxy is reloaded. The router collects HAProxy statistics for each frontend, backend and server. To reduce resource usage when there are more than 500 servers, the backends are reported instead of the servers since a backend can have multiple servers.
The statistics are a subset of the available HAProxy Statistics.
The following HAProxy metrics are collected on a periodic basis and converted to Prometheus format. For every frontend, the "F" counters are collected. When the number of servers is at or below the server threshold, the "B" counters are collected for each backend and the "S" counters are collected for each server. Otherwise, the "b" counters are collected for each backend and no server counters are collected.
In the following table:

- Column 1 (Index): index from the HAProxy CSV statistics.
- Column 2 (Usage): when the counter is collected:
  - F: frontend metrics.
  - b: backend metrics when server metrics are not shown due to the server threshold.
  - B: backend metrics when server metrics are shown.
  - S: server metrics.
- Column 3 (Counter): the counter name.
- Column 4 (Description): the counter description.
| Index | Usage | Counter | Description |
|---|---|---|---|
| 2 | bBS | current_queue | Current number of queued requests not assigned to any server. |
| 4 | FbS | current_sessions | Current number of active sessions. |
| 5 | FbS | max_sessions | Maximum observed number of active sessions. |
| 7 | FbBS | connections_total | Total number of connections. |
| 8 | FbS | bytes_in_total | Current total of incoming bytes. |
| 9 | FbS | bytes_out_total | Current total of outgoing bytes. |
| 13 | bS | connection_errors_total | Total of connection errors. |
| 14 | bS | response_errors_total | Total of response errors. |
| 17 | bBS | up | Current health status of the backend (1 = UP, 0 = DOWN). |
| 21 | S | check_failures_total | Total number of failed health checks. |
| 24 | S | downtime_seconds_total | Total downtime in seconds. |
| 33 | FbS | current_session_rate | Current number of sessions per second over the last elapsed second. |
| 35 | FbS | max_session_rate | Maximum observed number of sessions per second. |
| 40 | FbS | http_responses_total | Total of HTTP responses with code 2xx. |
| 43 | FbS | http_responses_total | Total of HTTP responses with code 5xx. |
| 60 | bS | http_average_response_latency_milliseconds | Average HTTP response latency of the last 1024 requests, in milliseconds. |
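To make the "converted to Prometheus format" step concrete, the following sketch parses a single sample line in Prometheus exposition format. The metric name and label values in the example are hypothetical, and the parser is a simplification (it assumes no commas or escaped quotes inside label values):

```python
import re

# One Prometheus exposition line: name{labels} value
METRIC_LINE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'  # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'           # optional label set
    r'\s+(?P<value>\S+)$'                   # sample value
)

def parse_metric(line):
    """Parse one Prometheus-format sample into (name, labels dict, value)."""
    m = METRIC_LINE.match(line.strip())
    if m is None:
        raise ValueError("not a metric sample: %r" % line)
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key.strip()] = val.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))

# A hypothetical sample resembling what the router might expose for "up":
name, labels, value = parse_metric(
    'haproxy_server_up{backend="be_http:demo",server="10.1.0.5:8080"} 1'
)
```

A real consumer would use a Prometheus client library rather than hand-rolled parsing; this only illustrates the line format the router emits.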
The router controller scrapes the following items. These are only available with Prometheus format metrics.
| Name | Description |
|---|---|
| template_router_reload_seconds | Measures the time spent reloading the router in seconds. |
| template_router_write_config_seconds | Measures the time spent writing out the router configuration to disk in seconds. |
| haproxy_exporter_up | Was the last scrape of haproxy successful. |
| haproxy_exporter_csv_parse_failures | Number of errors while parsing CSV. |
| haproxy_exporter_scrape_interval | The time in seconds before another scrape is allowed, proportional to size of data. |
| haproxy_exporter_server_threshold | Number of servers tracked and the current threshold value. |
| haproxy_exporter_total_scrapes | Current total HAProxy scrapes. |
| http_request_duration_microseconds | The HTTP request latencies in microseconds. |
| http_request_size_bytes | The HTTP request sizes in bytes. |
| http_response_size_bytes | The HTTP response sizes in bytes. |
| openshift_build_info | A metric with a constant '1' value labeled by major, minor, git commit & git version from which OpenShift was built. |
| ssh_tunnel_open_count | Counter of SSH tunnel total open attempts |
| ssh_tunnel_open_fail_count | Counter of SSH tunnel failed open attempts |
5.3. Routes
5.3.1. Overview
An OpenShift Dedicated route exposes a service at a host name, like www.example.com, so that external clients can reach it by name.
DNS resolution for a host name is handled separately from routing. Your administrator may have configured a DNS wildcard entry that will resolve to the OpenShift Dedicated node that is running the OpenShift Dedicated router. If you are using a different host name you may need to modify its DNS records independently to resolve to the node that is running the router.
Each route consists of a name (limited to 63 characters), a service selector, and an optional security configuration.
Wildcard routes are disabled in OpenShift Dedicated.
5.3.2. Route-specific Annotations
Using environment variables, a router can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations.
Route Annotations
For all of the items outlined in this section, you can set annotations on the route definition for the route to alter its configuration.
Table 5.2. Route Annotations
| Variable | Description | Environment Variable Used as Default |
|---|---|---|
| haproxy.router.openshift.io/balance | Sets the load-balancing algorithm. Available options are source, roundrobin, and leastconn. | ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, ROUTER_LOAD_BALANCE_ALGORITHM is used. |
| haproxy.router.openshift.io/disable_cookies | Disables the use of cookies to track related connections. If set to true or TRUE, the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. | |
| router.openshift.io/cookie_name | Specifies an optional cookie to be used for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. | |
| haproxy.router.openshift.io/rate-limit-connections | Setting true or TRUE enables rate limiting functionality for the route. | |
| haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp | Limits the number of concurrent TCP connections shared by an IP address. | |
| haproxy.router.openshift.io/rate-limit-connections.rate-http | Limits the rate at which an IP address can make HTTP requests. | |
| haproxy.router.openshift.io/rate-limit-connections.rate-tcp | Limits the rate at which an IP address can make TCP connections. | |
| haproxy.router.openshift.io/timeout | Sets a server-side timeout for the route. (TimeUnits) | ROUTER_DEFAULT_SERVER_TIMEOUT |
| router.openshift.io/haproxy.health.check.interval | Sets the interval for the back-end health checks. (TimeUnits) | ROUTER_BACKEND_CHECK_INTERVAL |
| haproxy.router.openshift.io/ip_whitelist | Sets a whitelist for the route. | |
| haproxy.router.openshift.io/hsts_header | Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. | |
Example 5.1. A Route Setting Custom Timeout

```yaml
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms # 1
[...]
```

1. Specifies the new timeout with HAProxy supported units (us, ms, s, m, h, d). If the unit is not provided, ms is the default.
Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route.
5.3.3. Route-specific IP Whitelists
You can restrict access to a route to a select set of IP addresses by adding the haproxy.router.openshift.io/ip_whitelist annotation on the route. The whitelist is a space-separated list of IP addresses and/or CIDRs for the approved source addresses. Requests from IP addresses that are not in the whitelist are dropped.
Some examples:
When editing a route, add the following annotation to define the desired source IPs. Alternatively, use oc annotate route <name>.
Allow only one specific IP address:

```yaml
metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10
```

Allow several IP addresses:

```yaml
metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12
```

Allow an IP CIDR network:

```yaml
metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24
```

Allow mixed IP addresses and IP CIDR networks:

```yaml
metadata:
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8
```
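The matching semantics of the whitelist value can be sketched in Python using the standard ipaddress module. This is an illustration of how a space-separated list of IPs and CIDRs is interpreted, not the router's actual implementation:

```python
import ipaddress

def parse_ip_whitelist(value):
    """Parse a space-separated list of IPs and CIDRs, as used by the
    haproxy.router.openshift.io/ip_whitelist annotation value."""
    networks = []
    for token in value.split():
        # ip_network accepts both bare IPs (treated as /32) and CIDR blocks.
        networks.append(ipaddress.ip_network(token, strict=False))
    return networks

def is_allowed(source_ip, networks):
    """Return True if source_ip falls within any whitelisted network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in networks)

# Using the mixed example from above:
nets = parse_ip_whitelist("180.5.61.153 192.168.1.0/24 10.0.0.0/8")
```

With these networks, a request from 192.168.1.55 would be allowed, while one from 8.8.8.8 would be dropped.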