Chapter 7. Multiple networks

7.1. Understanding multiple networks

In Kubernetes, container networking is delegated to networking plug-ins that implement the Container Network Interface (CNI).

OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you configure your default Pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your Pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure Pods that deliver network functionality, such as switching or routing.

7.1.1. Usage scenarios for an additional network

You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons:

Performance
You can send traffic on two different planes to manage how much traffic flows on each plane.
Security
You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers.

All of the Pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every Pod has an eth0 interface that is attached to the cluster-wide Pod network. You can view the interfaces for a Pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1, net2, …​, netN.

To attach additional network interfaces to a Pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a Custom Resource (CR) that has a NetworkAttachmentDefinition type. A CNI configuration inside each of these CRs defines how that interface is created.
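
For illustration, a NetworkAttachmentDefinition CR has the following general shape. This is a minimal sketch with placeholder names; in OpenShift Container Platform you typically do not create these CRs directly, because the Cluster Network Operator creates them for you, as described later in this chapter.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: example-net        # placeholder name for the additional network
  namespace: example-ns    # placeholder namespace
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "example-net",
    "type": "bridge",
    "ipam": { "type": "dhcp" }
  }'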

7.1.2. Additional networks in OpenShift Container Platform

OpenShift Container Platform provides the following CNI plug-ins for creating additional networks in your cluster:

  • bridge: Connects all Pods on a node to a virtual switch and assigns each Pod an IP address on the additional network.
  • host-device: Moves a specified network device from the host’s network namespace into the Pod’s network namespace.
  • ipvlan: Creates a virtual network that is associated with a physical interface that you specify.
  • macvlan: Creates a sub-interface from a parent interface on the host, with a unique MAC address for each sub-interface.

7.2. Attaching a Pod to an additional network

As a cluster user, you can attach a Pod to an additional network.

7.2.1. Adding a Pod to an additional network

You can add a Pod to an additional network. The Pod continues to send normal cluster-related network traffic over the default network.

Additional networks are attached to a Pod when the Pod is created. You cannot attach additional networks to a Pod that already exists.

Prerequisites

  • The Pod must be in the same namespace as the additional network.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster.

Procedure

  1. Add an annotation to the Pod object. Only one of the following annotation formats can be used:

    1. To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the Pod:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1
      1
      To specify more than one additional network, separate each network with a comma. Do not include whitespace between entries. If you specify the same additional network multiple times, that Pod will have multiple network interfaces attached to that network.
    2. To attach an additional network with customizations, add an annotation with the following format:

      metadata:
        annotations:
          k8s.v1.cni.cncf.io/networks: |-
            [
              {
                "name": "<network>", 1
                "namespace": "<namespace>", 2
                "default-route": ["<default-route>"] 3
              }
            ]
      1
      Specify the name of the additional network defined by a NetworkAttachmentDefinition CR.
      2
      Specify the namespace where the NetworkAttachmentDefinition CR is defined.
      3
      Optional: Specify an override for the default route, such as 192.168.17.1.
  2. To create the Pod, enter the following command. Replace <name> with the name of the Pod. A minimal example manifest for this step is sketched after this procedure.

    $ oc create -f <name>.yaml
  3. Optional: To confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the Pod.

    $ oc get pod <name> -o yaml

    In the following example, the example-pod Pod is attached to the net1 additional network:

    $ oc get pod example-pod -o yaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-bridge
        k8s.v1.cni.cncf.io/networks-status: |- 1
          [{
              "name": "openshift-sdn",
              "interface": "eth0",
              "ips": [
                  "10.128.2.14"
              ],
              "default": true,
              "dns": {}
          },{
              "name": "macvlan-bridge",
              "interface": "net1",
              "ips": [
                  "20.2.2.100"
              ],
              "mac": "22:2f:60:a5:f8:00",
              "dns": {}
          }]
      name: example-pod
      namespace: default
    spec:
      ...
    status:
      ...
    1
    The k8s.v1.cni.cncf.io/networks-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the Pod. The annotation value is stored as a plain text value.
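
For reference, a minimal <name>.yaml manifest for step 2 might look like the following sketch. The additional network name (macvlan-bridge) and the container image are placeholders taken from the example above.

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-bridge
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools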

7.2.1.1. Specifying Pod-specific addressing and routing options

When attaching a Pod to an additional network, you might want to specify further properties of that network for a particular Pod. This allows you to change some aspects of routing, and to specify static IP addresses and MAC addresses. To accomplish this, you can use JSON-formatted annotations.

Prerequisites

  • The Pod must be in the same namespace as the additional network.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster.

Procedure

To add a Pod to an additional network while specifying addressing and/or routing options, complete the following steps:

  1. Edit the Pod resource definition. If you are editing an existing Pod, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod to edit.

    $ oc edit pod <name>
  2. In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the Pod metadata mapping. The k8s.v1.cni.cncf.io/networks parameter accepts a JSON string of a list of objects that reference NetworkAttachmentDefinition Custom Resource (CR) names, in addition to specifying additional properties.

    metadata:
      annotations:
        k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1
    1
    Replace <network> with a JSON object as shown in the following examples. The single quotes are required.
  3. In the following example, the annotation specifies which network attachment has the default route by using the default-route parameter.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-pod
      annotations:
        k8s.v1.cni.cncf.io/networks: '[
          {
            "name": "net1"
          },
          {
            "name": "net2", 1
            "default-route": ["192.0.2.1"] 2
          }
        ]'
    spec:
      containers:
      - name: example-pod
        command: ["/bin/bash", "-c", "sleep 2000000000000"]
        image: centos/tools
    1
    The name key is the name of the additional network to associate with the Pod.
    2
    The default-route key specifies the gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, the Pod fails to become active.

The default route will cause any traffic that is not specified in other routes to be routed to the gateway.

Important

Setting the default route to an interface other than the default network interface for OpenShift Container Platform might cause traffic that is expected to be Pod-to-Pod traffic to be routed over another interface.

To verify the routing properties of a Pod, you can use the oc command to execute the ip command within a Pod.

$ oc exec -it <pod_name> -- ip route
Note

You can also reference the Pod’s k8s.v1.cni.cncf.io/networks-status annotation to see which additional network has been assigned the default route, indicated by the presence of the default-route key in the JSON-formatted list of objects.

To set a static IP address or MAC address for a Pod, you can use the JSON-formatted annotations. This requires that you create networks that specifically allow for this functionality, which you can specify in a rawCNIConfig for the CNO.

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plug-in configuration in JSON format, which is based on the following template.

The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plug-in:

macvlan CNI plug-in JSON configuration object using static IP and MAC address

{
  "cniVersion": "0.3.1",
  "plugins": [{ 1
      "type": "macvlan",
      "capabilities": { "ips": true }, 2
      "master": "eth0", 3
      "mode": "bridge",
      "ipam": {
        "type": "static"
      }
    }, {
      "capabilities": { "mac": true }, 4
      "type": "tuning"
    }]
}

1
The plugins field specifies a list of CNI plug-in configurations.
2
The capabilities key denotes that a request is being made to enable the static IP functionality of a CNI plug-in’s runtime configuration capabilities.
3
The master field is specific to the macvlan plug-in.
4
Here the capabilities key denotes that a request is made to enable the static MAC address functionality of a CNI plug-in.

You can then reference the above network attachment in a JSON-formatted annotation, along with keys that specify which static IP address and MAC address to assign to a given Pod.

Edit the desired Pod with:

$ oc edit pod <name>

Example Pod annotation using a static IP address and MAC address

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>", 1
        "ips": [ "192.0.2.205/24" ], 2
        "mac": "CA:FE:C0:FF:EE:00" 3
      }
    ]'

1
Use the <name> provided when creating the rawCNIConfig above.
2
Provide the desired IP address.
3
Provide the desired MAC address.
Note

Static IP addresses and MAC addresses do not have to be used at the same time. You can use them individually or together.
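
For example, an annotation that sets only a static IP address, without a MAC address, might look like the following sketch (the network name and address are placeholders):

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "<name>",
        "ips": [ "192.0.2.205/24" ]
      }
    ]'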

To verify the IP address and MAC properties of a Pod with additional networks, use the oc command to execute the ip command within a Pod.

$ oc exec -it <pod_name> -- ip a

7.3. Removing a Pod from an additional network

As a cluster user, you can remove a Pod from an additional network.

7.3.1. Removing a Pod from an additional network

You can remove a Pod from an additional network.

Prerequisites

  • You have configured an additional network for your cluster.
  • You have an additional network attached to the Pod.
  • Install the OpenShift CLI (oc).
  • You must log in to the cluster.

Procedure

To remove a Pod from an additional network, complete the following steps:

  1. Edit the Pod resource definition by running the following command. Replace <name> with the name of the Pod to edit.

    $ oc edit pod <name>
  2. Update the annotations mapping to remove the additional network from the Pod by performing one of the following actions:

    • To remove all additional networks from a Pod, remove the k8s.v1.cni.cncf.io/networks parameter from the Pod resource definition as in the following example:

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
        annotations: {}
      spec:
        containers:
        - name: example-pod
          command: ["/bin/bash", "-c", "sleep 2000000000000"]
          image: centos/tools
    • To remove a specific additional network from a Pod, update the k8s.v1.cni.cncf.io/networks parameter by removing the name of the NetworkAttachmentDefinition for the additional network, as illustrated in the sketch after this procedure.
  3. Optional: Confirm that the Pod is no longer attached to the additional network by running the following command. Replace <name> with the name of the Pod.

    $ oc describe pod <name>

    In the following example, the example-pod Pod is attached only to the default cluster network.

    $ oc describe pod example-pod

    Example output

    Name:               example-pod
    ...
    Annotations:        k8s.v1.cni.cncf.io/networks-status:
                          [{
                              "name": "openshift-sdn",
                              "interface": "eth0",
                              "ips": [
                                  "10.131.0.13"
                              ],
                              "default": true, 1
                              "dns": {}
                          }]
    Status:             Running
    ...

    1
    Only the default cluster network is attached to the Pod.
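
For example, if a Pod was attached to two hypothetical additional networks, net1 and net2, by using the comma-separated annotation form, removing net2 from the annotation detaches the Pod from only that network. This is a minimal sketch:

# Before: attached to net1 and net2
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: net1,net2

# After: attached only to net1
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: net1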

7.4. Configuring a bridge network

As a cluster administrator, you can configure an additional network for your cluster using the bridge Container Network Interface (CNI) plug-in. When configured, all Pods on a node are connected to a virtual switch. Each Pod is assigned an IP address on the additional network.

7.4.1. Creating an additional network attachment with the bridge CNI plug-in

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition Custom Resource (CR) automatically.

Important

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To create an additional network for your cluster, complete the following steps:

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    The following YAML configures the bridge CNI plug-in:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: 1
      - name: test-network-1
        namespace: test-1
        type: Raw
        rawCNIConfig: '{
          "cniVersion": "0.3.1",
          "name": "test-network-1",
          "type": "bridge",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "191.168.1.7"
              }
            ]
          }
        }'
    1
    Specify the configuration for the additional network attachment definition.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. There might be a delay before the CNO creates the CR.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                 AGE
    test-network-1       14m

7.4.1.1. Configuration for bridge

The configuration for an additional network attachment that uses the bridge Container Network Interface (CNI) plug-in is provided in two parts:

  • Cluster Network Operator (CNO) configuration
  • CNI plug-in configuration

The CNO configuration specifies the name for the additional network attachment and the namespace to create the attachment in. The plug-in is configured by a JSON object specified by the rawCNIConfig parameter in the CNO configuration.

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plug-in configuration in JSON format, which is based on the following template.

The following object describes the configuration parameters for the bridge CNI plug-in:

bridge CNI plug-in JSON configuration object

{
  "cniVersion": "0.3.1",
  "name": "<name>", 1
  "type": "bridge",
  "bridge": "<bridge>", 2
  "ipam": { 3
    ...
  },
  "ipMasq": false, 4
  "isGateway": false, 5
  "isDefaultGateway": false, 6
  "forceAddress": false, 7
  "hairpinMode": false, 8
  "promiscMode": false, 9
  "vlan": <vlan>, 10
  "mtu": <mtu> 11
}

1
Specify the value for the name parameter you provided previously for the CNO configuration.
2
Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0.
3
Specify a configuration object for the ipam CNI plug-in. The plug-in manages IP address assignment for the network attachment definition.
4
Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge’s IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false.
5
Set to true to assign an IP address to the bridge. The default value is false.
6
Set to true to configure the bridge as the default gateway for the virtual network. The default value is false. If isDefaultGateway is set to true, then isGateway is also set to true automatically.
7
Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false, if an IPv4 address or an IPv6 address from overlapping subnets is assigned to the virtual bridge, an error occurs. The default value is false.
8
Set to true to allow the virtual bridge to send an ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay. The default value is false.
9
Set to true to enable promiscuous mode on the bridge. The default value is false.
10
Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned.
11
Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
7.4.1.1.1. bridge configuration example

The following example configures an additional network named bridge-net:

name: bridge-net
namespace: work-network
type: Raw
rawCNIConfig: '{ 1
  "cniVersion": "0.3.1",
  "name": "work-network",
  "type": "bridge",
  "isGateway": true,
  "vlan": 2,
  "ipam": {
    "type": "dhcp"
    }
}'
1
The CNI configuration object is specified as a YAML string.

7.4.1.2. Configuration for ipam CNI plug-in

The ipam Container Network Interface (CNI) plug-in provides IP address management (IPAM) for other CNI plug-ins. You can configure ipam for either static IP address assignment or dynamic IP address assignment by using DHCP. The DHCP server you specify must be reachable from the additional network.

The following JSON configuration object describes the parameters that you can set.

7.4.1.2.1. Static IP address assignment configuration

The following JSON describes the configuration for static IP address assignment:

Static assignment configuration

{
  "ipam": {
    "type": "static",
    "addresses": [ 1
      {
        "address": "<address>", 2
        "gateway": "<gateway>" 3
      }
    ],
    "routes": [ 4
      {
        "dst": "<dst>" 5
        "gw": "<gw>" 6
      }
    ],
    "dns": { 7
      "nameservers": ["<nameserver>"], 8
      "domain": "<domain>", 9
      "search": ["<search_domain>"] 10
    }
  }
}

1
An array describing IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
2
An IP address that you specify.
3
The default gateway to route egress network traffic to.
4
An array describing routes to configure inside the Pod.
5
The IP address range in CIDR format.
6
The gateway where network traffic is routed.
7
Optional: DNS configuration.
8
An array of one or more IP addresses to send DNS queries to.
9
The default domain to append to a host name. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.
10
An array of domain names to append to an unqualified host name, such as example-host, during a DNS lookup query.
7.4.1.2.2. Dynamic IP address assignment configuration

The following JSON describes the configuration for dynamic IP address assignment with DHCP.

Renewal of DHCP leases

A Pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "master": "ens5",
        "ipam": {
          "type": "dhcp"
        }
      }

DHCP assignment configuration

{
  "ipam": {
    "type": "dhcp"
  }
}

7.4.1.2.3. Static IP address assignment configuration example

You can configure ipam for static IP address assignment:

{
  "ipam": {
    "type": "static",
      "addresses": [
        {
          "address": "191.168.1.7"
        }
      ]
  }
}
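
A fuller static configuration that also sets a route and DNS servers might look like the following sketch, which reuses the placeholder fields described above with example values; adjust the addresses for your environment:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "10.51.100.11/24",
        "gateway": "10.51.100.10"
      }
    ],
    "routes": [
      {
        "dst": "0.0.0.0/0",
        "gw": "10.51.100.1"
      }
    ],
    "dns": {
      "nameservers": ["10.51.100.1"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}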
7.4.1.2.4. Dynamic IP address assignment configuration example using DHCP

You can configure ipam for DHCP:

{
  "ipam": {
    "type": "dhcp"
  }
}

7.4.2. Next steps

  • Attach a Pod to an additional network.

7.5. Configuring a macvlan network

As a cluster administrator, you can configure an additional network for your cluster using the macvlan CNI plug-in. When a Pod is attached to the network, the plug-in creates a sub-interface from the parent interface on the host. A unique hardware MAC address is generated for each sub-device.

Important

The unique MAC addresses this plug-in generates for sub-interfaces might not be compatible with the security policies of your cloud provider.

7.5.1. Creating an additional network attachment with the macvlan CNI plug-in

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition Custom Resource (CR) automatically.

Important

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To create an additional network for your cluster, complete the following steps:

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    The following YAML configures the macvlan CNI plug-in:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: 1
      - name: test-network-1
        namespace: test-1
        type: SimpleMacvlan
        simpleMacvlanConfig:
          ipamConfig:
            type: static
            staticIPAMConfig:
              addresses:
              - address: 10.1.1.7
    1
    Specify the configuration for the additional network attachment definition.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. There might be a delay before the CNO creates the CR.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                 AGE
    test-network-1       14m

7.5.1.1. Configuration for macvlan CNI plug-in

The following YAML describes the configuration parameters for the macvlan Container Network Interface (CNI) plug-in:

macvlan YAML configuration

name: <name> 1
namespace: <namespace> 2
type: SimpleMacvlan
simpleMacvlanConfig:
  master: <master> 3
  mode: <mode> 4
  mtu: <mtu> 5
  ipamConfig: 6
    ...

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If a value is not specified, the default namespace is used.
3
The ethernet interface to associate with the virtual interface. If a value for master is not specified, then the host system’s primary ethernet interface is used.
4
Configures traffic visibility on the virtual network. Must be either bridge, passthru, private, or vepa. If a value for mode is not provided, the default value is bridge.
5
Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
6
Specify a configuration object for the ipam CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
7.5.1.1.1. macvlan configuration example

The following example configures an additional network named macvlan-net:

name: macvlan-net
namespace: work-network
type: SimpleMacvlan
simpleMacvlanConfig:
  ipamConfig:
    type: DHCP
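
If you need to attach to a specific host interface or tune the mode and MTU, an entry might look like the following sketch. The interface name ens5 and the addresses are placeholders, assuming static IP assignment:

name: macvlan-static
namespace: work-network
type: SimpleMacvlan
simpleMacvlanConfig:
  master: ens5
  mode: bridge
  mtu: 1500
  ipamConfig:
    type: static
    staticIPAMConfig:
      addresses:
      - address: 10.1.1.10
        gateway: 10.1.1.1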

7.5.1.2. Configuration for ipam CNI plug-in

The ipam Container Network Interface (CNI) plug-in provides IP address management (IPAM) for other CNI plug-ins. You can configure ipam for either static IP address assignment or dynamic IP address assignment by using DHCP. The DHCP server you specify must be reachable from the additional network.

The following YAML configuration describes the parameters that you can set.

ipam CNI plug-in YAML configuration object

ipamConfig:
  type: <type> 1
  ... 2

1
Specify static to configure the plug-in to manage IP address assignment. Specify DHCP to allow a DHCP server to manage IP address assignment. You cannot specify any additional parameters if you specify a value of DHCP.
2
If you set the type parameter to static, then provide the staticIPAMConfig parameter.
7.5.1.2.1. Static ipam configuration YAML

The following YAML describes a configuration for static IP address assignment:

Static ipam configuration YAML

ipamConfig:
  type: static
  staticIPAMConfig:
    addresses: 1
    - address: <address> 2
      gateway: <gateway> 3
    routes: 4
    - destination: <destination> 5
      gateway: <gateway> 6
    dns: 7
      nameservers: 8
      - <nameserver>
      domain: <domain> 9
      search: 10
      - <search_domain>

1
A collection of mappings that define IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
2
An IP address that you specify.
3
The default gateway to route egress network traffic to.
4
A collection of mappings describing routes to configure inside the Pod.
5
The IP address range in CIDR format.
6
The gateway where network traffic is routed.
7
Optional: The DNS configuration.
8
A collection of one or more IP addresses to send DNS queries to.
9
The default domain to append to a host name. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.
10
An array of domain names to append to an unqualified host name, such as example-host, during a DNS lookup query.
7.5.1.2.2. Dynamic ipam configuration YAML

The following YAML describes a configuration for dynamic IP address assignment with DHCP:

Dynamic ipam configuration YAML

ipamConfig:
  type: DHCP

7.5.1.2.3. Static IP address assignment configuration example

The following example shows an ipam configuration for static IP addresses:

ipamConfig:
  type: static
  staticIPAMConfig:
    addresses:
    - address: 10.51.100.11
      gateway: 10.51.100.10
    routes:
    - destination: 0.0.0.0/0
      gateway: 10.51.100.1
    dns:
      nameservers:
      - 10.51.100.1
      - 10.51.100.2
      domain: testDNS.example
      search:
      - testdomain1.example
      - testdomain2.example
7.5.1.2.4. Dynamic IP address assignment configuration example

The following example shows an ipam configuration for DHCP:

ipamConfig:
  type: DHCP

7.5.2. Next steps

  • Attach a Pod to an additional network.

7.6. Configuring an ipvlan network

As a cluster administrator, you can configure an additional network for your cluster by using the ipvlan Container Network Interface (CNI) plug-in. The virtual network created by this plug-in is associated with a physical interface that you specify.

7.6.1. Creating an additional network attachment with the ipvlan CNI plug-in

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition Custom Resource (CR) automatically.

Important

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To create an additional network for your cluster, complete the following steps:

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    The following YAML configures the ipvlan CNI plug-in:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: 1
      - name: test-network-1
        namespace: test-1
        type: Raw
        rawCNIConfig: '{
          "cniVersion": "0.3.1",
          "name": "test-network-1",
          "type": "ipvlan",
          "master": "eth1",
          "mode": "l2",
          "ipam": {
            "type": "static",
            "addresses": [
              {
                "address": "191.168.1.7"
              }
            ]
          }
        }'
    1
    Specify the configuration for the additional network attachment definition.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. There might be a delay before the CNO creates the CR.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                 AGE
    test-network-1       14m

7.6.1.1. Configuration for ipvlan

The configuration for an additional network attachment that uses the ipvlan Container Network Interface (CNI) plug-in is provided in two parts:

  • Cluster Network Operator (CNO) configuration
  • CNI plug-in configuration

The CNO configuration specifies the name for the additional network attachment and the namespace to create the attachment in. The plug-in is configured by a JSON object specified by the rawCNIConfig parameter in the CNO configuration.

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plug-in configuration in JSON format, which is based on the following template.

The following object describes the configuration parameters for the ipvlan CNI plug-in:

ipvlan CNI plug-in JSON configuration object

{
  "cniVersion": "0.3.1",
  "name": "<name>", 1
  "type": "ipvlan",
  "mode": "<mode>", 2
  "master": "<master>", 3
  "mtu": <mtu>, 4
  "ipam": { 5
    ...
  }
}

1
Specify the value for the name parameter you provided previously for the CNO configuration.
2
Specify the operating mode for the virtual network. The value must be l2, l3, or l3s. The default value is l2.
3
Specify the ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used.
4
Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel.
5
Specify a configuration object for the ipam CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
7.6.1.1.1. ipvlan configuration example

The following example configures an additional network named ipvlan-net:

name: ipvlan-net
namespace: work-network
type: Raw
rawCNIConfig: '{ 1
  "cniVersion": "0.3.1",
  "name": "work-network",
  "type": "ipvlan",
  "master": "eth1",
  "mode": "l3",
  "ipam": {
    "type": "dhcp"
    }
}'
1
The CNI configuration object is specified as a YAML string.

7.6.1.2. Configuration for ipam CNI plug-in

The ipam Container Network Interface (CNI) plug-in provides IP address management (IPAM) for other CNI plug-ins. You can configure ipam for either static IP address assignment or dynamic IP address assignment by using DHCP. The DHCP server you specify must be reachable from the additional network.

The following JSON configuration object describes the parameters that you can set.

7.6.1.2.1. Static IP address assignment configuration

The following JSON describes the configuration for static IP address assignment:

Static assignment configuration

{
  "ipam": {
    "type": "static",
    "addresses": [ 1
      {
        "address": "<address>", 2
        "gateway": "<gateway>" 3
      }
    ],
    "routes": [ 4
      {
        "dst": "<dst>" 5
        "gw": "<gw>" 6
      }
    ],
    "dns": { 7
      "nameservers": ["<nameserver>"], 8
      "domain": "<domain>", 9
      "search": ["<search_domain>"] 10
    }
  }
}

1
An array describing IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
2
An IP address that you specify.
3
The default gateway to route egress network traffic to.
4
An array describing routes to configure inside the Pod.
5
The IP address range in CIDR format.
6
The gateway where network traffic is routed.
7
Optional: DNS configuration.
8
An array of one or more IP addresses to send DNS queries to.
9
The default domain to append to a host name. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.
10
An array of domain names to append to an unqualified host name, such as example-host, during a DNS lookup query.
7.6.1.2.2. Dynamic IP address assignment configuration

The following JSON describes the configuration for dynamic IP address assignment with DHCP.

Renewal of DHCP leases

A Pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "master": "ens5",
        "ipam": {
          "type": "dhcp"
        }
      }

DHCP assignment configuration

{
  "ipam": {
    "type": "dhcp"
  }
}

7.6.1.2.3. Static IP address assignment configuration example

You can configure ipam for static IP address assignment:

{
  "ipam": {
    "type": "static",
      "addresses": [
        {
          "address": "191.168.1.7"
        }
      ]
  }
}
7.6.1.2.4. Dynamic IP address assignment configuration example using DHCP

You can configure ipam for DHCP:

{
  "ipam": {
    "type": "dhcp"
  }
}

7.6.2. Next steps

  • Attach a Pod to an additional network.

7.7. Configuring a host-device network

As a cluster administrator, you can configure an additional network for your cluster by using the host-device Container Network Interface (CNI) plug-in. The plug-in allows you to move the specified network device from the host’s network namespace into the Pod’s network namespace.

7.7.1. Creating an additional network attachment with the host-device CNI plug-in

The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition Custom Resource (CR) automatically.

Important

Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To create an additional network for your cluster, complete the following steps:

  1. Edit the CNO CR by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by adding the configuration for the additional network that you are creating, as in the following example CR.

    The following YAML configures the host-device CNI plug-in:

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: 1
      - name: test-network-1
        namespace: test-1
        type: Raw
        rawCNIConfig: '{
          "cniVersion": "0.3.1",
          "name": "test-network-1",
          "type": "host-device",
          "device": "eth1"
        }'
    1
    Specify the configuration for the additional network attachment definition.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. There might be a delay before the CNO creates the CR.

    $ oc get network-attachment-definitions -n <namespace>

    Example output

    NAME                 AGE
    test-network-1       14m

7.7.1.1. Configuration for host-device

The configuration for an additional network attachment that uses the host-device Container Network Interface (CNI) plug-in is provided in two parts:

  • Cluster Network Operator (CNO) configuration
  • CNI plug-in configuration

The CNO configuration specifies the name for the additional network attachment and the namespace to create the attachment in. The plug-in is configured by a JSON object specified by the rawCNIConfig parameter in the CNO configuration.

The following YAML describes the configuration parameters for the CNO:

Cluster Network Operator YAML configuration

name: <name> 1
namespace: <namespace> 2
rawCNIConfig: '{ 3
  ...
}'
type: Raw

1
Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace.
2
Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used.
3
Specify the CNI plug-in configuration in JSON format, which is based on the following template.
Important

Specify your network device by setting only one of the following parameters: device, hwaddr, kernelpath, or pciBusID.

The following object describes the configuration parameters for the host-device CNI plug-in:

host-device CNI plug-in JSON configuration object

{
  "cniVersion": "0.3.1",
  "name": "<name>", 1
  "type": "host-device",
  "device": "<device>", 2
  "hwaddr": "<hwaddr>", 3
  "kernelpath": "<kernelpath>", 4
  "pciBusID": "<pciBusID>", 5
    "ipam": { 6
    ...
  }
}

1
Specify the value for the name parameter you provided previously for the CNO configuration.
2
Specify the name of the device, such as eth0.
3
Specify the device hardware MAC address.
4
Specify the Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6.
5
Specify the PCI address of the network device, such as 0000:00:1f.6.
6
Specify a configuration object for the ipam CNI plug-in. The plug-in manages IP address assignment for the attachment definition.
7.7.1.1.1. host-device configuration example

The following example configures an additional network named hostdev-net:

name: hostdev-net
namespace: work-network
type: Raw
rawCNIConfig: '{ 1
  "cniVersion": "0.3.1",
  "name": "work-network",
  "type": "host-device",
  "device": "eth1"
}'
1
The CNI configuration object is specified as a YAML string.
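
If the moved device should also receive an IP address managed by CNI, you can combine host-device with the ipam plug-in that is described in the next section. The following sketch assumes DHCP is available on the network that eth1 is connected to:

name: hostdev-dhcp
namespace: work-network
type: Raw
rawCNIConfig: '{
  "cniVersion": "0.3.1",
  "name": "work-network",
  "type": "host-device",
  "device": "eth1",
  "ipam": {
    "type": "dhcp"
  }
}'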

7.7.1.2. Configuration for ipam CNI plug-in

The ipam Container Network Interface (CNI) plug-in provides IP address management (IPAM) for other CNI plug-ins. You can configure ipam for either static IP address assignment or dynamic IP address assignment by using DHCP. The DHCP server you specify must be reachable from the additional network.

The following JSON configuration object describes the parameters that you can set.

7.7.1.2.1. Static IP address assignment configuration

The following JSON describes the configuration for static IP address assignment:

Static assignment configuration

{
  "ipam": {
    "type": "static",
    "addresses": [ 1
      {
        "address": "<address>", 2
        "gateway": "<gateway>" 3
      }
    ],
    "routes": [ 4
      {
        "dst": "<dst>" 5
        "gw": "<gw>" 6
      }
    ],
    "dns": { 7
      "nameservers": ["<nameserver>"], 8
      "domain": "<domain>", 9
      "search": ["<search_domain>"] 10
    }
  }
}

1
An array describing IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported.
2
An IP address that you specify.
3
The default gateway to route egress network traffic to.
4
An array describing routes to configure inside the Pod.
5
The IP address range in CIDR format.
6
The gateway where network traffic is routed.
7
Optional: DNS configuration.
8
An array of one or more IP addresses to send DNS queries to.
9
The default domain to append to a host name. For example, if the domain is set to example.com, a DNS lookup query for example-host is rewritten as example-host.example.com.
10
An array of domain names to append to an unqualified host name, such as example-host, during a DNS lookup query.
7.7.1.2.2. Dynamic IP address assignment configuration

The following JSON describes the configuration for dynamic IP address assignment with DHCP.

Renewal of DHCP leases

A Pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster.

To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example:

Example shim network attachment definition

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  ...
  additionalNetworks:
  - name: dhcp-shim
    namespace: default
    type: Raw
    rawCNIConfig: |-
      {
        "name": "dhcp-shim",
        "cniVersion": "0.3.1",
        "type": "bridge",
        "master": "ens5",
        "ipam": {
          "type": "dhcp"
        }
      }

DHCP assignment configuration

{
  "ipam": {
    "type": "dhcp"
  }
}

7.7.1.2.3. Static IP address assignment configuration example

You can configure ipam for static IP address assignment:

{
  "ipam": {
    "type": "static",
      "addresses": [
        {
          "address": "191.168.1.7"
        }
      ]
  }
}
7.7.1.2.4. Dynamic IP address assignment configuration example using DHCP

You can configure ipam for DHCP:

{
  "ipam": {
    "type": "dhcp"
  }
}

7.7.2. Next steps

  • Attach a Pod to an additional network.

7.8. Editing an additional network

As a cluster administrator, you can modify the configuration for an existing additional network.

7.8.1. Modifying an additional network attachment definition

As a cluster administrator, you can make changes to an existing additional network. Any existing Pods attached to the additional network will not be updated.

Prerequisites

  • You have configured an additional network for your cluster.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To edit an additional network for your cluster, complete the following steps:

  1. Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor:

    $ oc edit networks.operator.openshift.io cluster
  2. In the additionalNetworks collection, update the additional network with your changes.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition CR by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition CR to reflect your changes.

    $ oc get network-attachment-definitions <network-name> -o yaml

    For example, the following console output displays a NetworkAttachmentDefinition that is named net1:

    $ oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}'
    { "cniVersion": "0.3.1", "type": "macvlan",
    "master": "ens5",
    "mode": "bridge",
    "ipam":       {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} }

7.9. Removing an additional network

As a cluster administrator, you can remove an additional network attachment.

7.9.1. Removing an additional network attachment definition

As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any Pods it is attached to.

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

To remove an additional network from your cluster, complete the following steps:

  1. Edit the Cluster Network Operator (CNO) CR in your default text editor by running the following command:

    $ oc edit networks.operator.openshift.io cluster
  2. Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing.

    apiVersion: operator.openshift.io/v1
    kind: Network
    metadata:
      name: cluster
    spec:
      additionalNetworks: [] 1
    1
    If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection.
  3. Save your changes and quit the text editor to commit your changes.
  4. Optional: Confirm that the additional network CR was deleted by running the following command:

    $ oc get network-attachment-definition --all-namespaces

7.10. Configuring PTP

Important

Precision Time Protocol (PTP) hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

7.10.1. About PTP hardware on OpenShift Container Platform

OpenShift Container Platform includes the capability to use PTP hardware on your nodes. You can configure linuxptp services on nodes with PTP capable hardware.

You can use the OpenShift Container Platform console to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services. The Operator provides the following features:

  • Discovers PTP capable devices in the cluster.
  • Manages the configuration of the linuxptp services.

7.10.2. Installing the PTP Operator

As a cluster administrator, you can install the PTP Operator using the OpenShift Container Platform CLI or the web console.

7.10.2.1. Installing the Operator using the CLI

As a cluster administrator, you can install the Operator using the CLI.

Prerequisites

  • A cluster installed on bare-metal hardware with nodes that have hardware that supports PTP.
  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the PTP Operator by completing the following actions:

    1. Create the following Namespace Custom Resource (CR) that defines the openshift-ptp namespace, and then save the YAML in the ptp-namespace.yaml file:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-ptp
        labels:
          openshift.io/run-level: "1"
    2. Create the namespace by running the following command:

      $ oc create -f ptp-namespace.yaml
  2. Install the PTP Operator in the namespace you created in the previous step by creating the following objects:

    1. Create the following OperatorGroup CR and save the YAML in the ptp-operatorgroup.yaml file:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: ptp-operators
        namespace: openshift-ptp
      spec:
        targetNamespaces:
        - openshift-ptp
    2. Create the OperatorGroup CR by running the following command:

      $ oc create -f ptp-operatorgroup.yaml
    3. Run the following command to get the channel value required for the next step.

      $ oc get packagemanifest ptp-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'
      
      4.3
    4. Create the following Subscription CR and save the YAML in the ptp-sub.yaml file:

      Example Subscription

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: ptp-operator-subscription
        namespace: openshift-ptp
      spec:
        channel: <channel> 1
        name: ptp-operator
        source: redhat-operators 2
        sourceNamespace: openshift-marketplace

      1
      Specify the value you obtained in the previous step for the .status.defaultChannel parameter.
      2
      You must specify the redhat-operators value.
    5. Create the Subscription object by running the following command:

      $ oc create -f ptp-sub.yaml
    6. Change to the openshift-ptp project:

      $ oc project openshift-ptp

      Example output

      Now using project "openshift-ptp"
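
Optionally, you can confirm from the CLI that the Operator deployment succeeded by checking its ClusterServiceVersion in the namespace. This is a general Operator Lifecycle Manager check, not specific to the PTP Operator, and the output columns vary by version:

$ oc get csv -n openshift-ptp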

7.10.2.2. Installing the Operator using the web console

As a cluster administrator, you can install the Operator using the web console.

Note

You must create the Namespace CR and OperatorGroup CR as described in the previous section.

Procedure

  1. Install the PTP Operator using the OpenShift Container Platform web console:

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Choose PTP Operator from the list of available Operators, and then click Install.
    3. On the Create Operator Subscription page, under A specific namespace on the cluster, select openshift-ptp. Then, click Subscribe.
  2. Optional: Verify that the PTP Operator installed successfully:

    1. Switch to the Operators → Installed Operators page.
    2. Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.

      Note

      During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

      If the operator does not appear as installed, to troubleshoot further:

      • Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
      • Go to the Workloads → Pods page and check the logs for Pods in the openshift-ptp project.

7.10.3. Automated discovery of PTP network devices

The PTP Operator adds the NodePtpDevice.ptp.openshift.io Custom Resource Definition (CRD) to OpenShift Container Platform. The PTP Operator will search your cluster for PTP capable network devices on each node. The Operator creates and updates a NodePtpDevice Custom Resource (CR) for each node that provides a compatible PTP device.

One CR is created for each node, and shares the same name as the node. The .status.devices list provides information about the PTP devices on a node.

The following is an example of a NodePtpDevice CR created by the PTP Operator:

apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
  creationTimestamp: "2019-11-15T08:57:11Z"
  generation: 1
  name: dev-worker-0 1
  namespace: openshift-ptp 2
  resourceVersion: "487462"
  selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0
  uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f
spec: {}
status:
  devices: 3
  - name: eno1
  - name: eno2
  - name: ens787f0
  - name: ens787f1
  - name: ens801f0
  - name: ens801f1
  - name: ens802f0
  - name: ens802f1
  - name: ens803
1
The value for the name parameter is the same as the name of the node.
2
The CR is created in the openshift-ptp namespace by the PTP Operator.
3
The devices collection includes a list of all of the PTP capable devices discovered by the Operator on the node.
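
To view the NodePtpDevice CRs that the Operator created for your nodes, you can query them by kind, for example (a sketch; the namespace matches where the PTP Operator is installed):

$ oc get NodePtpDevice -n openshift-ptp -o yaml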

7.10.4. Configuring Linuxptp services

The PTP Operator adds the PtpConfig.ptp.openshift.io Custom Resource Definition (CRD) to OpenShift Container Platform. You can configure the Linuxptp services (ptp4l, phc2sys) by creating a PtpConfig Custom Resource (CR).

Prerequisites

  • Install the OpenShift CLI (oc).
  • Log in as a user with cluster-admin privileges.
  • You must have installed the PTP Operator.

Procedure

  1. Create the following PtpConfig CR, and then save the YAML in the <name>-ptp-config.yaml file. Replace <name> with the name for this configuration.

    apiVersion: ptp.openshift.io/v1
    kind: PtpConfig
    metadata:
      name: <name> 1
      namespace: openshift-ptp 2
    spec:
      profile: 3
      - name: "profile1" 4
        interface: "ens787f1" 5
        ptp4lOpts: "-s -2" 6
        phc2sysOpts: "-a -r" 7
      recommend: 8
      - profile: "profile1" 9
        priority: 10 10
        match: 11
        - nodeLabel: "node-role.kubernetes.io/worker" 12
          nodeName: "dev-worker-0" 13
    1
    Specify a name for the PtpConfig CR.
    2
    Specify the namespace where the PTP Operator is installed.
    3
    Specify an array of one or more profile objects.
    4
    Specify a name that uniquely identifies the profile object.
    5
    Specify the network interface name for the ptp4l service to use, for example ens787f1.
    6
    Specify system config options for the ptp4l service, for example -s -2. Do not include the interface name (-i <interface>) or the service config file (-f /etc/ptp4l.conf) because these are automatically appended.
    7
    Specify system config options for the phc2sys service, for example -a -r.
    8
    Specify an array of one or more recommend objects which define rules on how the profile should be applied to nodes.
    9
    Specify the profile object name defined in the profile section.
    10
    Specify the priority with an integer value between 0 and 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority will be applied to that node.
    11
    Specify match rules with nodeLabel or nodeName.
    12
    Specify nodeLabel with the key of node.Labels from the node object.
    13
    Specify nodeName with node.Name from the node object.
  2. Create the CR by running the following command:

    $ oc create -f <filename> 1
    1
    Replace <filename> with the name of the file you created in the previous step.
  3. Optional: Check that the PtpConfig profile is applied to nodes that match with nodeLabel or nodeName.

    $ oc get pods -n openshift-ptp -o wide

    Example output

    NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
    linuxptp-daemon-4xkbb           1/1     Running   0          43m   192.168.111.15   dev-worker-0   <none>           <none>
    linuxptp-daemon-tdspf           1/1     Running   0          43m   192.168.111.11   dev-master-0   <none>           <none>
    ptp-operator-657bbb64c8-2f8sj   1/1     Running   0          43m   10.128.0.116     dev-master-0   <none>           <none>
    
    $ oc logs linuxptp-daemon-4xkbb -n openshift-ptp
    I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
    I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
    I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
    I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 1
    I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1    2
    I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2       3
    I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r     4
    I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
    I1115 09:41:18.117934 4143292 daemon.go:186] Starting phc2sys...
    I1115 09:41:18.117985 4143292 daemon.go:187] phc2sys cmd: &{Path:/usr/sbin/phc2sys Args:[/usr/sbin/phc2sys -a -r] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    I1115 09:41:19.118175 4143292 daemon.go:186] Starting ptp4l...
    I1115 09:41:19.118209 4143292 daemon.go:187] ptp4l cmd: &{Path:/usr/sbin/ptp4l Args:[/usr/sbin/ptp4l -m -f /etc/ptp4l.conf -i ens787f1 -s -2] Env:[] Dir: Stdin:<nil> Stdout:<nil> Stderr:<nil> ExtraFiles:[] SysProcAttr:<nil> Process:<nil> ProcessState:<nil> ctx:<nil> lookPathErr:<nil> finished:false childFiles:[] closeAfterStart:[] closeAfterWait:[] goroutine:[] errch:<nil> waitDone:<nil>}
    ptp4l[102189.864]: selected /dev/ptp5 as PTP clock
    ptp4l[102189.886]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
    ptp4l[102189.886]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE

    1
    Profile Name is the name that is applied to node dev-worker-0.
    2
    Interface is the PTP device specified in the profile1 interface field. The ptp4l service runs on this interface.
    3
    Ptp4lOpts are the ptp4l sysconfig options specified in profile1 Ptp4lOpts field.
    4
    Phc2sysOpts are the phc2sys sysconfig options specified in profile1 Phc2sysOpts field.