How to configure OpenShift Service Mesh with Mutual TLS EgressGateway Origination with an Egress Router in DNS proxy mode

Before you start.

Overview.

Mutual TLS (Transport Layer Security), or two-way authentication, is an optional feature of TLS in which the two parties authenticate each other at the same time. It adds a layer of security on top of TLS and allows a service to verify the client that is making the connection.

This article will show you how to configure the Red Hat OpenShift Service Mesh Egress Gateway to perform Mutual TLS Origination for traffic to an external Nginx server. With Mutual TLS Origination, the Egress Gateway is configured to accept unencrypted internal HTTP connections, encrypt the requests with the client TLS certificates, and then forward them to an HTTPS Nginx server that is secured using mutual TLS.

Here we will also show how to configure the Service Mesh to redirect the traffic from the Egress Gateway to an Egress Router, configured in DNS proxy mode, so that a specific Source IP address is used for all outbound traffic from the Mesh.

Prerequisites.

  • Installed Red Hat OpenShift Container Platform (RHOCP) 4.8 or 4.9 cluster.
  • Installed Red Hat OpenShift Service Mesh 2.1.1-0.
  • A Linux-based server with podman and firewalld installed to run the Nginx Web Server.
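
Optionally, verify the cluster and Service Mesh versions before proceeding. The check below is a minimal sketch and assumes the Service Mesh control plane (smcp) is installed in the istio-system namespace:

$ oc version
$ oc get smcp -n istio-system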

Part 1 - OpenShift Service Mesh with Mutual TLS EgressGateway Origination.

This section describes how to perform the Mutual TLS Origination for egress traffic using an Istio Egress Gateway. In this example we use an external Nginx service that requires mutual TLS.

The steps described below are as follows:

  1. Generate client and server certificates.
  2. Deploy an Nginx server configured for mutual TLS.
  3. Start a simple Sleep service which will be used as a test source for external calls.
  4. Configure mutual TLS Origination for egress traffic.

Create Client and Server TLS certificates.

Before configuring the Nginx Server and the Service Mesh it is necessary to generate the certificates and keys. A mutual TLS session requires a Certification Authority (CA) certificate as well as client and server certificates. For this task we used openssl, but you can use your favorite tool to generate certificates and keys.

On your local machine create a shell variable to hold the Fully Qualified Domain Name (FQDN) of the Nginx Server:

$ export FQDN=<FQDN>

Create the CA certificate and private key to sign the certificate for your services:

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj "/O=example Inc./CN=$FQDN" -keyout CA.key -out CA.crt

Create the Nginx Server certificate and private key:

$ openssl req -out server.csr -newkey rsa:2048 -nodes -keyout server.key -subj "/CN=$FQDN/O=some organization"
$ openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 0 -in server.csr -out server.crt

Generate the Client certificate and private key:

$ openssl req -out client.csr -newkey rsa:2048 -nodes -keyout client.key -subj "/CN=$FQDN/O=client organization"
$ openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 1 -in client.csr -out client.crt

Check the generated files:

$ ls -gG | egrep "crt|key"
-rw-rw-r--. 1 1237 Apr 10 09:26 CA.crt         <= CA certificate
-rw-rw-r--. 1 1704 Apr 10 09:26 CA.key         <= CA key
-rw-rw-r--. 1 1119 Apr 10 09:27 client.crt     <= Client certificate
-rw-rw-r--. 1 1704 Apr 10 09:27 client.key     <= Client key
-rw-rw-r--. 1 1115 Apr 10 09:27 server.crt     <= Nginx Server certificate
-rw-rw-r--. 1 1704 Apr 10 09:27 server.key     <= Nginx Server key
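
Optionally, verify that both the server and client certificates are correctly signed by the CA before copying them to the remote host; openssl should report OK for both files:

$ openssl verify -CAfile CA.crt server.crt client.crt
server.crt: OK
client.crt: OK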

Configure and deploy the Nginx Server with Mutual TLS.

Copy the previously generated certificates and keys to the remote server that will host the Nginx instance:

$ scp -q -i <SSH-KEY>.key server.crt server.key CA.crt client.crt client.key <USER>@$FQDN:~/

Note: replace <SSH-KEY>.key and <USER> with the SSH identity file and the remote user of the server, respectively.

On the remote server, use the steps below to create and configure a container running the latest nginx image. The nginx.conf file sets up the Nginx Web Server to listen for HTTPS traffic on port 443 and to encrypt communication using the TLS certificate files generated earlier.
Before continuing, make sure you are logged in to the container registry of your choice. In this guide the Red Hat Registry has been used.

Create the Nginx server configuration file:

$ cat <<\EOF > ./nginx.conf
events {
}

http {
  log_format main '$remote_addr - $remote_user [$time_local]  $status '
  '"$request" $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;
  error_log  /var/log/nginx/error.log;
  server {
    listen 443 ssl;
    server_name <FQDN>;

    root /usr/share/nginx/html;
    index index.html;


    ssl_certificate /etc/nginx/server-certs/server.crt;
    ssl_certificate_key /etc/nginx/server-certs/server.key;

    ssl_client_certificate /etc/nginx/ca-certs/ca.crt;
    ssl_verify_client on;
  }
}
EOF

Use the FQDN of the Nginx server as server_name:

$ export FQDN=<FQDN>
$ sed -i "s/<FQDN>/$FQDN/g" nginx.conf

Create the Nginx rootless container using podman:

$ podman run -d --name nginx -p 8443:443 -v ~/nginx.conf:/etc/nginx/nginx.conf:z -v ~/server.crt:/etc/nginx/server-certs/server.crt:z -v ~/server.key:/etc/nginx/server-certs/server.key:z -v ~/CA.crt:/etc/nginx/ca-certs/ca.crt:z nginx:latest

Check the container is running:

$ podman container list --format "table {{.Command}} {{.Status}} {{.Ports}}"
Command                Status                 Ports
nginx -g daemon o...   Up About an hour ago   0.0.0.0:8443->443/tcp
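
Optionally, validate the mounted configuration from inside the running container. The output should be similar to the following:

$ podman exec nginx nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful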

Use firewalld to forward external traffic on port 443 to the unprivileged port 8443:

$ sudo firewall-cmd --add-forward-port=port=443:proto=tcp:toport=8443
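
The rule above only changes the runtime configuration. If the port forwarding has to survive a reboot, persist it:

$ sudo firewall-cmd --runtime-to-permanent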

Use curl to locally test the container connection and check the validity of the certificates.
You should see an Nginx response similar to the following:

$ curl -s -k --cert ~/client.crt --key ~/client.key --cacert ~/CA.crt https://localhost:8443
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
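
To confirm that client verification is actually enforced, repeat the request without the client certificate and key. Since ssl_verify_client is set to on, Nginx should reject the request with a 400 error page stating that no required SSL certificate was sent:

$ curl -s -k https://localhost:8443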

Deploy the Sleep sample application.

The Sleep application is a simple service that does nothing but sleep. It is a minimal container image with curl installed, used here to invoke and test services outside the OpenShift Service Mesh. It will be deployed with the Istio sidecar so that its traffic is managed by the Service Mesh.

From your local machine, create the Service Account, Service and Deployment for the Sleep sample application:

$ oc new-project sleep
$ oc apply -n sleep -f - <<EOF     
##################
# Service Account
##################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
##################
# Service 
##################
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
    service: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
##################
# Deployment 
##################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      terminationGracePeriodSeconds: 0
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
EOF

Check the Sleep application has been correctly deployed in the Service Mesh:

$ oc get deployment,svc,pods -n sleep
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sleep   1/1     1            1           16m

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/sleep   ClusterIP   172.30.231.213   <none>        80/TCP    16m

NAME                         READY   STATUS    RESTARTS   AGE
pod/sleep-6d646957b7-wvql2   2/2     Running   0          50s               <= istio-proxy sidecar injected 
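
You can also confirm that the sidecar was injected by listing the containers of the Sleep pod; the output should include both sleep and istio-proxy:

$ oc get pod -n sleep -l app=sleep -o jsonpath='{.items[0].spec.containers[*].name}'
sleep istio-proxy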

Configure Mutual TLS Origination for egress traffic.

Create an OpenShift secret to hold the previously created Certification Authority certificate and the client certificate and key:

$ oc create secret -n istio-system generic client-credential --from-file=tls.key=client.key --from-file=tls.crt=client.crt --from-file=ca.crt=CA.crt

The Secret must be created in the Service Mesh Control Plane (smcp) namespace, istio-system in this case.
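
Verify that the secret contains the expected ca.crt, tls.crt and tls.key entries:

$ oc describe secret client-credential -n istio-system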

Create the egress Gateway and ServiceEntry for $FQDN and port 443:

$ oc apply -n sleep -f - <<EOF
##################
# Gateway 
##################
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - $FQDN
    tls:
      mode: ISTIO_MUTUAL
---
##################
# ServiceEntry 
##################
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nginx1
spec:
  hosts:
  - $FQDN
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
EOF

Define a DestinationRule and VirtualService to direct the traffic through the Istio Egress Gateway and from the Istio Egress Gateway to the external service:

$ oc apply -n sleep -f - <<EOF
##################
# DestinationRule 
##################
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-nginx
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: ISTIO_MUTUAL
          sni: $FQDN
---
##################
# VirtualService 
##################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-nginx-through-egress-gateway
spec:
  hosts:
  - $FQDN
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: nginx
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: $FQDN
        port:
          number: 443
      weight: 100
EOF
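
At this point you can optionally list the Istio objects created so far in the sleep namespace to make sure they were accepted by the cluster:

$ oc get gateways.networking.istio.io,serviceentries,virtualservices,destinationrules -n sleep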

Add a DestinationRule to perform Mutual TLS origination:

$ oc create -n istio-system -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-for-nginx
spec:
  host: $FQDN                              # <= in Part 2 this host is updated to point to the OCP Egress Router
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: MUTUAL
        credentialName: client-credential
        sni: $FQDN
EOF
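
If you have istioctl available (it is not required for this guide), you can optionally check that the Egress Gateway proxy has loaded the client-credential secret used for Mutual TLS Origination:

$ istioctl proxy-config secret $(oc get pod -l app=istio-egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system | grep client-credential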

Communication test and logs.

On your local machine use the curl command to test the connection to the nginx server and check the validity of the certificates:

$ curl -sS --cert ./client.crt --key ./client.key --cacert ./CA.crt https://$FQDN
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Send an HTTP request from the sleep application inside the Service Mesh to test the entire chain, i.e. Sleep -> Egress Gateway -> Nginx:

$ oc exec "$(oc get pod -n sleep -l app=sleep -o jsonpath={.items..metadata.name})" -n sleep -c sleep -- curl -sS http://$FQDN
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check the log of the Egress Gateway pod. You should see a line corresponding to our request similar to the following:

$ oc -n istio-system logs $(oc get pod -l app=istio-egressgateway -n istio-system --no-headers | awk '{print $1}') | tail -n 1
[2022-04-11T14:14:17.578Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 615 13 12 "10.131.0.208" "curl/7.82.0-DEV" "915d7330-6156-999d-b0d9-c24b2bf3603e" "node-0.nginx.lab.rdu2.cee.redhat.com" "10.10.94.12:443" outbound|443||node-0.nginx.lab.rdu2.cee.redhat.com 10.131.0.211:39518 10.131.0.211:8443 10.131.0.208:59516 node-0.nginx.lab.rdu2.cee.redhat.com -

Part 2 - Mutual TLS EgressGateway Origination with an Egress Router.

For security reasons, external remote services are often configured to allow access only from specific IP addresses. In that case, the traffic sent to the external service and leaving the cluster must use a specific source IP address. To achieve this, an additional object is created: the OpenShift Container Platform Egress Router in DNS proxy mode. The Egress Router pod redirects traffic to a specified remote server from a dedicated source IP address that is not used for any other purpose. For this purpose a macvlan-based additional network is required. The macvlan interface allows pods on a host to communicate with other hosts, and with pods on those hosts, by using a physical network interface. Each pod attached to a macvlan-based additional network is assigned a unique MAC address.

In this case the Service Mesh is configured to forward the HTTPS traffic to the Egress Router, which in turn redirects the traffic to the Nginx server. As a result, the traffic exits the cluster with the specific IP address configured on the Egress Router.

Configure and deploy the Egress Router in DNS Proxy Mode.

Identify an IP address from the physical network of the cluster nodes as well as the default gateway used by those nodes:

$ export EGRESS_SOURCE=<egress-router>
$ export EGRESS_GATEWAY=<egress-gateway>

Note: replace <egress-router> with an IP address from the physical network that the node is on, reserved for exclusive use by the egress router pod. Also replace <egress-gateway> with the default gateway used by the node.
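
If you are unsure about the default gateway of the node that will host the egress router pod, you can check it directly on the node; <node-name> below is a placeholder for the name of that node:

$ oc debug node/<node-name> -- chroot /host ip route show default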

In the egress-router namespace create the Egress Router Pod and Service:

$ oc new-project egress-router
$ oc apply -f - <<EOF
##################
# Pod 
##################
apiVersion: v1
kind: Pod
metadata:
  name: egress-router
  namespace: egress-router
  labels:
    name: egress-router
    app: egress-router
  annotations:
    pod.network.openshift.io/assign-macvlan: "true" 
spec:
  initContainers:
  - name: egress-router
    image: registry.redhat.io/openshift4/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE 
      value: $EGRESS_SOURCE                        
    - name: EGRESS_GATEWAY 
      value: $EGRESS_GATEWAY                     
    - name: EGRESS_ROUTER_MODE
      value: dns-proxy
  containers:
  - name: egress-router-pod
    image: registry.redhat.io/openshift4/ose-egress-dns-proxy
    securityContext:
      privileged: true
    env:
    - name: EGRESS_DNS_PROXY_DESTINATION          # Incoming traffic on port 443 is redirected to the same port on the FQDN server
      value: |-
        443 $FQDN                               
---
##################
# Service 
##################
apiVersion: v1
kind: Service
metadata:
  name: egress-dns-svc
  namespace: egress-router
  labels:
    app: egress-router
spec:
  ports:
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
  type: ClusterIP
  selector:
    name: egress-router
EOF

Check the Egress Router pod and service:

$ oc get svc,pods -n egress-router
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/egress-dns-svc   ClusterIP   172.30.85.198   <none>        443/TCP   36m

NAME                READY   STATUS    RESTARTS   AGE
pod/egress-router   1/1     Running   0          64s
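
As an additional sanity check, confirm that the source IP, gateway and router mode were picked up by the pod environment:

$ oc describe pod egress-router -n egress-router | grep EGRESS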

Check that the Egress Router pod is communicating with the external service:

$ oc logs egress-router -n egress-router
Running haproxy with config:
  global
      log         127.0.0.1 local2
      chroot      /var/lib/haproxy
      pidfile     /var/lib/haproxy/run/haproxy.pid
      maxconn     4000
      user        haproxy
      group       haproxy
  defaults
      log                     global
      mode                    tcp
      option                  dontlognull
      option                  tcplog
      option                  redispatch
      retries                 3
      timeout http-request    100s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 100s
      timeout check           10s
  resolvers dns-resolver
      nameserver ns1 172.30.0.10:53
      resolve_retries      3
      timeout retry        1s
      hold valid           10s
  frontend fe1
      bind :443
      default_backend be1
  backend be1
      server node-0.nginx.lab.rdu2.cee.redhat.com:443 check resolvers dns-resolver resolve-prefer ipv4
[WARNING] 115/131810 (1) : Server be1/dest1 is UP, reason: Layer4 check passed, check duration: 141ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Update Service Mesh configuration for Egress Router.

In this case the Service Mesh has to redirect the traffic from the Egress Gateway to the Egress Router, so the VirtualService and the DestinationRule must be updated:

$ oc apply -f - <<EOF
##################
# VirtualService 
##################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-nginx-through-egress-gateway
  namespace: sleep
spec:
  hosts:
  - $FQDN
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: nginx
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: egress-dns-svc.egress-router.svc.cluster.local       # <= host updated to the OCP Egress Router service
        port:
          number: 443
      weight: 100
---
##################
# DestinationRule 
##################
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-for-nginx
  namespace: istio-system
spec:
  host: egress-dns-svc.egress-router.svc.cluster.local            # <= host updated to the OCP Egress Router service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: MUTUAL
        credentialName: client-credential
        sni: $FQDN
EOF

Communication test and logs.

Send an HTTP request from the sleep application inside the Service Mesh to test the entire chain, i.e. Sleep -> Egress Gateway -> Egress Router -> Nginx:

$ oc exec "$(oc get pod -n sleep -l app=sleep -o jsonpath={.items..metadata.name})" -n sleep -c sleep -- curl -sS http://$FQDN
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check the log of the Egress Gateway pod. You should see a line corresponding to our request similar to the following:

$ oc -n istio-system logs $(oc get pod -l app=istio-egressgateway -n istio-system --no-headers | awk '{print $1}') | tail -n 1
[2022-04-11T13:19:02.901Z] "GET / HTTP/1.1" 200 - "-" "-" 0 615 643 641 "10.128.2.109" "curl/7.82.0-DEV" "b464495f-531d-9b9b-90d2-18e37175c561" "node-0.nginx.lab.rdu2.cee.redhat.com" "10.10.94.12:443" outbound|443||egress-dns-svc.egress-router.svc.cluster.local 10.128.2.57:60782 10.128.2.57:8443 10.128.2.109:57140 node-0.nginx.lab.rdu2.cee.redhat.com -
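
Finally, you can verify on the remote server that the request reached Nginx from the configured source IP. Assuming the nginx image sends its access log to the container standard output (the default for the upstream image), the $remote_addr field of the last log entries should match the EGRESS_SOURCE address:

$ podman logs nginx | tail -n 5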

Conclusions.

This two-part article describes how to configure an Istio Egress Gateway to perform Mutual TLS Origination for traffic to external remote services. It also provides an optional configuration to redirect traffic to the external remote services through the OpenShift Container Platform Egress Router, which ensures that the traffic leaving the Service Mesh is assigned a specific source IP address.

With these configurations, the application pods inside the Service Mesh can communicate with external remote services using plain HTTP and leave the security management to the Service Mesh configured with Mutual TLS Origination. Furthermore, with the Egress Router all traffic to the external services is sent from a specific source IP address, which makes it possible to configure appropriate firewall rules for the traffic coming from that address.
