Guide: Setting Up a Restricted-Access OpenShift Cluster for SAP EdgeLM with Service Mesh 3.x


Overview

This guide provides step-by-step instructions for onboarding a shared Red Hat OpenShift cluster into SAP Edge Lifecycle Management (ELM) using a restricted-privilege model. Instead of requiring full cluster-admin rights, ELM will use a dedicated Service Account with a precise, limited set of permissions. The official source of truth for this process is SAP Note 3618713, and this guide serves as an extension of the information provided therein.

Intended Audience: This document is for Red Hat OpenShift Cluster Administrators responsible for preparing the cluster environment.

Prerequisites

Before beginning this setup, ensure you have completed the following requirements:

Required Access and Credentials

  • Cluster-admin privileges on the target OpenShift cluster (required for initial setup only)
  • Access to SAP Note 3618713 (requires SAP S-user credentials)
  • Authenticated oc CLI session to your OpenShift cluster

Software Requirements

  • Red Hat OpenShift Container Platform 4.14+ (tested versions for Service Mesh 3.x)
  • OpenShift CLI (oc) version matching your cluster
  • Red Hat OpenShift Service Mesh 3.x Operators installed:
    • Red Hat OpenShift Service Mesh 3.x (required)
    • Kiali Operator (optional - for service mesh visualization)
    • Red Hat OpenShift distributed tracing platform (optional - for distributed tracing)

Downloaded Resources

  • Download resources.zip from SAP Note 3618713
  • Extract the archive to a working directory
  • Navigate to the extracted directory before starting Step 4 (RBAC)

Process Overview

The setup process consists of six main stages:

  1. Install Operators: Install the Red Hat OpenShift Service Mesh 3.x Operators.
  2. Prepare Namespaces: Create and configure the dedicated namespaces where all components will reside.
  3. Configure Service Mesh 3.x: Deploy and configure Red Hat OpenShift Service Mesh 3.x to manage network traffic.
  4. Apply Permissions (RBAC): Apply the specific, fine-grained permissions that the ELM Service Account needs to operate.
  5. Generate Kubeconfig File: Create a unique Kubeconfig file that authenticates using the newly created Service Account.
  6. Register the Cluster in ELM: Use the generated Kubeconfig to add the cluster as an Edge Node in the ELM user interface.

Step 1: Install Service Mesh 3.x Operators

Before configuring any components, you must install the necessary operators.

Web Console Installation Method:

  1. Log in to the OpenShift web console.
  2. Navigate to Operators → OperatorHub.
  3. Search for Red Hat OpenShift Service Mesh.
  4. Select the Service Mesh 3.x version.
  5. Click Install and follow the prompts to complete the installation.

Verify Installation:

Run the following command to confirm the operators are installed:

# Verify the required Service Mesh 3.x operator is installed
# Adjust the grep pattern if your operator name differs
oc get csv -n openshift-operators | grep -E "servicemeshoperator3|Service Mesh 3"

Step 2: Prepare Namespaces

First, create the required namespaces to isolate the application components. You'll also apply annotations to these namespaces, which are necessary for OpenShift's Security Context Constraints (SCCs). These annotations pre-assign specific User ID (UID) and group ID ranges, ensuring that pods run with the minimum required privileges.

This single block of commands will create all the required namespaces and apply the necessary security annotations and service mesh labels.

# --- Create all required namespaces ---
oc create namespace edgelm
oc create namespace istio-gateways
oc create namespace edge-icell
oc create namespace edge-icell-secrets
oc create namespace edge-icell-ela
oc create namespace edge-icell-services

# --- Label namespaces for Service Mesh auto-injection ---
oc label namespace edgelm edge-icell edge-icell-ela edge-icell-services istio-gateways istio-injection=enabled edgelm.sap.com/product=edgelm
oc label namespace edge-icell-secrets edgelm.sap.com/product=edgelm

# --- Apply the necessary security annotations for SCCs ---
oc annotate namespace edgelm openshift.io/sa.scc.supplemental-groups="67000/1000" --overwrite
oc annotate namespace edge-icell openshift.io/sa.scc.uid-range="10000/100" --overwrite
oc annotate namespace edge-icell openshift.io/sa.scc.supplemental-groups="1000/100" --overwrite
oc annotate namespace edge-icell-ela openshift.io/sa.scc.uid-range="100000/1000" --overwrite
oc annotate namespace edge-icell-ela openshift.io/sa.scc.supplemental-groups="100002/1000" --overwrite
oc annotate namespace edge-icell-services openshift.io/sa.scc.uid-range="1000000/1000" --overwrite
oc annotate namespace edge-icell-services openshift.io/sa.scc.supplemental-groups="1000002/1000" --overwrite
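The annotation values use OpenShift's `<start>/<size>` block notation: the first ID in the range, then the number of IDs the range contains. This small local sketch decodes the `10000/100` uid-range applied to `edge-icell` above:

```shell
# SCC range annotations are "<start>/<size>": the first allowed ID and
# the length of the contiguous block. "10000/100" therefore permits
# UIDs 10000 through 10099.
RANGE="10000/100"
START=${RANGE%/*}
SIZE=${RANGE#*/}
echo "UIDs ${START}-$((START + SIZE - 1)) are allowed"
```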

Verify Step 2 Completion

Run these commands to confirm namespaces are correctly configured:

# Verify all namespaces exist and are Active
oc get namespaces | grep -E "(edgelm|edge-icell|istio-gateways)"

# Expected output: 6 namespaces listed, all showing "Active" status

# Verify Service Mesh labels are applied
oc get namespace edgelm -o yaml | grep -A 5 labels

# Expected: Should see istio-injection=enabled and edgelm.sap.com/product=edgelm

Continue to Step 3 only after all namespaces show "Active" status.

Step 3: Configure OpenShift Service Mesh 3.x

The SAP Note requires a service mesh to be present. This guide provides a specific, tested configuration for Red Hat OpenShift Service Mesh 3.x, whose architecture and configuration approach differ significantly from Service Mesh 2.x.

Deployment Options:

This step shows the standard shared service mesh approach. For environments requiring enhanced security isolation or multi-tenancy, see Advanced: Multiple Service Mesh Instances for deploying a dedicated service mesh specifically for SAP workloads.

The following steps will create the istio-system namespace, deploy the Service Mesh 3.x control plane, and then add your application namespaces to the mesh.

Note: The configurations provided below are general guidance for a standard Service Mesh 3.x deployment. If you need to make customized changes to the service mesh configuration, please refer to the official Red Hat OpenShift Service Mesh 3.x documentation for detailed customization options.

# --- Command 1: Create the Service Mesh control plane namespace ---
oc new-project istio-system

# --- Command 2: Create the CNI namespace ---
oc new-project istio-cni

# --- Command 3: Create the IstioCNI resource ---
oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: IstioCNI
metadata:
  name: default
  namespace: istio-cni
spec:
  namespace: istio-cni
EOF

Command 4: Create the Istio Control Plane

Choose the appropriate configuration based on your environment:

Standard Configuration (Direct Internet Access):

Use this configuration if your cluster has direct internet access without requiring a corporate proxy.

# --- Command 4a: Create the Istio control plane (standard) ---
oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
  namespace: istio-system
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: enabled
EOF

Corporate Proxy Configuration (HTTP Proxy Required):

Use this configuration if your environment requires a corporate proxy for external connectivity (e.g., to reach SAP BTP, XSUAA, or other external services). This configuration enables DNS capture, which is required for proper proxy traffic routing through the Istio sidecar.

# --- Command 4b: Create the Istio control plane (with corporate proxy support) ---
oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
  namespace: istio-system
spec:
  namespace: istio-system
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: enabled
      defaultConfig:
        proxyMetadata:
          ISTIO_META_DNS_CAPTURE: "true"
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF

Why is DNS capture needed for corporate proxy? When SAP EIC (Edge Integration Cell) applications connect to external services through a corporate proxy using HTTP CONNECT tunneling, the Istio sidecar (Envoy) must intercept DNS queries to route traffic correctly. Without DNS capture enabled, you may encounter ECONNRESET errors or 404 route_not_found responses.
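Before applying, you can sanity-check that a saved control-plane manifest actually sets both DNS flags. This sketch inlines the relevant fragment as sample data; against a real saved manifest you would run the same grep on the file (any file name is illustrative):

```shell
# Sample fragment mirroring the proxyMetadata section of the manifest above.
# With a saved manifest, run the same check directly on the file, e.g.:
#   grep -c 'ISTIO_META_DNS' your-istio-manifest.yaml   (file name illustrative)
MANIFEST='proxyMetadata:
  ISTIO_META_DNS_CAPTURE: "true"
  ISTIO_META_DNS_AUTO_ALLOCATE: "true"'
COUNT=$(printf '%s\n' "$MANIFEST" | grep -c 'ISTIO_META_DNS')
echo "DNS flags found: $COUNT"   # both flags should be present
```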

Create a ServiceEntry to allow proxy traffic from within the mesh:

When using a corporate proxy, you must also create a ServiceEntry so that the Istio sidecar permits outbound TCP connections to the proxy. Replace the placeholders with your proxy's hostname, IP address(es), and port.

oc apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: allow-http-proxy
  namespace: edgelm
spec:
  hosts:
    - <proxy-hostname>
  addresses:
    - <proxy-IP>
  ports:
    - number: <proxy-port>
      name: tcp-proxy-tunnel
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
EOF

Note: Add additional entries under addresses if your proxy has multiple IP addresses.
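One way to handle the placeholders is to template the manifest from shell variables and review the rendered output before piping it into `oc apply -f -`. The proxy values below are hypothetical examples, not defaults:

```shell
# Hypothetical proxy values -- replace with your environment's settings.
PROXY_HOST="proxy.example.corp"
PROXY_IP="10.0.0.10"
PROXY_PORT="3128"

# Render the manifest for review; pipe into 'oc apply -f -' once correct.
RENDERED=$(cat <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: allow-http-proxy
  namespace: edgelm
spec:
  hosts:
    - ${PROXY_HOST}
  addresses:
    - ${PROXY_IP}
  ports:
    - number: ${PROXY_PORT}
      name: tcp-proxy-tunnel
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
EOF
)
printf '%s\n' "$RENDERED"
```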

Complete the Service Mesh Setup

# --- Command 5: Label the control plane namespace for discovery ---
oc label namespace istio-system istio-discovery=enabled

# --- Command 6: Label application namespaces to join the mesh ---
# All namespaces need istio-discovery for mesh visibility
oc label namespace \
  edgelm \
  edge-icell \
  edge-icell-services \
  edge-icell-secrets \
  edge-icell-ela \
  istio-gateways \
  istio-discovery=enabled

# Enable sidecar injection on all namespaces EXCEPT edge-icell-secrets
oc label namespace \
  edgelm \
  edge-icell \
  edge-icell-services \
  edge-icell-ela \
  istio-gateways \
  istio-injection=enabled

Wait for Service Mesh Control Plane to Be Ready

The Service Mesh 3.x control plane takes 5-10 minutes to deploy and become ready. You must wait for it to be ready before proceeding to Step 4.

# Wait for Istio control plane to be ready (may take 5-10 minutes)
oc wait --for=condition=Ready istio/default -n istio-system --timeout=600s

# Verify all components are running
oc get pods -n istio-system
oc get pods -n istio-cni

Expected output: All pods should show Running or Completed status.

Step 4: Apply Permissions (RBAC)

This step configures the necessary Role-Based Access Control (RBAC) permissions. It applies required Custom Resource Definitions (CRDs), an admission webhook, cluster-wide roles, and specific roles within each namespace.

Prerequisites for this step:
- You have downloaded and extracted resources.zip from SAP Note 3618713
- You have navigated to the extracted directory in your terminal

# Verify you're in the correct directory with the YAML files
ls *.yaml | head -5
# Expected: You should see files like cr-edgelm-cluster-admin.yaml, crd-*.yaml, etc.

Cluster-Wide Permissions

Important: Apply CRDs first, then ClusterRoles that reference them:

# Step 4.1: Apply Custom Resource Definitions (CRDs) first
oc apply -f crd-helms.yaml
oc apply -f crd-imagereplications.yaml
oc apply -f crd-replicationservices.yaml
oc apply -f crd-sapcloudconnectors.yaml
oc apply -f crd-sourceregistries.yaml
oc apply -f crd-systemmappings.yaml
oc apply -f crd-solacesoftwarebrokers.yaml

# Step 4.2: Apply ClusterRoles (these reference the CRDs above)
oc apply -f cr-edgelm-cluster-admin.yaml
oc apply -f crb-edgelm-cluster-admin.yaml

# Step 4.3: Apply the admission webhook (if required)
# NOTE: The webhook is OPTIONAL and only needed for high-availability environments
# For detailed webhook setup including certificate generation, see:
# [SAP Note 3618713](https://me.sap.com/notes/3618713), section "Admission webhook"
oc apply -f webhook-pod-initializer.yaml

Namespace-Specific Permissions

# --- Permissions for 'edgelm' namespace ---
oc apply -f role-edgelm-manage.yaml -n edgelm
oc apply -f rb-edgelm-manage.yaml -n edgelm
oc apply -f role-edgelm-admin.yaml -n edgelm
oc apply -f rb-edgelm-admin.yaml -n edgelm

# --- Permissions for 'istio-gateways' namespace ---
oc apply -f role-edgelm-manage.yaml -n istio-gateways
oc apply -f rb-edgelm-manage.yaml -n istio-gateways
oc apply -f role-istio-gateways-admin.yaml -n istio-gateways
oc apply -f rb-istio-gateways-admin.yaml -n istio-gateways

# --- Permissions for 'edge-icell' namespace ---
oc apply -f role-edgelm-manage.yaml -n edge-icell
oc apply -f rb-edgelm-manage.yaml -n edge-icell
oc apply -f role-edge-icell.yaml -n edge-icell
oc apply -f rb-edge-icell.yaml -n edge-icell
oc apply -f rb-edge-icell-admin.yaml -n edge-icell

# --- Permissions for 'edge-icell-ela' namespace ---
oc apply -f role-edgelm-manage.yaml -n edge-icell-ela
oc apply -f rb-edgelm-manage.yaml -n edge-icell-ela
oc apply -f role-edge-icell-ela.yaml -n edge-icell-ela
oc apply -f rb-edge-icell-ela.yaml -n edge-icell-ela
oc apply -f rb-edge-icell-ela-admin.yaml -n edge-icell-ela

# --- Permissions for 'edge-icell-secrets' namespace ---
oc apply -f role-edgelm-manage.yaml -n edge-icell-secrets
oc apply -f rb-edgelm-manage.yaml -n edge-icell-secrets
oc apply -f rb-edge-icell-secrets-admin.yaml -n edge-icell-secrets

# --- Permissions for 'edge-icell-services' namespace ---
oc apply -f role-edgelm-manage.yaml -n edge-icell-services
oc apply -f rb-edgelm-manage.yaml -n edge-icell-services
oc apply -f role-edge-icell-services.yaml -n edge-icell-services
oc apply -f rb-edge-icell-services.yaml -n edge-icell-services
oc apply -f rb-edge-icell-services-admin.yaml -n edge-icell-services
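Since the shared `role-edgelm-manage.yaml`/`rb-edgelm-manage.yaml` pair is applied identically to every namespace, that part of the repetition above can be scripted. This sketch only echoes the commands it would run; drop the `echo` prefixes to execute them for real:

```shell
# Namespaces that receive the shared manage Role/RoleBinding pair
NAMESPACES="edgelm istio-gateways edge-icell edge-icell-ela edge-icell-secrets edge-icell-services"

for NS in $NAMESPACES; do
  # echo is a dry-run guard -- remove it to actually apply the files
  echo oc apply -f role-edgelm-manage.yaml -n "$NS"
  echo oc apply -f rb-edgelm-manage.yaml -n "$NS"
done
```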

Step 5: Generate Kubeconfig File

These commands create the edgelm Service Account and generate an edgelm-kubeconfig file using a persistent, non-expiring token. ELM will use this file to authenticate to your cluster.

# --- 1. Create the 'edgelm' service account in the 'edgelm' namespace ---
oc create sa edgelm -n edgelm

# --- 2. Create a secret to hold the long-lived service account token ---
oc apply -n edgelm -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: edgelm-kubeconfig-token
  annotations:
    kubernetes.io/service-account.name: edgelm
type: kubernetes.io/service-account-token
EOF

# --- 3. Extract the token and generate the Kubeconfig file ---
export SECRET_NAME_SA=edgelm-kubeconfig-token
export TOKEN_SA=$(oc get secret ${SECRET_NAME_SA} -n edgelm -ojsonpath='{.data.token}' | base64 -d)
oc config view --raw --minify > edgelm-kubeconfig
oc config unset users --kubeconfig=edgelm-kubeconfig
oc config set-credentials edgelm --kubeconfig=edgelm-kubeconfig --token=${TOKEN_SA}
oc config set-context --current --kubeconfig=edgelm-kubeconfig --user=edgelm

echo "Kubeconfig file 'edgelm-kubeconfig' has been generated successfully."

# Test the kubeconfig file to ensure it works
oc --kubeconfig=edgelm-kubeconfig auth can-i list pods -n edgelm

# Expected output: "yes" - this confirms the kubeconfig works correctly
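If you want to confirm which identity the token carries, the payload segment of a service-account JWT is base64-encoded JSON. The sketch below round-trips a sample payload constructed locally for illustration; a real token's claims come from the API server and contain more fields:

```shell
# Sample payload mimicking a service-account token's claims (illustrative only).
SAMPLE_PAYLOAD='{"sub":"system:serviceaccount:edgelm:edgelm"}'
SAMPLE_B64=$(printf '%s' "$SAMPLE_PAYLOAD" | base64 | tr -d '\n')

# Decoding works the same way on a real token's middle (dot-separated)
# segment, though real JWT segments are base64url-encoded without padding,
# so padding may need to be restored before 'base64 -d' accepts them.
DECODED=$(printf '%s' "$SAMPLE_B64" | base64 -d)
echo "$DECODED"
```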

Verify Step 5 Completion

# Verify the service account exists
oc get sa edgelm -n edgelm

# Verify the secret exists and contains a token
oc get secret edgelm-kubeconfig-token -n edgelm -o yaml | grep "token:"

# Test authentication with the generated kubeconfig
oc --kubeconfig=edgelm-kubeconfig get namespaces | grep edgelm

# Expected: Should list edgelm namespace, confirming authentication works

Continue to Step 6 only after the kubeconfig test succeeds.

Step 6: Register the Cluster in ELM

You are now ready to add your cluster as a new Edge Node in the ELM UI.

  1. In the ELM UI, start the Add an Edge Node process.
  2. On the first stage, Provide Edge Node Details, enter a name for your Edge Node.
  3. ⚠️ CRITICAL: Check "Restricted Access to Kubernetes cluster" checkbox
    • You MUST select this checkbox
    • Without this, ELM will attempt to use cluster-admin privileges and the deployment will fail
  4. When prompted for the Kubeconfig, provide the contents of the edgelm-kubeconfig file you just created.
  5. If your environment requires a corporate proxy, select "Enable HTTP Proxy" and configure the proxy settings.
    • Note: Ensure you used the Corporate Proxy Configuration option in Step 3 when deploying the Istio control plane.
  6. Proceed with the rest of the configuration as guided by the UI.

Optional: Exposing the Istio Ingress Gateway via OpenShift Routes

If your environment does not have a LoadBalancer provider (e.g., MetalLB), you can expose the Istio Ingress Gateway using OpenShift Routes instead.

Important: For restricted-access deployments, the Route must be created in the istio-gateways namespace (not istio-system), since that is where the istio-ingressgateway service resides.

For full instructions — including Route YAML examples, the manual service patch workaround, support scope, traffic flow with external load balancers, and operational restrictions — see Exposing the SAP EIC Istio Ingress Gateway in the Installation Guide.


Advanced: Multiple Service Mesh Instances

OpenShift Service Mesh 3.x supports deploying multiple service mesh instances on a single cluster. This advanced pattern allows the entire SAP ELM / EIC stack to run in its own isolated service mesh, separate from other applications on the same cluster. This offers a higher degree of security and autonomy.

The Istio CNI configuration created earlier (in Step 3 of the main flow) is cluster-wide and is reused by both the shared and dedicated control planes. You do not need to create an additional CNI namespace or IstioCNI resource for the dedicated mesh.

For detailed information about this feature, refer to the official Red Hat documentation: Deploying multiple service meshes on a single cluster.

When to Use This Configuration:
- Multi-tenancy: When multiple teams or applications share the same cluster
- Security isolation: When SAP workloads require complete network isolation
- Independent lifecycle management: When different mesh versions or configurations are needed

Alternative Deployment: Dedicated SAP Service Mesh

In the standard flow, Step 3: Configure OpenShift Service Mesh 3.x creates a shared service mesh used by SAP workloads and potentially other applications.
If you require stronger isolation or separate lifecycle management, you can replace Step 3 with the following dedicated mesh configuration.
All other steps (Step 1, Step 2, Step 4, Step 5, Step 6) remain the same.

Prerequisites for this alternative path:
- You have completed Step 1: Install Service Mesh 3.x Operators
- You have completed Step 2: Prepare Namespaces
- The Istio CNI namespace and IstioCNI resource from Step 3 have already been created

If these prerequisites are not yet fulfilled, go back to the main flow and complete Steps 1–3 before proceeding.

Step 3 (Alternative): Deploy an Independent Control Plane for SAP

# Create dedicated namespace for SAP mesh
oc new-project istio-system-sap-edge

Choose the appropriate Istio configuration based on your environment:

Standard Configuration (Direct Internet Access):

# Create dedicated Istio resource with unique selector (standard)
oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: sap-mesh-istio
  namespace: istio-system-sap-edge
spec:
  namespace: istio-system-sap-edge
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: sap-edge-mesh
EOF

Corporate Proxy Configuration (HTTP Proxy Required):

# Create dedicated Istio resource with unique selector (with corporate proxy support)
oc apply -f - <<EOF
apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: sap-mesh-istio
  namespace: istio-system-sap-edge
spec:
  namespace: istio-system-sap-edge
  values:
    meshConfig:
      discoverySelectors:
        - matchLabels:
            istio-discovery: sap-edge-mesh
      defaultConfig:
        proxyMetadata:
          ISTIO_META_DNS_CAPTURE: "true"
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
EOF

Assign SAP Namespaces to the Dedicated Mesh

# Namespaces must already exist from Step 2: Prepare Namespaces

# Label the control plane namespace for discovery
oc label namespace istio-system-sap-edge istio-discovery=sap-edge-mesh --overwrite

# Assign all SAP-related namespaces to the dedicated mesh
oc label namespace edgelm edge-icell edge-icell-services edge-icell-secrets edge-icell-ela istio-gateways \
  istio-discovery=sap-edge-mesh istio.io/rev=sap-mesh-istio --overwrite

Important – Avoid Label Conflicts for Sidecar Injection

When using revision-based control planes, sidecar injection must be driven only by the istio.io/rev label.
Having both istio.io/rev=sap-mesh-istio and istio-injection=enabled on the same namespace can cause conflicts:

  • istio-injection=enabled targets the default Istio control plane/webhook.
  • istio.io/rev=sap-mesh-istio targets the dedicated SAP mesh revision.

If both are present, injection behavior becomes unpredictable and may fail entirely if the default webhook is not configured for this mesh.

To clean up existing labels and rely solely on the revision label, run:

# Remove the legacy istio-injection label from all SAP namespaces (if present)
oc label namespace edgelm edge-icell edge-icell-services edge-icell-secrets edge-icell-ela istio-gateways istio-injection-

# Optional: Verify that only istio.io/rev and istio-discovery remain
oc get namespace edgelm --show-labels
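To spot remaining conflicts across many namespaces at once, you can filter a label listing for namespaces that still carry both labels. The sketch below runs the filter over sample text mimicking `oc get namespace --show-labels` output; in practice, pipe the real command into the same awk expression:

```shell
# Sample listing mimicking 'oc get namespace --show-labels' output
SAMPLE='edgelm Active 1d istio.io/rev=sap-mesh-istio,istio-injection=enabled
edge-icell Active 1d istio.io/rev=sap-mesh-istio'

# Flag any namespace that carries BOTH injection labels
CONFLICTS=$(printf '%s\n' "$SAMPLE" | \
  awk '/istio\.io\/rev/ && /istio-injection=enabled/ {print $1 " has conflicting labels"}')
echo "$CONFLICTS"
```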

The istio.io/rev=sap-mesh-istio label on the namespaces ensures that sidecar injection for pods in these namespaces is handled by the dedicated SAP mesh control plane (sap-mesh-istio) instead of the shared mesh.
Optional: You can verify the available mesh revisions and confirm the sap-mesh-istio revision name by running:

oc get istiorevisions -n istio-system-sap-edge

Verify the Dedicated Mesh Deployment

# Wait for the dedicated control plane to be ready
oc wait --for=condition=Ready istio/sap-mesh-istio -n istio-system-sap-edge --timeout=600s

# Verify the dedicated control plane pods
oc get pods -n istio-system-sap-edge

# Verify namespace labels
oc get namespace edgelm -o yaml | grep -A 10 labels

After the dedicated control plane is ready and all namespaces are correctly labeled, continue with Step 4: Apply Permissions (RBAC) in the main flow.

Important Notes:
- This configuration creates isolation between SAP workloads and any other service mesh instances on the same cluster
- Use this approach when you need independent service mesh management or enhanced security isolation
- All subsequent steps (RBAC, kubeconfig generation, ELM registration) remain the same
- Both the shared and dedicated mesh approaches are supported – choose based on your cluster's multi-tenancy and isolation requirements


For Customer Reference: If you are looking for an implementation you can experiment with, an OpenShift Service Mesh 2.x version is available. Please note that this is provided for experimentation purposes only and is not officially supported by SAP. The Service Mesh 3.x implementation described in this guide represents SAP's future direction for this feature.
