Deploying NetApp Trident for SAP Edge Integration Cell on OpenShift 4

For general requirements and installation instructions, please consult the corresponding installation guides.

1. OpenShift Container Platform validation version matrix

The following version combinations of SAP Edge Integration Cell (EIC), OpenShift Container Platform (OCP), and NetApp Trident have been validated:

SAP Product    OpenShift Container Platform    Infrastructure and (Storage)
SAP EIC 8.29   4.18                            Bare Metal (NetApp Trident 25.02 (ONTAP SAN))

Note: ONTAP can be used with various tools depending on your setup:

  • NFS Tools: Install NFS tools if using ontap-nas, ontap-nas-economy, ontap-nas-flexgroup, azure-netapp-files, or gcp-cvs.
  • iSCSI Tools: Install iSCSI tools if using ontap-san, ontap-san-economy, or solidfire-san.
  • NVMe Tools: Install NVMe tools if using ontap-san for non-volatile memory express (NVMe) over TCP (NVMe/TCP) protocol.

For more information, see the NetApp documentation.

Important: Using NFS is NOT recommended for configuring the Message Service Storage Class when deploying SAP Edge Integration Cell, as the Solace component within Edge Integration Cell advises against NFS usage. In this article, we will use iSCSI tools as an example. NVMe tools can also be configured in a similar way if your ONTAP backend supports NVMe/TCP or NVMe/FC and your OpenShift nodes are prepared accordingly.

1.1. Supportability Note

The validation of NetApp Trident storage is not conducted by Red Hat. Red Hat does not directly support NetApp Trident software or NetApp hardware.

If you encounter issues related to Trident or its integration with NetApp storage solutions, existing NetApp customers can submit a support case via the NetApp Support Portal. For architecture or pre-sales inquiries, please contact your NetApp account manager.

2. Requirements

2.1. Hardware/VM and OS Requirements

For OpenShift Container Platform (OCP) and SAP Edge Integration Cell (EIC) requirements, please refer to the Prerequisites for Installing SAP Integration Suite Edge Integration Cell.

A management host with oc CLI access to the OpenShift cluster is assumed.

NetApp Requirements:

Please refer to the Astra Trident Requirements section for detailed information, ensuring compatibility between your ONTAP version and Trident version.

Important Notice for SAN Multipathing:

Astra Trident strictly enforces the use of multipathing in SAN environments and expects `find_multipaths: no` to be set in the `/etc/multipath.conf` file on worker nodes.

Running without multipathing, or setting `find_multipaths: yes` or `find_multipaths: smart` in `multipath.conf`, will result in mount failures. Astra Trident has recommended `find_multipaths: no` since the 21.07 release.

The Trident operator can configure worker nodes correctly for iSCSI and multipathing, as detailed in section 3.3.

2.2. Software Requirements

An OpenShift Container Platform (OCP) cluster (version 4.18 as per validation matrix) must be installed and configured.

3. NetApp Trident Installation

The following steps guide the preparation of OpenShift nodes, NetApp ONTAP, and the installation and configuration of Trident.

3.1. Prerequisites Verification

Before installing Trident, verify the following prerequisites:

# Verify OpenShift version
oc version

# Check node readiness
oc get nodes

# Verify CSI support
oc get csidriver

# Check available storage classes (before Trident installation)
oc get sc

3.2. OCP Nodes and NetApp ONTAP Preparation

3.2.1. Node Configuration for iSCSI and Multipathing

For OpenShift worker nodes, iSCSI utilities must be installed, and services like iscsid and multipathd must be enabled and running. Crucially, /etc/multipath.conf needs to be configured with find_multipaths no.

Recommended Method: Trident Operator nodePrep Feature

With Trident 24.10 and later, the TridentOrchestrator CR includes a nodePrep field. By setting nodePrep: [iscsi], you instruct the Trident operator to automatically:

  • Install necessary iSCSI packages on the worker nodes.
  • Configure and start the iscsid and multipathd services.
  • Ensure /etc/multipath.conf is correctly set up with Trident's recommendations (including find_multipaths no).

This automated approach via nodePrep is the recommended method and generally makes the manual MachineConfig steps for these specific tasks unnecessary. Details on configuring nodePrep are in section 3.3.

Special Configuration for Dual-Homed Nodes with Dedicated Storage Networks:

For environments with dual-homed nodes where storage networks are not routed (common in enterprise deployments with dedicated storage LANs), you may need to route pod egress traffic through the host's routing table (routingViaHost) so that traffic to the storage network uses the node's dedicated interface:

# Check current network configuration
oc explain --api-version operator.openshift.io/v1 network.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.routingViaHost

# View current network settings
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork}' | jq

# Enable routing via host (edit the network configuration)
oc edit network.operator.openshift.io cluster
# Set: routingViaHost: true under spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig

This configuration enables proper routing for storage traffic on dedicated, non-routed storage networks.
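If you prefer a non-interactive change over `oc edit`, the same setting can be applied with a merge patch. This is a sketch; it assumes the cluster uses the OVN-Kubernetes network type.

```shell
# Set routingViaHost=true on the cluster network operator configuration
oc patch network.operator.openshift.io cluster --type=merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'

# Confirm the change took effect
oc get network.operator.openshift.io cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.routingViaHost}'
```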

(Alternative/Legacy) Manual Configuration via MachineConfig:
If nodePrep is not used, or for environments that require explicit manual control, MachineConfig objects can be used to deliver the /etc/multipath.conf file and to enable the iscsid and multipathd services. This section is provided for reference when nodePrep is not in use.
Example /etc/multipath.conf content:

defaults {
    user_friendly_names yes
    find_multipaths no
}

Example MachineConfig YAML (mc-iscsi-multipath.yaml):

# apiVersion: machineconfiguration.openshift.io/v1
# kind: MachineConfig
# metadata:
#   labels:
#     machineconfiguration.openshift.io/role: worker
#   name: 98-worker-iscsi-multipath
# spec:
#   config:
#     ignition:
#       version: 3.2.0
#     storage:
#       files:
#         - contents:
#             source: data:,defaults%20%7B%0A%09user_friendly_names%20yes%0A%09find_multipaths%20no%0A%7D%0A
#           mode: 0644
#           overwrite: true
#           path: /etc/multipath.conf
#     systemd:
#       units:
#         - name: iscsid.service
#           enabled: true
#         - name: multipathd.service
#           enabled: true
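The `source: data:,...` string in the MachineConfig above is simply the percent-encoded content of /etc/multipath.conf. If you adapt the file, the encoded string can be regenerated like this (a sketch using python3 for the encoding):

```shell
# Percent-encode the multipath.conf content into an Ignition data URL
# (space -> %20, tab -> %09, newline -> %0A, braces -> %7B / %7D)
printf 'defaults {\n\tuser_friendly_names yes\n\tfind_multipaths no\n}\n' \
  | python3 -c 'import sys, urllib.parse; print("data:," + urllib.parse.quote(sys.stdin.read()))'
# -> data:,defaults%20%7B%0A%09user_friendly_names%20yes%0A%09find_multipaths%20no%0A%7D%0A
```

The output matches the `source:` value used in the example MachineConfig.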

3.3. Install NetApp Trident Operator and Configure Trident

  1. Install Astra Trident Operator via OperatorHub:
    In the OpenShift console, navigate to Operators > OperatorHub. Search for "Astra Trident" (or "NetApp Trident") and install the certified operator (e.g., version 25.02 or later). Choose your preferred installation mode (e.g., "All namespaces on the cluster") and approval strategy.

  2. Create TridentOrchestrator (with nodePrep):
    Once the operator is installed, create a TridentOrchestrator custom resource. This instructs the operator to deploy Trident and prepare worker nodes for iSCSI.

    Example (trident-orchestrator.yaml):

    apiVersion: trident.netapp.io/v1
    kind: TridentOrchestrator
    metadata:
      name: trident
      namespace: trident # Or your chosen namespace for Trident installation (e.g., openshift-operators if installed there)
    spec:
      debug: false # Set to true for initial troubleshooting
      namespace: trident # Namespace where Trident components (controller, node pods) will run
      nodePrep:
        - iscsi # This enables automated iSCSI and multipath preparation on worker nodes
    

    Apply it: oc apply -f trident-orchestrator.yaml
    Note: Using nodePrep: [iscsi] should handle the necessary iSCSI tool installation, service enablement (iscsid, multipathd), and /etc/multipath.conf configuration, making manual MachineConfig for these items unnecessary.

  3. Verify Trident Deployment and Node Preparation:
    After the TridentOrchestrator status is "Installed" (check oc get torc -n <trident-namespace>), verify the Trident pods:

    oc get pods -n trident # Replace 'trident' if a different namespace was used
    

    Expected output (pod names/hashes will vary):

    NAME                                  READY   STATUS    RESTARTS   AGE
    trident-controller-xxxxxxxxxx-yyyyy   6/6     Running   0          5m
    trident-node-linux-abcde              2/2     Running   0          5m
    trident-node-linux-fghij              2/2     Running   0          5m
    ... (one trident-node-linux per relevant worker node)
    

    You can also log into a worker node to verify that /etc/multipath.conf contains find_multipaths no and that iscsid and multipathd services are active.

  4. (Optional) Verify Trident Version using tridentctl:
    If you have the tridentctl binary:

    # ./tridentctl -n trident version
    

    Expected output (version numbers should match your installed Trident, e.g., 25.02.x):

    +----------------+----------------+
    | SERVER VERSION | CLIENT VERSION |
    +----------------+----------------+
    | 25.02.X        | 25.02.X        |
    +----------------+----------------+
    
  5. Create Kubernetes Secret for NetApp Credentials:
    This secret stores the username and password for the NetApp account Trident will use.

    Example (trident-credentials-secret.yaml):

    apiVersion: v1
    kind: Secret
    metadata:
      name: ontap-iscsi-credentials # Choose a descriptive name
      namespace: trident # Must be in the same namespace as Trident components
    type: Opaque
    stringData:
      username: "your_netapp_admin_user" # Replace with your actual NetApp username
      password: "your_netapp_admin_password" # Replace with your actual password
    

    Apply it: oc apply -f trident-credentials-secret.yaml -n trident
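As an alternative to the YAML manifest, the same secret can be created with a one-liner (a sketch; substitute your real credentials):

```shell
# Create the backend credentials secret directly from literals
oc -n trident create secret generic ontap-iscsi-credentials \
  --from-literal=username=your_netapp_admin_user \
  --from-literal=password=your_netapp_admin_password
```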

  6. Create the TridentBackendConfig for iSCSI Backend:
    This custom resource defines your NetApp ONTAP iSCSI backend.

    Example (trident-backendconfig-iscsi.yaml):

    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig # Use this kind based on your CRD
    metadata:
      name: san-iscsi-rhfiler # Name for this backend configuration
      namespace: trident # Namespace where Trident components run
    spec:
      version: 1
      storageDriverName: ontap-san
      backendName: san-iscsi-rhfiler # Can match metadata.name for clarity in `tridentctl` output
      managementLIF: "10.76.35.230"    # Verified Node Management LIF
      svm: "iscsi"                     # Verified SVM for iSCSI
      igroupName: "trident"            # Trident will use or create this igroup on SVM 'iscsi'
      credentials:
        name: "ontap-iscsi-credentials" # Name of the Kubernetes secret created above
      useCHAP: false                   # Set to true if using CHAP, and provide CHAP credentials
      aggregates:
        - "rhfiler_01_SSD_1"
        - "rhfiler_02_SSD_1"
      # serialNumbers: ["651949000414"] # Optional: If needed to identify the cluster by its serial number
      defaults:
        fileSystemType: "ext4" # Or "xfs" as per your preference or application needs
        spaceAllocation: "true" # Enables thin provisioning for LUNs
        snapshotPolicy: "none"  # Default snapshot policy for new volumes
      debug: false # Set to true for verbose logging during troubleshooting
    

    Apply it: oc apply -f trident-backendconfig-iscsi.yaml -n trident

  7. Verify Backend Status:
    Check if the TridentBackendConfig (backend) is online and successfully configured.

    oc get tbc -n trident # 'tbc' is a shortName for TridentBackendConfig
    

    Look for your backend name (e.g., san-iscsi-rhfiler) and check its PHASE and STATUS columns. A healthy backend shows PHASE Bound and STATUS Success.

    If you have tridentctl:

    # ./tridentctl -n trident get backend san-iscsi-rhfiler
    

    If issues arise, check Trident controller logs:

    # Find the trident-controller pod name first
    # oc get pods -n trident -l app=trident-csi,trident.netapp.io/component=controller
    # Then view logs (replace xxxxxxxxx-yyyyy with actual pod identifier):
    # oc logs -n trident trident-controller-xxxxxxxxx-yyyyy -c trident-csi # Or -c trident-controller
    
  8. Create the iSCSI Storage Class:
    Example (storage-class-iscsi.yaml):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: san-iscsi-ocp # Your desired StorageClass name
    # mountOptions: # Optional for iSCSI unless specific filesystem mount options are needed
    #  - discard
    parameters:
      backendType: "ontap-san" # Must match the backend's storageDriverName
      fsType: "ext4" # Or "xfs", as per your preference or application needs
    provisioner: csi.trident.netapp.io
    reclaimPolicy: Delete # Options: Delete or Retain
    allowVolumeExpansion: true
    volumeBindingMode: Immediate # Options: Immediate or WaitForFirstConsumer
    

    Apply it: oc apply -f storage-class-iscsi.yaml

  9. (Optional) Mark the new Storage Class as Default:
    If this should be the default storage class for PVCs that don't specify one:

    # First, remove the default annotation from any other storage class (if any)
    # oc get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}' | xargs -I{} oc patch storageclass {} -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    # Then, set the new one as default
    oc patch storageclass san-iscsi-ocp -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    
  10. Verify Storage Class Creation:

    oc get sc
    

    Expected output should list your san-iscsi-ocp StorageClass:

    NAME                      PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    san-iscsi-ocp (default)   csi.trident.netapp.io   Delete          Immediate           true                   ...
    

4. Validation and Testing

4.1. Test Storage Provisioning

Before proceeding with SAP EIC installation, validate that Trident can successfully provision storage:

# Create a test PVC
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-trident-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: san-iscsi-ocp
EOF

# Check PVC status
oc get pvc test-trident-pvc

# Check if PV was created
oc get pv

# Clean up test resources
oc delete pvc test-trident-pvc

4.2. Verify Node Configuration

Verify that nodes are properly configured for iSCSI:

# Check if multipath is configured correctly on nodes
oc debug node/<node-name>
# Inside the debug pod:
chroot /host
cat /etc/multipath.conf | grep find_multipaths
systemctl status multipathd
systemctl status iscsid

5. Troubleshooting

5.1. Common Issues

Issue: PVC stuck in Pending state

# Check events
oc describe pvc <pvc-name>

# Check Trident logs
oc logs -n trident -l app=trident-csi

Issue: iSCSI connection failures

# Check node preparation
oc get nodes -o wide
oc describe node <node-name>

# Check if iscsid is running
oc debug node/<node-name>
chroot /host
systemctl status iscsid

Issue: Multipath configuration problems

# Verify multipath configuration
oc debug node/<node-name>
chroot /host
multipath -ll
cat /etc/multipath.conf
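When diagnosing iSCSI path issues, it can also help to confirm the node's initiator name and active sessions. These commands run from the chroot inside the node debug shell (a sketch):

```shell
# Show this node's iSCSI initiator IQN
# (it must be a member of the igroup configured on the SVM)
cat /etc/iscsi/initiatorname.iscsi

# List active iSCSI sessions to the ONTAP data LIFs
iscsiadm -m session -P 1
```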

5.2. Useful Commands

# Check Trident version and status
oc get torc -n trident
oc describe torc trident -n trident

# List all backends
oc get tbc -n trident

# Check storage classes
oc get sc

# Monitor Trident pods
oc get pods -n trident -w

6. Security Considerations

6.1. CHAP Authentication

For enhanced security, consider enabling CHAP authentication:

# In TridentBackendConfig
useCHAP: true
chapInitiatorSecret: "trident-chap-secret"
chapTargetInitiatorSecret: "trident-chap-secret"
chapTargetUsername: "target-username"
chapUsername: "initiator-username"

6.2. Network Security

  • Ensure storage networks are properly isolated
  • Use VLANs or dedicated network segments for storage traffic
  • Configure firewalls to allow only necessary iSCSI traffic (port 3260)
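To confirm that the iSCSI port is reachable through the isolated storage network, you can test connectivity from a worker node. This is a sketch; `<node-name>` and the data LIF address `10.76.35.231` are placeholders for your environment.

```shell
# From a debug shell on a worker node, test TCP reachability of the iSCSI
# data LIF on port 3260 (replace the address with your SVM's actual data LIF)
oc debug node/<node-name> -- chroot /host \
  bash -c 'timeout 5 bash -c "</dev/tcp/10.76.35.231/3260" && echo reachable'
```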

7. Performance Optimization

7.1. Storage Class Parameters

For optimal performance with SAP EIC, consider these storage class parameters:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: san-iscsi-ocp-optimized
parameters:
  backendType: "ontap-san"
  fsType: "ext4"
  # Performance optimizations
  unixPermissions: "0755"
  snapshotPolicy: "none"
  spaceReserve: "none"
  encryption: "false"  # Enable if encryption is required
provisioner: csi.trident.netapp.io
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer  # For topology-aware scheduling

7.2. Node Affinity for Storage

For performance-critical workloads, consider node affinity:

# Example: Ensure pods are scheduled on nodes with fast storage connections
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: storage-tier
        operator: In
        values: ["high-performance"]

8. Continue with SAP ELM/EIC

Proceed with the preparation for the SAP Edge Integration Cell installation, now that Trident is configured to provide iSCSI storage.

Next Steps:
1. Return to the main Installation Guide section 3.4
2. Configure SAP EIC to use the san-iscsi-ocp storage class
3. Ensure Message Service uses the optimized storage class for best performance
