Chapter 7. Advanced Tutorials

7.1. Example Workflow: Automated Transaction Recovery Feature When Scaling Down a Cluster

Important

This feature is provided as Technology Preview only. It is not supported for use in a production environment, and it might be subject to significant future changes. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.

This tutorial demonstrates the automated transaction recovery feature of the JBoss EAP for OpenShift image when scaling down a cluster. The jta-crash-rec-eap7 quickstart example and the eap72-tx-recovery-s2i application template are used here to show how XA transactions issued on an OpenShift pod that is terminated during cluster scale-down are recovered by the dedicated migration pod.

Note

The jta-crash-rec-eap7 quickstart uses the H2 database that is included with JBoss EAP. It is a lightweight, relational example datasource that is used for examples only. It is not robust or scalable, is not supported, and should not be used in a production environment.

7.1.1. Prepare for Deployment

  1. Log in to your OpenShift instance using the oc login command.
  2. Create a new project.

    $ oc new-project eap-tx-demo
  3. Add the view role to the default service account, which will be used to run the underlying pods. This enables the service account to view all the resources in the eap-tx-demo namespace, which is necessary for managing the cluster.

    $ oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default
  4. For automated transaction recovery to work, the JBoss EAP application must use a ReadWriteMany persistent volume.

    Provision the persistent volume expected by the eap72-tx-recovery-s2i application template to hold the data for the ${APPLICATION_NAME}-eap-claim persistent volume claim.

    This example uses a persistent volume object provisioned using the NFS method with the following definition:

    $ cat txpv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: txpv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      nfs:
        path: /mnt/mountpoint
        server: 192.168.100.175

    Update the path and server fields in the above definition for your environment, and provision the persistent volume with the following command:

    $ oc create -f txpv.yaml
    persistentvolume "txpv" created
    $ oc get pv
    NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
    txpv      1Gi        RWX           Retain         Available                                      26s
    Important

    When using the NFS method to provision persistent volume objects for the eap72-tx-recovery-s2i application template, ensure the mount point is exported with sufficient permissions. On the host from which the mount point is exported, perform the following:

    # chmod -R 777 /mnt/mountpoint
    # cat /etc/exports
    /mnt/mountpoint *(rw,sync,anonuid=185,anongid=185)
    # exportfs -va
    exporting *:/mnt/mountpoint
    # setsebool -P virt_use_nfs 1

    Replace the /mnt/mountpoint path in the above commands as appropriate for your environment.
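
    For reference, the persistent volume claim that the eap72-tx-recovery-s2i template creates, and that the txpv volume above satisfies, looks roughly like the following sketch. This is for illustration only; the actual claim is generated by the template, with ${APPLICATION_NAME} expanded (eap-app in this tutorial) and the size taken from the EAP Volume Size parameter:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: eap-app-eap-claim    # ${APPLICATION_NAME}-eap-claim
    spec:
      accessModes:
        - ReadWriteMany          # required for automated transaction recovery
      resources:
        requests:
          storage: 1Gi           # matches the EAP Volume Size template parameter
    ```

    Because the claim requests the ReadWriteMany access mode, it can only bind to a volume, such as txpv above, that offers that mode.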

7.1.2. Deployment

  1. Deploy the jta-crash-rec-eap7 quickstart using the eap72-tx-recovery-s2i application template. Specify the following:

Example: eap72-tx-recovery-s2i application template


$ oc new-app --template=eap72-tx-recovery-s2i \
-p SOURCE_REPOSITORY_URL=https://github.com/jboss-openshift/openshift-quickstarts \
-p SOURCE_REPOSITORY_REF=master \
-p CONTEXT_DIR=jta-crash-rec-eap7 \
-e CUSTOM_INSTALL_DIRECTORIES=extensions/* \
--name=eap-app
--> Deploying template "openshift/eap72-tx-recovery-s2i" to project eap-tx-demo

     JBoss EAP 7.0 (tx recovery)
     ---------
     An example EAP 7 application. For more information about using this template, see https://github.com/jboss-openshift/application-templates.

     A new EAP 7 based application has been created in your project.

     * With parameters:
        * Application Name=eap-app
        * Custom http Route Hostname=
        * Git Repository URL=https://github.com/jboss-openshift/openshift-quickstarts
        * Git Reference=master
        * Context Directory=jta-crash-rec-eap7
        * Queues=
        * Topics=
        * A-MQ cluster password=nyneOXUm # generated
        * Github Webhook Secret=PUW8Tmov # generated
        * Generic Webhook Secret=o7uD7qrG # generated
        * ImageStream Namespace=openshift
        * JGroups Cluster Password=MoR1Jthf # generated
        * Deploy Exploded Archives=false
        * Maven mirror URL=
        * ARTIFACT_DIR=
        * MEMORY_LIMIT=1Gi
        * EAP Volume Size=1Gi
        * Split the data directory?=true

--> Creating resources ...
    service "eap-app" created
    service "eap-app-ping" created
    route "eap-app" created
    imagestream "eap-app" created
    buildconfig "eap-app" created
    deploymentconfig "eap-app" created
    deploymentconfig "eap-app-migration" created
    persistentvolumeclaim "eap-app-eap-claim" created
--> Success
    Build scheduled, use 'oc logs -f bc/eap-app' to track its progress.
    Run 'oc status' to view your app.
Note

For the JDK 11 image stream, use the eap72-openjdk11-tx-recovery-s2i application template in the above example instead of eap72-tx-recovery-s2i, which is used with the JDK 8 image stream.

  2. Wait for the build to finish. You can see the status of the build using the oc logs -f bc/eap-app command.
  3. Modify the eap-app deployment configuration to define the JAVA_OPTS_APPEND and JBOSS_MODULES_SYSTEM_PKGS_APPEND environment variables.

    $ oc get dc
    NAME                REVISION   DESIRED   CURRENT   TRIGGERED BY
    eap-app             1          1         1         config,image(eap-app:latest)
    eap-app-migration   1          1         1         config,image(eap-app:latest)
    $ oc set env dc/eap-app \
    -e JBOSS_MODULES_SYSTEM_PKGS_APPEND="org.jboss.byteman" \
    -e JAVA_OPTS_APPEND="-javaagent:/tmp/src/extensions/byteman/byteman.jar=script:/tmp/src/src/main/scripts/xa.btm"
    deploymentconfig "eap-app" updated

    These settings instruct the Byteman tracing and monitoring tool to modify XA transaction processing in the following way:

    • The first transaction is always allowed to succeed.
    • When an XA resource executes phase 2 of the second transaction, the JVM process of the particular pod is halted.
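
    The quickstart's actual rules live in src/main/scripts/xa.btm, referenced by the -javaagent argument above. As a rough illustration of the mechanism only (the rule name, target class, method, and counter below are hypothetical, not the quickstart's real rule), a Byteman rule that halts the JVM during the second commit could look like:

    ```
    # Hypothetical sketch only -- not the quickstart's actual xa.btm script.
    RULE halt JVM during phase 2 of the second XA transaction
    CLASS com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord
    METHOD topLevelCommit
    AT ENTRY
    IF incrementCounter("xa-commit") >= 2
    DO killJVM()
    ENDRULE
    ```

    The incrementCounter and killJVM calls are built-in Byteman helper methods; halting the JVM mid-commit is what leaves an in-doubt transaction behind for the migration pod to recover.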

7.1.3. Using the JTA Crash Recovery Application

  1. List running pods in the current namespace:

    $ oc get pods | grep Running
    NAME                        READY     STATUS      RESTARTS   AGE
    eap-app-2-r00gm             1/1       Running     0          1m
    eap-app-migration-1-lvfdt   1/1       Running     0          2m
  2. Issue a new XA transaction.

    1. Launch the application by opening a browser and navigating to http://eap-app-eap-tx-demo.openshift.example.com/jboss-jta-crash-rec.
    2. Enter Mercedes into the Key field, and Benz into the Value field. Click the Submit button.
    3. Wait for a moment, then click the Refresh Table link.
    4. Notice how the table row containing the Mercedes entry is updated with the updated via JMS suffix. If it has not yet been updated, click the Refresh Table link a couple of times. Alternatively, you can inspect the log of the eap-app-2-r00gm pod to verify the transaction was handled properly:

      $ oc logs eap-app-2-r00gm | grep 'updated'
      INFO  [org.jboss.as.quickstarts.xa.DbUpdaterMDB] (Thread-0 (ActiveMQ-client-global-threads-1566836606)) JTA Crash Record Quickstart: key value pair updated via JMS.
  3. Issue a second XA transaction using your browser at http://eap-app-eap-tx-demo.openshift.example.com/jboss-jta-crash-rec.

    1. Enter Land into the Key field, and Rover into the Value field. Click the Submit button.
    2. Wait for a moment, then click the Refresh Table link.
    3. Notice how the Land Rover entry was added without the updated via … suffix.
  4. Scale the cluster down.

    $ oc scale --replicas=0 dc/eap-app
    deploymentconfig "eap-app" scaled
    1. Notice how the eap-app-2-r00gm pod was scheduled for termination.

      $ oc get pods
      NAME                        READY     STATUS        RESTARTS   AGE
      eap-app-1-build             0/1       Completed     0          4m
      eap-app-2-r00gm             1/1       Terminating   0          2m
      eap-app-migration-1-lvfdt   1/1       Running       0          3m
  5. Watch the log of the migration pod and notice how transaction recovery is performed. Wait for the recovery to finish:

    $ oc logs -f eap-app-migration-1-lvfdt
    Finished Migration Check cycle, pausing for 30 seconds before resuming
    ...
    Finished, recovery terminated successfully
    Migration terminated with status 0 (T)
    Releasing lock: (/opt/eap/standalone/partitioned_data/split-1)
    Finished Migration Check cycle, pausing for 30 seconds before resuming
    ...
  6. Scale the cluster back up.

    $ oc scale --replicas=1 dc/eap-app
    deploymentconfig "eap-app" scaled
  7. Using the browser navigate back to http://eap-app-eap-tx-demo.openshift.example.com/jboss-jta-crash-rec.
  8. Notice the table contains entries for both transactions. It looks similar to the following output:

    Table 7.1. Example: Database Table Contents

    Key        Value
    Mercedes   Benz updated via JMS.
    Land       Rover updated via JMS.

    The content of the above table indicates that, although the cluster was scaled down before the second XA transaction had a chance to finish, the migration pod performed transaction recovery and the transaction was successfully completed.
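
    The scale-down, recovery wait, and scale-up steps above can be sketched as a single helper script. This is a hypothetical convenience wrapper, not part of the quickstart; it defaults to a dry run that only prints the oc commands it would execute, so set DRY_RUN=0 to run it against a logged-in cluster:

    ```shell
    #!/bin/sh
    # Hypothetical helper for the scale-down/recovery/scale-up cycle above.
    # Defaults to a dry run that only prints the oc commands.
    DRY_RUN=${DRY_RUN:-1}

    run() {
        if [ "$DRY_RUN" = "1" ]; then
            echo "+ $*"        # dry run: show the command only
        else
            "$@"               # real run: requires a logged-in oc client
        fi
    }

    # Step 4: scale the application cluster down to zero replicas.
    run oc scale --replicas=0 dc/eap-app

    # Step 5: watch the migration pod until it reports
    # "Finished, recovery terminated successfully" (Ctrl+C to stop following).
    [ "$DRY_RUN" = "1" ] || oc logs -f dc/eap-app-migration

    # Step 6: scale the cluster back up.
    run oc scale --replicas=1 dc/eap-app
    ```

    Keeping the recovery wait between the two scale operations matters: scaling back up before the migration pod releases its lock on the shared partitioned_data directory could delay the recovery cycle.
    
    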