Chapter 1. Upgrade 3scale 2.0 to 2.1

Perform the steps in this document to upgrade your on-premises AMP deployment from version 2.0 to 2.1.

1.1. Prerequisites

  • A 3scale AMP 2.0 On-Premises deployment
  • The OpenShift CLI (oc)
  • The 3scale AMP 2.1 templates
  • Access and permissions to your OpenShift server and project
Warning

This process may cause a disruption in service. Red Hat recommends that you establish a maintenance window before performing your upgrade.
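Before you begin, you can confirm that the oc client is installed and that you can authenticate against your cluster. A minimal sanity check, assuming the oc binary is on your PATH:

oc version
oc whoami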

1.2. Select the Project

  1. Make backups of your current deployment
  2. From a terminal session, log in to your OpenShift cluster:

    oc login https://<YOUR_OPENSHIFT_CLUSTER>:8443
  3. Select the project you want to upgrade:

    oc project <YOUR_AMP_20_PROJECT>
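Before patching anything, you can confirm that you are working in the intended project. For example:

oc project
oc status

Running oc project with no arguments prints the currently selected project; oc status summarizes the deployments it contains.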

1.3. Patch System Components

Once you have selected your project, continue the in-place upgrade with the oc patch command, which applies changes directly to your existing deployment configurations.

In this section of the upgrade, you must patch deployment configurations for the following pods:

  • system-app
  • system-resque
  • system-sidekiq
  • system-sphinx

Follow these steps to patch the deployment configurations (a verification sketch follows the final step):

  1. Patch the system-app deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-app -p '
      spec:
        strategy:
          rollingParams:
            pre:
              execNewPod:
                containerName: system-provider
                env:
                - name: SSL_CERT_DIR
                  value: "/etc/pki/tls/certs"
                - name: ZYNC_AUTHENTICATION_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: zync
                      key: ZYNC_AUTHENTICATION_TOKEN
        template:
          spec:
            containers:
            - name: system-provider
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            - name: system-developer
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  2. Patch the system-resque deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-resque -p '
      spec:
        template:
          spec:
            containers:
            - name: system-resque
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            - name: system-scheduler
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  3. Patch the system-sidekiq deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-sidekiq -p '
      spec:
        template:
          spec:
            containers:
            - name: system-sidekiq
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
  4. Patch the system-sphinx deployment configuration

    1. Enter the following oc patch command:

      oc patch dc/system-sphinx -p '
      spec:
        template:
          spec:
            containers:
            - name: system-sphinx
              env:
              - name: SSL_CERT_DIR
                value: "/etc/pki/tls/certs"
              - name: AMP_RELEASE
                value: "2.1.0-CR2-redhat-1"
              - name: ZYNC_AUTHENTICATION_TOKEN
                valueFrom:
                  secretKeyRef:
                    name: zync
                    key: ZYNC_AUTHENTICATION_TOKEN
              volumeMounts:
                - name: system-config
                  mountPath: /opt/system/config/zync.yml
                  subPath: zync.yml
                - name: system-config
                  mountPath: /opt/system/config/rolling_updates.yml
                  subPath: rolling_updates.yml
            volumes:
              - name: system-config
                configMap:
                  name: system
                  items:
                  - key: zync.yml
                    path: zync.yml
                  - key: rolling_updates.yml
                    path: rolling_updates.yml
      '
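
After applying all four patches, you can spot-check that the new environment variables were added to a deployment configuration. A minimal sketch, using the same jsonpath style as the note in the template deployment section; system-sidekiq is one example, and any of the four patched configurations works:

oc get dc/system-sidekiq -o jsonpath='{.spec.template.spec.containers[?(@.name == "system-sidekiq")].env[?(@.name == "AMP_RELEASE")].value}'

This should print 2.1.0-CR2-redhat-1 if the patch applied cleanly.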

1.4. Set imageChange Triggers

Once you have selected your project and patched the system components, continue the in-place upgrade with the oc set triggers command. Setting an imageChange trigger causes a deployment configuration to redeploy automatically whenever the referenced image stream tag is updated.

Follow these steps to set up image change triggers:

  1. Enter the following oc set triggers commands for Backend:

    oc set triggers dc/backend-cron --containers='backend-cron' --from-image=amp-backend:latest
    oc set triggers dc/backend-listener --containers='backend-listener' --from-image=amp-backend:latest
    oc set triggers dc/backend-worker --containers='backend-worker' --from-image=amp-backend:latest
  2. Enter the following oc set triggers commands for System:

    oc set triggers dc/system-sphinx --containers='system-sphinx' --from-image=amp-system:latest
    oc set triggers dc/system-app --containers='system-developer,system-provider' --from-image=amp-system:latest
    oc set triggers dc/system-sidekiq --containers='system-sidekiq' --from-image=amp-system:latest
    oc set triggers dc/system-resque --containers='system-scheduler,system-resque' --from-image=amp-system:latest
  3. Enter the following oc set triggers commands for APIcast:

    oc set triggers dc/apicast-staging --containers='apicast-staging' --from-image=amp-apicast:latest
    oc set triggers dc/apicast-production --containers='apicast-production' --from-image=amp-apicast:latest
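
To confirm that the triggers were registered, run oc set triggers against a deployment configuration with no other arguments; it prints the triggers currently defined. For example:

oc set triggers dc/system-app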

1.5. Deploy the 2.1 Template

Once you have patched the system components and set the imageChange triggers, deploy the 2.1 AMP template on top of your 2.0 deployment.

Using the existing wildcard domain of your current deployment, run the following command:

oc new-app -f amp.yml --param WILDCARD_DOMAIN=<YOUR_DOMAIN>
Note

If you do not know the wildcard domain of your current deployment, you can find it with the following command:

oc get dc/system-app -o jsonpath='{.spec.template.spec.containers[?(@.name == "system-provider")].env[?(@.name == "THREESCALE_SUPERDOMAIN")].value}'

The 2.1 template deploys on top of your 2.0 deployment. This deployment produces a set of errors; these are expected, and they are resolved by the patches you applied in the Patch System Components section.
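
You can watch the redeployments progress from the command line. A minimal sketch, using system-app as one example of a deployment configuration to track:

oc get pods --watch
oc rollout status dc/system-app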

1.6. Verify Upgrade

After completing the upgrade procedure, verify that the upgrade succeeded by checking the version number in the lower-right corner of your 3scale Admin Portal.

Note

It may take some time for your redeployment operations to complete in OpenShift.
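
If you prefer a command-line check, you can confirm that all pods are running and that a system container reports the 2.1 release, using the same jsonpath pattern as the note above:

oc get pods
oc get dc/system-app -o jsonpath='{.spec.template.spec.containers[?(@.name == "system-provider")].env[?(@.name == "AMP_RELEASE")].value}'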