Chapter 5. Installing a zone aware sample application
Use this section to deploy a zone aware sample application to validate that an OpenShift Container Storage Metro Disaster Recovery (Metro DR) setup is configured correctly.
With latency between the data zones, you can expect performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). The degree of degradation depends on the latency between the zones and on how the application uses the storage (for example, heavy write traffic). Make sure to test critical applications with the Metro DR cluster configuration to ensure sufficient application performance for the required service levels.
5.1. Install Zone Aware Sample Application
In this section, the ocs-storagecluster-cephfs storage class is used to create a ReadWriteMany (RWX) PVC that can be used by multiple pods at the same time. The application used is called File Uploader.
Since this application shares the same RWX volume for storing files, we can demonstrate how an application can be spread across topology zones so that in the event of a site outage it is still available. This works for persistent data access as well because OpenShift Container Storage is configured as a Metro DR stretched cluster with zone awareness and high availability.
Create a new project:
oc new-project my-shared-storage
Deploy the example PHP application called file-uploader:
oc new-app openshift/php:7.2-ubi8~https://github.com/christianh814/openshift-php-upload-demo --name=file-uploader
Sample Output:
Found image 4f2dcc0 (9 days old) in image stream "openshift/php" under tag "7.2-ubi8" for "openshift/php:7.2-ubi8"

Apache 2.4 with PHP 7.2
-----------------------
PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.

Tags: builder, php, php72, php-72

* A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be created
* The resulting image will be pushed to image stream tag "file-uploader:latest"
* Use 'oc start-build' to trigger a new build

--> Creating resources ...
    imagestream.image.openshift.io "file-uploader" created
    buildconfig.build.openshift.io "file-uploader" created
    deployment.apps "file-uploader" created
    service "file-uploader" created
--> Success
    Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress.
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/file-uploader'
    Run 'oc status' to view your app.
View the build log and wait for the application to be deployed:
oc logs -f bc/file-uploader -n my-shared-storage
Example Output:
Cloning "https://github.com/christianh814/openshift-php-upload-demo" ...
[...]
Generating dockerfile with builder image image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea
STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea
STEP 2: LABEL "io.openshift.build.commit.author"="Christian Hernandez <christian.hernandez@yahoo.com>" "io.openshift.build.commit.date"="Sun Oct 1 17:15:09 2017 -0700" "io.openshift.build.commit.id"="288eda3dff43b02f7f7b6b6b6f93396ffdf34cb2" "io.openshift.build.commit.ref"="master" "io.openshift.build.commit.message"="trying to modularize" "io.openshift.build.source-location"="https://github.com/christianh814/openshift-php-upload-demo" "io.openshift.build.image"="image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea"
STEP 3: ENV OPENSHIFT_BUILD_NAME="file-uploader-1" OPENSHIFT_BUILD_NAMESPACE="my-shared-storage" OPENSHIFT_BUILD_SOURCE="https://github.com/christianh814/openshift-php-upload-demo" OPENSHIFT_BUILD_COMMIT="288eda3dff43b02f7f7b6b6b6f93396ffdf34cb2"
STEP 4: USER root
STEP 5: COPY upload/src /tmp/src
STEP 6: RUN chown -R 1001:0 /tmp/src
STEP 7: USER 1001
STEP 8: RUN /usr/libexec/s2i/assemble
---> Installing application source...
=> sourcing 20-copy-config.sh ...
---> 17:24:39 Processing additional arbitrary httpd configuration provided by s2i ...
=> sourcing 00-documentroot.conf ...
=> sourcing 50-mpm-tuning.conf ...
=> sourcing 40-ssl-certs.sh ...
STEP 9: CMD /usr/libexec/s2i/run
STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3b83e447
Getting image source signatures
[...]
The command prompt returns from tail mode after you see Push successful.
The new-app command deploys the application directly from the git repository and does not use an OpenShift template, so the OpenShift Route resource is not created by default. We need to create the route manually.
Let’s make our application zone aware and available by scaling it to four replicas and exposing its service:
oc expose svc/file-uploader -n my-shared-storage
oc scale --replicas=4 deploy/file-uploader -n my-shared-storage
oc get pods -o wide -n my-shared-storage
You should have four file-uploader Pods in a few minutes. Repeat the oc get pods command until all four Pods are in Running status.
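If you prefer not to poll, oc can block until the Deployment rollout completes. The following is a minimal sketch, assuming the deployment and namespace created above and access to the cluster:

```shell
# Block until all replicas of file-uploader are rolled out,
# or fail after 5 minutes. Requires cluster access.
oc rollout status deploy/file-uploader -n my-shared-storage --timeout=300s
```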
You can create a PersistentVolumeClaim and attach it to an application with the oc set volume command. Execute the following command:
oc set volume deploy/file-uploader --add --name=my-shared-storage \
  -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi \
  --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs \
  --mount-path=/opt/app-root/src/uploaded \
  -n my-shared-storage
This command will:
- create a PersistentVolumeClaim
- update the application deployment to include a volume definition
- update the application deployment to attach a volumemount into the specified mount-path
- cause a new deployment of the 4 application Pods
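For reference, the single command above is roughly equivalent to creating the PVC below and then adding a matching volume and volumeMount to the Deployment's Pod template. This is a sketch inferred from the command's flags; the manifests that oc actually generates may differ in detail:

```yaml
# Sketch of the PersistentVolumeClaim implied by the flags above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-shared-storage        # --claim-name
  namespace: my-shared-storage
spec:
  accessModes:
  - ReadWriteMany                # --claim-mode=ReadWriteMany
  resources:
    requests:
      storage: 10Gi              # --claim-size=10Gi
  storageClassName: ocs-storagecluster-cephfs   # --claim-class
```

The Deployment additionally gains a volume referencing this claim and a volumeMount at /opt/app-root/src/uploaded in the container.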
Now, let’s look at the result of adding the volume:
oc get pvc -n my-shared-storage
Example Output:
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
my-shared-storage   Bound    pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7   10Gi       RWX            ocs-storagecluster-cephfs   52s
Notice the ACCESS MODES field is set to RWX (ReadWriteMany).
All four file-uploader Pods use the same RWX volume. Without this access mode, OpenShift does not reliably attach multiple Pods to the same PersistentVolume. If you attempt to scale up a deployment that uses an RWO (ReadWriteOnce) persistent volume, the Pods are co-located on the same node.
5.2. Modify Deployment to be Zone Aware
Currently, the file-uploader Deployment is not zone aware and can schedule all of its Pods in the same zone. In that case, a site outage would make the application unavailable. For more information, see using pod topology spread constraints.
To make the application zone aware, we need to add a pod placement rule to the Deployment configuration. Run the following command and review the output. In the next step, the Deployment is modified to use the topology zone labels, inserted between the Start and End markers shown in the output below.
$ oc get deployment file-uploader -o yaml -n my-shared-storage | less
Example Output:
[...]
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      deployment: file-uploader
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        deployment: file-uploader
    spec: # <-- Start inserted lines after here
      containers: # <-- End inserted lines before here
      - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b
        imagePullPolicy: IfNotPresent
        name: file-uploader
[...]
Edit the deployment and add the following new lines between the start and end as shown above:
$ oc edit deployment file-uploader -n my-shared-storage
[...]
spec:
  topologySpreadConstraints:
  - labelSelector:
      matchLabels:
        deployment: file-uploader
    maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
  - labelSelector:
      matchLabels:
        deployment: file-uploader
    maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  containers:
[...]
Example output:
deployment.apps/file-uploader edited
Scale the deployment to zero Pods and then back to four Pods. This is needed because the Pod placement rules only take effect for newly created Pods.
oc scale deployment file-uploader --replicas=0 -n my-shared-storage
Example output:
deployment.apps/file-uploader scaled
And then back to 4 Pods.
$ oc scale deployment file-uploader --replicas=4 -n my-shared-storage
Example output:
deployment.apps/file-uploader scaled
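As an alternative to scaling to zero and back, recent oc/kubectl versions can recreate all of a Deployment's Pods in one step. This sketch assumes such a version is available:

```shell
# Recreate all file-uploader Pods so the new placement rules apply.
oc rollout restart deploy/file-uploader -n my-shared-storage
```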
Verify that the four Pods are spread across the four nodes in the datacenter1 and datacenter2 zones.
$ oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print $7}' | sort | uniq -c
Example output:
   1 perf1-mz8bt-worker-d2hdm
   1 perf1-mz8bt-worker-k68rv
   1 perf1-mz8bt-worker-ntkp8
   1 perf1-mz8bt-worker-qpwsr
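The pipeline above simply extracts the NODE column (field 7 of oc get pods -o wide) and counts Pods per node. Its behavior can be illustrated offline with placeholder lines (the Pod and node names below are hypothetical, not from the cluster):

```shell
# Count Pods per node from sample 'oc get pods -o wide' lines.
# Pod and node names are placeholders for illustration only.
counts=$(printf '%s\n' \
  'file-uploader-1-abcde 1/1 Running 0 2m 10.1.0.5 worker-a <none> <none>' \
  'file-uploader-1-fghij 1/1 Running 0 2m 10.1.0.6 worker-b <none> <none>' \
  'file-uploader-1-klmno 1/1 Running 0 2m 10.1.0.7 worker-a <none> <none>' \
  | awk '{print $7}' | sort | uniq -c)
echo "$counts"
```

A count of one Pod per node (and a balanced count per zone) indicates the spread constraints are being honored.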
$ oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master
Example output:
perf1-mz8bt-worker-d2hdm   Ready   worker   35d   v1.20.0+5fbfd19   datacenter1
perf1-mz8bt-worker-k68rv   Ready   worker   35d   v1.20.0+5fbfd19   datacenter1
perf1-mz8bt-worker-ntkp8   Ready   worker   35d   v1.20.0+5fbfd19   datacenter2
perf1-mz8bt-worker-qpwsr   Ready   worker   35d   v1.20.0+5fbfd19   datacenter2
Use your browser to access the file-uploader web application and upload new files.
Find the Route that has been created:
$ oc get route file-uploader -n my-shared-storage -o jsonpath --template="http://{.spec.host}{'\n'}"
This returns a route similar to the following.
Sample Output:
http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com
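You can also check the route from the command line instead of a browser. This sketch assumes the route host resolves from your machine; it prints the HTTP status code returned by the application (200 when it is up):

```shell
# Print the HTTP status code returned by the application's route.
curl -s -o /dev/null -w '%{http_code}\n' \
  "$(oc get route file-uploader -n my-shared-storage -o jsonpath='http://{.spec.host}')"
```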
Point your browser to the web application using your route above. Your route will be different.
The web application lists all uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, the list is empty.
Select an arbitrary file from your local machine and upload it to the app.
Figure 5.1. A simple PHP-based file upload tool
- Click List uploaded files to see the list of all currently uploaded files.
The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware.