Chapter 9. Reference Information
The content in this section is derived from the engineering documentation for this image. It is provided for reference, as it can be useful for development and testing beyond the scope of the product documentation.
9.1. Persistent Templates
The JBoss EAP database templates, which deploy JBoss EAP and database pods, have both ephemeral and persistent variations.
Persistent templates include an environment variable to provision a persistent volume claim, which binds with an available persistent volume to be used as a storage volume for the JBoss EAP for OpenShift deployment. Information, such as timer schema, log handling, or data updates, is stored on the storage volume, rather than in ephemeral container memory. This information persists if the pod goes down for any reason, such as project upgrade, deployment rollback, or an unexpected error.
Without a persistent storage volume for the deployment, this information is stored in the container memory only, and is lost if the pod goes down for any reason.
For example, an EE timer backed by persistent storage continues to run if the pod is restarted. Any events triggered by the timer during the restart process are enacted when the application is running again.
Conversely, if the EE timer runs only in container memory, the timer status is lost when the pod is restarted, and the timer starts from the beginning when the pod is running again.
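As an illustration, a persistent template can be instantiated with `oc new-app`. The template and parameter names below (eap73-postgresql-persistent-s2i, VOLUME_CAPACITY) are assumptions for the sketch; verify the templates actually available in your cluster before relying on them.

```shell
# Hypothetical sketch: deploy a persistent JBoss EAP + PostgreSQL template.
# Template and parameter names are assumptions; check what is installed with:
#   oc get templates -n openshift
oc new-app eap73-postgresql-persistent-s2i \
  -p APPLICATION_NAME=eap-app \
  -p VOLUME_CAPACITY=1Gi
```

The persistent template provisions a persistent volume claim of the requested capacity, which backs the storage volume described above.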
9.2. Information Environment Variables
The following environment variables are designed to provide information to the image and should not be modified by the user:
Table 9.1. Information Environment Variables
Variable Name | Description and Value |
---|---|
JBOSS_IMAGE_NAME | The image names. Values:
|
JBOSS_IMAGE_VERSION | The image version. Value: This is the image version number. See the Red Hat Container Catalog for the latest values: |
JBOSS_MODULES_SYSTEM_PKGS | A comma-separated list of JBoss EAP system module packages that are available to applications.
Value: |
STI_BUILDER |
Provides OpenShift S2I support for
Value: |
9.3. Configuration environment variables
You can configure the following environment variables to adjust the image without requiring a rebuild.
See the JBoss EAP documentation for other environment variables that are not listed here.
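For example, configuration variables can be set or changed on an existing deployment with the OpenShift CLI, and OpenShift then rolls out new pods with the updated environment. The deployment name and variable values below are illustrative assumptions, not defaults.

```shell
# Hypothetical sketch: adjust configuration variables on a deployment
# config named eap-app (name and values are assumptions).
oc set env dc/eap-app \
  AB_JOLOKIA_OFF=true \
  CONTAINER_HEAP_PERCENT=0.5

# Inspect the resulting environment:
oc set env dc/eap-app --list
```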
Table 9.2. Configuration environment variables
Variable Name | Description |
---|---|
AB_JOLOKIA_AUTH_OPENSHIFT |
Switch on client authentication for OpenShift TLS communication. The value of this parameter can be
|
AB_JOLOKIA_CONFIG | If set, uses this fully qualified file path for the Jolokia JVM agent properties, which are described in the Jolokia reference documentation. If you set your own Jolokia properties config file, the rest of the Jolokia settings in this document are ignored.
If not set,
Example value: |
AB_JOLOKIA_DISCOVERY_ENABLED | Enable Jolokia discovery.
Defaults to |
AB_JOLOKIA_HOST | Host address to bind to.
Defaults to
Example value: |
AB_JOLOKIA_HTTPS | Switch on secure communication with HTTPS.
By default self-signed server certificates are generated if no
Example value: |
AB_JOLOKIA_ID | Agent ID to use.
The default value is the
Example value: |
AB_JOLOKIA_OFF |
If set to Jolokia is enabled by default. |
AB_JOLOKIA_OPTS |
Additional options to be appended to the agent configuration. They should be given in the format
Example value: |
AB_JOLOKIA_PASSWORD | The password for basic authentication. By default, authentication is switched off.
Example value: |
AB_JOLOKIA_PASSWORD_RANDOM |
Determines if a random
Set to |
AB_JOLOKIA_PORT | The port to listen to.
Defaults to
Example value: |
AB_JOLOKIA_USER | The name of the user to use for basic authentication.
Defaults to
Example value: |
AB_PROMETHEUS_ENABLE |
If set to
Note: The MicroProfile Metrics subsystem is the preferred method to expose data in the Prometheus format. For more information about the MicroProfile Metrics subsystem, see Eclipse MicroProfile in the Configuration Guide for JBoss EAP. |
AB_PROMETHEUS_JMX_EXPORTER_CONFIG |
The path within the container to a user-specified |
AB_PROMETHEUS_JMX_EXPORTER_PORT |
The port on which the |
CLI_GRACEFUL_SHUTDOWN |
If set to any non-zero length value, the image will prevent shutdown with the
Example value: |
CONTAINER_HEAP_PERCENT | Set the maximum Java heap size, as a percentage of available container memory.
Example value: |
CUSTOM_INSTALL_DIRECTORIES | A list of comma-separated directories used for installation and configuration of artifacts for the image during the S2I process.
Example value: |
DEFAULT_JMS_CONNECTION_FACTORY |
This value is used to specify the default JNDI binding for the JMS connection factory, for example
Example value: |
DISABLE_EMBEDDED_JMS_BROKER | The use of an embedded messaging broker in OpenShift containers is deprecated. Support for an embedded broker will be removed in a future release. If the following conditions are true, a warning is logged.
If this variable is included with the value set to
Include this variable set to |
ENABLE_ACCESS_LOG | Enable logging of access messages to the standard output channel. Logging of access messages is implemented using the following methods:
Defaults to |
INITIAL_HEAP_PERCENT | Set the initial Java heap size, as a percentage of the maximum heap size.
Example value: |
JAVA_OPTS_APPEND | Server startup options.
Example value: |
JBOSS_MODULES_SYSTEM_PKGS_APPEND |
A comma-separated list of package names that will be appended to the
Example value: |
JGROUPS_CLUSTER_PASSWORD |
Password used to authenticate the node so it is allowed to join the JGroups cluster. Required, when using
Example value: |
JGROUPS_ENCRYPT_KEYSTORE |
Name of the keystore file within the secret specified via
Example value: |
JGROUPS_ENCRYPT_KEYSTORE_DIR |
Directory path of the keystore file within the secret specified via
Example value: |
JGROUPS_ENCRYPT_NAME |
Name associated with the server’s certificate, when using
Example value: |
JGROUPS_ENCRYPT_PASSWORD |
Password used to access the keystore and the certificate, when using
Example value: |
JGROUPS_ENCRYPT_PROTOCOL |
JGroups protocol to use for encryption of cluster traffic. Can be either
Defaults to
Example value: |
JGROUPS_ENCRYPT_SECRET |
Name of the secret that contains the
Example value: |
JGROUPS_PING_PROTOCOL |
JGroups protocol to use for node discovery. Can be either |
MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION |
For backwards compatibility, set to |
OPENSHIFT_DNS_PING_SERVICE_NAME | Name of the service exposing the ping port on the servers for the DNS discovery mechanism.
Example value: |
OPENSHIFT_DNS_PING_SERVICE_PORT |
The port number of the ping port for the DNS discovery mechanism. If not specified, an attempt is made to discover the port number from the SRV records for the service, otherwise the default
Defaults to |
OPENSHIFT_KUBE_PING_LABELS | Clustering labels selector for the Kubernetes discovery mechanism.
Example value: |
OPENSHIFT_KUBE_PING_NAMESPACE | Clustering project namespace for the Kubernetes discovery mechanism.
Example value: |
SCRIPT_DEBUG |
If set to |
9.4. Application Templates
Table 9.3. Application Templates
Variable Name | Description |
---|---|
AUTO_DEPLOY_EXPLODED | Controls whether exploded deployment content should be automatically deployed.
Example value: |
9.5. Exposed Ports
Table 9.4. Exposed Ports
Port Number | Description |
---|---|
8443 | HTTPS |
8778 | Jolokia Monitoring |
9.6. Datasources
Datasources are automatically created based on the value of some of the environment variables.
The most important environment variable is DB_SERVICE_PREFIX_MAPPING, as it defines JNDI mappings for the datasources. The allowed value for this variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets, where:

- POOLNAME is used as the pool-name in the datasource.
- DATABASETYPE is the database driver to use.
- PREFIX is the prefix used in the names of environment variables that are used to configure the datasource.
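The triplet format can be illustrated with a short bash sketch. This is not the actual launch script, only a hypothetical illustration of how one mapping string splits into its three parts:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: split DB_SERVICE_PREFIX_MAPPING into its
# POOLNAME, DATABASETYPE, and PREFIX components (not the real launch script).
DB_SERVICE_PREFIX_MAPPING="cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL"

IFS=',' read -ra mappings <<< "$DB_SERVICE_PREFIX_MAPPING"
for m in "${mappings[@]}"; do
  service="${m%=*}"       # e.g. cloud-postgresql
  prefix="${m#*=}"        # e.g. CLOUD
  pool="${service%-*}"    # POOLNAME, e.g. cloud
  dbtype="${service##*-}" # DATABASETYPE, e.g. postgresql
  echo "pool-name=${pool} driver=${dbtype} env-prefix=${prefix}"
done
```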
9.6.1. JNDI Mappings for Datasources
For each POOLNAME-DATABASETYPE=PREFIX triplet defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate datasource when the image is run.

The first part (before the equal sign) of the DB_SERVICE_PREFIX_MAPPING value should be lowercase.

The DATABASETYPE determines the driver for the datasource. For more information about configuring a driver, see Modules, Drivers, and Generic Deployments. The JDK 8 image has drivers for postgresql and mysql configured by default.

Do not use any special characters for the POOLNAME parameter.
Support for using the Red Hat-provided internal datasource drivers with the JBoss EAP for OpenShift image is now deprecated. Red Hat recommends that you use JDBC drivers obtained from your database vendor for your JBoss EAP applications.
The following internal datasources are no longer provided with the JBoss EAP for OpenShift image:
- MySQL
- PostgreSQL
For more information about installing drivers, see Modules, Drivers, and Generic Deployments.
For more information on configuring JDBC drivers with JBoss EAP, see JDBC drivers in the JBoss EAP Configuration Guide.
Note that you can also create a custom layer to install these drivers and datasources if you want to add them to a provisioned server.
9.6.1.1. Datasource Configuration Environment Variables
To configure other datasource properties, use the following environment variables.
Be sure to replace the values for POOLNAME, DATABASETYPE, and PREFIX in the following variable names with the appropriate values. These replaceable values are described in this section and in the Datasources section.
Variable Name | Description |
---|---|
POOLNAME_DATABASETYPE_SERVICE_HOST |
Defines the database server’s host name or IP address to be used in the datasource’s
Example value: |
POOLNAME_DATABASETYPE_SERVICE_PORT | Defines the database server’s port for the datasource.
Example value: |
PREFIX_BACKGROUND_VALIDATION |
When set to |
PREFIX_BACKGROUND_VALIDATION_MILLIS |
Specifies frequency of the validation, in milliseconds, when the |
PREFIX_CONNECTION_CHECKER | Specifies a connection checker class that is used to validate connections for the particular database in use.
Example value: |
PREFIX_DATABASE | Defines the database name for the datasource.
Example value: |
PREFIX_DRIVER | Defines Java database driver for the datasource.
Example value: |
PREFIX_EXCEPTION_SORTER | Specifies the exception sorter class that is used to properly detect and clean up after fatal database connection exceptions.
Example value: |
PREFIX_JNDI |
Defines the JNDI name for the datasource. Defaults to
Example value: |
PREFIX_JTA | Defines Jakarta Transactions option for the non-XA datasource. The XA datasources are already Jakarta Transactions capable by default.
Defaults to |
PREFIX_MAX_POOL_SIZE | Defines the maximum pool size option for the datasource.
Example value: |
PREFIX_MIN_POOL_SIZE | Defines the minimum pool size option for the datasource.
Example value: |
PREFIX_NONXA |
Defines the datasource as a non-XA datasource. Defaults to |
PREFIX_PASSWORD | Defines the password for the datasource.
Example value: |
PREFIX_TX_ISOLATION | Defines the java.sql.Connection transaction isolation level for the datasource.
Example value: |
PREFIX_URL | Defines connection URL for the datasource.
Example value: |
PREFIX_USERNAME | Defines the username for the datasource.
Example value: |
When running this image in OpenShift, the POOLNAME_DATABASETYPE_SERVICE_HOST and POOLNAME_DATABASETYPE_SERVICE_PORT environment variables are set up automatically from the database service definition in the OpenShift application template, while the others are configured in the template directly as env entries in the container definitions under each pod template.
9.6.1.2. Examples
These examples show how the value of the DB_SERVICE_PREFIX_MAPPING environment variable influences datasource creation.
9.6.1.2.1. Single Mapping
Consider the value test-postgresql=TEST.

This creates a datasource named java:jboss/datasources/test_postgresql. Additionally, all of the required settings, such as the password and username, are expected to be provided as environment variables with the TEST_ prefix, for example TEST_USERNAME and TEST_PASSWORD.
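Putting the single mapping together, the container environment might look like the following sketch. The username, password, and database values are illustrative placeholders, not defaults:

```shell
# Hypothetical environment for the test-postgresql=TEST mapping.
# All values below are illustrative placeholders.
export DB_SERVICE_PREFIX_MAPPING="test-postgresql=TEST"
export TEST_USERNAME="demo-user"
export TEST_PASSWORD="demo-password"
export TEST_DATABASE="demo-db"
echo "datasource will be bound as java:jboss/datasources/test_postgresql"
```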
9.6.1.2.2. Multiple Mappings
You can specify multiple datasource mappings.
Always separate multiple datasource mappings with a comma.
Consider the following value for the DB_SERVICE_PREFIX_MAPPING environment variable: cloud-postgresql=CLOUD,test-mysql=TEST_MYSQL.

This creates the following two datasources:

- java:jboss/datasources/test_mysql
- java:jboss/datasources/cloud_postgresql

You can then use the TEST_MYSQL prefix to configure settings such as the username and password for the MySQL datasource, for example TEST_MYSQL_USERNAME. For the PostgreSQL datasource, use the CLOUD_ prefix, for example CLOUD_USERNAME.
9.7. Clustering
9.7.1. Configuring a JGroups Discovery Mechanism
To enable JBoss EAP clustering on OpenShift, configure the JGroups protocol stack in your JBoss EAP configuration to use either the kubernetes.KUBE_PING or the dns.DNS_PING discovery mechanism.

Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build.

The instructions below use environment variables to configure the discovery mechanism for the JBoss EAP for OpenShift image.

If you use one of the available application templates to deploy an application on top of the JBoss EAP for OpenShift image, the default discovery mechanism is dns.DNS_PING.

The dns.DNS_PING and kubernetes.KUBE_PING discovery mechanisms are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the dns.DNS_PING mechanism for discovery and the other using the kubernetes.KUBE_PING mechanism. Similarly, when performing a rolling upgrade, the discovery mechanism must be identical for both the source and the target clusters.
9.7.1.1. Configuring KUBE_PING
To use the KUBE_PING JGroups discovery mechanism:

1. The JGroups protocol stack must be configured to use KUBE_PING as the discovery mechanism. You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to kubernetes.KUBE_PING:

   JGROUPS_PING_PROTOCOL=kubernetes.KUBE_PING

2. The KUBERNETES_NAMESPACE environment variable must be set to your OpenShift project name. If not set, the server behaves as a single-node cluster (a "cluster of one"). For example:

   KUBERNETES_NAMESPACE=PROJECT_NAME

3. The KUBERNETES_LABELS environment variable should be set. This should match the label set at the service level. If not set, pods outside of your application (albeit in your namespace) will try to join. For example:

   KUBERNETES_LABELS=application=APP_NAME

4. Authorization must be granted to the service account the pod is running under so that it is allowed to access the Kubernetes REST API. This is done using the OpenShift CLI. The following example uses the default service account in the current project's namespace:

   oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)

   Using the eap-service-account in the project namespace:

   oc policy add-role-to-user view system:serviceaccount:$(oc project -q):eap-service-account -n $(oc project -q)
Note: See Prepare OpenShift for Application Deployment for more information on adding policies to service accounts.
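Once the service account has the view role, the variables from the steps above can be applied to an existing deployment config in one command. The deployment name (eap-app) and the label value are illustrative assumptions:

```shell
# Hypothetical sketch: enable KUBE_PING on an existing deployment config.
# dc/eap-app and the label value are assumptions for this example.
oc set env dc/eap-app \
  JGROUPS_PING_PROTOCOL=kubernetes.KUBE_PING \
  KUBERNETES_NAMESPACE="$(oc project -q)" \
  KUBERNETES_LABELS=application=eap-app
```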
9.7.1.2. Configuring DNS_PING
To use the DNS_PING JGroups discovery mechanism:

1. The JGroups protocol stack must be configured to use DNS_PING as the discovery mechanism. You can do this by setting the JGROUPS_PING_PROTOCOL environment variable to dns.DNS_PING:

   JGROUPS_PING_PROTOCOL=dns.DNS_PING

2. The OPENSHIFT_DNS_PING_SERVICE_NAME environment variable must be set to the name of the ping service for the cluster:

   OPENSHIFT_DNS_PING_SERVICE_NAME=PING_SERVICE_NAME

3. The OPENSHIFT_DNS_PING_SERVICE_PORT environment variable should be set to the port number on which the ping service is exposed. The DNS_PING protocol attempts to discern the port from the SRV records; otherwise it defaults to 8888:

   OPENSHIFT_DNS_PING_SERVICE_PORT=PING_PORT

4. A ping service that exposes the ping port must be defined. This service should be headless (clusterIP: None) and must have the following:

   - The port must be named.
   - The service must carry the service.alpha.kubernetes.io/tolerate-unready-endpoints annotation and the publishNotReadyAddresses property, both set to true.

   Note:
   - Use both the service.alpha.kubernetes.io/tolerate-unready-endpoints annotation and the publishNotReadyAddresses property to ensure that the ping service works in both older and newer OpenShift releases.
   - Omitting these settings results in each node forming its own "cluster of one" during startup. Each node then merges its cluster into the other nodes' clusters after startup, because the other nodes are not detected until after they have started.

   kind: Service
   apiVersion: v1
   spec:
     publishNotReadyAddresses: true
     clusterIP: None
     ports:
     - name: ping
       port: 8888
     selector:
       deploymentConfig: eap-app
   metadata:
     name: eap-app-ping
     annotations:
       service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
       description: "The JGroups ping port for clustering."
DNS_PING does not require any modifications to the service account and works using the default permissions.
9.7.2. Configuring JGroups to Encrypt Cluster Traffic
To encrypt cluster traffic for JBoss EAP on OpenShift, you must configure the JGroups protocol stack in your JBoss EAP configuration to use either the SYM_ENCRYPT or ASYM_ENCRYPT protocol.

Although you can use a custom standalone-openshift.xml configuration file, it is recommended that you use environment variables to configure JGroups in your image build.

The instructions below use environment variables to configure the protocol for cluster traffic encryption for the JBoss EAP for OpenShift image.

The SYM_ENCRYPT and ASYM_ENCRYPT protocols are not compatible with each other. It is not possible to form a supercluster out of two independent child clusters, with one using the SYM_ENCRYPT protocol for the encryption of cluster traffic and the other using the ASYM_ENCRYPT protocol. Similarly, when performing a rolling upgrade, the protocol must be identical for both the source and the target clusters.
9.7.2.1. Configuring SYM_ENCRYPT
To use the SYM_ENCRYPT protocol to encrypt JGroups cluster traffic:

1. The JGroups protocol stack must be configured to use SYM_ENCRYPT as the encryption protocol. You can do this by setting the JGROUPS_ENCRYPT_PROTOCOL environment variable to SYM_ENCRYPT:

   JGROUPS_ENCRYPT_PROTOCOL=SYM_ENCRYPT

2. The JGROUPS_ENCRYPT_SECRET environment variable must be set to the name of the secret containing the JGroups keystore file used for securing the JGroups communications. If not set, cluster communication is not encrypted and a warning is issued. For example:

   JGROUPS_ENCRYPT_SECRET=eap7-app-secret

3. The JGROUPS_ENCRYPT_KEYSTORE_DIR environment variable must be set to the directory path of the keystore file within the secret specified by the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example:

   JGROUPS_ENCRYPT_KEYSTORE_DIR=/etc/jgroups-encrypt-secret-volume

4. The JGROUPS_ENCRYPT_KEYSTORE environment variable must be set to the name of the keystore file within the secret specified by the JGROUPS_ENCRYPT_SECRET variable. If not set, cluster communication is not encrypted and a warning is issued. For example:

   JGROUPS_ENCRYPT_KEYSTORE=jgroups.jceks

5. The JGROUPS_ENCRYPT_NAME environment variable must be set to the name associated with the server’s certificate. If not set, cluster communication is not encrypted and a warning is issued. For example:

   JGROUPS_ENCRYPT_NAME=jgroups

6. The JGROUPS_ENCRYPT_PASSWORD environment variable must be set to the password used to access the keystore and the certificate. If not set, cluster communication is not encrypted and a warning is issued. For example:

   JGROUPS_ENCRYPT_PASSWORD=mypassword
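The keystore referenced by these variables must exist inside the secret before deployment. The following is a minimal sketch of creating one with keytool and wrapping it in a secret, reusing the example values from the steps (alias jgroups, file jgroups.jceks, password mypassword, secret eap7-app-secret); the key algorithm and size are assumptions to adjust for your environment.

```shell
# Sketch, assuming the example values above; adjust the alias, passwords,
# key algorithm, and secret name for your environment.
keytool -genseckey -alias jgroups \
  -storetype JCEKS -keystore jgroups.jceks \
  -storepass mypassword -keypass mypassword \
  -keyalg AES -keysize 128

# Store the keystore in the OpenShift secret named by JGROUPS_ENCRYPT_SECRET.
oc create secret generic eap7-app-secret --from-file=jgroups.jceks
```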
9.7.2.2. Configuring ASYM_ENCRYPT
JBoss EAP 7.3 includes a new version of the ASYM_ENCRYPT protocol. The previous version of the protocol is deprecated. If you specify the JGROUPS_CLUSTER_PASSWORD environment variable, the deprecated version of the protocol is used and a warning is printed in the pod log.

To use the ASYM_ENCRYPT protocol to encrypt JGroups cluster traffic, specify ASYM_ENCRYPT as the encryption protocol, and configure it to use a keystore configured in the elytron subsystem.
-e JGROUPS_ENCRYPT_PROTOCOL="ASYM_ENCRYPT" \
-e JGROUPS_ENCRYPT_SECRET="encrypt_secret" \
-e JGROUPS_ENCRYPT_NAME="encrypt_name" \
-e JGROUPS_ENCRYPT_PASSWORD="encrypt_password" \
-e JGROUPS_ENCRYPT_KEYSTORE="encrypt_keystore" \
-e JGROUPS_CLUSTER_PASSWORD="cluster_password"
9.8. Health Checks
The JBoss EAP for OpenShift image utilizes the liveness and readiness probes included in OpenShift by default. In addition, this image includes Eclipse MicroProfile Health, as discussed in the Configuration Guide.
The following table shows the values necessary for these health checks to pass. If the status is anything other than the values shown, the check fails and the container is restarted according to the image’s restart policy.
Table 9.5. Liveness and Readiness Checks
Performed Test | Liveness | Readiness |
---|---|---|
Server Status | Any status | Running |
Boot Errors | None | None |
Deployment Status [a] |
N/A or no |
N/A or no |
Eclipse MicroProfile Health [b] |
N/A or |
N/A or |
[a] N/A is only a valid state when no deployments are present.
[b] N/A is only a valid state when the microprofile-health-smallrye subsystem has been disabled.
9.9. Messaging
9.9.1. Configuring External Red Hat AMQ Brokers
You can configure the JBoss EAP for OpenShift image with environment variables to connect to external Red Hat AMQ brokers.
Example OpenShift Application Definition
The following example uses a template to create a JBoss EAP application connected to an external Red Hat AMQ 7 broker.
Example: JDK 8
oc new-app eap73-amq-s2i \
 -p APPLICATION_NAME=eap73-mq \
 -p MQ_USERNAME=MY_USERNAME \
 -p MQ_PASSWORD=MY_PASSWORD
Example: JDK 11
oc new-app eap73-openjdk11-amq-s2i \
 -p APPLICATION_NAME=eap73-mq \
 -p MQ_USERNAME=MY_USERNAME \
 -p MQ_PASSWORD=MY_PASSWORD
The template used in this example provides valid default values for the required parameters. If you do not use a template and provide your own parameters, be aware that the MQ_SERVICE_PREFIX_MAPPING name must match the APPLICATION_NAME name, appended with "-amq7=MQ".
9.10. Security Domains
To configure a new security domain, the user must define the SECDOMAIN_NAME environment variable. This results in the creation of a security domain named after the environment variable. The user may also define the following environment variables to customize the domain:
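As an illustration, the customization variables can be supplied through the deployment environment. The deployment name (eap-app), domain name, and property file names below are hypothetical placeholders, not defaults:

```shell
# Hypothetical sketch: define an additional security domain via env vars.
# dc/eap-app and all values here are assumptions for illustration.
oc set env dc/eap-app \
  SECDOMAIN_NAME=myDomain \
  SECDOMAIN_USERS_PROPERTIES=myusers.properties \
  SECDOMAIN_ROLES_PROPERTIES=myroles.properties
```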
Table 9.6. Security Domains
Variable name | Description |
---|---|
SECDOMAIN_NAME | Defines an additional security domain.
Example value: |
SECDOMAIN_PASSWORD_STACKING |
If defined, the
Example value: |
SECDOMAIN_LOGIN_MODULE | The login module to be used.
Defaults to |
SECDOMAIN_USERS_PROPERTIES | The name of the properties file containing user definitions.
Defaults to |
SECDOMAIN_ROLES_PROPERTIES | The name of the properties file containing role definitions.
Defaults to |
9.11. HTTPS Environment Variables
Variable name | Description |
---|---|
HTTPS_NAME |
If defined along with
This should be the value specified as the alias name of your keystore if you created it with the
Example value: |
HTTPS_PASSWORD |
If defined along with
Example value: |
HTTPS_KEYSTORE |
If defined along with
Example value: |
9.12. Administration Environment Variables
Table 9.7. Administration Environment Variables
Variable name | Description |
---|---|
ADMIN_USERNAME |
If both this and
Example value: |
ADMIN_PASSWORD |
The password for the specified
Example value: |
9.13. S2I
The image includes S2I scripts and Maven.
Maven is currently supported only as a build tool for applications that are intended to be deployed on JBoss EAP-based containers (or related/descendant images) on OpenShift.
Only WAR deployments are supported at this time.
9.13.1. Custom Configuration
It is possible to add custom configuration files for the image. All files placed in the configuration/ directory are copied into EAP_HOME/standalone/configuration/. For example, to override the default configuration used in the image, add a custom standalone-openshift.xml into the configuration/ directory. See the example for such a deployment.
9.13.1.1. Custom Modules
It is possible to add custom modules. All files from the modules/ directory are copied into EAP_HOME/modules/. See the example for such a deployment.
9.13.2. Deployment Artifacts
By default, artifacts from the source target directory are deployed. To deploy from different directories, set the ARTIFACT_DIR environment variable in the BuildConfig definition. ARTIFACT_DIR is a comma-delimited list. For example:

ARTIFACT_DIR=app1/target,app2/target,app3/target
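The comma-delimited value splits as sketched below. This is only an illustration of the list format, not the actual S2I script:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: split ARTIFACT_DIR the way a comma-delimited
# list is consumed (illustration only, not the real S2I script).
ARTIFACT_DIR=app1/target,app2/target,app3/target

IFS=',' read -ra dirs <<< "$ARTIFACT_DIR"
for d in "${dirs[@]}"; do
  echo "deploying artifacts from: $d"
done
```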
9.13.3. Artifact Repository Mirrors
A repository in Maven holds build artifacts and dependencies of various types, for example, all of the project JARs, library JARs, plug-ins, or any other project specific artifacts. It also specifies locations from where to download artifacts while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom mirror repository.
Benefits of using a mirror are:
- Availability of a synchronized mirror, which is geographically closer and faster.
- Ability to have greater control over the repository content.
- Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories.
- Improved build times.
Often, a repository manager can serve as a local cache for a mirror. Assuming that the repository manager is already deployed and reachable externally at https://10.0.0.1:8443/repository/internal/, the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows:
1. Identify the name of the build configuration against which to apply the MAVEN_MIRROR_URL variable.

   oc get bc -o name
   buildconfig/eap

2. Update the build configuration of eap with the MAVEN_MIRROR_URL environment variable.

   oc env bc/eap MAVEN_MIRROR_URL="https://10.0.0.1:8443/repository/internal/"
   buildconfig "eap" updated

3. Verify the setting.

   oc env bc/eap --list
   # buildconfigs eap
   MAVEN_MIRROR_URL=https://10.0.0.1:8443/repository/internal/

4. Schedule a new build of the application.
During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build.
9.13.3.1. Secure Artifact Repository Mirror URLs
To prevent man-in-the-middle attacks through the Maven repository, JBoss EAP requires the use of secure URLs for artifact repository mirror URLs. The URL should specify a secure protocol ("https") and a secure port.

By default, if you specify an insecure URL, an error is returned. You can override this behavior using the property -Dinsecure.repositories=WARN.
9.13.4. Scripts
- run: This script uses the openshift-launch.sh script, which configures and starts JBoss EAP with the standalone-openshift.xml configuration.
- assemble: This script uses Maven to build the source, create a package (WAR), and move it to the EAP_HOME/standalone/deployments directory.
9.13.5. Custom Scripts
You can add custom scripts to run when starting a pod, before JBoss EAP is started. Any script that is valid to run when starting a pod can be added, including CLI scripts.
Two options are available for including scripts when starting JBoss EAP from an image:
- Mount a configmap to be executed as postconfigure.sh
- Add an install.sh script in the nominated installation directory
9.13.5.1. Mounting a configmap to execute custom scripts
Mount a configmap when you want to mount a custom script at runtime to an existing image (in other words, an image that has already been built).
To mount a configmap:
Create a configmap with the content you want to include in postconfigure.sh. For example, create a directory called extensions in the project root directory to include the scripts postconfigure.sh and extensions.cli, and run the following command:

$ oc create configmap jboss-cli --from-file=postconfigure.sh=extensions/postconfigure.sh --from-file=extensions.cli=extensions/extensions.cli
Mount the configmap into the pods via the deployment controller (dc).
$ oc set volume dc/eap-app --add --name=jboss-cli -m /opt/eap/extensions -t configmap --configmap-name=jboss-cli --default-mode='0755' --overwrite
Example postconfigure.sh
#!/usr/bin/env bash
set -x
echo "Executing postconfigure.sh"
$JBOSS_HOME/bin/jboss-cli.sh --file=$JBOSS_HOME/extensions/extensions.cli
Example extensions.cli
embed-server --std-out=echo --server-config=standalone-openshift.xml
:whoami
quit
9.13.5.2. Using install.sh to execute custom scripts
Use install.sh when you want to include the script as part of the image when it is built.
To execute custom scripts using install.sh:
1. In the git repository of the project that will be used during the S2I build, create a directory called .s2i.

2. Inside the .s2i directory, add a file called environment with the following content:

   $ cat .s2i/environment
   CUSTOM_INSTALL_DIRECTORIES=extensions

3. Create a directory called extensions.

4. In the extensions directory, create the file postconfigure.sh with contents similar to the following (replace placeholder code with appropriate code for your environment):

   $ cat extensions/postconfigure.sh
   #!/usr/bin/env bash
   echo "Executing patch.cli"
   $JBOSS_HOME/bin/jboss-cli.sh --file=$JBOSS_HOME/extensions/some-cli-example.cli

5. In the extensions directory, create the file install.sh with contents similar to the following (replace placeholder code with appropriate code for your environment):

   $ cat extensions/install.sh
   #!/usr/bin/env bash
   set -x
   echo "Running $PWD/install.sh"
   injected_dir=$1
   # copy any needed files into the target build.
   cp -rf ${injected_dir} $JBOSS_HOME/extensions
9.13.6. Environment Variables
You can influence the way the build is executed by supplying environment variables to the s2i build command. The environment variables that can be supplied are:
Table 9.8. s2i Environment Variables
Variable name | Description |
---|---|
ARTIFACT_DIR |
The
Example value: |
ENABLE_GENERATE_DEFAULT_DATASOURCE |
Optional. When included with the value |
GALLEON_PROVISION_DEFAULT_FAT_SERVER |
Optional. When included with the value |
GALLEON_PROVISION_LAYERS | Optional. Instructs the S2I process to provision the specified layers. The value is a comma-separated list of layers to provision, including one base layer and any number of decorator layers.
Example value: |
HTTP_PROXY_HOST | Host name or IP address of an HTTP proxy for Maven to use.
Example value: |
HTTP_PROXY_PORT | TCP port of an HTTP proxy for Maven to use.
Example value: |
HTTP_PROXY_USERNAME |
If supplied with
Example value: |
HTTP_PROXY_PASSWORD |
If supplied with
Example value: |
HTTP_PROXY_NONPROXYHOSTS | If supplied, a configured HTTP proxy will ignore these hosts.
Example value: |
MAVEN_ARGS | Overrides the arguments supplied to Maven during build.
Example value: |
MAVEN_ARGS_APPEND | Appends user arguments supplied to Maven during build.
Example value: |
MAVEN_MIRROR_URL | URL of a Maven Mirror/repository manager to configure.
Example value: Note that the specified URL should be secure. For details see Section 9.13.3.1, “Secure Artifact Repository Mirror URLs”. |
MAVEN_CLEAR_REPO | Optionally clear the local Maven repository after the build. If the server present in the image is strongly coupled to the local cache, the cache is not deleted and a warning is printed.
Example value: |
APP_DATADIR | If defined, the directory in the source from which data files are copied.
Example value: |
DATA_DIR |
Directory in the image where data from
Example value: |
For more information, see Build and Run a Java Application on the JBoss EAP for OpenShift Image, which uses Maven and the S2I scripts included in the JBoss EAP for OpenShift image.
9.14. Single Sign-On image
This image includes the Red Hat Single Sign-On-enabled applications.
For more information on deploying the Red Hat Single Sign-On for OpenShift image with the JBoss EAP for OpenShift image, see Deploy the Red Hat Single Sign-On-enabled JBoss EAP Image on the Red Hat Single Sign-On for OpenShift guide.
Table 9.9. Single Sign-On environment variables
Variable name | Description |
---|---|
SSO_URL | URL of the Single Sign-On server. |
SSO_REALM | Single Sign-On realm for the deployed applications. |
SSO_PUBLIC_KEY | Public key of the Single Sign-On realm. This field is optional, but if omitted it can leave the applications vulnerable to man-in-the-middle attacks. |
SSO_USERNAME | Single Sign-On user required to access the Single Sign-On REST API.
Example value: |
SSO_PASSWORD |
Password for the Single Sign-On user defined by the
Example value: |
SSO_SAML_KEYSTORE |
Keystore location for SAML. Defaults to |
SSO_SAML_KEYSTORE_PASSWORD |
Keystore password for SAML. Defaults to |
SSO_SAML_CERTIFICATE_NAME |
Alias for keys/certificate to use for SAML. Defaults to |
SSO_BEARER_ONLY | Single Sign-On client access type. (Optional)
Example value: |
SSO_CLIENT |
Path for Single Sign-On redirects back to the application. Defaults to match |
SSO_ENABLE_CORS |
If |
SSO_SECRET | The Single Sign-On client secret for confidential access.
Example value: |
SSO_DISABLE_SSL_CERTIFICATE_VALIDATION |
If
Example value: |
9.15. Transaction Recovery
When a cluster is scaled down, it is possible for transaction branches to be in doubt. There is a technology preview automated recovery pod that is meant to complete these branches, but there are rare scenarios, such as a network split, where the recovery may fail. In these cases, manual transaction recovery might be necessary.
The Automated Transaction Recovery feature is supported only on OpenShift 3, and it is provided as Technology Preview only.
For OpenShift 4 you can use the EAP Operator to safely recover transactions. See EAP Operator for Safe Transaction Recovery.
9.15.1. Unsupported Transaction Recovery Scenarios
JTS transactions
Because the network endpoint of the parent is encoded in recovery coordinator IORs, recovery cannot work reliably if either the child or parent node recovers with either a new IP address, or if it is intended to be accessed using a virtualized IP address.
XTS transactions
XTS does not work in a clustered scenario for recovery purposes. See JBTM-2742 for details.
Transactions propagated over JBoss Remoting
- Transactions propagated over JBoss Remoting are unsupported with OpenShift 3.
- Transactions propagated over JBoss Remoting are supported with OpenShift 4 and the EAP operator.
Transactions propagated over XATerminator
Because the EIS is intended to be connected to a single instance of a Java EE application server, there are no well-defined ways to couple these processes.
9.15.2. Manual Transaction Recovery Process
The goal of the following procedure is to find and manually resolve in-doubt branches in cases where automated recovery has failed.
9.15.2.1. Caveats
This procedure only describes how to manually recover transactions that were wholly self-contained within a single JVM. The procedure does not describe how to recover JTA transactions that have been propagated to other JVMs.
There are various network partition scenarios in which OpenShift might start multiple instances of the same pod with the same IP address and same node name and where, due to the partition, the old pod is still running. During manual recovery, this might result in a situation where you might be connected to a pod that has a stale view of the object store. If you think you are in this scenario, it is recommended that all JBoss EAP pods be shut down to ensure that none of the resource managers or object stores are in use.
When you enlist a resource in an XA transaction, it is your responsibility to ensure that each resource type is supported for recovery. For example, it is known that PostgreSQL and MySQL are well-behaved with respect to recovery, but for others, such as A-MQ and JDV resource managers, you should check the documentation of the specific OpenShift release.
The deployment must use a JDBC object store.
The transaction manager relies on the uniqueness of node identifiers. The maximum byte length of an XID is set by the XA specification and cannot be changed. Due to the data that the JBoss EAP for OpenShift image must include in the XID, this leaves room for 23 bytes in the node identifier.
OpenShift coerces the node identifier to fit this 23-byte limit:
- For all node names, even those under 23 bytes, the - (dash) character is stripped out.
- If the name is still over 23 bytes, characters are truncated from the beginning of the name until the length of the name is within the 23-byte limit.
However, this process might impact the uniqueness of the identifier. For example, the names aaa123456789012345678m0jwh and bbb123456789012345678m0jwh are both truncated to 123456789012345678m0jwh, which breaks the expected uniqueness of the names. In another example, this-pod-is-m0jwh and thispod-is-m0jwh are both truncated to thispodism0jwh, again breaking the uniqueness of the names.
It is your responsibility to ensure that the node names you configure are unique, keeping in mind the above truncation process.
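The coercion rules above can be sketched as a small Bash helper; truncate_node_id is an illustrative name for this documentation, not something shipped in the image:

```shell
# Illustrative sketch of the node-identifier coercion described above:
# strip '-' characters, then keep only the trailing 23 characters.
truncate_node_id() {
  local name="${1//-/}"     # strip all '-' (dash) characters
  if [ "${#name}" -gt 23 ]; then
    name="${name: -23}"     # truncate from the beginning; keep the trailing 23
  fi
  printf '%s\n' "$name"
}

truncate_node_id "aaa123456789012345678m0jwh"   # -> 123456789012345678m0jwh
truncate_node_id "this-pod-is-m0jwh"            # -> thispodism0jwh
```

Both collisions in the example above fall out of these two rules, which is why the trailing characters of your node names must be unique.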
9.15.2.2. Prerequisite
It is assumed the OpenShift instance has been configured with a JDBC store, and that the store tables are partitioned using a table prefix corresponding to the pod name. This should be automatic whenever a JBoss EAP deployment is in use. This is different from the automated recovery example, which uses a file store with split directories on a shared volume. You can verify that the JBoss EAP instance is using a JDBC object store by looking at the configuration of the transactions subsystem in a running pod:
Determine if the /opt/eap/standalone/configuration/standalone-openshift.xml configuration file contains an element for the transactions subsystem:

<subsystem xmlns="urn:jboss:domain:transactions:3.0">

If the JDBC object store is in use, then there is an entry similar to the following:

<jdbc-store datasource-jndi-name="java:jboss/datasources/jdbcstore_postgresql"/>

Note: The JNDI name identifies the datasource used to store the transaction logs.
9.15.2.3. Procedure
The following procedure details the process of manual transaction recovery solely for datasources.
- Use the database vendor tooling to list the XIDs (transaction branch identifiers) for in-doubt branches. It is necessary to list XIDs for all datasources that were in use by any deployments running on the pod that failed or was scaled down. Refer to the vendor documentation for the database product in use.
- For each such XID, determine which pod created the transaction and check to see if that pod is still running.
- If it is running, then leave the branch alone.
- If the pod is not running, assume it was removed from the cluster and you must apply the manual resolution procedure described here. Look in the transaction log storage that was used by the failed pod to see if there is a corresponding transaction log:
- If there is a log, then manually commit the XID using the vendor tooling.
- If there is not a log, assume it is an orphaned branch and roll back the XID using the vendor tooling.
The rest of this procedure explains in detail how to carry out each of these steps.
9.15.2.3.1. Resolving In-doubt Branches
First, find all the resources that the deployment is using.
It is recommended that you do this using the JBoss EAP management CLI. Although the resources should be defined in the JBoss EAP standalone-openshift.xml
configuration file, there are other ways they can be made available to the transaction subsystem within the application server. For example, this can be done using a file in a deployment, or dynamically using the management CLI at runtime.
- Open a terminal on a pod running a JBoss EAP instance in the cluster of the failed pod. If there is no such pod, scale up to one.
- Create a management user using the /opt/eap/bin/add-user.sh script.
- Log into the management CLI using the /opt/eap/bin/jboss-cli.sh script.
- List the datasources configured on the server. These are the ones that may contain in-doubt transaction branches.
/subsystem=datasources:read-resource
{
    "outcome" => "success",
    "result" => {
        "data-source" => {
            "ExampleDS" => undefined,
            ...
        },
        ...
    }
}
Once you have the list, find the connection URL for each of the datasources. For example:
/subsystem=datasources/data-source=ExampleDS:read-attribute(name=connection-url)
{
    "outcome" => "success",
    "result" => "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE",
    "response-headers" => {"process-state" => "restart-required"}
}
Connect to each datasource and list any in-doubt transaction branches.
Note: The table name that stores in-doubt branches will be different for each datasource vendor.
JBoss EAP has a default SQL query tool (H2) that you can use to check each database. For example:
java -cp /opt/eap/modules/system/layers/base/com/h2database/h2/main/h2-1.3.173.jar \
  org.h2.tools.Shell \
  -url "jdbc:postgresql://localhost:5432/postgres" \
  -user sa \
  -password sa \
  -sql "select gid from pg_prepared_xacts;"
Alternatively, you can use the resource’s native tooling. For example, for a PostgreSQL datasource called sampledb, you can use the OpenShift client tools to remotely log in to the pod and query the in-doubt transaction table:

$ oc rsh postgresql-2-vwf9n    # rsh to the named pod
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# select gid from pg_prepared_xacts;
131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA
9.15.2.3.2. Extract the Global Transaction ID and Node Identifier from Each XID
When all XIDs for in-doubt branches are identified, convert the XIDs into a format that you can compare to the logs stored in the transaction tables of the transaction manager.
For example, the following Bash script can be used to perform this conversion. Assuming that $PG_XID
holds the XID from the select statement above, then the JBoss EAP transaction ID can be obtained as follows:
PG_XID="$1"
IFS='_' read -ra lines <<< "$PG_XID"
[[ "${lines[0]}" = 131077 ]] || exit 0; # this script only works for our own FORMAT ID
PG_TID=${lines[1]}
a=($(echo "$PG_TID" | base64 -d | xxd -ps | tr -d '\n' | while read -N16 i ; do echo 0x$i ; done))
b=($(echo "$PG_TID" | base64 -d | xxd -ps | tr -d '\n' | while read -N8 i ; do echo 0x$i ; done))
c=("${b[@]:4}") # put the last 3 32-bit hexadecimal numbers into array c

# the negative elements of c need special handling since printf below only works
# with positive hexadecimal numbers
for i in "${!c[@]}"; do
  arg=${c[$i]}
  # inspect the MSB to see if arg is negative - if so convert it from a 2’s complement number
  [[ $(($arg>>31)) = 1 ]] && x=$(echo "obase=16; $(($arg - 0x100000000 ))" | bc) || x=$arg
  if [[ ${x:0:1} = \- ]] ; then # see if the first character is a minus sign
    neg[$i]="-"
    c[$i]=0x${x:1} # strip the minus sign and make it hex for use with printf below
  else
    neg[$i]=""
    c[$i]=$x
  fi
done

EAP_TID=$(printf %x:%x:${neg[0]}%x:${neg[1]}%x:${neg[2]}%x ${a[0]} ${a[1]} ${c[0]} ${c[1]} ${c[2]})
After completion, the $EAP_TID
variable holds the global transaction ID of the transaction that created this XID. The node identifier of the pod that started the transaction is given by the output of the following bash command:
echo "$PG_TID"| base64 -d | tail -c +29
The node identifier starts from the 29th character of the PostgreSQL global transaction ID field.
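As a worked example, applying that command to the in-doubt XID listed earlier recovers the (truncated) node name of the pod that started the transaction:

```shell
# PG_TID is the middle, base64-encoded field of the example XID shown earlier
# in this section.
PG_TID="AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3"

# Bytes 29 onward of the decoded value hold the node identifier.
echo "$PG_TID" | base64 -d | tail -c +29   # -> p-jta-crash-rec-3-p2ccw
```

The resulting name is then compared against the node names of the running pods, as described in the next steps.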
- If this pod is still running, then leave this in-doubt branch alone since the transaction is still in flight.
- If this pod is not running, then you need to search the relevant transaction log storage for the transaction log. The log storage is located in a JDBC table, which is named following the os<node-identifier>jbosststxtable pattern.
- If there is no such table, leave the branch alone as it is owned by some other transaction manager. The URL for the datasource containing this table is defined in the transaction subsystem description shown below.
- If there is such a table, look for an entry that matches the global transaction ID.
- If there is an entry in the table that matches the global transaction ID, then the in-doubt branch needs to be committed using the datasource vendor tooling as described below.
- If there is no such entry, then the branch is an orphan and can safely be rolled back.
An example of how to commit an in-doubt PostgreSQL branch is shown below:
$ oc rsh postgresql-2-vwf9n
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# commit prepared '131077_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGHAtanRhLWNyYXNoLXJlYy0zLXAyY2N3_AAAAAAAAAAAAAP//rBEAB440GK1aJ72oAAAAGgAAAAEAAAAA';
Repeat this procedure for all datasources and in-doubt branches.
9.15.2.3.3. Obtain the List of Node Identifiers of All Running JBoss EAP Instances in Any Cluster that Can Contact the Resource Managers
Node identifiers are configured to be the same name as the pod name. You can obtain the pod names in use using the oc
command. Use the following command to list the running pods:
$ oc get pods | grep Running
eap-manual-tx-recovery-app-4-26p4r   1/1       Running   0          23m
postgresql-2-vwf9n                   1/1       Running   0          41m
For each running pod, look in the output of the pod’s log and obtain the node name. For example, for first pod shown in the above output, use the following command:
$ oc logs eap-manual-tx-recovery-app-4-26p4r | grep "jboss.node.name" | head -1
jboss.node.name = tx-recovery-app-4-26p4r
The JBoss node name is always limited to a maximum of 23 characters; longer names are truncated by removing characters from the beginning and retaining the trailing characters until the 23-character limit is met.
9.15.2.3.4. Find the Transaction Logs
- The transaction logs reside in a JDBC-backed object store. The JNDI name of this store is defined in the transactions subsystem definition of the JBoss EAP configuration file.
- Look in the configuration file to find the datasource definition corresponding to the above JNDI name.
- Use the JNDI name to derive the connection URL.
- You can use the URL to connect to the database and issue a select query on the relevant in-doubt transaction table. Alternatively, if you know which pod the database is running on, and you know the name of the database, it might be easier to open an OpenShift remote shell into the pod and use the database tooling directly.
For example, if the JDBC store is hosted by a PostgreSQL database called sampledb running on pod postgresql-2-vwf9n, then you can find the transaction logs using the following commands.

Note: The ostxrecoveryapp426p4rjbosststxtable table name listed in the following command has been chosen because it follows the pattern for JDBC table names holding the log storage entries. In your environment the table name will have a similar form:
- It starts with the os prefix.
- The part in the middle is derived from the JBoss node name above, with any "-" (dash) characters deleted.
- Finally, the jbosststxtable suffix is appended to create the final name of the table.
$ oc rsh postgresql-2-vwf9n
sh-4.2$ psql sampledb
psql (9.5.7)
Type "help" for help.

sampledb=# select uidstring from ostxrecoveryapp426p4rjbosststxtable
           where TYPENAME='StateManager/BasicAction/TwoPhaseCoordinator/AtomicAction' ;
              uidstring
-------------------------------------
 0:ffff0a81009d:33789827:5a68b2bf:40
(1 row)
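The naming pattern described in the note can be expressed as a short shell helper; txn_log_table is an illustrative name used only for this sketch:

```shell
# Illustrative sketch: derive the JDBC transaction-log table name from a
# JBoss node name by deleting '-' (dash) characters and wrapping the result
# with the fixed 'os' prefix and 'jbosststxtable' suffix.
txn_log_table() {
  printf 'os%sjbosststxtable\n' "${1//-/}"
}

txn_log_table "tx-recovery-app-4-26p4r"   # -> ostxrecoveryapp426p4rjbosststxtable
```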
9.15.2.3.5. Cleaning Up the Transaction Logs for Reconciled In-doubt Branches
Do not delete the log unless you are certain that there are no remaining in-doubt branches.
When all the branches for a given transaction are complete, and all potential resources managers have been checked, including A-MQ and JDV, it is safe to delete the transaction log.
Issue the following command, specifying the transaction log to be removed using the appropriate uidstring:
DELETE FROM ostxrecoveryapp426p4rjbosststxtable where uidstring = UIDSTRING
If you do not delete the log, then completed transactions which failed after prepare, but which have now been resolved, will never be removed from the transaction log storage. The consequence of this is that unnecessary storage is used and future manual reconciliation will be more difficult.
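Because uidstring is a string column, the value must be quoted in the actual SQL statement. A small sketch that builds the DELETE statement for the example uidstring found earlier (the table name is the example one used throughout this section):

```shell
# Build the cleanup statement for a given uidstring; both values below are
# the examples used earlier in this section, not values from your environment.
UIDSTRING="0:ffff0a81009d:33789827:5a68b2bf:40"
printf "DELETE FROM ostxrecoveryapp426p4rjbosststxtable WHERE uidstring = '%s';\n" "$UIDSTRING"
```

The printed statement can then be pasted into the psql session shown above.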
9.16. Included JBoss Modules
The table below lists the JBoss modules included in the JBoss EAP for OpenShift image.
Table 9.10. Included JBoss Modules
JBoss Module |
---|
org.jboss.as.clustering.common |
org.jboss.as.clustering.jgroups |
org.jboss.as.ee |
org.jboss.logmanager.ext |
org.jgroups |
org.openshift.ping |
net.oauth.core |
9.17. EAP Operator: API Information
The EAP operator introduces the following APIs:
9.17.1. WildFlyServer
WildFlyServer
defines a custom JBoss EAP resource.
Table 9.11. WildFlyServer
Field | Description | Scheme | Required |
---|---|---|---|
| Standard object’s metadata | false | |
| Specification of the desired behavior of the JBoss EAP deployment. | true |
| Most recent observed status of the JBoss EAP deployment. Read-only. | false |
9.17.2. WildFlyServerList
WildFlyServerList
defines a list of JBoss EAP deployments.
Table 9.12. WildFlyServerList
Field | Description | Scheme | Required |
---|---|---|---|
| Standard list’s metadata | false | |
|
List of | true |
9.17.3. WildFlyServerSpec
WildFlyServerSpec
is a specification of the desired behavior of the JBoss EAP resource.
It uses a StatefulSet
with a pod spec that mounts the volume specified by storage on /opt/jboss/wildfly/standalone/data.
Table 9.13. WildFlyServerSpec
Field | Description | Scheme | Required |
---|---|---|---|
| Name of the application image to be deployed | string | false |
| The desired number of replicas for the application | int32 | true |
|
Spec to specify how a standalone configuration can be read from a | false | |
|
Storage spec to specify how storage should be used. If omitted, an | false | |
| Name of the ServiceAccount to use to run the JBoss EAP pods | string | false |
|
List of environment variables present in the containers from | false | |
| List of environment variables present in the containers | false |
|
List of secret names to mount as volumes in the containers. Each secret is mounted as a read-only volume at | string | false |
|
List of | string | false |
| Disables the creation of a route to the HTTP port of the application service (false if omitted) | boolean | false |
| Whether connections from the same client IP are passed to the same JBoss EAP instance/pod each time (false if omitted) | boolean | false |
9.17.4. StorageSpec
StorageSpec
defines the configured storage for a WildFlyServer
resource. If neither an EmptyDir
nor a volumeClaimTemplate
is defined, a default EmptyDir
is used.
The EAP Operator configures the StatefulSet
using information from this StorageSpec
to mount a volume dedicated to the standalone/data directory used by JBoss EAP to persist its own data (for example, the transaction log). If an EmptyDir
is used, the data does not survive a pod restart. If the application deployed on JBoss EAP relies on transactions, specify a volumeClaimTemplate
, so that the same persistent volume can be reused upon pod restarts.
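To make the storage discussion concrete, the following sketch writes a minimal WildFlyServer resource that requests persistent storage through a volumeClaimTemplate. The resource name, application image, replica count, and storage size are illustrative assumptions; verify the apiVersion against the CRD installed by your operator before applying.

```shell
# Write a minimal, hypothetical WildFlyServer resource; 'example-eap',
# the image reference, and '1Gi' are placeholder values.
cat > /tmp/wildflyserver-example.yaml <<'EOF'
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: example-eap
spec:
  applicationImage: "quay.io/example/eap-app:latest"
  replicas: 2
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 1Gi
EOF
echo "Wrote /tmp/wildflyserver-example.yaml"
# Review the file, then apply with: oc apply -f /tmp/wildflyserver-example.yaml
```

Because a volumeClaimTemplate is specified, the operator mounts the bound persistent volume on standalone/data, so transaction logs survive pod restarts.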
Table 9.14. StorageSpec
Field | Description | Scheme | Required |
---|---|---|---|
|
| false | |
|
A PersistentVolumeClaim spec to configure | false |
9.17.5. StandaloneConfigMapSpec
StandaloneConfigMapSpec
defines how JBoss EAP standalone configuration can be read from a ConfigMap
. If omitted, JBoss EAP uses its standalone.xml
configuration from its image.
Table 9.15. StandaloneConfigMapSpec
Field | Description | Scheme | Required |
---|---|---|---|
|
Name of the | string | true |
key |
Key of the | string | false |
9.17.6. WildFlyServerStatus
WildFlyServerStatus
is the most recent observed status of the JBoss EAP deployment. Read-only.
Table 9.16. WildFlyServerStatus
Field | Description | Scheme | Required |
---|---|---|---|
| The actual number of replicas for the application | int32 | true |
| Hosts that route to the application HTTP service | string | true |
| Status of the pods | true | |
| Number of pods that are under scale down cleaning process | int32 | true |
9.17.7. PodStatus
PodStatus
is the most recent observed status of a pod running the JBoss EAP application.
Table 9.17. PodStatus
Field | Description | Scheme | Required |
---|---|---|---|
| Name of the pod | string | true |
| IP address allocated to the pod | string | true |
| State of the pod in the scale down process. The state is ACTIVE by default, which means it serves requests. | string | false |
Revised on 2021-12-27 11:43:21 UTC