Installing and Managing AMQ Online on OpenShift
For use with AMQ Online 1.5
Chapter 1. Introduction
1.1. AMQ Online overview
Red Hat AMQ Online is an OpenShift-based mechanism for delivering messaging as a managed service. With Red Hat AMQ Online, administrators can configure a cloud-native, multi-tenant messaging service either in the cloud or on premises. Developers can provision messaging using the Red Hat AMQ Console. Multiple development teams can provision brokers and queues from the Console, without requiring each team to install, configure, deploy, maintain, or patch any software.
AMQ Online can provision different types of messaging depending on your use case. A user can request messaging resources by creating an address space. AMQ Online currently supports two address space types, standard and brokered, each with different semantics. The following diagrams illustrate the high-level architecture of each address space type:
Figure 1.1. Standard address space

Figure 1.2. Brokered address space

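For illustration, a messaging tenant requests messaging resources by creating an AddressSpace resource similar to the following minimal sketch (the name `myspace` is an assumed example; `standard-small` is the example address space plan created later in this guide):

apiVersion: enmasse.io/v1beta1
kind: AddressSpace
metadata:
  name: myspace
spec:
  type: standard    # or "brokered"
  plan: standard-small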
1.2. Supported features
The following table shows the supported features for AMQ Online 1.5:
Table 1.1. Supported features reference table
Feature | | Brokered address space | Standard address space
---|---|---|---
Address type | Queue | Yes | Yes
 | Topic | Yes | Yes
 | Multicast | No | Yes
 | Anycast | No | Yes
 | Subscription | No | Yes
Messaging protocol | AMQP | Yes | Yes
 | MQTT | Yes | Technology preview only
 | CORE | Yes | No
 | OpenWire | Yes | No
 | STOMP | Yes | No
Transports | TCP | Yes | Yes
 | WebSocket | Yes | Yes
Durable subscriptions | JMS durable subscriptions | Yes | No
 | "Named" durable subscriptions | No | Yes
JMS | Transaction support | Yes | No
 | Selectors on queues | Yes | No
 | Message ordering guarantees (including prioritization) | Yes | No
Scalability | Scalable distributed queues and topics | No | Yes
1.3. AMQ Online user roles
AMQ Online users can be defined broadly in terms of two user roles: service administrator and messaging tenant. Depending on the size of your organization, these roles might be performed by the same person or different people.
The service administrator performs the initial installation and any subsequent upgrades. The service administrator might also deploy and manage the messaging infrastructure, such as monitoring the routers, brokers, and administration components; and creating the address space plans and address plans. Installing and Managing AMQ Online on OpenShift provides information about how to set up and manage AMQ Online as well as configure the infrastructure and plans as a service administrator.
The messaging tenant can request messaging resources using cloud-native APIs and tools. The messaging tenant can also manage the users and permissions of a particular address space within the messaging system, as well as create address spaces and addresses. For more information about how to manage address spaces, addresses, and users, see Using AMQ Online on OpenShift Container Platform.
1.4. Supported configurations
For more information about AMQ Online supported configurations, see Red Hat AMQ 7 Supported Configurations.
1.5. Document conventions
1.5.1. Variable text
This document contains code blocks with variables that you must replace with values specific to your installation. In this document, such text is styled as italic monospace.
For example, in the following code block, replace `my-namespace` with the namespace used in your installation:
sed -i 's/amq-online-infra/my-namespace/' install/bundles/enmasse-with-standard-authservice/*.yaml
Chapter 2. Installing AMQ Online
AMQ Online can be installed by applying the YAML files using the OpenShift Container Platform command-line interface, or by running the Ansible playbook.
Prerequisites
To install AMQ Online, the OpenShift Container Platform command-line interface (CLI) is required.
- For more information about how to install the CLI on OpenShift 3.x, see the OpenShift Container Platform 3.11 documentation.
- For more information about how to install the CLI on OpenShift 4.1, see the OpenShift Container Platform 4.1 documentation.
- An OpenShift cluster is required.
- A user on the OpenShift cluster with `cluster-admin` permissions is required to set up the required cluster roles and API services.
2.1. Downloading AMQ Online
Procedure
- Download and extract the `amq-online-install.zip` file from the AMQ Online download site.

Although container images for AMQ Online are available in the Red Hat Container Catalog, we recommend that you use the YAML files provided instead.
2.2. Installing AMQ Online using a YAML bundle
The simplest way to install AMQ Online is to use the predefined YAML bundles.
Procedure
- Log in as a user with `cluster-admin` privileges:
  oc login -u system:admin
- (Optional) If you want to deploy to a project other than `amq-online-infra`, you must run the following command and substitute `amq-online-infra` in subsequent steps:
  sed -i 's/amq-online-infra/my-project/' install/bundles/amq-online/*.yaml
- Create the project where you want to deploy AMQ Online:
  oc new-project amq-online-infra
- Change the directory to the location of the downloaded release files.
- Deploy using the `amq-online` bundle:
  oc apply -f install/bundles/amq-online
- (Optional) Install the example plans and infrastructure configuration:
  oc apply -f install/components/example-plans
- (Optional) Install the example roles:
  oc apply -f install/components/example-roles
- (Optional) Install the `standard` authentication service:
  oc apply -f install/components/example-authservices/standard-authservice.yaml
- (Optional) Install the Service Catalog integration:
  oc apply -f install/components/service-broker
  oc apply -f install/components/cluster-service-broker
2.3. Installing AMQ Online using Ansible
Installing AMQ Online using Ansible requires creating an inventory file with the variables for configuring the system. Example inventory files can be found in the `ansible/inventory` folder.
The following example inventory file enables a minimal installation of AMQ Online:
[enmasse]
localhost ansible_connection=local

[enmasse:vars]
namespace=amq-online-infra
enable_rbac=False
api_server=True
service_catalog=False
register_api_server=True
keycloak_admin_password=admin
authentication_services=["standard"]
standard_authentication_service_postgresql=False
monitoring_namespace=enmasse-monitoring
monitoring_operator=False
monitoring=False
The following Ansible configuration settings are supported:
Table 2.1. Ansible configuration settings
Name | Description | Default value | Required
---|---|---|---
namespace | Specifies the project where AMQ Online is installed. | Not applicable | yes
enable_rbac | Specifies whether to enable RBAC authentication of REST APIs. | True | no
service_catalog | Specifies whether to enable integration with the Service Catalog. | False | no
authentication_services | Specifies the list of authentication services to deploy. Supported values are `none` and `standard`. | | no
keycloak_admin_password | Specifies the admin password to use for the `standard` authentication service. | Not applicable | yes (if the `standard` authentication service is enabled)
api_server | Specifies whether to enable the REST API server. | True | no
register_api_server | Specifies whether to register the API server with OpenShift master. | False | no
secure_api_server | Specifies whether to enable mutual TLS for the API server. | False | no
install_example_plans | Specifies whether to install example plans and infrastructure configurations. | True | no
monitoring_namespace | Specifies the project where AMQ Online monitoring is installed. | Not applicable | yes
monitoring_operator | Specifies whether to install the monitoring infrastructure. | Not applicable | no
Procedure
- Create an inventory file.
- Run the Ansible playbook:
  ansible-playbook -i inventory-file ansible/playbooks/openshift/deploy_all.yml
2.4. Installing and configuring AMQ Online using the Operator Lifecycle Manager
You can use the Operator Lifecycle Manager to install and configure an instance of AMQ Online.
In OpenShift 4.x, the Operator Lifecycle Manager (OLM) helps users install, update, and manage the life cycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.
The OLM runs by default in OpenShift 4.x, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift console provides management screens for cluster administrators to install Operators and grant specific projects access to use the catalog of Operators available on the cluster.
OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments.
2.4.1. Installing AMQ Online from the OperatorHub using the OpenShift console
You can install the AMQ Online Operator on an OpenShift 4.x cluster by using OperatorHub in the OpenShift console.
Prerequisites
- Access to an OpenShift 4.x cluster and an account with `cluster-admin` permissions.
Procedure
- In the OpenShift 4.x console, log in using an account with `cluster-admin` privileges.
- To create the project where you want to deploy AMQ Online, click Home > Projects, and then click Create Project. The Create Project window opens.
- In the Name field, type `amq-online-infra` and click Create. The `amq-online-infra` project is created.
- Click Operators > OperatorHub.
- In the Filter by keyword box, type `AMQ Online` to find the AMQ Online Operator.
- Click the AMQ Online Operator. Information about the Operator is displayed.
- Read the information about the Operator and click Install. The Create Operator Subscription page opens.
- On the Create Operator Subscription page, for Installation Mode, click A specific namespace on the cluster, and then select the amq-online-infra namespace from the drop-down list.
- Accept all of the remaining default selections and click Subscribe. The amq-online page is displayed, where you can monitor the installation progress of the AMQ Online Operator subscription.
- After the subscription upgrade status is shown as Up to date, click Operators > Installed Operators to verify that the AMQ Online ClusterServiceVersion (CSV) is displayed and its Status ultimately resolves to InstallSucceeded in the amq-online-infra namespace.
For troubleshooting information, see the OpenShift documentation.
2.4.2. Configuring AMQ Online using the OpenShift console
After installing AMQ Online from the OperatorHub using the OpenShift console, create a new instance of a custom resource for each of the following items within the `amq-online-infra` project:
- an authentication service
- infrastructure configuration for an address space type (the example uses the standard address space type)
- an address space plan
- an address plan
The following procedures show how to create each of these custom resources using the example data that is provided in the OpenShift console.
2.4.2.1. Creating an authentication service custom resource using the OpenShift console
You must create a custom resource for an authentication service to use AMQ Online. This example uses the standard authentication service.
Procedure
- In the top right, click the Plus icon (+). The Import YAML window opens.
- From the top left drop-down menu, select the `amq-online-infra` project.
- Copy the following code:
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: standard-authservice
spec:
  type: standard
- In the Import YAML window, paste the copied code and click Create. The AuthenticationService overview page is displayed.
- Click Workloads > Pods. In the Readiness column, the Pod status is `Ready` when the custom resource has been deployed.
2.4.2.2. Creating an infrastructure configuration custom resource using the OpenShift console
You must create an infrastructure configuration custom resource to use AMQ Online. This example uses `StandardInfraConfig` for a standard address space.
Procedure
- In the top right, click the Plus icon (+). The Import YAML window opens.
- From the top left drop-down menu, select the `amq-online-infra` project.
- Copy the following code:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: default
- In the Import YAML window, paste the copied code and click Create. The StandardInfraConfig overview page is displayed.
- Click Operators > Installed Operators.
- Click the AMQ Online Operator and click the Standard Infra Config tab to verify that its Status displays as Active.
2.4.2.3. Creating an address space plan custom resource using the OpenShift console
You must create an address space plan custom resource to use AMQ Online. This procedure uses the example data that is provided when using the OpenShift console.
Procedure
- In the top right, click the Plus icon (+). The Import YAML window opens.
- From the top left drop-down menu, select the `amq-online-infra` project.
- Copy the following code:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: standard-small
spec:
  addressSpaceType: standard
  infraConfigRef: default
  addressPlans:
  - standard-small-queue
  resourceLimits:
    router: 2.0
    broker: 3.0
    aggregate: 4.0
- In the Import YAML window, paste the copied code and click Create. The AddressSpacePlan overview page is displayed.
- Click Operators > Installed Operators.
- Click the AMQ Online Operator and click the Address Space Plan tab to verify that its Status displays as Active.
2.4.2.4. Creating an address plan custom resource using the OpenShift console
You must create an address plan custom resource to use AMQ Online. This procedure uses the example data that is provided when using the OpenShift console.
Procedure
- In the top right, click the Plus icon (+). The Import YAML window opens.
- From the top left drop-down menu, select the `amq-online-infra` project.
- Copy the following code:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: standard-small-queue
spec:
  addressType: queue
  resources:
    router: 0.01
    broker: 0.1
- In the Import YAML window, paste the copied code and click Create. The AddressPlan overview page is displayed.
- Click Operators > Installed Operators.
- Click the AMQ Online Operator and click the Address Plan tab to verify that its Status displays as Active.
Chapter 3. Upgrading AMQ Online
AMQ Online supports upgrades between minor versions using cloud-native tools. Applying the new configuration automatically triggers the upgrade process.
We recommend upgrading to a newer version of AMQ Online using the same method that you used to initially install it. For example, if you installed by applying the YAML bundle, upgrade by applying the YAML files for the new version.
3.1. Upgrading AMQ Online using a YAML bundle
Prerequisites
- A new release of AMQ Online. For more information, see Downloading AMQ Online.
Procedure
- Log in as a service operator:
  oc login -u system:admin
- Select the project where AMQ Online is installed:
  oc project amq-online-infra
- Apply the new release bundle:
  oc apply -f install/bundles/amq-online
- Monitor the pods while they are restarted:
  oc get pods -w
  The pods restart and become active within several minutes.
- Delete the `api-server` resources that are not needed after the upgrade:
  oc delete sa api-server -n amq-online-infra
  oc delete clusterrolebinding enmasse.io:api-server-amq-online-infra
  oc delete clusterrole enmasse.io:api-server
  oc delete rolebinding api-server -n amq-online-infra
  oc delete role enmasse.io:api-server -n amq-online-infra
3.2. Upgrading AMQ Online using Ansible
Prerequisites
- A new release of AMQ Online. For more information, see Downloading AMQ Online.
Procedure
- Log in as a service operator:
  oc login -u system:admin
- Run the Ansible playbook from the new release:
  ansible-playbook -i inventory-file ansible/playbooks/openshift/deploy_all.yml
- Monitor the pods while they are restarted:
  oc get pods -w
  The pods restart and become active within several minutes.
- Delete the `api-server` resources that are not needed after the upgrade:
  oc delete sa api-server -n amq-online-infra
  oc delete clusterrolebinding enmasse.io:api-server-amq-online-infra
  oc delete clusterrole enmasse.io:api-server
  oc delete rolebinding api-server -n amq-online-infra
  oc delete role enmasse.io:api-server -n amq-online-infra
Chapter 4. Uninstalling AMQ Online
You must uninstall AMQ Online using the same method that you used to install AMQ Online.
4.1. Uninstalling AMQ Online using the YAML bundle
This method uninstalls AMQ Online that was installed using the YAML bundle.
Procedure
- Log in as a user with `cluster-admin` privileges:
  oc login -u system:admin
- Delete the cluster-level resources:
  oc delete crd -l app=enmasse,enmasse-component=iot
  oc delete crd -l app=enmasse --timeout=600s
  oc delete clusterrolebindings -l app=enmasse
  oc delete clusterroles -l app=enmasse
  oc delete apiservices -l app=enmasse
  oc delete oauthclients -l app=enmasse
- (OpenShift 4) Delete the console integration:
  oc delete consolelinks -l app=enmasse
- (Optional) Delete the service catalog integration:
  oc delete clusterservicebrokers -l app=enmasse
- Delete the project where AMQ Online is deployed:
  oc delete project amq-online-infra
4.2. Uninstalling AMQ Online using Ansible
Uninstalling AMQ Online using Ansible requires using the same inventory file that was used for installing AMQ Online.
The playbook deletes the `amq-online-infra` project.
Procedure
- Run the Ansible playbook, where `inventory-file` specifies the inventory file used at installation:
  ansible-playbook -i inventory-file ansible/playbooks/openshift/uninstall.yml
4.3. Uninstalling AMQ Online using the Operator Lifecycle Manager (OLM)
This method uninstalls AMQ Online that was installed using the Operator Lifecycle Manager (OLM).
Procedure
- Log in as a user with `cluster-admin` privileges:
  oc login -u system:admin
- Remove all `IoTProject` and `AddressSpace` instances:
  oc delete iotprojects -A --all
  oc delete addressspaces -A --all --timeout=600s
- Delete the subscription (replace `amq-online` with the name of the subscription used in the installation):
  oc delete subscription amq-online -n amq-online-infra
- Remove the CSV for the Operator:
  oc delete csv -l app=enmasse -n amq-online-infra
- Remove any remaining resources (replace `amq-online-infra` with the project where you installed AMQ Online):
  oc delete all -l app=enmasse -n amq-online-infra
  oc delete cm -l app=enmasse -n amq-online-infra
  oc delete secret -l app=enmasse -n amq-online-infra
  oc delete consolelinks -l app=enmasse
  oc delete oauthclients -l app=enmasse
  oc delete crd -l app=enmasse
- (Optional: Skip this step if AMQ Online is installed in the `openshift-operators` namespace) Delete the namespace where AMQ Online was installed:
  oc delete namespace amq-online-infra
4.4. Uninstalling AMQ Online using the OpenShift console
This method uninstalls AMQ Online that was installed using the Operator Lifecycle Manager (OLM) in the OpenShift Container Platform console.
Procedure
- From the Project list, select the project where you installed AMQ Online.
- Click Catalog → Operator Management. The Operator Management page opens.
- Click the Operator Subscriptions tab.
- Find the AMQ Online Operator you want to uninstall. In the far right column, click the vertical ellipsis icon and select Remove Subscription.
- When prompted by the Remove Subscription window, select the Also completely remove the AMQ Online Operator from the selected namespace check box to remove all components related to the installation.
- Click Remove. The AMQ Online Operator stops running and no longer receives updates.
- Remove any remaining resources by running the following commands (replace `amq-online-infra` with the project where you installed AMQ Online):
  oc delete all -l app=enmasse -n amq-online-infra
  oc delete cm -l app=enmasse -n amq-online-infra
  oc delete secret -l app=enmasse -n amq-online-infra
  oc delete consolelinks -l app=enmasse
  oc delete oauthclients -l app=enmasse
- (Optional: Skip this step if AMQ Online is installed in the `openshift-operators` namespace) Delete the namespace where AMQ Online was installed:
  oc delete namespace amq-online-infra
Chapter 5. Configuring AMQ Online
5.1. Minimal service configuration
Configuring AMQ Online for production takes some time and consideration. The following procedure gets you started with a minimal service configuration. For a more complete example, see the `install/components/example-plans` folder of the AMQ Online distribution. All of the commands must be run in the namespace where AMQ Online is installed.
Procedure
- Save the example configuration to a file, for example `service-config.yaml`:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: default
spec: {}
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: standard-small-queue
spec:
  addressType: queue
  resources:
    router: 0.01
    broker: 0.1
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: standard-small
spec:
  addressSpaceType: standard
  infraConfigRef: default
  addressPlans:
  - standard-small-queue
  resourceLimits:
    router: 2.0
    broker: 3.0
    aggregate: 4.0
---
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: none-authservice
spec:
  type: none
- Apply the example configuration:
  oc apply -f service-config.yaml
5.2. Address space plans
Address space plans are used to configure quotas and control the resources consumed by address spaces. Address space plans are configured by the AMQ Online service operator and are selected by the messaging tenant when creating an address space.
AMQ Online includes a default set of plans that are sufficient for most use cases.
Plans are configured as custom resources. The following example shows a plan for the standard address space:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: restrictive-plan
  labels:
    app: enmasse
spec:
  displayName: Restrictive Plan
  displayOrder: 0
  infraConfigRef: default 1
  shortDescription: A plan with restrictive quotas
  longDescription: A plan with restrictive quotas for the standard address space
  addressSpaceType: standard 2
  addressPlans: 3
  - small-queue
  - small-anycast
  resourceLimits: 4
    router: 2.0
    broker: 2.0
    aggregate: 2.0
1. A reference to the `StandardInfraConfig` (for the `standard` address space type) or the `BrokeredInfraConfig` (for the `brokered` address space type) describing the infrastructure deployed for address spaces using this plan.
2. The address space type this plan applies to, either `standard` or `brokered`.
3. A list of address plans available to address spaces using this plan.
4. The maximum number of routers (`router`) and brokers (`broker`) for address spaces using this plan. For the `brokered` address space type, only the `broker` field is required.
The other fields are used by the Red Hat AMQ Console UI. Note the `spec.infraConfigRef` field, which points to an infrastructure configuration that must exist when an address space using this plan is created. For more information about infrastructure configurations, see Infrastructure configuration.
5.3. Creating address space plans
Procedure
- Log in as a service admin:
  oc login -u system:admin
- Select the project where AMQ Online is installed:
  oc project amq-online-infra
- Create an address space plan definition:
  apiVersion: admin.enmasse.io/v1beta2
  kind: AddressSpacePlan
  metadata:
    name: restrictive-plan
    labels:
      app: enmasse
  spec:
    displayName: Restrictive Plan
    displayOrder: 0
    infraConfigRef: default
    shortDescription: A plan with restrictive quotas
    longDescription: A plan with restrictive quotas for the standard address space
    addressSpaceType: standard
    addressPlans:
    - small-queue
    - small-anycast
    resourceLimits:
      router: 2.0
      broker: 2.0
      aggregate: 2.0
- Create the address space plan:
  oc create -f restrictive-plan.yaml
- Verify that the schema has been updated and contains the plan:
  oc get addressspaceschema standard -o yaml
5.4. Address plans
Address plans specify the expected resource usage of a given address. The sum of the resource usage for all resource types determines the amount of infrastructure provisioned for an address space. A single router or broker pod has a maximum usage of one. If a new address requires additional resources and the resource consumption is within the address space plan limits, a new pod is created automatically to handle the increased load.
Address plans are configured by the AMQ Online service operator and are selected when creating an address.
AMQ Online includes a default set of address plans that are sufficient for most use cases.
In the Address space plans section, the address space plan references two address plans: `small-queue` and `small-anycast`. These address plans are stored as custom resources and are defined as follows:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: small-queue
  labels:
    app: enmasse
spec:
  displayName: Small queue plan
  displayOrder: 0
  shortDescription: A plan for small queues
  longDescription: A plan for small queues that consume little resources
  addressType: queue 1
  resources: 2
    router: 0.2
    broker: 0.3
  partitions: 1 3
  messageTtl: 4
    minimum: 30000
    maximum: 300000
1. The address type to which this plan applies.
2. The resources consumed by addresses using this plan. The `router` field is optional for address plans referenced by a `brokered` address space plan.
3. The number of partitions that should be created for queues using this plan. Only available in the `standard` address space.
4. (Optional) Restricts message time-to-live (TTL). Applies to the `queue` and `topic` address types only.
The other fields are used by the Red Hat AMQ Console UI.
With this plan, a single router can support five addresses (5 × 0.2 = 1.0 router unit) and a single broker can support three addresses (3 × 0.3 = 0.9 broker units). If the number of addresses with this plan increases to four, another broker is created. If it increases further to six, another router is created as well.
In the `standard` address space, address plans for the `queue` address type may contain a `partitions` field, which allows a queue to be sharded across multiple brokers for HA and improved performance. Specifying an amount of `broker` resource above 1 automatically causes a queue to be partitioned, as in the sketch that follows.
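For example, the following hypothetical plan (the name `sharded-queue` is an illustrative assumption, not a plan shipped with AMQ Online) requests two units of `broker` resource, so queues using it are partitioned across two brokers:

apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: sharded-queue    # assumed example name
  labels:
    app: enmasse
spec:
  addressType: queue
  resources:
    router: 0.2
    broker: 2.0    # above 1, so the queue is sharded
  partitions: 2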
The `messageTtl` field is used to restrict the effective `absolute-expiry-time` of any message put to a queue or topic. The `maximum` and `minimum` values are defined in milliseconds. The system adjusts the TTL value of an incoming message to a particular address based on these values:
- If a message arrives at the address with a TTL value that is greater than the `maximum` value, the system changes the message TTL to the maximum value. For example, with a `maximum` of 300000, a message arriving with a TTL of 600000 is reduced to 300000.
- If a message arrives at the address with a TTL value that is less than the `minimum` value, the system changes the message TTL to the minimum value.
Messages that arrive without a TTL defined are considered to have a TTL value of infinity.
Expired messages are automatically removed from the queue, subscription, or temporary topic subscription every 30 seconds. These messages are lost.
A sharded queue no longer guarantees message ordering.
Although the example address space plan in Address space plans allows two routers and two brokers to be deployed, its aggregate limit only allows two pods to be deployed in total. This means that the address space is restricted to three addresses with the `small-queue` plan.
The `small-anycast` plan does not consume any broker resources, and can provision two routers at the expense of not being able to create any brokers:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: small-anycast
  labels:
    app: enmasse
spec:
  addressType: anycast
  resources:
    router: 0.2
With this plan, up to 10 addresses can be created (10 × 0.2 = 2.0, the router limit).
5.5. Creating address plans
Procedure
- Log in as a service admin:
  oc login -u system:admin
- Select the project where AMQ Online is installed:
  oc project amq-online-infra
- Create an address plan definition:
  apiVersion: admin.enmasse.io/v1beta2
  kind: AddressPlan
  metadata:
    name: small-anycast
    labels:
      app: enmasse
  spec:
    addressType: anycast
    resources:
      router: 0.2
- Create the address plan:
  oc create -f small-anycast-plan.yaml
- Verify that the schema has been updated and contains the plan:
  oc get addressspaceschema standard -o yaml
5.6. Infrastructure configuration
AMQ Online creates infrastructure components such as routers, brokers, and consoles. These components can be configured while the system is running, and AMQ Online automatically updates the components with the new settings. The AMQ Online service operator can edit the AMQ Online default infrastructure configuration or create new configurations.
Infrastructure configurations can be referred to from one or more address space plans. For more information about address space plans, see Address space plans.
Infrastructure configuration can be managed for both `brokered` and `standard` infrastructure using `BrokeredInfraConfig` and `StandardInfraConfig` resources, respectively.
5.6.1. Brokered infrastructure configuration
`BrokeredInfraConfig` resources are used to configure infrastructure deployed by `brokered` address spaces. Address space plans reference the brokered infrastructure configuration using the `spec.infraConfigRef` field. For more information about address space plans, see Address space plans.
For detailed information about the available brokered infrastructure configuration fields, see the Brokered infrastructure configuration fields table.
5.6.1.1. Brokered infrastructure configuration example
The following example of a brokered infrastructure configuration file shows the various settings that can be specified.
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: brokered-infra-config-example
spec:
  version: "0.32" 1
  admin: 2
    resources:
      memory: 256Mi
    podTemplate:
      metadata:
        labels:
          key: value
  broker: 3
    resources:
      memory: 2Gi
      storage: 100Gi
    addressFullPolicy: PAGE
    globalMaxSize: 256Mb
    podTemplate: 4
      spec:
        priorityClassName: messaging
1. Specifies the AMQ Online version used. When upgrading, AMQ Online uses this field to determine whether to upgrade the infrastructure to the requested version. If omitted, the version is assumed to be the same version as the controllers reading the configuration.
2. Specifies the settings you can configure for the `admin` components.
3. Specifies the settings you can configure for the `broker` components. Note that changing the `.broker.resources.storage` setting does not configure the existing broker storage size.
4. For both `admin` and `broker` components, you can configure the following `podTemplate` elements:
   - `metadata.labels`
   - `spec.priorityClassName`
   - `spec.tolerations`
   - `spec.affinity`
   - `spec.containers.readinessProbe`
   - `spec.containers.livenessProbe`
   - `spec.containers.resources`
   - `spec.containers.env`
   All other `podTemplate` elements are ignored. For more information about these elements, see the OpenShift documentation in the following Related links section. For more information about how to set a readiness probe timeout, see Overriding the readiness probe timing for brokered infrastructure configuration.

For detailed information about all of the available brokered infrastructure configuration fields, see the Brokered infrastructure configuration fields table.
Related links
For more information about the `podTemplate` settings, see the OpenShift documentation.
5.6.1.2. Overriding the probe timing for brokered infrastructure configuration
You can override the default values for the probe timing on broker resources. You might want to change the default values if, for example, it takes longer than expected for the broker storage to become available, or a server is slow.
The following example shows how to override certain default values of the readiness probe for broker resources.
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: brokered-infra-config
spec:
  broker:
    ...
    podTemplate:
      spec:
        containers:
        - name: broker 1
          readinessProbe:
            failureThreshold: 6 2
            initialDelaySeconds: 20 3
1. The `name` value must match the target container name. For a broker, the `podTemplate` name is `broker`.
2. Specifies the number of times the probe is allowed to fail after the Pod starts before the Pod is marked `Unready` (for a readiness probe) or the container is restarted (for a liveness probe). The default value is `3`, and the minimum value is `1`.
3. Specifies the number of seconds to wait after the container starts before performing the first probe.
5.6.2. Standard infrastructure configuration
`StandardInfraConfig` resources are used to configure infrastructure deployed by `standard` address spaces. Address space plans reference the standard infrastructure configuration using the `spec.infraConfigRef` field. For more information about address space plans, see Address space plans.
For detailed information about the available standard infrastructure configuration fields, see the Standard infrastructure configuration fields table.
5.6.2.1. Standard infrastructure configuration example
The following example of a standard infrastructure configuration file shows the various settings that can be specified.
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: myconfig
spec:
  version: "0.32" 1
  admin: 2
    resources:
      memory: 256Mi
  broker: 3
    resources:
      cpu: 0.5
      memory: 2Gi
      storage: 100Gi
    addressFullPolicy: PAGE
  router: 4
    resources:
      cpu: 1
      memory: 256Mi
    linkCapacity: 1000
    minReplicas: 1
    policy:
      maxConnections: 1000
      maxConnectionsPerHost: 1
      maxConnectionsPerUser: 10
      maxSessionsPerConnection: 10
      maxSendersPerConnection: 5
      maxReceiversPerConnection: 5
    podTemplate: 5
      spec:
        affinity:
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                - key: e2e-az-EastWest
                  operator: In
                  values:
                  - e2e-az-East
                  - e2e-az-West
1. Specifies the AMQ Online version used. When upgrading, AMQ Online uses this field to determine whether to upgrade the infrastructure to the requested version. If omitted, the version is assumed to be the same version as the controllers reading the configuration.
2. Specifies the settings you can configure for the `admin` components.
3. Specifies the settings you can configure for the `broker` components. Changing the `.broker.resources.storage` setting does not configure the existing broker storage size.
4. Specifies the settings you can configure for the `router` components.
5. For `admin`, `broker`, and `router` components, you can configure the following `podTemplate` elements:
   - `metadata.labels`
   - `spec.priorityClassName`
   - `spec.tolerations`
   - `spec.affinity`
   - `spec.containers.resources`
   - `spec.containers.readinessProbe`
   - `spec.containers.livenessProbe`
   - `spec.containers.env`
   All other `podTemplate` elements are ignored. For more information about these elements, see the OpenShift documentation in the following Related links section. For more information about how to set a readiness probe timeout, see Overriding the readiness probe timing for standard infrastructure configuration.

For detailed information about all of the available standard infrastructure configuration fields, see the Standard infrastructure configuration fields table.
Related links
For more information about the `podTemplate` settings, see the OpenShift documentation.
5.6.2.2. Overriding the probe timing for standard infrastructure configuration
You can override the default values for probe timing on broker and router resources. You might want to change the default values if, for example, it takes longer than expected for the broker storage to become available, or a server is slow.
The following example shows how to override certain default values of the readiness probe timeout for a broker resource and a liveness probe for a router resource.
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: standard-infra-config
spec:
  broker:
    ...
    podTemplate:
      spec:
        containers:
        - name: broker 1
          readinessProbe:
            failureThreshold: 6 2
            initialDelaySeconds: 20 3
  router:
    ...
    podTemplate:
      spec:
        containers:
        - name: router 4
          livenessProbe:
            failureThreshold: 6 5
            initialDelaySeconds: 20 6
1 4. The `name` value must match the target container name. For example, for a broker `podTemplate`, `name` is `broker`; for a router `podTemplate`, it is `router`.
2 5. Specifies the number of times the probe is allowed to fail after the Pod starts before the Pod is marked `Unready` (for a readiness probe) or the container is restarted (for a liveness probe). The default value is `3`, and the minimum value is `1`.
3 6. Specifies the number of seconds to wait after the container starts before performing the first probe.
5.7. Creating and editing infrastructure configurations
You can create a new infrastructure configuration or edit an existing one. For more information, see Infrastructure configuration.
Procedure
- Log in as a service operator:
  oc login -u developer
- Change to the project where AMQ Online is installed:
  oc project amq-online-infra
- Edit the existing infrastructure configuration, or create a new infrastructure configuration using the following example:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: myconfig
spec:
  version: "0.32"
  admin:
    resources:
      memory: 256Mi
  broker:
    resources:
      memory: 2Gi
      storage: 100Gi
    addressFullPolicy: PAGE
  router:
    resources:
      memory: 256Mi
    linkCapacity: 1000
    minReplicas: 1
- Apply the configuration changes:
  oc apply -f standard-infra-config-example.yaml
- Monitor the pods while they are restarted:
  oc get pods -w
The configuration changes are applied within several minutes.
5.8. Authentication services
Authentication services are used to configure the authentication and authorization endpoints available to messaging clients. The authentication services are configured by the AMQ Online service operator, and are specified when creating an address space.
Authentication services are configured as custom resources. An authentication service has a type, which can be `standard`, `external`, or `none`.
5.8.1. Standard authentication service
The `standard` authentication service type allows the tenant administrator to manage users and their related permissions through the `MessagingUser` custom resource. This is achieved by using a Red Hat Single Sign-On instance to store user credentials and access policies. For typical use cases, only one `standard` authentication service needs to be defined. A sketch of a `MessagingUser` resource is shown below.
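The following is a minimal sketch of a `MessagingUser` (the address space name `myspace`, user name `user1`, and queue name `myqueue` are illustrative assumptions, not values defined in this guide):

apiVersion: user.enmasse.io/v1beta1
kind: MessagingUser
metadata:
  name: myspace.user1    # format: <address-space-name>.<username>
spec:
  username: user1
  authentication:
    type: password
    password: cGFzc3dvcmQ=    # base64-encoded password value
  authorization:
  - addresses:
    - myqueue
    operations:
    - send
    - recv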
5.8.1.1. Standard authentication service example
The following example shows an authentication service of type `standard`:
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: standard
spec:
  type: standard 1
  standard:
    credentialsSecret: 2
      name: my-admin-credentials
    certificateSecret: 3
      name: my-authservice-certificate
    resources: 4
      requests:
        memory: 2Gi
      limits:
        memory: 2Gi
    storage: 5
      type: persistent-claim
      size: 5Gi
    datasource: 6
      type: postgresql
      host: example.com
      port: 5432
      database: authdb
1. Valid values for `type` are `none`, `standard`, or `external`.
2. (Optional) The secret must contain the `admin.username` field for the user and the `admin.password` field for the password of the Red Hat Single Sign-On admin user. If not specified, a random password is generated and stored in a secret.
3. (Optional on OpenShift) A custom certificate can be specified. On OpenShift, a certificate is automatically created if not specified.
4. (Optional) Resource limits for the Red Hat Single Sign-On instance can be specified.
5. (Optional) The storage type can be specified as `ephemeral` or `persistent-claim`. For `persistent-claim`, you should also configure the size of the claim. The default type is `ephemeral`.
6. (Optional) Specifies the data source to be used by Red Hat Single Sign-On. The default option is the embedded `h2` data source. For production usage, the `postgresql` data source is recommended.
5.8.1.2. Deploying the standard authentication service
To implement the `standard` authentication service, you deploy it.
Procedure
- Log in as a service admin:
  oc login -u admin
- Change to the project where AMQ Online is installed:
  oc project amq-online-infra
- Create an `AuthenticationService` definition:
  apiVersion: admin.enmasse.io/v1beta1
  kind: AuthenticationService
  metadata:
    name: standard-authservice
  spec:
    type: standard
- Deploy the authentication service:
  oc create -f standard-authservice.yaml
5.8.1.3. Deploying the standard authentication service for high availability (HA)
For production deployments, the authentication service should be set up for high availability to reduce downtime during OpenShift updates or in the event of a node failure. To implement the `standard` authentication service in HA mode, you deploy it using a PostgreSQL database as the back end.
Prerequisites
- A PostgreSQL database.
Procedure
- Log in as a service admin:
  oc login -u admin
- Create a secret with the database credentials:
  oc create secret generic db-creds -n amq-online-infra --from-literal=database-user=admin --from-literal=database-password=secure-password
- Create an `AuthenticationService` definition:
  apiVersion: admin.enmasse.io/v1beta1
  kind: AuthenticationService
  metadata:
    name: standard-authservice
  spec:
    type: standard
    standard:
      replicas: 2
      datasource:
        type: postgresql
        host: database.example.com
        port: 5431
        database: auth
        credentialsSecret:
          name: db-creds
- Deploy the authentication service:
  oc create -f standard-authservice.yaml -n amq-online-infra
5.8.2. External authentication service
With the `external` authentication service, you can configure an external provider of authentication and authorization policies through an AMQP SASL handshake. This configuration can be used to implement a bridge for your existing identity management system. Depending on your use case, you might define several `external` authentication services.
5.8.2.1. External authentication service example
The following example shows an authentication service of type `external`:
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: my-external-1
spec:
  type: external
  realm: myrealm 1
  external:
    host: example.com 2
    port: 5671 3
    caCertSecret: 4
      name: my-ca-cert
1. (Optional) The `realm` is passed in the authentication request. If not specified, an identifier in the form of namespace-addressspace is used as the realm.
2. The host name of the external authentication server.
3. The port number of the external authentication server.
4. (Optional) The CA certificate to trust when connecting to the authentication server.
The external authentication server must implement the API described in the External authentication server API.
5.8.2.2. External authentication service example allowing overrides
The following example shows an authentication service of type `external` that allows the messaging tenant to override the host name, port number, and realm:
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: my-external-2
spec:
  type: external
  realm: myrealm 1
  external:
    host: example.org 2
    port: 5671 3
    caCertSecret: 4
      name: my-ca-cert
    allowOverride: true 5
1. (Optional) The `realm` is passed in the authentication request. If not specified, an identifier in the form of namespace-addressspace is used as the realm.
2. The host name of the external authentication server.
3. The port number of the external authentication server.
4. (Optional) The CA certificate to trust when connecting to the authentication server.
5. (Optional) Specifies whether address space overrides are allowed to the host name, port number, realm, and CA certificate. Valid values are `true` or `false`. If not specified, the default value is `false`.
The external authentication server must implement the API described in the External authentication server API.
5.8.2.3. External authentication server API
An external authentication server must implement an AMQP SASL handshake, read the connection properties of the client, and respond with the expected connection properties containing the authentication and authorization information. The authentication server is queried by the address space components, such as the router and broker, whenever a new connection is established to the messaging endpoints.
5.8.2.3.1. Authentication
The requested identity of the client can be read from the SASL handshake `username`. The implementation can then authenticate the user.
The authenticated identity is returned in the `authenticated-identity` map with the following keys and values. While this example uses JSON, it must be set as an AMQP map on the connection property.
{ "authenticated-identity": { "sub": "myid", "preferred_username": "myuser" } }
5.8.2.3.2. Authorization
Authorization is a capability that can be requested by the client using the `ADDRESS-AUTHZ` connection capability. If this is set on the connection, the server responds with this capability in the offered capabilities and adds the authorization information to the connection properties.
The authorization information is stored within a map that correlates the address to a list of operations allowed on that address. The following connection property information contains the policies for the addresses `myqueue` and `mytopic`:
{ "address-authz": { "myqueue": [ "send", "recv" ], "mytopic": [ "send" ] } }
The allowed operations are:
- `send` - User can send to the address.
- `recv` - User can receive from the address.
5.8.3. None authentication service
The `none` authentication service type allows any client, using any user name and password, to send and receive messages to any address.
Using the `none` authentication service in production environments is not recommended. It is intended for non-production environments only, such as internal test or development environments.
5.8.3.1. Deploying the none authentication service
To implement the `none` authentication service, you deploy it.
Procedure
- Log in as a service admin:
  oc login -u admin
- Change to the project where AMQ Online is installed:
  oc project amq-online-infra
- Create an `AuthenticationService` definition:
  apiVersion: admin.enmasse.io/v1beta1
  kind: AuthenticationService
  metadata:
    name: none-authservice
  spec:
    type: none
- Deploy the authentication service:
  oc create -f none-authservice.yaml
5.9. AMQ Online example roles
AMQ Online provides the following example roles that you can use directly or use as models to create your own roles.
For more information about service administrator resources, see the AMQ Online service administrator resources table.
For more information about messaging tenant resources, see the AMQ Online messaging tenant resources table.
Table 5.1. AMQ Online example roles table
Role | Description
---|---
`enmasse.io:tenant-view` | Specifies `get` and `list` permissions for `addresses`, `addressspaces`, `addressspaceschemas`, and `messagingusers`.
`enmasse.io:tenant-edit` | Specifies `create`, `get`, `update`, `delete`, `list`, `watch`, and `patch` permissions for `addresses`, `addressspaces`, and `messagingusers`, and `get` and `list` permissions for `addressspaceschemas`.
Chapter 6. Monitoring AMQ Online
You can monitor AMQ Online by deploying built-in monitoring tools or using your pre-existing monitoring infrastructure.
6.1. Enabling monitoring on OpenShift 4
To monitor AMQ Online on OpenShift 4 using the existing monitoring stack, you must enable user-workload monitoring, as sketched below.
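As a sketch, assuming OpenShift Container Platform 4.6 or later (where the `enableUserWorkload` setting is available), user-workload monitoring is enabled by creating or editing the `cluster-monitoring-config` ConfigMap in the `openshift-monitoring` namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true    # enables the user-workload monitoring stack

See the OpenShift monitoring documentation for the exact mechanism available in your OpenShift version.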
6.2. (Optional) Deploying the Application Monitoring Operator
To monitor AMQ Online, an operator that acts on the monitoring Custom Resource Definitions must be deployed. You may skip this step if you have such an operator installed on your OpenShift cluster.
Procedure
- Log in as a user with `cluster-admin` privileges:
  oc login -u system:admin
- (Optional) If you want to deploy to a namespace other than `enmasse-monitoring`, you must run the following command and substitute `enmasse-monitoring` in subsequent steps:
  sed -i 's/enmasse-monitoring/my-namespace/' install/bundles/amq-online/*.yaml
- Create the enmasse-monitoring namespace:
  oc new-project enmasse-monitoring
- Deploy the `monitoring-operator` resources:
  oc apply -f install/components/monitoring-operator
- Deploy the `monitoring-operator` component:
  oc apply -f install/components/monitoring-deployment
6.3. (Optional) Deploying the kube-state-metrics agent
You can monitor AMQ Online pods using the `kube-state-metrics` agent.
Procedure
- Log in as a user with `cluster-admin` privileges:
  oc login -u system:admin
- Select the `amq-online-infra` project:
  oc project amq-online-infra
- Deploy the `kube-state-metrics` component:
  oc apply -f install/components/kube-state-metrics
6.4. Enabling monitoring
If you are not using a default installation configuration, the simplest way to deploy monitoring is to enable the monitoring environment variable on the `enmasse-operator` deployment.
Prerequisites
- The Application Monitoring Operator or an operator managing the same resources must be installed.
Procedure
- Label the amq-online-infra namespace:
  oc label namespace amq-online-infra monitoring-key=middleware
- Enable monitoring on the operator:
  oc set env deployment -n amq-online-infra enmasse-operator ENABLE_MONITORING=true
6.5. Configuring alert notifications
To configure alert notifications, such as emails, you must change the default configuration of Alertmanager.
Prerequisites
- Create an Alertmanager configuration file following the Alertmanager documentation. An example configuration file for email notifications is shown:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: enmasse
  name: alertmanager-config
data:
  alertmanager.yml: |
    global:
      resolve_timeout: 5m
      smtp_smarthost: localhost
      smtp_from: alerts@localhost
      smtp_auth_username: admin
      smtp_auth_password: password
    route:
      group_by: ['alertname']
      group_wait: 60s
      group_interval: 60s
      repeat_interval: 1h
      receiver: 'sysadmins'
    receivers:
    - name: 'sysadmins'
      email_configs:
      - to: sysadmin@localhost
    inhibit_rules:
    - source_match:
        severity: 'critical'
      target_match:
        severity: 'warning'
      equal: ['alertname']
- Your Alertmanager configuration file must be named `alertmanager.yaml` so it can be read by the Prometheus Operator.
Procedure
- Delete the secret containing the default configuration:
  oc delete secret alertmanager-application-monitoring
- Create a secret containing your new configuration:
  oc create secret generic alertmanager-application-monitoring --from-file=alertmanager.yaml
6.6. Metrics and rules
6.6.1. Common metrics
The following components export these common metrics:
- enmasse-operator
- address-space-controller
- standard-controller
enmasse_version
- Type: version
- Description: Provides the current version of each component in AMQ Online using the version label. The metric always returns a value of 1.
- Example:
  enmasse_version{job="address-space-controller",version="1.0.1"} 1
  enmasse_version{job="enmasse-operator",version="1.0.1"} 1
  enmasse_version{job="standard-controller",version="1.0.1"} 1
6.6.2. Address space controller metrics
The following `address-space-controller` metrics are available for AMQ Online.
6.6.2.1. Summary
For every metric exported of the type `enmasse_address_space_status_ready`, there is a corresponding metric of type `enmasse_address_space_status_not_ready`. For a given address space, the two values are always opposite.
For example:
enmasse_address_space_status_ready{name="my-address-space"} 1
enmasse_address_space_status_not_ready{name="my-address-space"} 0
The total number of address spaces equals the sum of all address spaces in the ready state plus the sum of all address spaces in the not ready state:
enmasse_address_spaces_total == (sum(enmasse_address_space_status_ready) + sum(enmasse_address_space_status_not_ready))
enmasse_address_space_status_ready
- Type: Boolean
- Description: Indicates each address space that is in a ready state.
- Example:
  enmasse_address_space_status_ready{name="prod-space"} 1
  enmasse_address_space_status_ready{name="dev-space"} 0
enmasse_address_space_status_not_ready
- Type: Boolean
- Description: Indicates each address space that is in a not ready state.
- Example:
  enmasse_address_space_status_not_ready{name="prod-space"} 0
  enmasse_address_space_status_not_ready{name="dev-space"} 1
enmasse_address_spaces_total
- Type: Gauge
- Description: Returns the total number of address spaces, regardless of whether they are in a ready or not ready state.
- Example:
  enmasse_address_spaces_total 1
enmasse_address_space_connectors_total
- Type: Gauge
- Description: Returns the total number of address space connectors in each address space.
- Example:
  enmasse_address_space_connectors_total{name="space-one"} 0
  enmasse_address_space_connectors_total{name="space-two"} 2
6.6.3. Standard controller and agent metrics
The following `standard-controller` and `agent` metrics are available in AMQ Online for the standard address space only.
6.6.3.1. Summary
The total number of addresses equals the sum of the total number of addresses in the ready state and the total number of addresses in the not ready state:
enmasse_addresses_total == enmasse_addresses_ready_total + enmasse_addresses_not_ready_total
The total number of addresses equals the total number of addresses in all phases:
enmasse_addresses_total == enmasse_addresses_active_total + enmasse_addresses_configuring_total + enmasse_addresses_failed_total + enmasse_addresses_pending_total + enmasse_addresses_terminating_total
enmasse_addresses_total
- Type: Gauge
- Description: Provides the total number of addresses, per address space, regardless of state.
- Example:
  enmasse_addresses_total{addressspace="space-one"} 5
  enmasse_addresses_total{addressspace="space-two"} 3
enmasse_addresses_ready_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the ready state.
- Example:
  enmasse_addresses_ready_total{addressspace="space-one"} 3
  enmasse_addresses_ready_total{addressspace="space-two"} 2
enmasse_addresses_not_ready_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the not ready state.
- Example:
  enmasse_addresses_not_ready_total{addressspace="space-one"} 2
  enmasse_addresses_not_ready_total{addressspace="space-two"} 1
enmasse_addresses_active_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the active phase.
- Example:
  enmasse_addresses_active_total{addressspace="space-one"} 2
enmasse_addresses_configuring_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the configuring phase.
- Example:
  enmasse_addresses_configuring_total{addressspace="space-one"} 2
enmasse_addresses_failed_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the failed phase.
- Example:
  enmasse_addresses_failed_total{addressspace="space-one"} 2
enmasse_addresses_pending_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the pending phase.
- Example:
  enmasse_addresses_pending_total{addressspace="space-one"} 2
enmasse_addresses_terminating_total
- Type: Gauge
- Description: Provides the total number of addresses currently in the terminating phase.
- Example:
  enmasse_addresses_terminating_total{addressspace="space-one"} 2
enmasse_standard_controller_loop_duration_seconds
- Type: Gauge
- Description: Provides the execution time, in seconds, for the most recent standard controller reconcile loop.
- Example:
  enmasse_standard_controller_loop_duration_seconds 0.33
enmasse_standard_controller_router_check_failures_total
- Type: Counter
- Description: Provides the total number of router check failures during the reconciliation loop.
- Example:
  enmasse_standard_controller_router_check_failures_total{addressspace="firstspace"} 0
  enmasse_standard_controller_router_check_failures_total{addressspace="myspace"} 0
enmasse_addresses_forwarders_ready_total
- Type: Gauge
- Description: Provides the total number of address forwarders in the ready state.
- Example:
  enmasse_addresses_forwarders_ready_total{addressspace="myspace"} 2
enmasse_addresses_forwarders_not_ready_total
- Type: Gauge
- Description: Provides the total number of address forwarders in the not ready state.
- Example:
  enmasse_addresses_forwarders_not_ready_total{addressspace="myspace"} 0
enmasse_addresses_forwarders_total
- Type: Gauge
- Description: Provides the total number of address forwarders, regardless of whether they are in a ready or not ready state.
- Example:
  enmasse_addresses_forwarders_total{addressspace="myspace"} 2
enmasse_address_canary_health_failures_total
- Type: Gauge
- Description: Total number of health check failures due to failure to send and receive messages to probe addresses.
- Example:
  enmasse_address_canary_health_failures_total{addressspace="myspace"} 2
enmasse_address_canary_health_check_failures_total
- Type: Gauge
- Description: Total number of attempted health check runs that failed due to controller errors.
- Example:
  enmasse_address_canary_health_check_failures_total{addressspace="myspace"} 1
6.6.4. Rules
This section details Prometheus rules installed using the PrometheusRule CRD with AMQ Online. Two types of Prometheus rules are available in AMQ Online:
- Record: Pre-computed expressions saved as a new set of time series.
- Alert: Expressions that trigger an alert when evaluated as true.
6.6.4.1. Records
Records are a type of Prometheus rule that are pre-computed expressions saved as a new set of time series. The following records are available for AMQ Online.
enmasse_address_spaces_ready_total
- Description
- Aggregates the enmasse_address_space_status_ready metric into a single gauge-type metric that provides the total number of address spaces in a ready state.
- Expression
sum by(service, exported_namespace) (enmasse_address_space_status_ready)
- Example
enmasse_address_spaces_ready_total{exported_namespace="prod_namespace",service="address-space-controller"} 1
enmasse_address_spaces_not_ready_total
- Description
- Aggregates the enmasse_address_space_status_not_ready metric into a single gauge-type metric that provides the total number of address spaces in a not ready state.
- Expression
sum by(service, exported_namespace) (enmasse_address_space_status_not_ready)
- Example
enmasse_address_spaces_not_ready_total{exported_namespace="prod_namespace",service="address-space-controller"} 1
enmasse_component_health
- Description
- Provides a Boolean-style metric for each address-space-controller and api-server indicating whether they are up and running.
- Expression
up{job="address-space-controller"} or on(namespace) (1 - absent(up{job="address-space-controller"}))
up{job="api-server"} or on(namespace) (1 - absent(up{job="api-server"}))
- Example
enmasse_component_health{job="address-space-controller"} 1
enmasse_component_health{job="api-server"} 1
6.6.4.2. Alerts
Alerts are a type of Prometheus rule that are expressions that trigger an alert when evaluated as true. The following alerts are available for AMQ Online.
ComponentHealth
- Description
- Triggers when a component is not in a healthy state.
- Expression
component_health == 0
AddressSpaceHealth
- Description
- Triggers when one or more address spaces are not in a ready state.
- Expression
enmasse_address_spaces_not_ready_total > 0
AddressHealth
- Description
- Triggers when one or more addresses are not in a ready state.
- Expression
enmasse_addresses_not_ready_total > 0
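For reference, records and alerts of this kind are declared through the PrometheusRule CRD. The following is an illustrative sketch of how the AddressSpaceHealth alert and its backing record could be expressed; the resource name and the for duration are assumptions, not the exact resource installed by AMQ Online:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: enmasse-alerts-example    # illustrative name
  namespace: amq-online-infra
spec:
  groups:
  - name: enmasse-example.rules
    rules:
    # Record backing the alert, as documented above
    - record: enmasse_address_spaces_not_ready_total
      expr: sum by(service, exported_namespace) (enmasse_address_space_status_not_ready)
    # Alert that fires while any address space is not ready
    - alert: AddressSpaceHealth
      expr: enmasse_address_spaces_not_ready_total > 0
      for: 5m                     # assumed evaluation window
      annotations:
        description: One or more address spaces are not in a ready state.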
6.7. Enabling tenant metrics
Metrics from brokers and routers can be exposed to tenants without exposing system-admin metrics. To expose tenant metrics, create a service monitor in any namespace other than amq-online-infra, ideally the namespace of the address space(s) concerned.
Prerequisites
- The servicemonitor Custom Resource Definition provided by the Prometheus Operator must be installed.
- The tenant must have their own monitoring stack installed.
Procedure
Create a servicemonitor resource with the selector configured to match labels of monitoring-key: enmasse-tenants and with the amq-online-infra namespace as the namespace selector. An example service monitor is shown below:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: enmasse-tenants
  labels:
    app: enmasse
spec:
  selector:
    matchLabels:
      monitoring-key: enmasse-tenants
  endpoints:
  - port: health
  namespaceSelector:
    matchNames:
    - amq-online-infra
- Ensure the tenant’s monitoring stack has read permissions for service monitors in the service monitor’s namespace, but not in the amq-online-infra namespace, as this would expose service-admin metrics too.
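As a sketch of the permission model (assuming the tenant's monitoring stack runs under a dedicated service account; all names below are illustrative), read access to service monitors can be granted with a Role and RoleBinding in the service monitor's namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: servicemonitor-reader      # illustrative name
  namespace: tenant-namespace      # namespace containing the ServiceMonitor
rules:
- apiGroups: ["monitoring.coreos.com"]
  resources: ["servicemonitors"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: servicemonitor-reader      # illustrative name
  namespace: tenant-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: servicemonitor-reader
subjects:
- kind: ServiceAccount
  name: tenant-prometheus          # assumed monitoring service account
  namespace: tenant-namespace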
6.8. Using qdstat
You can use qdstat
to monitor the AMQ Online service.
6.8.1. Viewing router connections using qdstat
You can view the router connections using qdstat
.
Procedure
On the command line, run the following command to obtain the
podname
value needed in the following step:
oc get pods
On the command line, run the following command:
oc exec -n namespace -it qdrouterd-podname -- qdstat -b 127.0.0.1:7777 -c

Connections
  id   host                 container                             role    dir  security                              authentication                tenant
  =========================================================================================================================================================
  3    172.17.0.9:34998     admin-78794c68c8-9jdd6                normal  in   TLSv1.2(ECDHE-RSA-AES128-GCM-SHA256)  CN=admin,O=io.enmasse(x.509)
  12   172.30.188.174:5671  27803a14-42d2-6148-9491-a6c1e69e875a  normal  out  TLSv1.2(ECDHE-RSA-AES128-GCM-SHA256)  x.509
  567  127.0.0.1:43546      b240c652-82df-48dd-b54e-3b8bbaef16c6  normal  in   no-security                           PLAIN
6.8.2. Viewing router addresses using qdstat
You can view the router addresses using qdstat
.
Procedure
On the command line, run the following command to obtain the
podname
value needed in the following step:
oc get pods
Run the following command:
oc exec -n namespace -it qdrouterd-podname -- qdstat -b 127.0.0.1:7777 -a

Router Addresses
  class     addr                   phs  distrib       in-proc  local  remote  cntnr  in     out    thru  to-proc  from-proc
  ===========================================================================================================================
  local     $_management_internal       closest       1        0      0       0      0      0      0     588      588
  link-in   $lwt                        linkBalanced  0        0      0       0      0      0      0     0        0
  link-out  $lwt                        linkBalanced  0        0      0       0      0      0      0     0        0
  mobile    $management            0    closest       1        0      0       0      601    0      0     601      0
  local     $management                 closest       1        0      0       0      2,925  0      0     2,925    0
  local     qdhello                     flood         1        0      0       0      0      0      0     0        5,856
  local     qdrouter                    flood         1        0      0       0      0      0      0     0        0
  topo      qdrouter                    flood         1        0      0       0      0      0      0     0        196
  local     qdrouter.ma                 multicast     1        0      0       0      0      0      0     0        0
  topo      qdrouter.ma                 multicast     1        0      0       0      0      0      0     0        0
  local     temp.VTXOKyyWsq7OEei        balanced      0        1      0       0      0      0      0     0        0
  local     temp.k2RGQNPe6sDMvz4        balanced      0        1      0       0      0      3,511  0     0        3,511
  local     temp.xg+y8I_Tr4Y94LA        balanced      0        1      0       0      0      5      0     0        5
6.8.3. Viewing router links using qdstat
You can view the router links using qdstat
.
Procedure
On the command line, run the following command to obtain the
podname
value needed in the following step:
oc get pods
On the command line, run the following command:
oc exec -n namespace -it qdrouterd-podname -- qdstat -b 127.0.0.1:7777 -l

Router Links
  type      dir  conn id  id  peer  class   addr                  phs  cap  undel  unsett  del   presett  psdrop  acc   rej  rel  mod  admin    oper
  ====================================================================================================================================================
  endpoint  in   3        8                                            250  0      0       3829  0        0       3829  0    0    0    enabled  up
  endpoint  out  3        9         local   temp.k2RGQNPe6sDMvz4       250  0      0       3829  3829     0       0     0    0    0    enabled  up
  endpoint  in   12       10                                           250  0      0       5     0        0       5     0    0    0    enabled  up
  endpoint  out  12       11        local   temp.xg+y8I_Tr4Y94LA       250  0      0       5     5        0       0     0    0    0    enabled  up
  endpoint  in   645      26        mobile  $management           0    50   0      0       1     0        0       1     0    0    0    enabled  up
  endpoint  out  645      27        local   temp.0BrHJ1O+fi6whyg       50   0      0       0     0        0       0     0    0    0    enabled  up
6.8.4. Viewing link routes using qdstat
You can view the link routes using qdstat
.
Procedure
On the command line, run the following command to obtain the
podname
value needed in the following step:
oc get pods
On the command line, run the following command:
oc exec -n namespace -it qdrouterd-podname -- qdstat -b 127.0.0.1:7777 --linkroutes

Link Routes
  address  dir  distrib       status
  ======================================
  $lwt     in   linkBalanced  inactive
  $lwt     out  linkBalanced  inactive
Chapter 7. Operation procedures for AMQ Online
7.1. Restarting components to acquire security fixes
Restarting AMQ Online components is required to obtain image updates for CVEs. The scripts are provided in the AMQ Online installation files within the scripts folder. To restart all components, run all of the scripts.
7.1.1. Restarting Operators
Operators can be restarted without affecting the messaging system.
Procedure
Run the
restart-operators.sh
script:
./scripts/restart-operators.sh amq-online-infra
7.1.2. Restarting authentication services
Authentication service restarts will temporarily affect new messaging connections. Existing connections will continue to work even if the authentication service is restarted.
Procedure
Run the
restart-authservices.sh
script:
./scripts/restart-authservices.sh amq-online-infra
7.1.3. Restarting routers
Messaging routers are only deployed in the standard
address space type. The script assumes that at least two replicas of the router are running and performs a rolling restart. Messaging clients connected to the restarting router are disconnected and must reconnect to be served by a different router.
Procedure
Run the
restart-routers.sh
script, which requires at least one router to be available:
./scripts/restart-routers.sh amq-online-infra 1
7.1.4. Restarting brokers
For the brokered
address space type, restarting the broker causes temporary downtime for messaging clients while the broker restarts. For the standard
address space type, messaging clients are not disconnected from the messaging routers, but clients are not able to consume messages stored on the restarting broker.
Procedure
Run the
restart-brokers.sh
script:
./scripts/restart-brokers.sh amq-online-infra
7.2. Viewing router logs
For the standard
address space type, you can view the router logs to troubleshoot issues with clients not connecting or issues with sending and receiving messages.
Procedure
List all router Pods and choose the Pod for the relevant address space:
oc get pods -l name=qdrouterd -o go-template --template '{{range .items}}{{.metadata.name}}{{"\t"}}{{.metadata.annotations.addressSpace}}{{"\n"}}{{end}}'
Display the logs for the Pod:
oc logs pod -c router
7.3. Viewing broker logs
For the brokered
or standard
address space type, you can view the broker logs to troubleshoot issues with clients not connecting or issues with sending and receiving messages.
Procedure
List all broker Pods and choose the Pod for the relevant address space:
oc get pods -l role=broker -o go-template --template '{{range .items}}{{.metadata.name}}{{"\t"}}{{.metadata.annotations.addressSpace}}{{"\n"}}{{end}}'
Display the logs for the Pod:
oc logs pod
7.4. Enabling an AMQP protocol trace for the router
For diagnostic purposes, you can enable an AMQP protocol trace for a router. This can be helpful when troubleshooting issues related to client connectivity or with sending and receiving messages. There are two methods for enabling a protocol trace for the router.
- You can dynamically enable or disable the protocol trace for a single router using a qdmanage command. This method avoids the need to restart the router. The setting is lost the next time the router restarts.
- Alternatively, you can apply configuration to the standardinfraconfig that enables the protocol trace for all routers of all address spaces using that standardinfraconfig. This method causes all of the routers to restart.
Enabling the protocol trace increases the CPU overhead of the router(s) and may decrease messaging performance. It may also increase the disk space requirements associated with any log retention system. Therefore, it is recommended that you enable the protocol trace for as short a time as possible.
7.4.1. Dynamically enabling the protocol trace for a single router
Procedure
Log in as a service operator:
oc login -u developer
Change to the project where AMQ Online is installed:
oc project amq-online-infra
List all router Pods and choose the Pod for the relevant address space:
oc get pods -l name=qdrouterd -o go-template --template '{{range .items}}{{.metadata.name}}{{"\t"}}{{.metadata.annotations.addressSpace}}{{"\n"}}{{end}}'
Enable the protocol trace for a single router:
echo '{"enable":"trace+"}' | oc exec qdrouterd-podname --stdin=true --tty=false -- qdmanage update -b 127.0.0.1:7777 --type=log --name=log/PROTOCOL --stdin
Display the logs for the Pod that will include the protocol trace:
oc logs pod
Disable the protocol trace:
echo '{"enable":"info"}' | oc exec qdrouterd-podname --stdin=true --tty=false -- qdmanage update -b 127.0.0.1:7777 --type=log --name=log/PROTOCOL --stdin
7.4.2. Enabling the protocol trace using the StandardInfraConfig
environment variable
Procedure
Log in as a service operator:
oc login -u developer
Change to the project where AMQ Online is installed:
oc project amq-online-infra
Determine the
addressspaceplan
name for the address space concerned:
oc get addressspace -n namespace address-space-name --output 'jsonpath={.spec.plan}{"\n"}'
Determine the
standardinfraconfig
name for the addressspaceplan name:
oc get addressspaceplan address-space-plan --output 'jsonpath={.spec.infraConfigRef}{"\n"}'
Enable the protocol trace for all routers of all address spaces using that
standardinfraconfig:
oc patch standardinfraconfig standardinfraconfig-name --type=merge -p '{"spec":{"router":{"podTemplate":{"spec":{"containers":[{"env":[{"name":"PN_TRACE_FRM","value":"true"}],"name":"router"}]}}}}}'
Display the logs for the Pod that will include the protocol trace:
oc logs pod
Disable the protocol trace:
oc patch standardinfraconfig standardinfraconfig-name --type=merge -p '{"spec":{"router":{"podTemplate":{"spec":{"containers":[{"env":[{"name":"PN_TRACE_FRM"}],"name":"router"}]}}}}}'
7.5. Enabling an AMQP protocol trace for the broker
For diagnostic purposes, you can enable an AMQP protocol trace for a broker. This can be helpful when troubleshooting issues with sending or receiving messages.
To enable the protocol trace, you apply configuration to the standardinfraconfig
(for standard address spaces) or brokeredinfraconfig
(for brokered address spaces) that enables the protocol trace for all brokers of all address spaces using that configuration. Applying this configuration will cause the brokers to restart.
Enabling the protocol trace increases the CPU overhead of the broker(s) and may decrease messaging performance. It may also increase the disk space requirements associated with any log retention system. Therefore, it is recommended that you enable the protocol trace for as short a time as possible.
Procedure
Log in as a service operator:
oc login -u developer
Change to the project where AMQ Online is installed:
oc project amq-online-infra
Determine the
addressspaceplan
name for the address space concerned:
oc get addressspace -n namespace address-space-name --output 'jsonpath={.spec.plan}{"\n"}'
Determine the
standardinfraconfig
or brokeredinfraconfig
name for the addressspaceplan name:
oc get addressspaceplan address-space-plan --output 'jsonpath={.spec.infraConfigRef}{"\n"}'
Enable the protocol trace for all brokers of all address spaces using that
standardinfraconfig
or brokeredinfraconfig:
oc patch infraconfig-resource infraconfig-name --type=merge -p '{"spec":{"broker":{"podTemplate":{"spec":{"containers":[{"env":[{"name":"PN_TRACE_FRM","value":"true"}],"name":"broker"}]}}}}}'
Display the logs for the Pod that will include the protocol trace:
oc logs pod
Disable the protocol trace:
oc patch infraconfig-resource infraconfig-name --type=merge -p '{"spec":{"broker":{"podTemplate":{"spec":{"containers":[{"env":[{"name":"PN_TRACE_FRM"}],"name":"broker"}]}}}}}'
7.6. Examining the state of a broker using the AMQ Broker management interfaces
If a problem is suspected with a Broker associated with an address space, you can examine the state of the broker directly using its built-in management interfaces. AMQ Online exposes the AMQ Broker’s CLI and JMX (via Jolokia). It does not expose the AMQ Broker Console.
Procedure
Log in as a service admin:
oc login -u admin
Change to the project where AMQ Online is installed:
oc project amq-online-infra
Retrieve the uuid for the address space:
oc get addressspace myspace -o jsonpath='{.metadata.annotations.enmasse\.io/infra-uuid}'
Retrieve the broker support credentials (username and password) for the address space:
oc get secret broker-support-uuid --template='{{.data.username}}' | base64 --decode
oc get secret broker-support-uuid --template='{{.data.password}}' | base64 --decode
Identify the broker pod name:
oc get pods -l infraUuid=uuid,role=broker
In the standard address space, there may be many brokers. To identify the broker(s) hosting a particular queue, use this command:
oc get address address-resource-name -o jsonpath="{.status.brokerStatuses[*].containerId}"
Execute support commands on the broker’s pod:
To execute an AMQ Broker CLI command, use a command similar to the following:
oc exec broker-pod-name -- /opt/amq/bin/artemis address show --user username --password password
To execute an AMQ Broker Jolokia JMX command, use a command similar to the following:
oc exec broker-pod-name -- curl --silent --insecure --user username:password -H "Origin: https://localhost:8161" 'https://localhost:8161/console/jolokia/read/org.apache.activemq.artemis:broker="broker-pod-name"/AddressMemoryUsage'
Important: The double quotes around the broker pod name within the URL are required. Make sure you protect them from your command shell by surrounding the whole URL with single quotes, as shown in the preceding command. If they are not present, you will receive an authorization failure.
Chapter 8. AMQ Online configuration sizing guidelines
The following information provides guidelines on how to size AMQ Online installations. More specifically, these guidelines offer specific configuration recommendations for components and plans based on use cases, and the trade-offs involved when adjusting the configuration settings. Sizing AMQ Online involves configuration of:
- Brokers
- Routers (standard address space only)
- Operator(s)
- Plans
For example, each address space type has certain distinct features that need to be considered when creating the address plans.
For more information about address space types and their semantics, see address spaces.
Properly sizing AMQ Online components also requires taking into consideration the following points regarding your OpenShift cluster:
- The OpenShift cluster must have sufficient capacity to handle the requested resources. If the OpenShift nodes are configured with 4 GB of memory, you cannot configure brokers and routers with memory sizes larger than 4 GB.
- Since each address space creates a dedicated piece of infrastructure, you need to ensure that cluster capacity can meet demand as the number of address spaces increases.
- The use of affinity and tolerations might also restrict the nodes available for the messaging infrastructure to use.
8.1. Broker component sizing
Brokers are configured using the BrokeredInfraConfig
and StandardInfraConfig
resources, depending on the type of address space. When sizing a broker, consider:
- The average message size
- The number of messages stored
- The number of queues and topics
- The address full policy
In AMQ Online, you can only restrict the total amount of memory allocated for a broker. You cannot restrict the amount of memory used by individual addresses.
The broker persists all messages to disk. When the BLOCK
, FAIL
, or DROP
address full policy is specified, the number of messages that can be persisted is limited to the amount of memory in the broker. By using the PAGE
address full policy, more messages can be stored than can be held in memory, at the expense of a potential performance degradation from reading data from disk. Therefore, paging is useful in the case of large messages or a large backlog of messages in your system.
8.1.1. Example use case for a broker component configuration
Given 10 queues with a maximum of 1000 messages stored per queue and an average message size of 128 kB, the amount of storage space required to store messages is:
10 queues * 1000 messages * (128 + (128 * 1024)) bytes ≈ 1.25 GB
In addition, the broker has a fixed storage footprint of about 50 MB.
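As a quick sanity check, the storage arithmetic can be reproduced in a shell (the 128-byte per-message overhead is the assumption used in the formula above):

# Storage for 10 queues * 1000 messages, each ~128 kB plus ~128 bytes of overhead
queues=10; messages=1000; msg_bytes=$((128 + 128 * 1024))
total=$((queues * messages * msg_bytes))
echo "${total} bytes = $((total / 1024 / 1024)) MiB"
# prints: 1312000000 bytes = 1251 MiB, in line with the ~1.25 GB figure above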
The amount of memory required for the broker depends on which address full policy is specified. If the PAGE
policy is used, the memory requirements can be reduced since the messages are stored separately from the journal (which always needs to fit in memory). If the FAIL
, BLOCK
, or DROP
policies are specified, all messages must also be held in memory, even if they are persisted.
There is also a constant memory cost associated with running the broker as well as the JVM. The memory available to store messages is automatically derived from the memory set in the broker configuration: it is set to half of the JVM memory, which in turn is set to half of the system memory.
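The halving rule can be made concrete with a short sketch (the 8Gi figure matches the example configuration later in this section; the exact heap sizing is performed by the broker image):

# Deriving message memory from the configured broker memory (halving rule)
broker_memory_gi=8                 # broker.resources.memory: 8Gi
jvm_gi=$((broker_memory_gi / 2))   # JVM memory: half of the system memory
message_gi=$((jvm_gi / 2))         # message memory: half of the JVM memory
echo "JVM: ${jvm_gi}Gi, message memory: ${message_gi}Gi"
# prints: JVM: 4Gi, message memory: 2Gi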
In the standard
address space type, multiple broker instances might be created. The sizing of these broker instances also depends on the address plan configuration and how many addresses you expect each broker to be able to handle before another broker is spawned.
8.1.1.1. Example broker component configuration without paging
For broker configurations not using a PAGE policy, an additional 5 percent bookkeeping overhead per address should be taken into account (1.05 * 1.25 ≈ 1.31 GB, rounded up to 1.35 GB in the following example):
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: cfg1
spec:
  broker:
    addressFullPolicy: FAIL
    globalMaxSize: 1.35Gb
    resources:
      memory: 8Gi
      storage: 2Gi
...
8.1.1.2. Example broker component configuration with paging
When paging is enabled, the original formula can be modified to only account for a reference to the message as well as holding 1000 in-flight messages in memory:
(1000 in-flight messages * 128 kB) + (10 queues * 1000 message references * 128 bytes) ≈ 123.5 MB
So, the amount of memory specified for the broker can now be reduced, as seen in this configuration example:
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: cfg1
spec:
  broker:
    addressFullPolicy: PAGE
    globalMaxSize: 124Mb
    resources:
      memory: 1Gi
      storage: 2Gi
...
8.1.2. Broker scaling (standard address space only)
Brokers are deployed on demand, that is, when addresses of type queue
or topic
are created. The number of brokers deployed is restricted by the resource limits specified in the AddressSpacePlan
configuration. The following AddressSpacePlan
configuration example specifies a limit of four brokers in total per address space:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: cfg1
spec:
  resourceLimits:
    broker: 4.0
...
In terms of capacity, multiply the memory requirements for the broker by the limit.
The number of broker instances are scaled dynamically between one and the maximum limit specified based on the AddressPlan
used for the different addresses. An AddressPlan
specifies the fraction of a broker that is required by an address. The fraction specified in the plan is multiplied by the number of addresses referencing this plan, and then rounded up to produce the number of desired broker replicas.
AddressPlan
configuration example
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: plan1
spec:
  ...
  resources:
    broker: 0.01
If you create 110 addresses with plan1
as the address plan, the number of broker replicas is ceil(110 addresses * 0.01 broker) = 2 replicas
.
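The rounding can be reproduced with plain integer arithmetic (a sketch; the controller performs the equivalent calculation internally):

# ceil(110 addresses * 0.01 broker) using integer arithmetic
addresses=110; fraction_hundredths=1   # 0.01 expressed as 1/100
echo $(( (addresses * fraction_hundredths + 99) / 100 ))
# prints: 2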
The total number of brokers is capped by the address space plan resource limits.
8.2. Router component sizing
Routers are configured in the StandardInfraConfig
resource. In determining router sizing, consider:
- The number of addresses
- The number of connections and links
- Link capacity
The router does not persist any state and therefore does not require persistent storage.
Address configuration itself does not require a significant amount of router memory. However, queues and subscriptions require an additional two links between the router and broker per address.
The total number of links is then two times the number of queues/subscriptions plus the number of client links. Each link requires metadata and buffers in the router to handle routing messages for that link.
The router link capacity affects how many messages the router can handle per link. Setting the link capacity to a higher value might improve performance, but at the cost of potentially more memory being used to hold in-flight messages if senders are filling the links. If you have many connections and links, consider specifying a lower value to balance the memory usage.
In addition, the router has to parse the message headers, manage dispositions and settlements of messages, and other per-link activities. The per-link cost can be derived using a constant factor of the link capacity and message size. This factor varies depending on the message size. The following table provides an approximation of this factor for different message size ranges:
Table 8.1. Link multiplication factor
Message size (bytes) | Factor |
---|---|
20-1000 | 18,000 |
1000-4000 | 22,000 |
4000-10,000 | 30,000 |
>10,000 | 50,000 |
8.2.1. Example use case for router component sizing
Consider the following example use case:
- 500 anycast and 1000 queued addresses
- 10,000 connected clients (one link per client)
- Link capacity of 10
- An average message size of 512 bytes
Based on measurements, an estimated 7 kB overhead per anycast address is realistic, so:
500 anycast addresses * 7 kB overhead per address = 3.5 MB
Memory usage of queues and topics is slightly higher than that of anycast addresses, with an estimated 32 kB overhead per address. In addition, each router-broker link can have up to linkCapacity
message deliveries to keep track of. Also, we need to multiply the link capacity with the multiplication factor to account for the worst-case scenario:
(1000 queued addresses * 32,768) + (2000 links * 10 link capacity * 18,000 link multiplication factor) = 374 MB
Memory usage of client connections/links:
10,000 clients * 10 link capacity * 18,000 link multiplication factor = 1717 MB
The memory usage of client connections/links can be divided by the number of router instances.
If you have N routers, the total amount of router memory required for this configuration, including a constant base memory of 50 MB, is 50 + 3.5 + (374 + 1717)/N MB
.
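For example, with N = 2 routers, the formula yields roughly 1099 MB per router, which lines up with the 1100Mi memory request in the following configuration (a sketch of the arithmetic only):

# Router memory for N routers: 50 + 3.5 + (374 + 1717)/N MB
N=2
awk -v n="$N" 'BEGIN { printf "%.0f MB per router\n", 50 + 3.5 + (374 + 1717) / n }'
# prints: 1099 MB per router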
To ensure the maximum number of connections and links is not exceeded, a router policy can be applied as well. The following configuration example shows two routers with a router policy specified:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: cfg1
spec:
  router:
    resources:
      memory: 1100Mi
    linkCapacity: 10
    policy:
      maxConnections: 5000
      maxSessionsPerConnection: 1
      maxSendersPerConnection: 1
      maxReceiversPerConnection: 1
...
8.2.2. High availability (HA)
To configure routers for high availability (HA), multiply the minimum number of required router replicas by the amount of memory per router to calculate the amount of expected memory usage. Although all connections and links are distributed across all routers, if one router fails, you must plan for those connections and links to be redistributed across the remaining routers.
8.2.3. Router scaling
Routers are scaled dynamically on demand within the limits specified for minReplicas
in the StandardInfraConfig
resource and the resourceLimits.router
specified in the AddressSpacePlan
. To restrict the number of routers to a maximum number of four, but require a minimum amount of two routers for HA purposes, the following configuration is needed:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: cfg1
spec:
  router:
    minReplicas: 2
  ...
---
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: plan1
spec:
  infraConfigRef: cfg1
  resourceLimits:
    router: 4
...
In terms of capacity, multiply the memory requirements for the router by the resource limit. The router will then scale up to the resource limits specified in the AddressSpacePlan
for the address space.
The number of router replicas is scaled dynamically between the minimum and maximum limits based on the AddressPlan
used for the different addresses. An AddressPlan
describes the fraction of a router that is required by an address. The fraction defined in the plan is multiplied by the number of addresses referencing this plan, and then rounded up to produce the number of desired router replicas.
AddressPlan
configuration example:
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: plan1
spec:
  ...
  resources:
    router: 0.01
If you create 110 addresses with plan1
as the address plan, the number of router replicas is ceil(110 addresses * 0.01 router) = 2 replicas
.
If the number of replicas exceeds the address space plan limit, the addresses exceeding the maximum number remain in the Pending
state and an error message describing the issue is displayed in the Address
status section.
8.3. Operator component sizing
The operator component is tasked with reading all address configuration and applying these configurations to the routers and brokers. It is important to size the operator component proportionally to the number of addresses.
In the standard
address space, the admin
Pod contains two processes, agent
and standard-controller
. These processes cannot be sized individually, but the memory usage of both is proportional to the number of addresses. In the brokered
address space, only a single agent
process exists.
The operator processes run on either a JVM or a Node.js VM. It is recommended to size the memory for these processes at twice the amount required for the address configuration itself.
8.3.1. Operator component configuration example
Each address adds about 20 kB of overhead to the operator process. With 1500 addresses, an additional 1500 * 20 kB = 30 MB
is needed for the operator process.
In addition, these processes have a base memory requirement of 256 MB. So, the total operator memory needed is 256 MB + 30 MB = 286 MB
. This value can be configured in both the StandardInfraConfig
and BrokeredInfraConfig
resources:
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: cfg1
spec:
  admin:
    resources:
      memory: 300Mi
...
8.4. Plan sizing
Plans enable dynamic scaling in the standard
address space, as shown in the broker and router sizing sections. At the cluster level, the combination of plans and infrastructure configuration settings determines the maximum number of Pods that can be deployed on the cluster. Since AMQ Online does not support limiting the number of address spaces that can be created, it is a best practice to apply a policy to limit who is allowed to create address spaces. Such policy configuration can be handled through the standard OpenShift policies.
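As an illustrative sketch of such a policy (the group name is an assumption, not something AMQ Online creates for you), standard RBAC can limit who may create AddressSpace resources:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: addressspace-creator       # illustrative name
rules:
- apiGroups: ["enmasse.io"]
  resources: ["addressspaces"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: addressspace-creators      # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: addressspace-creator
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: messaging-tenants          # assumed group of permitted users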
From a capacity-planning perspective, it is useful to calculate the maximum number of Pods and the maximum amount of memory that can be consumed for a given address space. To make this calculation using a script, see Running the check-memory calculation script.
8.4.1. Running the check-memory calculation script
You can use this script to calculate the maximum number of Pods and the maximum amount of memory that can be consumed for a given address space.
In this script, memory is assumed to be specified using the Mi
unit, while storage is assumed to be specified using the Gi
unit. Also, all three components, admin
, router
, and broker
, must have limits specified for the script to work as intended.
Procedure
Save the following script as
check-memory.sh
#!/usr/bin/env bash
PLAN=$1
total_pods=0
total_memory_mb=0
total_storage_gb=0

routers=$(oc get addressspaceplan $PLAN -o jsonpath='{.spec.resourceLimits.router}')
brokers=$(oc get addressspaceplan $PLAN -o jsonpath='{.spec.resourceLimits.broker}')
infra=$(oc get addressspaceplan $PLAN -o jsonpath='{.spec.infraConfigRef}')

operator_memory=$(oc get standardinfraconfig $infra -o jsonpath='{.spec.admin.resources.memory}')
broker_memory=$(oc get standardinfraconfig $infra -o jsonpath='{.spec.broker.resources.memory}')
broker_storage=$(oc get standardinfraconfig $infra -o jsonpath='{.spec.broker.resources.storage}')
router_memory=$(oc get standardinfraconfig $infra -o jsonpath='{.spec.router.resources.memory}')

total_pods=$((routers + brokers + 1))
total_memory_mb=$(( (routers * ${router_memory%Mi}) + (brokers * ${broker_memory%Mi}) + ${operator_memory%Mi} ))
total_storage_gb=$(( brokers * ${broker_storage%Gi} ))

echo "Pods: ${total_pods}. Memory: ${total_memory_mb} MB. Storage: ${total_storage_gb} GB"
Run the script using the following command:
bash check-memory.sh standard-small
If all components have limits defined in the assumed units, the script outputs the total resource limits for address spaces using this plan, as in the following example:
Pods: 3. Memory: 1280 MB. Storage: 2 GB
8.5. Address sizing
Per-address broker memory limits are calculated from the address plan configuration. AMQ Online determines the maximum size allowed for each queue by multiplying the broker configuration globalMaxSize
(specified in the standardinfraconfig
or brokeredinfraconfig
) by the address plan’s broker resource limit. The behavior when the queue reaches its memory limit is governed by the address full policy. For more information on the address full policy, see Broker component sizing.
For example, if the broker’s configuration specifies globalMaxSize
= 124 MB and the address plan configuration specifies addressplan.spec.resources.broker
= 0.2, the maximum size allowed for each queue is approximately 25 MB (124 * 0.2 ≈ 25 MB).
Chapter 9. Understanding AMQ Online resource configuration
9.1. Address space and address concepts in AMQ Online
Before you begin configuring resources for AMQ Online, you must first understand the concepts of an address space and an address in AMQ Online.
9.1.1. Address space
An address space is a group of addresses that can be accessed through a single connection (per protocol). This means that clients connected to the endpoints of an address space can send messages to or receive messages from any authorized address within that address space. An address space can support multiple protocols, as defined by the address space type.
You cannot modify endpoints for an existing address space.
AMQ Online has two types of address spaces: standard and brokered.
9.1.2. Address
An address is part of an address space and represents a destination for sending and receiving messages. An address has a type, which defines the semantics of sending messages to and receiving messages from that address.
The types of addresses available in AMQ Online depend on the address space type.
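As a sketch, a queue address in a standard address space is declared as a resource similar to the following (the names and the plan are illustrative):

apiVersion: enmasse.io/v1beta1
kind: Address
metadata:
  name: myspace.myqueue        # prefixed with the address space name
spec:
  address: myqueue
  type: queue
  plan: standard-small-queue   # assumed address plan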
9.2. Service configuration resources and definition
The service administrator configures AMQ Online by creating Custom Resources that comprise the "service configuration." This service configuration contains instances of the following Custom Resource types:
Custom Resource type | Description |
---|---|
| Specifies an authentication service instance used to authenticate messaging clients. |
| Specifies the messaging resources available for address spaces using this plan, such as the available address plans and the amount of router and broker resources that can be used. |
| Specifies the messaging resources consumed by a particular address using this plan, such as the fraction of routers and brokers an address can use and other properties that can be specified for multiple addresses. |
|
For the |
|
For the |
When created, these Custom Resources define the configuration that is available to the messaging tenants.
The following diagram illustrates the relationship between the different service configuration resources and how they are referenced by the messaging tenant resources.

9.3. Example use case for configuring AMQ Online
To help illustrate how the service configuration resources can be defined to satisfy a particular use case, the requirements of Company X for using AMQ Online are outlined. This use case is referenced throughout the following documentation describing the service configuration resource types in further detail.
Company X has the following requirements:
- Ability to accommodate multiple separate teams—for example, engineering and quality assurance (QA) work teams—that use messaging independently. To meet this requirement, multiple address spaces are needed.
- Since the applications for Company X are written to use JMS APIs, make extensive use of local transactions, and use a mixture of AMQP and OpenWire clients, the brokered address space type is required.
- For engineering work, the messaging infrastructure must be restricted to support storage of no more than 1000 messages of approximately 1 KB each, with up to 10 queues and topics.
- For QA work, the messaging infrastructure must be restricted to support storage of no more than 10,000 messages of approximately 100 KB each, with up to 50 queues and topics.
- For engineering work, the ability to restrict who can connect into the address space is required.
- For engineering work, the engineering team does not need to create distinct users that are individually authenticated.
- For QA work, the QA team must be able to create users for each instance.
Each of these requirements and how they can be met by configuring the appropriate resources is discussed in the following sections.
9.3.1. Restricting messaging infrastructure
Company X has the following requirements for using AMQ Online:
For engineering work, the messaging infrastructure must be restricted to support storage of no more than 1000 messages of approximately 1 KB each, with up to 10 queues and topics.
For QA work, the messaging infrastructure must be restricted to support storage of no more than 10,000 messages of approximately 100 KB each, with up to 50 queues and topics.
Meeting this requirement involves configuring the BrokeredInfraConfig
resource. The following points need to be taken into consideration:
- Calculate the memory size for the broker: Given the requirements, specifying a relatively small memory size for engineering work is likely sufficient, while more memory is required for the QA work. For more information about broker sizing guidelines, see Broker component sizing.
- Calculate the minimum amount of storage for the broker. For more information about broker sizing guidelines, see Broker component sizing.
9.3.1.1. Examples of brokered infrastructure configurations
The following brokered infrastructure configuration examples show broker component resource values that meet the requirements of Company X.
Brokered infrastructure configuration example for engineering
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: engineering
spec:
  broker:
    resources:
      memory: 512Mi
      storage: 20Mi
Brokered infrastructure configuration example for QA
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: qa
spec:
  broker:
    resources:
      memory: 4Gi
      storage: 50Gi
9.3.2. Ability to restrict address space connections
Company X has the following requirement for using AMQ Online: For engineering work, the ability to restrict who can connect into the address space is required.
To meet this requirement, you must set a network policy in the brokered infrastructure configuration. For more information about network policies, see:
- OpenShift Container Platform 3.11 documentation about Enabling Network Policy.
- OpenShift Container Platform 4.2 documentation about Configuring network policy with OpenShift SDN.
Brokered infrastructure configuration example showing network policy setting
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: engineering
spec:
  networkPolicy:
    ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            org: engineering
  broker:
    resources:
      memory: 512Mi
      storage: 20Mi
In addition, the address space plan references the previous BrokeredInfraConfig
Custom Resource.
Address space plan example
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: engineering
spec:
  infraConfigRef: engineering
  addressSpaceType: brokered
  addressPlans:
  - brokered-queue
  - brokered-topic
9.3.3. Authentication service resource examples
Company X has the following requirement for using AMQ Online: For engineering work, the engineering team does not need to create distinct users that need to be individually authenticated. To meet this requirement, you specify the none
authentication service:
None authentication service example
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: engineering
spec:
  type: none
For QA work, the QA team must be able to create users for each instance. Also, QA has a database they want to use for persisting the users. To meet this requirement, you must use the standard
authentication service and specify a data source:
Standard authentication service example
apiVersion: admin.enmasse.io/v1beta1
kind: AuthenticationService
metadata:
  name: qa
spec:
  type: standard
  standard:
    storage:
      type: persistent-claim
      size: 5Gi
    datasource:
      type: postgresql
      host: db.example.com
      port: 5432
      database: authdb
Appendix A. AMQ Online resources for service administrators
The following table describes the AMQ Online resources that pertain to the service administrator role.
Table A.1. AMQ Online service administrator resources table
Resource | Description |
---|---|
| Specifies the address plan. |
| Specifies the address space plan. |
|
Defines the service characteristics available to an |
| Specifies the infrastructure configuration for brokered address spaces. For more information see Brokered infrastructure configuration fields table. |
| Specifies the infrastructure configuration for standard address spaces. For more information see Standard infrastructure configuration fields table. |
Appendix B. Brokered infrastructure configuration fields
This table shows the fields available for the brokered infrastructure configuration and a brief description.
Table B.1. Brokered infrastructure configuration fields table
Field | Description |
| Specifies the AMQ Online version used. When upgrading, AMQ Online uses this field to determine whether to upgrade the infrastructure to the requested version. |
| Specifies the amount of memory allocated to the admin Pod. |
| Specifies the labels added to the admin Pod. |
| Specifies the affinity settings for the admin Pod so you can specify where on particular nodes a Pod runs, or if it cannot run together with other instances. |
| Specifies the priority class to use for the admin Pod so you can prioritize admin Pods over other Pods in the OpenShift cluster. |
| Specifies the toleration settings for the admin Pod, which allows this Pod to run on certain nodes that other Pods cannot run on. |
|
Specifies the action taken when a queue is full: |
| Specifies the maximum amount of memory used for queues in the broker. |
| Specifies the amount of memory allocated to the broker. |
| Specifies the amount of storage requested for the broker. |
| Specifies the labels added to the broker Pod. |
| Specifies the affinity settings for the broker Pod so you can specify where on particular nodes a Pod runs, or if it cannot run together with other instances. |
| Specifies the priority class to use for the broker Pod so you can prioritize broker Pods over other Pods in the OpenShift cluster. |
| Specifies the toleration settings for the broker Pod, which allows this Pod to run on certain nodes that other Pods cannot run on. |
| Specifies the security context for the broker Pod. |
| Specifies environment variables for the broker Pod. |
| Specifies the number of times that OpenShift tries when a broker Pod starts and the probe fails before restarting the container. |
| Specifies the probe delay value in seconds for the broker Pod. |
| Specifies the probe timeout value in seconds for the broker Pod. |
|
Specifies the number of times that OpenShift tries when a broker Pod starts and the probe fails before the Pod is marked |
| Specifies the probe delay value in seconds for the broker Pod. |
| Specifies the probe timeout value in seconds for the broker Pod. |
| Specifies broker Pod resource requests and limits for CPU and memory. |
| Specifies what storage class to use for the persistent volume for the broker. |
|
If the persistent volume supports resizing, setting this value to |
Appendix C. Standard infrastructure configuration fields
This table shows the fields available for the standard infrastructure configuration and a brief description.
Table C.1. Standard infrastructure configuration fields table
Field | Description |
| Specifies the AMQ Online version used. When upgrading, AMQ Online uses this field to determine whether to upgrade the infrastructure to the requested version. |
| Specifies the amount of memory allocated to the admin Pod. |
| Specifies the labels added to the admin Pod. |
| Specifies the affinity settings for the admin Pod so you can specify where on particular nodes a Pod runs, or if it cannot run together with other instances. |
| Specifies the priority class to use for the admin Pod so you can prioritize admin pods over other Pods in the OpenShift cluster. |
| Specifies the toleration settings for the admin Pod, which allow this Pod to run on certain nodes on which other Pods cannot run. |
|
Specifies the action taken when a queue is full: |
| Specifies the maximum amount of memory used for queues in the broker. |
| Specifies the amount of memory allocated to the broker. |
| Specifies the amount of storage requested for the broker. |
| Specifies the labels added to the broker Pod. |
| Specifies the affinity settings for the broker Pod so you can specify where on particular nodes a Pod runs, or if it cannot run together with other instances. |
| Specifies the priority class to use for the broker Pod so you can prioritize broker Pods over other Pods in the OpenShift cluster. |
| Specifies the toleration settings for the broker Pod, which allow this Pod to run on certain nodes on which other Pods cannot run. |
| Specifies the security context for the broker Pod. |
| Specifies environment variables for the broker Pod. |
| Specifies the number of times that OpenShift tries when a broker Pod starts and the probe fails before restarting the container. |
| Specifies the probe delay value in seconds for the broker Pod. |
| Specifies the probe timeout value in seconds for the broker Pod. |
|
Specifies the number of times that OpenShift tries when a broker Pod starts and the probe fails before the Pod is marked |
| Specifies the probe delay value in seconds for the broker Pod. |
| Specifies the probe timeout value in seconds for the broker Pod. |
| Specifies broker Pod resource requests and limits for CPU and memory. |
| Specifies the AMQP idle timeout to use for the connection to the router. |
| Specifies the number of worker threads of the connection to the router. |
| Specifies what storage class to use for the persistent volume for the broker. |
|
If the persistent volume supports resizing, setting this value to |
|
Treat rejected delivery outcome as modified delivery failed. This causes the message to be re-sent to the consumer by default. The default value is |
|
Respond with modified for transient delivery errors to allow sender to retry. The default value is |
|
Specifies the minimum size of a message for it to be treated as a large message. A large message is always paged to disk with a reference in the journal. The default value is |
| Specifies the amount of memory allocated to the router. |
| Specifies the default number of credits issued on AMQP links for the router. |
| Specifies the amount of time in seconds to wait for the secure handshake to be initiated. |
| Specifies the minimum number of router Pods to run; a minimum of two are required for high availability (HA) configuration. |
| Specifies the labels added to the router Pod. |
| Specifies the affinity settings for the router Pod so you can specify where on particular nodes a pod runs, or if it cannot run together with other instances. |
| Specifies the priority class to use for the router Pod so you can prioritize router pods over other pods in the OpenShift cluster. |
| Specifies the toleration settings for the router Pod, which allow this Pod to run on certain nodes on which other Pods cannot run. |
| Specifies the security context for the router Pod. |
| Specifies the environment variables for the router Pod. |
| Specifies the number of times that OpenShift tries when a router Pod starts and the probe fails before restarting the container. |
| Specifies the probe delay value in seconds for the router Pod. |
| Specifies the probe timeout value in seconds for the router Pod. |
|
Specifies the number of times that OpenShift tries when a router Pod starts and the probe fails before the Pod is marked |
| Specifies the probe delay value in seconds for the router Pod. |
| Specifies the probe timeout value in seconds for the router Pod. |
| Specifies router Pod resource requests and limits for CPU and memory. |
| Specifies the AMQP idle timeout to use for all router listeners. |
| Specifies the number of worker threads to use for the router. |
| Specifies the maximum number of router connections allowed. |
| Specifies the maximum number of router connections allowed per user. |
| Specifies the maximum number of router connections allowed per host. |
| Specifies the maximum number of sessions allowed per router connection. |
| Specifies the maximum number of senders allowed per router connection. |
| Specifies the maximum number of receivers allowed per router connection. |
Appendix D. REST API Reference
D.1. EnMasse REST API
D.1.1. Overview
This is the EnMasse API specification.
D.1.1.1. Version information
Version : 0.32-SNAPSHOT
D.1.1.2. URI scheme
Schemes : HTTPS
D.1.1.3. Tags
- addresses : Operating on Addresses.
- addressplans : Operating on AddressPlans.
- addressspaceplans : Operating on AddressSpacePlans.
- addressspaces : Operating on AddressSpaces.
- brokeredinfraconfigs : Operating on BrokeredInfraConfigs.
- messagingusers : Operating on MessagingUsers.
- standardinfraconfigs : Operating on StandardInfraConfigs.
D.1.1.4. External Docs
Description : Find out more about EnMasse
URL : https://enmasse.io/documentation/
D.1.2. Paths
D.1.2.1. POST /apis/admin.enmasse.io/v1beta2/namespaces/{namespace}/addressspaceplans
D.1.2.1.1. Description
create an AddressSpacePlan
D.1.2.1.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.1.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.1.4. Consumes
-
application/json
D.1.2.1.5. Produces
-
application/json
D.1.2.1.6. Tags
- addressspaceplan
- admin
- enmasse_v1beta2
D.1.2.2. GET /apis/admin.enmasse.io/v1beta2/namespaces/{namespace}/addressspaceplans
D.1.2.2.1. Description
list objects of kind AddressSpacePlan
D.1.2.2.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Query |
labelSelector | A selector to restrict the list of returned objects by their labels. Defaults to everything. | string |
D.1.2.2.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.2.4. Produces
-
application/json
D.1.2.2.5. Tags
- addressspaceplan
- admin
- enmasse_v1beta2
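As a usage sketch against the GET path above (the cluster API URL is a placeholder; authorization uses the current oc session token):

# List AddressSpacePlans in the amq-online-infra namespace via the REST API
TOKEN=$(oc whoami -t)
curl -k -H "Authorization: Bearer ${TOKEN}" \
  https://api.cluster.example.com:6443/apis/admin.enmasse.io/v1beta2/namespaces/amq-online-infra/addressspaceplans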
D.1.2.3. GET /apis/admin.enmasse.io/v1beta2/namespaces/{namespace}/addressspaceplans/{name}
D.1.2.3.1. Description
read the specified AddressSpacePlan
D.1.2.3.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of AddressSpacePlan to read. | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.3.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.3.4. Consumes
-
application/json
D.1.2.3.5. Produces
-
application/json
D.1.2.3.6. Tags
- addressspaceplan
- admin
- enmasse_v1beta2
D.1.2.4. PUT /apis/admin.enmasse.io/v1beta2/namespaces/{namespace}/addressspaceplans/{name}
D.1.2.4.1. Description
replace the specified AddressSpacePlan
D.1.2.4.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of AddressSpacePlan to replace. | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.4.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.4.4. Produces
-
application/json
D.1.2.4.5. Tags
- addressspaceplan
- admin
- enmasse_v1beta2
D.1.2.5. DELETE /apis/admin.enmasse.io/v1beta2/namespaces/{namespace}/addressspaceplans/{name}
D.1.2.5.1. Description
delete an AddressSpacePlan
D.1.2.5.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of AddressSpacePlan to delete. | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.5.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.5.4. Produces
-
application/json
D.1.2.5.5. Tags
- addressspaceplan
- admin
- enmasse_v1beta2
D.1.2.6. POST /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses
D.1.2.6.1. Description
create an Address
D.1.2.6.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.6.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.6.4. Consumes
-
application/json
D.1.2.6.5. Produces
-
application/json
D.1.2.6.6. Tags
- addresses
- enmasse_v1beta1
D.1.2.7. GET /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses
D.1.2.7.1. Description
list objects of kind Address
D.1.2.7.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Query |
labelSelector | A selector to restrict the list of returned objects by their labels. Defaults to everything. | string |
D.1.2.7.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.7.4. Produces
-
application/json
D.1.2.7.5. Tags
- addresses
- enmasse_v1beta1
D.1.2.8. GET /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses/{name}
D.1.2.8.1. Description
read the specified Address
D.1.2.8.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of Address to read | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.8.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.8.4. Consumes
-
application/json
D.1.2.8.5. Produces
-
application/json
D.1.2.8.6. Tags
- addresses
- enmasse_v1beta1
D.1.2.9. PUT /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses/{name}
D.1.2.9.1. Description
replace the specified Address
D.1.2.9.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of Address to replace | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.9.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.9.4. Produces
-
application/json
D.1.2.9.5. Tags
- addresses
- enmasse_v1beta1
D.1.2.10. DELETE /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses/{name}
D.1.2.10.1. Description
delete an Address
D.1.2.10.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of Address to delete | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.10.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.10.4. Produces
-
application/json
D.1.2.10.5. Tags
- addresses
- enmasse_v1beta1
D.1.2.11. PATCH /apis/enmasse.io/v1beta1/namespaces/{namespace}/addresses/{name}
D.1.2.11.1. Description
patches (RFC6902) the specified Address
D.1.2.11.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
name | Name of Address to replace | string |
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.11.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.11.4. Consumes
-
application/json-patch+json
D.1.2.11.5. Produces
-
application/json
D.1.2.11.6. Tags
- addresses
- enmasse_v1beta1
D.1.2.12. POST /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces
D.1.2.12.1. Description
create an AddressSpace
D.1.2.12.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path |
namespace | object name and auth scope, such as for teams and projects | string |
Body |
body |
D.1.2.12.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.12.4. Consumes
-
application/json
D.1.2.12.5. Produces
-
application/json
D.1.2.12.6. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.13. GET /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces
D.1.2.13.1. Description
list objects of kind AddressSpace
D.1.2.13.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | namespace | object name and auth scope, such as for teams and projects | string |
Query | labelSelector | A selector to restrict the list of returned objects by their labels. Defaults to everything. | string |
D.1.2.13.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.13.4. Produces
- application/json
D.1.2.13.5. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.14. GET /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces/{name}
D.1.2.14.1. Description
read the specified AddressSpace
D.1.2.14.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of AddressSpace to read | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.14.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.14.4. Consumes
- application/json
D.1.2.14.5. Produces
- application/json
D.1.2.14.6. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.15. PUT /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces/{name}
D.1.2.15.1. Description
replace the specified AddressSpace
D.1.2.15.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of AddressSpace to replace | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
Body | body | | |
D.1.2.15.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.15.4. Produces
- application/json
D.1.2.15.5. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.16. DELETE /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces/{name}
D.1.2.16.1. Description
delete an AddressSpace
D.1.2.16.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of AddressSpace to delete | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.16.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.16.4. Produces
- application/json
D.1.2.16.5. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.17. PATCH /apis/enmasse.io/v1beta1/namespaces/{namespace}/addressspaces/{name}
D.1.2.17.1. Description
patches (RFC6902) the specified AddressSpace
D.1.2.17.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of AddressSpace to patch | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
Body | body | | |
D.1.2.17.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.17.4. Consumes
- application/json-patch+json
D.1.2.17.5. Produces
- application/json
D.1.2.17.6. Tags
- addressspaces
- enmasse_v1beta1
D.1.2.18. POST /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers
D.1.2.18.1. Description
create a MessagingUser
D.1.2.18.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | namespace | object name and auth scope, such as for teams and projects | string |
Body | body | | |
D.1.2.18.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.18.4. Consumes
- application/json
D.1.2.18.5. Produces
- application/json
D.1.2.18.6. Tags
- auth
- enmasse_v1beta1
- user
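A sketch of creating a MessagingUser ($API and $TOKEN are placeholders as before). Note that the resource name must take the <addressSpace>.<username> form, and that the password value is Base64 encoded ('cGFzc3dvcmQ=' decodes to 'password'; both names and the password are illustrative):

curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"apiVersion":"user.enmasse.io/v1beta1","kind":"MessagingUser","metadata":{"name":"myspace.user1"},"spec":{"username":"user1","authentication":{"type":"password","password":"cGFzc3dvcmQ="}}}' \
  "$API/apis/user.enmasse.io/v1beta1/namespaces/myproject/messagingusers"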
D.1.2.19. GET /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers
D.1.2.19.1. Description
list objects of kind MessagingUser
D.1.2.19.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | namespace | object name and auth scope, such as for teams and projects | string |
Query | labelSelector | A selector to restrict the list of returned objects by their labels. Defaults to everything. | string |
D.1.2.19.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.19.4. Produces
- application/json
D.1.2.19.5. Tags
- auth
- enmasse_v1beta1
- user
D.1.2.20. GET /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers/{name}
D.1.2.20.1. Description
read the specified MessagingUser
D.1.2.20.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of MessagingUser to read. Must include the address space name and a dot separator (for example, 'myspace.user1'). | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.20.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.20.4. Consumes
- application/json
D.1.2.20.5. Produces
- application/json
D.1.2.20.6. Tags
- auth
- enmasse_v1beta1
- user
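For example, reading the user created in the earlier sketch; note the dotted <addressSpace>.<username> form of the name ($API and $TOKEN remain placeholder values):

curl -sk -H "Authorization: Bearer $TOKEN" \
  "$API/apis/user.enmasse.io/v1beta1/namespaces/myproject/messagingusers/myspace.user1"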
D.1.2.21. PUT /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers/{name}
D.1.2.21.1. Description
replace the specified MessagingUser
D.1.2.21.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of MessagingUser to replace. Must include the address space name and a dot separator (for example, 'myspace.user1'). | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
Body | body | | |
D.1.2.21.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
201 | Created | |
401 | Unauthorized | No Content |
D.1.2.21.4. Produces
- application/json
D.1.2.21.5. Tags
- auth
- enmasse_v1beta1
- user
D.1.2.22. DELETE /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers/{name}
D.1.2.22.1. Description
delete a MessagingUser
D.1.2.22.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of MessagingUser to delete. Must include the address space name and a dot separator (for example, 'myspace.user1'). | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
D.1.2.22.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
404 | Not found | No Content |
D.1.2.22.4. Produces
- application/json
D.1.2.22.5. Tags
- auth
- enmasse_v1beta1
- user
D.1.2.23. PATCH /apis/user.enmasse.io/v1beta1/namespaces/{namespace}/messagingusers/{name}
D.1.2.23.1. Description
patches (RFC6902) the specified MessagingUser
D.1.2.23.2. Parameters
Type | Name | Description | Schema |
---|---|---|---|
Path | name | Name of MessagingUser to patch. Must include the address space name and a dot separator (for example, 'myspace.user1'). | string |
Path | namespace | object name and auth scope, such as for teams and projects | string |
Body | body | | |
D.1.2.23.3. Responses
HTTP Code | Description | Schema |
---|---|---|
200 | OK | |
401 | Unauthorized | No Content |
D.1.2.23.4. Consumes
- application/json-patch+json
D.1.2.23.5. Produces
- application/json
D.1.2.23.6. Tags
- auth
- enmasse_v1beta1
- user
D.1.3. Definitions
D.1.3.1. JsonPatchRequest
Name | Schema |
---|---|
document | object |
patch | < Patch > array |
D.1.3.2. ObjectMeta
ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.
Name | Schema |
---|---|
name | string |
namespace | string |
D.1.3.3. Patch
Name | Description | Schema |
---|---|---|
from | Required for operations move, copy | string |
op | | enum (add, remove, replace, move, copy, test) |
path | Slash-separated path format | string |
value | Required for operations add, replace, test | string |
D.1.3.4. Status
Status is a return value for calls that do not return other objects.
Name | Description | Schema |
---|---|---|
code | Suggested HTTP return code for this status, 0 if not set. | integer (int32) |
D.1.3.5. io.enmasse.admin.v1beta1.BrokeredInfraConfig
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta1) |
kind | enum (BrokeredInfraConfig) |
metadata | |
spec |
D.1.3.6. io.enmasse.admin.v1beta1.BrokeredInfraConfigList
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta1) |
items | |
kind | enum (BrokeredInfraConfigList) |
D.1.3.7. io.enmasse.admin.v1beta1.BrokeredInfraConfigSpec
Name | Schema |
---|---|
admin | |
broker | |
networkPolicy | |
version | string |
networkPolicy
Name | Schema |
---|---|
egress | |
ingress |
D.1.3.8. io.enmasse.admin.v1beta1.BrokeredInfraConfigSpecAdmin
Name | Schema |
---|---|
podTemplate | |
resources |
resources
Name | Schema |
---|---|
memory | string |
D.1.3.9. io.enmasse.admin.v1beta1.BrokeredInfraConfigSpecBroker
Name | Schema |
---|---|
addressFullPolicy | enum (PAGE, BLOCK, FAIL) |
podTemplate | |
resources | |
storageClassName | string |
updatePersistentVolumeClaim | boolean |
resources
Name | Schema |
---|---|
memory | string |
storage | string |
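The following is a hedged sketch of a BrokeredInfraConfig that sets the broker fields documented above; the resource name and sizes are illustrative, and amq-online-infra is assumed to be the infrastructure namespace:

oc apply -n amq-online-infra -f - <<EOF
apiVersion: admin.enmasse.io/v1beta1
kind: BrokeredInfraConfig
metadata:
  name: brokered-small
spec:
  broker:
    addressFullPolicy: PAGE
    resources:
      memory: 512Mi
      storage: 2Gi
EOF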
D.1.3.10. io.enmasse.admin.v1beta1.InfraConfigPodSpec
metadata
Name | Schema |
---|---|
labels | object |
spec
Name | Schema |
---|---|
affinity | object |
containers | < containers > array |
priorityClassName | string |
securityContext | object |
tolerations | < object > array |
containers
Name | Schema |
---|---|
resources | object |
D.1.3.11. io.enmasse.admin.v1beta1.StandardInfraConfig
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta1) |
kind | enum (StandardInfraConfig) |
metadata | |
spec |
D.1.3.12. io.enmasse.admin.v1beta1.StandardInfraConfigList
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta1) |
items | |
kind | enum (StandardInfraConfigList) |
D.1.3.13. io.enmasse.admin.v1beta1.StandardInfraConfigSpec
Name | Schema |
---|---|
admin | |
broker | |
networkPolicy | |
router | |
version | string |
networkPolicy
Name | Schema |
---|---|
egress | |
ingress |
D.1.3.14. io.enmasse.admin.v1beta1.StandardInfraConfigSpecAdmin
Name | Schema |
---|---|
podTemplate | |
resources |
resources
Name | Schema |
---|---|
memory | string |
D.1.3.15. io.enmasse.admin.v1beta1.StandardInfraConfigSpecBroker
Name | Schema |
---|---|
addressFullPolicy | enum (PAGE, BLOCK, FAIL) |
connectorIdleTimeout | integer |
connectorWorkerThreads | integer |
podTemplate | |
resources | |
storageClassName | string |
updatePersistentVolumeClaim | boolean |
resources
Name | Schema |
---|---|
memory | string |
storage | string |
D.1.3.16. io.enmasse.admin.v1beta1.StandardInfraConfigSpecRouter
Name | Schema |
---|---|
idleTimeout | integer |
initialHandshakeTimeout | integer |
linkCapacity | integer |
minAvailable | integer |
minReplicas | integer |
podTemplate | |
policy | |
resources | |
workerThreads | integer |
policy
Name | Schema |
---|---|
maxConnections | integer |
maxConnectionsPerHost | integer |
maxConnectionsPerUser | integer |
maxReceiversPerConnection | integer |
maxSendersPerConnection | integer |
maxSessionsPerConnection | integer |
resources
Name | Schema |
---|---|
memory | string |
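As a sketch combining the router fields above (names and limits are illustrative; amq-online-infra is assumed to be the infrastructure namespace), a StandardInfraConfig that scales the routers and caps connections through the policy block:

oc apply -n amq-online-infra -f - <<EOF
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: standard-limited
spec:
  router:
    minReplicas: 2
    linkCapacity: 250
    policy:
      maxConnections: 1000
      maxSessionsPerConnection: 10
EOF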
D.1.3.17. io.enmasse.admin.v1beta2.AddressPlan
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta2) |
kind | enum (AddressPlan) |
metadata | |
spec |
D.1.3.18. io.enmasse.admin.v1beta2.AddressPlanList
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta2) |
items | < io.enmasse.admin.v1beta2.AddressPlan > array |
kind | enum (AddressPlanList) |
D.1.3.19. io.enmasse.admin.v1beta2.AddressPlanSpec
Name | Schema |
---|---|
addressType | string |
displayName | string |
displayOrder | integer |
longDescription | string |
partitions | integer |
resources | |
shortDescription | string |
resources
Name | Schema |
---|---|
broker | number |
router | number |
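A minimal AddressPlan sketch using the spec fields above; the plan name is illustrative, and the broker and router numbers represent the fraction of broker and router capacity one address on this plan is assumed to consume:

oc apply -n amq-online-infra -f - <<EOF
apiVersion: admin.enmasse.io/v1beta2
kind: AddressPlan
metadata:
  name: standard-small-queue
spec:
  addressType: queue
  shortDescription: Small queue plan
  resources:
    broker: 0.1
    router: 0.01
EOF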
D.1.3.20. io.enmasse.admin.v1beta2.AddressSpacePlan
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta2) |
kind | enum (AddressSpacePlan) |
metadata | |
spec |
D.1.3.21. io.enmasse.admin.v1beta2.AddressSpacePlanList
Name | Schema |
---|---|
apiVersion | enum (admin.enmasse.io/v1beta2) |
items | |
kind | enum (AddressSpacePlanList) |
D.1.3.22. io.enmasse.admin.v1beta2.AddressSpacePlanSpec
Name | Schema |
---|---|
addressPlans | < string > array |
addressSpaceType | string |
displayName | string |
displayOrder | integer |
infraConfigRef | string |
longDescription | string |
resourceLimits | |
shortDescription | string |
resourceLimits
Name | Schema |
---|---|
aggregate | number |
broker | number |
router | number |
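A matching AddressSpacePlan sketch that references the plan above (all names are illustrative; infraConfigRef is assumed to point at a StandardInfraConfig such as the one sketched earlier):

oc apply -n amq-online-infra -f - <<EOF
apiVersion: admin.enmasse.io/v1beta2
kind: AddressSpacePlan
metadata:
  name: standard-small
spec:
  addressSpaceType: standard
  infraConfigRef: standard-limited
  addressPlans:
  - standard-small-queue
  resourceLimits:
    broker: 3.0
    router: 2.0
    aggregate: 4.0
EOF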
D.1.3.23. io.enmasse.user.v1beta1.MessagingUser
Name | Schema |
---|---|
apiVersion | enum (user.enmasse.io/v1beta1) |
kind | enum (MessagingUser) |
metadata | |
spec |
D.1.3.24. io.enmasse.user.v1beta1.MessagingUserList
Name | Schema |
---|---|
apiVersion | enum (user.enmasse.io/v1beta1) |
items | < io.enmasse.user.v1beta1.MessagingUser > array |
kind | enum (MessagingUserList) |
D.1.3.25. io.enmasse.user.v1beta1.UserSpec
Name | Schema |
---|---|
authentication | |
authorization | < authorization > array |
username | string |
authentication
Name | Description | Schema |
---|---|---|
federatedUserid | User id of the user to federate when 'federated' type is specified. | string |
federatedUsername | User name of the user to federate when 'federated' type is specified. | string |
password | Base64 encoded value of password when 'password' type is specified. | string |
provider | Name of provider to use for federated identity when 'federated' type is specified. | string |
type | | enum (password, serviceaccount) |
authorization
Name | Schema |
---|---|
addresses | < string > array |
operations | < enum (send, recv, view, manage) > array |
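Putting the UserSpec fields together, a sketch of a MessagingUser with password authentication and send/receive authorization on one address (all names and the Base64 password value are illustrative):

oc apply -n myproject -f - <<EOF
apiVersion: user.enmasse.io/v1beta1
kind: MessagingUser
metadata:
  name: myspace.user1
spec:
  username: user1
  authentication:
    type: password
    password: cGFzc3dvcmQ=
  authorization:
  - addresses:
    - myqueue
    operations:
    - send
    - recv
EOF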
D.1.3.26. io.enmasse.v1beta1.Address
Name | Schema |
---|---|
apiVersion | enum (enmasse.io/v1beta1) |
kind | enum (Address) |
metadata | |
spec | |
status |
D.1.3.27. io.enmasse.v1beta1.AddressList
Name | Description | Schema |
---|---|---|
apiVersion | Default : enmasse.io/v1beta1 | enum (enmasse.io/v1beta1) |
items | | < io.enmasse.v1beta1.Address > array |
kind | | enum (AddressList) |
D.1.3.28. io.enmasse.v1beta1.AddressSpace
Name | Schema |
---|---|
apiVersion | enum (enmasse.io/v1beta1) |
kind | enum (AddressSpace) |
metadata | |
spec | |
status |
D.1.3.29. io.enmasse.v1beta1.AddressSpaceList
Name | Description | Schema |
---|---|---|
apiVersion | Default : enmasse.io/v1beta1 | enum (enmasse.io/v1beta1) |
items | | < io.enmasse.v1beta1.AddressSpace > array |
kind | | enum (AddressSpaceList) |
D.1.3.30. io.enmasse.v1beta1.AddressSpaceSpec
Name | Description | Schema |
---|---|---|
authenticationService | | |
connectors | List of connectors to create. | |
endpoints | | < endpoints > array |
networkPolicy | | |
plan | | string |
type | | |
authenticationService
Name | Schema |
---|---|
name | string |
overrides | |
type | string |
overrides
Name | Schema |
---|---|
host | string |
port | integer |
realm | string |
endpoints
Name | Schema |
---|---|
cert | |
exports | < exports > array |
expose | |
name | string |
service | string |
cert
Name | Schema |
---|---|
provider | string |
secretName | string |
tlsCert | string |
tlsKey | string |
exports
Name | Schema |
---|---|
kind | enum (ConfigMap, Secret, Service) |
name | string |
expose
Name | Schema |
---|---|
annotations | object |
loadBalancerPorts | < string > array |
loadBalancerSourceRanges | < string > array |
routeHost | string |
routeServicePort | string |
routeTlsTermination | string |
type | enum (route, loadbalancer) |
networkPolicy
Name | Schema |
---|---|
egress | |
ingress |
D.1.3.31. io.enmasse.v1beta1.AddressSpaceSpecConnector
Name | Description | Schema |
---|---|---|
addresses | Addresses to make accessible via this address space. | < addresses > array |
credentials | Credentials used when connecting to endpoints. Either 'username' and 'password', or 'secret' must be defined. | |
endpointHosts | List of hosts that should be connected to. Must contain at least 1 entry. | < endpointHosts > array |
name | Name of the connector. | string |
tls | TLS settings for the connectors. If not specified, TLS will not be used. |
addresses
Name | Description | Schema |
---|---|---|
name | Identifier of address pattern. Used to uniquely identify a pattern. | string |
pattern | Pattern used to match addresses. The pattern will be prefixed by the connector name and a forward slash ('myconnector/'). A pattern consists of one or more tokens separated by a forward slash /. A token can be one of the following: a * character, a # character, or a sequence of characters that do not include /, *, or #. The * token matches any single token. The # token matches zero or more tokens. * has higher precedence than #, and exact match has the highest precedence. For example, 'prices/#' matches 'myconnector/prices' and 'myconnector/prices/eu/today', while 'prices/*' matches only addresses with exactly one token after 'prices', such as 'myconnector/prices/eu'. | string |
credentials
Name | Description | Schema |
---|---|---|
password | Password to use for connector. Either 'value' or 'secret' must be specified. | |
username | Username to use for connector. Either 'value' or 'secret' must be specified. |
password
Name | Schema |
---|---|
value | string |
valueFromSecret |
valueFromSecret
Name | Description | Schema |
---|---|---|
key | Key to use for looking up password entry. | string |
name | Name of Secret containing password. | string |
username
Name | Schema |
---|---|
value | string |
valueFromSecret |
valueFromSecret
Name | Description | Schema |
---|---|---|
key | Key to use for looking up username entry. | string |
name | Name of Secret containing username. | string |
endpointHosts
Name | Description | Schema |
---|---|---|
host | Host to connect to. | string |
port | Port to connect to. | integer |
tls
Name | Description | Schema |
---|---|---|
caCert | CA certificate to be used by the connector. Either 'value' or 'secret'. | |
clientCert | Client certificate to be used by the connector. Either 'value' or 'secret'. |
caCert
Name | Description | Schema |
---|---|---|
value | PEM encoded value of CA certificate | string |
valueFromSecret | Secret containing CA certificate to be used by the connector. |
valueFromSecret
Name | Description | Schema |
---|---|---|
key | Key to use for looking up CA certificate entry. | string |
name | Name of Secret containing CA certificate. | string |
clientCert
Name | Description | Schema |
---|---|---|
value | PEM encoded value of client certificate | string |
valueFromSecret | Secret containing client certificate to be used by the connector. |
valueFromSecret
Name | Description | Schema |
---|---|---|
key | Key to use for looking up client certificate entry. | string |
name | Name of Secret containing client certificate. | string |
D.1.3.32. io.enmasse.v1beta1.AddressSpaceStatus
Name | Description | Schema |
---|---|---|
connectors | List of connectors with status. | |
endpointStatuses | | < endpointStatuses > array |
isReady | | boolean |
messages | | < string > array |
endpointStatuses
Name | Schema |
---|---|
cert | string |
externalHost | string |
externalPorts | < externalPorts > array |
name | string |
serviceHost | string |
servicePorts | < servicePorts > array |
externalPorts
Name | Schema |
---|---|
name | string |
port | integer |
servicePorts
Name | Schema |
---|---|
name | string |
port | integer |
D.1.3.33. io.enmasse.v1beta1.AddressSpaceStatusConnector
Name | Description | Schema |
---|---|---|
isReady | 'true' if connector is operating as expected, 'false' if not. | boolean |
messages | Messages describing the connector state. | < string > array |
name | Name of connector. | string |
D.1.3.34. io.enmasse.v1beta1.AddressSpaceType
AddressSpaceType is the type of address space (standard, brokered). Each type supports different types of addresses and semantics for those types.
Type : enum (standard, brokered)
D.1.3.35. io.enmasse.v1beta1.AddressSpec
Name | Description | Schema |
---|---|---|
address | | string |
forwarders | List of forwarders to enable for this address. | < io.enmasse.v1beta1.AddressSpecForwarder > array |
plan | | string |
type | | |
D.1.3.36. io.enmasse.v1beta1.AddressSpecForwarder
Name | Description | Schema |
---|---|---|
direction | Direction of forwarder. 'in' means pulling from 'remoteAddress' into this address. 'out' means pushing from this address to 'remoteAddress'. | enum (in, out) |
name | Name of forwarder. | string |
remoteAddress | Remote address to send/receive messages to. | string |
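As a sketch of how the forwarder fields combine with AddressSpec (all names are illustrative; 'remote1' is assumed to be a connector defined on the address space, so the remote address takes the 'connector/address' form described earlier):

oc apply -n myproject -f - <<EOF
apiVersion: enmasse.io/v1beta1
kind: Address
metadata:
  name: myspace.myqueue
spec:
  address: myqueue
  type: queue
  plan: standard-small-queue
  forwarders:
  - name: to-remote
    remoteAddress: remote1/prices
    direction: out
EOF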
D.1.3.37. io.enmasse.v1beta1.AddressStatus
Name | Description | Schema |
---|---|---|
forwarders | List of forwarders with status. | |
isReady | | boolean |
messages | | < string > array |
phase | | enum (Pending, Configuring, Active, Failed, Terminating) |
D.1.3.38. io.enmasse.v1beta1.AddressStatusForwarder
Name | Description | Schema |
---|---|---|
isReady | 'true' if forwarder is operating as expected, 'false' if not. | boolean |
messages | Messages describing the forwarder state. | < string > array |
name | Name of forwarder. | string |
D.1.3.39. io.enmasse.v1beta1.AddressType
Type of address (queue, topic, …). Each address type supports different kinds of messaging semantics.
Type : enum (queue, topic, anycast, multicast)
D.1.3.40. io.k8s.api.networking.v1.IPBlock
IPBlock describes a particular CIDR (for example, "192.168.1.1/24") that is allowed to the pods matched by a NetworkPolicySpec’s podSelector. The except entry describes CIDRs that should not be included within this rule.
Name | Description | Schema |
---|---|---|
cidr | CIDR is a string representing the IP Block. Valid examples are "192.168.1.1/24". | string |
except | Except is a slice of CIDRs that should not be included within an IP Block. Valid examples are "192.168.1.1/24". Except values will be rejected if they are outside the CIDR range. | < string > array |
D.1.3.41. io.k8s.api.networking.v1.NetworkPolicyEgressRule
NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec’s podSelector. The traffic must match both ports and to. This type is beta-level in 1.8.
Name | Description | Schema |
---|---|---|
ports | List of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. | |
to | List of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. |
D.1.3.42. io.k8s.api.networking.v1.NetworkPolicyIngressRule
NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec’s podSelector. The traffic must match both ports and from.
Name | Description | Schema |
---|---|---|
from | List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. |
ports | List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. |
D.1.3.43. io.k8s.api.networking.v1.NetworkPolicyPeer
NetworkPolicyPeer describes a peer to allow traffic from. Only certain combinations of fields are allowed.
Name | Description | Schema |
---|---|---|
ipBlock | IPBlock defines policy on a particular IPBlock. If this field is set then neither of the other fields can be. | |
namespaceSelector | Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. | |
podSelector | This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy’s own Namespace. |
D.1.3.44. io.k8s.api.networking.v1.NetworkPolicyPort
NetworkPolicyPort describes a port to allow traffic on.
Name | Description | Schema |
---|---|---|
port | The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. | |
protocol | The protocol (TCP or UDP) which traffic must match. If not specified, this field defaults to TCP. | string |
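A sketch combining the ingress rule, peer, port, and IP block types above inside an infrastructure configuration's networkPolicy (the CIDRs, port, and resource name are illustrative; amq-online-infra is assumed to be the infrastructure namespace):

oc apply -n amq-online-infra -f - <<EOF
apiVersion: admin.enmasse.io/v1beta1
kind: StandardInfraConfig
metadata:
  name: standard-restricted
spec:
  networkPolicy:
    ingress:
    - from:
      - ipBlock:
          cidr: 192.168.0.0/16
          except:
          - 192.168.1.0/24
      ports:
      - protocol: TCP
        port: 5671
EOF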
D.1.3.45. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector
A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
Name | Description | Schema |
---|---|---|
matchExpressions | matchExpressions is a list of label selector requirements. The requirements are ANDed. | < io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelectorRequirement > array |
matchLabels | matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. | < string, string > map |
D.1.3.46. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelectorRequirement
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
Name | Description | Schema |
---|---|---|
key | key is the label key that the selector applies to. | string |
operator | operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. | string |
values | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. | < string > array |
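For example, the matchLabels map {app: demo} is equivalent to the matchExpressions requirement {key: app, operator: In, values: [demo]}. On the command line, the same selector can be passed when listing resources (resource and label names are illustrative):

oc get addressspaces -n myproject -l app=demo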
D.1.3.47. io.k8s.apimachinery.pkg.util.intstr.IntOrString
IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number.
Type : string (int-or-string)
Appendix E. Using your subscription
AMQ Online is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Accessing your account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
Activating a subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
Downloading zip and tar files
To access zip or tar files, use the Red Hat Customer Portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Red Hat AMQ Online entries in the JBOSS INTEGRATION AND AUTOMATION category.
- Select the desired AMQ Online product. The Software Downloads page opens.
- Click the Download link for your component.
Registering your system for packages
To install RPM packages on Red Hat Enterprise Linux, your system must be registered. If you are using zip or tar files, this step is not required.
- Go to access.redhat.com.
- Navigate to Registration Assistant.
- Select your OS version and continue to the next page.
- Use the listed command in your system terminal to complete the registration.
To learn more, see How to Register and Subscribe a System to the Red Hat Customer Portal.
Revised on 2020-11-16 19:15:17 UTC