Evaluating AMQ Online on OpenShift

Red Hat AMQ 7.7

For use with AMQ Online 1.5

Abstract

This guide describes how to install and manage AMQ Online to evaluate its potential use in a production environment.

Chapter 1. Introduction

1.1. AMQ Online overview

Red Hat AMQ Online is an OpenShift-based mechanism for delivering messaging as a managed service. With Red Hat AMQ Online, administrators can configure a cloud-native, multi-tenant messaging service either in the cloud or on premises. Developers can provision messaging using the Red Hat AMQ Console. Multiple development teams can provision the brokers and queues from the Console, without requiring each team to install, configure, deploy, maintain, or patch any software.

AMQ Online can provision different types of messaging depending on your use case. A user can request messaging resources by creating an address space. AMQ Online currently supports two address space types, standard and brokered, each with different semantics. The following diagrams illustrate the high-level architecture of each address space type:

Figure 1.1. Standard address space

Standard address space

Figure 1.2. Brokered address space

Brokered address space
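
As a brief illustration, a user requests messaging resources by creating an AddressSpace resource such as the following (the full procedure is described in Chapter 2):

apiVersion: enmasse.io/v1beta1
kind: AddressSpace
metadata:
  name: myspace
spec:
  type: standard
  plan: standard-unlimited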

1.2. Supported features

The following table shows the supported features for AMQ Online 1.5:

Table 1.1. Supported features reference table

Feature                                                   Brokered address space   Standard address space

Address type
  Queue                                                   Yes                      Yes
  Topic                                                   Yes                      Yes
  Multicast                                               No                       Yes
  Anycast                                                 No                       Yes
  Subscription                                            No                       Yes

Messaging protocol
  AMQP                                                    Yes                      Yes
  MQTT                                                    Yes                      Technology preview only
  CORE                                                    Yes                      No
  OpenWire                                                Yes                      No
  STOMP                                                   Yes                      No

Transports
  TCP                                                     Yes                      Yes
  WebSocket                                               Yes                      Yes

Durable subscriptions
  JMS durable subscriptions                               Yes                      No
  "Named" durable subscriptions                           No                       Yes

JMS
  Transaction support                                     Yes                      No
  Selectors on queues                                     Yes                      No
  Message ordering guarantees (including prioritization)  Yes                      No

Scalability
  Scalable distributed queues and topics                  No                       Yes

1.3. Supported configurations

For more information about AMQ Online supported configurations, see Red Hat AMQ 7 Supported Configurations.

1.4. Document conventions

1.4.1. Variable text

This document contains code blocks with variables that you must replace with values specific to your installation. In this document, such text is styled as italic monospace.

For example, in the following code block, replace my-namespace with the namespace used in your installation:

sed -i 's/amq-online-infra/my-namespace/' install/bundles/enmasse-with-standard-authservice/*.yaml

Chapter 2. Getting started

This guide describes the process of setting up AMQ Online on OpenShift with clients for sending and receiving messages to evaluate its potential use in a production environment.

Prerequisites

  • To install AMQ Online, the OpenShift Container Platform command-line interface (CLI) is required.

  • An OpenShift cluster is required.
  • A user on the OpenShift cluster with cluster-admin permissions is required to set up the required cluster roles and API services.

2.1. Installing AMQ Online using a YAML bundle

After completing the download and installation procedures described in the following sections, you can create the infrastructure configurations, address spaces, addresses, and users needed to evaluate AMQ Online.

2.1.1. Downloading AMQ Online

Procedure

  1. Download the AMQ Online installation files from the Red Hat Customer Portal and extract the archive. The extracted install directory is used in subsequent steps.

Note

Although container images for AMQ Online are available in the Red Hat Container Catalog, we recommend that you use the YAML files provided instead.

2.1.2. Installing AMQ Online using a YAML bundle

The simplest way to install AMQ Online is to use the predefined YAML bundles.

Procedure

  1. Log in as a user with cluster-admin privileges:

    oc login -u system:admin
  2. (Optional) If you want to deploy to a project other than amq-online-infra, you must run the following command and substitute your project name for amq-online-infra in subsequent steps:

    sed -i 's/amq-online-infra/my-project/' install/bundles/amq-online/*.yaml
  3. Create the project where you want to deploy AMQ Online:

    oc new-project amq-online-infra
  4. Change the directory to the location of the downloaded release files.
  5. Deploy using the amq-online bundle:

    oc apply -f install/bundles/amq-online
  6. (Optional) Install the example plans and infrastructure configuration:

    oc apply -f install/components/example-plans
  7. (Optional) Install the example roles:

    oc apply -f install/components/example-roles
  8. (Optional) Install the standard authentication service:

    oc apply -f install/components/example-authservices/standard-authservice.yaml
  9. (Optional) Install the Service Catalog integration:

    oc apply -f install/components/service-broker
    oc apply -f install/components/cluster-service-broker

2.2. Installing and configuring AMQ Online using the Operator Lifecycle Manager

You can use the Operator Lifecycle Manager to install and configure an evaluation instance of AMQ Online.

In OpenShift Container Platform 4.1, the Operator Lifecycle Manager (OLM) helps users install, update, and manage the life cycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way.

The OLM runs by default in OpenShift Container Platform 4.1, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

OperatorHub is the graphical interface that OpenShift cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments.

2.2.1. Installing AMQ Online from the OperatorHub using the OpenShift Container Platform console

You can install the AMQ Online Operator on an OpenShift Container Platform 4.1 cluster by using OperatorHub in the OpenShift Container Platform console.

Note

You must install and deploy the AMQ Online Operator in the openshift-operators project.

Prerequisites

  • Access to an OpenShift Container Platform 4.1 cluster using an account with cluster-admin permissions.

Procedure

  1. In the OpenShift Container Platform console, log in using an account with cluster-admin privileges.
  2. Click Operators > OperatorHub.
  3. In the Filter by keyword box, type AMQ Online to find the AMQ Online Operator.
  4. Click the AMQ Online Operator. Information about the Operator is displayed.
  5. Read the information about the Operator and click Install. The Create Operator Subscription page opens.
  6. On the Create Operator Subscription page, accept all of the default selections and click Subscribe.

    Note

    Selecting All namespaces on the cluster (default) installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster.

    The amq-online page is displayed, where you can monitor the installation progress of the AMQ Online Operator subscription.

  7. After the subscription upgrade status is shown as Up to date, click Operators > Installed Operators to verify that the AMQ Online ClusterServiceVersion (CSV) is displayed and its Status ultimately resolves to InstallSucceeded in the openshift-operators project.

    Note

    For the All namespaces on the cluster (default) installation mode, the status resolves to InstallSucceeded in the openshift-operators project, but the status is Copied if you view other projects.

    For troubleshooting information, see the OpenShift Container Platform documentation.

2.2.2. Configuring AMQ Online using the OpenShift Container Platform console

After installing AMQ Online from the OperatorHub using the OpenShift Container Platform console, create a new instance of a custom resource for each of the following items within the amq-online-infra project:

  • service infrastructure configuration for an address space type (the example uses the standard address space type)
  • an authentication service
  • an address space plan
  • an address plan

After creating the new instances of the custom resources, you can create address spaces and addresses as described later in this chapter.

The following procedures use the example data that is provided when using the OpenShift Container Platform console.

2.2.2.1. Creating an infrastructure configuration custom resource using the OpenShift Container Platform console

You must create an infrastructure configuration custom resource to use AMQ Online. This example uses StandardInfraConfig for a standard address space.

Procedure

  1. In the top right, click the Plus icon (+). The Import YAML window opens.
  2. From the top left drop-down menu, select the amq-online-infra project.
  3. Copy the following code:

    apiVersion: admin.enmasse.io/v1beta1
    kind: StandardInfraConfig
    metadata:
      name: default
  4. In the Import YAML window, paste the copied code and click Create. The StandardInfraConfig overview page is displayed.
  5. Click Operators > Installed Operators.
  6. Click the AMQ Online Operator and click the Standard Infra Config tab to verify that its Status displays as Active.

2.2.2.2. Creating an authentication service custom resource using the OpenShift Container Platform console

You must create a custom resource for an authentication service to use AMQ Online. This example uses the standard authentication service.

Procedure

  1. In the top right, click the Plus icon (+). The Import YAML window opens.
  2. From the top left drop-down menu, select the amq-online-infra project.
  3. Copy the following code:

    apiVersion: admin.enmasse.io/v1beta1
    kind: AuthenticationService
    metadata:
      name: standard-authservice
    spec:
      type: standard
  4. In the Import YAML window, paste the copied code and click Create. The AuthenticationService overview page is displayed.
  5. Click Workloads > Pods. In the Readiness column, the Pod status is Ready when the custom resource has been deployed.

2.2.2.3. Creating an address space plan custom resource using the OpenShift Container Platform console

You must create an address space plan custom resource to use AMQ Online. This procedure uses the example data that is provided when using the OpenShift Container Platform console.

Procedure

  1. In the top right, click the Plus icon (+). The Import YAML window opens.
  2. From the top left drop-down menu, select the amq-online-infra project.
  3. Copy the following code:

    apiVersion: admin.enmasse.io/v1beta2
    kind: AddressSpacePlan
    metadata:
      name: standard-small
    spec:
      addressSpaceType: standard
      infraConfigRef: default
      addressPlans:
        - standard-small-queue
      resourceLimits:
        router: 2.0
        broker: 3.0
        aggregate: 4.0
  4. In the Import YAML window, paste the copied code and click Create. The AddressSpacePlan overview page is displayed.
  5. Click Operators > Installed Operators.
  6. Click the AMQ Online Operator and click the Address Space Plan tab to verify that its Status displays as Active.

2.2.2.4. Creating an address plan custom resource using the OpenShift Container Platform console

You must create an address plan custom resource to use AMQ Online. This procedure uses the example data that is provided when using the OpenShift Container Platform console.

Procedure

  1. In the top right, click the Plus icon (+). The Import YAML window opens.
  2. From the top left drop-down menu, select the amq-online-infra project.
  3. Copy the following code:

    apiVersion: admin.enmasse.io/v1beta2
    kind: AddressPlan
    metadata:
      name: standard-small-queue
    spec:
      addressType: queue
      resources:
        router: 0.01
        broker: 0.1
  4. In the Import YAML window, paste the copied code and click Create. The AddressPlan overview page is displayed.
  5. Click Operators > Installed Operators.
  6. Click the AMQ Online Operator and click the Address Plan tab to verify that its Status displays as Active.

2.3. Creating address spaces using the command line

In AMQ Online, you create address spaces using standard command-line tools.

Procedure

  1. Log in as a messaging tenant:

    oc login -u developer
  2. Create the project for the messaging application:

    oc new-project myapp
  3. Create an address space definition and save it to a file (this example uses standard-address-space.yaml):

    apiVersion: enmasse.io/v1beta1
    kind: AddressSpace
    metadata:
      name: myspace
    spec:
      type: standard
      plan: standard-unlimited
  4. Create the address space:

    oc create -f standard-address-space.yaml
  5. Check the status of the address space:

    oc get addressspace myspace -o jsonpath={.status.isReady}

    The address space is ready for use when the previous command outputs true.
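
If you are scripting this step, you can poll until the command reports true. The following is a minimal shell sketch based on the command above:

# Wait until the address space reports isReady=true
until [ "$(oc get addressspace myspace -o jsonpath={.status.isReady})" = "true" ]; do
  sleep 5
done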

2.4. Creating addresses using the command line

You can create addresses using the command line.

Procedure

  1. Create an address definition and save it to a file (this example uses standard-small-queue.yaml):

    apiVersion: enmasse.io/v1beta1
    kind: Address
    metadata:
        name: myspace.myqueue
    spec:
        address: myqueue
        type: queue
        plan: standard-small-queue
    Note

    Prefixing the name with the address space name is required to ensure addresses from different address spaces do not collide.

  2. Create the address:

    oc create -f standard-small-queue.yaml
  3. List the addresses:

    oc get addresses -o yaml
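
You can also check whether the new address is ready before using it. This is a hedged sketch that assumes the Address resource exposes the same status.isReady field as the address space:

oc get address myspace.myqueue -o jsonpath={.status.isReady}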

2.5. Creating users using the command line

In AMQ Online, users can be created using standard command-line tools.

Procedure

  1. To correctly base64 encode a password for the user definition file, run the following command:

    echo -n password | base64 #cGFzc3dvcmQ=
    Note

    Be sure to use the -n parameter when running this command. Not specifying that parameter results in an improperly encoded password and causes login issues.

  2. Save the user definition to a file (this example uses user-example1.yaml):

    apiVersion: user.enmasse.io/v1beta1
    kind: MessagingUser
    metadata:
      name: myspace.user1
    spec:
      username: user1
      authentication:
        type: password
        password: cGFzc3dvcmQ= # Base64 encoded
      authorization:
        - addresses: ["myqueue", "queue1", "queue2", "topic*"]
          operations: ["send", "recv"]
        - addresses: ["anycast1"]
          operations: ["send"]
  3. Create the user and associated user permissions:

    oc create -f user-example1.yaml
  4. Confirm that the user was created:

    oc get messagingusers

2.6. Sending and receiving messages

Prerequisites

  • The Apache Qpid Proton Python bindings must be installed.
  • An address space named myspace must be created.
  • An address named myqueue must be created.
  • A user named user1 with password password must be created.

Procedure

  1. Save the Python client example to a file (this example uses client-example1.py):

    from __future__ import print_function, unicode_literals
    import optparse
    from proton import Message
    from proton.handlers import MessagingHandler
    from proton.reactor import Container
    
    class HelloWorld(MessagingHandler):
        def __init__(self, url):
            super(HelloWorld, self).__init__()
            self.url = url
    
        def on_start(self, event):
            # Create a receiver and a sender on the same address
            event.container.create_receiver(self.url)
            event.container.create_sender(self.url)
    
        def on_sendable(self, event):
            # Send a single message and close the sender
            event.sender.send(Message(body="Hello World!"))
            event.sender.close()
    
        def on_message(self, event):
            # Print the received message body and close the connection
            print("Received: " + event.message.body)
            event.connection.close()
    
    parser = optparse.OptionParser(usage="usage: %prog [options]")
    parser.add_option("-u", "--url", default="amqps://localhost:5672/myqueue",
                      help="url to use for sending and receiving messages")
    opts, args = parser.parse_args()
    
    try:
        Container(HelloWorld(opts.url)).run()
    except KeyboardInterrupt: pass
  2. Retrieve the address space messaging endpoint host name:

    oc get addressspace myspace -o 'jsonpath={.status.endpointStatuses[?(@.name=="messaging")].externalHost}'

    Use the output as the host name in the following step.

  3. Run the client:

    python client-example1.py -u amqps://user1:password@messaging.example1.com:443/myqueue

Chapter 3. Internet of Things (IoT) on AMQ Online

Important

The Internet of Things (IoT) feature of AMQ Online is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see Technology Preview Features Support Scope.

3.1. Getting started using IoT

The following information describes how to set up and manage AMQ Online IoT features.

3.1.1. IoT connectivity concepts

The Internet of Things (IoT) connectivity feature enables AMQ Online to be used for managing and connecting devices with back-end applications. In a typical IoT application, devices have different requirements than ordinary messaging applications. Instead of using arbitrary addresses and security configurations that are typically available, developers can use the IoT services to handle device identities and security configurations explicitly, support multiple protocols often used in the IoT space, and provide uniform support for expected device communication patterns.

IoT connectivity

One of the key concepts is a device registry, which developers use to register devices and provide their credentials. With these credentials, devices can then connect to protocol adapters using one of the supported protocols: HTTP, MQTT, LoRaWAN, and SigFox. Once connected, devices can send and receive messages from back-end applications using one of the following messaging semantics:

  • Telemetry: Allows devices to send non-durable data to back-end applications, so messages are sent using the multicast address type. This option is best for sending non-critical sensor readings.
  • Events: Allows devices to send durable data to the back-end applications, so messages are sent using the queue address type. This option is best for sending more important device data such as alerts and notifications.

Back-end applications can also send Command messages to devices. Commands can be used to trigger actions on devices. Examples include updating a configuration property, installing a software component, or switching the state of an actuator.
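
For example, once a device has credentials registered (as shown later in this chapter), it can publish an event over HTTP in the same way telemetry is sent, by using the HTTP protocol adapter's /event endpoint instead of /telemetry. This is a hedged sketch that reuses the example tenant, credentials, and route from the procedures that follow:

curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"alert": "over-temperature"}' https://$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')/event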

3.1.2. Installing AMQ Online using a YAML bundle

The simplest way to install AMQ Online is to use the predefined YAML bundles.

Procedure

  1. Log in as a user with cluster-admin privileges:

    oc login -u system:admin
  2. (Optional) If you want to deploy to a project other than amq-online-infra, you must run the following command and substitute your project name for amq-online-infra in subsequent steps:

    sed -i 's/amq-online-infra/my-project/' install/bundles/amq-online/*.yaml
  3. Create the project where you want to deploy AMQ Online:

    oc new-project amq-online-infra
  4. Change the directory to the location of the downloaded release files.
  5. Deploy using the amq-online bundle:

    oc apply -f install/bundles/amq-online
  6. (Optional) Install the example plans and infrastructure configuration:

    oc apply -f install/components/example-plans
  7. (Optional) Install the example roles:

    oc apply -f install/components/example-roles
  8. (Optional) Install the standard authentication service:

    oc apply -f install/components/example-authservices/standard-authservice.yaml
  9. (Optional) Install the Service Catalog integration:

    oc apply -f install/components/service-broker
    oc apply -f install/components/cluster-service-broker

3.1.3. Installing IoT services

To get started using the IoT feature on AMQ Online, you must first install the IoT services.

Procedure

  1. (Optional) If you want to deploy to a project other than amq-online-infra, you must run the following command and substitute your project name for amq-online-infra in subsequent steps:

    sed -i 's/amq-online-infra/my-project/' install/preview-bundles/iot/*.yaml
  2. Deploy the IoT bundles:

    oc apply -f install/preview-bundles/iot
  3. Create certificates for the MQTT protocol adapter. For testing purposes, you can create a self-signed certificate:

    ./install/components/iot/examples/k8s-tls/create
    oc create secret tls iot-mqtt-adapter-tls --key=install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-key.pem --cert=install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-fullchain.pem

    You can override the namespace to which the deploy script installs the keys and certificates by setting the environment variable NAMESPACE when calling the script. For example:

    NAMESPACE=my-namespace ./install/components/iot/examples/k8s-tls/deploy
    Note

    If your cluster is not running on localhost, you need to specify the cluster host name when creating certificates to allow external clients (like MQTT) to properly connect to the appropriate services. For example:

    CLUSTER=x.x.x.x.nip.io install/components/iot/examples/k8s-tls/create
  4. (Optional) Install the PostgreSQL server and create database:

    oc apply -f install/components/iot/examples/postgresql/deploy

    You can skip this step if you already have a PostgreSQL instance and have created a database and a user with access to it.

  5. Apply database schema:

    You need to execute the following SQL files on the database instance you created. Depending on your setup, this might require database admin privileges:

    • install/components/iot/examples/postgresql/create.sql
    • install/components/iot/examples/postgresql/create.devcon.sql

    You can execute the SQL files using the psql command connected to your database. The following example shows how to execute psql from inside the container when PostgreSQL is installed as described in the previous step:

    oc exec -ti deployment/postgresql -- bash -c "PGPASSWORD=user12 psql device-registry registry" < install/components/iot/examples/postgresql/create.sql
    oc exec -ti deployment/postgresql -- bash -c "PGPASSWORD=user12 psql device-registry registry" < install/components/iot/examples/postgresql/create.devcon.sql
  6. Install an example IoT infrastructure configuration:

    oc apply -f install/components/iot/examples/iot-config.yaml
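
After applying the bundles and the example configuration, you can verify that the IoT infrastructure starts up. A simple check, assuming the amq-online-infra project, is to list the IoTConfig resource and the infrastructure pods:

oc get iotconfig -n amq-online-infra
oc get pods -n amq-online-infra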

3.1.4. Creating an IoT project

After installing the IoT services, you create an IoT project.

Procedure

  1. Log in as a messaging tenant:

    oc login -u developer
  2. Create a managed IoT project:

    oc new-project myapp
    oc create -f install/components/iot/examples/iot-project-managed.yaml
  3. Wait for the resources to be ready:

    oc get addressspace iot
    oc get iotproject iot
    Note

    Make sure that the Phase field shows Ready status for both resources.

  4. Create a messaging consumer user:

    oc create -f install/components/iot/examples/iot-user.yaml

3.1.5. Creating an IoT device

After installing the IoT services and creating an IoT project, you can create an IoT device for the device you want to monitor.

3.1.5.1. Registering a new device

To create a new device, you must first register the device.

Procedure

  1. Export the device registry host:

    export REGISTRY_HOST=$(oc -n amq-online-infra get routes device-registry --template='{{ .spec.host }}')
  2. Export the device registry access token:

    export TOKEN=$(oc whoami --show-token)

    This token is used to authenticate against the device registry management API.

  3. Register a device with a defined ID (this example uses 4711):

    curl --insecure -X POST -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/devices/myapp.iot/4711
  4. (Optional) If you need to provide additional registration information, do so as follows:

    curl --insecure -X POST -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data-binary '{
    	"via": ["gateway1"]
    }' https://$REGISTRY_HOST/v1/devices/myapp.iot/4711
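
To confirm that the registration was stored, you can read the device back from the registry management API. This sketch assumes the same tenant and device ID as in the previous steps:

curl --insecure -i -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/devices/myapp.iot/4711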

3.1.5.2. Setting user name and password credentials for a device

After registering a new device, you must set the user name and password credentials for the device.

Procedure

  1. Add the credentials for a device:

    curl --insecure -X PUT -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data-binary '[{
    	"type": "hashed-password",
    	"auth-id": "sensor1",
    	"secrets": [{
    		"pwd-plain":"'hono-secret'"
    	}]
    }]' https://$REGISTRY_HOST/v1/credentials/myapp.iot/4711
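
You can read the credentials back from the credentials endpoint to confirm that they were stored. This sketch uses the same tenant and device ID:

curl --insecure -i -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/credentials/myapp.iot/4711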

3.1.6. Installing the Eclipse Hono command-line client

Procedure

  1. Download the Eclipse Hono command-line client.
  2. Obtain the messaging endpoint certificate:

    oc -n myapp get addressspace iot -o jsonpath={.status.caCert} | base64 --decode > tls.crt
  3. Export the messaging endpoint host and port:

    export MESSAGING_HOST=$(oc get -n myapp addressspace iot -o jsonpath='{.status.endpointStatuses[?(@.name=="messaging")].externalHost}')
    export MESSAGING_PORT=443

3.1.7. Starting the telemetry consumer

You can send telemetry data, such as sensor readings, from a device to the cloud. To do this, you must first start the telemetry consumer by running the consumer application.

Procedure

  1. Run the consumer application to receive telemetry:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --message.type=telemetry

3.1.8. Sending telemetry using HTTP

You can send telemetry from a device to the cloud using the HTTP protocol.

Procedure

  1. Send telemetry using the HTTP protocol:

    curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')/telemetry

3.2. IoT for service administrators

Service administrators typically install and configure the Internet of Things (IoT) services for AMQ Online.

3.2.1. IoT monitoring

Deploying AMQ Online monitoring also allows you to monitor the IoT-related components of AMQ Online. No additional configuration is required.

For more information about AMQ Online monitoring, see Monitoring AMQ Online.

For detailed information about the available metrics specific to IoT, see IoT-specific metrics.

3.2.2. IoT logging

By default, the IoT components of AMQ Online use a conservative logging configuration for the infrastructure services to preserve resources for normal operation.

AMQ Online also allows operation-level tracing using Jaeger, for more in-depth tracing scenarios.

3.2.2.1. Configuration options

For IoT components, it might be necessary to increase the logging output at the following application levels, listed in order from lowest priority to highest priority:

  • A default log level for all IoT components
  • A default configuration for specific log channels
  • A service specific default log level
  • A service specific configuration for specific log channels
  • A service specific custom log configuration file

All configuration is part of the IoTConfig instance and the AMQ Online Operator applies the configuration to the services. The only exception is the service specific log configuration file, which can either be provided through the IoTConfig resource or by creating an entry in the service specific ConfigMap resource.

3.2.2.2. Log levels

The following log levels are available, listed in order from most severe to least severe:

error
Error conditions. Indicates an unexpected condition, which may impact the stability of the system. Displays only error messages.
warn
Warning conditions. Indicates an expected condition, which may impact the current operation or stability of the system. Displays only warning and error messages.
info
Informational messages. Indicates an expected event, which may impact the current operation. Displays only informational, warning, and error messages.
debug
Displays debug messages, in addition to all of the above.
trace
Displays all messages.

3.2.3. IoT tracing

AMQ Online allows application-level tracing in the IoT infrastructure using Jaeger. This feature allows service administrators to gain insight into the inner workings of the IoT services to analyze system performance and issues.

By default, tracing support is not enabled and needs to be activated manually. For more information, see Configuring tracing.

For more information about Jaeger tracing, see: https://www.jaegertracing.io/.

3.2.4. Device registry

The IoT components of AMQ Online store all device related information in a service called "device registry". This makes the device registry an important component of the overall IoT functionality, so it might be necessary to tweak the configuration of the device registry.

Warning

Although the device registry storage backend can be configured in different ways, once the configuration has been made and IoT tenants have been created, the storage configuration must not be changed. Otherwise, this may result in loss of data, data inconsistencies, or other unexpected behavior.

The configuration of the device registry can be changed by editing the global IoTConfig custom resource object. Any changes made to this custom resource are applied by the AMQ Online Operator.

The database-backed device registry, named "JDBC", can be configured to use an existing, external database.

By default, only PostgreSQL is supported. However, it is possible to extend the installation by providing custom JDBC drivers and custom SQL statements in the configuration, which allows integration with databases other than PostgreSQL.

3.2.5. Installing IoT services

To get started using the IoT feature on AMQ Online, you must first install the IoT services.

Procedure

  1. (Optional) If you want to deploy to a project other than amq-online-infra, you must run the following command and substitute your project name for amq-online-infra in subsequent steps:

    sed -i 's/amq-online-infra/my-project/' install/preview-bundles/iot/*.yaml
  2. Deploy the IoT bundles:

    oc apply -f install/preview-bundles/iot
  3. Create certificates for the MQTT protocol adapter. For testing purposes, you can create a self-signed certificate:

    ./install/components/iot/examples/k8s-tls/create
    oc create secret tls iot-mqtt-adapter-tls --key=install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-key.pem --cert=install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-fullchain.pem

    You can override the namespace to which the deploy script installs the keys and certificates by setting the environment variable NAMESPACE when calling the script. For example:

    NAMESPACE=my-namespace ./install/components/iot/examples/k8s-tls/deploy
    Note

    If your cluster is not running on localhost, you need to specify the cluster host name when creating certificates to allow external clients (like MQTT) to properly connect to the appropriate services. For example:

    CLUSTER=x.x.x.x.nip.io install/components/iot/examples/k8s-tls/create
  4. (Optional) Install the PostgreSQL server and create database:

    oc apply -f install/components/iot/examples/postgresql/deploy

    You can skip this step if you already have a PostgreSQL instance and have created a database and a user with access to it.

  5. Apply database schema:

    You need to execute the following SQL files on the database instance you created. Depending on your setup, this might require database admin privileges:

    • install/components/iot/examples/postgresql/create.sql
    • install/components/iot/examples/postgresql/create.devcon.sql

    You can execute the SQL files using the psql command connected to your database. The following example shows how to execute psql from inside the container when PostgreSQL is installed as described in the previous step:

    oc exec -ti deployment/postgresql -- bash -c "PGPASSWORD=user12 psql device-registry registry" < install/components/iot/examples/postgresql/create.sql
    oc exec -ti deployment/postgresql -- bash -c "PGPASSWORD=user12 psql device-registry registry" < install/components/iot/examples/postgresql/create.devcon.sql
  6. Install an example IoT infrastructure configuration:

    oc apply -f install/components/iot/examples/iot-config.yaml

3.2.6. Deploying an external JDBC device registry

Using an external database requires you to create a database instance and create the required tables and indices.

Important

Although it should technically be possible to use any database that provides a JDBC driver and supports SQL, at the moment only PostgreSQL is supported by AMQ Online. Unless explicitly mentioned, this documentation assumes that you are using PostgreSQL. If you are not, you might need to adapt the provided commands and SQL statements.

To set up a JDBC-based device registry, you need to perform the steps described in the following sections.

3.2.6.1. Choose a data storage model

The JDBC based device registry supports the following data models:

  • Flat JSON
  • Hierarchical JSON
  • Plain tables

The JSON-based data models require no locking or foreign keys. However, they rely on PostgreSQL support for JSONB. The flat JSON model is more flexible when it comes to storing different types of credentials. The hierarchical JSON model performs better than the flat JSON model, but requires dedicated indices for each credential type to achieve this performance.

The plain table model does not require any JSON-specific database support, but it requires multiple tables linked with foreign keys and needs locking support when making changes. On the other hand, it has better read performance in most cases.

The default choice is the hierarchical JSON model.

Note

It is not possible to change the data model later without losing all data or performing a manual data migration.
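
The data model is selected as part of the external JDBC configuration described later in this chapter. The following is a hedged sketch only: the mode field and its values are assumptions and might differ in your version of AMQ Online; the connection details mirror the example in "Configuration for JDBC with external PostgreSQL":

kind: IoTConfig
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: default
spec:
  services:
    deviceRegistry:
      jdbc:
        server:
          external:
            mode: json_tree # assumed field name; possible values might include json_flat and table
            url: jdbc://postgresql.namespace.svc:5432/database-name
            username: app
            password: test12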

3.2.6.2. Create a database instance

First you need to create a database instance. It is recommended that you also create at least two users: one for administering the database and one for accessing the device registry-specific tables. In the following sections, the former user is assumed to be named "admin" and the latter "registry".
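
For PostgreSQL, this can be done with standard SQL. The following is a minimal sketch; the database name, user names, and passwords are examples only and should be replaced with values appropriate for your environment:

-- Create the two users and the device registry database
CREATE ROLE admin WITH LOGIN PASSWORD 'admin-secret';
CREATE ROLE registry WITH LOGIN PASSWORD 'registry-secret';
CREATE DATABASE "device-registry" OWNER admin;
GRANT CONNECT ON DATABASE "device-registry" TO registry;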

3.2.6.3. Deploy the SQL schema to the database instance

Prerequisites

  • Created a database instance
  • Have access credentials for the "admin" database user

Procedure

  1. Connect to your database instance using the admin user
  2. Review and deploy the SQL schema install/components/iot/examples/postgresql/create.sql

3.2.6.4. Configure the IoT infrastructure

To enable the external JDBC device registry implementation, you need to configure the .spec.services.deviceRegistry.jdbc.server.external section and provide the chosen data model, database connection information, and access credentials.

For an example, see Configuration for JDBC with external PostgreSQL.

3.2.7. Configuring logging

If the default logging settings are not sufficient, the following sections describe different methods for configuring the logging system.

3.2.7.1. Configuring global log levels

The global logging configuration is applied to all services that do not have any explicit logging configuration.

By default, the global log level is info.

Procedure

  1. Edit the IoTConfig instance named default:

    oc edit iotconfig default
  2. Configure the logging options, save and exit the editor:

    apiVersion: iot.enmasse.io/v1alpha1
    kind: IoTConfig
    metadata:
      namespace: enmasse-infra
      name: default
    spec:
      logging:
        level: info 1
        loggers: 2
          io.enmasse: debug 3
          io.netty: error 4
    1
    The default global log level. If omitted, info is used.
    2
    The section for log channel specific entries.
    3
    Lowers the filtering to debug level for messages in the channel io.enmasse.
    4
    Raises the filtering to error level for messages in the channel io.netty.
  3. The operator applies the logging configuration and re-deploys all required components.

In the example above:

  • An info message for the logger org.eclipse.hono would be logged because the logger does not match any explicit configuration and the global default is info.
  • An info message for the logger io.enmasse would be logged because the configuration for io.enmasse is debug and the info message is of higher severity.
  • A warn message for the logger io.netty would be dropped because the configuration for io.netty is set to only display error messages.

3.2.7.2. Configuring application-specific log levels

To override the global defaults, you can configure logging specifically for an IoT service.

Procedure

  1. Edit the IoTConfig instance named default:

    oc edit iotconfig default
  2. Configure the logging options, save and exit the editor:

    apiVersion: iot.enmasse.io/v1alpha1
    kind: IoTConfig
    metadata:
      namespace: enmasse-infra
      name: default
    spec:
      adapters:
        mqtt:
          containers:
            adapter:
              logback:
                level: info 1
                loggers: 2
                  io.enmasse: debug 3
                  io.netty: error 4
    1
    The application global log level. If omitted, the default global level is used.
    2
    The section for log channel specific entries. If omitted and the application global log level is also omitted, the default log channel configuration of the infrastructure is used. If the application global log level is set, it is considered an empty set, and no log channel specific configuration is applied.
    3
    Lowers the filtering to debug level for messages in the channel io.enmasse.
    4
    Raises the filtering to error level for messages in the channel io.netty.
  3. The operator applies the logging configuration and re-deploys all required components.

3.2.7.3. Applying a custom logback specific configuration

For containers running applications that use the Logback logging implementation, it is possible to provide a custom XML-based Logback configuration file. This overrides any other logging configuration in the system.

Warning

The logging configuration is not checked by AMQ Online. Providing an incorrect configuration may result in loss of performance or stability, or may lead to a total system failure.

For more information about configuring Logback, see http://logback.qos.ch/manual/configuration.html.

3.2.7.3.1. Using the IoTConfig resource

You can apply the configuration using the IoTConfig resource.

Procedure

  1. Edit the IoTConfig instance named default:

    oc edit iotconfig default
  2. Configure the logging options, save and exit the editor:

    apiVersion: iot.enmasse.io/v1alpha1
    kind: IoTConfig
    metadata:
      namespace: enmasse-infra
      name: default
    spec:
      adapters:
        mqtt:
          containers:
            adapter:
              logback:
                logback: | 1
                  <configuration>
                    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
                      <encoder>
                        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
                      </encoder>
                    </appender>
                    <root level="debug">
                      <appender-ref ref="STDOUT" />
                    </root>
                  </configuration>
    1
    The full XML-based logback configuration.
  3. The operator applies the logging configuration and re-deploys all required components.
3.2.7.3.2. Using the service’s ConfigMap resource

In addition to providing the custom configuration using the IoTConfig resource, it is possible to put the custom logging configuration into the service’s ConfigMap resource.

Procedure

  1. Edit the ConfigMap instance for the service. For example, iot-http-adapter-config for the HTTP protocol adapter.

    oc edit cm iot-http-adapter-config
  2. Add the XML-based logback configuration in the data section with the key logback-custom.xml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: enmasse-infra
      name: iot-http-adapter-config
    data:
      application.yaml: … 1
      logback-spring.xml: … 2
      logback-custom.xml: | 3
        <configuration>
          <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
              <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
            </encoder>
          </appender>
          <root level="debug">
            <appender-ref ref="STDOUT" />
          </root>
        </configuration>
    1
    The application specific configuration file. The operator generates this file and overwrites any changes.
    2
    The effective logback configuration, applied by the system. Do not change this, as it will be overwritten by the operator.
    3
    The full XML-based logback configuration.
  3. The operator detects changes on the ConfigMap resource, applies the logging configuration and re-deploys all required components.

3.2.8. Configuring tracing

Prerequisites

  • The IoT services are installed.
  • The "Jaeger Operator" from OperatorHub is installed in the AMQ Online namespace.
  • An instance of Jaeger is deployed using the operator in the AMQ Online namespace.
  • Know whether the Jaeger instance was deployed using the "sidecar" or "daemonset" agent.

Procedure

  1. Edit the IoTConfig instance named default:

    oc edit iotconfig default
  2. Modify the configuration according to your Jaeger agent configuration:

    1. If the Jaeger instance is deployed with the "sidecar" agent, add the following configuration:

      apiVersion: iot.enmasse.io/v1alpha1
      kind: IoTConfig
      metadata:
        namespace: amq-online-infra
        name: default
      spec:
        tracing:
          strategy: 1
            sidecar: {} 2
      1
      The field to select the strategy. Only one strategy must be configured.
      2
      Enables the "sidecar" strategy. The use of an empty object ({}) is intentional.
    2. If the Jaeger instance is deployed with the "daemonset" agent, add the following configuration:

      apiVersion: iot.enmasse.io/v1alpha1
      kind: IoTConfig
      metadata:
        namespace: amq-online-infra
        name: default
      spec:
        tracing:
          strategy: 1
            daemonset: {} 2
      1
      The field to select the strategy. Only one strategy must be configured.
      2
      Enables the "daemonset" strategy. The use of an empty object ({}) is intentional.
  3. Save and exit the editor.
  4. The operator applies the tracing configuration and re-deploys all required components.

3.2.9. IoT services configuration examples

3.2.9.1. Minimal IoT configuration example

This IoT configuration example shows only the required options to create an IoTConfig.

kind: IoTConfig
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: default
spec:
  services:
    deviceRegistry:
      infinispan:
        server:
          external: 1
            host: infinispan
            port: 11222
            username: app
            password: test12
            saslServerName: hotrod
            saslRealm: ApplicationRealm
  adapters:
    mqtt:
      endpoint:
        secretNameStrategy:
          secretName: iot-mqtt-adapter-tls
1
The Red Hat Data Grid service must be provided.

3.2.9.2. Tuning the IoT protocol adapters example

This IoT configuration example shows how the protocol adapters can be individually tuned.

kind: IoTConfig
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: default
spec:
  services:
    deviceRegistry:
      infinispan:
        server:
          external:
            host: infinispan
            port: 11222
            username: app
            password: test12
            saslServerName: hotrod
            saslRealm: ApplicationRealm
  adapters:
    mqtt:
      enabled: true 1
      replicas: 1
      options:
        tenantIdleTimeout: 30m 2
        maxPayloadSize: 2048
    http:
      enabled: true
      replicas: 1 3
      options:
        tenantIdleTimeout: 30m
        maxPayloadSize: 2048 4
      containers:
        adapter:
          resources: 5
            limits:
              memory: 128Mi
              cpu: 500m
    lorawan:
      enabled: false
    sigfox:
      enabled: false
1
Protocol adapters can be disabled if necessary. The default value is true.
2
Specifies the duration to keep alive the client connection.
4
Specifies the maximum allowed size of an incoming message in bytes.
3 5
Container resources and instances can be adjusted if necessary.

3.2.9.3. Configuration for JDBC with external PostgreSQL

kind: IoTConfig
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: default
spec:
  services:
    deviceRegistry:
      jdbc:
        server:
          external:
            url: jdbc://postgresql.namespace.svc:5432/database-name 1
            username: app 2
            password: test12 3
1
The JDBC URL to the PostgreSQL database. This includes the hostname, port, and database name. For more information also see: https://jdbc.postgresql.org/documentation/head/connect.html
2
The username used to connect to the PostgreSQL server
3
The password used to connect to the PostgreSQL server

3.2.10. IoT-specific metrics

The IoT-specific components of AMQ Online provide the metrics described in this section.

3.2.10.1. Common tags and metrics

The following tags are available on all IoT-related components:

host (value: string)
Specifies the name of the host that the component reporting the metric is running on.

component-type (value: adapter or service)
Specifies the type of component reporting the metric.

component-name (value: string)
The name of the component reporting the metric. For a list of components, see the following table.

Table 3.1. Component names

Component                   component-name

HTTP protocol adapter       hono-http
MQTT protocol adapter       hono-mqtt
LoRaWAN protocol adapter    hono-lora
Sigfox protocol adapter     hono-sigfox

3.2.10.2. Protocol adapters

Protocol adapters, components of type adapter, support the following additional tags:

direction (value: one-way, request, or response)
Specifies the direction in which a Command & Control message is being sent: one-way indicates a command sent to a device for which the sending application does not expect to receive a response; request indicates a command request message sent to a device; and response indicates a command response received from a device.

qos (value: 0, 1, or unknown)
Indicates the quality of service used for a telemetry or event message: 0 indicates at most once, 1 indicates at least once, and unknown indicates unknown delivery semantics.

status (value: forwarded, unprocessable, or undeliverable)
Indicates the processing status of a message: forwarded indicates that the message has been forwarded to a downstream consumer; unprocessable indicates that the message has not been processed or forwarded, for example, because the message was malformed; and undeliverable indicates that the message could not be forwarded, for example, because there is no downstream consumer or due to an infrastructure problem.

tenant (value: string)
Specifies the identifier of the tenant that the metric is being reported on.

ttd (value: command, expired, or none)
Indicates the status of the outcome of processing a TTD value contained in a message received from a device: command indicates that a command for the device has been included in the response to the device’s request for uploading the message; expired indicates that a response without a command has been sent to the device; and none indicates that either no TTD value has been specified by the device or that the protocol adapter does not support it.

type (value: telemetry or event)
Indicates the type of (downstream) message for the metric.

Table 3.2. Protocol adapter metrics

hono.commands.received (type: Timer)
Tags: host, component-type, component-name, tenant, type, status, direction
Indicates the amount of time it took to process a message conveying a command or a response to a command.

hono.commands.payload (type: DistributionSummary)
Tags: host, component-type, component-name, tenant, type, status, direction
Indicates the number of bytes conveyed in the payload of a command message.

hono.connections.authenticated (type: Gauge)
Tags: host, component-type, component-name, tenant
Current number of connected, authenticated devices.
NOTE: This metric is only supported by protocol adapters that maintain a connection state with authenticated devices. In particular, the HTTP adapter does not support this metric.

hono.connections.unauthenticated (type: Gauge)
Tags: host, component-type, component-name
Current number of connected, unauthenticated devices.
NOTE: This metric is only supported by protocol adapters that maintain a connection state with authenticated devices. In particular, the HTTP adapter does not support this metric.

hono.messages.received (type: Timer)
Tags: host, component-type, component-name, tenant, type, status, qos, ttd
Indicates the amount of time it took to process a message conveying a telemetry or event message.

hono.messages.payload (type: DistributionSummary)
Tags: host, component-type, component-name, tenant, type, status
Indicates the number of bytes conveyed in the payload of a telemetry or event message.

3.2.11. Troubleshooting guide

3.2.11.1. Fix IoTProject stuck in termination

When an IoTProject instance is deleted, the resource is not deleted immediately. It is only marked for deletion, and the necessary cleanup operations are performed in the background. The resource is automatically deleted once the cleanup has completed successfully.

In some situations, due to infrastructure issues, the cleanup operation cannot be performed at that point in time. The IoTProject is still kept, and the operator periodically retries cleaning up the resources. The cleanup succeeds once the infrastructure is back in an operational state.

If the infrastructure is not expected to become functional again, it might be desirable to force the deletion of the IoTProject resource.

Warning

Manually removing the resource cleanup finalizer will skip the cleanup process, and prevent the system from properly cleaning up.

Procedure

  1. Evaluate if the project is stuck in the termination state using the oc tool:

    oc get iotproject iot -n myapp
    NAME   IOT TENANT  DOWNSTREAM HOST                       DOWNSTREAM PORT   TLS    PHASE
    iot    myapp.iot   messaging-be482a6.enmasse-infra.svc   5671              true   Terminating

    The output should show the project in the state "Terminating". In addition, verify that the cleanup finalizer is still present:

    oc get iotproject iot -n myapp -ojsonpath='{range .metadata.finalizers[*]}{..}{"\n"}{end}'
    iot.enmasse.io/deviceRegistryCleanup

    If the list contains an entry of iot.enmasse.io/deviceRegistryCleanup, the resource cleanup process is still pending.

  2. Manually remove the finalizer iot.enmasse.io/deviceRegistryCleanup from the list of finalizers:

    oc edit iotproject iot -n myapp

    This will open up a text editor with the content of the resource:

    apiVersion: iot.enmasse.io/v1alpha1
    kind: IoTProject
    metadata:
      creationTimestamp: "2019-12-09T15:00:00Z"
      deletionTimestamp: "2019-12-09T16:00:00Z"
      finalizers:
      - iot.enmasse.io/deviceRegistryCleanup 1
      name: iot
      namespace: myapp
    1
    The line with the finalizer to delete

    Delete the line of the finalizer. Save and exit the editor. This will automatically trigger an update on the server, and the system will continue deleting the IoTProject resource.

  3. After the finalizer has been removed, the resource should be deleted and disappear from the system.
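
If you prefer a non-interactive approach, the same finalizer removal can be scripted with oc patch. This is a hedged sketch that assumes the cleanup finalizer is the only entry in the finalizers list:

oc patch iotproject iot -n myapp --type=json -p '[{"op":"remove","path":"/metadata/finalizers/0"}]'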

3.3. IoT for project owners

3.3.1. IoT project configuration examples

IoT projects define the messaging resources an IoT tenant can consume.

3.3.1.1. Using a managed messaging infrastructure

This IoT project configuration example relies on AMQ Online to manage the messaging infrastructure used by IoT traffic. The AMQ Online standard address space and address plans are used.

kind: IoTProject
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: user-1
spec:
  downstreamStrategy:
    managedStrategy: 1
      addressSpace:
        name: iot-user-1
        plan: standard-unlimited 2
        type: standard 3
      addresses:
        telemetry:
          plan: standard-small-anycast 4
          type: standard 5
        event:
          plan: standard-small-queue 6
        command:
          plan: standard-small-anycast 7
1
The managedStrategy value refers to a messaging infrastructure managed by AMQ Online.
2
Specifies the address space plan, defining the resource usage of the address space. Each IoT tenant must have its own address space.
4 6 7
Each address must be associated with a valid address plan.
3
Specifies the type of address space. The default value is standard.

For more information, see Managing address spaces.

5
Specifies the type of address.

For more information, see Managing address spaces.

3.3.1.2. Using an external messaging infrastructure

This IoT configuration example shows how an external messaging infrastructure can be configured.

kind: IoTProject
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: user-1
spec:
  downstreamStrategy:
    externalStrategy:
      host: messaging-hono-default.enmasse-infra.svc
      port: 5672
      username: http
      tls: true
      password: http-secret

3.4. IoT for device managers

Device managers are typically responsible for creating and managing device identities and credentials in the system.

3.4.1. Obtaining an authentication token

To access the device management API, you must obtain a token to authenticate yourself to the API.

Access to an IoT tenant’s devices is mapped by the device registry based on access to the IoTProject resource. If an account has read access to the IoTProject, this account can also execute read operations on the device registry for this IoT tenant.

The token has to be presented to the API as a bearer token by adding an HTTP header value: Authorization: Bearer <token>. For more information, see RFC 6750.

In the following configuration examples, replace ${TOKEN} with the actual token.
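
For example, the device registry management calls shown elsewhere in this guide pass the token in exactly this way:

# Read a registered device, authenticating with the bearer token
curl --insecure -i -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/devices/myapp.iot/4711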

3.4.1.1. Obtaining an authentication token for a user

If you want to use the token of the current OpenShift user, you can extract the token as follows.

Prerequisites

  • You must be logged in to your OpenShift instance as a user that supports tokens.

Procedure

  1. Extract the token for the current user:

    oc whoami -t
Note

User tokens have a limited lifetime, so it may be necessary to renew the token after it has expired.

3.4.1.2. Obtaining an authentication token for a service account

Perform the following steps to create a new service account and extract the token.

Prerequisites

  • You must be logged in to your OpenShift instance with permissions to create new service accounts, roles and role bindings.

Procedure

  1. Create a new service account:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-device-manager-account 1
    1
    The name of the service account.
  2. Create a new role, allowing access to the IoTProject:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: device-manager-role 1
    rules: 2
    - apiGroups: ["iot.enmasse.io"]
      resources: ["iotprojects"]
      verbs: ["create", "update", "get", "list", "delete"]
    1
    The name of the role.
    2
    The access rules, which must grant CRUD access to the IoTProject.

    This example grants access to all IoTProjects in a namespace. To further restrict access, use more specific rules.

  3. Create a new role binding, assigning the role to the service account:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: my-device-manager-account-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: device-manager-role 1
    subjects:
    - kind: ServiceAccount
      name: my-device-manager-account 2
    1
    The name of the role.
    2
    The name of the service account.
  4. Retrieve the token from the service account:

    oc serviceaccounts get-token my-device-manager-account
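
The following is a minimal sketch of one way to apply these resources and export the resulting token. The file names are assumptions, and myapp is the example namespace used elsewhere in this guide; depending on your cluster version, the ServiceAccount subject in the RoleBinding may also require an explicit namespace field.

# Apply the manifests shown above (assumed file names) in the namespace that contains the IoTProject.
oc apply -n myapp -f service-account.yaml -f role.yaml -f role-binding.yaml

# Store the service account token for use with the device management API.
export TOKEN=$(oc serviceaccounts get-token my-device-manager-account -n myapp)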

3.4.2. Creating an IoT device

After installing the IoT services and creating an IoT project, you can create an IoT device representing the physical device that you want to monitor.

3.4.2.1. Registering a new device

To create a new device, you must first register the device.

Procedure

  1. Export the device registry host:

    export REGISTRY_HOST=$(oc -n amq-online-infra get routes device-registry --template='{{ .spec.host }}')
  2. Export the device registry access token:

    export TOKEN=$(oc whoami --show-token)

    This token is used to authenticate against the device registry management API.

  3. Register a device with a defined ID (this example uses 4711):

    curl --insecure -X POST -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/devices/myapp.iot/4711
  4. (Optional) If you need to provide additional registration information, include it as a JSON payload when registering the device:

    curl --insecure -X POST -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data-binary '{
    	"via": ["gateway1"]
    }' https://$REGISTRY_HOST/v1/devices/myapp.iot/4711
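
If you want to verify the registration, the device registry management API also allows reading the registration information back. The following sketch assumes that the registry exposes GET on the same path used for registration:

# Read back the registration information for device 4711.
curl --insecure -i -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/devices/myapp.iot/4711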

3.4.2.2. Setting user name and password credentials for a device

After registering a new device, you must set the user name and password credentials for the device.

Procedure

  1. Add the credentials for a device:

    curl --insecure -X PUT -i -H 'Content-Type: application/json' -H "Authorization: Bearer ${TOKEN}" --data-binary '[{
    	"type": "hashed-password",
    	"auth-id": "sensor1",
    	"secrets": [{
    		"pwd-plain":"'hono-secret'"
    	}]
    }]' https://$REGISTRY_HOST/v1/credentials/myapp.iot/4711
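
To verify that the credentials were stored, you can read them back in the same way. This sketch assumes the credentials API exposes GET on the same path; returned secrets normally do not include the plain-text password:

# Read back the credentials for device 4711.
curl --insecure -i -H "Authorization: Bearer ${TOKEN}" https://$REGISTRY_HOST/v1/credentials/myapp.iot/4711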

3.5. IoT for solution developers

IoT solution developers are typically responsible for writing IoT cloud applications.

3.5.1. Installing the Eclipse Hono command-line client

Procedure

  1. Download the Eclipse Hono command-line client.
  2. Obtain the messaging endpoint certificate:

    oc -n myapp get addressspace iot -o jsonpath={.status.caCert} | base64 --decode > tls.crt
  3. Export the messaging endpoint host and port:

    export MESSAGING_HOST=$(oc get -n myapp addressspace iot -o jsonpath='{.status.endpointStatuses[?(@.name=="messaging")].externalHost}')
    export MESSAGING_PORT=443
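
As a quick check, you can print the endpoint values before starting a consumer:

echo "Messaging endpoint: $MESSAGING_HOST:$MESSAGING_PORT"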

3.5.2. Starting the telemetry consumer

You can send telemetry data, such as sensor readings, from a device to the cloud. To do this, you must first start the telemetry consumer application.

Procedure

  1. Run the consumer application to receive telemetry:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --message.type=telemetry

3.5.3. Starting the events consumer

You can send events, such as alerts and other important data, from a device to a consumer application. To do this, you must first start the events consumer application.

Procedure

  1. Run the consumer application to receive events:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --message.type=events

3.5.4. Starting the command sender

You can send commands from the cloud to a device. To do this, you must start the command sender by running the Hono command-line client.

Procedure

  1. Run the command-line client to send commands to a device with an ID of 4711:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --device.id=4711 --spring.profiles.active=command
  2. Follow the instructions for entering the command’s name, payload, and content type. For example:

    >>>>>>>>> Enter name of command for device [4711] in tenant [myapp.iot] (prefix with 'ow:' to send one-way command):
    ow:setVolume
    >>>>>>>>> Enter command payload:
    {"level": 50}
    >>>>>>>>> Enter content type:
    application/json
    
    INFO  org.eclipse.hono.cli.app.Commander - Command sent to device

3.6. IoT for device developers

Device developers are typically responsible for either connecting existing devices to the cloud platform or writing software for devices. This section describes how to connect devices using one of the supported protocols.

3.6.1. HTTP devices

3.6.1.1. Sending telemetry using HTTP

You can send telemetry from a device to the cloud using the HTTP protocol.

Procedure

  1. Send telemetry using the HTTP protocol:

    curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')/telemetry
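
If you send several messages, you can optionally resolve the HTTP adapter route once and reuse it; the HTTP_ADAPTER variable name is only an example:

# Resolve the HTTP adapter route once (optional convenience).
export HTTP_ADAPTER=$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')

# Equivalent telemetry request using the exported host.
curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$HTTP_ADAPTER/telemetry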

3.6.1.2. Sending events using HTTP

You can send an event message from a device to the cloud using the HTTP protocol.

Procedure

  1. Send events using the HTTP protocol:

    curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')/events

3.6.1.3. Receiving commands using the HTTP protocol

You can send commands from the cloud to a device using the HTTP protocol.

Procedure

  1. Send a telemetry message using the HTTP protocol, specifying the hono-ttd parameter to indicate how long, in seconds, the device waits for a command:

    curl --insecure -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$(oc -n amq-online-infra get routes iot-http-adapter --template='{{ .spec.host }}')/telemetry?hono-ttd=600
  2. Run the command-line client to send commands to a device with an ID of 4711:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --device.id=4711 --spring.profiles.active=command
  3. Follow the instructions for entering the command’s name, payload, and content type. For example:

    >>>>>>>>> Enter name of command for device [4711] in tenant [myapp.iot] (prefix with 'ow:' to send one-way command):
    ow:setVolume
    >>>>>>>>> Enter command payload:
    {"level": 50}
    >>>>>>>>> Enter content type:
    application/json
    
    INFO  org.eclipse.hono.cli.app.Commander - Command sent to device

    The client receives the command in the HTTP response:

    HTTP/1.1 200 OK
    hono-command: setVolume
    content-type: application/json
    content-length: 13
    
    {"level": 50}

3.6.2. MQTT devices

3.6.2.1. Sending telemetry using MQTT

You can send telemetry from a device to the cloud using the MQTT protocol.

Procedure

  1. Send telemetry using the MQTT protocol:

    mosquitto_pub -d -h $(oc -n amq-online-infra get routes iot-mqtt-adapter --template='{{ .spec.host }}') -p 443 -u 'sensor1@myapp.iot' -P hono-secret -t telemetry -m '{"temp": 5}' -i 4711 --cafile install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-fullchain.pem

3.6.2.2. Sending events using MQTT

You can send an event message from a device to the cloud using the MQTT protocol.

Procedure

  1. Send events using the MQTT protocol:

    mosquitto_pub -d -h $(oc -n amq-online-infra get routes iot-mqtt-adapter --template='{{ .spec.host }}') -p 443 -u 'sensor1@myapp.iot' -P hono-secret -t events -m '{"temp": 5}' -i 4711 --cafile install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-fullchain.pem

3.6.2.3. Receiving commands using the MQTT protocol

You can send commands from the cloud to a device using the MQTT protocol.

Procedure

  1. Use the MQTT client to subscribe to the MQTT topic for receiving commands:

    mosquitto_sub -v -d -h $(oc -n amq-online-infra get routes iot-mqtt-adapter --template='{{ .spec.host }}') -p 443 -u 'sensor1@myapp.iot' -P hono-secret -t command/+/+/req/# -i 4711 --cafile install/components/iot/examples/k8s-tls/build/iot-mqtt-adapter-fullchain.pem
  2. Run the command-line client to send commands to a device with an ID of 4711:

    java -jar hono-cli-*-exec.jar --hono.client.host=$MESSAGING_HOST --hono.client.port=$MESSAGING_PORT --hono.client.username=consumer --hono.client.password=foobar --tenant.id=myapp.iot --hono.client.trustStorePath=tls.crt --device.id=4711 --spring.profiles.active=command
  3. Follow the instructions for entering the command’s name, payload, and content type. For example:

    >>>>>>>>> Enter name of command for device [4711] in tenant [myapp.iot] (prefix with 'ow:' to send one-way command):
    ow:setVolume
    >>>>>>>>> Enter command payload:
    {"level": 50}
    >>>>>>>>> Enter content type:
    application/json
    
    INFO  org.eclipse.hono.cli.app.Commander - Command sent to device

    The client receives the command in the MQTT message:

    Client 4711 received PUBLISH (d0, q0, r0, m0, 'command///req//setVolume', ... (13 bytes))
    command///req//setVolume {"level": 50}

3.6.3. Configuring Sigfox devices

After installing the IoT services and creating an IoT project, you can configure the Sigfox backend integration.

3.6.3.1. Registering the Sigfox backend as a gateway device

The Sigfox backend is registered in AMQ Online as a gateway device. The credentials assigned to this gateway device are required to configure the "callback" in the Sigfox backend, and the actual Sigfox devices are configured to use this gateway device as their transport.

Procedure

  1. Register a new device.

    In step 3 of the Registering a new device procedure, specify sigfox-backend as the device ID instead of 4711.

  2. Set up password credentials for this device (for example, sigfox-user / sigfox-password).

3.6.3.2. Registering the Sigfox device

Procedure

  1. Locate the device ID for the Sigfox device you want to register. You obtain this ID as part of the registration process in the Sigfox backend.
  2. Register a new device.

    Specify the device ID as the name (for example, 1AB2C3) and specify the name of the gateway device in the via field, as part of the registration information (for example, {"via": ["sigfox-backend"]}).

Note

Do not set a password for this device.

3.6.3.3. Preparing the Sigfox connection information

Prepare the following connection information, which is used in the Creating a new callback in the Sigfox backend procedure.

IoT tenant name
The name of the IoT tenant consists of the OpenShift namespace and the name of the IoT project resource, for example, namespace.iotproject.
HTTP authorization header

For the Sigfox backend to authenticate, you must convert the username and password combination of the gateway device into an HTTP "Basic Authorization" header. Be sure to specify the full user name using the following syntax: <authentication ID>@<IoT tenant name>.

Example: sigfox-user@namespace.iotproject

The basic authentication header value can be generated on the command line using the following command:

echo "Basic $(echo -n "sigfox-user@namespace.iotproject:password" | base64)"
URL pattern

The URL pattern consists of the URL to the Sigfox protocol adapter and Sigfox-specific query parameters:

https://<ADAPTER URL>/data/telemetry/<TENANT>?device={device}&data={data}

Run the following command to obtain the URL of the protocol adapter:

echo "https://$(oc -n amq-online-infra get routes iot-sigfox-adapter --template='{{ .spec.host }}')"

The path segment /data/telemetry tells the protocol adapter to handle messages as telemetry data. You can use /data/event instead to process messages as events.
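
For example, the corresponding URL pattern for processing messages as events is:

https://<ADAPTER URL>/data/event/<TENANT>?device={device}&data={data}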

Note

{device} and {data} are literal values and must not be replaced.

3.6.3.4. Creating a new callback in the Sigfox backend

Procedure

  1. Log in to https://backend.sigfox.com.
  2. In the Device Type list, open a type for editing and switch to the Callbacks section.
  3. Create a new "Custom" callback, with the following settings:

    Type
    DATA - UPLINK
    Channel
    URL
    Url pattern
    The URL pattern. For example, https://iot-sigfox-adapter.my.cluster/data/telemetry/<TENANT>?device={device}&data={data}
    Use HTTP Method
    GET
    Headers
    Authorization: Basic … (the header value generated in Preparing the Sigfox connection information)
    Send SNI
    ☑ (Enabled)

3.6.3.5. Enabling command and control in Sigfox

Procedure

  1. Log in to https://backend.sigfox.com.
  2. In the Device Type list, open the type for editing and switch to the Callbacks section.
  3. Edit the callback configuration for which you want to enable command and control.

    Type
    Switch to DATA - BIDIR
    Url Pattern
    Add the ack parameter. For example, https://iot-sigfox-adapter.my.cluster/data/telemetry/<TENANT>?device={device}&data={data}&ack={ack}

Chapter 4. Uninstalling AMQ Online

You must uninstall AMQ Online using the same method that you used to install AMQ Online.

4.1. Uninstalling AMQ Online using the YAML bundle

Use this method to uninstall AMQ Online if it was installed using the YAML bundle.

Procedure

  1. Log in as a user with cluster-admin privileges:

    oc login -u system:admin
  2. Delete the cluster-level resources:

    oc delete crd -l app=enmasse,enmasse-component=iot
    oc delete crd -l app=enmasse --timeout=600s
    oc delete clusterrolebindings -l app=enmasse
    oc delete clusterroles -l app=enmasse
    oc delete apiservices -l app=enmasse
    oc delete oauthclients -l app=enmasse
  3. (OpenShift 4) Delete the console integration:

    oc delete consolelinks -l app=enmasse
  4. (Optional) Delete the service catalog integration:

    oc delete clusterservicebrokers -l app=enmasse
  5. Delete the project where AMQ Online is deployed:

    oc delete project amq-online-infra
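
To verify that the uninstallation removed the cluster-level resources, you can check that the enmasse label selector no longer matches anything; each of the following commands should return no resources:

oc get crd -l app=enmasse
oc get clusterroles,clusterrolebindings -l app=enmasse
oc get apiservices -l app=enmasse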

4.2. Uninstalling the AMQ Online Operator using the OpenShift Container Platform 4.x console

You can uninstall the AMQ Online Operator from an OpenShift Container Platform 4.x cluster by using the OpenShift Container Platform console.

Prerequisites

  • An installed AMQ Online Operator on an OpenShift Container Platform 4.x cluster.

Procedure

  1. From the Project list, select the openshift-operators project.
  2. Click Catalog → Operator Management. The Operator Management page opens.
  3. Click the Operator Subscriptions tab.
  4. Find the AMQ Online Operator you want to uninstall. In the far right column, click the vertical ellipsis icon and select Remove Subscription.
  5. When prompted by the Remove Subscription window, select the Also completely remove the AMQ Online Operator from the selected namespace check box to remove all components related to the installation.
  6. Click Remove. The AMQ Online Operator will stop running and no longer receive updates.

4.2.1. Removing remaining resources after uninstalling AMQ Online using the Operator Lifecycle Manager

Due to ENTMQMAAS-1281, some resources remain after uninstalling AMQ Online using the Operator Lifecycle Manager. This procedure removes the remaining resources, which completely uninstalls AMQ Online.

Procedure

  1. On the command line, log in as a user with permissions to run commands in the openshift-operators project:

    oc login -u system:admin
  2. Change to the openshift-operators project:

    oc project openshift-operators
  3. Run the following commands to remove any remaining resources:

    oc delete all -l app=enmasse
    oc delete crd -l app=enmasse
    oc delete apiservices -l app=enmasse
    oc delete cm -l app=enmasse
    oc delete secret -l app=enmasse

Appendix A. Using your subscription

AMQ Online is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

Accessing your account

  1. Go to access.redhat.com.
  2. If you do not already have an account, create one.
  3. Log in to your account.

Activating a subscription

  1. Go to access.redhat.com.
  2. Navigate to My Subscriptions.
  3. Navigate to Activate a subscription and enter your 16-digit activation number.

Downloading zip and tar files

To access zip or tar files, use the Red Hat Customer Portal to find the relevant files for download. If you are using RPM packages, this step is not required.

  1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
  2. Locate the Red Hat AMQ Online entries in the JBOSS INTEGRATION AND AUTOMATION category.
  3. Select the desired AMQ Online product. The Software Downloads page opens.
  4. Click the Download link for your component.

Registering your system for packages

To install RPM packages on Red Hat Enterprise Linux, your system must be registered. If you are using zip or tar files, this step is not required.

  1. Go to access.redhat.com.
  2. Navigate to Registration Assistant.
  3. Select your OS version and continue to the next page.
  4. Use the listed command in your system terminal to complete the registration.

To learn more, see How to Register and Subscribe a System to the Red Hat Customer Portal.

Revised on 2020-08-06 17:25:10 UTC

Legal Notice

Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.