Evaluating AMQ Streams on OpenShift
For use with AMQ Streams 1.3 on OpenShift Container Platform
Chapter 1. Overview of AMQ Streams
AMQ Streams is based on Apache Kafka, a popular platform for streaming data delivery and processing. AMQ Streams makes it easy to run Apache Kafka on OpenShift.
AMQ Streams provides three operators:
- Cluster Operator
- Responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster.
- Topic Operator
- Responsible for managing Kafka topics within a Kafka cluster running within an OpenShift cluster.
- User Operator
- Responsible for managing Kafka users within a Kafka cluster running within an OpenShift cluster.
The Cluster Operator can deploy the Topic Operator and User Operator (as part of an Entity Operator configuration) at the same time as a Kafka cluster.
Operators within the AMQ Streams architecture
1.1. Kafka Key Features
- Designed for horizontal scalability
- Message ordering guarantee at the partition level
- Message rewind/replay
- "Long term" storage allows the reconstruction of an application state by replaying the messages
- Combines with compacted topics to use Kafka as a key-value store
Additional resources
- For more information about Apache Kafka, see the Apache Kafka website.
1.2. Document Conventions
Replaceables
In this document, replaceable text is styled in monospace and italics.
For example, in the following code, you will want to replace my-namespace
with the name of your namespace:
sed -i 's/namespace: .*/namespace: my-namespace/' install/cluster-operator/*RoleBinding*.yaml
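The convention can be rehearsed safely before touching the real installation files. The following sketch runs the documented substitution against a throwaway file in /tmp, with the replaceable my-namespace already swapped for a made-up namespace name, team-a:

```shell
# Sandbox demo of the replaceable substitution (throwaway file, hypothetical namespace "team-a")
printf 'metadata:\n  namespace: myproject\n' > /tmp/demo-rolebinding.yaml
# "my-namespace" from the documented command has been replaced with "team-a"
sed -i 's/namespace: .*/namespace: team-a/' /tmp/demo-rolebinding.yaml
cat /tmp/demo-rolebinding.yaml
```

The pattern replaces everything after `namespace: ` on the matching line, so the indentation of the original YAML is preserved.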
Chapter 2. Try AMQ Streams
Install AMQ Streams and start sending and receiving messages from a topic in minutes.
You will:
- Install AMQ Streams
- Create a Kafka cluster
- Access the Kafka cluster to send and receive messages
Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter.
2.1. Prerequisites
- A running OpenShift Container Platform cluster (3.11 or later) on which to deploy AMQ Streams
2.2. Downloading AMQ Streams
Download a zip file that contains the resources required for installation along with examples for configuration.
Prerequisites
- Access to the AMQ Streams download site.
Procedure
- Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site.
- Unzip the file to any destination.
- On Windows or Mac, you can extract the contents of the ZIP archive by double-clicking the ZIP file.
- On Red Hat Enterprise Linux, open a terminal window on the target machine, navigate to where the ZIP file was downloaded, and extract it with the following command:
unzip amq-streams-x.y.z-ocp-install-examples.zip
2.3. Installing AMQ Streams
Install AMQ Streams with the Custom Resource Definitions (CRDs) required for deployment.
In this task you create namespaces in the cluster for your deployment. It is good practice to use namespaces to separate functions.
Prerequisites
- Installation requires a user with the cluster-admin role, such as system:admin
Procedure
Log in to the OpenShift cluster with cluster admin privileges.
For example:
oc login -u system:admin
Create a new kafka project (namespace) for the AMQ Streams Kafka Cluster Operator:
oc new-project kafka
Modify the installation files to reference the new kafka namespace where you will install the AMQ Streams Kafka Cluster Operator.
Note: By default, the files work in the myproject namespace.
On Linux, use:
sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
On Mac, use:
sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml
Deploy the CRDs and role-based access control (RBAC) resources to manage the CRDs.
oc apply -f install/cluster-operator/
Create a new my-kafka-project namespace where you will deploy your Kafka cluster:
oc new-project my-kafka-project
Give the non-admin user developer access to my-kafka-project.
For example:
oc adm policy add-role-to-user admin developer -n my-kafka-project
Give permission to the Cluster Operator to watch the my-kafka-project namespace:
oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka
oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project
oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project
oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project
The commands create role bindings that grant permission for the Cluster Operator to access the Kafka cluster.
Create a new cluster role strimzi-admin:
oc apply -f install/strimzi-admin
Add the role to the non-admin user developer:
oc adm policy add-cluster-role-to-user strimzi-admin developer
2.4. Creating a cluster
Create a Kafka cluster, then a topic within the cluster.
When you create a cluster, the Cluster Operator you deployed watches for new Kafka resources.
Prerequisites
- For the Kafka cluster, a Cluster Operator is deployed
- For the topic, a running Kafka cluster
Procedure
Log in to the my-kafka-project namespace as user developer.
For example:
oc login -u developer
oc project my-kafka-project
After a new user logs in to OpenShift Container Platform, an account is created for that user.
Create a new my-cluster Kafka cluster with 3 ZooKeeper and 3 broker nodes:
- Use ephemeral storage
- Expose the Kafka cluster outside of the OpenShift cluster using an external listener configured to use a route
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
      external:
        type: route
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
EOF
Wait for the cluster to be deployed:
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project
Now that your cluster is running, create a topic to publish and subscribe from your external client.
Create the my-topic custom resource with 3 replicas and 3 partitions in the my-cluster Kafka cluster:
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: "my-cluster"
spec:
  partitions: 3
  replicas: 3
EOF
2.5. Accessing the cluster
Because a route is used for external access to the cluster, a cluster CA certificate is required to enable TLS (Transport Layer Security) encryption between the broker and the client.
Prerequisites
- A Kafka cluster running within the OpenShift cluster
- A running Cluster Operator
Procedure
Find the address of the bootstrap route:
oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}'
Use the address together with port 443 in your Kafka client as the bootstrap address.
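Putting those two pieces together, a minimal sketch of composing the bootstrap address from the route host (the hostname below is a made-up placeholder; substitute the value returned by the oc get routes command above):

```shell
# Compose the client bootstrap address from the route host (hypothetical hostname)
ROUTE_HOST=my-cluster-kafka-bootstrap-my-kafka-project.apps.example.com
BOOTSTRAP_ADDRESS="${ROUTE_HOST}:443"
echo "${BOOTSTRAP_ADDRESS}"
```

Port 443 is used because the OpenShift router terminates the route on the standard HTTPS port, regardless of the port the broker listens on internally.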
Extract the public certificate of the broker certification authority:
oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt
Import the trusted certificate to a truststore:
keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt
You are now ready to start sending and receiving messages.
2.6. Sending and receiving messages from a topic
Test your AMQ Streams installation by sending and receiving messages to and from my-topic from outside the cluster.
Use a terminal to run a Kafka producer and consumer on a local machine.
Prerequisites
- AMQ Streams is installed on the OpenShift cluster
- ZooKeeper and Kafka are running
- Cluster CA certificate for access to the cluster
- Access to the latest version of the Red Hat AMQ Streams archive from the AMQ Streams download site
Procedure
Download the latest version of the AMQ Streams archive (amq-streams-x.y.z-bin.zip) from the AMQ Streams download site and unzip it to any destination.
Open a terminal and start the Kafka console producer with the topic my-topic and the authentication properties for TLS:
bin/kafka-console-producer.sh --broker-list <route-address>:443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic
- Type your message into the console where the producer is running.
- Press Enter to send the message.
Open a new terminal tab or window and start the Kafka console consumer to receive the messages:
bin/kafka-console-consumer.sh --bootstrap-server <route-address>:443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning
- Confirm that you see the incoming messages in the consumer console.
- Press Ctrl+C to exit the Kafka console producer and consumer.
Appendix A. Using Your Subscription
AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.
Accessing Your Account
- Go to access.redhat.com.
- If you do not already have an account, create one.
- Log in to your account.
Activating a Subscription
- Go to access.redhat.com.
- Navigate to My Subscriptions.
- Navigate to Activate a subscription and enter your 16-digit activation number.
Downloading Zip and Tar Files
To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.
- Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
- Locate the Red Hat AMQ Streams entries in the JBOSS INTEGRATION AND AUTOMATION category.
- Select the desired AMQ Streams product. The Software Downloads page opens.
- Click the Download link for your component.
Revised on 2019-11-07 07:14:09 UTC