Chapter 1. Overview of AMQ Streams
Red Hat AMQ Streams is a massively scalable, distributed, and high-performance data streaming platform based on the Apache ZooKeeper and Apache Kafka projects. It consists of the following main components:
- ZooKeeper
- Service for highly reliable distributed coordination.
- Kafka Broker
- Messaging broker responsible for delivering records from producing clients to consuming clients.
- Kafka Connect
- A toolkit for streaming data between Kafka brokers and other systems using connector plugins.
- Kafka Consumer and Producer APIs
- Java-based APIs for producing and consuming messages to and from Kafka brokers.
- Kafka Streams API
- API for writing stream processor applications.
A cluster of Kafka brokers is the hub connecting all these components. The brokers use Apache ZooKeeper to store configuration data and to coordinate the cluster. An Apache ZooKeeper cluster must be running before you start Apache Kafka.
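To illustrate this dependency, each broker's connection to the ZooKeeper cluster is defined in the broker configuration file. The following is a minimal sketch of a `config/server.properties` fragment; the host names, ports, and broker ID shown here are placeholder values, not defaults from this product:

```properties
# Unique ID of this broker within the Kafka cluster
broker.id=0
# Comma-separated list of ZooKeeper ensemble nodes used for
# configuration storage and cluster coordination
zookeeper.connect=zookeeper1.example.com:2181,zookeeper2.example.com:2181
# Listener that the broker exposes to producing and consuming clients
listeners=PLAINTEXT://kafka1.example.com:9092
```

The broker refuses to start if it cannot reach the ZooKeeper ensemble named in `zookeeper.connect`, which is why the ZooKeeper cluster has to be running first.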
Figure 1.1. Example Architecture diagram of AMQ Streams
1.1. Key features
Scalability and performance
- Designed for horizontal scalability
Message ordering guarantee
- At partition level
Message rewind/replay
- "Long term" storage
- Allows reconstruction of application state by replaying the messages
- Combined with compacted topics, allows Kafka to be used as a key-value store
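Compaction is configured per topic. As a sketch of how the key-value-store pattern above is enabled, the standard Kafka tooling can create a topic with a compacted cleanup policy; the bootstrap address, topic name, and partition/replication counts below are placeholders (and `--bootstrap-server` requires a recent Kafka version):

```shell
# Create a topic whose log is compacted by key rather than deleted by age,
# so the latest record for each key is retained indefinitely
bin/kafka-topics.sh --create \
  --bootstrap-server <bootstrap-address> \
  --topic <topic-name> \
  --partitions 3 \
  --replication-factor 2 \
  --config cleanup.policy=compact
```

With `cleanup.policy=compact`, replaying the topic from the beginning yields the current value for every key, which is what makes the key-value-store usage practical.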
1.2. Supported Configurations
To run in a supported configuration, AMQ Streams must run on one of the following JVM versions and on one of the supported operating systems.
Table 1.1. List of supported Java Virtual Machines
Table 1.2. List of supported Operating Systems
Red Hat Enterprise Linux
1.3. Document conventions
In this document, replaceable text is styled in monospace and surrounded by angle brackets.
For example, in the following command, replace
<bootstrap-address> and <topic-name> with your own bootstrap server address and topic name:
bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning
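Following the same convention, the matching producer command (again with placeholders to replace) reads lines from standard input and publishes each line as a message to the topic; note that older Kafka releases use `--broker-list` instead of `--bootstrap-server`:

```shell
bin/kafka-console-producer.sh --bootstrap-server <bootstrap-address> --topic <topic-name>
```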