Chapter 4. Running Kafka in KRaft mode (technology preview)

When you run AMQ Streams in KRaft (Kafka Raft metadata) mode, Kafka clusters are managed by an internal quorum of controllers instead of ZooKeeper.

Apache Kafka is in the process of phasing out the need for ZooKeeper. KRaft mode is now available to try. You can deploy a Kafka cluster in KRaft mode without ZooKeeper.

Caution

KRaft mode is experimental, intended only for development and testing, and must not be enabled for a production environment.

Currently, the KRaft mode in AMQ Streams has the following major limitations:

  • Migrating Kafka clusters from ZooKeeper to KRaft, or from KRaft back to ZooKeeper, is not supported.
  • Upgrades and downgrades of Apache Kafka versions are not supported.
  • SCRAM-SHA-512 authentication is not supported.
  • JBOD storage with multiple disks is not supported.
  • Many configuration options are still in development.

4.1. Using AMQ Streams with Kafka in KRaft mode

If you use Kafka in KRaft mode, you do not need to use ZooKeeper for cluster coordination or storing metadata. Kafka coordinates the cluster itself using brokers that act as controllers. Kafka also stores the metadata used to track the status of brokers and partitions.

To identify a cluster, you create a cluster ID. The ID is used when formatting the log directories of the brokers you add to the cluster.

In the configuration of each broker node, specify the following:

  • A node ID
  • Broker roles
  • A list of brokers (or voters) that act as controllers

A broker performs the role of broker, controller, or both.

Broker role
A broker, sometimes referred to as a node or server, orchestrates the storage and passing of messages.
Controller role
A controller coordinates the cluster and manages the tracking metadata.

You can use combined broker and controller nodes, though you might want to separate these functions. Brokers performing combined roles can be more efficient in simpler deployments.

You specify a list of controllers, configured as voters, using the node ID and connection details (hostname and port) for each controller.
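As an illustration, the configuration for a combined node in a three-node cluster might look like the following sketch. The example.com hostnames and port 9093 are placeholder values; substitute the connection details of your own controllers.

```properties
# This node acts as both broker and controller
process.roles=broker, controller
# Unique ID for this node
node.id=1
# All controllers in the quorum, as <node_id>@<hostname>:<port>
controller.quorum.voters=1@kafka1.example.com:9093,2@kafka2.example.com:9093,3@kafka3.example.com:9093
```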

4.2. Running a Kafka cluster in KRaft mode

Configure and run Kafka in KRaft mode. You can run Kafka in KRaft mode as a single-node or multi-node cluster. For stability and availability, run a minimum of three broker and controller nodes.

You set roles for brokers so that they can also act as controllers. You apply broker configuration, including the setting of roles, using a configuration properties file. Broker configuration differs according to role. Kafka provides three example configuration properties files for KRaft mode.

  • /opt/kafka/config/kraft/broker.properties has example configuration for a broker role
  • /opt/kafka/config/kraft/controller.properties has example configuration for a controller role
  • /opt/kafka/config/kraft/server.properties has example configuration for a combined role

You can base your broker configuration on these example properties files. In this procedure, the example server.properties configuration is used.

Prerequisites

  • AMQ Streams is installed on all hosts that will be used as Kafka brokers.

Procedure

  1. Generate an ID for the Kafka cluster using the kafka-storage tool:

    /opt/kafka/bin/kafka-storage.sh random-uuid

    The command returns an ID. A cluster ID is required in KRaft mode.
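    Because the same ID must be reused when formatting every node, you might capture it in a shell variable. The variable name is arbitrary.

```shell
# Store the generated cluster ID for reuse when formatting each node
CLUSTER_ID=$(/opt/kafka/bin/kafka-storage.sh random-uuid)
echo "$CLUSTER_ID"
```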

  2. Create a configuration properties file for each broker in the cluster.

    You can base the file on the examples provided with Kafka.

    1. Specify a role as broker, controller, or broker, controller (combined).

      For example, process.roles=broker, controller.

    2. Specify a unique node.id for each node in the cluster starting from 0.

      For example, node.id=1.

    3. Specify a list of controller.quorum.voters in the format <node_id>@<hostname>:<port>.

      For example, controller.quorum.voters=1@localhost:9093.

  3. Set up log directories for each node in your Kafka cluster:

    /opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/server.properties

    Returns:

    Formatting /tmp/kraft-combined-logs

    Replace <uuid> with the cluster ID you generated. Use the same ID for each node in your cluster.

    Replace /opt/kafka/config/kraft/server.properties with the path to the configuration properties file you created for the broker.

    The default log directory location specified in the server.properties configuration file is /tmp/kraft-combined-logs. You can set the log.dirs property to a comma-separated list to set up multiple log directories.

  4. Start each Kafka broker.

    /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server.properties
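    To run the broker in the background instead of in the foreground, you can pass the -daemon option to the same script:

```shell
# Start the broker as a background (daemon) process
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties
```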
  5. Check that Kafka is running:

    jcmd | grep kafka

    Returns:

    <process_id> kafka.Kafka /opt/kafka/config/kraft/server.properties

You can now create topics, and send and receive messages from the brokers.
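For example, assuming a broker listening on localhost:9092 and an existing topic named my-topic (both placeholder values), you could verify message flow with the console tools shipped with Kafka:

```shell
# Send messages interactively (type a line per message, Ctrl+C to exit)
/opt/kafka/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# Read messages from the beginning of the topic
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
```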

For brokers passing messages, you can use topic replication across the brokers in a cluster for data durability. Configure topics to have a replication factor of at least 3 and the minimum number of in-sync replicas (min.insync.replicas) set to one less than the replication factor. For more information, see Section 6.7, “Creating a topic”.
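For example, with a replication factor of 3, the minimum number of in-sync replicas would be 2. A sketch of creating such a topic with the kafka-topics tool follows; the topic name, partition count, and bootstrap address are placeholder values.

```shell
# Create a durable topic: 3 replicas, at least 2 of which must be in sync
/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic my-topic \
  --partitions 3 --replication-factor 3 \
  --config min.insync.replicas=2
```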