Chapter 6. Using the User Operator
The User Operator provides a way of managing Kafka users via OpenShift resources.
6.1. Overview of the User Operator component
The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser OpenShift resources that describe Kafka users and ensuring that they are configured properly in the Kafka cluster. For example:
- If a KafkaUser is created, the User Operator will create the user it describes
- If a KafkaUser is deleted, the User Operator will delete the user it describes
- If a KafkaUser is changed, the User Operator will update the user it describes
Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Unlike Kafka topics, which applications might create directly in Kafka, users are not expected to be managed directly in the Kafka cluster in parallel with the User Operator, so this synchronization should not be needed.
The User Operator allows you to declare a KafkaUser as part of your application’s deployment. When the user is created, the User Operator creates its credentials in a Secret. Your application needs to use the user and its credentials for authentication and to produce or consume messages.
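As a sketch of how an application can consume those credentials, the Secret can be mounted into the application pod as a volume. The pod name my-app, the image my-app-image, and the mount path are hypothetical; only the Secret name my-user comes from the examples in this chapter:

```shell
# Print a hypothetical pod spec that mounts the my-user Secret so the
# application container can read the credentials under /etc/kafka-user.
cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app               # hypothetical pod name
spec:
  containers:
    - name: app
      image: my-app-image    # hypothetical image
      volumeMounts:
        - name: kafka-user
          mountPath: /etc/kafka-user
          readOnly: true
  volumes:
    - name: kafka-user
      secret:
        secretName: my-user  # Secret created by the User Operator
EOF
```

The application then reads the credential files from the mount path instead of having them baked into its image.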
In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user’s rights in the KafkaUser declaration.
6.2. Mutual TLS authentication for clients
6.2.1. Mutual TLS authentication
Mutual authentication or two-way authentication is when both the server and the client present certificates. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. When you configure mutual authentication, the broker authenticates the client and the client authenticates the broker. Mutual TLS authentication is always used for the communication between Kafka brokers and Zookeeper pods.
TLS authentication is more commonly one-way, with only one party authenticating to the other. For example, when the HTTPS protocol is used between a web browser and a web server, the authentication is not mutual: only the browser gets proof of the identity of the server.
6.2.2. When to use mutual TLS authentication for clients
Mutual TLS authentication is recommended for authenticating Kafka clients when:
- The client supports authentication using mutual TLS authentication
- It is necessary to use TLS certificates rather than passwords
- You can reconfigure and restart client applications periodically so that they do not use expired certificates
6.3. Creating a Kafka user with mutual TLS authentication
Prerequisites
- A running Kafka cluster configured with a listener using TLS authentication.
- A running User Operator.
Procedure
1. Prepare a YAML file containing the KafkaUser to be created.

An example KafkaUser

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read

2. Create the KafkaUser resource in OpenShift. On OpenShift this can be done using oc apply:

oc apply -f your-file

3. Use the credentials from the secret my-user in your application.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.2, “Cluster Operator”.
- For more information about configuring a listener that authenticates using TLS, see Section 3.1.4, “Kafka broker listeners”.
- For more information about deploying the Entity Operator, see Section 3.1.8, “Entity Operator”.
- For more information about the KafkaUser object, see KafkaUser schema reference.
6.4. SCRAM-SHA authentication
SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL SCRAM-SHA-512 to provide authentication on both unencrypted and TLS-encrypted client connections. TLS authentication is always used internally between Kafka brokers and Zookeeper nodes. When used with a TLS client connection, the TLS protocol provides encryption, but is not used for authentication.
The following properties of SCRAM make it safe to use SCRAM-SHA even on unencrypted connections:
- The passwords are not sent in the clear over the communication channel. Instead, the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user.
- The server and client each generate a new challenge for each authentication exchange, so the exchange is resilient against replay attacks.
6.4.1. Supported SCRAM credentials
AMQ Streams supports SCRAM-SHA-512 only. When a KafkaUser.spec.authentication.type is configured with scram-sha-512, the User Operator will generate a random 12-character password consisting of uppercase and lowercase ASCII letters and numbers.
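For illustration only, a password of that shape can be produced in a shell like this. This is a sketch of the password format, not the operator's actual implementation (the operator generates the password internally):

```shell
# Illustrative sketch: generate a random 12-character password made of
# uppercase/lowercase ASCII letters and digits, the same shape as the
# password the User Operator generates for scram-sha-512 users.
password=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
echo "$password"
```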
6.4.2. When to use SCRAM-SHA authentication for clients
SCRAM-SHA is recommended for authenticating Kafka clients when:
- The client supports authentication using SCRAM-SHA-512
- It is necessary to use passwords rather than TLS certificates
- You want authentication for unencrypted communication
6.5. Creating a Kafka user with SCRAM-SHA authentication
Prerequisites
- A running Kafka cluster configured with a listener using SCRAM-SHA authentication.
- A running User Operator.
Procedure
1. Prepare a YAML file containing the KafkaUser to be created.

An example KafkaUser

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read

2. Create the KafkaUser resource in OpenShift. On OpenShift this can be done using oc apply:

oc apply -f your-file

3. Use the credentials from the secret my-user in your application.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.2, “Cluster Operator”.
- For more information about configuring a listener that authenticates using SCRAM-SHA, see Section 3.1.4, “Kafka broker listeners”.
- For more information about deploying the Entity Operator, see Section 3.1.8, “Entity Operator”.
- For more information about the KafkaUser object, see KafkaUser schema reference.
6.6. Editing a Kafka user
This procedure describes how to change the configuration of an existing Kafka user by using a KafkaUser OpenShift resource.
Prerequisites
- A running Kafka cluster.
- A running User Operator.
- An existing KafkaUser to be changed
Procedure
1. Prepare a YAML file containing the desired KafkaUser.

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: literal
        operation: Read

2. Update the KafkaUser resource in OpenShift. On OpenShift this can be done using oc apply:

oc apply -f your-file

3. Use the updated credentials from the my-user secret in your application.
Additional resources
- For more information about deploying the Cluster Operator, see Section 2.2, “Cluster Operator”.
- For more information about deploying the Entity Operator, see Section 3.1.8, “Entity Operator”.
- For more information about the KafkaUser object, see KafkaUser schema reference.
6.7. Deleting a Kafka user
This procedure describes how to delete a Kafka user created with a KafkaUser OpenShift resource.
Prerequisites
- A running Kafka cluster.
- A running User Operator.
- An existing KafkaUser to be deleted.
Procedure
Delete the KafkaUser resource in OpenShift. On OpenShift this can be done using oc:

oc delete kafkauser your-user-name

Additional resources

- For more information about deploying the Cluster Operator, see Section 2.2, “Cluster Operator”.
- For more information about the KafkaUser object, see KafkaUser schema reference.
6.8. Kafka User resource
The KafkaUser resource is used to declare a user with its authentication mechanism, authorization mechanism, and access rights.
6.8.1. Authentication
Authentication is configured using the authentication property in KafkaUser.spec. The authentication mechanism enabled for the user is specified in the type field. Currently, the only supported authentication mechanisms are TLS Client Authentication and SCRAM-SHA-512.
When no authentication mechanism is specified, the User Operator will not create the user or its credentials.
6.8.1.1. TLS Client Authentication
To use TLS client authentication, set the type field to tls.
An example of KafkaUser with enabled TLS Client Authentication
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  # ...
When the User Operator creates the user, it also creates a new secret with the same name as the KafkaUser resource. The secret contains the user's public and private keys, which should be used for TLS Client Authentication. Bundled with them is the public key of the client certification authority which was used to sign the user certificate. All keys are in X.509 format.
An example of the Secret with user credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  ca.crt: # Public key of the Clients CA
  user.crt: # Public key of the user
  user.key: # Private key of the user
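As a sketch of how those fields are recovered for use by a client, each value in .data is base64-encoded and just needs decoding. The oc commands assume a reachable cluster and are shown as comments; the decode step itself is plain base64 and is demonstrated on a stand-in value:

```shell
# Sketch: recovering PEM files from the my-user Secret. On a real cluster:
#   oc get secret my-user -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
#   oc get secret my-user -o jsonpath='{.data.user\.key}' | base64 -d > user.key
#   oc get secret my-user -o jsonpath='{.data.ca\.crt}'   | base64 -d > ca.crt
# The decode step itself, shown on a stand-in value:
pem='-----BEGIN CERTIFICATE-----'        # stand-in for real PEM content
encoded=$(printf '%s' "$pem" | base64)   # what the .data field looks like
printf '%s' "$encoded" | base64 -d       # what lands in user.crt
```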
6.8.1.2. SCRAM-SHA-512 Authentication
To use the SCRAM-SHA-512 authentication mechanism, set the type field to scram-sha-512.
An example of KafkaUser with enabled SCRAM-SHA-512 authentication
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
  # ...
When the User Operator creates the user, it also creates a new secret with the same name as the KafkaUser resource. The secret contains the generated password.
An example of the Secret with user credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-user
  labels:
    strimzi.io/kind: KafkaUser
    strimzi.io/cluster: my-cluster
type: Opaque
data:
  password: # Generated password
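As a sketch of how a client consumes this, the password field is base64-decoded and placed into the client's SASL configuration. The encoded value below is hypothetical; on a real cluster you would fetch it with the oc command shown in the comment:

```shell
# Sketch: decode the generated password and assemble the JAAS line a Kafka
# client needs for SCRAM-SHA-512. On a real cluster:
#   oc get secret my-user -o jsonpath='{.data.password}' | base64 -d
encoded="c2VjcmV0"                              # hypothetical .data.password value
password=$(printf '%s' "$encoded" | base64 -d)  # decodes to: secret
jaas="org.apache.kafka.common.security.scram.ScramLoginModule required username=\"my-user\" password=\"$password\";"
echo "$jaas"
```

The resulting string is what a Java client would set as its sasl.jaas.config property.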
6.8.2. Authorization
Authorization is configured using the authorization property in KafkaUser.spec. The authorization type enabled for the user is specified in the type field. Currently, the only supported authorization type is simple authorization.
When no authorization is specified, the User Operator will not provision any access rights for the user.
6.8.2.1. Simple Authorization
To use Simple Authorization, set the type property to simple. Simple Authorization uses the SimpleAclAuthorizer plugin. SimpleAclAuthorizer is the default authorization plugin which is part of Apache Kafka. Simple Authorization allows you to specify a list of ACL rules in the acls property.
The acls property should contain a list of AclRule objects. An AclRule specifies the access rights which will be granted to the user. The AclRule object contains the following properties:
type
Specifies the type of the ACL rule. The type can be either allow or deny. The type field is optional; when it is not specified, the ACL rule is treated as an allow rule.

operation
Specifies the operation which will be allowed or denied. The following operations are supported:
- Read
- Write
- Delete
- Alter
- Describe
- All
- IdempotentWrite
- ClusterAction
- Create
- AlterConfigs
- DescribeConfigs

Note
Not every operation can be combined with every resource.
host
Specifies a remote host from which the rule is allowed or denied. Use * to allow or deny the operation from all hosts. The host field is optional; when it is not specified, the value * is used by default.

resource
Specifies the resource to which the rule applies. Simple Authorization supports three different resource types:
- Topics
- Consumer Groups
- Clusters
The resource type can be specified in the type property. Use topic for Topics, group for Consumer Groups, and cluster for Clusters.

Topic and Group resources additionally allow you to specify the name of the resource to which the rule applies, using the name property. The name can be specified either as a literal or as a prefix. To specify the name as a literal, set the patternType property to literal. Literal names are taken exactly as they are specified in the name field. To specify the name as a prefix, set the patternType property to prefix. Prefix-type names use the value from the name property only as a prefix, and the rule applies to all resources with names starting with that value. Cluster-type resources have no name.
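The difference between the two pattern types can be illustrated with a small shell sketch. This is illustrative only, not the broker's actual matching code:

```shell
# Illustrative sketch: how literal and prefix patternType values compare a
# rule's name against a resource name.
matches() {
  rule_name=$1; pattern_type=$2; resource_name=$3
  if [ "$pattern_type" = "literal" ]; then
    # literal: the resource name must equal the rule name exactly
    [ "$resource_name" = "$rule_name" ]
  else
    # prefix: the resource name must start with the rule name
    case "$resource_name" in "$rule_name"*) return 0 ;; *) return 1 ;; esac
  fi
}
matches my-topic literal my-topic && echo "literal: my-topic matches"
matches my-      prefix  my-topic && echo "prefix: my-topic starts with my-"
```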
For more details about SimpleAclAuthorizer, its ACL rules and the allowed combinations of resources and operations, see Authorization and ACLs.
For more information about the AclRule object, see AclRule schema reference.
An example KafkaUser
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
      - resource:
          type: group
          name: my-group
          patternType: prefix
        operation: Read
6.8.3. Additional resources
- For more information about the KafkaUser object, see KafkaUser schema reference.
- For more information about TLS Client Authentication, see Section 6.2, “Mutual TLS authentication for clients”.
- For more information about the SASL SCRAM-SHA-512 authentication, see Section 6.4, “SCRAM-SHA authentication”.
