How to configure Kafka notifications in OCS
Environment
OCS 4.x
Issue
How to configure Kafka notifications in OCS
Resolution
- Prerequisite: Install the Strimzi operator from OperatorHub (it is an upstream operator):
NAME DISPLAY VERSION REPLACES PHASE
strimzi-cluster-operator.v0.21.1 Strimzi 0.21.1 strimzi-cluster-operator.v0.20.1 Succeeded
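The Strimzi operator deploys Kafka through a Kafka custom resource. A minimal cluster definition looks roughly like the sketch below; the apiVersion depends on the installed Strimzi release (check `oc api-resources | grep kafka` for the version your operator serves), and the name and namespace here are illustrative assumptions:

```yaml
apiVersion: kafka.strimzi.io/v1beta2   # may be v1beta1 on older Strimzi releases
kind: Kafka
metadata:
  name: my-cluster        # assumed name; matches the services shown below
  namespace: kafka-test   # assumed namespace
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral     # ephemeral storage for a test cluster only
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}     # needed so KafkaTopic resources are reconciled
```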
- Enable RGW in your environment if it is not already present (on some platforms, such as AWS, it is not deployed by default):
$ oc project openshift-storage
$ oc get pods|grep rgw
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-6bc6fdddkwhb 1/1 Running 0 6d3h
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-b-55798b6qx66m 1/1 Running 0 6d3h
- Create a Kafka cluster and a Kafka topic (you can use the Strimzi operator)
- Created a Kafka cluster named kafka-cluster from the Kafka API (from the OCP dashboard)
- Created a topic from the OCP dashboard, but it did not reach the Ready status. Later created a topic named test2 from the Kafka UI; from the OCP dashboard, it shows as Ready.
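With Strimzi, a topic can also be declared as a KafkaTopic custom resource. A common reason a topic never reaches Ready is a `strimzi.io/cluster` label that does not match the Kafka cluster name, since the topic operator ignores topics without a matching label. An illustrative sketch (name, namespace, and sizing are assumptions):

```yaml
apiVersion: kafka.strimzi.io/v1beta2   # match the version served by your Strimzi operator
kind: KafkaTopic
metadata:
  name: test2
  namespace: kafka-test
  labels:
    strimzi.io/cluster: my-cluster   # must match the Kafka cluster's metadata.name
spec:
  partitions: 1
  replicas: 1
```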
- To configure the Kafka UI, create the kafdrop dc, route, and service using a YAML file (kafdrop.yaml attached)
Change the value of the KAFKA_BROKERCONNECT parameter in the YAML. The value should be your Kafka bootstrap service (this service is created during creation of the Kafka cluster):
$ oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-cluster-kafka-bootstrap ClusterIP 172.30.202.25 <none> 9091/TCP,9092/TCP,9093/TCP 41m <----
my-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP,9093/TCP 41m
my-cluster-zookeeper-client ClusterIP 172.30.255.171 <none> 2181/TCP 42m
my-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 42m
The changes look like this in the YAML:
env:
- name: KAFKA_BROKERCONNECT
value: "my-cluster-kafka-bootstrap:9092"
$ oc create -f kafdrop.yaml
This will create the dc, route, and service.
$ oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafdrop ClusterIP 172.30.88.102 <none> 9000/TCP 25m
[..]
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
kafdrop kafdrop-kafka-test.apps.dkochuka-ocssdspnq.ceeindia.support kafdrop 9000 None
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
kafdrop 1 1 1 config
The kafdrop route serves as the Kafka UI, where you can check notifications for topics.
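Since the attached kafdrop.yaml is not reproduced in this article, the sketch below shows what such a file could look like; the image tag, labels, and object names are illustrative assumptions, not the contents of the attachment:

```yaml
# Illustrative sketch of a kafdrop deployment; adjust names and image as needed.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: kafdrop
spec:
  replicas: 1
  selector:
    app: kafdrop
  template:
    metadata:
      labels:
        app: kafdrop
    spec:
      containers:
        - name: kafdrop
          image: obsidiandynamics/kafdrop:latest   # assumed upstream image
          ports:
            - containerPort: 9000
          env:
            - name: KAFKA_BROKERCONNECT
              value: "my-cluster-kafka-bootstrap:9092"
---
apiVersion: v1
kind: Service
metadata:
  name: kafdrop
spec:
  selector:
    app: kafdrop
  ports:
    - port: 9000
      targetPort: 9000
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: kafdrop
spec:
  to:
    kind: Service
    name: kafdrop
  port:
    targetPort: 9000
```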
- Create a bucket
- Fetch the access key and secret key from the rook-ceph object user secret:
$ oc get secrets rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user -o yaml
- Create the bucket with the command below:
AWS_ACCESS_KEY_ID=VM6A8N5O9Q2KWJVO73BW AWS_SECRET_ACCESS_KEY=mLjqfzWGlawzyuwEwoVV8sJtggNPUCj6k6xXMqnh aws --endpoint http://test-route-openshift-storage.apps.sdsupi.ocp.gsslab.pnq2.redhat.com s3api create-bucket --bucket test-bucket
- List the buckets:
AWS_ACCESS_KEY_ID=VM6A8N5O9Q2KWJVO73BW AWS_SECRET_ACCESS_KEY=mLjqfzWGlawzyuwEwoVV8sJtggNPUCj6k6xXMqnh aws --endpoint http://test-route-openshift-storage.apps.sdsupi.ocp.gsslab.pnq2.redhat.com s3api list-buckets
{
"Buckets": [
{
"Name": "nb.1616137932643.apps.sdsupi.ocp.gsslab.pnq2.redhat.com",
"CreationDate": "2021-03-19T07:12:12.663Z"
},
{
"Name": "test-bucket",
"CreationDate": "2021-03-25T07:39:32.893Z"
}
],
"Owner": {
"DisplayName": "my display name",
"ID": "noobaa-ceph-objectstore-user"
}
}
- Create a notification configuration for ObjectPut events on this bucket with the defined topic
Using notify.py from https://github.com/shonpaz123/notify/blob/master/notify.py, create the notification configuration:
$ python notify.py -e <replace with route to RGW> -a <replace with access key> -s <replace with secret key> -b <replace with bucket name> -ke <replace with service name> -t <replace with topic name>
$ python notify.py -e http://test-route-openshift-storage.apps.sdsupi.ocp.gsslab.pnq2.redhat.com -a VM6A8N5O9Q2KWJVO73BW -s mLjqfzWGlawzyuwEwoVV8sJtggNPUCj6k6xXMqnh -b nb.1616137932643.apps.sdsupi.ocp.gsslab.pnq2.redhat.com -ke kafka-cluster-kafka-bootstrap -t test2
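Conceptually, notify.py registers a topic on RGW whose push endpoint is a `kafka://` URI, then attaches an S3-style bucket notification configuration that references that topic. The sketch below only assembles the data structures involved so the shape of the request is visible; the helper names and the exact topic ARN format are illustrative assumptions, not part of notify.py:

```python
# Sketch of the payloads involved in an RGW bucket-notification setup.
# Helper names and the ARN string are hypothetical, for illustration only.

def build_push_endpoint(service: str, port: int = 9092) -> str:
    """RGW expects the Kafka destination as a kafka:// URI in the
    topic's push-endpoint attribute (cf. the error in the RGW logs below)."""
    return f"kafka://{service}:{port}"

def build_notification_config(topic_arn: str) -> dict:
    """S3-style bucket notification configuration, firing on
    object-creation (ObjectPut) events."""
    return {
        "TopicConfigurations": [
            {
                "Id": "kafka-notif",                # arbitrary configuration id
                "TopicArn": topic_arn,
                "Events": ["s3:ObjectCreated:*"],   # covers ObjectPut
            }
        ]
    }

endpoint = build_push_endpoint("kafka-cluster-kafka-bootstrap")
config = build_notification_config("arn:aws:sns:default::test2")  # assumed ARN shape
```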
- Create an object in the bucket
- Put an object (a file named date) in the bucket:
$ AWS_ACCESS_KEY_ID=VM6A8N5O9Q2KWJVO73BW AWS_SECRET_ACCESS_KEY=mLjqfzWGlawzyuwEwoVV8sJtggNPUCj6k6xXMqnh aws --endpoint http://test-route-openshift-storage.apps.sdsupi.ocp.gsslab.pnq2.redhat.com s3api put-object --bucket nb.1616137932643.apps.sdsupi.ocp.gsslab.pnq2.redhat.com --key ./date --body date
{
"ETag": "\"62d948162d54ad866e7fd484879542e5\""
}
- List the objects in the bucket:
AWS_ACCESS_KEY_ID=VM6A8N5O9Q2KWJVO73BW AWS_SECRET_ACCESS_KEY=mLjqfzWGlawzyuwEwoVV8sJtggNPUCj6k6xXMqnh aws --endpoint http://test-route-openshift-storage.apps.sdsupi.ocp.gsslab.pnq2.redhat.com s3api list-objects --bucket nb.1616137932643.apps.sdsupi.ocp.gsslab.pnq2.redhat.com
{
"Contents": [
{
"Key": "./date",
"LastModified": "2021-03-25T12:48:54.022Z",
"ETag": "\"62d948162d54ad866e7fd484879542e5\"",
"Size": 29,
"StorageClass": "STANDARD",
"Owner": {
"DisplayName": "my display name",
"ID": "noobaa-ceph-objectstore-user"
}
},
{
"Key": "noobaa_blocks/60544ed0984e9700232350ae/blocks_tree/other.blocks/_test_store_perf",
"LastModified": "2021-03-25T12:06:48.351Z",
"ETag": "\"244b7be1251dc46628725221089c6589\"",
"Size": 1024,
"StorageClass": "STANDARD",
"Owner": {
"DisplayName": "my display name",
"ID": "noobaa-ceph-objectstore-user"
}
}
]
}
- Verify in the RGW logs that the message has been pushed without errors:
- Go to the Kafka console (route: kafdrop-kafka-test.apps.dkochuka-ocssdspnq.ceeindia.support in this example), select the topic, and check the messages under the partition: no messages observed
- Check the RGW logs: the message below is observed repeatedly:
debug 2021-03-25 12:23:04.408 7f69aa597700 1 ERROR: failed to create push endpoint: kafka://kafka-cluster-kafka-bootstrap due to: pubsub endpoint configuration error: unknown schema in: kafka://kafka-cluster-kafka-bootstrap
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.