Kafka Message Size Too Large Error in RHOCP 4

Solution Verified

Issue

  • The Kafka broker accepts messages up to 10 MB
  • The Vector collector reports a MessageSizeTooLarge error for the Kafka output
  • The error message in the collector states: Message size too large
  • The Vector Kafka sink is affected
  • The service call fails with no retries, or retries are exhausted
  • Even when the tuning option maxWrite is set, MessageSizeTooLarge is still observed in the collector for the Kafka output
  • When Fluentd is used as the collector, the messages are delivered without errors
  • The Vector collector shows the error:

    2025-08-25T22:52:12.077585Z ERROR sink{component_kind="sink" component_id=output_kafka_app component_type=kafka}: vector_common::internal_event::service: Service call failed. No retries or retries exhausted. error=Some(KafkaError (Message production error: MessageSizeTooLarge (Broker: Message size too large))) request_id=4 error_type="request_failed" stage="sending" internal_log_rate_limit=true
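
A minimal sketch of the tuning scenario described above, assuming the ClusterLogForwarder API from RHOL 5.x (the output name, Kafka URL, topic, and maxWrite value are illustrative placeholders, not the verified solution):

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      outputs:
        - name: kafka-app
          type: kafka
          url: tls://kafka.example.com:9092/app-topic
          # Assumed tuning stanza: maxWrite caps the payload size the collector
          # sends per write; the broker-side limit (message.max.bytes) is
          # enforced independently and can still reject oversized messages
          tuning:
            maxWrite: 10M
      pipelines:
        - name: app-logs
          inputRefs:
            - application
          outputRefs:
            - kafka-app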
    

Environment

  • Red Hat OpenShift Container Platform (RHOCP)
    • 4
  • Red Hat OpenShift Logging (RHOL)
    • 5
    • 6
  • Vector
