public class KafkaConsumerImpl<K,V> extends Object implements KafkaConsumer<K,V>
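`KafkaConsumerImpl` backs the `KafkaConsumer` interface; application code normally obtains an instance through the `KafkaConsumer.create` factory rather than constructing this class directly. A minimal sketch (the broker address, group id, and deserializer choices below are illustrative placeholders, and a `Vertx` instance is assumed to exist):

```java
import java.util.HashMap;
import java.util.Map;
import io.vertx.core.Vertx;
import io.vertx.kafka.client.consumer.KafkaConsumer;

// Sketch: configuration uses standard Kafka consumer properties.
Map<String, String> config = new HashMap<>();
config.put("bootstrap.servers", "localhost:9092");
config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
config.put("group.id", "my-group");
config.put("auto.offset.reset", "earliest");
config.put("enable.auto.commit", "false");

// The returned consumer is backed by a KafkaReadStream, which this class wraps.
KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);
```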
| Constructor and Description |
|---|
| `KafkaConsumerImpl(KafkaReadStream<K,V> stream)` |
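Besides the push-style record handler, the class exposes a pull-style `poll`. A hedged sketch, assuming a `consumer` created elsewhere (the accessor names `size()` and `recordAt(int)` on `KafkaConsumerRecords` are taken from the Vert.x Kafka client API):

```java
// Pull-style consumption: poll explicitly instead of installing a handler.
consumer.pollTimeout(100); // timeout (in ms) for the underlying native consumer's poll

consumer.poll(1000, ar -> {
  if (ar.succeeded()) {
    KafkaConsumerRecords<String, String> records = ar.result();
    // The batch can be empty if no data arrived within the timeout.
    for (int i = 0; i < records.size(); i++) {
      KafkaConsumerRecord<String, String> rec = records.recordAt(i);
      System.out.println(rec.topic() + "/" + rec.partition() + "@" + rec.offset());
    }
  } else {
    System.err.println("poll failed: " + ar.cause());
  }
});
```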
| Modifier and Type | Method and Description |
|---|---|
| `KafkaConsumer<K,V>` | `assign(Set<TopicPartition> topicPartitions)` Manually assign a list of partitions to this consumer. |
| `KafkaConsumer<K,V>` | `assign(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)` Manually assign a list of partitions to this consumer. |
| `KafkaConsumer<K,V>` | `assign(TopicPartition topicPartition)` Manually assign a partition to this consumer. |
| `KafkaConsumer<K,V>` | `assign(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)` Manually assign a partition to this consumer. |
| `KafkaConsumer<K,V>` | `assignment(Handler<AsyncResult<Set<TopicPartition>>> handler)` Get the set of partitions currently assigned to this consumer. |
| `KafkaReadStream<K,V>` | `asStream()` |
| `KafkaConsumer<K,V>` | `batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)` Set the handler to be used when batches of messages are fetched from the Kafka server. |
| `void` | `beginningOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)` Get the first offset for the given partitions. |
| `void` | `beginningOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)` Get the first offset for the given partition. |
| `void` | `close(Handler<AsyncResult<Void>> completionHandler)` Close the consumer. |
| `void` | `commit()` Commit current offsets for all subscribed topics and partitions. |
| `void` | `commit(Handler<AsyncResult<Void>> completionHandler)` Commit current offsets for all subscribed topics and partitions. |
| `void` | `commit(Map<TopicPartition,OffsetAndMetadata> offsets)` Commit the specified offsets for the specified list of topics and partitions to Kafka. |
| `void` | `commit(Map<TopicPartition,OffsetAndMetadata> offsets, Handler<AsyncResult<Map<TopicPartition,OffsetAndMetadata>>> completionHandler)` Commit the specified offsets for the specified list of topics and partitions to Kafka. |
| `void` | `committed(TopicPartition topicPartition, Handler<AsyncResult<OffsetAndMetadata>> handler)` Get the last committed offset for the given partition (whether the commit happened by this process or another). |
| `KafkaConsumer<K,V>` | `endHandler(Handler<Void> endHandler)` Set an end handler. |
| `void` | `endOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)` Get the last offset for the given partitions. |
| `void` | `endOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)` Get the last offset for the given partition. |
| `KafkaConsumer<K,V>` | `exceptionHandler(Handler<Throwable> handler)` Set an exception handler on the read stream. |
| `KafkaConsumer<K,V>` | `fetch(long amount)` Fetch the specified amount of elements. |
| `KafkaConsumer<K,V>` | `handler(Handler<KafkaConsumerRecord<K,V>> handler)` Set a data handler. |
| `KafkaConsumer<K,V>` | `listTopics(Handler<AsyncResult<Map<String,List<PartitionInfo>>>> handler)` Get metadata about partitions for all topics that the user is authorized to view. |
| `void` | `offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps, Handler<AsyncResult<Map<TopicPartition,OffsetAndTimestamp>>> handler)` Look up the offsets for the given partitions by timestamp. |
| `void` | `offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler)` Look up the offset for the given partition by timestamp. |
| `KafkaConsumer<K,V>` | `partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)` Set the handler called when topic partitions are assigned to the consumer. |
| `KafkaConsumer<K,V>` | `partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)` Get metadata about the partitions for a given topic. |
| `KafkaConsumer<K,V>` | `partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)` Set the handler called when topic partitions are revoked from the consumer. |
| `KafkaConsumer<K,V>` | `pause()` Pause the ReadStream: sets the buffer in fetch mode and clears the actual demand. |
| `KafkaConsumer<K,V>` | `pause(Set<TopicPartition> topicPartitions)` Suspend fetching from the requested partitions. |
| `KafkaConsumer<K,V>` | `pause(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)` Suspend fetching from the requested partitions. |
| `KafkaConsumer<K,V>` | `pause(TopicPartition topicPartition)` Suspend fetching from the requested partition. |
| `KafkaConsumer<K,V>` | `pause(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)` Suspend fetching from the requested partition. |
| `void` | `paused(Handler<AsyncResult<Set<TopicPartition>>> handler)` Get the set of partitions that were previously paused by a call to pause(Set). |
| `void` | `poll(long timeout, Handler<AsyncResult<KafkaConsumerRecords<K,V>>> handler)` Executes a poll for getting messages from Kafka. |
| `KafkaConsumer<K,V>` | `pollTimeout(long timeout)` Sets the poll timeout (in ms) for the underlying native Kafka consumer. |
| `void` | `position(TopicPartition partition, Handler<AsyncResult<Long>> handler)` Get the offset of the next record that will be fetched (if a record with that offset exists). |
| `KafkaConsumerImpl<K,V>` | `registerCloseHook()` |
| `KafkaConsumer<K,V>` | `resume()` Resume reading, setting the buffer in flowing mode. |
| `KafkaConsumer<K,V>` | `resume(Set<TopicPartition> topicPartitions)` Resume the specified partitions, which have been paused with pause. |
| `KafkaConsumer<K,V>` | `resume(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)` Resume the specified partitions, which have been paused with pause. |
| `KafkaConsumer<K,V>` | `resume(TopicPartition topicPartition)` Resume the specified partition, which has been paused with pause. |
| `KafkaConsumer<K,V>` | `resume(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)` Resume the specified partition, which has been paused with pause. |
| `KafkaConsumer<K,V>` | `seek(TopicPartition topicPartition, long offset)` Overrides the fetch offset that the consumer will use on the next poll. |
| `KafkaConsumer<K,V>` | `seek(TopicPartition topicPartition, long offset, Handler<AsyncResult<Void>> completionHandler)` Overrides the fetch offset that the consumer will use on the next poll. |
| `KafkaConsumer<K,V>` | `seekToBeginning(Set<TopicPartition> topicPartitions)` Seek to the first offset for each of the given partitions. |
| `KafkaConsumer<K,V>` | `seekToBeginning(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)` Seek to the first offset for each of the given partitions. |
| `KafkaConsumer<K,V>` | `seekToBeginning(TopicPartition topicPartition)` Seek to the first offset for the given partition. |
| `KafkaConsumer<K,V>` | `seekToBeginning(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)` Seek to the first offset for the given partition. |
| `KafkaConsumer<K,V>` | `seekToEnd(Set<TopicPartition> topicPartitions)` Seek to the last offset for each of the given partitions. |
| `KafkaConsumer<K,V>` | `seekToEnd(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)` Seek to the last offset for each of the given partitions. |
| `KafkaConsumer<K,V>` | `seekToEnd(TopicPartition topicPartition)` Seek to the last offset for the given partition. |
| `KafkaConsumer<K,V>` | `seekToEnd(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)` Seek to the last offset for the given partition. |
| `KafkaConsumer<K,V>` | `subscribe(Pattern pattern)` Subscribe to all topics matching the specified pattern to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscribe(Pattern pattern, Handler<AsyncResult<Void>> completionHandler)` Subscribe to all topics matching the specified pattern to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscribe(Set<String> topics)` Subscribe to the given list of topics to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscribe(Set<String> topics, Handler<AsyncResult<Void>> completionHandler)` Subscribe to the given list of topics to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscribe(String topic)` Subscribe to the given topic to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscribe(String topic, Handler<AsyncResult<Void>> completionHandler)` Subscribe to the given topic to get dynamically assigned partitions. |
| `KafkaConsumer<K,V>` | `subscription(Handler<AsyncResult<Set<String>>> handler)` Get the current subscription. |
| `KafkaConsumer<K,V>` | `unsubscribe()` Unsubscribe from topics currently subscribed with subscribe. |
| `KafkaConsumer<K,V>` | `unsubscribe(Handler<AsyncResult<Void>> completionHandler)` Unsubscribe from topics currently subscribed with subscribe. |
| `org.apache.kafka.clients.consumer.Consumer<K,V>` | `unwrap()` |
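Taken together, the stream-style methods above support the usual subscribe-and-handle loop. A hedged sketch, assuming a `consumer` created elsewhere (the topic name `my-topic` is a placeholder):

```java
// Report failures of the underlying stream.
consumer.exceptionHandler(err -> System.err.println("consumer error: " + err));

// Record handler: called for each record as it is fetched.
consumer.handler(record ->
    System.out.println("key=" + record.key() + " value=" + record.value() +
        " partition=" + record.partition() + " offset=" + record.offset()));

// Subscribe with a completion handler to observe success or failure.
consumer.subscribe("my-topic", ar -> {
  if (ar.succeeded()) {
    System.out.println("subscribed");
  } else {
    System.err.println("subscription failed: " + ar.cause());
  }
});
```

Installing the handlers before subscribing avoids dropping records that arrive between subscription and handler registration.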
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait

Methods inherited from interface KafkaConsumer: close, create

Methods inherited from interface ReadStream: pipe, pipeTo

Constructor Detail

public KafkaConsumerImpl(KafkaReadStream<K,V> stream)

Method Detail

public KafkaConsumerImpl<K,V> registerCloseHook()
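The pause/resume/seek trio documented below gives per-partition flow control. A hedged sketch, assuming a `consumer` created elsewhere; `io.vertx.kafka.client.common.TopicPartition` uses fluent setters, and the topic name is a placeholder:

```java
// Flow control: stop fetching from one partition, rewind it, then resume.
TopicPartition tp = new TopicPartition().setTopic("my-topic").setPartition(0);

consumer.pause(tp, pauseRes -> {
  if (pauseRes.succeeded()) {
    // Rewind to a known offset; due to internal buffering the record handler
    // may still observe already-fetched messages for a short time.
    consumer.seek(tp, 0L, seekRes ->
        consumer.resume(tp, resumeRes -> System.out.println("resumed at offset 0")));
  }
});
```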
public KafkaConsumer<K,V> exceptionHandler(Handler<Throwable> handler)
Set an exception handler on the read stream.
Specified by: exceptionHandler in interface ReadStream<KafkaConsumerRecord<K,V>>, StreamBase, and KafkaConsumer<K,V>
Parameters:
handler - the exception handler

public KafkaConsumer<K,V> handler(Handler<KafkaConsumerRecord<K,V>> handler)
Set a data handler.
Specified by: handler in interface ReadStream<KafkaConsumerRecord<K,V>> and KafkaConsumer<K,V>

public KafkaConsumer<K,V> pause()
Pause the ReadStream: sets the buffer in fetch mode and clears the actual demand. While paused, no data will be sent to the data handler.
Specified by: pause in interface ReadStream<KafkaConsumerRecord<K,V>> and KafkaConsumer<K,V>

public KafkaConsumer<K,V> resume()
Resume reading, setting the buffer in flowing mode. If the ReadStream has been paused, reading will recommence.
Specified by: resume in interface ReadStream<KafkaConsumerRecord<K,V>> and KafkaConsumer<K,V>

public KafkaConsumer<K,V> fetch(long amount)
Fetch the specified amount of elements. If the ReadStream has been paused, reading will recommence with the specified amount of items; otherwise the specified amount will be added to the current stream demand.
Specified by: fetch in interface ReadStream<KafkaConsumerRecord<K,V>> and KafkaConsumer<K,V>

public KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions)

Suspend fetching from the requested partitions.
Specified by: pause in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions from which to suspend fetching

public KafkaConsumer<K,V> pause(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partition.
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartition until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will not see messages from the given topicPartition.
Specified by: pause in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition from which to suspend fetching
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> pause(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Suspend fetching from the requested partitions.
Due to internal buffering of messages, the record handler will continue to observe messages from the given topicPartitions until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will not see messages from the given topicPartitions.
Specified by: pause in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions from which to suspend fetching
completionHandler - handler called when the operation completes

public void paused(Handler<AsyncResult<Set<TopicPartition>>> handler)
Get the set of partitions that were previously paused by a call to pause(Set).
Specified by: paused in interface KafkaConsumer<K,V>
Parameters:
handler - handler called when the operation completes

public KafkaConsumer<K,V> resume(TopicPartition topicPartition)

Resume the specified partition, which has been paused with pause.
Specified by: resume in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition from which to resume fetching

public KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions)

Resume the specified partitions, which have been paused with pause.
Specified by: resume in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions from which to resume fetching

public KafkaConsumer<K,V> resume(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)

Resume the specified partition, which has been paused with pause.
Specified by: resume in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition from which to resume fetching
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> resume(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)

Resume the specified partitions, which have been paused with pause.
Specified by: resume in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions from which to resume fetching
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> endHandler(Handler<Void> endHandler)

Set an end handler.
Specified by: endHandler in interface ReadStream<KafkaConsumerRecord<K,V>> and KafkaConsumer<K,V>

public KafkaConsumer<K,V> subscribe(String topic)

Subscribe to the given topic to get dynamically assigned partitions.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
topic - topic to subscribe to

public KafkaConsumer<K,V> subscribe(Set<String> topics)

Subscribe to the given list of topics to get dynamically assigned partitions.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
topics - topics to subscribe to

public KafkaConsumer<K,V> subscribe(String topic, Handler<AsyncResult<Void>> completionHandler)
Subscribe to the given topic to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topic the old topic may remain in effect (as observed by the KafkaConsumer.handler(Handler) record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new topic.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
topic - topic to subscribe to
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> subscribe(Set<String> topics, Handler<AsyncResult<Void>> completionHandler)
Subscribe to the given list of topics to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the KafkaConsumer.handler(Handler) record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new set of topics.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
topics - topics to subscribe to
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> subscribe(Pattern pattern)
Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
pattern - Pattern to subscribe to

public KafkaConsumer<K,V> subscribe(Pattern pattern, Handler<AsyncResult<Void>> completionHandler)

Subscribe to all topics matching the specified pattern to get dynamically assigned partitions.
Due to internal buffering of messages, when changing the subscribed topics the old set of topics may remain in effect (as observed by the KafkaConsumer.handler(Handler) record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new set of topics.
Specified by: subscribe in interface KafkaConsumer<K,V>
Parameters:
pattern - Pattern to subscribe to
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> assign(TopicPartition topicPartition)
Manually assign a partition to this consumer.
Specified by: assign in interface KafkaConsumer<K,V>
Parameters:
topicPartition - partition to be assigned

public KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions)

Manually assign a list of partitions to this consumer.
Specified by: assign in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - partitions to be assigned

public KafkaConsumer<K,V> assign(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)

Manually assign a partition to this consumer.
Due to internal buffering of messages, when reassigning, the old partition may remain in effect (as observed by the KafkaConsumer.handler(Handler) record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new partition.
Specified by: assign in interface KafkaConsumer<K,V>
Parameters:
topicPartition - partition to be assigned
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> assign(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Manually assign a list of partitions to this consumer.
Due to internal buffering of messages, when reassigning, the old set of partitions may remain in effect (as observed by the KafkaConsumer.handler(Handler) record handler) until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new set of partitions.
Specified by: assign in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - partitions to be assigned
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> assignment(Handler<AsyncResult<Set<TopicPartition>>> handler)
Get the set of partitions currently assigned to this consumer.
Specified by: assignment in interface KafkaConsumer<K,V>
Parameters:
handler - handler called when the operation completes

public KafkaConsumer<K,V> listTopics(Handler<AsyncResult<Map<String,List<PartitionInfo>>>> handler)

Get metadata about partitions for all topics that the user is authorized to view.
Specified by: listTopics in interface KafkaConsumer<K,V>
Parameters:
handler - handler called when the operation completes

public KafkaConsumer<K,V> unsubscribe()

Unsubscribe from topics currently subscribed with subscribe.
Specified by: unsubscribe in interface KafkaConsumer<K,V>

public KafkaConsumer<K,V> unsubscribe(Handler<AsyncResult<Void>> completionHandler)

Unsubscribe from topics currently subscribed with subscribe.
Specified by: unsubscribe in interface KafkaConsumer<K,V>
Parameters:
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> subscription(Handler<AsyncResult<Set<String>>> handler)

Get the current subscription.
Specified by: subscription in interface KafkaConsumer<K,V>
Parameters:
handler - handler called when the operation completes

public KafkaConsumer<K,V> pause(TopicPartition topicPartition)

Suspend fetching from the requested partition.
Specified by: pause in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition from which to suspend fetching

public KafkaConsumer<K,V> partitionsRevokedHandler(Handler<Set<TopicPartition>> handler)

Set the handler called when topic partitions are revoked from the consumer.
Specified by: partitionsRevokedHandler in interface KafkaConsumer<K,V>
Parameters:
handler - handler called on revoked topic partitions

public KafkaConsumer<K,V> partitionsAssignedHandler(Handler<Set<TopicPartition>> handler)

Set the handler called when topic partitions are assigned to the consumer.
Specified by: partitionsAssignedHandler in interface KafkaConsumer<K,V>
Parameters:
handler - handler called on assigned topic partitions

public KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset)

Overrides the fetch offset that the consumer will use on the next poll.
Specified by: seek in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek
offset - offset to seek to inside the topic partition

public KafkaConsumer<K,V> seek(TopicPartition topicPartition, long offset, Handler<AsyncResult<Void>> completionHandler)
Overrides the fetch offset that the consumer will use on the next poll.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new offset.
Specified by: seek in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek
offset - offset to seek to inside the topic partition
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition)

Seek to the first offset for the given partition.
Specified by: seekToBeginning in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek

public KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions)

Seek to the first offset for each of the given partitions.
Specified by: seekToBeginning in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions for which to seek

public KafkaConsumer<K,V> seekToBeginning(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for the given partition.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new offset.
Specified by: seekToBeginning in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> seekToBeginning(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Seek to the first offset for each of the given partitions.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new offset.
Specified by: seekToBeginning in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions for which to seek
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition)

Seek to the last offset for the given partition.
Specified by: seekToEnd in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek

public KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions)

Seek to the last offset for each of the given partitions.
Specified by: seekToEnd in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions for which to seek

public KafkaConsumer<K,V> seekToEnd(TopicPartition topicPartition, Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for the given partition.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new offset.
Specified by: seekToEnd in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to seek
completionHandler - handler called when the operation completes

public KafkaConsumer<K,V> seekToEnd(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Void>> completionHandler)
Seek to the last offset for each of the given partitions.
Due to internal buffering of messages, the record handler will continue to observe messages fetched with respect to the old offset until some time after the given completionHandler is called. In contrast, once the given completionHandler is called, the KafkaConsumer.batchHandler(Handler) will only see messages consistent with the new offset.
Specified by: seekToEnd in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - topic partitions for which to seek
completionHandler - handler called when the operation completes

public void commit()
Commit current offsets for all subscribed topics and partitions.
Specified by: commit in interface KafkaConsumer<K,V>

public void commit(Handler<AsyncResult<Void>> completionHandler)

Commit current offsets for all subscribed topics and partitions.
Specified by: commit in interface KafkaConsumer<K,V>
Parameters:
completionHandler - handler called when the operation completes

public void commit(Map<TopicPartition,OffsetAndMetadata> offsets)

Commit the specified offsets for the specified list of topics and partitions to Kafka.
Specified by: commit in interface KafkaConsumer<K,V>
Parameters:
offsets - list of offsets to commit

public void commit(Map<TopicPartition,OffsetAndMetadata> offsets, Handler<AsyncResult<Map<TopicPartition,OffsetAndMetadata>>> completionHandler)

Commit the specified offsets for the specified list of topics and partitions to Kafka.
Specified by: commit in interface KafkaConsumer<K,V>
Parameters:
offsets - list of offsets to commit
completionHandler - handler called when the operation completes

public void committed(TopicPartition topicPartition, Handler<AsyncResult<OffsetAndMetadata>> handler)

Get the last committed offset for the given partition (whether the commit happened by this process or another).
Specified by: committed in interface KafkaConsumer<K,V>
Parameters:
topicPartition - topic partition for which to get the last committed offset
handler - handler called when the operation completes

public KafkaConsumer<K,V> partitionsFor(String topic, Handler<AsyncResult<List<PartitionInfo>>> handler)

Get metadata about the partitions for a given topic.
Specified by: partitionsFor in interface KafkaConsumer<K,V>
Parameters:
topic - topic for which to get partition info
handler - handler called when the operation completes

public void close(Handler<AsyncResult<Void>> completionHandler)

Close the consumer.
Specified by: close in interface KafkaConsumer<K,V>
Parameters:
completionHandler - handler called when the operation completes

public void position(TopicPartition partition, Handler<AsyncResult<Long>> handler)

Get the offset of the next record that will be fetched (if a record with that offset exists).
Specified by: position in interface KafkaConsumer<K,V>
Parameters:
partition - the partition to get the position for
handler - handler called when the operation completes

public void offsetsForTimes(TopicPartition topicPartition, Long timestamp, Handler<AsyncResult<OffsetAndTimestamp>> handler)

Look up the offset for the given partition by timestamp.
Specified by: offsetsForTimes in interface KafkaConsumer<K,V>
Parameters:
topicPartition - TopicPartition to query
timestamp - timestamp to be used in the query
handler - handler called when the operation completes

public void offsetsForTimes(Map<TopicPartition,Long> topicPartitionTimestamps, Handler<AsyncResult<Map<TopicPartition,OffsetAndTimestamp>>> handler)

Look up the offsets for the given partitions by timestamp.
Specified by: offsetsForTimes in interface KafkaConsumer<K,V>
Parameters:
topicPartitionTimestamps - a map with pairs of (TopicPartition, timestamp)
handler - handler called when the operation completes

public void beginningOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)
Get the first offset for the given partitions.
Specified by: beginningOffsets in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - the partitions for which to get the earliest offsets
handler - handler called when the operation completes; returns the earliest available offsets for the given partitions

public void beginningOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)

Get the first offset for the given partition.
Specified by: beginningOffsets in interface KafkaConsumer<K,V>
Parameters:
topicPartition - the partition for which to get the earliest offset
handler - handler called when the operation completes; returns the earliest available offset for the given partition

public void endOffsets(Set<TopicPartition> topicPartitions, Handler<AsyncResult<Map<TopicPartition,Long>>> handler)

Get the last offset for the given partitions.
Specified by: endOffsets in interface KafkaConsumer<K,V>
Parameters:
topicPartitions - the partitions for which to get the end offsets
handler - handler called when the operation completes; returns the end offsets for the given partitions

public void endOffsets(TopicPartition topicPartition, Handler<AsyncResult<Long>> handler)

Get the last offset for the given partition.
Specified by: endOffsets in interface KafkaConsumer<K,V>
Parameters:
topicPartition - the partition for which to get the end offset
handler - handler called when the operation completes; returns the end offset for the given partition

public KafkaReadStream<K,V> asStream()

Specified by: asStream in interface KafkaConsumer<K,V>
Returns: the underlying KafkaReadStream instance

public org.apache.kafka.clients.consumer.Consumer<K,V> unwrap()

Specified by: unwrap in interface KafkaConsumer<K,V>

public KafkaConsumer<K,V> batchHandler(Handler<KafkaConsumerRecords<K,V>> handler)

Set the handler to be used when batches of messages are fetched from the Kafka server.
Specified by: batchHandler in interface KafkaConsumer<K,V>
Parameters:
handler - handler called when batches of messages are fetched

public KafkaConsumer<K,V> pollTimeout(long timeout)

Sets the poll timeout (in ms) for the underlying native Kafka consumer.
Specified by: pollTimeout in interface KafkaConsumer<K,V>
Parameters:
timeout - the time, in milliseconds, spent waiting in poll if data is not available in the buffer. If 0, the poll returns immediately with any records currently available in the native Kafka consumer's buffer, else it returns empty. Must not be negative.

public void poll(long timeout, Handler<AsyncResult<KafkaConsumerRecords<K,V>>> handler)
Executes a poll for getting messages from Kafka.
Specified by: poll in interface KafkaConsumer<K,V>
Parameters:
timeout - the time, in milliseconds, spent waiting in poll if data is not available in the buffer. If 0, the poll returns immediately with any records currently available in the native Kafka consumer's buffer, else it returns empty. Must not be negative.
handler - handler called after the poll, with the batch of records (can be empty)

Copyright © 2020. All rights reserved.