Questions About Moving From Sarama To Confluent Kafka Go #775
ianjhoffman asked this question in Q&A (Unanswered). Replies: 1 comment, 3 replies.
- Also consider enabling idempotence, which provides strong message delivery guarantees at very little extra cost. See https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#idempotent-producer
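As a rough illustration of the suggestion above, idempotence is a single configuration key in confluent-kafka-go. This is a minimal sketch, not a complete program; the broker address is a placeholder, and note that enabling idempotence implies `acks=all` and a bounded number of in-flight requests, which librdkafka enforces for you.

```go
// Configuration fragment (confluent-kafka-go style). "localhost:9092"
// is a placeholder broker address.
p, err := kafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers": "localhost:9092",
	// With idempotence enabled, librdkafka assigns sequence numbers so
	// the broker can deduplicate retried messages, preventing duplicates
	// and reordering on retry at very little extra cost.
	"enable.idempotence": true,
})
if err != nil {
	// handle configuration/creation error
}
defer p.Close()
```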
My team currently uses the `sarama` Go Kafka library as our Kafka client, but we are thinking of qualifying and switching to `confluent-kafka-go` due to issues with how `sarama` handles offline partitions and how it handles partition choice (with a random partitioner not requiring consistency) when a write fails. I have a few questions about how to replicate our current client's behavior, and about how `confluent-kafka-go` handles offline partitions.

1. The `librdkafka` configuration documentation describes the possible values for `request.required.acks`. The default behavior for `sarama` is to consider a commit a success once a single ack from the leader is received. Is that equivalent to `0` in `request.required.acks`? The phrase "Broker does not send any response/ack to client" is a bit ominous and almost sounds like even the leader won't send an ack.

2. If we use the `random` partitioner, will retries (as configured by `message.send.max.retries`) pick a new partition each time, or is the random partition picked only once per message?

3. The `sarama` `SyncProducer` allows us to have several goroutines all writing messages to one producer and learning about their writes' successes/failures synchronously. It wraps an async producer, much like the producer in `confluent-kafka-go`, and creates a new channel per write for success/failure delivery. Is this an acceptable pattern to use in a synchronous wrapper for the `confluent-kafka-go` producer?

4. Is there any information I can read about how `confluent-kafka-go`/`librdkafka` handles offline partitions? We've noticed that `sarama` does not do a great job of detecting/avoiding partitions that cannot be written to. In combination with pinning each message to the first partition picked by the producer's partitioner, this means that some messages are simply doomed even with retries.

Thank you! I look forward to hearing more about how we can make our Kafka client more resilient through the use of `confluent-kafka-go`.
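The per-write delivery-channel pattern asked about in question 3 can be sketched in isolation. The example below uses a toy async producer in place of `confluent-kafka-go` (which does accept a per-message delivery channel in its `Produce` call), so the names `asyncProducer`, `deliveryReport`, and `SyncSend` are illustrative, not library APIs. The point is the shape of the wrapper: each caller makes a fresh buffered channel, hands it to the async produce call, and blocks on it, so many goroutines can share one producer while each learns only its own result.

```go
package main

import "fmt"

// deliveryReport is a toy stand-in for a delivery event; in
// confluent-kafka-go this role is played by kafka.Event values
// carrying a *kafka.Message with its Message.TopicPartition.Error.
type deliveryReport struct {
	msg string
	err error
}

// asyncProducer simulates an asynchronous client: it reports the
// outcome of each write on the channel supplied with that write.
type asyncProducer struct{}

func (p *asyncProducer) Produce(msg string, deliveryChan chan deliveryReport) {
	go func() {
		// Pretend the broker acknowledged the write successfully.
		deliveryChan <- deliveryReport{msg: msg, err: nil}
	}()
}

// SyncSend wraps the async producer synchronously: one fresh channel
// per write, so concurrent callers never see each other's reports.
func SyncSend(p *asyncProducer, msg string) error {
	ch := make(chan deliveryReport, 1) // buffered: the producer never blocks on delivery
	p.Produce(msg, ch)
	r := <-ch // block until this write's own report arrives
	return r.err
}

func main() {
	p := &asyncProducer{}
	if err := SyncSend(p, "hello"); err == nil {
		fmt.Println("delivered")
	}
}
```

Because every write owns its channel, this avoids the fan-out/correlation problem of a single shared delivery channel, at the cost of one small channel allocation per message.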