changes about config.go #68
Conversation
…pdating producerbatchbytes type because kafka writer using int64.
…tch.go for int64 datatype
Thanks, LGTM. @mhmtszr take a look 👀
We can't disable infinite retry; if the user changes that logic, they can miss some events whenever an error happens on the Kafka side. We can discuss this.
Also, the user most probably won't know the usage of the batch timeout; if they increase it, it will override our custom batching. We may set it to 1 nanosecond.
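To make the batch-timeout point concrete, here is a minimal sketch of a segmentio/kafka-go writer configured that way. The field names are kafka-go's, but the helper and the chosen values only illustrate the option being discussed, not the project's final code:

```go
package producer

import (
	"math"
	"time"

	"github.com/segmentio/kafka-go"
)

// newWriter illustrates the two constraints from the comment above:
// retries stay effectively infinite, and the writer's own batch timeout is
// shrunk to 1 nanosecond so whatever the connector flushes is sent right
// away and the writer's timer never reshapes the connector's batches.
// This is a sketch of the discussed option, not the repository's code.
func newWriter(brokers []string) *kafka.Writer {
	return &kafka.Writer{
		Addr:         kafka.TCP(brokers...),
		MaxAttempts:  math.MaxInt,     // keep retrying instead of dropping events
		BatchTimeout: time.Nanosecond, // send connector-built batches immediately
	}
}
```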
config/config.go
Outdated
@@ -67,4 +69,12 @@ func (c *Connector) ApplyDefaults() {
	if c.Kafka.MetadataTTL == 0 {
		c.Kafka.MetadataTTL = 60 * time.Second
	}

	if c.Kafka.ProducerMaxAttempts == 0 {
		c.Kafka.ProducerMaxAttempts = 10
ProducerMaxAttempts needs to be math.MaxInt.
Also, we need to set ProducerBatchBytes to math.MaxInt.
@halilkocaoz I have a few comments that I just realized.
We also need to update the README.md according to these changes.
If ProducerBatchBytes works as it is when 0 is set, it can stay that way.
ProducerMaxAttempts needs to be math.MaxInt.

The default value of kafka.Writer.MaxAttempts is 10. Also, @mhmtszr mentioned it: should we use math.MaxInt for MaxAttempts?

Also, we need to set ProducerBatchBytes to math.MaxInt.

Its default value is 1048576. Should we use the default values or not? What do you think about it?
By default, ProducerMaxAttempts needs to be infinite; we can use math.MaxInt for that.
For ProducerBatchBytes, if the default value is 1048576, I am okay with that.
Basically, we need to preserve the old behaviors.
The ProducerMaxAttempts default value is now math.MaxInt. ProducerBatchBytes is configured with 10485760 as the default. Previously it was set statically to math.MaxInt, not from the ApplyDefaults() func.
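A rough sketch of the ApplyDefaults() shape this describes; the stand-in struct fields are inferred from the diff and the conversation, so treat the layout as an assumption rather than the project's exact code:

```go
package config

import "math"

// Minimal stand-in types so the sketch compiles; the real structs live in
// config/config.go and carry more fields.
type Kafka struct {
	ProducerMaxAttempts int
	ProducerBatchBytes  int64
}

type Connector struct {
	Kafka Kafka
}

// ApplyDefaults sketches the defaults discussed above: effectively infinite
// producer retries and a 10485760-byte batch when the user leaves them unset.
func (c *Connector) ApplyDefaults() {
	if c.Kafka.ProducerMaxAttempts == 0 {
		c.Kafka.ProducerMaxAttempts = math.MaxInt
	}
	if c.Kafka.ProducerBatchBytes == 0 {
		c.Kafka.ProducerBatchBytes = 10485760 // 10 MiB
	}
}
```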
@@ -180,9 +179,9 @@ func (c *client) Producer() *kafka.Writer {
		Addr:       kafka.TCP(c.config.Kafka.Brokers...),
		Balancer:   &kafka.Hash{},
		BatchSize:  c.config.Kafka.ProducerBatchSize,
		BatchBytes: math.MaxInt,
BatchBytes was previously set statically to math.MaxInt; now it gets its default value, 10485760, from the ApplyDefaults() func.
@halilkocaoz I think we should set it to math.MaxInt too, because we may send more requests while calculating the batches.
Example:
If we set ProducerBatchBytes to 1MB and our batch grows to 1.1MB, a flush will be triggered; on the segmentio side this 1.1MB request will be sent as two requests (because the segmentio batch size is 1MB). I think we need to disable the segmentio batch algorithm. What do you think?
@mhmtszr, is the kafka-go/writer.go writeMessages func what you mean by 'two requests'?
It would be better for me to let you decide; also, refactoring that might be complicated for me.
@halilkocaoz yes, that writeMessages function will separate our one request because of the segmentio batch algorithm. We can keep it at max, I think.
@halilkocaoz if you set max int for ProducerBatchBytes, we won't be able to batch messages (only with the ticker); we just need to set max int for the segmentio client's BatchBytes parameter. The default ProducerBatchBytes should still be 10485760.
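A sketch of how that split could look: the connector keeps flushing by its own ProducerBatchBytes (default 10485760), while the segmentio writer's BatchBytes is opened up so kafka-go never re-splits an already-flushed batch. The helper and its parameters are illustrative assumptions, not the project's real API:

```go
package producer

import (
	"math"

	"github.com/segmentio/kafka-go"
)

// newProducer sketches the agreed-upon outcome of the discussion above.
// The connector decides when a batch is full (by its own ProducerBatchBytes),
// and the writer's BatchBytes is math.MaxInt so kafka-go's batch algorithm
// never turns one flushed batch into extra produce requests.
func newProducer(brokers []string, producerBatchSize int) *kafka.Writer {
	return &kafka.Writer{
		Addr:       kafka.TCP(brokers...),
		Balancer:   &kafka.Hash{},
		BatchSize:  producerBatchSize,
		BatchBytes: math.MaxInt, // disable kafka-go's size-based re-batching
	}
}
```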
lgtm.
resolves #12