76% of time is in runtime.goexit
#141

Consuming is too slow compared to the Java client.
On the same machine, the Java client processes 250k snappy-encoded messages per second per thread (3 threads in total), but this client can only reach 30k at maximum. On a better machine this client can reach 70k per second.
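For context on the title: in a Go CPU profile, a large cumulative share attributed to runtime.goexit usually just reflects that it is the root frame of every goroutine's stack, so the time is spread across many goroutines rather than spent in any single hot function. Below is a minimal sketch of how such a profile can be captured with the standard library; the port and setup are illustrative assumptions, not this project's code:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve profiling endpoints alongside the running consumer.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the Kafka consumer here ...

	select {} // block forever; the real program would run its main loop
}
```

A 30-second CPU profile can then be pulled with `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` and inspected with `top -cum` to see which children of runtime.goexit actually burn the CPU.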
Comments
Hi! Unfortunately, the current implementation of Snappy and GZip encoding performed quite badly when tested on large data volumes due to inefficient CPU usage. The only way I see is to swap them for something that performs better, though I haven't seen any options available so far.
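If codec inefficiency is suspected, the decode path can be benchmarked in isolation with a standard Go benchmark. A minimal sketch using github.com/golang/snappy; the package choice and the 150-byte payload (matching the message size reported below) are assumptions for illustration:

```go
package snappybench

import (
	"bytes"
	"testing"

	"github.com/golang/snappy"
)

// BenchmarkSnappyDecode measures raw decode throughput for a
// 150-byte payload, roughly the message size reported in this issue.
func BenchmarkSnappyDecode(b *testing.B) {
	raw := bytes.Repeat([]byte("kafka-msg "), 15) // 150 bytes
	encoded := snappy.Encode(nil, raw)
	b.SetBytes(int64(len(raw)))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := snappy.Decode(nil, encoded); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench SnappyDecode -benchmem`; if the codec alone sustains throughput far above the observed consume rate, the bottleneck is elsewhere.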
One more thing worth mentioning here is that this consumer takes a completely different approach than the Java consumer, and this affects performance because of how workers and acknowledgements are handled. I agree there is room for optimization, but roughly speaking this consumer won't be as fast as the standard Java consumer.
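To make the architectural point concrete, here is a heavily simplified sketch of the worker-plus-acknowledgement pattern described above; the types and channel layout are illustrative assumptions, not this project's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

type message struct {
	offset int64
	value  []byte
}

// runWorkers fans messages out to nWorkers goroutines and sends one
// acknowledgement per message; this per-message round-trip is the kind
// of overhead a more batch-oriented design avoids.
func runWorkers(nWorkers int, msgs <-chan message, acks chan<- int64) {
	var wg sync.WaitGroup
	for i := 0; i < nWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range msgs {
				// ... process m.value ...
				acks <- m.offset // ack so offsets can be committed in order
			}
		}()
	}
	wg.Wait()
	close(acks)
}

func main() {
	msgs := make(chan message, 100)
	acks := make(chan int64, 100)
	go func() {
		for i := int64(0); i < 1000; i++ {
			msgs <- message{offset: i, value: []byte("payload")}
		}
		close(msgs)
	}()
	go runWorkers(4, msgs, acks)

	var acked int
	for range acks {
		acked++
	}
	fmt.Println("acked", acked, "messages")
}
```

Every channel send and receive here is a synchronization point; each is cheap on its own, but at hundreds of thousands of messages per second the per-message hops add up.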
@oliveagle Hmm, so you're saying siesta is doing 40k snappy-encoded messages/sec while the Java client does 250k? This definitely shouldn't happen. I could understand it happening at the go_kafka_client level, but definitely not at the siesta level. What is the average message size in the topic? Can you also check performance for non-compressed messages?
150 B per message. Compression is not the problem here; maybe too many goroutines are to blame.
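One quick way to test the too-many-goroutines hypothesis is to sample the goroutine count and dump the stacks while consuming; a minimal standard-library sketch:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	// ... start the consumer here ...

	// Sample the live goroutine count for ten seconds.
	for i := 0; i < 10; i++ {
		fmt.Println("goroutines:", runtime.NumGoroutine())
		time.Sleep(time.Second)
	}

	// Dump every goroutine's stack to see what they are blocked on.
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
}
```

A count that keeps growing, or thousands of goroutines parked on channel operations, would support the hypothesis and is consistent with runtime.goexit dominating a cumulative profile.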
@oliveagle OK, so if I understand you correctly, the performance problem appears at the siesta level and only with compressed messages?
@serejja, no,
@oliveagle We've just discussed this with Ivan and have an idea of how to fix it. Will try to create a PR for this soon. Thanks!
@oliveagle Please try PR #142 and tell us whether it solves your issue or not. Thanks!