Reduce goroutine storm #4
One remedy @keks and I thought about is buffering unhandled requests and draining them on a worker pool of static or scalable size. The idea is to still drain the incoming conn as fast as possible; otherwise we won't get answers to our own calls.
The above approach would stall on live queries. For this to be feasible we need some kind of switchboard that can accumulate live queries and serve multiple muxrpc sessions from a single worker. It would drain each query up to the latest entry and then hand it over to a live drain of the root log.
What about making a connection limit configurable and simply dropping/rejecting anything that exceeds it?
Yup, I thought about a simple token bucket limiter for new calls as well; that would certainly be easier to add to the existing code.
Right now there is no bound on incoming requests. Each new call starts a goroutine.
Especially for legacy ssb replication this means ~9k goroutines once the connection is established. Depending on the load of the remote party this happens within a couple of seconds, but I've also seen strained systems where this build-up takes nearly 15 minutes.