High idle CPU usage #378
Comments
I suppose it's a follow-up for cherrypy/cherrypy#1908, right?
Yep, at least this made me dig deeper into it. I recognised just now that it matches the symptoms exactly. Now I wonder whether we should merge those issues?
@MichaIng I'm going to wait for @Blindfreddy to confirm this, and if it's true, I could transfer his issue into Cheroot, maybe copy some details from this issue, and keep that one because it has more comments.
Yes, makes sense. Sorry, I should have appended the findings there to avoid splitting the conversation.
That's alright. Now we wait :)
Yes, we should check carefully what the restored
Interesting... We may consider making this:
I think checking the load would only increase the load and complexity. The goal should be to reduce the steps done within the loop to a minimum, not to add further magic. But that opinion is based on fairly limited insight. Here is the loop, which runs until a stop request. Within the loop, it collects and iterates through all connections, giving the
This means that server stop requests would wait indefinitely as long as no connection happens, if I'm not wrong. But the following note is interesting:
So I guess it would be possible to run the
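To make the loop shape under discussion concrete, here is a minimal sketch of a selectors-based manager that blocks in `select()` for up to the expiration interval and expires idle connections on each wakeup. All names here (`ConnectionManagerSketch`, `_handle`, `_expire`) are illustrative assumptions, not cheroot's actual API:

```python
import selectors
import time


class ConnectionManagerSketch:
    """Illustrative loop shape only; not cheroot's actual implementation."""

    def __init__(self, expiration_interval=0.5):
        self._selector = selectors.DefaultSelector()
        self._stop_requested = False
        self.expiration_interval = expiration_interval

    def run(self):
        while not self._stop_requested:
            # Block for at most expiration_interval. A hard-coded 0.01 s
            # timeout here would wake the loop ~100 times per second even
            # when the server is completely idle.
            events = self._selector.select(timeout=self.expiration_interval)
            for key, _mask in events:
                self._handle(key.fileobj)
            # Expiring inside the same loop keeps idle-connection cleanup
            # and stop requests responsive without a separate busy poll.
            self._expire(threshold=time.time() - self.expiration_interval)

    def _handle(self, sock):
        """Hand a ready connection over to the server (stub)."""

    def _expire(self, threshold):
        """Unregister and close connections idle since before threshold (stub)."""
```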
I previously submitted #352 for this, but there was an issue on Windows I was not able to reproduce.
I'll see if I'm able to set up a test environment on Windows. It's not exactly what I had in mind, but it makes sense to align it with the already present expiration timeout. But this means that
This is correct. The original default value of
I am not sure I understand: once the selector
Ah yes, that is true. Then it totally makes sense as you suggest in #352:
Now I see only one issue:
Theoretically, the remaining time until the next expiry could be stored and used as the select timeout, so that, aside from the little processing time, the sum of two consecutive select timeouts matches the expiration_interval at most. But then, the other question is whether there is any issue when connections are not "expired" (i.e. unregistered and closed) for a longer time. Due to my lack of in-depth insight: Is there any problem when old connections are not unregistered and closed until a new connection comes in, or after a much longer time, e.g. 300 seconds (the default on Apache2)? If an issue is seen when a sudden large number of new connections arrives while a large number of old connections has not yet been closed, then processing new ones and closing old ones could probably be merged into the same loop, to minimise the risk that both add up to reach a certain limit. Currently it's done in two separate loops through the same connection tuples, doing partly the same checks (
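For illustration, the "store the remaining time until expiry" idea could be a small helper like the following; `next_select_timeout` and `last_expire_at` are hypothetical names, not existing cheroot identifiers:

```python
import time


def next_select_timeout(last_expire_at, expiration_interval, floor=0.001):
    """Hypothetical helper: time left until the next expiration check is due.

    Passing this to select() means two consecutive select timeouts sum to
    at most roughly expiration_interval (plus processing time), as argued
    above, instead of always waiting the full interval again.
    """
    remaining = (last_expire_at + expiration_interval) - time.time()
    # Clamp to a small positive floor: select() must not get a negative
    # timeout, and an overdue expiration check should run almost at once.
    return max(remaining, floor)
```

The caller would then update `last_expire_at` whenever the expiration pass actually runs.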
This has now been merged here: #401. On Windows systems, sadly a capped timeout needs to stay, as the connection handler
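The exact reason the handler needs this cap on Windows is cut off above, but the conditional cap itself would presumably look something like the following sketch (the `windows_cap` value is chosen arbitrarily for illustration):

```python
import platform

IS_WINDOWS = platform.system() == 'Windows'


def effective_select_timeout(desired, windows_cap=0.05):
    """Cap the selector timeout on Windows only (illustrative sketch).

    On other platforms the desired (possibly long) timeout is passed
    through unchanged, so the idle loop stays cheap there.
    """
    return min(desired, windows_cap) if IS_WINDOWS else desired
```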
I'm submitting a ...
Describe the bug. What is the current behavior?
The idle CPU load increased significantly with #199 and was further multiplied with #308. The new connection manager implies a loop which significantly loads the CPU, compared to before. With the second PR, the timeout was reduced from 0.1 to 0.01, which implies a tenfold number of loop iterations and hence a multiplication of the CPU load. Now #311 addressed the regression of #199 in a better way, making the reduced timeout obsolete, as far as I understand. I just tried changing the 0.01 here back to 0.1, basically reverting #308, and indeed the CPU load goes back to more or less match the state of #199, or even lower.
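To see why the shorter timeout multiplies the idle load, a standalone measurement along these lines (my own sketch, independent of cheroot's code) counts how often an otherwise idle `select()` loop wakes up:

```python
import selectors
import socket
import time


def idle_wakeups_per_second(timeout, duration=2.0):
    """Count timeout-driven wakeups of an idle selector loop."""
    sel = selectors.DefaultSelector()
    # A listening socket that never receives connections, so every
    # wakeup below comes from the timeout alone.
    srv = socket.socket()
    srv.bind(('127.0.0.1', 0))
    srv.listen()
    sel.register(srv, selectors.EVENT_READ)
    wakeups = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        sel.select(timeout=timeout)  # idle: returns an empty event list
        wakeups += 1
    sel.close()
    srv.close()
    return wakeups / duration


print(idle_wakeups_per_second(0.01))  # roughly 100 wakeups/s
print(idle_wakeups_per_second(0.1))   # roughly 10 wakeups/s
```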
What is the motivation / use case for changing the behavior?
The absolute CPU load may be small, but especially on smaller SBCs this is quite an issue. To keep cheroot/CherryPy a great HTTP server for embedded devices, idle CPU usage should be reduced to a minimum. For reference: HTPC-Manager/HTPC-Manager#30
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expect a minimal CPU usage when the HTTP server is not accessed at all.
Details
No logs, I basically did it via `htop -d 10`. Not a very precise tool for measuring process CPU usage, but the differences are very significant when counting the seconds in which the CPU time in centiseconds rises.
Environment
Additional context
There are no errors involved, just the connection manager itself checking for idle connections (as far as I understand) very often, which causes the CPU usage.