Improve Documentation for PooledChannelConnectionFactory #1557
I can set ... But even worse, the application never recovers from an exhausted pool, no matter if I set ... At this point, I can only kill the application, which will also give the following exception:
Update: I can observe the same behaviour when I remove the routing/caching connection factory and just use a single `PooledChannelConnectionFactory`.
I tried using the ...
There is something strange going on; it shouldn't be creating new connections...
There is one shared connection... (lines 136 to 144 in 1d5d0c2)
Also, that stack trace is from creating the pool, not from trying to check out a channel; it doesn't look like it has anything to do with runtime...

```java
// Populate the creation stack trace
this.creationStackTrace = getStackTrace(new Exception()); // <<< line 407
```

Finally, this connection factory is not really designed for consumers, because they have long-lived channels. If used there, the channel pool size must be large enough for the number of containers multiplied by their concurrency (plus a few more for publishing operations - but it is better to use a different connection for consumers and producers). You can configure the pool size by adding a configurer (lines 96 to 101 in 1d5d0c2).
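For instance (a sketch along the lines of the reference documentation; the sizes and host are illustrative):

```java
import com.rabbitmq.client.ConnectionFactory;

import org.springframework.amqp.rabbit.connection.PooledChannelConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PoolSizeConfig {

    @Bean
    PooledChannelConnectionFactory pcf() {
        ConnectionFactory rabbit = new ConnectionFactory();
        rabbit.setHost("localhost");
        PooledChannelConnectionFactory pcf = new PooledChannelConnectionFactory(rabbit);
        pcf.setPoolConfigurer((pool, tx) -> {
            if (tx) {
                pool.setMaxTotal(10); // transactional channel pool
            }
            else {
                pool.setMaxTotal(50); // non-transactional channel pool
            }
            // by default commons-pool2 waits forever on an exhausted pool;
            // a bounded maxWait makes borrowers fail instead of hanging
            pool.setMaxWaitMillis(10_000);
        });
        return pcf;
    }
}
```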
Yes, I'm just monitoring/printing the pool stats to see how many channels are currently borrowed.
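For reference, I capture the pool in the configurer hook and print its counters (a sketch; the class and field names are just illustrative):

```java
import com.rabbitmq.client.Channel;

import org.apache.commons.pool2.impl.GenericObjectPool;
import org.springframework.amqp.rabbit.connection.PooledChannelConnectionFactory;

public class PoolStatsMonitor {

    private volatile GenericObjectPool<Channel> channelPool;

    public PoolStatsMonitor(PooledChannelConnectionFactory pcf) {
        // grab a reference to the non-transactional channel pool when it is configured
        pcf.setPoolConfigurer((pool, tx) -> {
            if (!tx) {
                this.channelPool = pool;
            }
        });
    }

    public void printStats() {
        System.out.printf("active=%d idle=%d borrowed=%d returned=%d max=%d%n",
                this.channelPool.getNumActive(), this.channelPool.getNumIdle(),
                this.channelPool.getBorrowedCount(), this.channelPool.getReturnedCount(),
                this.channelPool.getMaxTotal());
    }
}
```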
I can disable the consuming part (the `RabbitListener`) and still observe the same behaviour.
As I increase the load, more channels are borrowed from the pool, but they are never returned. If I stop all requests, the borrowed count stays the same. If the borrowed count reaches the maximum (8), the application hangs and never recovers, as the channels are never returned to the pool.
If I increase the pool size, wouldn't the problem still be the same, or am I mistaken? As load increases, channels are borrowed and never returned, so at some point the pool is exhausted. We just tried it with a pool size of 30, and it is still exhausted at some point. Is the ... The docs say:
Well, the stack trace in those stats is useless; it is from ancient history; a thread dump would have been better.
The template reliably closes the channel (returns it to the pool) in a finally block (see spring-amqp/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/core/RabbitTemplate.java, lines 2254 to 2271 in ae7ba84).
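Schematically, the pattern is (a simplified sketch, not the actual source):

```java
import com.rabbitmq.client.Channel;

import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.core.ChannelCallback;

private <T> T executeOnChannel(Connection connection, ChannelCallback<T> action) throws Exception {
    Channel channel = connection.createChannel(false); // borrows a channel from the pool
    try {
        return action.doInRabbit(channel);
    }
    finally {
        channel.close(); // with PooledChannelConnectionFactory, close() returns it to the pool
    }
}
```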
However, when using direct reply-to (the default), by default the template uses a ... Or, you can set ... See https://docs.spring.io/spring-amqp/docs/current/reference/html/#direct-reply-to
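The knobs documented in that section look like this on the template (a sketch; which setting was meant above is my assumption, and the timeout value is illustrative):

```java
RabbitTemplate template = new RabbitTemplate(connectionFactory);
template.setReplyTimeout(10_000);             // bound how long sendAndReceive() waits for a reply (ms)
template.setUseDirectReplyToContainer(false); // alternative: don't use the reply-listener container
template.setUseTemporaryReplyQueues(true);    // alternative: temporary reply queues instead of direct reply-to
```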
With ...
That would not be a big issue, at least not if it only blocks for the duration of the request/reply rather than forever (because the channels are never returned).
There can always be load spikes, and then the application would just hang forever until restarted. That's really the issue here. Having "only" degraded performance because actual load > expected load would be acceptable. So, if I understood you correctly:
What solution would you prefer? At this point, as it looks like everything is working as intended, I think the documentation should be updated to "warn" about this behaviour. I'd guess the use case I have shown is not that exotic, and others might run into the same issue.
Correct.
It's up to you. In order of best performance...
Thanks for the clarification! I would leave it up to you whether to close this issue, because it's not a real bug, or to leave it open and change the type to "documentation".
In what version(s) of Spring AMQP are you seeing this issue?
Tested with 2.4.3 and 2.4.8
Describe the bug
We have a Spring Boot application that uses a `PooledChannelConnectionFactory` and a `CachingConnectionFactory` inside a `SimpleRoutingConnectionFactory` (the default connection used is the `PooledChannelConnectionFactory`; the other one is only used for a few requests, which were not in use when experiencing the described problem). We noticed that, after some time of loading the application with (HTTP) requests, the container is shut down because the `/actuator/health` endpoint stops responding. We traced it down to the channel pool being exhausted, that is, this line (`PooledChannelConnectionFactory.java:196`) waits forever:
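(That wait is presumably commons-pool2's `borrowObject()`; with the library's defaults it blocks indefinitely once the pool is exhausted. A self-contained sketch of that default behaviour, not the Spring code itself:)

```java
import org.apache.commons.pool2.BasePooledObjectFactory;
import org.apache.commons.pool2.PooledObject;
import org.apache.commons.pool2.impl.DefaultPooledObject;
import org.apache.commons.pool2.impl.GenericObjectPool;

public class BorrowHangDemo {

    public static void main(String[] args) throws Exception {
        GenericObjectPool<Object> pool = new GenericObjectPool<>(new BasePooledObjectFactory<Object>() {
            @Override
            public Object create() {
                return new Object();
            }
            @Override
            public PooledObject<Object> wrap(Object obj) {
                return new DefaultPooledObject<>(obj);
            }
        });
        pool.setMaxTotal(1);
        pool.borrowObject(); // first borrow succeeds
        // defaults: blockWhenExhausted = true, maxWait = -1, so this call never returns
        pool.borrowObject();
    }
}
```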
We are using the default `RabbitHealthIndicator` provided by the `org.springframework.boot.actuate.amqp` package, which does the check by executing:
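(From memory, that check amounts to fetching the broker version over a pooled channel, roughly:)

```java
// roughly what Spring Boot's RabbitHealthIndicator runs on every health check
// (a sketch from memory, not the exact Boot source)
String version = rabbitTemplate.execute(channel ->
        channel.getConnection().getServerProperties().get("version").toString());
```

The connection configuration is straightforward; a sketch of the wiring described above (host and lookup key are illustrative):

```java
import java.util.Map;

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.connection.PooledChannelConnectionFactory;
import org.springframework.amqp.rabbit.connection.SimpleRoutingConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
ConnectionFactory connectionFactory() {
    com.rabbitmq.client.ConnectionFactory rabbit = new com.rabbitmq.client.ConnectionFactory();
    rabbit.setHost("localhost");

    PooledChannelConnectionFactory pooled = new PooledChannelConnectionFactory(rabbit);
    CachingConnectionFactory caching = new CachingConnectionFactory(rabbit);

    SimpleRoutingConnectionFactory routing = new SimpleRoutingConnectionFactory();
    routing.setDefaultTargetConnectionFactory(pooled); // the default: the pooled factory
    routing.setTargetConnectionFactories(Map.of("cached", caching)); // only used for a few requests
    return routing;
}
```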
We have a `RabbitListener` for incoming messages:
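(A sketch of its shape; the queue name and payload type are illustrative:)

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;

@RabbitListener(queues = "incoming.requests")
public void handle(String payload) {
    // process the incoming message
}
```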
And we send outgoing messages using `rabbitTemplate.convertSendAndReceive()`.
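(Likewise a sketch; exchange and routing key are illustrative:)

```java
// blocks the calling thread on a pooled channel until the reply arrives
// (or the reply timeout elapses)
Object reply = rabbitTemplate.convertSendAndReceive("app.exchange", "app.request", "some payload");
```

This is the state of the pool when it stops working: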
To Reproduce
I am not yet able to provide a minimal working example to easily reproduce this behaviour. It requires an application that runs for some time and is loaded with requests. At some point, the `RabbitHealthIndicator` stops working, that is, the liveness/readiness endpoints time out.
Expected behavior
The channel pool should not get exhausted.
Sample
Not yet available.