In what version(s) of Spring AMQP are you seeing this issue?
3.2.0
Describe the bug
When upgrading our app to the latest Spring version we discovered that the new version produces far more metrics than before. While the old version produced about 20k time series in Prometheus, the new one generates 200k per instance. The main problematic metrics are:
spring_rabbit_listener_seconds_bucket
spring_rabbit_listener_active_seconds_bucket
which did not exist in the older version 3.1.x.
To Reproduce
1. Create an application with message listeners and activate exporting of histogram buckets (e.g. by adding a MeterFilter to the MeterRegistry, as in the class below).
2. Have Observations active (e.g. to trace the message processing even when messages are processed asynchronously; see the sketch after the configuration class below).
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.distribution.DistributionStatisticConfig;
import org.jetbrains.annotations.NotNull;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class MeterRegistryConfigurator implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(@NotNull Object bean, @NotNull String beanName) {
        if (bean instanceof MeterRegistry meterRegistry) {
            // Export histogram buckets for every timer / distribution summary.
            meterRegistry.config().meterFilter(new MeterFilter() {
                @Override
                public DistributionStatisticConfig configure(
                        @NotNull Meter.Id id,
                        @NotNull DistributionStatisticConfig config
                ) {
                    return config.merge(DistributionStatisticConfig.builder()
                            .percentilesHistogram(true)
                            .build());
                }
            });
        }
        return bean;
    }
}
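For the second step, a minimal sketch of enabling observations on the listener containers is shown below (the configuration class and bean names are illustrative; the relevant switch is setObservationEnabled on the container):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitObservationConfig {

    @Bean
    SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Each container created from this factory starts a spring.rabbit.listener
        // observation per delivered message, which produces the timers in question.
        factory.setContainerCustomizer(container -> container.setObservationEnabled(true));
        return factory;
    }
}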
Expected behavior
A relatively small number of timers is exported (at most one per queue/error/listener instance).
Actual behaviour
A new timer is created per delivery tag, which leads to an enormous number of timers, e.g.:
error="none", messaging_destination_name="xxx.queue", messaging_rabbitmq_message_delivery_tag="1", spring_rabbit_listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#2"
Reference
The RabbitMQ documentation describes the delivery tag as a monotonically growing, channel-scoped counter. Therefore it is not suitable as a 'low cardinality key value'.
Workaround
Add an ObservationFilter that removes the key messaging.rabbitmq.message.delivery_tag, e.g. by registering it in a bean post processor:
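A minimal sketch of such a post processor (the class name and the lambda are illustrative, assuming the delivery tag is attached as a low-cardinality key value, as in the labels above):

import io.micrometer.observation.ObservationRegistry;
import org.springframework.beans.factory.config.BeanPostProcessor;

public class ObservationRegistryConfigurator implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (bean instanceof ObservationRegistry observationRegistry) {
            // Illustrative filter: drop the per-delivery tag before meters are created
            // so that it no longer multiplies the number of timers.
            observationRegistry.observationConfig().observationFilter(context -> {
                context.removeLowCardinalityKeyValue("messaging.rabbitmq.message.delivery_tag");
                return context;
            });
        }
        return bean;
    }
}

With Spring Boot, declaring the ObservationFilter as a bean may be enough on its own, since the observability auto-configuration applies such filters to the ObservationRegistry.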
jensbaitingerbosch changed the title from "Too many metrics generated leading to memory leak" to "delivery tag in the observations leading to too many timers generated -> leading to memory leak" on Nov 26, 2024.
In a previous state of this bug report I also mentioned another metric. That was caused by the inheritance of the low cardinality values by child observations.
Yes, it feels like a bug that this messaging.rabbitmq.message.delivery_tag has to be a high cardinality value. If that is the case, please consider contributing the fix. I'm on vacation until next week.
error="none", messaging_destination_name=xxx.queue", messaging_rabbitmq_message_delivery_tag="1", spring_rabbit_listener_id="org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#2"}
Reference
The rabbit documentation refers the delivery tag as
Therefore it is not suitable for a 'low cardinality key value'.
Workaround
Add an observation filer, that removes the key
messaging.rabbitmq.message.delivery_tag
e.g. by adding it in a bean post processor:
The text was updated successfully, but these errors were encountered: