imqbroker implements its own memory management. It continuously monitors "available memory" and determines its own "usage level":
GREEN: "Plenty of memory is available"
YELLOW (default: 80% memory usage): "Broker memory is beginning to run low"
ORANGE (default: 90% memory usage): "The broker is low on memory"
RED (default: 98% memory usage): "The broker is out of memory"
Beginning at the YELLOW level, the broker starts swapping messages to persistent storage and limits producers; when it reaches the RED level, it rejects new producers entirely.
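These levels are controlled by broker threshold properties. A sketch of the relevant settings with their defaults, assuming the standard Open MQ property names in the broker's config.properties:

```properties
# Memory usage level thresholds (percent of heap usage).
# Property names and defaults assumed from the Open MQ documentation;
# they match the levels listed above.
imq.green.threshold=0
imq.yellow.threshold=80
imq.orange.threshold=90
imq.red.threshold=98
```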
Associated messages from log.txt:
[B1089]: In low memory condition, Broker is attempting to free up resources
WARNING [B2076]: Broker is rejecting new producers, because it is extremely low on memory
The theory behind this seems to be that you can actually "free" memory and thus prevent a memory overload in time.
Unfortunately, JVM memory management doesn't work this way.
When using the configuration shipped with GlassFish 4.1.1, the maximum heap size is set to 192m and the default garbage collector is used. The JVM heap is thus divided into:
OldGen space: ~128m (67%)
Eden space: ~50m (26%)
Survivor spaces: 2x ~7m (7%)
Only one of the survivor spaces can be used at a time, so the value returned by Runtime.maxMemory is not 192m but ~185m.
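For illustration, a minimal sketch that prints Runtime.maxMemory() together with the individual heap pools via the JVM's MemoryPoolMXBeans. Run with the broker's settings (-Xmx192m, default collector) it shows the split described above; the exact pool names vary by JVM and collector:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapRegions {
    public static void main(String[] args) {
        // Runtime.maxMemory() excludes one of the two survivor spaces,
        // which is why it reports less than -Xmx.
        System.out.printf("Runtime.maxMemory(): %d MB%n",
                Runtime.getRuntime().maxMemory() / (1024 * 1024));

        // List the individual heap pools (Eden, Survivor, OldGen)
        // as seen by the JVM's own management interface.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                System.out.printf("%-25s max=%d MB used=%d MB%n",
                        pool.getName(),
                        pool.getUsage().getMax() / (1024 * 1024),
                        pool.getUsage().getUsed() / (1024 * 1024));
            }
        }
    }
}
```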
New objects are allocated in the Eden space. When it runs full, a "minor GC" is triggered and surviving objects are moved to one of the Survivor spaces (or directly to OldGen if they exceed 7m). When the survivor space runs full, objects are moved to the OldGen. When the OldGen runs full, a "full GC" is triggered. When the full GC cannot free up memory, you get an OutOfMemoryError.
All this happens in the background. The "full GC" in particular is a costly operation that a JVM runs as rarely as possible. This means that:
The "available memory" calculated by the broker is the sum of unused memory in all areas of the heap.
Since the Eden space is cleared frequently while the JVM is running, this amount of "available memory" is expected to vary by +/- 26% just by running the broker (see the sketch after this list).
Once an object reaches the OldGen, its memory will not be freed until the next full GC, no matter what you do. You can't "free up resources" and expect the memory allocation level to decrease instantly.
A full GC is triggered as late as possible - that is when memory allocation reaches 100%.
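The exact formula the broker uses is not shown here, but a Runtime-based calculation (an assumption) behaves as sketched below: under a steady trickle of short-lived allocations, the reported "available" figure shrinks as Eden fills and jumps back up after each minor GC, while memory promoted to OldGen stays "used" until a full GC. The allocation size and pacing are illustrative:

```java
public class AvailableMemoryDrift {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 100; i++) {
            // Short-lived garbage, standing in for the byte/char copies the
            // broker makes while handling incoming messages.
            byte[] garbage = new byte[4 * 1024 * 1024];
            garbage[0] = 1; // keep the allocation from being optimized away

            // A Runtime-based "available memory" figure: it shrinks while
            // Eden fills and jumps back up after each minor GC.
            long available = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
            System.out.printf("iteration %3d: \"available\" = %3d MB%n",
                    i, available / (1024 * 1024));
            Thread.sleep(100);
        }
    }
}
```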
The effects I observed were the following:
Just connecting JConsole to an otherwise idle broker process can cause a RED memory condition within a few hours, where the usage level oscillates between YELLOW/ORANGE/RED in the final minutes (remember: 26% variation) before the JVM triggers a full GC and the usage level drops back to GREEN for the next few hours.
The same happens when there is a constant but slow flow of messages. In an example application, I initially produced 1000 messages, each 700 bytes in size. The application then kept removing the oldest message and producing a new one in its place.
With this setup, the memory used by messages in the broker was ~700k at any point in time. Nevertheless the broker quickly ran into a RED memory condition and rejected new messages for a few seconds - until the JVM triggered a full GC.
The cause seems to be that the Eden space fills up very quickly when messages are sent to the broker (I assume a lot of data is copied between byte or char arrays that are not reused). The messages themselves stay in the queue for some time and therefore get moved to the OldGen. Once there, the memory is not reclaimed until the next full GC even though they are not referenced anymore.
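A hypothetical reconstruction of such a test client is sketched below, using the Open MQ JMS client classes (an assumption about the setup); the queue name, broker address, and pacing interval are placeholders, not taken from the original test:

```java
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class SlowProducerTest {
    public static void main(String[] args) throws JMSException {
        // Open MQ client connection factory (imq.jar on the classpath);
        // the broker address below is a placeholder.
        com.sun.messaging.ConnectionFactory cf = new com.sun.messaging.ConnectionFactory();
        cf.setProperty(com.sun.messaging.ConnectionConfiguration.imqAddressList,
                "mq://localhost:7676");

        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("testQueue");
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        MessageConsumer consumer = session.createConsumer(queue);

        byte[] payload = new byte[700];

        // Fill the queue with 1000 messages of 700 bytes each.
        for (int i = 0; i < 1000; i++) {
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(payload);
            producer.send(msg);
        }

        // Keep the queue at a constant size: consume the oldest message
        // and produce a replacement, at a slow, steady pace.
        while (!Thread.currentThread().isInterrupted()) {
            consumer.receive(5000);
            BytesMessage msg = session.createBytesMessage();
            msg.writeBytes(payload);
            producer.send(msg);
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        connection.close();
    }
}
```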
The only reasonable measure to prevent the broker from rejecting producers for several minutes every few hours is to disable the RED memory level by setting imq.red.threshold=100.
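For example (the config.properties location varies by installation; the path shown is the typical one for a standalone broker instance and is an assumption):

```properties
# In the broker instance's configuration, e.g.
# <IMQ_VARHOME>/instances/imqbroker/props/config.properties,
# or passed on the command line as: imqbrokerd -Dimq.red.threshold=100
imq.red.threshold=100
```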
Since you CANNOT KNOW the amount of memory that is really "available", i.e. the outcome of the next GC, this kind of "memory management" makes it easy for any client to put the broker into a DoS condition.
Environment
================================================================================
Message Queue 5.1.1
Oracle
Version: 5.1.1 (Build 2-c)
Compile: March 17 2015 1045
Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
Java Runtime: 1.7.0_13 Oracle Corporation /opt/jdk1.7.0_13/jre
Affected Versions
[5.1.1_b01]