Example Java "Listener" code appears to cause memory leaks #148
Dear Rdghickman,

Thank you for your input. You can get the Java 5 API reference document from the link below:

Reading/taking samples in Java 5 is pretty simple; the highlights are below. Here is example code to read/take samples in two different ways:
Code snippets are below:

**1: Read samples using a pre-allocated buffer**

    DataState ds = subscriber.createDataState();
    List<Sample> samples = new ArrayList<Sample>();
    /* Process each Sample. For example print its name and production time */

**2: Take ALL samples available that represent NEW instances that are still ALIVE, using 'loaned' buffers**

    Iterator<Sample> samples = fooDR.select().dataState(ds).take();
    /* Process each Sample. For example print its name and production time */

Since the Java 5 API implicitly returns the loaned buffer, there is no chance of a memory leak. I am investigating your reported issue at my end. I will keep you posted.
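Since the first snippet above does not show the actual read call, here is a minimal sketch of the pre-allocated pattern written against the OMG DDS-Java (Java 5) PSM. The generic type `T`, the `read(List)` call on the selector, and the `getData()` accessor are assumptions based on that PSM, not code taken from the OpenSplice examples:

```java
// Sketch only: assumes an existing Subscriber and DataReader<T>, and that the
// selector returned by select() accepts a pre-allocated List via
// read(List<Sample<T>>), mirroring the loaned-buffer snippet above.
import java.util.ArrayList;
import java.util.List;

import org.omg.dds.sub.DataReader;
import org.omg.dds.sub.Sample;
import org.omg.dds.sub.Subscriber;

public class PreAllocatedReadSketch {

    public static <T> void readOnce(Subscriber subscriber, DataReader<T> reader) {
        // The buffer is meant to be allocated once and re-used for every
        // subsequent read on this reader (see the discussion further down).
        List<Sample<T>> samples = new ArrayList<Sample<T>>();

        // createDataState() and the select()/dataState() builder come from the
        // snippets above; read(List) on the selector is assumed from the PSM.
        reader.select().dataState(subscriber.createDataState()).read(samples);

        for (Sample<T> sample : samples) {
            T data = sample.getData(); // null for invalid samples
            if (data != null) {
                /* Process each Sample. For example print its name and production time */
            }
        }
    }
}
```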
Hi Richard,

I guess you are using protobuf when running into this issue? I don't notice this behavior for normal sample types.

Let me explain a little bit what happens under the hood. The Java 5 API is built on top of the classical Java API, so the read() call on the Java 5 API delegates to the read() call of the classical API. Since the Java 5 API returns a List<> of samples, whereas the classical Java API returns an array of samples and an array of SampleInfo (the representation of the individual samples in Java 5 is identical to their representation in the classical Java API), the sample references need to be copied from the arrays to their corresponding List<> objects, and this happens transparently under the hood for you.

When you want to benefit from pre-allocating your own samples, you intend to re-use not only the List<> object, but also all samples contained therein. However, since the Java 5 API forwards to the classic Java API, you want to re-use not only the samples, but also the arrays used to carry these samples and their SampleInfo in the underlying classical API. And that is exactly the purpose of the HashMap you encountered.

You are right that if you use the read()/take() call with a pre-allocated List<>, but you don't intend to recycle that List, then this mechanism will leak every List (including its samples). However, that is not what the pre-allocated pattern was intended for. So to get rid of the leak in your example, you might want to consider using the following alternatives:
Hope that helps you work around your problem.

Regards,
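To make the delegation described above concrete, here is a deliberately simplified, hypothetical sketch of that bookkeeping. It is not the actual OpenSplice source; every name except the idea of a per-List map is invented for illustration:

```java
// Hypothetical simplification -- NOT the real OpenSplice implementation.
// It only illustrates why passing a brand-new List on every call grows the
// per-reader bookkeeping map, while re-using one List does not.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class DelegatingReaderSketch<T> {

    // One entry per distinct List instance ever passed to read()/take():
    // the value stands in for the classic-API arrays that back that List.
    private final Map<List<T>, Object[]> preallocated = new ConcurrentHashMap<>();

    void read(List<T> samples) {
        // Re-use the backing arrays bound to this exact List instance,
        // or allocate them the first time this List is seen.
        Object[] backingArrays = preallocated.computeIfAbsent(samples, list -> new Object[2]);

        // ... delegate to the classical API using backingArrays, then copy
        // the resulting sample references into 'samples' for the caller ...
    }
}
```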
Thanks for the response. Yes, I follow what's going on now; my point was simply that, given the example code supplied in the package, little of this can be inferred, and hence I tripped over these issues and had to dig around myself with little clue from the API or DDS itself as to what was going wrong.

I appreciate there is Javadoc output for the Java 5 API, but this also does not really aid the "correct" use of it, at least from a beginner's point of view, which is why I jumped straight into the example code. I am sure it is useful if you already understand how to correctly write code in the DDS paradigm, but otherwise it is of far less use.

It's been a bit painful getting started, even though I've finally managed to get somewhere. I feel the only reason I've been able to progress this far is by picking through the DDS source code and trying to guess what I'd done wrong, which is a little suboptimal for a new developer. As such, I was just offering my opinion that perhaps the example code (at least for Java) should have another glance over in order to aid other developers in the future.

Currently we are not using protobuf. At least, all code paths seem to have gone through
You are right, this is sort of a poor example of how to use pre-allocated memory. It would be better for this example to use loaned memory instead. I will write a ticket internally to get it changed, but I don't know how fast this will be picked up.
A little update to this ticket: in the upcoming 6.11 release the examples have been updated to use loaned memory instead of pre-allocated memory for this particular reading pattern. We also decided to only keep track of the last pre-allocated batch, so if in the future you do use pre-allocation the wrong way, you don't leak away all previous allocations; the Reader will only retain the most recent batch. This is still helpful in the typical scenario where you pre-allocate a batch and then use the same batch over and over again in a while loop to access data as soon as you are notified it has come in.
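A minimal sketch of that "pre-allocate once, re-use in a loop" usage, assuming a generic DataReader<T> and placeholder shouldRun()/waitForData() hooks for whatever notification mechanism (listener, waitset, polling) is used:

```java
// Sketch of the batch re-use pattern described above; take(List) and
// getData() are used as described elsewhere in this issue.
import java.util.ArrayList;
import java.util.List;

import org.omg.dds.sub.DataReader;
import org.omg.dds.sub.Sample;

public class BatchReuseSketch {

    public static <T> void readLoop(DataReader<T> reader) throws InterruptedException {
        // Allocate the batch exactly once...
        List<Sample<T>> batch = new ArrayList<Sample<T>>();

        while (shouldRun()) {
            waitForData(); // placeholder: block until notified that data is available

            // ...and hand the SAME List to every take, so the reader can
            // recycle the batch instead of tracking a new one each time.
            reader.take(batch);

            for (Sample<T> sample : batch) {
                T data = sample.getData(); // null for invalid samples
                if (data != null) {
                    // process the sample
                }
            }
        }
    }

    private static boolean shouldRun() { return true; }  // placeholder

    private static void waitForData() throws InterruptedException {
        Thread.sleep(100);                                // placeholder
    }
}
```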
Since there's not an awful lot of Java documentation and tutorials available for this product, I utilised the example code for getting started with Java. One of the key things is of course using callbacks for delivery of data, hence the "Listener" example shown here.
Looking at examples/dcps/Listener/java5/ListenerDataListener.java (180404OSS):
When `read(List<Sample<T>>)` or `take(List<Sample<T>>)` is called, the OpenSplice code looks at a `ConcurrentHashMap<List<Sample<T>>, PreAllocatorImpl<T>>` named `preallocated` in `DataReaderImpl`. If the `List<Sample<T>>` is not a key in the map, it is added to the map with a new `PreAllocatorImpl` in `DataReaderImpl.getPreAllocator(List<Sample<T>>, Class<?>, Field)`. If the key already exists in the map, the new samples are added to the `List<Sample<T>>`. However, since in this case we create a new `List<Sample<T>>` every time in `onDataAvailable(DataAvailableEvent<T>)`, a new key/value pair is added to the map every time data is delivered, and OpenSplice never cleans the map. Memory grows and eventually the application dies.
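For illustration, the problematic shape just described, and the member-field variant that keeps the map at a single entry, might look roughly like this (class and method names are placeholders, not the shipped example code):

```java
// Illustrative sketch only: shows why allocating a new List on every
// data-available callback adds a new entry to the reader's 'preallocated'
// map each time, while re-using one List does not.
import java.util.ArrayList;
import java.util.List;

import org.omg.dds.sub.DataReader;
import org.omg.dds.sub.Sample;

public class DataAvailableHandler<T> {

    // Re-used buffer: the reader only ever sees this one List.
    private final List<Sample<T>> reusedSamples = new ArrayList<Sample<T>>();

    // Imagine this being called from onDataAvailable(DataAvailableEvent<T>).
    public void handleDataAvailable(DataReader<T> reader) {
        // LEAKY shape (what the example effectively did):
        //   List<Sample<T>> samples = new ArrayList<Sample<T>>();
        //   reader.read(samples);   // new map entry on every callback

        // Non-leaky shape: hand the reader the same List every time.
        reader.read(reusedSamples);

        for (Sample<T> sample : reusedSamples) {
            T data = sample.getData(); // null for invalid samples
            if (data != null) {
                // process the sample
            }
        }
    }
}
```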
After discovering this I changed the code to simply `read()` or `take()` and iterate over the returned samples. This still resulted in slow leaks, detected in the form of increasing heap usage. Another heap dump this time showed that `DataReaderImpl` was again retaining a large amount of data, this time in `AbstractDataReader<T>`, in the protected member variable `HashSet<AbstractIterator<T>>` named `iterators`. Firstly, despite the fact that this `DataReader<T>` contains the `HashSet<AbstractIterator<T>>`, it is the constructor of the `AbstractIterator<T>` that ends up adding itself to the `HashSet` contained within the `AbstractDataReader<T>`. It is worth noting here that, despite being stored in a `HashSet`, neither `AbstractIterator<T>` nor `IteratorImpl<T>` contains `equals()` or `hashCode()` methods.
read()
ortake()
returns anIterator<T>
and this is not ajava.util.Iterator
but a different type that is a DDS iterator that has aclose()
method on it (which is not present injava.util.Iterator
). It seems thisclose()
method must be called in order to prevent thisiterators
object growing.Since the Java reference documentation doesn't really help a lot in the API presented in the OpenSplice package, it seems natural for people to use the example code provided as a starting point. However, this has caused a lot of pain in development here because of subtle issues like this and similar, so I thought it would be worth pointing it out. In the end, the working code looked similar to the following: