[ http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-557?pag... ]
Johnny B commented on HSEARCH-557:
----------------------------------
Hi Guys,
A bit of history on this. We noticed over a period of time (1-2 weeks) that our VM
eventually runs out of memory and crashes in our test environment. On closer inspection of
the heap dump I could see that the FieldCacheImpl was the culprit, hogging almost the whole
memory (90% of the VM size). I also noticed what seemed to be duplication of data in the
FieldCacheImpl: when I drilled down into the buckets of the maps in the FieldCacheImpl I
could see the same data being held more than once.
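(For context: FieldCacheImpl is Lucene's internal per-reader cache, populated for example
when a query sorts on a field. A minimal sketch of why its entries live and die with the
IndexReader, assuming the Lucene 2.9 API bundled with Hibernate Search 3.2; the index path
and field name are made up:)

    import java.io.File;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.FieldCache;
    import org.apache.lucene.store.FSDirectory;

    public class FieldCacheDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical index path, for illustration only.
            IndexReader reader =
                    IndexReader.open(FSDirectory.open(new File("/tmp/index")), true);

            // Sorting on a field fills FieldCacheImpl with an array sized to
            // maxDoc(), keyed on this reader instance.
            String[] titles = FieldCache.DEFAULT.getStrings(reader, "title");

            // The cache is a WeakHashMap keyed on the reader, so an entry can
            // only be garbage-collected once nothing references the reader any
            // more. A reader that something keeps holding keeps its arrays
            // pinned, and a second reader over the same data gets its own,
            // duplicate arrays.
            reader.close();
        }
    }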
Our overall index size is quite small in Lucene terms, 1.9 GB, and our biggest index (the
class index) is 1.6 GB. The VM size is 1 GB, and we are running our application in Tomcat
using the Master/Slave paradigm, but without JMS; we have a custom-built app for updating
the master index.
I carried out some load testing using queries against the 1.6 GB index. The first test
involved no index swapping during execution: the VM size grew to 250-300 MB but then held
steady, so there were no signs of a leak just from querying. I then carried out the same
test but with index swapping happening in the background: the VM size grew, and I noticed
that the memory wasn't getting released.
In order to fast-forward the process I adapted the SlaveDirectory slightly: when the marker
file changes, instead of switching the index, I close the current directory and open the
new one. Those steps happen every time the marker file changes (take a look at the forum
post for a sample of what it looks like). What I would have expected from doing this is a
lot more frequent GC cycles, but memory should still be getting freed. This was not the
case. I started querying and the VM size went to 250-300 MB; I changed the marker file, the
slave switched over to the new index, and the VM size rose to 500 MB; I waited a period and
it stayed level; I changed the marker again, the slave switched over, and the VM size rose
again to 750-800 MB; I waited a while, the VM level stayed consistent; I changed the marker
file again, and so on. I carried out these steps until eventually the VM went boom. I would
have expected the memory to grow a lot quicker but eventually get collected, but it seems
that upon switching over to the latest index the FieldCacheImpl is not getting cleared up,
because when looking at the heap dump I could see again that the FieldCacheImpl was the
culprit.
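(To illustrate the adaptation, a rough sketch reconstructed from the description above, not
the actual code from the forum post; the class and method names are made up, and it assumes
the two index copies named "1" and "2" that the slave directory provider alternates
between:)

    import java.io.File;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class EagerSwitchingSlaveDirectory {

        private final File indexRoot; // contains the two copies, "1" and "2"
        private volatile Directory current;
        private int active = 1;

        public EagerSwitchingSlaveDirectory(File indexRoot) throws Exception {
            this.indexRoot = indexRoot;
            this.current = FSDirectory.open(new File(indexRoot, "1"));
        }

        // Called whenever the marker file is seen to change.
        public synchronized void onMarkerChanged() throws Exception {
            Directory old = current;
            active = (active == 1) ? 2 : 1;
            current = FSDirectory.open(new File(indexRoot, String.valueOf(active)));
            // Closing the old Directory releases its file handles, but any
            // IndexReader the reader provider is still holding stays open,
            // and with it the FieldCacheImpl entries keyed on that reader.
            old.close();
        }

        public Directory getDirectory() {
            return current;
        }
    }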
I changed my reader strategy from "default" to "not-shared" and I
don't have any memory problems running the same tests. Memory rises quickly but it gets
collected, and the app runs fine.
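(For reference, switching strategies is a one-line configuration change in Hibernate
Search 3.x:)

    # default strategy, backed by SharingBufferReaderProvider
    hibernate.search.reader.strategy = shared

    # workaround used above: open and close a fresh reader per query
    hibernate.search.reader.strategy = not-shared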
So it's my belief that when the Slave switches from the current index to the new index, it
is not properly cleaning up the old index references, i.e. purging the old entries in the
FieldCacheImpl.
Memory leak when using default ReaderProvider with Master/Slave Directories
---------------------------------------------------------------------------
Key: HSEARCH-557
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-557
Project: Hibernate Search
Issue Type: Bug
Components: directory provider
Affects Versions: 3.1.0.GA, 3.1.1.GA, 3.2.0.Final
Reporter: Sanne Grinovero
Fix For: 3.3.0
The default ReaderProvider, org.hibernate.search.reader.SharingBufferReaderProvider, keeps
a reference to an open IndexReader on the most current index, so that it's able to refresh
it on demand and track references to previously opened instances.
The IndexReaders still open on the not-active directory might consume a lot of memory; a
forum reference follows where this appears to be a memory leak. It's still not clear
whether it's a real leak or just needs twice as much memory as otherwise; please comment
here.
https://forum.hibernate.org/viewtopic.php?f=9&t=1005540
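(A hypothetical sketch of the sharing pattern described above, not the actual
SharingBufferReaderProvider source, to show why a reader whose last reference is never
released also never lets go of its FieldCache entries:)

    import org.apache.lucene.index.IndexReader;

    public class SharingReaderSketch {

        private IndexReader current;

        public SharingReaderSketch(IndexReader initial) {
            this.current = initial; // the provider owns one reference
        }

        public synchronized IndexReader openReader() throws Exception {
            IndexReader reopened = current.reopen();
            if (reopened != current) {
                // reopen() returned a fresh instance; release our own
                // reference to the stale one. It only actually closes once
                // every in-flight query has called closeReader() on it.
                current.decRef();
                current = reopened;
            }
            current.incRef(); // one reference per borrowing query
            return current;
        }

        public void closeReader(IndexReader reader) throws Exception {
            reader.decRef();
        }
    }

If the provider never releases its own reference to the last reader opened on the
now-inactive directory, that reader stays open and strongly reachable across every switch,
and its FieldCacheImpl entries accumulate exactly as the heap dumps above suggest.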