[
http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-469?pag...
]
Sanne Grinovero commented on HSEARCH-469:
-----------------------------------------
{quote}It would certainly reduce the size of the memory leak and probably avoid a server
crash{quote}
Why do you say reduce? If you set it to zero, no memory would be leaked; and I wouldn't
call it a leak, as this value is only applied at the user's request: filters aren't cached
by default.
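For reference, caching can also be switched off per filter definition in the mapping. A minimal sketch, assuming a hypothetical entity and filter factory (only the annotations and the FilterCacheModeType enum are the real API):
{code:java}
import org.hibernate.search.annotations.FilterCacheModeType;
import org.hibernate.search.annotations.FullTextFilterDef;
import org.hibernate.search.annotations.Indexed;

// FilterCacheModeType.NONE opts this filter out of both filter-instance and
// DocIdSet caching, so no CachingWrapperFilter is created for it at all.
@Indexed
@FullTextFilterDef(
    name = "security",
    impl = SecurityFilterFactory.class, // hypothetical factory, for illustration
    cache = FilterCacheModeType.NONE
)
public class Order {
    // ... indexed fields ...
}
{code}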
{quote}However, it would still be wasting a fair amount of memory.{quote}
Right, I still think it's important to do - just classifying it not as a bug but as a
nice improvement.
{quote}in my quest for a speedy and reliable setup I've gotten rid of the use of
filters and, subsequently, hibernate search itself{quote}
While I can almost understand your position about the filters, what did you find in the rest
of Search that worked against your desire for speed and reliability?
I was in a similar position two years ago, but I think I contributed all the improvements
I could spot, and I'm not finding obvious areas that could yield some exciting
improvement. Sorry for the OT; feel free to use the forum or mailing list if you have good
ideas.
Filter caching causes excessive memory use
------------------------------------------
Key: HSEARCH-469
URL:
http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-469
Project: Hibernate Search
Issue Type: Bug
Components: query
Affects Versions: 3.1.1.GA
Environment: hibernate-core-3.3.2.GA
Reporter: Dobes Vandermeer
Assignee: Sanne Grinovero
Fix For: 3.2.0
The CachingWrapperFilter uses the reader instance (CacheableMultiReader) as a key for the
caching.
However, the reader instance keeps pointers to byte arrays in its "normsCache"
and in the "normsCache" of its sub-readers; each array has one byte for each
document in the index, and in some cases there will be multiple of these arrays, associated
with different fields.
For an index with millions of records this can result in an apparent "leak" of
hundreds of megabytes of memory, as those readers are not re-used and the MRU cache used by
default will keep up to 128 hard references to them.
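To put illustrative numbers on that (my own back-of-the-envelope figures, not from the report): with 5 million documents and 4 normed fields, a single reader retains about 5,000,000 x 4 bytes = ~20 MB of norms, so 128 strongly referenced readers could pin roughly 2.5 GB of heap.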
The search system must either re-use or delete the normsCache, OR the cache key for these
filters should be tied to something else that doesn't keep references to potentially
huge data arrays. Otherwise the scalability of the search subsystem is significantly
impacted when using filters, as you must have enough heap to accommodate up to 128 times
as many copies of the norms arrays.
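The second option could look roughly like the following. This is only a sketch of the keying idea (mirroring what Lucene's stock CachingWrapperFilter does with a WeakHashMap), not the actual Hibernate Search code; the class name is made up:
{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.Filter;

// Illustrative only: cache DocIdSets per reader without strongly referencing
// the reader, so a closed reader (and its normsCache byte arrays) can be
// garbage collected even while the filter stays cached.
public class WeaklyKeyedFilterCache {

    private final Map<IndexReader, DocIdSet> cache =
            Collections.synchronizedMap( new WeakHashMap<IndexReader, DocIdSet>() );

    public DocIdSet getDocIdSet(Filter filter, IndexReader reader) throws IOException {
        DocIdSet cached = cache.get( reader );
        if ( cached != null ) {
            return cached;
        }
        DocIdSet docIdSet = filter.getDocIdSet( reader );
        // Note: the cached value must not reference the reader itself,
        // otherwise the weak key can never be collected.
        cache.put( reader, docIdSet );
        return docIdSet;
    }
}
{code}
The trade-off is that entries silently disappear once their reader is collected, which is exactly the behaviour wanted here.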