[ http://opensource.atlassian.com/projects/hibernate/browse/HHH-3028?page=c... ]
Jesus Salvo commented on HHH-3028:
----------------------------------
We just hit this bug, and it appears to have existed since at least Hibernate Core
3.1.3 (the version we have been using, though we will upgrade to the latest release soon).
As the reporter said, this happens when you have a lot of DML operations before
committing or rolling back. We also use Hibernate against an Oracle temporary table
as a "working table" before the final data is "transferred" to the
actual table. We cannot commit or roll back on that session, because doing so loses
the contents of the temporary table. As a result, the contents of ActionQueue.executions
keep building up. It was not noticeable before, because more often than not the
"working table" holds only a few rows, but we now have cases
where the contents of the "working table" are large.
We will try turning off the query cache on this session and see if that helps.
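For context, a minimal sketch of the pattern described above, assuming an open SessionFactory
and a working table declared as an Oracle global temporary table with ON COMMIT DELETE ROWS
(the entity, table and helper names are made up for illustration):

    // Working table assumed to be created as:
    //   CREATE GLOBAL TEMPORARY TABLE WORKING_TABLE ( ... ) ON COMMIT DELETE ROWS;
    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    for ( WorkingRow row : buildWorkingRows() ) {   // hypothetical source of rows
        session.save( row );                        // each save queues an EntityInsertAction
    }
    session.flush();                                // the rows now sit in the temporary table
    // transfer the staged rows to the final table within the same transaction;
    // committing earlier would empty the temporary table
    session.createSQLQuery( "INSERT INTO FINAL_TABLE SELECT * FROM WORKING_TABLE" )
           .executeUpdate();
    tx.commit();     // only here are ActionQueue.executions and the staged entities released
    session.close();

Because the whole staging/transfer cycle has to happen in one transaction, every queued
action stays referenced until the very end.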
Memory consumption when query cache is enabled
----------------------------------------------
Key: HHH-3028
URL:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-3028
Project: Hibernate Core
Issue Type: Bug
Components: caching (L2), core
Affects Versions: 3.2.5
Environment: Hibernate 3.2.5, Oracle 9i
Reporter: Markus Heiden
As discussed on the hibernate-dev mailing list from 9.11.2007 to 12.11.2007, this bug
describes a memory consumption issue located in ActionQueue/EntityAction.
Some snippets from ActionQueue:
    private ArrayList executions;

    public void execute(Executable executable) {
        final boolean lockQueryCache = session.getFactory().getSettings().isQueryCacheEnabled();
        if ( executable.hasAfterTransactionCompletion() || lockQueryCache ) {
            executions.add( executable );
        }
        ...
    }
This code leads to a kind of memory leak: once an "executable" is added
to "executions", the entity referenced by that "executable" cannot be
garbage collected until the transaction ends. So if one needs to insert a large number
of transient objects in one transaction, there is no way to get rid of the inserted
objects by flushing and evicting them if, for example, the query cache is enabled.
One solution to this problem might be to rework the above "if" condition so that
objects are only added to "executions" when this is really needed. The problem is
determining when it is really needed.
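To make this concrete, here is a hedged sketch of the batch-insert pattern described above
(an open SessionFactory is assumed; the "Item" entity and the batch size of 100 are
placeholders). With the query cache enabled, flushing and clearing the session does not
release the inserted entities, because every executed action is still referenced from
ActionQueue.executions:

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();
    for ( int i = 0; i < 1000000; i++ ) {
        session.save( new Item( i ) );   // queued as an EntityInsertAction
        if ( i % 100 == 0 ) {
            session.flush();             // executes the pending inserts
            session.clear();             // evicts all entities from the persistence context
            // ...but ActionQueue.executions still holds a strong reference to every
            // executed action, and each action references its entity, so the heap
            // keeps growing until commit() or rollback()
        }
    }
    tx.commit();                         // memory is only reclaimed here
    session.close();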
Some snippets from EntityAction (which implements Executable):
    private final Object instance;

    public final Serializable getId() {
        if ( id instanceof DelayedPostInsertIdentifier ) {
            return session.getPersistenceContext().getEntry( instance ).getId();
        }
        return id;
    }

    public final Object getInstance() {
        return instance;
    }
Another solution might be to set the reference to the related entity (the
"instance" field in EntityAction) to null after flushing. This would not prevent
"executions" from being filled, but the related entities could be garbage
collected, keeping memory consumption acceptable. The problem is that subclasses of
EntityAction use the "instance" field for post-transaction work.
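A purely hypothetical sketch of that second idea, in the style of the snippet above
(this is not actual Hibernate code):

    private Object instance;   // no longer final, so the reference can be dropped

    // hypothetical hook, called once the flush no longer needs the entity
    void releaseInstance() {
        instance = null;       // the entity can now be garbage collected, while the
                               // action itself stays queued for post-transaction work
    }

As noted, this could only work for actions whose post-transaction processing does not need
the entity itself (for example, when the identifier alone is enough).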
There are currently two workarounds for this problem:
1) Always disable the query cache
2) Use shorter transactions
Workaround 1 is not really acceptable, because it prohibits the use of a very useful
feature.
Workaround 2 is sometimes acceptable but usually not wanted, because it breaks
transactional safety.
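For reference, workaround 1 boils down to something like the following (shown
programmatically here with a made-up class name; the same hibernate.cache.use_query_cache
property can also be set in hibernate.cfg.xml or hibernate.properties):

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class NoQueryCacheFactory {
        public static SessionFactory build() {
            Configuration cfg = new Configuration().configure();  // reads hibernate.cfg.xml
            // Workaround 1: switch the query cache off for the whole SessionFactory
            cfg.setProperty( "hibernate.cache.use_query_cache", "false" );
            return cfg.buildSessionFactory();
        }
    }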