batch_size to prevent OutOfMemoryException while inserting many objects
-----------------------------------------------------------------------
Key: HSEARCH-33
URL:
http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-33
Project: Hibernate Search
Type: New Feature
Components: engine
Versions: 3.0.0.beta1
Reporter: Stefan
While inserting many objects in a single transaction, the application runs out of memory
because the FullTextIndexEventListener collects all pending Lucene index updates until commit.
This is a tough problem. Today the solution is to apply the work in n transactions rather
than in one. A batch_size setting that forces a "flush" of the index work queue once it
grows too large could help at some point, but it would sacrifice the transactional semantics.
hibernate.cfg.xml:

<event type="post-insert">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
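For completeness (not shown in the report above), the same listener is typically also
registered for the post-update and post-delete events so that the index stays in sync
with changes and deletions, along the lines of:

```xml
<event type="post-update">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
<event type="post-delete">
    <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
</event>
```

Registering only post-insert does not change the memory behavior described here; the
queue grows with every indexed event inside the transaction.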
The inserting code looks something like:

Session session = sessionFactory.openSession();
session.setCacheMode(CacheMode.IGNORE);
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Item item = new Item(...);
    session.save(item);
    if (i % 100 == 0) {
        session.flush();  // flushes the Hibernate session and first-level cache,
        session.clear();  // but NOT the Lucene work queue, which keeps growing
    }
}
tx.commit();
session.close();
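The n-transactions workaround mentioned above can be sketched as follows. This is a
minimal sketch, not a definitive fix: it assumes the same Item entity and sessionFactory
as in the report, and BATCH_SIZE and TOTAL are illustrative names chosen here, not
Hibernate settings.

```java
// Sketch: split the work into n small transactions so the
// FullTextIndexEventListener queue is drained at each commit
// instead of accumulating 100000 pending Lucene updates.
final int BATCH_SIZE = 100;     // illustrative chunk size
final int TOTAL = 100000;

for (int start = 0; start < TOTAL; start += BATCH_SIZE) {
    Session session = sessionFactory.openSession();
    session.setCacheMode(CacheMode.IGNORE);
    Transaction tx = session.beginTransaction();
    int end = Math.min(start + BATCH_SIZE, TOTAL);
    for (int i = start; i < end; i++) {
        session.save(new Item(...));
    }
    tx.commit();    // index work queued in this transaction is applied here
    session.close();
}
```

The trade-off is exactly the one stated above: a failure mid-way leaves the earlier
chunks committed and indexed, so the operation is no longer atomic.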