radhakrishna commented on HHH-4042:
-----------------------------------
If we are committing 100 records in one commit using the same StatelessSession/Session,
shouldn't Hibernate be intelligent enough to automatically send all 100 records in one
JDBC batch? Why even specify the batch_size somewhere in the configuration? Maybe it
should be exposed as a runtime setting rather than a global constant.
Will committing 1000 records using a StatelessSession with a batch size of 500 send one
batch of 500 and then another batch of 500, rather than 1000 separate inserts?
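For illustration, a minimal sketch of the pattern being discussed (the entity name
Record and its constructor are hypothetical; assumes hibernate.jdbc.batch_size=500):

{code:java}
// Uses org.hibernate.StatelessSession and org.hibernate.Transaction.
// Sketch only: with hibernate.jdbc.batch_size=500, the expectation is that
// these inserts are grouped into two JDBC batches of 500 each.
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
try {
    for (int i = 0; i < 1000; i++) {
        session.insert(new Record(i)); // hypothetical entity
    }
    tx.commit(); // all 1000 rows should be in the database after this
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}
{code}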
StatelessSession does not flush when using jdbc batch_size > 1
--------------------------------------------------------------
Key: HHH-4042
URL:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-4042
Project: Hibernate Core
Issue Type: Bug
Components: core
Affects Versions: 3.3.1
Environment: JBoss 4.2.3, Linux, java 1.6, hibernate 3.3.1, entityManager 3.4.0,
jboss seam 2.1.2, postgresql 8.3
Reporter: Gael Beaudoin
I'm using a StatelessSession to insert millions of rows: it works great and without
using much memory. But I've just seen that with a JDBC batch size of 50, for example
(<property name="hibernate.jdbc.batch_size" value="50"/> in my
persistence.xml), the last round of inserts isn't flushed to the database. For
example, with 70 inserts, only the first 50 are sent to the database.
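A minimal sketch of the scenario (the entity name Row is hypothetical; assumes
hibernate.jdbc.batch_size=50 in persistence.xml):

{code:java}
// Uses org.hibernate.StatelessSession and org.hibernate.Transaction.
StatelessSession session = sessionFactory.openStatelessSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 70; i++) {
    session.insert(new Row(i)); // hypothetical entity
}
tx.commit();     // expected: 70 rows in the table
session.close(); // observed: only the first full batch of 50 is there
{code}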
I've searched a lot about this issue, and on this thread
(https://forum.hibernate.org/viewtopic.php?f=1&t=987882&start=0) the only
solution found is to set the batch_size to 1, which is really a shame.
I've tried flushing the session, closing the JDBC connection, etc. No luck.
I'd be fine with a way to set the batch_size to 1 for this method only,
programmatically, but I haven't found any way to do that.
If you don't pay attention, it's an easy way to lose data.