[
http://opensource.atlassian.com/projects/hibernate/browse/HHH-2224?page=c...
]
Vladimir Kralik commented on HHH-2224:
--------------------------------------
Hi all,
I have a similar problem: I need to use createSQLQuery(SQL).executeUpdate().
It has to be used because Hibernate lacks support for the hint I need, and speed
is also important to me.
I recently set up a second-level cache ( Ehcache ) and was surprised to find
no objects in the cache.
A simple unit test showed that the second-level cache was working, but as soon as
I put any createSQLQuery(SQL).executeUpdate() between commands, all caches were cleared.
All means everything in all regions, including read-only caches.
So regularly executing "createSQLQuery(SQL).executeUpdate()" makes the second
level cache useless :-(
I don't agree with Patrick:
"If you know the ids of the objects you want to modify, you don't need
a bulk update operation. You can retrieve the objects from the 2nd level
cache and manipulate them directly"
It is a time problem to read thousands of
objects into memory only to delete them.
"it will definitely be worse than the "performance loss" of
another DB round trip."
But when I have a lot of caches (read-only objects),
performance goes down, because each call to "executeUpdate()" needs to
invalidate every cache, and maybe there are locking problems as well.
I think that Hibernate should have an option to switch off second-level cache cleaning
when "executeUpdate()" is called.
For me this means that this piece of code should be configurable in some way.
http://viewvc.jboss.org/cgi-bin/viewvc.cgi/hibernate/core/trunk/core/src/...
--------------
/** returns true if no queryspaces or if there are a match */
private boolean affectedEntity(Set querySpaces, Serializable[] entitySpaces) {
	if ( querySpaces == null || querySpaces.isEmpty() ) {
		return true;
	}
	...
----------------
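The excerpt above is why every region gets cleared: a bulk operation whose query spaces are unknown (empty) is treated as affecting every entity. If I read the trunk code right, declaring spaces on the native query (e.g. via SQLQuery.addSynchronizedQuerySpace) narrows the match. A minimal standalone sketch of that check (the class name AffectedEntityCheck and the table names are mine, not Hibernate's):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class AffectedEntityCheck {

    /**
     * Mirrors the logic of the affectedEntity() method quoted above:
     * with no declared query spaces the bulk operation is assumed to
     * touch everything, so every cache region is invalidated.
     */
    static boolean affectedEntity(Set<String> querySpaces, String[] entitySpaces) {
        if (querySpaces == null || querySpaces.isEmpty()) {
            return true; // unknown scope: invalidate unconditionally
        }
        for (String entitySpace : entitySpaces) {
            if (querySpaces.contains(entitySpace)) {
                return true; // the bulk operation touches this entity's table
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String[] orderTables = { "ORDERS" };
        // A native executeUpdate() with no synchronized query spaces:
        System.out.println(affectedEntity(Collections.<String>emptySet(), orderTables)); // true
        // With a query space declared that does not touch ORDERS:
        Set<String> spaces = new HashSet<String>(Arrays.asList("CUSTOMERS"));
        System.out.println(affectedEntity(spaces, orderTables)); // false
    }
}
```

So the coarse invalidation both reporters describe is the empty-set branch firing for every entity in the session factory.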
executeUpdate causes coarse cache invalidation
----------------------------------------------
Key: HHH-2224
URL:
http://opensource.atlassian.com/projects/hibernate/browse/HHH-2224
Project: Hibernate Core
Issue Type: Improvement
Affects Versions: 3.2.0.ga
Environment: Hibernate 3.2.0.ga, Oracle 9.2
Reporter: Stefan Fleiter
Attachments: bulk_testcase.zip
I am developing an application and want to mix bulk-updates
with normal hibernate operations.
The bulk updates work fine, but they invalidate the whole cache region, and I have
found no way to prevent this.
There are three options for improvement:
- Invalidate only the modified objects if the ids were given as query parameters.
- Let me deactivate the invalidation so I can invalidate the affected objects myself.
- Transform the DML to a select to gather the objects to invalidate before executing the
DML.
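The first option amounts to evicting individual ids from a region instead of clearing it. A toy sketch of the difference, using a plain map as a stand-in for a cache region (the class SelectiveInvalidation and its methods are hypothetical, not Hibernate API):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Toy cache region: per-id eviction (option 1) vs. today's region-wide clear. */
public class SelectiveInvalidation {
    private final Map<Long, Object> region = new HashMap<Long, Object>();

    void put(Long id, Object entity) {
        region.put(id, entity);
    }

    /** What Hibernate effectively does today after a bulk executeUpdate(). */
    void invalidateWholeRegion() {
        region.clear();
    }

    /** What option 1 asks for: evict only the ids bound as query parameters. */
    void invalidateIds(List<Long> ids) {
        for (Long id : ids) {
            region.remove(id);
        }
    }

    int size() {
        return region.size();
    }
}
```

With per-id eviction, entries not named in the bulk update's parameters would survive, which is exactly what the read-only regions need.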
The reference documentation does not mention caching at all:
http://www.hibernate.org/hib_docs/v3/reference/en/html_single/#batch-direct
The best documentation I've found is:
http://blog.hibernate.org/cgi-bin/blosxom.cgi/2005/07/19#dml-basic
Maybe this could be added to the reference documentation...
I already posted this at the forum
http://forum.hibernate.org/viewtopic.php?t=966775
but did not get a single answer.
I've attached a testcase for this.
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
http://opensource.atlassian.com/projects/hibernate/secure/Administrators....
-
For more information on JIRA, see:
http://www.atlassian.com/software/jira