This user is combining @EmbeddedId and @Embeddable in the tests he provided, and this results in the serialized cache key payloads differing for identical instances. Even between runs, the last bit of the payload varies, which obviously results in keys not being found in the cache.
Apart from this issue, which I'm currently investigating, I've noticed that when the CacheKey is marshalled, the entity is also marshalled as part of the key. That sounds rather inefficient, so is this user:
a) using the right pattern for @EmbeddedId + @Embeddable?
b) and is Hibernate behaving correctly here by keeping a reference to the entity from the cache key?
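On question (a), for reference: the usual @EmbeddedId pattern requires the key class to be Serializable and to define value-based equals()/hashCode(). Below is a minimal standalone sketch (class and field names invented; the JPA annotations are shown as comments so the snippet compiles without JPA on the classpath) demonstrating that two logically identical key instances then serialize to identical payloads:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.Objects;

// @Embeddable  (JPA annotation elided so this compiles standalone)
class AccountId implements Serializable {
    private static final long serialVersionUID = 1L;
    private String region;
    private int number;

    AccountId() { }
    AccountId(String region, int number) {
        this.region = region;
        this.number = number;
    }

    // Value-based equality: required for the id to work as a lookup key.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof AccountId)) return false;
        AccountId other = (AccountId) o;
        return number == other.number && Objects.equals(region, other.region);
    }

    @Override public int hashCode() {
        return Objects.hash(region, number);
    }
}

// @Entity
// class Account {
//     @EmbeddedId
//     private AccountId id;
//     ...
// }

public class EmbeddedIdSketch {

    // Serialize an object to bytes, wrapping the checked exception.
    static byte[] serialize(Object o) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) {
        byte[] a = serialize(new AccountId("EU", 42));
        byte[] b = serialize(new AccountId("EU", 42));
        // Two equal value-only keys produce identical serialized payloads.
        System.out.println(java.util.Arrays.equals(a, b)); // true
    }
}
```

If the key drags non-value state into its serialized form (for example a reference back to the entity, as in the CacheKey issue above), the payload can differ for logically identical keys.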
Sr. Software Engineer
Infinispan, JBoss Cache
We decided to handle integration between the master and metamodel
branches by doing intermediate merges from master to metamodel and
ultimately rebasing metamodel on top of master for the final integration.
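The agreed flow can be sketched on a throwaway repository (branch and file names invented; a minimal demo, not the actual project history):

```shell
# Demo: intermediate merges from master into metamodel, then one final
# rebase of metamodel on top of master for integration.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev
echo v1 > core.txt
git add core.txt
git commit -qm "base"
git branch -M master                 # normalize the default branch name
git checkout -qb metamodel
echo meta > meta.txt
git add meta.txt
git commit -qm "metamodel work"
git checkout -q master
echo v2 >> core.txt
git commit -qam "master work"
git checkout -q metamodel
git merge -q --no-edit master        # intermediate merge: pick up master
git rebase -q master                 # final integration: replays the
                                     # metamodel commits, dropping the
                                     # sync-merge commit for a linear history
git log --oneline
```

Note that the final `git rebase master` linearizes the branch: the intermediate sync-merge commits disappear and only the metamodel work is replayed on top of master.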
As we discussed in the last dev meeting, our main dev resources are moving to the new metamodel branch. I have only just found time to start working on it, and I'm not sure about the current status: where we are now, and how much is left to finish.
So I'm starting this mail hoping others can share the current status, so we can get the big picture of the work left and estimate an ETA.
I think there are 3 areas here:
1.1 basic entity binding
1.2 component binding
1.3 association binding
1.5 id binding
1.6 secondary table / join table binding ?
2. persister integration
3. test results
Strong Liu <stliu at hibernate.org>
I want to start discussing a plan to integrate the master and metamodel branches.
One point of discussion is whether we want to pull master over on to
metamodel on a regular basis or whether we want to just wait until the
"very end" and do a single painful integration. I did some searches and
found quite a few recommendations for ongoing integration; however, they
all specifically recommended merging over rebasing.
Another point of discussion is merging test and matrix back together. I
specifically asked about this on StackOverflow to get other perspectives on it.
Still not sure of the right answer there if we decide to go for the
one-big integration event. If we do decide to move to regular
integrations from master to metamodel then I think we should merge test
and matrix back together immediately and do an integration.
I'm working on the org.hibernate.ejb.test.lock.LockTest#testFindWithPessimisticWriteLockTimeoutException test; it fails on lots of databases.
So far, I have found:
SQL Server: supports NOWAIT, but not other lock timeout values
DB2: supports neither
Sybase: supports neither
By "doesn't support" I mean the timeout can't be set at the SQL statement level; they do support setting it as a DB-wide configuration.
So I'm wondering if we could handle it this way:
1. if the DB supports both, then good
2. if the DB supports only NOWAIT, and the lock timeout is set to NOWAIT, then good
2.1 if the lock timeout is set to 5 milliseconds, for example, then we call setQueryTimeout(1) // (lockTimeout + 500) / 1000 + 1
3. if the DB supports neither, then do the same as 2.1: use java.sql.Statement#setQueryTimeout
Is this acceptable? Or should we just ignore the lock timeout if the underlying DB doesn't support it?
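A minimal sketch of the fallback in steps 2.1/3, using the rounding formula proposed above (class and method names invented):

```java
import java.sql.SQLException;
import java.sql.Statement;

public class LockTimeoutFallback {

    // Convert a millisecond lock timeout to a JDBC query timeout in whole
    // seconds, always rounding up to at least 1: (millis + 500) / 1000 + 1.
    static int toQueryTimeoutSeconds(int lockTimeoutMillis) {
        return (lockTimeoutMillis + 500) / 1000 + 1;
    }

    // Apply the fallback to a statement when the DB cannot honor a lock
    // timeout at the SQL level (hypothetical usage).
    static void applyLockTimeout(Statement stmt, int lockTimeoutMillis)
            throws SQLException {
        stmt.setQueryTimeout(toQueryTimeoutSeconds(lockTimeoutMillis));
    }

    public static void main(String[] args) {
        System.out.println(toQueryTimeoutSeconds(5));    // 1 (step 2.1 above)
        System.out.println(toQueryTimeoutSeconds(1500)); // 3
    }
}
```

Note the semantics shift slightly: a query timeout aborts the whole statement after N seconds, rather than failing fast on the lock wait itself, and JDBC only offers whole-second granularity.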
Strong Liu <stliu at hibernate.org>
The new type registry capability in Hibernate 4 and above is really useful. I have implemented an integrator that auto-registers some types; however, I have found that while this works perfectly in general, the registered types are not resolved correctly when they are used within @Embeddable classes.
Is this the expected behaviour?
I have tried debugging through the sources, and it looks like the SessionFactory's TypeResolver is not used for the properties of such classes. See the comments at http://blog.jadira.co.uk/blog/2012/1/19/release-jadira-usertype-300cr1-wi....
I'm working on the programmatic mapping for HSEARCH-923 (spatial support), and I had to implement @Spatials for @Spatial as @Fields is to @Field.
Emmanuel checked my work on my GitHub repo, and some questions were raised that we would like to share with you:
- Is @Spatials the right name for such a property? It mimics @Fields, but it does not feel right.
Emmanuel proposes @Spatial.List (which I am not fond of, sorry), but it goes against Hibernate coding habits.
- @Spatial is supported at entity level and at property level. While it seems OK for an entity to have multiple locations through @Spatials (think of a user with a work and a personal address/position), it seems stranger to have @Spatials at property level.
The only use case I see is having a Coordinate property indexed in different ways by giving each @Spatial instance a different set of parameters.
What do you think ?
PS: the working branch is here
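For illustration, here is a plausible standalone shape for the @Spatial/@Spatials pair, mirroring how @Fields wraps repeated @Field annotations (attribute names invented; the real Hibernate Search annotations carry more attributes):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Single spatial index definition (hypothetical minimal attributes).
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface Spatial {
    String name() default "";
}

// Container annotation, allowing several @Spatial on one element,
// exactly as @Fields does for @Field.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@interface Spatials {
    Spatial[] value();
}

// Example: an entity with a work and a personal location, indexed separately.
@Spatials({ @Spatial(name = "work"), @Spatial(name = "home") })
class UserWithAddresses { }

public class SpatialsSketch {
    public static void main(String[] args) {
        Spatials s = UserWithAddresses.class.getAnnotation(Spatials.class);
        System.out.println(s.value().length); // 2
    }
}
```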
I have a MySQL master/slave setup which works fine with a plain JDBC datasource:
all inserts go to the master and all queries go to the slaves.
Now I'm trying to access the MySQL master/slaves through a Hibernate/Spring configuration.
I use the org.springframework.orm.hibernate3.HibernateTransactionManager
transaction manager and configure it as follows:
<property name="sessionFactory" ref="sessionFactory" />
<tx:advice id="txHbAdvice" transaction-manager="transactionManager">
<tx:method name="getPopupPage" propagation="REQUIRED" />
<tx:method name="get*" propagation="SUPPORTS" read-only="true" />
<tx:method name="find*" propagation="SUPPORTS" read-only="true" />
<tx:method name="search*" propagation="SUPPORTS" read-only="true" />
<tx:method name="*" propagation="REQUIRED" />
But when I run the application, all requests go to the master. The Tomcat
context configuration is as follows:
<Resource name="jdbc/database" auth="Container"
factory="org.apache.naming.factory.BeanFactory" user="root" password="root"
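One JDBC-level option worth checking (not part of the setup above; host and database names here are invented): MySQL Connector/J ships a ReplicationDriver that sends statements to the slaves whenever the connection is marked read-only, which Spring typically does for read-only transactions when connection preparation is enabled. A sketch of a Tomcat Resource using it:

```xml
<!-- Hypothetical sketch: host names, database name and credentials are
     placeholders. The first host in the URL is the master; the rest
     are slaves used for read-only connections. -->
<Resource name="jdbc/database" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.ReplicationDriver"
          url="jdbc:mysql:replication://master-host,slave-host/mydb"
          username="root" password="root"/>
```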
I searched the net but could not find any solution; I only found that
marking the transaction read-only does not by itself set the underlying connection to read-only.
Any help would be appreciated.
I've had Buildhive configured on some of our projects (Hibernate Validator,
Search, OGM) for the last week as an experiment.
Apologies for all the notifications it generated, especially since I didn't
warn anyone before enabling it.
Apparently it creates lots of false positives so it has been quite
noisy on all pull requests and commits.
It was super easy to set up; they have definitely set an example to follow
in terms of service usability and UI: just log in with your GitHub
account and you can set up all the projects you are an admin of with a single
click.
Now the bad news:
it's very limited: for example, it wasn't able to build Hibernate ORM,
as you can't choose the Gradle version, and it's unable to run the
Infinispan testsuite, as there is a 15-minute build time limit.
Hibernate Search builds fail all the Byteman-related tests; I guess
the JDK's tools.jar is missing from the classpath (it's not available in a JRE).
OGM wasn't that bad, though it still occasionally failed for no apparent reason.
I've contacted CloudBees to ask about the Gradle version, and I was
advised to use their standard build platform, for which the same
pull-request-review plugin will be available soon and which is much
more flexible in terms of configuration.
I think that's a reasonable suggestion: it would give us
most of the options we need, and we could get it for free as an open
source project.
It has never been my intention to replace JBoss's internal QA Jenkins
instances, nor do I think that would be possible, as only there do we have
all the different platforms and databases to test on. So I'm
exploring these options exclusively to get a filter between broken
pull requests and our reviews: if we can save some time and work more
efficiently, any additional help is welcome. Also, some of the
tasks run by the QA labs would become redundant (like most of the tests
run on H2), so we might free up resources there for other tasks by running
a selection of tests less frequently.
In conclusion: last night I disabled it, as I think it had way too
many false positives.
Shall we proceed with making a Hibernate account to set up some tasks on
the full-powered CloudBees version? Again, not with the intention of
replacing the role of the "reference CI", but only to get some extra
processing power; the preventive tests on pull requests especially
are IMHO very nice.
I don't think it's a big cost, as it's quite easy to configure and
maintain an additional set of Jenkins instances.
On a side note:
# I was looking into CloudBees anyway, as they have free MongoDB
instances, so this would help in testing Hibernate OGM.
# I've tested OpenShift too for this purpose. Apparently the
expectation is that you have to commit "to" OpenShift directly to have
it trigger a test run: it's currently not able to monitor a separate
git repository, so it didn't seem well suited; it might be a good
place to run tests of demos meant to run on AS7, though.
Of course we could "push" the Jenkins source code to it and use it as if
it were a self-made webapp, but then I think CloudBees would be more
effective, as someone else would manage the platform.