CUBRID Database Dialect for Hibernate
by Esen Sagynov
Hi,
I am the CUBRID open source RDBMS Project Manager. Our parent company, NHN, has been using Hibernate extensively in its services together with the CUBRID Database Server.
I would like to attach the CUBRIDDialect.java class file, which supports CUBRID version 8.3.0 and higher (8.3.0, 8.3.1, and the current stable 8.4.0), and which we want to submit to the Hibernate project.
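For context, the class is a standard Hibernate dialect along these lines (a minimal sketch only; the attached file contains the full set of type registrations and the CUBRID-specific SQL):

import java.sql.Types;
import org.hibernate.dialect.Dialect;

// Minimal sketch only -- the attached CUBRIDDialect.java registers many more
// column types and functions and overrides further CUBRID-specific behaviour.
public class CUBRIDDialect extends Dialect {

    public CUBRIDDialect() {
        super();
        registerColumnType(Types.BIGINT, "bigint");
        registerColumnType(Types.INTEGER, "int");
        registerColumnType(Types.DOUBLE, "double");
        registerColumnType(Types.TIMESTAMP, "timestamp");
        registerColumnType(Types.VARCHAR, "varchar($l)");
    }

    @Override
    public boolean supportsSequences() {
        // CUBRID provides serial objects, so sequence-style id generation is available
        return true;
    }
}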
If I should create an issue in JIRA, please let me know which project is the most appropriate.
Regards,
Esen Sagynov.
CUBRID Project Manager.
http://www.cubrid.org
http://twitter.com/cubrid
http://facebook.com/cubrid
13 years, 3 months
Hibernate4 artifact names, Persistence provider name, maven...
by Scott Marlow
If someone wanted to include both Hibernate 3 and Hibernate 4 in the same
project, that might be easier if the Hibernate 4 artifacts had a version
number in their names, or if the names changed for every new major release.
I don't think Maven supports using two versions of the same artifact (at the
same dependency level).
For the persistence provider name,
org.hibernate.ejb.HibernatePersistence, I'm wondering if we could have an
org.hibernate.ejb.HibernatePersistence4 in addition, which could be used
to uniquely reference the Hibernate 4.x persistence provider.
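For example, an application could then pin the provider explicitly at bootstrap via the standard "javax.persistence.provider" property (a sketch; the persistence unit name is made up and HibernatePersistence4 is the hypothetical class suggested above):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Selecting a specific provider by class name, so the Hibernate 4.x provider
// could be referenced unambiguously even with Hibernate 3 on the classpath.
public class ProviderSelectionExample {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("javax.persistence.provider", "org.hibernate.ejb.HibernatePersistence4");
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);
        emf.close();
    }
}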
I assume this is too late in the Hibernate 4 cycle to change, but I wanted
to bring the idea up.
Changing the artifact names would impact other projects that depend on
Hibernate 4, which would need to sync up with the changes as well.
What do you think?
Scott
13 years, 3 months
classloading issue when trying to add envers to as7
by Strong Liu
Hi there,
First of all, I have finished this task and the tests pass.
But :) I had to make the following changes, and I'd like to hear your thoughts before I go too far down this path.
1. Adding Envers into the org.hibernate module in AS7, so the user's app can see both the Hibernate and Envers classes
(with a separate Envers module I ran into a cyclic dependency issue).
2. Envers throws a "listeners were not registered" exception, which means Hibernate's IntegratorServiceImpl can't see the Envers classes/resources.
IntegratorServiceImpl is using java.util.ServiceLoader#load(Class<S> service), which internally uses the TCCL (I think);
that's the reason core can't see the Envers integrator.
So I created a custom ServiceLoader that uses the ClassLoaderService to find the integrator, but this doesn't work either,
since we need org.hibernate.service.classloading.internal.ClassLoaderServiceImpl#locateResources first (for META-INF/services/org.hibernate.integrator.spi.Integrator), and ClassLoaderServiceImpl uses the resourceClassLoader to do this.
By default, the resourceClassLoader is set to the applicationClassLoader in ClassLoaderServiceImpl.
Then I changed IntegratorServiceImpl to use java.util.ServiceLoader#load(Class<S> service, IntegratorServiceImpl.class.getClassLoader()); the tests pass, but this of course is not the fix.
So I changed the custom ServiceLoader to use the ClassLoaderService to locate the resources first, and then ServiceLoader.class.getClassLoader() to load them again.
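Roughly, the idea looks like this (a simplified sketch only, not the actual patch; see the commits below):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.LinkedHashSet;
import java.util.Set;

import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.classloading.spi.ClassLoaderService;

// Sketch: locate the service files through the ClassLoaderService (which knows
// about the module class loaders) instead of relying on ServiceLoader's TCCL
// lookup, then load and instantiate the listed implementations.
public class IntegratorDiscovery {

    public static Set<Integrator> discover(ClassLoaderService classLoaderService) throws Exception {
        Set<Integrator> integrators = new LinkedHashSet<Integrator>();
        for (URL url : classLoaderService.locateResources(
                "META-INF/services/" + Integrator.class.getName())) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    String impl = line.trim();
                    if (impl.isEmpty() || impl.startsWith("#")) {
                        continue; // skip blank lines and comments in the service file
                    }
                    // load the implementation class through the ClassLoaderService too
                    Class<Integrator> integratorClass = classLoaderService.classForName(impl);
                    integrators.add(integratorClass.newInstance());
                }
            }
            finally {
                reader.close();
            }
        }
        return integrators;
    }
}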
Here are the changes:
https://github.com/stliu/hibernate-core/commit/09ce5defabea8cfb87d06c3d7b...
https://github.com/stliu/jboss-as/commit/616237755626672157fb2ae565fadb16...
thoughts?
-----------
Strong Liu <stliu(a)hibernate.org>
http://hibernate.org
http://github.com/stliu
13 years, 3 months
Patch for HHH-6361 / HHH-6349 : Envers loses audit information due to a Hibernate core bug
by Scheper, Erik-Berndt
Hi,
I'd like someone to review my proposed fix of HHH-6361 (https://github.com/hibernate/hibernate-core/pull/117) for the 3.6 branch.
From a Hibernate core perspective, this may seem like a minor issue, but it is the cause of HHH-6349, where Envers loses the audit information forever when objects are added to or removed from a collection. Since there is no way to retrieve this information from the database at a later moment, this is really bad from the Envers (auditing) perspective.
The proposed fix causes both the provided testcase for HHH-6361 (the Hibernate core issue) and the testcase for HHH-6349 (the Envers issue) to pass by ensuring that after a merge() operation the snapshot value of the collection, as obtained by collectionEntry.getSnapshot(), corresponds with the database contents. This is a good thing, of course.
However, apart from the possible performance implication (though I'm not sure if there's a remedy for this), I am a bit worried about the fact that I had to fix a unit test in the Hibernate testsuite (org.hibernate.test.manytomanyassociationclass.compositeid.ManyToManyAssociationClassCompositeIdTest) to make the 3.6 build succeed. As a rule, that's a bad sign.
What happens is that after the patch this test crashes with an NPE in the hashCode() method of the MembershipWithCompositeId.Id class. The reason for the NPE is that MembershipWithCompositeId now has a non-null "Id" property whose userId and groupId properties are null. Even though it was easy to fix the test by overriding the deleteMembership() method in ManyToManyAssociationClassCompositeIdTest to set the "Id" property of MembershipWithCompositeId to null, this doesn't feel right to me.
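For illustration, the failing pattern is roughly the following (field types and method bodies are my reconstruction from the description above, not the actual test source):

import java.io.Serializable;

// Reconstruction of the failure mode only -- the real classes live in
// org.hibernate.test.manytomanyassociationclass.compositeid.
public class MembershipWithCompositeIdSketch {
    public static class Id implements Serializable {
        private Long userId;   // null after the patched merge()
        private Long groupId;  // null after the patched merge()

        @Override
        public int hashCode() {
            // NPEs once the "Id" property is non-null but its parts are null
            return userId.hashCode() + groupId.hashCode();
        }

        @Override
        public boolean equals(Object obj) {
            if (!(obj instanceof Id)) {
                return false;
            }
            Id other = (Id) obj;
            return userId.equals(other.userId) && groupId.equals(other.groupId);
        }
    }
}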
I'd really like to see a fix for HHH-6361 without these changes in ManyToManyAssociationClassCompositeIdTest, but I couldn't find one. Any help there would be appreciated of course.
After an initial review, I'd be more than happy to provide a fix for the Hibernate 4.0.x series, which is also plagued by the same bug.
Regards,
Erik-Berndt
13 years, 3 months
[HSEARCH-566] Indexing null values for an @ElementCollection
by Sanne Grinovero
Hello,
Initially we dealt with null values by not indexing them; since we
have the "indexNullAs" option on @Field [1],
we allow using a custom token in the index instead, so as to make it
possible to search for entities having that field set to null.
If you have a
@ElementCollection
@Field(indexNullAs = "nullToken")
String[] tags
with the value {"A", "B", null},
I would expect this to be encoded in the index with three field
writes, all named "tags", and having values "A", "B", "nullToken".
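For concreteness, a minimal mapping along those lines would be (the entity and field names are made up; the annotations are the real ones from the example):

import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed
public class Article {

    @Id
    private Long id;

    // Expected index writes for tags = {"A", "B", null}:
    // three fields named "tags" with values "A", "B" and "nullToken".
    @ElementCollection
    @Field(indexNullAs = "nullToken")
    private String[] tags;
}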
But what if the array value is null by itself?
Personally I would expect it to *not* write anything, and not encode the
"nullToken", which is reserved for an element being null: not having
the collection at all means, in my opinion, that there are no elements,
similar to having an empty collection.
An alternative would be to write the "nullToken" in both cases,
but I don't like this much, as it would imply that the item matches
a search along the lines of "all items having a null tag", while in
fact it has no tags at all.
I'd avoid adding more options, as that would require a new annotation, or
adding options to other annotations which only apply in this case.
What do others think?
Sanne
1 - http://docs.jboss.org/hibernate/search/3.4/api/org/hibernate/search/annot...
13 years, 3 months
[Search] Serialization issues of Lucene Work
by Sanne Grinovero
#### background ####
Search used to be split into two main components:
engine ---> indexing backend
- there was a contract (public API) between the two so that indexing
backends could be replaced
- cluster configuration needed the "-->" invocation to be replaced by
an RPC, therefore forcing the parameters required by this public API
to be Serializable.
These parameters are essentially a List<LuceneWork>, where a
LuceneWork instance contains some simple primitives and an instance of
org.apache.lucene.document.Document.
This Document *used* to be Serializable, and so this worked fine, with
the minor inconvenience that we could not add more properties to
LuceneWork without introducing some painful class incompatibilities in
clustered deployments.
The Lucene project decided that maintaining the guarantees of
implementing Serializable is too much of a burden, and in fact
NumericField has never been Serializable, hence this bug has been open on
Search since we introduced NumericField support:
HSEARCH-681 - NotSerializableException when NumericField gets
serialized in JMSBackendQueueProcessor
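To illustrate the failure mode (Lucene 3.x-era API; the field name and value are arbitrary):

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericField;

// Demonstrates the problem behind HSEARCH-681: Document itself can be
// serialized, but a NumericField added to it does not implement Serializable,
// so plain Java serialization of the work queue blows up.
public class NumericFieldSerializationDemo {
    public static void main(String[] args) throws Exception {
        Document doc = new Document();
        doc.add(new NumericField("price").setIntValue(42));

        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
        out.writeObject(doc); // throws java.io.NotSerializableException
    }
}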
#### new architecture ####
In Hibernate Search 4 there is an additional level of indirection in front of
the actual communication; it looks like
engine ---> index manager --> backend
and both components are replaceable; in fact you could plug in an
IndexManager which deals with backends in a totally different way, so
the RPC channel can use a different format which is not mandated by
the API (the second indirection still defines an interface, and by
using that you can reuse a larger set of provided components, but you
don't have to).
Example: an Infinispan IndexManager would not use a standard backend
but rather use Infinispan's own communication channels to send write
operations to the index. It could still use a JMS backend by
assembling the existing components.
#### the problem ####
So we still have to find a way to serialize the Document instances, tracked by
HSEARCH-757 - Explicitly control binary format of communication with
the backend
I started this mail from the architecture to clarify that we don't
need to replace the API making use of LuceneWork instances, which is
doing a pretty good job (and is not necessarily the final API for v.
4.0).
We also don't need to mandate a specific binary format, as this could
be a detail left to different backends; but certainly all
implementations would need to deal with this, so we need a helper
service that could be reused by JMS backends, JGroups, Infinispan, and
possibly others.
As soon as we have such a toy, implementing a new Infinispan
IndexManager is going to be pretty easy, so I'm looking forward to
this as a great means to simplify configuration (and to have it working
with NumericFields); it's also possible that other fields in the
Lucene implementation might drop Serializable soon.
# Solution option A)
Code a new utility from scratch which provides this bi-directional
transformation:
List<LuceneWork> <--> byte[]
Pros:
- flexible, lovely do-it-yourself with no dependencies.
Cons:
- since Lucene doesn't want to care about Serializable, it's possible
that they will sneak in new or different fields without notice
in minor releases. This is going to need excellent tests, as it
requires manual code inspection and will become a maintenance overhead
(more than usual).
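To make option A concrete, such a contract could look roughly like this (a sketch only; the interface name and methods are illustrative, not a proposed API):

import java.util.List;

import org.hibernate.search.backend.LuceneWork;

// Illustrative sketch for option A: a hand-rolled, bi-directional
// List<LuceneWork> <--> byte[] transformation that backends could share.
public interface LuceneWorkSerializer {

    /** Encode a unit of work into a wire format we fully control. */
    byte[] toSerializedModel(List<LuceneWork> works);

    /** Decode the wire format back into LuceneWork instances. */
    List<LuceneWork> toLuceneWorks(byte[] data);
}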
# Solution option B)
Use JBoss Marshalling to implement the same. We will likely still need
to write the details of how to externalize specific Lucene classes,
but it's supposed to provide many high-performance helpers.
Pros:
- via Infinispan we already depend on this, but that applies only to
the hibernate-search-infinispan module.
- when Lucene changes the class format, it will help us deal with it, as
it adapts to the class definition (we might notice it more easily).
Cons:
- it will add more dependencies to hibernate-search-core, unless we split
out all the support for clustering into sub-modules.
- while it adapts to the class format, the produced byte[] streams will
be incompatible; we can deal with this by storing example streams in
constants and using them in tests.
# Solution option C)
Don't serialize the Document at all, but send over only the metadata
we need encoded in a different ad hoc structure.
## Solution option C+JBM)
Even doing so, we could optionally introduce JBoss Marshalling to avoid
slow Java serialization.
Pros:
- better isolation from Lucene changes
Cons:
- slower "time to market" to expose new Lucene features: until we add
it, people won't be able to use it.
- We might forget some use case/ make wrong assumptions on the data,
making it impossible for people to workaround it unless they plug a
different backend implementation.
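As a rough illustration of the kind of ad hoc structure option C implies (all names here are made up):

import java.io.Serializable;

// Made-up illustration of option C: capture only the metadata we need to
// rebuild the Lucene Document on the receiving side, instead of serializing
// org.apache.lucene.document.Document itself.
public class SerializableFieldData implements Serializable {

    public enum Kind { STRING, NUMERIC_INT, NUMERIC_LONG, NUMERIC_FLOAT, NUMERIC_DOUBLE }

    private final String name;          // field name
    private final Kind kind;            // how to rebuild the field
    private final String stringValue;   // set for STRING fields
    private final Number numericValue;  // set for NUMERIC_* fields

    public SerializableFieldData(String name, Kind kind, String stringValue, Number numericValue) {
        this.name = name;
        this.kind = kind;
        this.stringValue = stringValue;
        this.numericValue = numericValue;
    }

    public String getName() { return name; }
    public Kind getKind() { return kind; }
    public String getStringValue() { return stringValue; }
    public Number getNumericValue() { return numericValue; }
}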
####
WDYT?
[Davide, you're in CC as we were considering upgrading your
contributor status from beginner to do some more hardcore stuff... how
would you feel about getting this one assigned?]
Cheers,
Sanne
13 years, 3 months
Plural attributes and the metamodel
by Steve Ebersole
I started working on support for sets but soon realized that support for
collections in general was basically non-existent. So I started working
on that.
Here is what I have so far, if anyone wants to look it over:
https://github.com/sebersole/hibernate-core/tree/HHH-6503
It is still early on, but the basic ideas are there. Basically, I break
down the "source" aspect of mapping collections into the individual
pieces of information we need about persistent collections:
1) general meta about the collection (java type, hibernate type, etc) :
PluralAttributeSource
2) info about the collection key : PluralAttributeKeySource
3) info about the collection elements : PluralAttributeElementSource
Eventually we will need to describe the key/index of maps and lists, but
this is a start.
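For anyone skimming, the shape of those pieces is roughly the following (illustrative only; the actual interfaces on the branch differ):

// Illustrative sketches only -- not the actual interfaces on the HHH-6503 branch.
interface PluralAttributeSource {
    String getName();                                 // (1) general meta: attribute name
    String getCollectionJavaType();                   // (1) general meta: java type (Set, List, Map, ...)
    PluralAttributeKeySource getKeySource();          // (2) info about the collection key
    PluralAttributeElementSource getElementSource();  // (3) info about the collection elements
}

interface PluralAttributeKeySource {
    // columns and/or referenced attribute making up the collection key (foreign key)
}

interface PluralAttributeElementSource {
    // nature of the elements: basic value, component, one-to-many, many-to-many, ...
}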
--
steve(a)hibernate.org
http://hibernate.org
13 years, 3 months
When do I use Hibernate with a JPA/Hibernate config?
by max max
Hi people, I don't understand when I can use Hibernate.
My configuration is OK now, but I use only the JPA library because I cannot find a way to use DetachedCriteria.
Thanks, people
13 years, 4 months