JPA API jar artifacts
by Steve Ebersole
I am contemplating duplicating[1] our existing JPA API jars to use a
better GAV naming scheme, specifically the GAV naming scheme we plan on
adopting for any new JPA specs. We used a completely different naming
scheme for 1.0 than we did for 2.0 and 2.1, and even for 2.0 and 2.1 we
put the JPA version in the artifactId rather than in the version portion
of the GAV.
The new scheme being proposed would be to use the groupId we have been
using for 2.0/2.1 ("org.hibernate.javax.persistence"). We would use the
artifactId we have been using for 2.0/2.1, but without the 2.0/2.1
portion. Currently, for example, we have "hibernate-jpa-2.1-api" as the
artifactId; this would become just "hibernate-jpa-api". We'd then move
the JPA version into the version portion of the GAV. Essentially the GAV
version would be broken into buckets, with the JPA version taking up the
first two positions, followed by a "bugfix" position and then a qualifier.
Given 1.0, 2.0 and 2.1 that would give us:
1) org.hibernate.javax.persistence:hibernate-jpa-api:1.0.0.Final.jar
2) org.hibernate.javax.persistence:hibernate-jpa-api:2.0.0.Final.jar
3) org.hibernate.javax.persistence:hibernate-jpa-api:2.1.0.Final.jar
I would only duplicate the latest release of each of 1.0, 2.0 and 2.1 into
the new naming.
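For consumers, a dependency on the 2.1 artifact under the proposed scheme
would then simply be:

```xml
<dependency>
    <groupId>org.hibernate.javax.persistence</groupId>
    <artifactId>hibernate-jpa-api</artifactId>
    <version>2.1.0.Final</version>
</dependency>
```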
Moving forward, the only things that would "change" are the qualifiers,
if/as we start working on new spec versions, and possibly the "bugfix"
portion (the last '0') if we encounter problems in the JPA API jars after
the fact (normal bugfix semantics). We are discussing standardizing on this
across the JBoss community and specifically discussing how to handle the
qualifiers for ongoing work. One option would be a new qualifier
"Draft". It fits reasonably well into the existing (OSGi-defined)
alphabetical sorting of qualifiers, aside from the Draft->Final jump (what
about "Proposed Final Drafts"?). Personally I do not like the direct tie to
specific spec Drafts; I know I sometimes publish spec jars that do not map
cleanly to a Draft. I would prefer using Beta for Drafts, CR for Proposed
Final Drafts and Final for, well, Final Drafts. We'll have to see how that
works itself out though.
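For reference, the OSGi-style comparison of qualifiers is plain
lexicographic string comparison, so it is easy to check where "Draft" would
land (a quick sketch):

```java
import java.util.Arrays;
import java.util.List;

public class QualifierOrder {

    // OSGi compares version qualifiers as plain strings, so sorting a list
    // of qualifiers lexicographically shows the order releases would take
    static List<String> sorted(String... qualifiers) {
        List<String> list = Arrays.asList(qualifiers.clone());
        list.sort(String::compareTo);
        return list;
    }

    public static void main(String[] args) {
        // Note "CR" sorts before "Draft": a Proposed Final Draft published
        // after a Draft would still compare as older
        System.out.println(sorted("Final", "Draft", "CR", "Beta", "Alpha"));
        // prints [Alpha, Beta, CR, Draft, Final]
    }
}
```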
Anyway, any issues/concerns with duplicating these historical artifacts?
[1] I am thinking of duplicating rather than "relocating" since I am not
sure how well tools handle relocated artifacts in general. In fact, I
think tools (Maven itself included) simply fail to resolve relocated
artifacts.
[Parser] Prefixes used for log messages
by Gunnar Morling
Hi all,
Emmanuel and I are wondering which prefix ("project code" in JBoss
Logging nomenclature) should be used for log messages created by the
parser component.
I can see the following possibilities:
1) Use HHH-... as in ORM, using a reserved interval of ids
2) Use a new prefix such as HQLPARSER in all messages of all parser
components (currently hqlparser-core and hqlparser-lucene), using a
reserved id interval for each such component
3) Use a specific prefix for each parser component, e.g. HQLPARSER, HQLLUCN
etc.
I think 3) is the simplest from a dev perspective (no ranges to consider),
but it may cause a proliferation of prefixes, possibly confusing users. 1)
may be irritating when using the parser in alternative contexts such as
ISPN. As an indicator, I feel it makes sense to use different prefixes for
code bases living in different repos and with independent release cycles (as
is the case with ORM and the parser). So I'd vote for 2).
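To make option 2 concrete: JBoss Logging renders a message id as the
project code followed by a zero-padded number (e.g. "HHH000123" in ORM), so
the two parser components would differ only in the id interval they draw
from. A small sketch (the concrete ranges below are made up for
illustration):

```java
public class LogIds {

    // JBoss Logging renders a message id as the project code followed by
    // a zero-padded six-digit number, e.g. "HHH000123" in ORM
    static String messageId(String projectCode, int id) {
        return String.format("%s%06d", projectCode, id);
    }

    public static void main(String[] args) {
        // Option 2: one shared prefix, with a reserved id interval per
        // component (the ranges here are made up for illustration)
        System.out.println(messageId("HQLPARSER", 1));     // hqlparser-core range
        System.out.println(messageId("HQLPARSER", 10001)); // hqlparser-lucene range
    }
}
```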
Any thoughts?
--Gunnar
[OGM] Polyglot module: wdyt ?
by Guillaume SCHEIBEL
Hello everyone,
Following a discussion I had with a colleague of mine about polyglot
persistence, I was wondering whether it would be possible to store
entities in one datastore and associations in another.
Let's take an example:
I'm developing an application like a social network: it would be
interesting to suggest new relations, so a graph database is a good choice
for storing the relations between people, while profile information can
easily be stored in a document store.
It can really be tough, though, to manage consistency between the two
datastores by hand.
So here comes the polyglot module idea. We already have dialect methods for
entities and methods for associations; we just need to delegate each call
either to the dialect responsible for the entities or to the dialect
responsible for the associations.
Starting from here I had 2 choices:
- create a module and some specific submodules to enable polyglot
capabilities
- create a module and invoke the existing modules (Infinispan, Ehcache and
MongoDB), manage the lifecycle of their datastore provider and dialect and
then delegate the appropriate calls
I chose the second option because it would take a significant amount of
work to recreate new dialects (for "polyglot" MongoDB, Infinispan and
Ehcache) and to maintain both the "classic" modules and their "polyglot"
siblings.
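To make the idea concrete, the delegation boils down to something like the
following sketch (the interfaces below are simplified stand-ins, not the
actual OGM GridDialect contracts):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for the entity/association halves of a dialect;
// these are NOT the real OGM GridDialect contracts
interface EntityOperations {
    void putEntity(String key, Map<String, Object> tuple);
}

interface AssociationOperations {
    void putAssociation(String key, Map<String, Object> rows);
}

// The polyglot dialect itself only routes: entity calls go to one backing
// dialect, association calls to another
class PolyglotDialect implements EntityOperations, AssociationOperations {

    private final EntityOperations entityDelegate;
    private final AssociationOperations associationDelegate;

    PolyglotDialect(EntityOperations entityDelegate, AssociationOperations associationDelegate) {
        this.entityDelegate = entityDelegate;
        this.associationDelegate = associationDelegate;
    }

    @Override
    public void putEntity(String key, Map<String, Object> tuple) {
        entityDelegate.putEntity(key, tuple); // e.g. the MongoDB dialect
    }

    @Override
    public void putAssociation(String key, Map<String, Object> rows) {
        associationDelegate.putAssociation(key, rows); // e.g. a graph store
    }
}

public class PolyglotSketch {
    public static void main(String[] args) {
        // Two in-memory "datastores" standing in for the real providers
        Map<String, Map<String, Object>> documentStore = new HashMap<>();
        Map<String, Map<String, Object>> graphStore = new HashMap<>();

        PolyglotDialect dialect = new PolyglotDialect(documentStore::put, graphStore::put);
        dialect.putEntity("Profile:1", Map.of("name", "Alice"));
        dialect.putAssociation("Profile:1.friends", Map.of("friend", "Profile:2"));

        System.out.println(documentStore.keySet()); // prints [Profile:1]
        System.out.println(graphStore.keySet());    // prints [Profile:1.friends]
    }
}
```

The real module additionally has to manage the lifecycle of both backing
datastore providers, but the routing itself stays this simple.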
I have tested the new module with all 6 combinations and all tests are
passing; the documentation has been updated and a branch is available on my
GitHub repo [1].
So now I'm coming to you with a simple question:
WDYT ?
Cheers,
Guillaume
[1] https://github.com/gscheibel/hibernate-ogm/tree/polyglot
HSEARCH - Different analyzers for Indexing and Querying
by Guillaume Smet
Hi,
Note: this is just a prospective idea I'd like to discuss. Even if
it's a good idea, it's definitely 5.0 material.
Those who have used Solr and are familiar with the Solr schema have
already seen the ability to use different analyzers for indexing and
querying.
It's usually useful when you use analyzers which return several
tokens for a given token: the QueryParser usually can't build the
correct query with these analyzers.
To take an example from my current work on HSEARCH-917 (soon to come
\o/), I have the following case: from "i-pod", the analyzer builds "ipod",
"i", "pod" and "i-pod". "ipod" and "i-pod" aren't the issue here, but the
fact that "i pod" spans two tokens makes the QueryParser build an incorrect
query (even with Lucene 4.4, which is a little bit smarter about these
cases and at least makes the "i-pod" -> "ipod" case work correctly).
The fact is that if the analyzer used at indexing time has correctly
indexed all the tokens, I don't need to expand the terms at query time:
it should be sufficient to use a simple analyzer that lowercases the
string and removes the accents.
Solr introduced this feature a long time ago (it was already there in
the good old times of 1.3) and I'm wondering if we shouldn't introduce
it in Hibernate Search too.
As for the implementation, I was thinking about adding an attribute
queryAnalyzer to the @Field annotation. I was also wondering if we
shouldn't add the ability to define an Analyzer for wildcard queries
(Lucene recently introduced an AnalyzingQueryParser to do something
like that).
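The usage I have in mind would look something like this (purely
hypothetical, since the attribute does not exist today; "text" and
"text-query" are assumed analyzer definition names):

```java
// Hypothetical: queryAnalyzer is the proposed new attribute
@Field(analyzer = @Analyzer(definition = "text"),
       queryAnalyzer = @Analyzer(definition = "text-query"))
private String description;
```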
And maybe, in this case, it would be a good idea to centralize the
configuration with types, as is done in Solr? Usually, the three
analyzer definitions would come together.
As for my particular needs, most of my full text fields would be
analyzed like this:
indexing:

@AnalyzerDef(name = HibernateSearchAnalyzer.TEXT,
    tokenizer = @TokenizerDef(factory = WhitespaceTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
        @TokenFilterDef(factory = WordDelimiterFilterFactory.class, params = {
            @org.hibernate.search.annotations.Parameter(name = "generateWordParts", value = "1"),
            @org.hibernate.search.annotations.Parameter(name = "generateNumberParts", value = "1"),
            @org.hibernate.search.annotations.Parameter(name = "catenateWords", value = "1"),
            @org.hibernate.search.annotations.Parameter(name = "catenateNumbers", value = "0"),
            @org.hibernate.search.annotations.Parameter(name = "catenateAll", value = "0"),
            @org.hibernate.search.annotations.Parameter(name = "splitOnCaseChange", value = "0"),
            @org.hibernate.search.annotations.Parameter(name = "splitOnNumerics", value = "0"),
            @org.hibernate.search.annotations.Parameter(name = "preserveOriginal", value = "1")
        }),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class)
    }
)

querying:

@AnalyzerDef(name = HibernateSearchAnalyzer.TEXT,
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class)
    }
)

wildcard:

@AnalyzerDef(name = HibernateSearchAnalyzer.TEXT,
    tokenizer = @TokenizerDef(factory = WhitespaceTokenizerFactory.class),
    filters = {
        @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
        @TokenFilterDef(factory = LowerCaseFilterFactory.class)
    }
)
I could contribute time to work on this if we can agree on the way to
pursue this idea.
Thanks for your feedback.
--
Guillaume
[Search] Why is global @AnalyzerDefs scanning limited to @Entity?
by Guillaume Smet
Hi,
We would like to declare our global @AnalyzerDefs on a class which
isn't a specific entity.
For the @TypeDefs of Hibernate ORM, we do it by declaring the @TypeDefs on
a class annotated with @MappedSuperclass, but with @AnalyzerDefs we
are forced to declare them on a concrete @Entity.
Is this an oversight, or is it due to the way Hibernate Search
initialization is done and something we cannot change?
I know it's only cosmetic but it's a bit ugly to have a dummy entity
and a dummy table in our core framework just to be able to declare
global @AnalyzerDefs.
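Ideally, we could write something like the following (a sketch of the
desired usage, which does not work today; the class name and analyzer name
are just examples):

```java
// Desired: a non-entity class holding the global analyzer definitions
@MappedSuperclass
@AnalyzerDefs({
    @AnalyzerDef(name = "default-text",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class))
})
public abstract class SearchConfigurationBase {
}
```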
Thanks for your feedback.
--
Guillaume
Facing Problem while Implementing Nested Selects (Count(*)) with CriteriaBuilder
by sudipta deb
Hi Guys,
I am facing an issue implementing the query below with the Hibernate
Criteria API. Could you please help me implement it?
The Query is:
SELECT outCountry.COUNTRY_ID, outCountry.COUNTRY_DESCRIPTION,
outCountry.COUNTRY_TENANT,
(SELECT COUNT(*) FROM POPULATION population, LOCALITY locality,
STATE state, COUNTRY inCountry
WHERE population.LOCALITY_ID = locality.LOCALITY_ID
AND locality.STATE_ID = state.STATE_ID
AND state.COUNTRY_ID = inCountry.COUNTRY_ID
AND inCountry.COUNTRY_ID = outCountry.COUNTRY_ID
AND population.POPULATION_GENDER = "MALE") as MALE_COUNT,
(SELECT COUNT(*) FROM POPULATION population, LOCALITY locality,
STATE state, COUNTRY inCountry
WHERE population.LOCALITY_ID = locality.LOCALITY_ID
AND locality.STATE_ID = state.STATE_ID
AND state.COUNTRY_ID = inCountry.COUNTRY_ID
AND inCountry.COUNTRY_ID = outCountry.COUNTRY_ID
AND population.POPULATION_GENDER = "FEMALE") as FEMALE_COUNT
FROM COUNTRY outCountry;
The DDL Script is attached along with this email.
I am able to implement this with a native query, but my requirement is to
do it with CriteriaBuilder.
I searched Google and found a few pages saying that a sub-select/nested
select with count is not possible with Criteria, but none of them gave a
concrete reason why. Could you please help me in this regard?
Any help is highly appreciated.
With regards
Sudipta Deb.
Re: [hibernate-dev] [hibernate-hql-parser] 1.0.0.Alpha3 released
by Gunnar Morling
Adrian,
Do you happen to have an isolated test case which reproduces that issue?
That would help me analyze it.
Thanks,
--Gunnar
2013/8/14 Adrian Nistor <anistor(a)redhat.com>
> Thanks a lot guys!
>
> So far everything works fine with your latest fixes in the HQL parser.
> There is, however, that issue in hibernate-search 4.4.0.Alpha1 which
> prevents us from integrating this into master right now, because too many
> infinispan-query tests would fail (due to
> https://hibernate.atlassian.net/browse/HSEARCH-1318). Do you have an ETA
> for it?
>
> Adrian
>
>
> On 08/13/2013 11:17 PM, Gunnar Morling wrote:
>
> 2013/8/13 Sanne Grinovero <sanne(a)hibernate.org>
>
>> @Gunnar I've included your last pull, tagged it and uploaded it already.
>>
>
> Awesome Sanne, thanks.
>
>
>> I've sent a pull to OGM including your tests and more tests to verify
>> the LIKE behavior; I think they are quite comprehensive and are
>> passing, but if you are able to add more failing ones please send them
>> to me :-)
>>
>
> Ok, will check it out tomorrow.
>
>>
>> No changes were needed, it just worked fine. I suspect you didn't
>> take case sensitivity into consideration when trying? It is case
>> sensitive, and should not be applied to analyzed fields.
>>
>
> I thought I used the correct casing, but it might be that I got it wrong.
> Either way, nice that it's working! Thanks for checking it out and adding
> more tests.
>
>>
>> @Adrian, this should contain all the fixes you needed; if not we don't
>> mind making more releases soon.
>>
>
> Right, just let us/me know in case you need anything more.
>
>>
>> Cheers,
>> Sanne
>>
>
> --Gunnar
>
>
>
>