HHH-3555
by Gail Badner
I've created a new pull request for HHH-3555. [1]
A new pull request was needed because there have been lots of changes in
master since the original pull request [2] was created.
I would like to get this pushed to master and, if possible, to 5.0 branch.
There are some questions in the pull request I need answered before moving
forward.
Could someone familiar with Envers please take a look at [1] when you have
a chance?
Thanks!
Gail
[1] https://github.com/hibernate/hibernate-orm/pull/1079
[2] https://github.com/hibernate/hibernate-orm/pull/847
10 years, 3 months
Multi-level Fetch Joins
by Gail Badner
Is the only JPA-compliant way to do a multi-level fetch join to use entity
graphs?
JPA 2.1 does not support fetch joins using an alias at all. JSR 338,
4.4.5.3 Fetch Joins says,
"It is not permitted to specify an identification variable for the objects
referenced by the right side of the FETCH JOIN clause, and hence references
to the implicitly fetched entities or elements cannot appear elsewhere in
the query. "
(I know that HQL supports using an alias for nested fetch joins. [1][2])
Section 4.4.5.3 (Fetch Joins) of JSR 338 also gives the grammar:
fetch_join ::= [ LEFT [OUTER] | INNER ] JOIN FETCH
join_association_path_expression
If I understand correctly, the definition of
join_association_path_expression does not allow for join fetching a nested
association using a path, as in:
select c from Cat c join fetch c.father join fetch c.father.mother <= not
supported by JPA or HQL
(There is an open Jira for supporting nested join fetches using HQL:
HHH-8206. [3])
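By contrast, the JPA 2.1 entity-graph route would look roughly like this (a
sketch only, assuming a standard EntityManager `em` and the Cat entity from
the example above; not meant as a drop-in):

```java
// JPA 2.1: declare Cat -> father -> mother as a fetch graph for the query
EntityGraph<Cat> graph = em.createEntityGraph( Cat.class );
graph.addSubgraph( "father" ).addAttributeNodes( "mother" );

List<Cat> cats = em.createQuery( "select c from Cat c", Cat.class )
        .setHint( "javax.persistence.fetchgraph", graph )
        .getResultList();
```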
Is there a JPA 2.0-compliant way to do this (without entity graphs)?
Thanks,
Gail
[1]
http://docs.jboss.org/hibernate/orm/4.3/manual/en-US/html_single/#queryhq...
[2]
https://docs.jboss.org/hibernate/orm/5.0/userGuide/en-US/html_single/#d5e...
[3] https://hibernate.atlassian.net/browse/HHH-8206
envers + classic QueryTranslator failing test
by andrea boriero
I'm working on https://hibernate.atlassian.net/browse/HHH-9996
and I stumbled across the following test:
org.hibernate.envers.test.integration.basic.ClassicQueryTranslatorFactoryTest
I ran the test not only with Derby but also with PostgreSQL, and it fails
on both.
The query:
select e__ from org.hibernate.envers.test.entities.IntTestEntity_AUD e__
where e__.originalId.REV.id = (
    select max(e2__.originalId.REV.id)
    from org.hibernate.envers.test.entities.IntTestEntity_AUD e2__
    where e2__.originalId.REV.id <= :revision
      and e__.originalId.id = e2__.originalId.id
)
and e__.REVTYPE <> :_p0 and e__.originalId.id = :_p1
causes the following error in Derby:
java.sql.SQLException: An attempt was made to put a data value of type
'byte[]' into a data value of type 'SMALLINT'.
while in PostgreSQL the error is:
org.postgresql.util.PSQLException: ERROR: operator does not exist: smallint
<> bytea Hint: No operator matches the given name and argument type(s).
The problem is that
org.hibernate.hql.internal.classic.QueryTranslatorImpl$ParameterTranslations#getNamedParameterExpectedType(String
name) returns null for the _p0 parameter (the correct return type should be
RevisionTypeType), which causes the wrong SQL bind.
Any help is more than welcome!
Thanks
Andrea
Hibernate Search 5.5 Final is out: Lucene 5
by Davide D'Alto
I'm happy to announce the latest final release of Hibernate Search:
Hibernate Search 5.5 Final.
Here is an overview of what Hibernate Search 5.5 brings to the table:
- upgrade to Lucene 5
- sortable fields
- built-in bridges for JDK 8 Java Time classes
- encoding null tokens for numeric fields
You can find more details on the blog post:
http://in.relation.to/2015/09/15/HS-5/
Cheers,
Davide
ORM5 and naming strategies (or get me my foreign keys back!)
by Guillaume Smet
Hi all,
(starting with kudos to Steve for the 5 release; this is the first problem
I've found in my migration journey)
I'm currently working on porting 2 of our existing applications to ORM 5 (I
already ported our template application to start new projects).
The naming strategies are giving me a hard time: we used the
DefaultComponentSafeNamingStrategy before and there is no real equivalent
in ORM 5.
It wouldn't be a problem to port it but there are other problems which are
not directly related. For instance, the foreign keys used to be named
fk_<hash> and they are now named fk<a different hash>:
"fk421dhylghv6secx82frew7luc" FOREIGN KEY (action_id) REFERENCES
auditaction(id)
"fk_26d86etoechksvjt5xmjdbqqg" FOREIGN KEY (action_id) REFERENCES
auditaction(id)
Same for the unique keys EXCEPT for the natural ids which are still named
the old way (with a uk_ prefix):
"uk_idim50mwro7eanb1gn9p4xv01" UNIQUE CONSTRAINT, btree (unixname)
(see AnnotationBinder line 2274)
AFAICS, there's no easy way to migrate an existing application to ORM 5 if
we want to let ORM update the schema. We end up with duplicated foreign
keys/constraints.
So a few questions:
* Am I the only one who sees this as a problem?
* Shouldn't we propose naming strategies allowing a smoother transition
from ORM 4 to 5?
* Should we add more prominent warnings in the migration doc?
* Should the prefix naming be consistent (e.g. with or without an
underscore)? I personally like it better with the underscore.
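To make the transition-strategy question concrete, here is a minimal sketch
of where such a strategy could hook in, assuming the ORM 5
ImplicitNamingStrategy SPI (the class and method names are from 5.0, but the
body is hypothetical; note that the hash inputs also changed between 4 and
5, so fixing the prefix alone would not reproduce the old names):

```java
// Hypothetical sketch, not a working migration strategy: only the prefix
// convention is restored here; the hash itself still differs from ORM 4.
public class Orm4StyleNamingStrategy extends ImplicitNamingStrategyJpaCompliantImpl {
    @Override
    public Identifier determineForeignKeyName(ImplicitForeignKeyNameSource source) {
        Identifier name = super.determineForeignKeyName( source );
        // rewrite "fkXXXX" to "fk_XXXX" to match the old prefix convention
        return Identifier.toIdentifier( "fk_" + name.getText().substring( 2 ) );
    }
}
```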
--
Guillaume
PK columns and nullability
by Steve Ebersole
We have a Pull Request[1] to add a feature that allows Dialects to enforce
that all columns making up a primary key are defined as non-nullable.
Specifically, apparently Teradata will barf if the PK is defined over
columns that are nullable.
The PR focuses on exporting the table/pk. However, Hibernate overall makes
the assumption that all PK columns are non-nullable. So I wonder if we
ought to just enforce that in the mapping layer. Thoughts?
[1] https://github.com/hibernate/hibernate-orm/pull/1059
Bytecode enhancement and collections
by Steve Ebersole
Wanted to get some opinions. I am working on HHH-10055 which is basically
a report of problems with that "lazy loading outside of a
session/transaction" feature when used in combination with bytecode
enhancement. The initial problem was that bytecode interception was not
accounting for collection attributes properly; it was not building the
appropriate PersistentCollection to return. I changed that code to now
build the PersistentCollection.
But that led to another issue later on that made me question how I was
building the PersistentCollection during interception. Essentially I was
still trying to build an uninitialized PersistentCollection. The
interception code immediately tries to read the size of that collection as
an up-front part of its in-line dirty checking capabilities, which
triggered another interception back into a still-inconsistent state.
But what I started thinking about is the assumption that this interception
ought to prefer to return an uninitialized PersistentCollection. I now
think that is not a good assumption. Why?
Well, the idea of an uninitialized PersistentCollection comes from the
scenario of proxy-based laziness. In proxy-based laziness, consider code like:
MyEntity myEntity = session.load( MyEntity.class, 1 );
System.out.println( myEntity.getName() );
In the case of proxy-based laziness, the second line immediately causes the
entire proxy to become initialized. Part of that is to set any of its
collection attributes. However, as the collections are not accessed here
we want to further delay initializing them. But since the proxy is
initialized completely that means the only way to achieve that here is
setting an uninitialized version of the PersistentCollection as state,
which will initialize itself later when accessed.
For bytecode enhancement, the situation is a little bit different. There
we'd not even build the PersistentCollection instance until that attribute
is accessed. So in the above code the collection attributes would never be
built. So when we are in the interception code I mentioned above, we know
that something is trying to access that collection attribute specifically.
This is the difference.
Back to the initial problem... I think the solution is not just to have the
bytecode interception code build the PersistentCollection, but to also have
it make sure that the PersistentCollection is initialized.
Going back to the sample code, and adding a line:
MyEntity myEntity = session.load( MyEntity.class, 1 );
print( myEntity.getName() );
myEntity.getChildren();
In the proxy-based solution the collection is still uninitialized after
this. For bytecode interception I am proposing that the collection would
be initialized by that 3rd line. Again we could return the uninitialized
collection here, and wait for PersistentCollection to initialize itself on
first "further access" (calling size(), iterating, etc.). But I am
reasoning more from intent: because we know specifically that the
collection attribute itself was accessed, it seems reasonable to go ahead
and initialize it.
And if we do not go that route, then we need a different tack as well for
dealing with the in-line dirty checking aspect of this.
Really the only time this distinction becomes an issue is in code that
explicitly tries to check whether certain attributes are initialized. So
whereas this works for the proxy-based approach:
MyEntity myEntity = session.load( MyEntity.class, 1 );
if ( !Hibernate.isInitialized( myEntity.getChildren() ) ) {
// do something with the uninitialized collection
...
}
It will fail with the bytecode interception approach I propose, because the
call to `myEntity.getChildren()` itself causes the initialization. There
you'd have to use:
MyEntity myEntity = session.load( MyEntity.class, 1 );
if ( !Hibernate.isPropertyInitialized( myEntity, "children" ) ) {
// do something with the uninitialized collection
...
}
which has always been the suggested way to check bytecode-enhanced
initialization state, and which matches the JPA call too.
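For reference, the JPA-standard equivalent is PersistenceUtil#isLoaded (a
sketch, reusing the hypothetical MyEntity from the examples above):

```java
// JPA-standard equivalent of Hibernate.isPropertyInitialized
PersistenceUtil util = Persistence.getPersistenceUtil();

MyEntity myEntity = session.load( MyEntity.class, 1 );
if ( !util.isLoaded( myEntity, "children" ) ) {
    // the collection attribute has not been initialized yet
}
```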
So any thoughts?
Consistency guarantees of second level cache
by Radim Vansa
Hi,
I've been fixing a lot of consistency issues in Infinispan 2LC lately
and also trying to improve performance. When reasoning about consistency
guarantees I've usually assumed that we don't want to provide stale
entries from the cache after the DB commits - that means, we have to
invalidate them before the DB commit. This is a useful property if there
are some application constraints on the data (e.g. that two entities
have equal attributes). On the other hand, if we want the cache
synchronized with DB only after the commit fully finishes, we could omit
some pre-DB-commit RPCs and improve the performance a bit.
To illustrate the difference, imagine that we didn't require such
atomicity of transactions: if we update two entities in TX1, one cached
and the other not, then in TX2 we could see the updated value of the
non-cached entity but still hit the cache for the other one, seeing a
stale value, since TX1 has committed in the DB but has not yet finished
the commit on the ORM side:
A = 1, B = 1
TX1: begin
TX1: (from flush) write A -> 2
TX1: (from flush) write B -> 2
TX1: DB (XA resource) commit
TX2: read A -> 2 (handled from DB)
TX2: read B -> 1 (cached entry)
TX1: cache commit (registered as synchronization) -> cache gets updated
to B = 2
TX1 is completed, control flow returns to caller
Naturally, after TX1 returns from transaction commit, no stale values
should be provided.
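The window above exists because the cache commit runs as a transaction
Synchronization, i.e. only after the DB (XA) commit. A sketch of the
ordering, assuming a standard javax.transaction.Synchronization (the API is
real; the comments describe the 2LC behavior discussed above):

```java
// The 2LC update is registered as a Synchronization, so it runs only
// after the DB commit -- leaving the stale-read window shown above.
transaction.registerSynchronization( new Synchronization() {
    @Override
    public void beforeCompletion() {
        // runs before the DB commit; invalidating cached entries here
        // would prevent TX2 from ever seeing the stale B = 1
    }

    @Override
    public void afterCompletion(int status) {
        // runs after the DB commit; the gap between the commit and this
        // callback is where TX2 can read B = 1 from the cache
    }
} );
```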
Since I don't have any deep experience with DBs (I assume that they really
behave in the ACID way), I'd like to ask what guarantees we want from 2LC,
and whether there's anything in the session caching that would loosen this
ACIDity. I know we have the nonstrict-read-write mode (which could
implement the less strict way), but I imagine that as something that
breaks the contract a bit more, allowing even larger performance gains
(going the best-effort way without any guarantees).
Thanks for your insight!
Radim
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team