[feature request][discuss] smoother serializers integration?
by Romain Manni-Bucau
Hi guys,
Short summary: I wonder if Hibernate could get a feature to either
unproxy or freeze entities once they leave the managed context, to avoid
uncontrolled lazy loading on one side and serialization issues on the
other.
Use case example: a common example is a REST service exposing Hibernate
entities directly (which is more and more common with the microservice
"movement").
Objective: the goal is to need no steps - or far fewer of them - between
the Hibernate interaction and a potential serialization, to avoid
issues with lazy loading and unexpected loading. Today this requires
custom, Hibernate-specific logic in the serializer, which breaks the
separation of the two concerns (serialization and object
management/loading).
Implementation options I see:
1. A callback asking whether a lazy relationship should be fetched,
something like:

public interface GraphVisitor {
    boolean shouldLoad(Object rootEntity, Property property);
}

2. A utility to remove any proxy that could potentially throw an exception,
replacing the value with null or an empty collection, something like:

MyEntity e = Hibernate.deepUnproxy(entity);

3. A switch of the proxy implementation. This is close to 2 but wouldn't
require a call to any utility, just a configuration in the persistence unit.
Side note: of course all 3 options can be mixed into a single solution,
for instance implementing 3 on top of 1.
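To make option 2 concrete, here is a rough, framework-free sketch of what a deepUnproxy could look like. Everything here is an assumption: DeepUnproxy, the field-walking strategy, and the injected isInitialized predicate (standing in for Hibernate.isInitialized()) are illustrative only, not proposed API.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.Collection;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative sketch of option 2 (all names hypothetical): walk an object
// graph and replace every value the injected "initialized?" check rejects
// with null (or an empty collection), instead of triggering lazy loading.
// In Hibernate the check would be Hibernate.isInitialized(value).
final class DeepUnproxy {

    static <T> T deepUnproxy(T entity, Predicate<Object> isInitialized) {
        visit(entity, isInitialized, Collections.newSetFromMap(new IdentityHashMap<>()));
        return entity;
    }

    private static void visit(Object value, Predicate<Object> isInitialized, Set<Object> seen) {
        // skip nulls, JDK types, and anything already visited - cycles stay
        // the consumer's responsibility, as proposed below
        if (value == null || value.getClass().getName().startsWith("java.") || !seen.add(value)) {
            return;
        }
        for (Field field : value.getClass().getDeclaredFields()) {
            if (Modifier.isStatic(field.getModifiers())) {
                continue;
            }
            field.setAccessible(true);
            try {
                Object child = field.get(value);
                if (child == null) {
                    continue;
                }
                if (!isInitialized.test(child)) {
                    // uninitialized: null it out, or use an empty collection
                    // (List.of() is a simplification; real code would match
                    // the declared collection type)
                    field.set(value, child instanceof Collection ? List.of() : null);
                }
                else {
                    visit(child, isInitialized, seen);
                }
            }
            catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
    }
}
```

Note the sketch deliberately does not descend into collection elements or handle cycles beyond not revisiting them - that matches the scoping below.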
Configuration proposal: this would be activated through a property in the
persistence unit (this shouldn't be global-only IMHO, because otherwise you
can't mix two kinds of units - say one for JSF and one for JAX-RS, to be
concrete). It should also be activatable as a query hint, I think - but
that's more of a nice-to-have.
What this feature wouldn't be responsible for: cycles. If relationships are
bidirectional then the unproxied entity would still "loop" if you browse
the object graph - this responsibility would stay with the consumer, since
it doesn't depend on Hibernate directly but rather on plain object handling.
What do you think?
Romain Manni-Bucau
@rmannibucau <https://twitter.com/rmannibucau> | Blog
<https://blog-rmannibucau.rhcloud.com> | Old Blog
<http://rmannibucau.wordpress.com> | Github <https://github.com/rmannibucau> |
LinkedIn <https://www.linkedin.com/in/rmannibucau> | JavaEE Factory
<https://javaeefactory-rmannibucau.rhcloud.com>
7 years, 7 months
6.0 - concept naming
by Steve Ebersole
Before I get deeper into this persister work I wanted to discuss some of
the primary design concepts and start a discussion about naming the
interfaces/classes related to them.
I am thinking it would be best to model the tables associated with a
persister using a structure similar to the one we use for
TableSpace/TableGroup/TableGroupJoin, but in the non-specific-reference
case. For example, the Person entity might be mapped to 2 tables: the
primary PERSON table and a PERSON_SUPP secondary table. Inside the
persister we'd model the binding to those tables, including the join
criteria between them. This is distinctly different from the structure of
references to those tables as part of a specific query. So inside the
persister we'd map a structure like:
EntityPersister {
    ...
    rootTable = Table( PERSON )
    secondaryTables = {
        SecondaryTableBinding {
            joinType = INNER
            targetTable = Table( PERSON_SUPP )
            predicate = {
                EQ {
                    lhs = { ColumnBinding( Column( Table( PERSON ), "id" ) ) }
                    rhs = { ColumnBinding( Column( Table( PERSON_SUPP ), "p_id" ) ) }
                }
            }
        }
    }
}
We could simplify this if we keep the same assumption we make today, namely
that secondary tables always use the root table as the lhs. Additionally we
might consider simplifying the join predicate to use the ForeignKey, which
already defines this join predicate info:
EntityPersister {
    ...
    rootTable = Table( PERSON )
    secondaryTables = {
        SecondaryTableBinding {
            joinType = INNER
            targetTable = Table( PERSON_SUPP )
            predicate = ForeignKey {
                uid = 123456789ABC
                ...
            }
        }
    }
}
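A rough Java sketch of that persister-side shape, purely to make the structure concrete - none of these types are proposed names, just placeholders restating the pseudo-structure above:

```java
import java.util.List;

// Placeholder types restating the proposed persister-side structure;
// illustrative only, not suggested Hibernate API names.
enum JoinType { INNER, LEFT }
record Table(String name) {}
record Column(Table table, String name) {}
record ForeignKey(String uid, Column lhs, Column rhs) {}
record SecondaryTableBinding(JoinType joinType, Table targetTable, ForeignKey predicate) {}
record EntityTableStructure(Table rootTable, List<SecondaryTableBinding> secondaryTables) {}
```

The Person example then becomes a root Table("PERSON") plus one SecondaryTableBinding whose ForeignKey joins PERSON.id to PERSON_SUPP.p_id.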
Compare this to the structure for a particular query[1], which is different:
TableGroup {
    root = TableReference {
        table = Table( PERSON )
        identificationVariable = p0
    }
    tableJoins = {
        TableReferenceJoin {
            ...
        }
    }
}
Notice specifically the addition of the identificationVariable (alias),
because we might have multiple references to that same table which need to
be unique in the given query, e.g. a fetched or joined self-referential
association (person -> manager).
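The self-join case makes the need obvious: two references to PERSON must get distinct identification variables, so something has to hand them out. A toy illustration (nothing here is Hibernate code, just a sequence-based generator):

```java
// Toy illustration of why each TableReference carries its own
// identificationVariable: repeated references to the same table must get
// distinct aliases (p0, p1, ...) within one query.
final class AliasGenerator {
    private int counter = 0;

    String aliasFor(String tableName) {
        // first letter of the table plus a query-wide sequence number
        return Character.toLowerCase(tableName.charAt(0)) + Integer.toString(counter++);
    }
}
```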
First, what do you think of these proposed structures? And secondly, what
are y'all's opinions wrt the names?
FWIW there are 2 main reasons I propose modeling the persister-side
structure the way I suggest above:
1. I think it makes it easier for consumers of persisters to understand
the bindings.
2. I *think* (tbd) that it makes it easier to "clone" into the query
TableGroup structure.
Anyway, would love feedback on this.
[1] Note that the class called `TableReference` here is called
`TableBinding` in the currently pushed code, but I'd prefer to change that
and instead have the "binding" term refer to the binding of the table in
the persister model. I am open to different naming here though.
Implement UserCollectionType for non-Collection types such as Guava's Multimap
by Jan-Willem Gmelig Meyling
Hi everyone,
Out of curiosity I've tried to implement a UserCollectionType for Guava's Multimap [1], based on this article [2] that does a similar job for Apache's MultiMap. Doing so I've stumbled upon two issues, for which I'd like to receive some feedback.
As far as I can see, the UserCollectionType seems focused on mapping values to indexes. In the case of a list this would supposedly be a number; for non-ordered collections I am not sure, but for maps it seems to be the entry's key. The implementation for Apache's MultiMap is actually a Map<K, Collection<V>>, so in the UserCollectionType mentioned in that article every index actually points to a collection of values under that key. I am wondering whether this could potentially break the behaviour of the UserCollectionType class, as it seems it should be possible to dirty-check the values behind the indexes, which probably doesn't work against collections.
But there's another inconvenience I stumbled upon while implementing this. Where the Apache MultiMap extends a Map<K, Object>, Guava's Multimap does not. I therefore had to override TypeUtils, JavaXCollectionType, CollectionBinder, AttributeFactory, and PluralAttributeImpl - and most likely, if I wanted to do it well, a MapBinder and the PluralAttribute's CollectionType as well. While doing this I also found out that JavaXCollectionType.getCollectionClass() returns Map.class perfectly fine for the currently supported Map fields, even though Map does not satisfy the <? extends Collection> return type.
So it seems this is a bad idea after all ;-) But I am wondering whether I am missing something here.
a) Is there a way to map a Map<Key, Collection<Value>> as an element collection in plain Hibernate?
b) Is there another way to achieve the above and persist a Guava Multimap with a custom user type? (Assuming the UserCollectionType is not the right place to start)
c) Is Hibernate 6 going to change the answer to these questions?
d) Was the article ever right about how to persist Apache's MultiMap type, or was it based on false assumptions about the use of UserCollectionType anyway?
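For reference, the shape the article relies on can be reproduced with plain JDK maps: each "index" (key) points at a whole collection of values, which is what Apache's MultiMap exposes directly through its Map interface and what Guava's Multimap only exposes via its asMap() view. A minimal stdlib stand-in (no Hibernate or Guava involved, the class name is made up):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stdlib stand-in for the Map<K, Collection<V>> shape discussed above.
// A map-oriented UserCollectionType would see each key's value as a whole
// collection, which is exactly the dirty-checking concern raised here.
final class MultimapShape {
    private final Map<String, List<String>> backing = new HashMap<>();

    void put(String key, String value) {
        backing.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    // the view a map-based UserCollectionType would iterate over
    Map<String, List<String>> asMap() {
        return backing;
    }
}
```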
The code I’ve written for this email can be found under [3].
Feedback is much appreciated, thanks in advance!
Kind regards,
Jan-Willem
[1] http://google.github.io/guava/releases/snapshot/api/docs/com/google/commo...
[2] http://blog.xebia.com/mapping-multimaps-with-hibernate/
[3] https://github.com/JWGmeligMeyling/hibernate-multimap-type/tree/master/gu...
WebSphereExtendedJtaPlatform (HHH-11606)
by Gail Badner
Currently, in master and 5.1, WebSphereExtendedJtaPlatform will not work
for any tasks that use a DdlTransactionIsolatorJtaImpl.
The problem is that DdlTransactionIsolatorJtaImpl calls
TransactionManager#suspend and #resume on the TransactionManager returned
by WebSphereExtendedJtaPlatform#locateTransactionManager, and those methods
throw UnsupportedOperationException.
I see that Christian suggested a custom JtaPlatform to work around this
issue [1], which is basically the same as WebSphereJtaPlatform for
WebSphereEnvironment#WS_5_1.
If com.ibm.ws.Transaction.TransactionManagerFactory really has been working
since 5.1, it should be safe to update WebSphereExtendedJtaPlatform as
Christian proposed. That would be the simplest option, but I see that
com.ibm.ws.Transaction.TransactionManagerFactory is not listed in
WebSphere's Application Server API. [2]
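For context, my understanding is that this style of JtaPlatform obtains the TransactionManager reflectively from that factory class. A simplified, stdlib-only sketch of such a lookup (the class name comes from the discussion above; the null-on-absence behaviour is an illustration convenience, not what a real JtaPlatform should do):

```java
// Hedged sketch of the reflective lookup the proposed workaround relies on:
// resolve com.ibm.ws.Transaction.TransactionManagerFactory at runtime and
// call its static getTransactionManager(). Off WebSphere the class is
// simply absent, which this sketch reports as null.
final class WebSphereTmLookup {
    static Object locateTransactionManager() {
        try {
            Class<?> factory = Class.forName("com.ibm.ws.Transaction.TransactionManagerFactory");
            return factory.getMethod("getTransactionManager").invoke(null);
        }
        catch (ClassNotFoundException e) {
            return null; // not running on WebSphere
        }
        catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not obtain WebSphere TransactionManager", e);
        }
    }
}
```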
I see that WebSphereExtendedJtaPlatform uses classes
in com.ibm.websphere.jtaextensions, which are listed in WebSphere's
Application Server API. I think it would be a good thing for Hibernate to
only depend on the API. Unfortunately, I see that the API still does not
expose the TransactionManager.
Is there any way that
WebSphereExtendedJtaPlatform$TransactionManagerAdapter could implement
#suspend and #resume without referring to
com.ibm.ws.Transaction.TransactionManagerFactory? I suspect not, but
figured I'd ask.
Other than that, alternatives I see are:
1) Deprecate WebSphereExtendedJtaPlatform in favor of WebSphereJtaPlatform;
2) Change WebSphereExtendedJtaPlatform as Christian suggested (using
com.ibm.ws.Transaction.TransactionManagerFactory).
Other ideas?
Thanks,
Gail
[1]
https://hibernate.atlassian.net/browse/HHH-11606?focusedCommentId=92405&p...
[2]
https://www.ibm.com/support/knowledgecenter/en/SSEQTP_8.5.5/com.ibm.websp...
6.0 - JPA metamodel code
by Steve Ebersole
Currently Hibernate implements the JPA metamodel contracts with a series of
wrapper objects defined in the `org.hibernate.metamodel` package. This is
non-ideal because it is a wrapper approach, rather than integrating the JPA
contracts into our runtime model, which is what we are doing in these 6.0
changes (Navigable ties in these JPA contracts).
Based on that, I propose that we simply "drop" the `org.hibernate.metamodel`
package. The only question is what "drop" means here. Does it mean just
removing these from 6.0 (possibly retro-deprecating them in 5.2)? Or does it
mean we follow the deprecate-and-shadow approach?
To me this really comes down to whether we expect, or ever expected, users
to access stuff from this `org.hibernate.metamodel` package. Considering
that these classes are defined specifically in
`org.hibernate.metamodel.internal` (an internal package), I would argue
not, and that we should just remove them from 6.0 and replace their usages
with Navigable and friends.
Any objections?