HHH-10162 Inheritance and L2 cache
by Christian Beikov
Hey guys,
Steve said I should start a discussion about the possible solution for
HHH-10162 <https://hibernate.atlassian.net/browse/HHH-10162> so here we go.
While debugging the issue, I found out that the proxy is created in
DefaultLoadEventListener.createProxyIfNecessary(), where IMO it should
consult the L2 cache first by calling the existing
loadFromSecondLevelCache( event, persister, keyToLoad ). The fix looks
easy, but I am not sure of the implications. Obviously this will affect
performance a little, since it now has to consult the L2 cache.
I tried to start a discussion in the Dev room, but so far only Andrea,
Vlad and Chris have commented on it. Does anyone have a different idea for
implementing this?
--
Kind regards,
------------------------------------------------------------------------
*Christian Beikov*
6 years, 11 months
[HV] Memory footprint improvements
by Guillaume Smet
Hi,
Stuart Douglas (in CC) opened an interesting issue here:
https://hibernate.atlassian.net/browse/HV-1437
I already made some good progress here:
https://github.com/hibernate/hibernate-validator/pull/814
but I would appreciate some feedback on a few additional things.
## AnnotationMetaDataProvider#configuredBeans
So, in AnnotationMetaDataProvider, we keep a cache of the annotation
metadata found while building the aggregated BeanMetaData.
It can be useful if you have a class hierarchy, for instance, as you would
only build the annotation metadata of the parent class once.
But... as we initialize the bean metadata lazily, we need to keep this
cache for the whole lifecycle of the application.
So, here, we have to find a compromise:
1/ either favor the memory footprint, at the cost of potentially analyzing
the annotations of a given class several times in the case of a class
hierarchy
2/ or favor the startup cost (it's not a runtime cost, it's paid only once,
when validating the first bean of a given class) and accept a higher memory
footprint
Usually, in HV, we take the 1/ route, so I was a bit surprised 2/ was chosen
here.
Thoughts?
## Collection resizing
So, interestingly enough, the small CollectionHelper#toImmutable*
utilities I introduced make quite a difference if there are a lot of
empty or one-element collections - which is the general case in Stuart's
example.
The idea of these utilities is quite simple: instead of blindly wrapping a
collection with Collections#unmodifiable* to make it immutable,
we test the size of the collection and return a static empty collection
or a singleton collection if the size is 0 or 1, respectively.
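For reference, a minimal sketch of such a size-based helper for sets could look like the following (the actual CollectionHelper#toImmutable* signatures in HV may differ):

```java
import java.util.Collections;
import java.util.Set;

// Sketch of the size-based immutability helper described above; the real
// CollectionHelper method names and signatures may differ.
public final class ImmutableSketch {

    private ImmutableSketch() {
    }

    public static <T> Set<T> toImmutableSet(Set<? extends T> set) {
        switch ( set.size() ) {
            case 0:
                // shared static instance, no wrapper allocation at all
                return Collections.emptySet();
            case 1:
                // compact singleton instead of a wrapped one-element HashSet
                return Collections.singleton( set.iterator().next() );
            default:
                return Collections.unmodifiableSet( set );
        }
    }
}
```

For the 0 and 1 cases, the original (possibly oversized) backing collection becomes garbage-collectable, which is where the footprint win comes from.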
So this is nice when the size is 0 or 1, but considering that HV metadata
structures are mostly immutable once they are initialized, I would like to
go a step further and, in all cases, create a collection of the right size
when making it immutable.
We don't use that many collection types, and we could fall back to the
current wrapping method if the collection is not an ArrayList, HashSet,
LinkedHashSet, HashMap or maybe LinkedHashMap.
Basically, for sets, we would have:
public static <T> Set<T> toResizedImmutableSet(Set<? extends T> set) {
    switch ( set.size() ) {
        case 0:
            return Collections.emptySet();
        case 1:
            return Collections.singleton( set.iterator().next() );
        default:
            if ( set instanceof LinkedHashSet ) {
                // initial capacity == size with load factor 1 avoids resizing
                LinkedHashSet<T> copy = new LinkedHashSet<T>( set.size(), 1 );
                copy.addAll( set );
                return Collections.unmodifiableSet( copy );
            }
            else if ( set instanceof HashSet ) {
                HashSet<T> copy = new HashSet<T>( set.size(), 1 );
                copy.addAll( set );
                return Collections.unmodifiableSet( copy );
            }
            else {
                return Collections.unmodifiableSet( set );
            }
    }
}
It's one more memory allocation but I think reducing the runtime memory
footprint would be worth it. Especially for data structures that are very
common (think of the executable metadata, the parameter metadata and so on).
I'm not sure I would generalize it to all our projects but I think it makes
sense for HV where nearly everything is immutable and we have quite a lot
of Maps and Sets in the metadata.
Usually, Yoann and I are on the "Let's do it" side and the others on the
"I'm not sure it's worth it" side when it comes to CollectionHelper, but
considering how well the first round has paid off, I think we should at
least give it a try.
Thoughts?
## Metadata, metadata, metadata
So, we have a lot of metadata. A. Lot. Notably, we have metadata even
when there is no HV metadata at all.
Think of Keycloak's RealmEntity (
https://github.com/keycloak/keycloak/blob/master/model/jpa/src/main/java/...),
we build a lot of metadata carrying no useful information.
It's especially the method metadata that is heavy, even though I tried to
reduce it to the minimum in this case.
I once thought about completely removing the method metadata if the method
wasn't annotated at all, but then I figured the overriding rules would
prevent us from doing that.
Gunnar, am I right on this?
So basically, I think we can't really do anything about this.
I also thought it was useless to look for annotations on
java.lang.Object, but then again I think the overriding rules force us to
do so.
That's it for today.
Comments welcome.
--
Guillaume
7 years, 3 months
PESSIMISTIC_FORCE_INCREMENT lock mode
by Arnold Gálovics
Hi all,
I'm a bit confused about the mentioned lock mode.
*The doc says the following:*
*"The entity is locked pessimistically and its version is incremented
automatically even if the entity has not changed."*
I'm checking this with an H2 DB, and the current behavior is the following:
- the version attribute is incremented in advance, right after fetching
(I'm using EntityManager#find here, but with an explicit lock it should be
the same)
- the original fetching query contains the SELECT ... FOR UPDATE clause
Knowing this, it seems to me that this lock mode involves a DB lock;
however, the doc doesn't say anything about this, especially whether it's a
shared or exclusive lock.
I've checked Vlad's article about this.
https://vladmihalcea.com/2015/02/16/hibernate-locking-patterns-how-does-p...
It says the following: "*The PESSIMISTIC_FORCE_INCREMENT naming might lead
you into thinking that you are using a pessimistic locking strategy, while
in reality this Lock Mode is just an optimistic locking variation."*
So now I'm unsure what this really is.
Could you please briefly explain what I'm missing?
Thanks in advance!
Best Regards,
Arnold
7 years, 3 months
6.0 - procedure/function calls via NativeQuery
by Steve Ebersole
Another unnecessary complexity I'd like to discuss removing is the ability
to execute procedure/function calls via NativeQuery. The complexity is a
bunch of String parsing and token interpretation we need to do in order to
discover this intention. Given that both JPA and Hibernate define
specific APIs for executing procedure/function calls, this seems like
unnecessary complexity and overhead.
Objections? Thoughts?
7 years, 3 months
Null vs "empty" embeddable values
by Gail Badner
As of HHH-7610, Hibernate is supposed to treat null and empty embeddable
values as equivalent.
Should we add a requirement that an embeddable class's #equals and #hashCode
methods also treat null and "empty" values as equivalent?
That would mean that #hashCode should return 0 for all empty embeddables,
and embeddableValue.equals( null ) should return true for every empty
embeddableValue.
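As an illustration, a hypothetical embeddable (not from any real codebase) honoring that requirement could look like this:

```java
import java.util.Objects;

// Hypothetical embeddable following the proposed requirement: an instance
// with all-null properties ("empty") hashes to 0, and equals( null )
// returns true for such an empty instance.
public class AddressEmbeddable {

    private final String street;
    private final String city;

    public AddressEmbeddable(String street, String city) {
        this.street = street;
        this.city = city;
    }

    private boolean isEmpty() {
        return street == null && city == null;
    }

    @Override
    public boolean equals(Object other) {
        if (other == null) {
            // treat null as equivalent to an "empty" embeddable
            return isEmpty();
        }
        if (!(other instanceof AddressEmbeddable)) {
            return false;
        }
        AddressEmbeddable that = (AddressEmbeddable) other;
        return Objects.equals(street, that.street)
                && Objects.equals(city, that.city);
    }

    @Override
    public int hashCode() {
        // all empty embeddables hash to 0, per the proposed requirement
        return isEmpty() ? 0 : Objects.hash(street, city);
    }
}
```

Note that equals( null ) returning true technically violates the general Object#equals contract, which requires x.equals( null ) to return false - that may be another reason to hesitate before enforcing this.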
BTW, I've already pushed a fix for HHH-11881 so that nulls are not
persisted for Set elements (they already were not persisted for bags,
idbags, lists, or maps). I'm holding off on fixing HHH-11883, which would
no longer persist empty embeddable values and would ignore embeddables with
all null columns when initializing collections.
I don't think JPA has any such requirement, and I'm hesitant to enforce
something that may be at odds with JPA.
Comments or opinions?
Thanks,
Gail
7 years, 3 months
6.0 - NativeQuery#getQueryReturns
by Steve Ebersole
For 6.0 we need to address a few problems with this method.
First, although largely minor, is that we have an API contract returning
SPI references.
The larger concern is a side-effect of this method: it clears
previously registered returns if new returns have been added since the
previous call (if any) to `#getQueryReturns`. When any of the
`NativeQuery#addXYZ` methods is called, a "builder" is actually registered
under the covers. When `#getQueryReturns` is called, all previously
registered returns are cleared, the currently open builders are executed,
and their generated returns are added to the internal (emptied) list.
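Schematically, the behavior described above looks something like the following sketch (all names are hypothetical stand-ins, not the real NativeQuery internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Schematic sketch of the queuing behavior described above; the class and
// method names are hypothetical, not the actual NativeQuery implementation.
public class QueryReturnsSketch {

    private final List<Supplier<String>> queuedBuilders = new ArrayList<>();
    private final List<String> queryReturns = new ArrayList<>();

    public void addScalar(String name) {
        // an addXYZ() call only queues a builder; nothing is registered yet
        queuedBuilders.add(() -> "scalar:" + name);
    }

    public List<String> getQueryReturns() {
        // side-effect: previously registered returns are cleared...
        queryReturns.clear();
        // ...then the open builders are executed and their returns added
        for (Supplier<String> builder : queuedBuilders) {
            queryReturns.add(builder.get());
        }
        return queryReturns;
    }
}
```

The sketch shows why skipping the queue would be dangerous: an app calling only addScalar() without ever calling getQueryReturns() relies on the builders being executed somewhere before the query runs.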
If this were a clean-room impl, I would just add a `#register`-style method
to those return builders and skip the whole queuing of builders. But
that is actually a very dangerous change to make now, as it would mean that
existing apps using this API would still call `#addXYZ` but no returns
would actually be registered.
I guess the correct direction depends on whether this method is used and
for what purpose. So first: does anyone know of usages of these methods,
either from applications or integrations?
The "safest" option is to:
1. document this side-effect
2. deprecate this method - it should go away, or be rethought
And we'd still have to consider the return types. One option would be to
temporarily keep the SPI forms around (deprecated) and continue to return
those. Ideally we'd retro-deprecate these SPI contracts and the method
back in a 5.2 maintenance release and just drop them in 6.0 [1]. Of course,
if anything we know of is using this method, we would need the alternative.
Thoughts? Opinions? Suggestions?
[1] See my earlier `Continue to add 5.2 deprecations for 6.0 work?` message
to this list
7 years, 3 months