HHH-10162 Inheritance and L2 cache
by Christian Beikov
Hey guys,
Steve said I should start a discussion about the possible solution for
HHH-10162 <https://hibernate.atlassian.net/browse/HHH-10162> so here we go.
While debugging the issue, I found out that the proxy is created in
DefaultLoadEventListener.createProxyIfNecessary(), which IMO should
consult the L2 cache first by calling the existing
loadFromSecondLevelCache( event, persister, keyToLoad ) method. The fix
looks easy, but I am not sure of the implications. Obviously this will
affect performance a little, since the method has to consult the L2 cache now.
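To make the idea concrete, here is a rough sketch of the change (simplified: the real createProxyIfNecessary() has a longer signature and more branches, so treat this as an illustration only):

private Object createProxyIfNecessary(LoadEvent event, EntityPersister persister, EntityKey keyToLoad) {
    // proposed: consult the second-level cache before creating a proxy
    Object existing = loadFromSecondLevelCache( event, persister, keyToLoad );
    if ( existing != null ) {
        return existing;
    }
    // otherwise keep the current behaviour and create the proxy
    return persister.createProxy( event.getEntityId(), event.getSession() );
}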
I tried to start a discussion in the Dev room, but so far only Andrea,
Vlad and Chris have commented on this. Does anyone have a different idea
for implementing this?
--
Kind regards,
------------------------------------------------------------------------
*Christian Beikov*
6 years, 11 months
Preventing duplicate ForeignKey generation
by Milo van der Zee
Hello all,
During development of applications I'm used to setting schema creation
to 'update' (hbm2ddl.auto = update). This makes life a bit easier.
The issue with newer versions of Hibernate is that the names of the
generated keys changed, so all keys are regenerated. For the large
databases I use this takes hours, and it has to be done every time a fresh
copy from production is taken to the development environment.
I do use meaningful names for the indexes where possible. But that is not
possible for fields inherited from abstract classes, because the same
fields from the abstract class are used by many entity classes and would
end up having the same index names.
I checked the code that decides whether an index needs to be created and
found that it only checks the name of the index, not what the index
actually does. This is why I changed that piece of code to be a bit
smarter. It is designed for simple constraints from one column to
another column, not for multi-column indexes and constraints.
I created a Jira issue for it, but nobody has noticed it and there are no
comments or anything else. So now I'm trying it here :)
Jira HHH-10934 (https://hibernate.atlassian.net/browse/HHH-10934)
Code fragment I put in SchemaMigratorImpl.java:
private ForeignKeyInformation findMatchingForeignKey(ForeignKey foreignKey, TableInformation tableInformation) {
    if (foreignKey.getName() == null) {
        return null;
    }
    /*
     * Find existing keys based on the referencing column and the referenced table
     */
    String referencingColumn = foreignKey.getColumn(0).getName();
    String referencedTableName = foreignKey.getReferencedTable().getName();
    Iterable<ForeignKeyInformation> existingForeignKeys = tableInformation.getForeignKeys();
    for (ForeignKeyInformation existingKey : existingForeignKeys) {
        Iterable<ColumnReferenceMapping> columnReferenceMappings = existingKey.getColumnReferenceMappings();
        for (ColumnReferenceMapping mapping : columnReferenceMappings) {
            String existingReferencingColumn = mapping.getReferencingColumnMetadata().getColumnIdentifier().getText();
            String existingReferencedTableName = mapping.getReferencedColumnMetadata().getContainingTableInformation().getName().getTableName().getCanonicalName();
            if (referencingColumn.equals(existingReferencingColumn) && referencedTableName.equals(existingReferencedTableName)) {
                return existingKey;
            }
        }
    }
    // If not yet found, check based on the key name
    return tableInformation.getForeignKey(Identifier.toIdentifier(foreignKey.getName()));
}
Or, if you prefer the Java 8 way:
private ForeignKeyInformation findMatchingForeignKey(ForeignKey foreignKey, TableInformation tableInformation) {
    log.debug("findMatchingForeignKey");
    if (foreignKey.getName() == null) {
        return null;
    }
    /*
     * Find existing keys based on the referencing column and the referenced table
     */
    String referencingColumn = foreignKey.getColumn(0).getName();
    String referencedTableName = foreignKey.getReferencedTable().getName();
    Predicate<ColumnReferenceMapping> mappingPredicate = m -> referencingColumn.equals(m.getReferencingColumnMetadata().getColumnIdentifier().getText())
            && referencedTableName.equals(m.getReferencedColumnMetadata().getContainingTableInformation().getName().getTableName().getCanonicalName());
    for (ForeignKeyInformation existingKey : tableInformation.getForeignKeys()) {
        boolean found = StreamSupport.stream(existingKey.getColumnReferenceMappings().spliterator(), false).anyMatch(mappingPredicate);
        if (found) {
            return existingKey;
        }
    }
    // If not yet found, check based on the key name
    return tableInformation.getForeignKey(Identifier.toIdentifier(foreignKey.getName()));
}
The calling method does not use the returned value; it only checks whether
the returned value is null or not. So this could also be cleaned up by
changing the method to return a boolean, which in the Java 8 version lets
you remove the for loop and use flatMap (see the sketch below). But first
let us agree on the validity of the idea to change this piece of code.
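For illustration, a minimal sketch of that boolean variant, using the same APIs as above (hasMatchingForeignKey is just a placeholder name, and the name-based fallback is omitted for brevity):

private boolean hasMatchingForeignKey(ForeignKey foreignKey, TableInformation tableInformation) {
    if (foreignKey.getName() == null) {
        return false;
    }
    String referencingColumn = foreignKey.getColumn(0).getName();
    String referencedTableName = foreignKey.getReferencedTable().getName();
    // flatten all column reference mappings of all existing keys and check
    // whether any of them matches the referencing column / referenced table pair
    return StreamSupport.stream(tableInformation.getForeignKeys().spliterator(), false)
            .flatMap(key -> StreamSupport.stream(key.getColumnReferenceMappings().spliterator(), false))
            .anyMatch(m -> referencingColumn.equals(m.getReferencingColumnMetadata().getColumnIdentifier().getText())
                    && referencedTableName.equals(m.getReferencedColumnMetadata().getContainingTableInformation().getName().getTableName().getCanonicalName()));
}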
I hope somebody would like to have a look at it, and if there is any
chance that the idea (not this actual very quick/dirty implementation)
goes into the system, I'll clean it up and do some actual tests for more
complex database structures. I did not even check the JUnit tests yet.
At the moment it is good enough for me, but I think it could be something
more people would benefit from.
Thanks,
Milo van der Zee
7 years, 6 months
[feature request][discuss] smoother serializers integration?
by Romain Manni-Bucau
Hi guys,
Short summary: I wonder whether Hibernate could get a feature to either
unproxy or freeze entities once they leave the managed context, to avoid
uncontrolled lazy loading on one side and serialization issues on the
other.
Use case example: a common example is a REST service exposing Hibernate
entities directly (which is more and more common with the microservice
"movement").
Objective: the goal is to need no extra step - or far fewer steps -
between the Hibernate interaction and a potential serialization, to avoid
issues with lazy loading and unexpected loading. Today it requires custom,
Hibernate-specific logic in the serializer, which breaks the separation
of the two concerns (serialization and object management/loading).
Implementation options I see:
1. A callback requesting whether the lazy relationship should be fetched,
something like:

public interface GraphVisitor {
    boolean shouldLoad(Object rootEntity, Property property);
}
2. A utility to remove any proxy that could potentially throw an
exception, replacing the value with null or an empty collection (a rough
sketch follows below), something like:

MyEntity e = Hibernate.deepUnproxy(entity);
3. A switch of the proxy implementation; this is close to 2 but wouldn't
require a call to any utility, just a configuration in the persistence unit.
Side note: of course, all 3 options can be mixed to create a single
solution, like having 3 implemented based on 1, for instance.
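To make option 2 concrete, a minimal sketch (deepUnproxy is the proposed, non-existing utility; Hibernate.isInitialized() and HibernateProxy are existing API):

import org.hibernate.Hibernate;
import org.hibernate.proxy.HibernateProxy;

public final class Proxies {
    // sketch only: replace an uninitialized proxy with null instead of letting
    // it lazy-load (or fail) outside the managed context
    public static Object deepUnproxy(Object entity) {
        if (entity instanceof HibernateProxy) {
            if (!Hibernate.isInitialized(entity)) {
                return null;
            }
            return ((HibernateProxy) entity).getHibernateLazyInitializer().getImplementation();
        }
        // a real implementation would also walk the entity graph and replace
        // uninitialized persistent collections with empty ones
        return entity;
    }
}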
Configuration proposal: this would be activated through a property in the
persistence unit (this shouldn't be only global IMHO, because otherwise you
can't mix 2 kinds of units, like one for JSF and one for JAX-RS, to be
concrete). It should also be possible to enable it as a query hint, I
think - but that is more of a nice-to-have.
What this feature wouldn't be responsible for: cycles. If relationships are
bidirectional, then the unproxied entity would still "loop" if you browse
the object graph - this responsibility would stay with the consumer, since
it doesn't depend on Hibernate directly but rather on plain object handling.
What do you think?
Romain Manni-Bucau
@rmannibucau <https://twitter.com/rmannibucau> | Blog
<https://blog-rmannibucau.rhcloud.com> | Old Blog
<http://rmannibucau.wordpress.com> | Github <https://github.com/rmannibucau> |
LinkedIn <https://www.linkedin.com/in/rmannibucau> | JavaEE Factory
<https://javaeefactory-rmannibucau.rhcloud.com>
7 years, 6 months
6.0 - concept naming
by Steve Ebersole
Before I get deeper into this persister work, I wanted to lay out some of
the primary design concepts and start a discussion about naming the
related interfaces/classes.
I am thinking it would be best to model the tables associated with a
persister using a structure similar to the one we use for
TableSpace/TableGroup/TableGroupJoin, but in the non-specific-reference
case. For example, the Person entity might be mapped to 2 tables: the
primary PERSON table and a PERSON_SUPP secondary table. Inside the
persister we'd map the binding to those tables, including the join criteria
between them. This is distinctly different from the structure of
references to those tables as part of a specific query. So inside the
persister we'd map a structure like:
EntityPersister {
    ...
    rootTable = Table( PERSON )
    secondaryTables = {
        SecondaryTableBinding {
            joinType = INNER
            targetTable = Table( PERSON_SUPP )
            predicate = {
                EQ {
                    lhs = { ColumnBinding( Column( Table( PERSON ), "id" ) ) }
                    rhs = { ColumnBinding( Column( Table( PERSON_SUPP ), "p_id" ) ) }
                }
            }
        }
    }
}
We could simplify this if we follow the same assumption we do today, that
the secondary tables always use the root table as the lhs. Additionally, we
might consider simplifying the join predicate to use the ForeignKey, which
already defines this join predicate info:
EntityPersister {
    ...
    rootTable = Table( PERSON )
    secondaryTables = {
        SecondaryTableBinding {
            joinType = INNER
            targetTable = Table( PERSON_SUPP )
            predicate = ForeignKey {
                uid = 123456789ABC
                ...
            }
        }
    }
}
Compare this to the structure for a particular query[1], which is different:
TableGroup {
    root = TableReference {
        table = Table( PERSON )
        identificationVariable = p0
        tableJoins = {
            TableReferenceJoin {
                ...
            }
        }
    }
}
Notice specifically the addition of the identificationVariable (alias),
because we might have multiple references to that same table which need to
be unique in the given query, e.g. a fetched or joined self-referential
association (person -> manager).
First, what do you think of these proposed structures? And secondly, what
are y'all's opinions wrt the names?
FWIW there are 2 main reasons I propose modeling things the way I suggest
above in terms of the structure on persisters:
1. I think it makes it easier for consumers of persisters to understand
the bindings
2. I *think* (tbd) that it makes it easier to "clone" into the query
TableGroup structure.
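To help visualize the persister-side proposal, the contracts it implies might look roughly like this in Java (sketch only: the names follow the proposal, not actual code, and Table/ForeignKey/JoinType are placeholders for the real model types):

import java.util.List;

// placeholder types, for illustration only
interface Table { String getTableName(); }
interface ForeignKey { String getUid(); }
enum JoinType { INNER, LEFT }

interface SecondaryTableBinding {
    JoinType getJoinType();
    Table getTargetTable();
    ForeignKey getPredicate(); // the simplified join predicate
}

interface EntityPersisterTableStructure {
    Table getRootTable();
    List<SecondaryTableBinding> getSecondaryTables();
}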
Anyway, would love feedback on this.
[1] Note that the class called `TableReference` here is called
`TableBinding` in the currently pushed code, but I'd prefer to change that
and instead have the "binding" term refer to the binding of the table in
the persister model. I am open to different naming here, though.
7 years, 6 months
Implement UserCollectionType for non-Collection types such as Guava's Multimap
by Jan-Willem Gmelig Meyling
Hi everyone,
Out of curiosity I've tried to implement a UserCollectionType for Guava's Multimap [1], based on an article [2] that does a similar job for Apache's MultiMap. Doing so I've stumbled upon two issues, for which I'd like to receive some feedback.
As far as I can see, the UserCollectionType seems focused on mapping values to indexes. In the case of a list this would supposedly be a number; for non-ordered collections I am not sure, but for maps it seems to be the entry's key. It seems that the implementation for Apache's MultiMap is actually a Map<K, Collection<V>>, so that in the UserCollectionType mentioned in that article every index actually points to a collection of values under that key. I am wondering whether this could potentially break the behaviour of the UserCollectionType class, as it seems it should be possible to dirty-check the values behind the indexes, which probably doesn't work against collections.
But there's another inconvenience I stumbled upon while implementing this. Where the Apache MultiMap extends Map<K, Object>, Guava's Multimap does not. I therefore had to override TypeUtils, JavaXCollectionType, CollectionBinder, AttributeFactory, PluralAttributeImpl, and, if I wanted to do it well, most likely a MapBinder and the PluralAttribute's CollectionType as well. While doing this I also found out that JavaXCollectionType.getCollectionClass() returns Map.class perfectly fine for the currently supported Map fields, even though Map does not satisfy the <? extends Collection> return type.
So it seems this is a bad idea after all ;-) But I am wondering whether I am missing something here.
a) Is there a way to map a Map<Key, Collection<Value>> as an EnumeratedCollection in plain Hibernate?
b) Is there another way to achieve the above and persist a Guava Multimap with a custom user type? (Assuming the UserCollectionType is not the right place to start)
c) Is Hibernate 6 going to change the answer to these questions?
d) Was the article ever right about how to persist Apache's MultiMap type, or was it based on false assumptions about the use of UserCollectionType anyway?
The code I’ve written for this email can be found under [3].
Feedback is much appreciated, thanks in advance!
Kind regards,
Jan-Willem
[1] http://google.github.io/guava/releases/snapshot/api/docs/com/google/commo...
[2] http://blog.xebia.com/mapping-multimaps-with-hibernate/
[3] https://github.com/JWGmeligMeyling/hibernate-multimap-type/tree/master/gu...
7 years, 6 months
6.0 - JPA metamodel code
by Steve Ebersole
Currently Hibernate implements the JPA metamodel contracts with a series of
wrapper objects defined in the `org.hibernate.metamodel` package. This is
non-ideal because it is a wrapper approach, rather than integrating the JPA
contracts into our runtime model, which is what we are doing in these 6.0
changes (Navigable ties in these JPA contracts).
Based on that, I propose that we simply "drop" the `org.hibernate.metamodel`
package. The only question is what "drop" means here. Does it mean just
removing these from 6.0 (possibly retro-deprecating them in 5.2)? Or does it
mean we follow the deprecate-and-shadow approach?
To me this really comes down to whether we expect, or ever expected, users to
access stuff from this `org.hibernate.metamodel` package. Considering that
these are defined specifically in `org.hibernate.metamodel.internal` (an
internal package), I would argue not, and that we should just remove them
from 6.0 and replace their usage with Navigable and friends.
Any objections?
7 years, 6 months
Negative sequence numbers
by Gail Badner
I see a comment on HHH-10219 that suggests negative sequence values are
supported by NoopOptimizer [1], but this does not seem to work.
I've pushed a test case to my fork [2] with an entity defined as:
@Entity( name = "TheEntity" )
@Table( name = "TheEntity" )
public static class TheEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "ID_GENERATOR")
    @SequenceGenerator(name = "ID_GENERATOR", sequenceName = "ENTITY_SEQ", initialValue = -10, allocationSize = -1)
    public Integer id;
}
The first generated value is -10; the second generated value should be -11,
but it is -9.
The sequence is exported as:
create sequence ENTITY_SEQ start with -10 increment by 1
In the debugger, I can see that NoopOptimizer is being used, which has the
correct incrementSize (-1).
The problem is that SequenceStructure#getSourceIncrementSize always returns
1. This is because SequenceStructure#applyIncrementSizeToSourceValues gets
initialized to false because NoopOptimizer#applyIncrementSizeToSourceValues
returns false. [3]
If I change NoopOptimizer#applyIncrementSizeToSourceValues to return true,
then the test passes. Unfortunately, SequenceHiLoGeneratorNoIncrementTest
then fails, because Hibernate tries to create a sequence that increments
by 0.
If I define NoopOptimizer#applyIncrementSizeToSourceValues as follows, both
my test [2] and SequenceHiLoGeneratorNoIncrementTest pass.
public boolean applyIncrementSizeToSourceValues() {
    return getIncrementSize() != 0;
}
Should Hibernate support negative sequence values?
If so, is my proposed fix OK?
Regards,
Gail
[1]
https://hibernate.atlassian.net/browse/HHH-10219?focusedCommentId=73362&p...
[2] https://github.com/gbadner/hibernate-core/tree/negative-sequence-values
[3]
https://github.com/hibernate/hibernate-orm/blob/master/hibernate-core/src...
7 years, 6 months
Re: [hibernate-dev] Hibernate Search: Adding more "hidden" fields to the index
by Yoann Rodiere
I wonder: what's the benefit for HSEARCH-2616? Do you want to have that
field so that we can just use AddLuceneWorks everywhere and run targeted
delete operations when we start a partition? If so, is it a fallback
solution in case what I proposed cannot be implemented, or a better
alternative? Note I don't have strong arguments against that solution; I'm
just trying to understand the "why".
On adding a hidden field, I wonder what this will mean for Elasticsearch;
if we start doing such things, we should clearly and explicitly state in
the documentation that targeting existing ES schemas without adapting them
to Hibernate Search is not supported.
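For reference, at the plain Lucene level "one more field" would be something like the following; the field name here is made up, not an actual Hibernate Search name:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;

class HiddenFieldExample {
    // example only: an internal marker field, indexed but not stored, so it can
    // be used for targeted delete-by-term operations without appearing in results
    static Document withMarker(Document document, String partitionId) {
        document.add( new StringField( "__HSearch_Partition", partitionId, Field.Store.NO ) );
        return document;
    }
}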
On top of that, it may hurt users upgrading Hibernate Search: Lucene may
simply ignore queries against a field that doesn't exist in the index, but
I'm not sure Elasticsearch behaves that way when the field isn't even
defined in the mapping. So users may have to upgrade their schema just for
that. I know the Elasticsearch integration is experimental anyway, but what
I mean is that if we do this, it must happen *before* we drop the
"experimental" mention on the Elasticsearch integration.
Yoann Rodière
Hibernate NoORM Team
yoann(a)hibernate.org
On 27 April 2017 at 15:23, Sanne Grinovero <sanne(a)hibernate.org> wrote:

> To better implement recovery operations during MassIndexer
> [HSEARCH-2616] - specifically in the context of the upcoming JBatch
> based implementation - I'm considering the benefits of adding one more
> field to the Lucene index for our internal purposes.
>
> This new field is only useful for Hibernate Search internals, so we
> shouldn't allow it to be targeted by queries, etc.
>
> There is a single precedent: we already encode the entity name, so
> "hiding fields" is not a new problem that we have to deal with. It
> might be a reason to polish the existing concept and improve the
> encapsulation.
>
> Would anyone have a strong case against this?
>
> Thanks,
> Sanne
7 years, 6 months
OGM - Let's remove Fongo support
by Guillaume Smet
Hi,
So, in OGM, for MongoDB, we also support running the tests with Fongo,
which is an in-memory (more or less accurate) Java implementation of
MongoDB.
It has a cost, as Fongo behaves differently and we have to disable
tests/implement different tests, without any real benefit IMHO:
- it's easy to run MongoDB embedded for testing: this is what we use by
default
- we live in a Docker world, so people might also spawn a containerized
MongoDB instance for testing
When moving to the new MongoDB API, we have a couple more
differences/things not working with Fongo, and I really don't see the point
of maintaining this. It adds an unnecessary burden to changes made to the
MongoDB datastore.
If no one speaks against it, I'll remove it soon.
--
Guillaume
7 years, 6 months