Question about current flush ordering
by Vlad Mihalcea
Hi,
There is this issue that has puzzled me for many years related to
ActionQueue ordering:
- OrphanRemovalAction.class,
- AbstractEntityInsertAction.class,
- EntityUpdateAction.class,
- QueuedOperationCollectionAction.class,
- CollectionRemoveAction.class,
- CollectionUpdateAction.class,
- CollectionRecreateAction.class,
- EntityDeleteAction.class,
Why is it that we execute the OrphanRemovalAction first, but the
EntityDeleteAction last?
Shouldn't the EntityDeleteAction be executed before Insert or Update?
There must be a reason for choosing this ordering, but I can't figure out
why the EntityDeleteAction was chosen to be executed last.
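To make the question concrete, here is a minimal standalone sketch of the ordering in question (illustrative only; this is not Hibernate's actual ActionQueue, and the enum names are stand-ins for the action classes listed above). Queued actions are executed in a fixed rank order at flush time, so a delete queued before an insert still runs after it:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FlushOrderSketch {

    // Rank order copied from the list above: orphan removals first,
    // entity deletes last.
    enum ActionType {
        ORPHAN_REMOVAL,
        ENTITY_INSERT,
        ENTITY_UPDATE,
        QUEUED_OPERATION_COLLECTION,
        COLLECTION_REMOVE,
        COLLECTION_UPDATE,
        COLLECTION_RECREATE,
        ENTITY_DELETE
    }

    // Execute queued actions in rank order, regardless of the order in
    // which they were queued.
    static List<ActionType> executionOrder(List<ActionType> queued) {
        List<ActionType> sorted = new ArrayList<>(queued);
        sorted.sort(Comparator.comparingInt(ActionType::ordinal));
        return sorted;
    }

    public static void main(String[] args) {
        List<ActionType> queued = List.of(
                ActionType.ENTITY_DELETE,
                ActionType.ENTITY_INSERT,
                ActionType.ORPHAN_REMOVAL);
        // The delete was queued first, yet it executes last:
        System.out.println(executionOrder(queued));
        // -> [ORPHAN_REMOVAL, ENTITY_INSERT, ENTITY_DELETE]
    }
}
```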
Vlad
7 years, 9 months
[SEARCH] 6.0: what if we flipped our metadata definition upside down?
by Yoann Rodiere
Hello,
Currently, the metadata definition mechanisms in Search work this way:
- the primary way to define metadata is using annotations
- the secondary way to define metadata is programmatic, *but it only
instantiates annotations,* simulating annotated entities
- classes needing to access those "low-level" metadata
(AnnotationMetadataProvider in particular) only manipulate annotations
I'm wondering if we should change that, flipping the metadata definition
upside-down: the programmatic definition would be the main one, with a
clean, annotation-free low-level metadata model, and annotations would
merely be translated to this low-level metadata using a walker and the
programmatic API.
My first argument is that creating "simulated" annotations is rather odd,
but I'll grant you that's hardly a compelling argument on its own.
But I can see other, more objective reasons:
- We periodically notice missing methods in the programmatic API ([1],
[2], [3], [4]), because we thought "annotations first" and forgot about the
programmatic API. If annotation processing was to rely on programmatic
mapping, this "annotations first" thinking would not be a problem anymore,
but rather a good thing: we would have to implement both the programmatic
API and the annotations in order to make it work.
- If we want to support programmatic mapping for "free-form" (i.e.
non-POJO) entities, we will need to be more generic than what annotations
allow at some point. We already spotted the problem of using "Class<?>" to
denote entity types, but there may be more. For instance denoting property
identifiers, or property types, ... It just doesn't feel future-proof to
rely on an intrinsically Java way of modeling metadata (the annotations)
and try to model non-Java things with it...
What do you think? Are there any objections to making the programmatic API
the primary way to define metadata? Note that I'm not talking about making
users use it in priority (it won't change anything for them), just about
making it more central in our architecture.
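To illustrate the flipped architecture, here is a hypothetical sketch (none of these types exist in Hibernate Search; the annotation, model, and API names are invented): a clean, annotation-free low-level metadata model, populated through a programmatic API, with annotations merely translated into programmatic calls by a walker:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.LinkedHashMap;
import java.util.Map;

public class MetadataFlipSketch {

    // Stand-in for an indexing annotation such as @Field.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface IndexedField {
        String name() default "";
    }

    // The low-level, annotation-free metadata model.
    static class TypeMetadata {
        final Map<String, String> indexedProperties = new LinkedHashMap<>();
    }

    // The programmatic API: the single entry point for defining metadata.
    static class ProgrammaticMapping {
        final TypeMetadata metadata = new TypeMetadata();

        ProgrammaticMapping property(String property, String fieldName) {
            metadata.indexedProperties.put(property, fieldName);
            return this;
        }
    }

    // Annotations are *translated* into programmatic calls, instead of
    // the programmatic API instantiating simulated annotations.
    static TypeMetadata fromAnnotations(Class<?> type) {
        ProgrammaticMapping mapping = new ProgrammaticMapping();
        for (java.lang.reflect.Field f : type.getDeclaredFields()) {
            IndexedField ann = f.getAnnotation(IndexedField.class);
            if (ann != null) {
                String name = ann.name().isEmpty() ? f.getName() : ann.name();
                mapping.property(f.getName(), name);
            }
        }
        return mapping.metadata;
    }

    static class Book {
        @IndexedField(name = "title_sort")
        String title;
        String internalNotes; // not indexed
    }

    public static void main(String[] args) {
        TypeMetadata md = fromAnnotations(Book.class);
        System.out.println(md.indexedProperties); // -> {title=title_sort}
    }
}
```

Note that nothing here forces the model to use `Class<?>` or reflection; a free-form mapper could call `property(...)` directly with whatever identifiers suit it.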
[1]
http://stackoverflow.com/questions/43006746/hibernate-search-5-2-programm...
[2] https://hibernate.atlassian.net/browse/HSEARCH-1764
[3] https://hibernate.atlassian.net/browse/HSEARCH-2199
[4] https://hibernate.atlassian.net/browse/HSEARCH-1079
Yoann Rodière <yoann(a)hibernate.org>
Hibernate NoORM Team
SQL Server lock hints misunderstanding
by Vlad Mihalcea
--works
select TOP(?) abstractsk0_.id as id1_0_, abstractsk0_.processed as
processe2_0_ from BatchJob abstractsk0_ with (updlock, rowlock, readpast)
--fails
select TOP(?) abstractsk0_.id as id1_0_, abstractsk0_.processed as
processe2_0_ from BatchJob abstractsk0_ with (holdlock, rowlock, readpast)
Hi,
While working on this issue which adds support for SKIP_LOCKED for SQL
server:
https://hibernate.atlassian.net/browse/HHH-10654
I came to question the way we use the lock hints based on the JPA or
Hibernate LockMode(Type).
Currently, we do like this:
- PESSIMISTIC_WRITE -> UPDLOCK
- PESSIMISTIC_READ -> HOLDLOCK
That's surprising since the HOLDLOCK is actually more restrictive than
UPDLOCK.
According to the official documentation (
https://msdn.microsoft.com/en-us/library/ms187373.aspx ) :
UPDLOCK:
"
Specifies that update locks are to be taken and held until the transaction
completes.
UPDLOCK takes update locks for read operations only at the row-level or
page-level.
If UPDLOCK is combined with TABLOCK,
or a table-level lock is taken for some other reason, an exclusive (X) lock
will be taken instead.
"
HOLDLOCK:
"
Is equivalent to SERIALIZABLE. For more information, see SERIALIZABLE later
in this topic.
HOLDLOCK applies only to the table or view for which it is specified
and only for the duration of the transaction defined by the statement that
it is used in.
"
Now, the difference between these two is that UPDLOCK takes update (U)
row-level locks, while
HOLDLOCK goes beyond that and takes range locks as well.
This assumption is backed by these StackOverflow answers:
http://stackoverflow.com/questions/7843733/confused-about-updlock-holdlock
http://stackoverflow.com/questions/42580238/why-does-sql-server-explicit-...
For SKIP_LOCKED, which is READPAST in SQL Server, we can't use HOLDLOCK at
all so we need to use UPDLOCK instead.
Now, I think that both PESSIMISTIC_READ and PESSIMISTIC_WRITE should use
HOLDLOCK,
and only if we specify SKIP_LOCKED, we then switch to UPDLOCK instead.
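The proposed mapping could be sketched as a small helper (hypothetical; this is not the actual Dialect code, and the method name is invented): both pessimistic modes use HOLDLOCK, and only a SKIP_LOCKED request switches to UPDLOCK, since READPAST cannot be combined with HOLDLOCK:

```java
public class SqlServerLockHints {

    enum LockMode { PESSIMISTIC_READ, PESSIMISTIC_WRITE }

    // Under the proposal both pessimistic modes map to the same hint;
    // the mode parameter is kept only to mirror the API shape.
    static String tableHint(LockMode mode, boolean skipLocked) {
        StringBuilder hint = new StringBuilder("with (");
        // READPAST is incompatible with HOLDLOCK, so fall back to
        // UPDLOCK when skipping locked rows.
        hint.append(skipLocked ? "updlock" : "holdlock");
        hint.append(", rowlock");
        if (skipLocked) {
            hint.append(", readpast");
        }
        hint.append(")");
        return hint.toString();
    }

    public static void main(String[] args) {
        System.out.println(tableHint(LockMode.PESSIMISTIC_WRITE, true));
        // -> with (updlock, rowlock, readpast)
        System.out.println(tableHint(LockMode.PESSIMISTIC_READ, false));
        // -> with (holdlock, rowlock)
    }
}
```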
Let me know what you think.
Vlad
6.0 - Proposed org.hibernate.mapping changes
by Steve Ebersole
For 6.0, what do y'all think of these changes proposed below to the
org.hibernate.mapping package?
*Koen, this affects tools so really would like your thoughts...*
Mostly this comes from the definition of a `#finishInitialization` method
on ManagedTypeImplementor (which covers mapped-superclass, entity and
embeddable/embedded). Currently this method takes its supertype as
PersistentClass; however PersistentClass is generally understood to model
an entity and tooling certainly uses it as such. Keeping this in mind to
hopefully minimize impact I propose the following:
1. Define a new org.hibernate.mapping.ManagedTypeMapping that represents
mappings for any "managed type" in the normal JPA meaning of that term
(mapped-superclass, entity, embeddable)
2. Define a new org.hibernate.mapping.EmbeddedTypeMapping extending
ManagedTypeMapping (org.hibernate.mapping.Composite). Or should we split
EmbeddableTypeMapping and "EmbeddedMapping"?
3. Define a new org.hibernate.mapping.IdentifiableTypeMapping extending
ManagedTypeMapping
4. Define a new org.hibernate.mapping.MappedSuperclassTypeMapping
extending IdentifiableTypeMapping
5. Define a new org.hibernate.mapping.EntityTypeMapping extending
IdentifiableTypeMapping
6. Make PersistentClass extend EntityTypeMapping and deprecate
7. Make Composite extend EmbeddedTypeMapping and deprecate
8. Make MappedSuperclass extend MappedSuperclassTypeMapping and
deprecate
9. Re-work the hierarchies here to better fit this new model
/**
 * ...
 *
 * @todo (6.0) Use ManagedTypeMapping here as super-type rather than
 *       PersistentClass
 */
void finishInitialization(
        ManagedTypeImplementor<? super T> superType,
        PersistentClass entityBinding,
        PersisterCreationContext creationContext);
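The proposed hierarchy from points 1-8 could look like the marker interfaces below (illustrative only; members omitted, and nesting everything in one class is just to keep the sketch self-contained). The legacy types implement the new interfaces and are deprecated, so existing tooling code keeps compiling:

```java
public class MappingHierarchySketch {

    // Point 1: any JPA "managed type" (mapped-superclass, entity, embeddable).
    interface ManagedTypeMapping {}
    // Point 2
    interface EmbeddedTypeMapping extends ManagedTypeMapping {}
    // Point 3
    interface IdentifiableTypeMapping extends ManagedTypeMapping {}
    // Point 4
    interface MappedSuperclassTypeMapping extends IdentifiableTypeMapping {}
    // Point 5
    interface EntityTypeMapping extends IdentifiableTypeMapping {}

    // Points 6-8: legacy types kept for tooling, now deprecated.
    @Deprecated
    static class PersistentClass implements EntityTypeMapping {}
    @Deprecated
    static class Composite implements EmbeddedTypeMapping {}
    @Deprecated
    static class MappedSuperclass implements MappedSuperclassTypeMapping {}

    public static void main(String[] args) {
        // Tooling code that expects a PersistentClass still works, while
        // new 6.0 code (e.g. finishInitialization) can accept the wider
        // ManagedTypeMapping.
        ManagedTypeMapping m = new PersistentClass();
        System.out.println(m instanceof IdentifiableTypeMapping); // -> true
    }
}
```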
Data encoding change for Hibernate OGM / Infinispan Embedded
by Sanne Grinovero
To fix the sequence generation consistency issue on Infinispan
Embedded [OGM-1212] I will need to change how sequences
are encoded within the datagrid.
This means OGM 5.2 will be able to read data as encoded by previous
versions, but it will write data using a new format.
This has some implications, such as people upgrading temporarily to
OGM 5.2 can't go back to previous versions (without restoring a backup
of the data).
I will document this limitation.
There's a second aspect:
OGM will now include some code to be able to read pre-5.1 data, but
eventually we'll want to remove this. How should we handle that?
I'm thinking that people hitting this problem (in some future) will
simply need to fetch OGM 5.2 and use that as an intermediate step;
however OGM will only upgrade the data encoding "lazily" as it goes
along: when something happens to be read, and happens to be
re-written, it will re-encode it.
But some data might never be rolled over to the new format.
So I think we'll eventually need a data migration tool which performs
all data-encoding aspects eagerly, so that it can report a point in
time for which it's done and safe to move on to a future version.
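The lazy re-encoding above could be sketched like this (hypothetical formats and names, not OGM's actual encoding): reads understand both formats, writes always emit the new one, so an entry is upgraded only if it happens to be rewritten, and untouched entries keep the old encoding indefinitely:

```java
import java.util.HashMap;
import java.util.Map;

public class LazyReencodeSketch {

    static final String OLD_PREFIX = "v1:"; // pre-5.2 encoding
    static final String NEW_PREFIX = "v2:"; // 5.2 encoding

    final Map<String, String> grid = new HashMap<>();

    // Reads must understand both encodings.
    String read(String key) {
        String raw = grid.get(key);
        if (raw == null) return null;
        return raw.startsWith(OLD_PREFIX)
                ? raw.substring(OLD_PREFIX.length())
                : raw.substring(NEW_PREFIX.length());
    }

    // Writes always use the new encoding, lazily upgrading the entry.
    void write(String key, String value) {
        grid.put(key, NEW_PREFIX + value);
    }

    public static void main(String[] args) {
        LazyReencodeSketch store = new LazyReencodeSketch();
        store.grid.put("seq1", OLD_PREFIX + "41"); // pre-5.2 data
        store.grid.put("seq2", OLD_PREFIX + "7");  // never touched again

        // seq1 happens to be read and rewritten: it gets upgraded.
        store.write("seq1",
                String.valueOf(Integer.parseInt(store.read("seq1")) + 1));

        System.out.println(store.grid.get("seq1")); // -> v2:42
        System.out.println(store.grid.get("seq2")); // -> still v1:7
    }
}
```

This is exactly why an eager migration tool is needed: only a full sweep can guarantee no `v1:` entries remain before the read-compatibility code is removed.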
I don't wish to create such a tool now for OGM version 5.2, but I think
we should agree on a plan already.
I also wonder if this should mark the following issue "out of date":
- https://hibernate.atlassian.net/browse/OGM-1148
Thanks,
Sanne
JIRA usage for OGM
by Sanne Grinovero
There are more than 300 open issues, which is fine, but rather than
being well-defined issues, most read like wishful thinking: someone had
a (possibly cool) idea but never really executed on it.
Since JIRA is an issue tracker and not really a planning tool / note-taking
app, I wish we could limit this practice of filing issues like
"explore integration with..".
More specifically, could we move "out of the way" all issues related
to Databases which we're moving into the "contrib" repository?
I think it would be nice to have these in a different JIRA project.
Thanks,
Sanne