There is an issue that has puzzled me for many years:
why is it that we execute the OrphanRemovalAction first, but the
EntityDeleteAction last?
Shouldn't the EntityDeleteAction be executed before the insert and update
actions?
There must be a reason for choosing this ordering, but I can't figure out
why the EntityDeleteAction was chosen to be executed last.
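To make the concern concrete, here is a small standalone illustration (not Hibernate code; the names and the in-memory "unique index" are made up) of the kind of scenario where the ordering matters: replacing a row that carries a unique key fails if the insert runs before the delete.

```java
import java.util.HashSet;
import java.util.Set;

// Standalone illustration, not Hibernate code: a simulated unique index
// shows why one might expect deletes to run before inserts.
public class OrderingSketch {

    // Pretend this is a database unique index on some business key.
    static final Set<String> uniqueIndex = new HashSet<>(Set.of("user-1"));

    static void insert(String key) {
        if (!uniqueIndex.add(key)) {
            throw new IllegalStateException("unique constraint violation: " + key);
        }
    }

    static void delete(String key) {
        uniqueIndex.remove(key);
    }

    public static void main(String[] args) {
        // Replacing the entity that owns "user-1":
        // delete-then-insert succeeds...
        delete("user-1");
        insert("user-1");
        System.out.println("delete-then-insert: ok");
        // ...whereas insert-then-delete would have thrown, since the old
        // row still holds the unique key at insert time.
    }
}
```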
Currently, the metadata definition mechanisms in Search work this way:
- the primary way to define metadata is using annotations
- the secondary way to define metadata is programmatic, *but it only
instantiates annotations,* simulating annotated entities
- classes needing to access that "low-level" metadata
(AnnotationMetadataProvider in particular) only manipulate annotations
I'm wondering if we should change that, flipping the metadata definition
upside-down: the programmatic definition would be the main one, with a
clean, annotation-free low-level metadata model, and annotations would
merely be translated to this low-level metadata (using a walker, for
instance).
My first argument is that creating "simulated" annotations is rather odd,
but I'll grant you that alone is hardly a compelling reason.
But I can see other, more objective reasons:
- We periodically notice missing methods in the programmatic API, because
we thought "annotations first" and forgot about the programmatic API. If
annotation processing were to rely on the programmatic mapping, this
"annotations first" thinking would no longer be a problem, but rather a
good thing: we would have to implement both the programmatic API and the
annotations in order to make anything work.
- If we want to support programmatic mapping for "free-form" (i.e.
non-POJO) entities, we will need to be more generic than what annotations
allow at some point. We already spotted the problem of using Class<?> to
denote entity types, but there may be more: denoting property identifiers
or property types, for instance. It just doesn't feel future-proof to
rely on an intrinsically Java way of modeling metadata (annotations) and
try to model non-Java things with it.
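To illustrate the proposed inversion, here is a minimal, entirely hypothetical sketch (none of these types exist in Search; all names are made up): a plain metadata model serves as the single low-level representation, the programmatic API targets it directly, and a small "walker" translates annotations into it instead of the programmatic API simulating annotations.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class MetadataSketch {

    // The low-level model: a plain type, no annotation types involved.
    record PropertyMetadata(String name, String typeName, boolean indexed) {}

    // A stand-in for a Search mapping annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Indexed {}

    static class Book {
        @Indexed String title;
        String internalNotes;
    }

    // The "walker": translates annotations into the low-level model.
    static List<PropertyMetadata> fromAnnotations(Class<?> type) {
        List<PropertyMetadata> metadata = new ArrayList<>();
        for (Field field : type.getDeclaredFields()) {
            metadata.add(new PropertyMetadata(
                    field.getName(),
                    field.getType().getName(),
                    field.isAnnotationPresent(Indexed.class)));
        }
        return metadata;
    }

    // The programmatic API produces the same model directly; nothing
    // forces properties to be backed by a Class or a Field.
    static PropertyMetadata programmatic(String name, String typeName) {
        return new PropertyMetadata(name, typeName, true);
    }

    public static void main(String[] args) {
        System.out.println(fromAnnotations(Book.class));
        System.out.println(programmatic("title", "java.lang.String"));
    }
}
```

Note how `programmatic(...)` takes a type *name* rather than a `Class<?>`, which is what would make free-form (non-POJO) entities representable.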
What do you think? Are there any objections to making the programmatic API
the primary way to define metadata? Note that I'm not talking about
pushing users toward it (nothing would change for them), just about
making it more central in our architecture.
Yoann Rodière <yoann(a)hibernate.org>
Hibernate NoORM Team
select TOP(?) abstractsk0_.id as id1_0_, abstractsk0_.processed as
processe2_0_ from BatchJob abstractsk0_ with (updlock, rowlock, readpast)
select TOP(?) abstractsk0_.id as id1_0_, abstractsk0_.processed as
processe2_0_ from BatchJob abstractsk0_ with (holdlock, rowlock, readpast)
While working on this issue, which adds support for SKIP_LOCKED for SQL
Server, I came to question the way we choose the lock hints based on the
requested JPA lock mode.
Currently, we map them like this:
- PESSIMISTIC_WRITE -> UPDLOCK
- PESSIMISTIC_READ -> HOLDLOCK
That's surprising, since HOLDLOCK is actually more restrictive than
UPDLOCK.
According to the official documentation (
https://msdn.microsoft.com/en-us/library/ms187373.aspx ):

UPDLOCK:
Specifies that update locks are to be taken and held until the
transaction completes. UPDLOCK takes update locks for read operations
only at the row-level or page-level. If UPDLOCK is combined with TABLOCK,
or a table-level lock is taken for some other reason, an exclusive (X)
lock will be taken instead.

HOLDLOCK:
Is equivalent to SERIALIZABLE. For more information, see SERIALIZABLE
later in this topic. HOLDLOCK applies only to the table or view for which
it is specified and only for the duration of the transaction defined by
the statement that it is used in.
Now, the difference between these two is that UPDLOCK takes update locks
at the row level, while HOLDLOCK goes beyond that and takes range locks
as well.
This assumption is backed by these StackOverflow answers:
For SKIP_LOCKED, which maps to READPAST in SQL Server, we can't use
HOLDLOCK at all, so we need to use UPDLOCK instead.
Now, I think that both PESSIMISTIC_READ and PESSIMISTIC_WRITE should use
HOLDLOCK, and only if we specify SKIP_LOCKED should we switch to UPDLOCK
instead.
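As a sketch of the proposed selection logic (illustrative code only, not the actual dialect API; the enum and method names are made up):

```java
public class LockHintSketch {

    enum PessimisticLockMode { PESSIMISTIC_READ, PESSIMISTIC_WRITE }

    // Proposed behavior: both pessimistic modes use HOLDLOCK by default;
    // only when SKIP_LOCKED is requested do we fall back to UPDLOCK,
    // since READPAST cannot be combined with HOLDLOCK.
    static String lockHint(PessimisticLockMode mode, boolean skipLocked) {
        if (skipLocked) {
            return "with (updlock, rowlock, readpast)";
        }
        return "with (holdlock, rowlock)";
    }

    public static void main(String[] args) {
        System.out.println(lockHint(PessimisticLockMode.PESSIMISTIC_WRITE, false));
        System.out.println(lockHint(PessimisticLockMode.PESSIMISTIC_READ, true));
    }
}
```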
Let me know what you think.
For 6.0, what do y'all think of the changes proposed below to the
org.hibernate.mapping model?
*Koen, this affects tools, so I would really like your thoughts...*
Mostly this comes from the definition of a `#finishInitialization` method
on ManagedTypeImplementor (which covers mapped-superclass, entity and
embeddable/embedded). Currently this method takes its supertype as a
PersistentClass; however, PersistentClass is generally understood to
model an entity, and tooling certainly uses it as such. Keeping this in
mind, and hoping to minimize the impact, I propose the following:
1. Define a new org.hibernate.mapping.ManagedTypeMapping that represents
mappings for any "managed type" in the normal JPA meaning of that term
(mapped-superclass, entity, embeddable)
2. Define a new org.hibernate.mapping.EmbeddedTypeMapping extending
ManagedTypeMapping (org.hibernate.mapping.Composite). Or should we split
this into EmbeddableTypeMapping and "EmbeddedMapping"?
3. Define a new org.hibernate.mapping.IdentifiableTypeMapping extending
ManagedTypeMapping
4. Define a new org.hibernate.mapping.MappedSuperclassTypeMapping
extending IdentifiableTypeMapping
5. Define a new org.hibernate.mapping.EntityTypeMapping extending
IdentifiableTypeMapping
6. Make PersistentClass extend EntityTypeMapping and deprecate it
7. Make Composite extend EmbeddedTypeMapping and deprecate it
8. Make MappedSuperclass extend MappedSuperclassTypeMapping and deprecate
it
9. Re-work the hierarchies here to better fit this new model
* @todo (6.0) Use ManagedTypeMapping here as super-type rather than
ManagedTypeImplementor<? super T> superType,
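Put together, the hierarchy proposed in the list above would look roughly like this (a sketch of the type relationships only, mirroring JPA's managed-type hierarchy; the legacy classes are stand-ins, not the actual repository code):

```java
public class MappingHierarchySketch {

    // Points 1-5: the new supertypes, mirroring JPA's managed-type terms.
    interface ManagedTypeMapping {}
    interface EmbeddedTypeMapping extends ManagedTypeMapping {}
    interface IdentifiableTypeMapping extends ManagedTypeMapping {}
    interface MappedSuperclassTypeMapping extends IdentifiableTypeMapping {}
    interface EntityTypeMapping extends IdentifiableTypeMapping {}

    // Points 6-8: the existing classes slot in underneath (stand-ins here).
    static class PersistentClass implements EntityTypeMapping {}
    static class Composite implements EmbeddedTypeMapping {}
    static class MappedSuperclass implements MappedSuperclassTypeMapping {}

    public static void main(String[] args) {
        // Tooling that only needs "some managed type" can now accept the
        // supertype without assuming an entity (PersistentClass).
        System.out.println(
                ManagedTypeMapping.class.isAssignableFrom(PersistentClass.class));
    }
}
```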
While reviewing this Pull Request:
I came to realize that we can't merge it as-is, since it would break
backward compatibility.
However, we can have the user explicitly opt in to this behavior.
Let me know if anyone disagrees with this proposal.
To fix the sequence generation consistency issue on Infinispan
Embedded [OGM-1212] I will need to change how sequences
are encoded within the datagrid.
This means OGM 5.2 will be able to read data as encoded by previous
versions, but it will write data using a new format.
This has some implications: for instance, people upgrading to OGM 5.2
can't go back to previous versions (without restoring a backup of the
data).
I will document this limitation.
There's a second aspect:
OGM will now include some code to be able to read pre-5.2 data, but
eventually we'll want to remove this. How should we handle that?
I'm thinking that people hitting this problem (at some point in the
future) will simply need to go through OGM 5.2 as an intermediate step;
however, OGM will only upgrade the data encoding "lazily" as it goes
along: when an entry happens to be read, and happens to be re-written,
it will be re-encoded.
But some data might never be rolled over to the new format.
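The lazy rollover could look roughly like this (a toy sketch only; the actual OGM encoding is different, the format tag is hypothetical, and a real implementation needs an unambiguous way to tell the two formats apart):

```java
import java.nio.ByteBuffer;

// Toy sketch of lazy re-encoding: reads understand both formats,
// writes always produce the new one, so data rolls over only when it
// happens to be read and re-written.
public class LazyEncodingSketch {

    // Assumed discriminator for the new format (hypothetical).
    static final byte NEW_FORMAT_TAG = 2;

    static long read(byte[] stored) {
        if (stored.length == 9 && stored[0] == NEW_FORMAT_TAG) {
            // New format: tag byte followed by an 8-byte big-endian value.
            return ByteBuffer.wrap(stored, 1, 8).getLong();
        }
        // Legacy format: a plain 8-byte big-endian value.
        return ByteBuffer.wrap(stored).getLong();
    }

    static byte[] write(long value) {
        // Writes always use the new format.
        return ByteBuffer.allocate(9).put(NEW_FORMAT_TAG).putLong(value).array();
    }

    public static void main(String[] args) {
        byte[] legacy = ByteBuffer.allocate(8).putLong(42).array();
        // Reading legacy data and re-writing it re-encodes it lazily.
        byte[] migrated = write(read(legacy));
        System.out.println(read(migrated)); // 42
    }
}
```

An eager migration tool would simply run the same read-then-write loop over every stored entry, which is why it could report a definite "done" point.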
So I think we'll eventually need a data migration tool which performs
all data re-encoding eagerly, so that it can report a point in time at
which it's done and it is safe to move on to a future version.
I don't wish to create such a tool now for OGM 5.2, but I think we
should agree on a plan already.
I also wonder if this should mark the following issue "out of date":
There are more than 300 open issues, which is fine in itself, but rather
than being well-defined issues, most sound like the wishful thinking of
someone who had a (possibly cool) idea but never really executed on it.
Since JIRA is an issue tracker and not really a planning tool or
note-taking app, I wish we could limit this practice of filing issues
like "explore integration with ...".
More specifically, could we move "out of the way" all issues related
to Databases which we're moving into the "contrib" repository?
I think it would be nice to have these in a different JIRA project.