MySQL is the first choice for many projects (even if PostgreSQL is more
advanced), and AUTO_INCREMENT is often the only identifier strategy a DBA will approve.
The TABLE identifier generator is too heavyweight, and any external DB user
would have to understand how it works before inserting new rows.
I'll run some tests for the TABLE identifier generator and see how the locking
and transaction context-switch overhead affect performance.
If delaying identity-entity inserts until flush time is a doable task, I
think we should add a Jira issue for this.
On Mon, Dec 7, 2015 at 8:45 PM, Steve Ebersole <steve(a)hibernate.org> wrote:
On Sat, Dec 5, 2015 at 11:46 PM Vlad Mihalcea <mihalcea.vlad(a)gmail.com> wrote:
> I always wondered why the identity generator needs to assign an identifier
> prior to flushing.
In general, how else do you propose to obtain the identifier with which we
can associate the entity into the PersistenceContext? That has to come
from the insert. Just look at the "hacks" that are in place to support
this today in terms of (1) generating a "stand-in identifier", (2) tracking
these in the PC and (3) going back and adjusting them after the eventual
insert. I guess I am probably missing what you are really asking here...
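[Editor's note: the three "hacks" Steve lists can be illustrated with a minimal, self-contained sketch. This is an assumption-laden simplification, not Hibernate's actual internals; the map stands in for the PersistenceContext and the constants are invented for illustration.]

```java
import java.util.HashMap;
import java.util.Map;

class Main {
    public static void main(String[] args) {
        // stands in for the PersistenceContext, keyed by identifier
        Map<Long, String> persistenceContext = new HashMap<>();

        // (1) generate a "stand-in identifier" so the entity can be tracked now
        long standIn = -1L; // a negative value marks it as not-yet-real
        persistenceContext.put(standIn, "newEntity");

        // (2) the stand-in is tracked until flush, when the INSERT runs and
        //     the database returns the real id
        long realId = 42L; // pretend getGeneratedKeys() returned this

        // (3) go back and adjust: re-key the entity under its real identifier
        persistenceContext.put(realId, persistenceContext.remove(standIn));

        System.out.println(persistenceContext.containsKey(realId));
    }
}
```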
> Because of this behavior, the JDBC batching is disabled, which is bad news
> for MySQL where most setups use IDENTITY instead of TABLE generation type.
> Since the AbstractSaveEventListener supports the
> option for when there is no transaction in progress, we could make this a
> default anyway.
Well that is *part of* the resolution of shouldDelayIdentityInserts. The
other part is whether the source of this entity becoming persistent/managed
requires immediate access to the id; the typical case here is the
difference between save() and persist() - save() returns the id as the
method return, whereas persist() does not. So if save() was used the
insert *has to* happen immediately.
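[Editor's note: the save()/persist() distinction above can be sketched in a few lines. The sketch below is a hypothetical simulation, not Hibernate API; the counter stands in for the database's AUTO_INCREMENT column.]

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

class Main {
    static long identityColumn = 0;                // stands in for AUTO_INCREMENT
    static final List<Object> delayed = new ArrayList<>();

    // save() must return the identifier, so the INSERT has to run immediately
    static Serializable save(Object entity) {
        return ++identityColumn;                   // immediate INSERT + getGeneratedKeys
    }

    // persist() returns void, so the INSERT can wait until flush()
    static void persist(Object entity) {
        delayed.add(entity);                       // no SQL executed yet
    }

    static void flush() {
        for (Object e : delayed) ++identityColumn; // batched INSERTs would run here
        delayed.clear();
    }

    public static void main(String[] args) {
        Serializable id = save(new Object());      // id known right away
        persist(new Object());                     // still pending
        System.out.println("id=" + id + " pending=" + delayed.size());
        flush();
        System.out.println("pending=" + delayed.size());
    }
}
```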
> If the inserts are delayed to flush-time, we could batch them based on the
> current dialect. Only the SQL Server dialect and Oracle 11g don't allow
> fetching the resulting keys from a batch. But Oracle 12c (which supports
> IDENTITY), PostgreSQL and MySQL can save a batch of inserts and fetch the
> generated keys.
> Judging from a Stack Overflow perspective, most questions are asked in the
> context of MySQL, so a large share of Hibernate users are in fact on MySQL.
> What do you think? Should we make this change and allow MySQL users to
> batch JDBC inserts even for IDENTITY?
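[Editor's note: the batching pattern Vlad describes maps to plain JDBC as follows. The table and column names are illustrative; main() does not open a real connection, it only shows how the batch would be sized.]

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

class Main {
    // One PreparedStatement, addBatch() per row, a single executeBatch(),
    // then getGeneratedKeys() returning all keys at once (on drivers that
    // support it: MySQL, PostgreSQL, Oracle 12c per the thread above).
    static long[] insertAll(Connection con, List<String> names) throws SQLException {
        String sql = "INSERT INTO person (name) VALUES (?)";
        try (PreparedStatement ps =
                     con.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            for (String name : names) {
                ps.setString(1, name);
                ps.addBatch();            // queue the row, no round-trip yet
            }
            ps.executeBatch();            // one round-trip for the whole batch
            long[] ids = new long[names.size()];
            try (ResultSet rs = ps.getGeneratedKeys()) {
                int i = 0;
                while (rs.next()) {
                    ids[i++] = rs.getLong(1);
                }
            }
            return ids;
        }
    }

    public static void main(String[] args) {
        // no live Connection here; just show the batch that would be sent
        List<String> names = List.of("alice", "bob", "carol");
        System.out.println("rows batched=" + names.size());
    }
}
```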
As you point out, the proper solution here would require consultation with
the Dialect to determine whether getGeneratedKeys works properly with
batched statements. From there AbstractEntityPersister would need changes
to allow these inserts to be batched based on what the Dialect reports. If
that is all something you want to take on, I don't see an issue with it -
provided the transaction and requiresImmediateIdAccess checks in
AbstractSaveEventListener stay in place. The requiresImmediateIdAccess
requirement is obvious. Maybe I am just letting my loathing of 2 bad ideas
(auto-commit and IDENTITY) combined cloud my judgement in regards to the
transaction "requirement" ;)
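[Editor's note: the Dialect consultation Steve sketches could look roughly like the following. The method and dialect names are hypothetical, not actual Hibernate API; the capability flags encode only what the thread itself states.]

```java
import java.util.Map;

class Main {
    // Per the thread: SQL Server and Oracle 11g cannot return generated keys
    // from a batched statement; MySQL, PostgreSQL and Oracle 12c can.
    static final Map<String, Boolean> BATCHED_GENERATED_KEYS = Map.of(
            "MySQL", true,
            "PostgreSQL", true,
            "Oracle12c", true,
            "Oracle11g", false,
            "SQLServer", false);

    // AbstractEntityPersister would consult the Dialect through something
    // like this before enabling JDBC batching for IDENTITY inserts.
    static boolean canBatchIdentityInserts(String dialect) {
        return BATCHED_GENERATED_KEYS.getOrDefault(dialect, false);
    }

    public static void main(String[] args) {
        System.out.println(canBatchIdentityInserts("MySQL"));
        System.out.println(canBatchIdentityInserts("SQLServer"));
    }
}
```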