It took us a long time to fully expose the concept of "NumericField" to
Hibernate Search users, and it is only now becoming a primary concept
people are getting familiar with.
Which of course means that the Lucene team is going to get rid of them
in the near future [1].
NumericField(s) will live for a bit longer in a "backwards-codec"
package, and to be fair the migration makes sense for Lucene as the
new alternative structure is delivering much better performance across
the board (less indexing time, better query times, less index space).
So what I'm wondering now is whether it was a mistake to expose this.
For sure, since we're about to redesign the API soon we should keep this
in mind; but also, while we traditionally made "the best choice"
automatically out of the box about how to translate certain types to
the index world, we always allowed power users to override our defaults.
So what's the correct level of abstraction here?
We want to allow power users to use specifics, but not keep breaking
APIs. Or should we just accept that such a level of detail will keep
changing?
Ideas very welcome.
1 - https://issues.apache.org/jira/browse/LUCENE-6917
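To make the abstraction question concrete, here is a minimal sketch (purely illustrative, not the actual Hibernate Search API): the engine picks an index encoding per Java type out of the box, while a power-user hook pins a specific one, so the default can migrate away from NumericField-style structures without breaking the public contract. The `FieldEncodingResolver` name and the `Encoding` values are assumptions invented for this example.

```java
// Illustrative sketch only -- not the actual Hibernate Search API.
// The framework chooses an index representation per Java type by default,
// while a power user can override it, so the default is free to evolve
// (e.g. away from NumericField) without breaking the public contract.
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class FieldEncodingResolver {

    /** Abstract index representations; backends map these to their own structures. */
    public enum Encoding { TEXT, NUMERIC_TRIE, DOC_VALUES }

    private final Map<Class<?>, Encoding> overrides = new ConcurrentHashMap<>();

    /** Power-user hook: pin a specific encoding for a property type. */
    public void override(Class<?> type, Encoding encoding) {
        overrides.put(type, encoding);
    }

    /** Default choice made "out of the box"; free to change between versions. */
    public Encoding resolve(Class<?> type) {
        return Optional.ofNullable(overrides.get(type))
                .orElseGet(() -> defaultFor(type));
    }

    private Encoding defaultFor(Class<?> type) {
        if (Number.class.isAssignableFrom(type)) {
            return Encoding.NUMERIC_TRIE; // could silently become DOC_VALUES later
        }
        return Encoding.TEXT;
    }
}
```

The point of the sketch is that only the override hook is a stable API surface; the `defaultFor` mapping stays an internal detail we can change with each Lucene migration.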
I always wondered why the identity generator needs to assign an identifier
prior to flushing.
Because of this behavior, the JDBC batching is disabled which is bad news
for MySQL where most setups use IDENTITY instead of TABLE generation type.
Since the AbstractSaveEventListener supports the shouldDelayIdentityInserts
option for when there is no transaction in progress, we could make this a
more general mechanism.
If the inserts are delayed to flush-time, we could batch them based on the
current dialect. Only the SQL Server dialect and Oracle 11g don't allow
fetching the resulting keys from a batch. But Oracle 12c (which supports
IDENTITY), PostgreSQL and MySQL can save a batch of inserts and fetch the
generated keys.
Judging from a Stack Overflow perspective, most questions come up in the
context of MySQL, so in fact many Hibernate users do run MySQL.
What do you think? Should we make this change and allow MySQL users to
batch JDBC inserts even for IDENTITY?
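As a rough sketch of the idea (all names are hypothetical, not actual Hibernate internals): instead of executing an IDENTITY insert eagerly just to obtain the id, the insert could be queued and executed at flush time, grouped per table, so each group can be sent as one JDBC batch whose generated keys are read back afterwards:

```java
// Hypothetical sketch -- not Hibernate's actual action queue. It only
// demonstrates the deferral/grouping step; the actual JDBC batch and
// getGeneratedKeys() call would consume the result of flushBatches().
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DelayedIdentityInsertQueue {

    /** One deferred insert: the target table plus the transient entity. */
    private static final class Insert {
        final String table;
        final Object entity;
        Insert(String table, Object entity) {
            this.table = table;
            this.entity = entity;
        }
    }

    private final List<Insert> pending = new ArrayList<>();

    /** Called where the eager IDENTITY insert would otherwise run. */
    public void enqueue(String table, Object entity) {
        pending.add(new Insert(table, entity));
    }

    /**
     * At flush time: group by table, preserving insertion order, so each
     * group can become a single PreparedStatement batch and the generated
     * keys can be read back in order (possible on MySQL and PostgreSQL,
     * but not on every database, as noted above).
     */
    public Map<String, List<Object>> flushBatches() {
        Map<String, List<Object>> batches = new LinkedHashMap<>();
        for (Insert i : pending) {
            batches.computeIfAbsent(i.table, t -> new ArrayList<>()).add(i.entity);
        }
        pending.clear();
        return batches;
    }
}
```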
Someone on Twitter pointed out this issue:
I managed to add a test case using the Hibernate ORM Test Templates and the
issue is replicated.
The question is whether we should run the version check when loading
entities from the database. This issue is caused by initializing a
collection that contains a previously loaded entity which has changed in
the meantime.
I also think we shouldn't raise the exception in this case because the
client might not modify the data anyway.
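A tiny illustration of the trade-off being discussed (not ORM's actual code, and the names are made up): when a re-read finds that the version column moved on since the entity was first loaded, we can either raise eagerly on load or tolerate the stale version until a write actually happens:

```java
// Hypothetical sketch of the two policies. In the eager mode a version
// mismatch on load raises immediately; in the deferred mode it is
// tolerated, and only a later UPDATE ... WHERE version = ? would fail.
public class VersionCheck {

    public static class StaleObjectException extends RuntimeException {
        public StaleObjectException(String msg) { super(msg); }
    }

    /**
     * @param failEagerly if true, a mismatch raises immediately on load;
     *                    if false, the stale version is tolerated for now
     * @return true when the cached entity is still current
     */
    public static boolean check(long cachedVersion, long databaseVersion, boolean failEagerly) {
        if (cachedVersion == databaseVersion) {
            return true;
        }
        if (failEagerly) {
            throw new StaleObjectException(
                "row updated by another transaction: " + cachedVersion + " vs " + databaseVersion);
        }
        return false; // defer: a read-only client never hits the conflict
    }
}
```

The suggestion in the thread corresponds to the deferred mode: skip the check on pure loads, since the client might never modify the data.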
SQM needs information about the domain model being queried. In the
original Antlr redesign work I did, I opted for a completely new
"type system" specific to the parsing. The main reason for this was to
allow for "other consumers" (besides Hibernate ORM) of its services. By
and large we have all agreed that should no longer be a design
requirement. But that still leaves us with the question of what we should
do in SQM moving forward. We have a few options on how to achieve this.
At the highest level we could either reuse an existing type system or we
could develop a "SQM specific" type system.
Reusing an existing type system really boils down to 2 options:
1) Use the Hibernate ORM type system (Type, EntityPersister, etc.)
2) Use the JPA type system (javax.persistence.metamodel.Type, etc)
I have a prototype of SQM using the JPA type system. There are some
major limitations to this approach, as well as some very significant
benefits. The main benefit is that it makes it much more trivial to
interpret JPA criteria queries (no conversion between type systems).
However, as I said the limitations are pretty significant. Put simply, the
JPA type system is very inflexible, and certain concepts in Hibernate simply
would not fit it; this includes ANY type mappings, dynamic
(EntityType.MAP, etc) model types, BAG and IDBAG collections, etc. Also,
it defines a lot of things we don't need nor care about for query
translation. All in all, I'd vote against this.
Using the Hibernate type system is a viable alternative, though I think
that works best if we move SQM in its entirety upstream into the ORM
project (otherwise we have a bi-directional set of dependencies). The only
drawback is that the Hibernate ORM type system is not very
consumption-friendly.
The flip side is to develop a SQM-specific type system. We have 2
prototypes of that at the moment. First is the original one I pretty
much copied directly over from the original Antlr redesign work I mentioned
above. I'd vote against this one; I started looking at
alternatives for a (few) reason(s) ;) The second is one I developed
loosely based on the JPA type system, but more flexible, more aligned with
Hibernate concepts and limited to just query translation-related concerns.
I am open to alternatives too. Thoughts?
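For a sense of what the second prototype's direction could look like, here is a rough sketch (all names hypothetical): a type system loosely modeled on the JPA metamodel but trimmed to query-translation concerns, with room for Hibernate-specific concepts such as ANY mappings and dynamic MAP-mode entities that do not fit the JPA metamodel.

```java
// Hypothetical SQM-specific type system sketch; not actual SQM code.
import java.util.Map;

public class SqmTypeSystemSketch {

    /** Root of the navigable type hierarchy used during query translation. */
    public interface SqmType {
        String getTypeName();
    }

    /** Scalar values (String, Integer, ...); no attributes to navigate. */
    public interface SqmBasicType extends SqmType {}

    /** Entity types expose attribute navigation for path resolution. */
    public interface SqmEntityType extends SqmType {
        SqmType findAttributeType(String attributeName);
    }

    /** No JPA counterpart: Hibernate's "any" heterogeneous association. */
    public interface SqmAnyType extends SqmType {}

    /** Example factory for a dynamic (MAP entity-mode) entity, which the
     *  JPA metamodel cannot represent: attributes come from a map, not a class. */
    public static SqmEntityType dynamicEntity(String name, Map<String, SqmType> attributes) {
        return new SqmEntityType() {
            @Override public String getTypeName() { return name; }
            @Override public SqmType findAttributeType(String attr) { return attributes.get(attr); }
        };
    }

    public static SqmBasicType basic(String name) {
        return () -> name;
    }
}
```

The shape deliberately mirrors `javax.persistence.metamodel.Type` closely enough that interpreting JPA criteria queries stays cheap, while leaving the hierarchy open for Hibernate-only concepts.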
Davide is the right person to talk to. We were discussing this very subject during our face to face meeting and a few things will change. So now is a good time to feed your needs.
> On 30 Nov 2015, at 16:23, Scott Marlow <smarlow(a)redhat.com> wrote:
> Hi Gytis,
> Excellent to hear that you are looking at Neo4j. I'm not sure, Emmanuel would be the better person to ask.
>> On 11/26/2015 12:23 PM, HipChat wrote:
>> Gytis Trikleris
>> just sent you a 1-1 message but you're offline:
>> Gytis Trikleris
>> Hey Scott! I am playing around with Neo4j as a result of your email for
>> Tom last month. The one about recovering Neo4j transactions via their
>> REST API.
>> I'm checking whether it's doable, so would like to do a prototype for
>> it. Do you know, if there is already a way to get Neo4j transaction id
>> in OGM?
>> 12:23 PM
I managed to migrate the whole User Guide to AsciiDoctor and you can check
it out on this branch:
Next week, I'll start reading and reviewing it, and make a plan of what
needs to be written so we have a full 5.0 user reference documentation.
Have a great weekend,
I see that StandardDatabaseInfoDialectResolver selects PostgresPlusDialect
for database named "EnterpriseDB". Is that still correct for Enterprise DB
Postgres Plus Advanced Server 9.4, or should PostgreSQL94Dialect be used?
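To spell out the decision being asked about, here is a hypothetical sketch of the resolution logic (not the actual StandardDatabaseInfoDialectResolver): today a product name of "EnterpriseDB" always maps to the Postgres Plus dialect, and the question is whether 9.4+ servers should instead get a version-specific PostgreSQL dialect.

```java
// Hypothetical dialect-resolution logic for illustration only; the real
// resolver works on DatabaseMetaData-derived info, not plain strings.
public class DialectChooser {

    public static String choose(String productName, int major, int minor) {
        if ("EnterpriseDB".equals(productName)) {
            // Current behavior per the thread; the open question is whether
            // major/minor >= 9.4 should return "PostgreSQL94Dialect" instead.
            return "PostgresPlusDialect";
        }
        if ("PostgreSQL".equals(productName)) {
            if (major > 9 || (major == 9 && minor >= 4)) {
                return "PostgreSQL94Dialect";
            }
            return "PostgreSQL9Dialect";
        }
        return "StandardDialect"; // fall-through placeholder
    }
}
```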