6.0 - ResultTransformer
by Steve Ebersole
Another legacy concept I'd like to revisit as we move to 6.0 is the
Hibernate ResultTransformer. I'd argue that ResultTransformer is no longer
needed, especially in its current form.
Specifically, ResultTransformer defines 2 distinct ways to transform the
results of a query:
1. `#transformTuple` - this method operates on each "row" of the result,
allowing the user to transform the Object[] into some other structure.
This is the one I see no value in carrying forward. Between
dynamic instantiation, Tuple handling, etc., users already have the
capabilities they need to transform the query result tuples (see the
sketch after this list).
2. `#transformList` - this one operates on the query result as a whole
(unless scroll/iterate are used). This method at least adds something that
cannot be done in another way. But I'd still personally question its
overall usefulness.
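To make the first point concrete, here is a rough sketch against the current
5.x API. The Person entity and the org.example.PersonSummary DTO are made up
for illustration; the point is that the dynamic-instantiation and Tuple
variants cover what `#transformTuple` is typically used for:

```java
package org.example;

import java.util.List;

import javax.persistence.Tuple;

import org.hibernate.Session;
import org.hibernate.transform.Transformers;

// Assumes a hypothetical Person entity and an org.example.PersonSummary DTO
// (String/int constructor plus name/age setters); neither is part of Hibernate.
public class ResultTransformerAlternatives {

    @SuppressWarnings({"unchecked", "deprecation"})
    public static void examples(Session session) {
        // Legacy: per-row transformation via ResultTransformer (#transformTuple).
        List<PersonSummary> legacy = session
                .createQuery( "select p.name as name, p.age as age from Person p" )
                .setResultTransformer( Transformers.aliasToBean( PersonSummary.class ) )
                .list();

        // Alternative: dynamic instantiation in the query itself.
        List<PersonSummary> viaConstructor = session
                .createQuery(
                        "select new org.example.PersonSummary( p.name, p.age ) from Person p",
                        PersonSummary.class
                )
                .getResultList();

        // Alternative: standard JPA Tuple handling; each Tuple exposes the
        // aliased values, e.g. tuple.get( "name", String.class ).
        List<Tuple> viaTuple = session
                .createQuery( "select p.name as name, p.age as age from Person p", Tuple.class )
                .getResultList();
    }
}
```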
Does anyone have an argument for continuing to support either of these?
Personally, I propose just dropping the ResultTransformer support
altogether.
7 years, 9 months
Dialect#remapSqlTypeDescriptor
by Steve Ebersole
TL;DR: Should we adjust things so that the Dialect knows the "context" in
which the remapping is requested?
Ah Oracle...
So this comes from the fact that Oracle does not support a BOOLEAN
datatype. Well, kind of. It does not support a BOOLEAN datatype in its
"SQL engine". However, in PL/SQL it does in fact support a BOOLEAN
datatype. That comes into play when calling functions and procedures:
the arguments and return values can in fact be BOOLEAN.
As far as I know, Oracle is the only database with this type of
inconsistency in its type system. But the question is whether we want to
pass along some kind of information regarding the context (SQL vs. function
vs. procedure) to Dialect#remapSqlTypeDescriptor.
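To make the question concrete, a purely hypothetical sketch - the RemapContext
enum and the two-argument method below do not exist in Dialect today; they
just illustrate what passing the usage context could look like:

```java
import java.sql.Types;

import org.hibernate.type.descriptor.sql.BitTypeDescriptor;
import org.hibernate.type.descriptor.sql.SqlTypeDescriptor;

/**
 * Hypothetical sketch only: RemapContext and the two-argument method are not
 * part of the Dialect contract; they illustrate the proposal.
 */
public class RemapContextSketch {

    /** Where the JDBC type descriptor is about to be used. */
    public enum RemapContext { SQL, FUNCTION_CALL, PROCEDURE_CALL }

    /**
     * What an Oracle-style, context-aware remapping might do: remap BOOLEAN
     * to BIT (NUMBER(1)) for plain SQL, but leave it alone for PL/SQL
     * function/procedure arguments and returns, where BOOLEAN is supported.
     */
    public SqlTypeDescriptor remapSqlTypeDescriptor(SqlTypeDescriptor descriptor, RemapContext context) {
        if ( descriptor.getSqlType() == Types.BOOLEAN && context == RemapContext.SQL ) {
            return BitTypeDescriptor.INSTANCE;
        }
        return descriptor;
    }
}
```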
You can look at https://hibernate.atlassian.net/browse/HHH-11141 for an
illustration of how this impacts applications. And if you look through the
comments you can see the kind-of-crazy workaround needed.
8 years, 1 month
mixing named and positional parameters
by Steve Ebersole
The JPA spec specifically says:
<quote>
Either positional or named parameters may be used. Positional and named
parameters must not be mixed in a single query.
</quote>
I was thinking about how it does not make sense to mix these in a query
(it's confusing) and went looking to see what, if anything, the spec had to
say on the subject. That is when I found the passage above.
Currently we do not validate this one way or the other. But I think we
ought to start. The only real question is whether to make this an
exception all the time, or just when strict-JPA-compliance is requested.
Personally I vote for always disallowing this (like I said, I find it
confusing), but would like to get others' thoughts.
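For illustration (assuming a hypothetical Person entity), this is the kind of
query that would start failing; today Hibernate accepts it without complaint:

```java
import javax.persistence.EntityManager;
import javax.persistence.Query;

public class MixedParametersExample {

    // Mixes a named parameter (:name) with a positional one (?1) in a single
    // query, which the JPA spec passage quoted above forbids. Currently this
    // is accepted silently; the proposal is to reject it, either always or
    // only when strict JPA compliance is requested.
    public static Query mixedQuery(EntityManager em) {
        return em.createQuery( "select p from Person p where p.name = :name and p.age > ?1" )
                .setParameter( "name", "Steve" )
                .setParameter( 1, 40 );
    }
}
```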
8 years, 1 month
Starting 5.2.3 release
by andrea boriero
Due to troubles with my Nexus account, the release is not yet complete.
In any case, it's now possible to push to master.
8 years, 1 month
Money Validation
by Willi Schönborn
Hi,
I'm currently preparing a pull request to contribute Java Money related
validators to HV:
https://github.com/zalando/money-validation
The main part is a set of custom validators that add MonetaryAmount support
to the following constraints (a sketch of one such validator follows the list):
- DecimalMax
- DecimalMin
- Max
- Min
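For context, a minimal sketch of what such a validator looks like, using
@DecimalMin as the example. This is illustrative only, not the actual code
from the pull request:

```java
import java.math.BigDecimal;

import javax.money.MonetaryAmount;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.constraints.DecimalMin;

// Illustrative sketch, not the actual contribution: applies @DecimalMin to
// javax.money.MonetaryAmount by comparing the amount's numeric value.
public class DecimalMinMonetaryAmountValidator
        implements ConstraintValidator<DecimalMin, MonetaryAmount> {

    private BigDecimal min;
    private boolean inclusive;

    @Override
    public void initialize(DecimalMin constraint) {
        this.min = new BigDecimal( constraint.value() );
        this.inclusive = constraint.inclusive();
    }

    @Override
    public boolean isValid(MonetaryAmount value, ConstraintValidatorContext context) {
        if ( value == null ) {
            // Like the standard validators, treat null as valid; use @NotNull for that.
            return true;
        }
        int comparison = value.getNumber().numberValueExact( BigDecimal.class ).compareTo( min );
        return inclusive ? comparison >= 0 : comparison > 0;
    }
}
```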
In addition, we defined the following custom constraints:
- Negative
- NegativeOrZero
- Positive
- PositiveOrZero
- Zero
Their names are closely aligned with methods in MonetaryAmount, but they are
in fact built solely on top of the standard constraints DecimalMax and
DecimalMin. So in theory somebody could use them with ints, longs,
BigDecimals, BigIntegers, CharSequences, ... You get the idea.
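As an illustration of what "built solely on top of the standard constraints"
means, here is a sketch of such a composed constraint (again illustrative,
not the actual pull request code):

```java
import static java.lang.annotation.ElementType.ANNOTATION_TYPE;
import static java.lang.annotation.ElementType.CONSTRUCTOR;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.Payload;
import javax.validation.ReportAsSingleViolation;
import javax.validation.constraints.DecimalMin;

/**
 * Sketch of a composed constraint: "positive" expressed purely in terms of the
 * standard @DecimalMin, so it works for any type @DecimalMin supports,
 * including MonetaryAmount via the custom validators mentioned above.
 */
@DecimalMin(value = "0", inclusive = false)
@ReportAsSingleViolation
@Documented
@Constraint(validatedBy = { })
@Target({ METHOD, FIELD, ANNOTATION_TYPE, CONSTRUCTOR, PARAMETER })
@Retention(RUNTIME)
public @interface Positive {

    String message() default "must be positive";

    Class<?>[] groups() default { };

    Class<? extends Payload>[] payload() default { };
}
```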
My question now is, are you guys interested in those custom constraints as
well or should I limit my pull request to the validators for MonetaryAmount?
Best,
Willi
8 years, 1 month
ORM and Java 9
by Steve Ebersole
It seems like Shigeru and team have Javassist Java 9 compatible now. Per
https://issues.jboss.org/browse/JASSIST-261 I have played with the proposed
changes using a SNAPSHOT of that work, built and pushed by Scott. Using that
SNAPSHOT, all those tests which used to fail due to Javassist now pass.
We do still have some failures under Java 9, all of which are attributable
to the version of WildFly we use for some Arquillian-based testing not being
Java 9 compatible. But from what I understand, WildFly itself is now Java 9
compatible. Does anyone know whether there is a WildFly 10 release yet that
is Java 9 compatible?
8 years, 1 month
HSEARCH-2358 "fields" attribute in Elasticsearch search results is being ignored
by Yoann Rodiere
Hi,
I wanted to start a discussion about this issue.
It's about stored field retrieval. When searching, Elasticsearch can return
field values in two different ways:
* through the "_source" attribute [1], which basically provides a
copy-paste of the JSON that was submitted when indexing
* or through the "fields" attribute [2], which only works for stored
fields and provides the actual value that Elasticsearch stored
The main difference really boils down to formatting. With the "_source"
attribute, there's no formatting involved, you get exactly what was
originally submitted. With the "fields" attribute, the value is formatted
according to the first format in the mapping's format list [3].
The thing is, Elasticsearch allows admins to set multiple formats for a
given field. This won't change the output format, but will allow using any
one of these formats when submitting information. Since these "extra"
formats probably aren't understood by Hibernate Search, this means that
using the "_source" attribute to retrieve field values becomes unreliable
as soon as someone else adds/changes documents in Elasticsearch...
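To make the difference concrete, a small illustration using raw JSON embedded
in Java strings; the field, format choices and values are made up for the
example:

```java
/**
 * Illustration only: shows why "_source" and "fields" can disagree once a
 * mapping accepts several date formats.
 */
public class SourceVsFieldsIllustration {

    // Hypothetical mapping: two accepted input formats; per [3], the first one
    // ("strict_date_time") is the one used when rendering stored values.
    static final String DATE_FIELD_MAPPING =
            "{ \"properties\": { \"releaseDate\": {"
            + "    \"type\": \"date\","
            + "    \"store\": true,"
            + "    \"format\": \"strict_date_time||epoch_millis\""
            + "} } }";

    // Search request asking for the same field through both mechanisms.
    static final String SEARCH_REQUEST =
            "{ \"query\": { \"match_all\": {} },"
            + "  \"_source\": [ \"releaseDate\" ],"
            + "  \"fields\": [ \"releaseDate\" ] }";

    // If someone indexed a document with "releaseDate": 1472688000000 (epoch
    // millis, a format Hibernate Search never submits), "_source" echoes
    // 1472688000000 as-is, while "fields" returns the value rendered with
    // strict_date_time, e.g. "2016-09-01T00:00:00.000Z".
}
```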
So we have two solutions:
1. Either we only use the "fields" attribute to retrieve field values, and
we force users to have the output format set to something HSearch will
understand, but allow extra input formats.
2. or we use the "_source" attribute to retrieve field values, and then we
force both output and input format on users, and do not allow extra formats.
I'd be in favor of 1, which seems more rational to me. It only has one
downside: if we go with this approach, Calendar values (and
ZonedDateTime, ZonedTime, etc.) will have to be stored as String, not as
Date, since Elasticsearch doesn't store the timezone, just the UTC
timestamp. We're currently working around this by inspecting the "_source",
which contains the original timezone (since it's just the raw, originally
submitted JSON).
What do you think?
[1] https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-s...
[2] https://www.elastic.co/guide/en/elasticsearch/reference/current/search-re...
[3] https://www.elastic.co/guide/en/elasticsearch/reference/2.4/mapping-date-...
Yoann Rodière <yoann(a)hibernate.org>
Hibernate NoORM Team
8 years, 1 month