JSR 354 - Money and Currency
by Steve Ebersole
So it sounds like JSR 354 may not be included in Java 9. Do we still want
to support this for ORM 5? I am not sure if "moneta" requires Java 9...
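For reference, a minimal sketch of what a JSR 354 amount looks like through
the Moneta reference implementation (whether Moneta actually needs more than
Java 8 is exactly the open question here; names are illustrative only):

import java.math.BigDecimal;
import javax.money.CurrencyUnit;
import javax.money.Monetary;
import javax.money.MonetaryAmount;
import org.javamoneta.moneta.Money; // typo guard: correct package is org.javamoney.moneta

public class MonetaryAmountExample {
    public static void main(String[] args) {
        // A JSR 354 amount is essentially a number plus a CurrencyUnit.
        CurrencyUnit eur = Monetary.getCurrency( "EUR" );
        MonetaryAmount price = org.javamoney.moneta.Money.of( new BigDecimal( "19.99" ), eur );
        // Any ORM support would presumably persist these two pieces,
        // e.g. a numeric column plus a currency-code column.
        System.out.println( price.getNumber() + " " + price.getCurrency().getCurrencyCode() );
    }
}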
8 years, 9 months
Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is instead
streamed in another call. oracle.jdbc.driver.LongRawAccessor.getBytes also
fetches data by reading the stream.
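For comparison, the two extraction styles in plain JDBC look roughly like
this (the column name is hypothetical; no Hibernate involved):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LongRawExtraction {
    // Style ImageType effectively relies on today: whole value in one call.
    static byte[] readInOneCall(ResultSet rs) throws SQLException {
        return rs.getBytes( "long_raw_col" );
    }

    // Streaming alternative: read the same data through the column's stream.
    static byte[] readViaStream(ResultSet rs) throws SQLException, IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try ( InputStream in = rs.getBinaryStream( "long_raw_col" ) ) {
            byte[] buffer = new byte[8192];
            int read;
            while ( ( read = in.read( buffer ) ) != -1 ) {
                out.write( buffer, 0, read );
            }
        }
        return out.toByteArray();
    }
}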
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It can sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an Oracle JDBC
implementation issue. This option came to my mind, but it's weird :P.
I would vote for the first option.
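For concreteness, the first option could boil down to something along these
lines (an untested sketch based on the registerColumnType() changes Gail
suggests below; the subclass name is made up):

import java.sql.Types;

import org.hibernate.dialect.Oracle10gDialect;

// Hypothetical variant of Oracle10gDialect that maps the LONG types to LOBs.
public class Oracle10gLobDialect extends Oracle10gDialect {
    public Oracle10gLobDialect() {
        super();
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}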
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? That way new
>> applications/databases could perhaps use that, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> JIRA
>>>> issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column is
>>>> accessed other than as the first or last column while processing a SQL
>>>> statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB; BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
8 years, 10 months
Search DSL expectations for "keyword()" clause
by Sanne Grinovero
Assuming you build a Lucene Query the following way:
queryBuilder.keyword().onField( "age" ).matching( 5 ).createQuery();
What is your expectation if the "age" field is being indexed as a NumericField?
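For context, a minimal sketch of the mapping and query in question (the
entity and session handling are hypothetical):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.apache.lucene.search.Query;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.NumericField;
import org.hibernate.search.query.dsl.QueryBuilder;

// Hypothetical entity whose "age" field ends up indexed as a NumericField.
@Entity
@Indexed
class Person {
    @Id
    Long id;

    @Field
    @NumericField
    int age;
}

public class KeywordOnNumericFieldExample {
    static Query buildAgeQuery(Session session) {
        FullTextSession fullTextSession = Search.getFullTextSession( session );
        QueryBuilder queryBuilder = fullTextSession.getSearchFactory()
                .buildQueryBuilder().forEntity( Person.class ).get();
        // The clause in question: keyword() against a numerically indexed field.
        return queryBuilder.keyword().onField( "age" ).matching( 5 ).createQuery();
    }
}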
Thanks,
Sanne
8 years, 11 months
HQL and spatial
by Steve Ebersole
Karel, et al.
We have discussed spatial-specific extensions to HQL for quite some time.
But those discussions have always been kind of esoteric ("boy wouldn't it
be nice to have some spatial support in HQL").
As we are working on redesigning the parsing and interpretation of HQL
queries and since spatial has been integrated upstream, it seems like a
great time to discuss specifics of what this might mean.
I have never used spatial data, let alone crafted queries using spatial
data. So I am not the best driver here.
What kinds of things make sense to add to HQL for supporting spatial
queries?
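Purely as an illustration of the kind of thing that might be meant (the
within() function, the City entity, and the parameter are all hypothetical,
not an existing feature):

import java.util.List;

import org.hibernate.Session;

public class SpatialHqlSketch {
    // Hypothetical: a spatial predicate usable directly in HQL.
    static List<?> citiesWithin(Session session, Object searchArea) {
        return session.createQuery(
                "select c from City c where within(c.location, :filter) = true" )
            .setParameter( "filter", searchArea )
            .list();
    }
}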
9 years
SQM : Criteria translation design
by Steve Ebersole
I started work on SQM-24 which covers translation of criteria queries into
SQM.
The difficulty with this is that the JPA contracts alone do not expose
enough information to really understand the semantics of the query. I can
get into specific examples if that helps, but for now let's take that as a
given...
So then how do we go about translating a criteria into an SQM? There are
going to be two main approaches. Each requires some level of extension to
the standard contracts:
The first approach is to use visitation. The criteria nodes would be
expected to implement an SQM extension accepting a visitor and do the right
thing. The gains here are the normal gains of the visitor pattern. The
downside is that this makes SQM highly dependent on the criteria impl doing
the right thing and makes the criteria impl sensitive to SQM (depending, to
a degree, on how we expose the visitation methods).
The second approach would be to extend the standard criteria contracts to
more fully cover the semantic. As one example, JPA defines just Predicate
(for the most part) without exposing the type of predicate. Is it a LIKE
expression? A BETWEEN? A Comparison (=, !=, <, etc)? We just don't know
from the standard contracts. So we'd have to develop a full semantic
extension model here. `interface LikePredicate extends Predicate`,
`BetweenPredicate extends Predicate`, etc.
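A rough sketch of what the visitor-style extension (the first option) might
look like; all names here are hypothetical, just to show the shape:

import javax.persistence.criteria.Expression;
import javax.persistence.criteria.Predicate;

// Visitor the SQM translator would supply; one method per semantic node kind.
interface CriteriaSemanticVisitor<T> {
    T visitLikePredicate(Expression<String> matchExpression, Expression<String> pattern);
    T visitBetweenPredicate(Expression<?> expression, Object lowerBound, Object upperBound);
    T visitComparisonPredicate(Predicate predicate);
    // ... and so on for the rest of the semantic model
}

// Extension contract our criteria node implementations would implement.
interface SqmVisitableNode {
    <T> T accept(CriteriaSemanticVisitor<T> visitor);
}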
I lean towards the visitor approach given these choices. Anyone else have
opinions? Other options?
9 years, 1 month
Hibernate5 migration
by Koen Serneels
Hi,
I'm migrating from Hibernate 4 to 5 (RC4). While doing so I'm stumbling over
some things that have been removed or moved.
- In Hibernate 4 we modified mappings on-the-fly by overriding Spring's
LocalSessionFactoryBean#buildSessionFactory.
Doing so we could first access getClassMapping() on
org.hibernate.cfg.Configuration before letting the SF actually build.
However, in Hibernate 5 the metadata access has been refactored and is no
longer part of the Configuration. Should we use MetadataContributor instead
for these purposes?
- Is there a way to register a MetadataContributor dynamically? I see that it
is being loaded using Java's ServiceLoader (see the sketch after these
questions). However, I need some programmatic API access to enable or
disable the contributor (for example based on Spring profiles).
- For JTA integration we were using
org.hibernate.engine.transaction.internal.jta.CMTTransactionFactory, but
this class is no longer present.
The key that was used to configure the factory
(hibernate.transaction.factory_class) has also been removed from
org.hibernate.cfg.AvailableSettings.
Is hibernate.transaction.coordinator_class the new key we should be using
instead, with
org.hibernate.resource.transaction.backend.jta.internal.JtaTransactionCoordinatorImpl
as the value for JTA?
Do we need to configure anything special in the case of resource-local TX, or
is JdbcResourceLocalTransactionCoordinatorImpl the default?
- generateDropSchemaScript and generateSchemaCreationScript have been
removed from Configuration. Is there a way to access this in another way?
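For reference on the MetadataContributor questions above, a minimal
contributor registered through the ServiceLoader mechanism would look
roughly like the following, assuming the 5.0 SPI signature is as it appears
(the package and class names are placeholders):

package com.example;   // placeholder package

import org.hibernate.boot.spi.InFlightMetadataCollector;
import org.hibernate.boot.spi.MetadataContributor;
import org.jboss.jandex.IndexView;

// Discovered via a META-INF/services/org.hibernate.boot.spi.MetadataContributor
// file listing this class name, so there is no programmatic on/off switch
// out of the box.
public class ExampleMetadataContributor implements MetadataContributor {
    @Override
    public void contribute(InFlightMetadataCollector metadataCollector, IndexView jandexIndex) {
        // Adjust the in-flight bindings here, before the SessionFactory is
        // built -- roughly where the old Configuration#getClassMapping tweaks
        // used to live in Hibernate 4.
    }
}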
Thanks
Koen.
9 years, 1 month
Some proposals
by Steve Ebersole
Getting some proposals that have been rolling around in my head down on
paper (electronically speaking)..
*Caching SessionFactory state*
The Jira[1] contains the details. The basic gist is to allow for slimming
down the in-memory size of the SessionFactory based on how we store certain
SF-scoped state. I do not have hard numbers that this would help
performance, but I do know that the SessionFactory can be a large hit to
"old gen" memory on a lot of systems and that minimizing the amount of such
memory space in general helps with the operational performance of the VM;
so I thought it might be worth some exploration. Let's please discuss this
one on the Jira. Add any thoughts you may have, or vote it up if you think
it makes sense.
*Merge hibernate-core and hibernate-entitymanager*
This is one we have discussed before. There is not a Jira for it
specifically afaik. The idea would be to merge together the core and hem
modules into a single module (jar). This has a lot of different benefits,
which we have discussed before. The reason I am bringing it up now (again)
is that there is a new looming benefit as we work on SQM. At the moment
SQM defines its own "metamodel" contracts (org.hibernate.sqm.domain
package). However, if we merged core and hem that would mean that the
Hibernate core stuff would have access to the JPA metamodel definitions and
therefore we could define SQM in terms of the JPA metamodel.
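For reference, the JPA metamodel in question is the standard
javax.persistence.metamodel API; a trivial sketch of the kind of information
it exposes:

import javax.persistence.EntityManagerFactory;
import javax.persistence.metamodel.Attribute;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.Metamodel;

public class MetamodelAccessExample {
    // Dump every entity attribute and its Java type.
    static void dump(EntityManagerFactory emf) {
        Metamodel metamodel = emf.getMetamodel();
        for ( EntityType<?> entity : metamodel.getEntities() ) {
            for ( Attribute<?, ?> attribute : entity.getAttributes() ) {
                System.out.println( entity.getName() + "." + attribute.getName()
                        + " : " + attribute.getJavaType().getName() );
            }
        }
    }
}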
The issue that has held us back in the past is the different behavior of the
different event listener implementations for certain events. However, I
think every hard limitation is a result of the listener and PC design in
regard to cascading, in that the listener itself says which operation to
cascade. So, e.g., in core the save/persist/merge/update operations are
cascaded as save-update, whereas those operations in the JPA-based
listeners cascade as merge. This has been the one sticky point that has
held us back from doing this merging previously. The problem (imo) is that
the PC has no concept of a "current operation context". This is why, e.g.,
you see listeners for cascadable operations define method overloads; one
taking a "context Map" and one not. Gail and I have discussed actually
adding a concept such as this "current operation context" to the PC as a
way around some other limitations and it would certainly help here too.
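A purely hypothetical sketch of what such an "operation context" might look
like; none of these types exist in Hibernate today:

// Hypothetical "current operation context" carried by the PC.
interface OperationContext {
    enum OperationType { PERSIST, MERGE, SAVE_UPDATE, DELETE, REFRESH }

    OperationType getOperationType();

    // Replaces the ad-hoc "context Map" overloads the cascadable listeners
    // currently define.
    java.util.Map<Object, Object> getWorkingCopies();
}

interface OperationContextAwarePersistenceContext {
    OperationContext getCurrentOperationContext();

    void beginOperation(OperationContext context);
    void endOperation(OperationContext context);
}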
*Some changes to mapping model*
The inclusion of the completely new "mapping model" is being delayed
indefinitely. In the meantime, I do propose that we pull some of the
improvement concepts over to the existing mapping model (as defined in
org.hibernate.mapping). Most of the changes I propose relate to relational
side. A lot of it deals with aggregating related state (OO design).
Koen, I'd especially like your thoughts as this would represent another
change that I think affects you in tooling code. This would be work done
as part of the "jandex-binding" work, which is still to-be-scheduled, so
it's not like it adds work for you tomorrow :)
Some (not exhaustive) specific changes include (a rough sketch follows this list):
* As mentioned above, I'd really like to rework at least the relational
side. Specifically replace org.hibernate.mapping representations of Table,
Column, Formula, etc with definitions more in line with the definitions we
worked on in metamodel. This includes tables, columns, etc understanding
the split between logical and physical naming, and keeping reference to
both.
* Defining associations based on a ForeignKey, rather than just a
collection of columns (encapsulation). Whether the ForeignKey is generated
is a whole different story.
* More aggregation at the binding level. For example, RootClass currently
exposes multiple pieces of information about an identifier (pk), rather
than just a single "identifier descriptor". Same for caching descriptor,
"fetching characteristics", etc.
[1] - https://hibernate.atlassian.net/browse/HHH-10213
9 years, 1 month