JSR 354 - Money and Currency
by Steve Ebersole
So it sounds like JSR 354 may not be included in Java 9. Do we still want
to support this for ORM 5? I am not sure if "moneta" requires Java 9...
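For reference, a minimal sketch of what the JSR 354 API looks like
through the Moneta reference implementation (a sketch only - the class
and method names below come from Moneta, not from any ORM integration):

import javax.money.MonetaryAmount;
import org.javamoney.moneta.Money;

public class MoneyExample {
    public static void main(String[] args) {
        // Money.of(Number, String) creates an amount in the given currency.
        MonetaryAmount price = Money.of(19.99, "USD");
        // Arithmetic preserves the currency unit.
        MonetaryAmount total = price.multiply(3);
        System.out.println(total); // prints something like "USD 59.97"
    }
}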
Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i
(released before September 2000) [2]. Oracle keeps those column types only
for backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when data is streamed in
a separate call - oracle.jdbc.driver.LongRawAccessor.getBytes also
fetches data by reading the stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide
to 4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation (see the sketch after the references below).
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
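For illustration, the first option could look roughly like this - a
sketch against the 4.0-era Dialect API, with a hypothetical subclass
name and the registrations Gail suggested below:

import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

// Hypothetical subclass applying the suggested registrations; a sketch,
// not the final implementation.
public class LobOracle10gDialect extends Oracle10gDialect {
    public LobOracle10gDialect() {
        super();
        // Map the "long" SQL types to LOBs instead of LONG / LONG RAW.
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
        // Short values stay in RAW / VARCHAR2; larger ones overflow to LOBs.
        registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
        registerColumnType( Types.VARBINARY, "blob" );
        registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
        registerColumnType( Types.VARCHAR, "clob" );
    }
}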
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application's performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the
>>>> HHH-6726 JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column
>>>> is accessed neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for
>>>> doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG supports only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Testing Hibernate 5: injecting a Spring managed interceptor
by Guillaume Smet
Hi,
As I have cycles this week and next week, I thought I might as well do some
QA on Hibernate 5.
I'm still in the process of porting our code to 5 at the moment, and I
have a pattern we used before that I can't find an elegant way to port
to Hibernate 5: this pattern is used to inject a Spring-managed interceptor.
We override the persistence provider to inject the interceptor in the
Hibernate configuration:
https://gist.github.com/gsmet/e8d3003344938b1d327b
I studied the new code for quite some time and I couldn't find a way to
inject my interceptor in 5.
Note that it's a pretty common pattern in the Spring-managed world.
Thanks for any guidance.
--
Guillaume
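One possible direction - a sketch only, assuming the
"hibernate.ejb.interceptor" setting still accepts an Interceptor
instance in 5, and with a hypothetical persistence unit name:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.hibernate.Interceptor;

public class InterceptorBootstrap {
    // The Spring-managed interceptor is handed in from the outside,
    // e.g. from a @Bean method; "my-unit" is a made-up unit name.
    public static EntityManagerFactory build(Interceptor springInterceptor) {
        Map<String, Object> props = new HashMap<>();
        props.put("hibernate.ejb.interceptor", springInterceptor);
        return Persistence.createEntityManagerFactory("my-unit", props);
    }
}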
new proposal for tx timeout handling using transaction DISASSOCIATING event notification...
by Scott Marlow
With a proposed TM-level listener, we will have an SPI for notification
of when application threads associated with a JTA transaction become
disassociated from it (at tm.commit/rollback/suspend time). With this
knowledge, a synchronization callback can determine whether the
persistence context should be cleared directly from the
Synchronization.afterCompletion(int) call or whether that should be
deferred until the thread is disassociated from the JTA transaction.
This idea is based on a TM level listener approach that Tom Jenkinson
[1] suggested. Mike Musgrove has a "proof of concept" implementation of
the suggested changes [2]. I did some testing with [3] to see if the
improvement helps with clearing entities that might still be in the
persistence context after a background tx timeout.
I'm wondering whether, in the Hibernate ORM
Synchronization.afterCompletion(int status) implementation, in case of
tx rollback, we could defer the clearing of the Hibernate session to
be handled by the JtaPlatform. This could be set up at
EntityManager.joinTransaction() time (if a new property like
"hibernate.transaction.defer_clear_session" is true). Perhaps via a
JtaPlatform.joinTransaction(EntityManager) registration call? A sketch
of the idea follows the references below.
Thoughts?
Scott
[1] https://developer.jboss.org/thread/252572?start=45&tstart=0
[2]
https://github.com/mmusgrov/jboss-transaction-spi/blob/threadDisassociati...
[3]
https://github.com/scottmarlow/wildfly/tree/transactiontimeout_clientut_n...
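A sketch of the afterCompletion branch described above - the deferral
flag and the hand-off to the JtaPlatform are illustrative only, not an
existing API:

import javax.transaction.Status;
import javax.transaction.Synchronization;
import org.hibernate.Session;

public class DeferredClearSynchronization implements Synchronization {

    private final Session session;
    // e.g. driven by the proposed "hibernate.transaction.defer_clear_session"
    private final boolean deferClearOnRollback;

    public DeferredClearSynchronization(Session session, boolean deferClearOnRollback) {
        this.session = session;
        this.deferClearOnRollback = deferClearOnRollback;
    }

    @Override
    public void beforeCompletion() {
        // not relevant to this sketch
    }

    @Override
    public void afterCompletion(int status) {
        if ( status == Status.STATUS_ROLLEDBACK && deferClearOnRollback ) {
            // Deferred: the JtaPlatform would clear the session once the
            // thread is disassociated from the JTA transaction.
            return;
        }
        session.clear();
    }
}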
ORM Team "triage" meeting
by Steve Ebersole
Gail and I discussed Jira a little bit last week and how to best manage
scheduling issues.
We both agreed that a team get-together, either weekly or every other week,
to triage new issues would be a great idea.
One thing I absolutely do not want happening is just scheduling issues as a
means to come back and triage them later. Scheduling an issue, on a "real
version" anyway, should mean something. It should mean some level of
dedication to finish that task for that release. In short, unless you are
volunteering to take on a task *yourself* for that release, please do not
schedule it for that release.
As for the triage meeting, I would definitely like Gail and Andrea
involved. Of course anyone is welcome. The reason I mention this is that
Gail is usually left on the early side of scheduling these. So we will find
a time that works best for the three of us and go from there. I recommend
that we leverage HipChat for these discussions.
Andrea is coming to Austin for a few days starting Monday, so I would like
to start this triaging while he is here. Gail, I am thinking 1pm my time
(11am yours) would be a good time. Andrea, does that work for you after
Austin?
HSEARCH: Removing dynamic analyzer mapping?
by Sanne Grinovero
Among the many changes in Apache Lucene 5, it is no longer possible to
override the Analyzer on a per-document basis.
You have to pick a single Analyzer when opening the IndexWriter.
Of course the Analyzer can still return a different tokenization chain
for each field, but the field -> tokenizer mapping has to be consistent
for the lifecycle of the IndexWriter.
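For example, the fixed per-field mapping can be expressed with
Lucene's PerFieldAnalyzerWrapper - a sketch with made-up field names:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordAnalyzer;
import org.apache.lucene.analysis.miscellaneous.PerFieldAnalyzerWrapper;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;

public class FixedAnalyzerMapping {
    // The field -> analyzer mapping is decided once, when the
    // IndexWriter is opened, and cannot vary per document.
    public static IndexWriter open(Directory dir) throws IOException {
        Map<String, Analyzer> perField = new HashMap<>();
        perField.put( "isbn", new KeywordAnalyzer() ); // hypothetical field
        Analyzer analyzer = new PerFieldAnalyzerWrapper( new StandardAnalyzer(), perField );
        return new IndexWriter( dir, new IndexWriterConfig( analyzer ) );
    }
}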
This means we might need to drop our "Dynamic Analyzer" feature:
http://docs.jboss.org/hibernate/search/5.4/reference/en-US/html_single/#_...
I did ask to restore the functionality:
https://issues.apache.org/jira/browse/LUCENE-6212
So, the alternatives I'm seeing:
# Drop the Dynamic Analyzer feature
# Cheat and pass in a mutable Analyzer - needs some caution re concurrent usage
# Cheat and pass in a pre-analyzed Document
# Fork & patch the IndexWriter
Patching the functionality back into Lucene is trivial, but the Lucene
team needs to agree on the use case, and then the release timeline will
be long.
We should discuss both a short-term solution and the better long-term solution.
My favourite long-term solution would be to do pre-analysis: in our
master/slave clustering approach, that would have several other
benefits:
- move the analyzer work to the slaves
- reduce the network payloads
- remove the need to be able to serialize analyzers
But I'd prefer to do this in a second "polishing phase" rather than
consider such a backend rewrite as a blocker for Lucene 5.
WDYT?
Thanks,
Sanne
5.0.0.CR2 delay
by Steve Ebersole
The timebox for CR2 release is next Wednesday. However I am taking some
time off early next week. As a result I am going to push CR2 back one week.
The Hibernate Search / Apache Tika interaction with WildFly modules
by Sanne Grinovero
TLDR
- Remove all "optional" Maven dependencies from the project
- Things like the TikaBridge need to live in their own build unit
(their own jar)
- Components which don't have all their dependencies available shall not
be included in the WildFly modules
These are my notes after debugging HSEARCH-1885.
A service can be optionally loaded by the Service Loader pattern, but
all dependencies of each module must be available to the static module
definition.
Our current WildFly modules include the hibernate-search-engine jar,
which has an optional dependency on Apache Tika.
We don't provide a module for Apache Tika, as it has many dependencies,
so there was the assumption that extensions can be loaded from the
user classpath (as normally works). This one specifically can't
currently be loaded from the user EAR/WAR, as that causes a
> java.lang.NoClassDefFoundError: org/apache/tika/parser/Parser
The problem is that, while we initialize the
org.hibernate.search.bridge.builtin.TikaBridge using the correct
classloader (an aggregate from Hibernate ORM which includes the user
deployment), this only initializes the definition of the TikaBridge
itself.
After its class initialization, when the bridge is first used, this
triggers resolution of its import statements; it imports
org.apache.tika.parser.Parser (among others), but at that point we're
out of the scope of the custom classloader usage, so the current
module is being used, as the extension was in fact *loaded from* the
classloader for hibernate-search-engine. The point is that the
TikaBridge - while it was loaded through the aggregated classloader -
was ultimately found in the hibernate-search-engine module and was
associated with that module from then on.
A possible workaround is to set the TCCL to the aggregate classloader
during initialization of the TikaBridge and its dependencies, but this
is problematic as we can't predict which other dependencies will be
needed at runtime, when the Tika parsing happens on any random data:
one would also need to store a pointer to this classloader within the
FieldBridge, and then override the TCCL at runtime each time the
bridge is invoked... that's horrible.
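To make that rejected workaround concrete - a sketch, with the
aggregate classloader passed in as a parameter:

public final class TcclWorkaround {
    // aggregateClassLoader stands for the ORM aggregate classloader
    // that includes the user deployment; it is just a parameter here.
    public static Class<?> loadTikaBridge(ClassLoader aggregateClassLoader)
            throws ClassNotFoundException {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader( aggregateClassLoader );
        try {
            // Force initialization while the TCCL points at the aggregate,
            // hoping transitive imports (org.apache.tika.parser.Parser) resolve.
            return Class.forName(
                    "org.hibernate.search.bridge.builtin.TikaBridge",
                    true, aggregateClassLoader );
        }
        finally {
            current.setContextClassLoader( previous );
        }
    }
}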
The much simpler solution is to make sure the TikaBridge class is
loaded *and associated* with a classloader which is actually able to
load its extensions! In other words, if the user deployment includes
the Tika extensions, it should also include the TikaBridge.
So the correct solution is to break this out into a Tika module, and
not include it within the WildFly module, but have users include
it as an extension point, as they would with other custom
FieldBridges.
This problem would apply to any other dependency using the "optional"
qualifier of Maven; currently only our Tika integration relies on it,
so let's remove it, but please let's also avoid "optional" in the
future.
Thanks,
Sanne