Currently Oracle supports database versions from 10.1 to 11.2. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000); Oracle keeps those column types only for
backward compatibility.
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call. I
don't expect a major performance difference when the data is streamed in
another call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
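To make the point concrete: whether the driver hands back a byte[] directly (ResultSet.getBytes) or the application drains a stream (ResultSet.getBinaryStream), the same bytes cross the wire. A minimal self-contained sketch of the draining loop (illustration only, not the driver's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainStream {
    // Reads an InputStream fully into a byte[], which is essentially what
    // a getBytes()-style accessor does internally with the column stream.
    static byte[] drain(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the column data coming off the socket.
        byte[] data = "LONG RAW payload".getBytes();
        byte[] copy = drain(new ByteArrayInputStream(data));
        System.out.println(copy.length == data.length); // true
    }
}
```

Either way the full column value ends up in memory, which is why I would not expect a major difference between the two call styles.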
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an Oracle JDBC
implementation issue. This option came to my mind, but it's weird :P.
I would vote for the first option.
 "Getting a LONG RAW Data Column with getBytes"
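For the first option, the capacity-bounded registerColumnType entries quoted later in this thread resolve by column length: the entry with the smallest capacity that still fits the requested length wins, otherwise the unbounded default applies. A self-contained sketch of that resolution rule (illustrative only, not Hibernate's internal implementation; the class below is hypothetical):

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical registry demonstrating how capacity-bounded type
// registrations resolve for a single JDBC type code.
public class ColumnTypeRegistry {
    private final TreeMap<Long, String> byCapacity = new TreeMap<>();
    private String defaultType;

    public void register(long capacity, String type) { byCapacity.put(capacity, type); }
    public void register(String type) { defaultType = type; }

    public String resolve(long length) {
        // smallest registered capacity >= requested length, else the default
        Map.Entry<Long, String> entry = byCapacity.ceilingEntry(length);
        return entry != null ? entry.getValue() : defaultType;
    }

    public static void main(String[] args) {
        ColumnTypeRegistry varchar = new ColumnTypeRegistry();
        // mirrors: registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
        varchar.register(4000, "varchar2($l char)");
        // mirrors: registerColumnType( Types.VARCHAR, "clob" );
        varchar.register("clob");

        System.out.println(varchar.resolve(100));  // varchar2($l char)
        System.out.println(varchar.resolve(9000)); // clob
    }
}
```

With such registrations in the Oracle dialect, short values keep the native types while anything over the capacity falls through to CLOB/BLOB.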
Strong Liu wrote:
> I think Oracle 11g is the only DB version still supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
> Strong Liu <stliu(a)hibernate.org>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In , I am seeing the following type mappings:
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>> From this standpoint, the current settings are appropriate.
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>  Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>>  Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>>  Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>> Welcome Community!
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> Gail Badner wrote
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). This bug appears when LONG or LONG RAW columns are
>>>> not placed first or last while processing a SQL statement.
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB; BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> access is only sequential.
>>>> - LONG and LONG RAW are deprecated.
>>>> What is your opinion?
>>>> Lukasz Antoniak
>>>> hibernate-dev mailing list
As I have cycles this week and next week, I thought I might as well do some
QA on Hibernate 5.
I'm still in the process of porting our code to 5 at the moment, and there is
a pattern we used before that I can't find an elegant way to port to
Hibernate 5: this pattern is used to inject a Spring-managed interceptor.
We override the persistence provider to inject the interceptor in the
I studied the new code for quite some time and I couldn't find a way to
inject my interceptor in 5.
Note that it's a pretty common usage in the Spring managed world.
Thanks for any guidance.
I finally got to re-enable the MariaDB and PostgreSQL based tests for
Hibernate Search - which had been running on H2 only for some months -
and had to debug a case of a single test hanging for a long time.
Essentially it would block - for hours - on the SessionFactory#close()
method, attempting to drop the database schema with the following
statement:
> alter table AncientBook_alternativeTitles drop constraint
Dumping threads I would get a very similar stack trace on both
databases; initially I thought someone had copy/pasted a socket
handling bug from one JDBC driver to the other ;-)
The PostgreSQL testsuite hanging:
"main" prio=10 tid=0x00007f0f40009000 nid=0x5f7c runnable [0x00007f0f48956000]
at java.net.SocketInputStream.socketRead0(Native Method)
- locked <0x00000007c11e3860> (a org.postgresql.core.v3.QueryExecutorImpl)
The MariaDB testsuite hanging:
"main" prio=10 tid=0x00007f8ca0009000 nid=0x4043 runnable [0x00007f8ca5f5c000]
at java.net.SocketInputStream.socketRead0(Native Method)
- locked <0x00000007c0baa518> (a com.mysql.jdbc.util.ReadAheadInputStream)
- locked <0x00000007c0baa850> (a java.lang.Object)
- locked <0x00000007c0baa850> (a java.lang.Object)
On Gail's suggestion I went looking for database level locks; I hadn't
thought of that as I assumed it would have timed out more aggressively
rather than have me wait for hours.
It turns out she was right, and the reason for blocking was a simple
"count(*)" query being run as a post-test assertion, which we had
forgotten to wrap in an "open transaction & commit" statement pair.
The test assertion would be successful, but apparently it would hold
on to the table lock beyond closing the Session and then fail to drop
the database schema at the teardown of the test.
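The fix amounts to wrapping even a read-only assertion in an explicit transaction so the lock is released at commit. A compilable sketch of the pattern, with tiny stand-in classes in place of the org.hibernate Session/Transaction API (the names only mirror it, so the example runs on its own):

```java
// Stand-in for org.hibernate.Transaction, just so the pattern compiles.
class Transaction {
    private boolean active = true;
    public void commit() { active = false; } // releases locks held by the tx
    public boolean isActive() { return active; }
}

// Stand-in for org.hibernate.Session.
class Session {
    public Transaction beginTransaction() { return new Transaction(); }
    // stands in for createQuery("select count(*) ...").uniqueResult()
    public long countRows() { return 0L; }
    public void close() {}
}

public class PostTestAssertion {
    public static void main(String[] args) {
        Session session = new Session();
        Transaction tx = session.beginTransaction(); // this was the missing part
        long rows = session.countRows();             // the post-test assertion query
        tx.commit(); // without this, the lock outlived Session#close()
        session.close();
        System.out.println(tx.isActive()); // false
    }
}
```

Without the begin/commit pair, the read ran in an implicit transaction whose lock survived closing the Session, which is what blocked the schema drop.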
I'm wondering if this is expected, or if there's something in the new
transaction handling code which could be improved? It took me several
hours to figure this out; maybe I'm just not using ORM as frequently
as I once did :)
If it's this critical to have the transaction, maybe it should be mandatory?
And as a memo for next time, this is the query to figure out details
about locks on our testing db:
> SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa
>   ON pl.pid = psa.pid WHERE datname = 'testingdb';
You might remember that running a Sort on a full-text Query field always
required some specific care;
since the beginning of Hibernate Search the user had to make sure the
field was not tokenized, or tokenized but generating a single token.
This was a "soft requirement": if you didn't know, you'd get
inconsistent results but no error would be shown - after all, a Lucene
index was typically schema-less.
With Lucene 5, if you don't map your field specifically for sorting
purposes, you'll get a runtime exception at query time. By
"specifically for sorting", the requirement is that the *single token*
needs to be stored as a DocValue.
DocValues are useful for other purposes too; for example they are a
more efficient strategy to store our "id" field and I hope we'll soon
use that transparently. It's also a better replacement for all those
use cases which previously would rely on a FieldCache (which by the
way was used by the sorting code under the hood). For example, we
recently migrated the Spatial indexing to use DocValues, and we now
support serializing and writing of DocValues.
What we don't have is a way for end users to explicitly single out
which fields they want to be "DocValue encoded", and this really needs
to be added now, as the workaround we use to be able to run Sort
operations without it is killing our performance.
What should such annotations look like?
I don't like to expose such low-level details when, for most people,
it's just about sorting. Still, since DocValues are useful for other
purposes, having a "@Sortable" (or similarly named) annotation would be limiting.
DocValues themselves - as a concept - are fine, but even in Lucene's
history the exact name changed several times; I'm wondering if we should
stick to the (current) technical term or abstract a bit from it.
I'm not sure if this should be an extension of the @Field annotation, as
there are special restrictions implied in terms of analysis: are we
going to enforce a specific type of tokenizer, or simply take the
analysis option away?
Any nice suggestions on how this could look? This would become a
highly used public API.
The good news is that we'll be able to validate sort definitions.
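To give the discussion something concrete, here is one possible shape for such an annotation. The name @SortableField and its forField attribute are purely hypothetical, assumptions made for the sake of the sketch rather than a proposed final API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation marking a field for DocValue encoding so it
// can be used in Sort operations. Name and attributes are assumptions.
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.FIELD, ElementType.METHOD })
@interface SortableField {
    // which @Field this DocValue encoding applies to (hypothetical attribute)
    String forField() default "";
}

class AncientBook {
    @SortableField
    String title; // would be stored as a single-token DocValue
}

public class SortableFieldSketch {
    public static void main(String[] args) throws Exception {
        // The engine could discover the marker via reflection at bootstrap
        // and validate the sort definition against the field's analysis.
        boolean present = AncientBook.class.getDeclaredField("title")
                .isAnnotationPresent(SortableField.class);
        System.out.println(present); // true
    }
}
```

A shape like this keeps the DocValues terminology out of the user-facing API while still letting the bootstrap code validate sortability up front.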
As some of you know, SourceForge has had a severe distributed file system corruption, and they have been working on it for a full week. You can read their blog for regular updates: http://sourceforge.net/blog/
The concrete issue for us is that we cannot upload new files: Hibernate Validator and Hibernate ORM are now pending a release.
There are 4 options on the table.
Stay with SourceForge::
SourceForge will eventually reopen uploads; I imagine it might take from one to two weeks.
Their binary hosting support is relatively decent, and all of our download statistics are there.
Move to download.jboss.org::
JBoss has a facility to host binaries; WildFly, amongst others, uses it. We can ask them if they are happy with it.
It is not connected to the rest of the forum/CMS infra; it's a simple file upload AFAIK, so easily scriptable.
They also offer statistics, but how that works needs to be investigated.
Move to GitHub::
GitHub has a binary upload facility. I could only find a web-based approach (can it be done programmatically?).
They don't seem to have any statistics service, which is a big negative point.
Also, I don't trust GitHub anymore for their binary hosting. They had a version of it in the past that they scrapped with barely any notice. I'm not exactly willing to give them my trust again.
Move to BinTray::
Binary hosting is their lifeblood. People seem happy with them. It seems, however, that the statistics require a paid package instead of the free OSS tier.
I think we should try in the following order:
1. Be patient with SourceForge (but for how long?)
2. Go for download.jboss.org, and before that ask around about the process and stability of the infrastructure
3. Explore Bintray
4. GitHub (did I say that I no longer trust their binary hosting support?)
I have been putting a lot of TLC into the ORM documentation this weekend,
getting ready for 5.0 to go Final. To that end, I have put together a
proposal for these changes; it is attached. I'd like to get some
I am finished with the 4.2.20.Final release, except for uploading distributions due to problems at SourceForge. Artifacts have been successfully uploaded to nexus.
I will wait until Monday to send out an announcement in the hopes that I can upload the distributions to SourceForge by then.
There are a couple more bugfixes I'd like to get into 4.3.11.Final, so I am delaying that release until Wednesday, July 29.
With a proposed TM-level listener, we will have an SPI for notification
of when application threads associated with a JTA transaction become
disassociated from the transaction (tm.commit/rollback/suspend time).
Having this knowledge, a synchronization callback can determine
whether the persistence context should be cleared directly from the
Synchronization.afterCompletion(int) call or should be deferred until
the thread is disassociated from the JTA transaction.
This idea is based on a TM-level listener approach that Tom Jenkinson
suggested. Mike Musgrove has a "proof of concept" implementation of
the suggested changes. I did some testing with it to see if the
improvement helps with clearing entities that might still be in the
persistence context after a background tx timeout.
I'm wondering if, in the Hibernate ORM
Synchronization.afterCompletion(int status) implementation, in case of
tx rollback, we could defer the clearing of the Hibernate session to
be handled by the JtaPlatform. This could be set up at
EntityManager.joinTransaction() time (if a new property like
"hibernate.transaction.defer_clear_session" is true). Perhaps via a
JtaPlatform.joinTransaction(EntityManager) registration call?
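The deferral described above can be sketched as follows. This is only a proof-of-concept shape, not the actual Hibernate or Narayana code: the Synchronization interface below is a local stand-in for javax.transaction.Synchronization (so the sketch is self-contained), STATUS_ROLLEDBACK matches javax.transaction.Status, and the listener callback name is an assumption:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DeferredClearSketch {
    static final int STATUS_ROLLEDBACK = 4; // javax.transaction.Status.STATUS_ROLLEDBACK

    // Local stand-in for javax.transaction.Synchronization.
    interface Synchronization {
        void beforeCompletion();
        void afterCompletion(int status);
    }

    // Stand-in for the persistence context to be cleared.
    static class SessionHolder {
        final AtomicBoolean cleared = new AtomicBoolean(false);
        void clear() { cleared.set(true); }
    }

    // On rollback, do not clear from afterCompletion (which may run on the
    // reaper thread after a background tx timeout); flag the session so the
    // JtaPlatform can clear it once the application thread is disassociated
    // from the transaction.
    static class DeferredClearSynchronization implements Synchronization {
        private final SessionHolder session;
        private final boolean deferClear; // "hibernate.transaction.defer_clear_session"
        private volatile boolean clearPending = false;

        DeferredClearSynchronization(SessionHolder session, boolean deferClear) {
            this.session = session;
            this.deferClear = deferClear;
        }
        public void beforeCompletion() {}
        public void afterCompletion(int status) {
            if (status == STATUS_ROLLEDBACK) {
                if (deferClear) { clearPending = true; } else { session.clear(); }
            }
        }
        // hypothetical callback invoked by the proposed TM-level listener
        void onTransactionDisassociated() {
            if (clearPending) { session.clear(); clearPending = false; }
        }
    }

    public static void main(String[] args) {
        SessionHolder session = new SessionHolder();
        DeferredClearSynchronization sync = new DeferredClearSynchronization(session, true);
        sync.afterCompletion(STATUS_ROLLEDBACK);
        System.out.println(session.cleared.get()); // false: clear was deferred
        sync.onTransactionDisassociated();
        System.out.println(session.cleared.get()); // true: cleared at disassociation
    }
}
```

The point of the two-step shape is that the session is never cleared on a thread that is still associated with (or racing against) the rolled-back transaction.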