Currently Oracle supports database versions from 10.1 to 11.2. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000); Oracle keeps those column types only for
backward compatibility.
I tried the following scenario (Oracle 10gR2):
1. Create the schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance: ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call. I
don't expect a major performance difference when data is streamed in
another call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird, though, to access an
Oracle 10g database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
 "Getting a LONG RAW Data Column with getBytes"
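For the record, the first option would amount to little more than overriding the type registrations in the dialect. The sketch below is illustrative only (the class name is made up; the registerColumnType calls mirror the ones Gail suggested later in this thread):

```java
// Sketch of option 1: an Oracle dialect that maps LONGVARCHAR/LONGVARBINARY
// to clob/blob instead of long/long raw. Class name is illustrative.
import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

public class LobPreferringOracle10gDialect extends Oracle10gDialect {
    public LobPreferringOracle10gDialect() {
        super();
        // Small binary/char values keep their native types...
        registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
        registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
        // ...while the unbounded variants go to LOBs rather than LONG/LONG RAW.
        registerColumnType( Types.VARBINARY, "blob" );
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
        registerColumnType( Types.VARCHAR, "clob" );
    }
}
```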
Strong Liu wrote:
> I think Oracle 11g is the only DB version supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes, and deprecate all other existing Oracle dialects? This won't affect users' apps.
> Strong Liu <stliu(a)hibernate.org>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In , I am seeing the following type mappings:
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>> From this standpoint, the current settings are appropriate.
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>  Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>>  Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>>  Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>> Welcome Community!
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> Gail Badner wrote
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version 10.2.0.4
>>>> and later). This bug appears when LONG or LONG RAW columns are
>>>> not placed first or last in the processed SQL statement.
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important reason from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB; BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> access is only sequential.
>>>> - LONG and LONG RAW are deprecated.
>>>> What is your opinion?
>>>> Lukasz Antoniak
>>>> hibernate-dev mailing list
I'm working at the integration between OGM and Neo4j.
Neo4j is fully transactional and requires a transaction to be open
before any operation is executed on the DB.
To integrate this mechanism with OGM I've created the class
Neo4jJtaPlatform.java that extends AbstractJtaPlatform and it's
registered using the configuration parameter
If I'm correct, while this solution works, it requires the
application to manage transactions using the Neo4j transaction
manager. Are there alternatives to this approach?
The related pull request can be found at
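For context, such a class essentially delegates JTA lookup to Neo4j. The following is only a sketch: it assumes the Hibernate 4.x AbstractJtaPlatform contract and that the Neo4j kernel exposes its javax.transaction.TransactionManager via GraphDatabaseAPI#getTxManager(), as the 1.x line did; the constructor shape is illustrative:

```java
// Sketch only: wires Neo4j's internal JTA TransactionManager into Hibernate.
// Assumes a Neo4j 1.x-style GraphDatabaseAPI#getTxManager().
import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;
import org.hibernate.service.jta.platform.internal.AbstractJtaPlatform;
import org.neo4j.kernel.GraphDatabaseAPI;

public class Neo4jJtaPlatform extends AbstractJtaPlatform {
    private final GraphDatabaseAPI graphDb;

    public Neo4jJtaPlatform(GraphDatabaseAPI graphDb) {
        this.graphDb = graphDb;
    }

    @Override
    protected TransactionManager locateTransactionManager() {
        // Hand Hibernate the transaction manager Neo4j itself uses.
        return graphDb.getTxManager();
    }

    @Override
    protected UserTransaction locateUserTransaction() {
        // This sketch only exposes the TransactionManager side.
        return null;
    }
}
```

This is why the application ends up driving transactions through Neo4j's manager: Hibernate is simply handed whatever manager the graph database already runs.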
Have encryption capabilities in ORM ever been considered? There are a few 3rd party libraries, like Jasypt, providing UserTypes to handle encryption behind the scenes. Even basic support for an @Encrypt annotation, hashing, and configurable salt values would be helpful.
Red Hat Software Engineer, Hibernate
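To illustrate what such a UserType does under the covers: the column transformation is ordinary symmetric encryption keyed off a password and a configurable salt. A minimal, stdlib-only sketch (class name and parameters are illustrative, and ECB mode is used only to keep the example short and deterministic; it is not a recommendation):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical helper showing the kind of transformation a Jasypt-style
// UserType applies when reading/writing an encrypted column.
public class ColumnCrypto {
    private final SecretKeySpec key;

    public ColumnCrypto(char[] password, byte[] salt) throws Exception {
        // Derive an AES key from the password and salt via PBKDF2.
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] raw = f.generateSecret(new PBEKeySpec(password, salt, 10000, 128)).getEncoded();
        this.key = new SecretKeySpec(raw, "AES");
    }

    public String encrypt(String plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        // Base64 so the ciphertext can live in a character column.
        return Base64.getEncoder().encodeToString(c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
    }

    public String decrypt(String stored) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key);
        return new String(c.doFinal(Base64.getDecoder().decode(stored)), StandardCharsets.UTF_8);
    }
}
```

A UserType would call encrypt() in nullSafeSet() and decrypt() in nullSafeGet(), keeping the entity property itself plaintext.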
I believe something like the work I had in mind for
is the best option for implementing JPA 2.1 "entity graph" support into
Hibernate. To that end I have been working on that proposal to see how
quickly that could come about.
Essentially the phase we need to tie in "entity graph" support is a
redefinition of the current JoinWalkers. The approach I took, at a high
level, is that of the visitor pattern.
The work currently lives at
https://github.com/sebersole/hibernate-loader-redesign and I'd appreciate
any extra eyes.
Longer term I'd like the stuff that lives in the
org.hibernate.loader.walking package to be part of the run time
(persister) metadata, but for the moment I just use wrapping and
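The visitor approach described above can be sketched generically as follows (all names here are illustrative, not the actual classes in the repository): a walker drives visitation over persister-level metadata, and a strategy reacts to each node it encounters.

```java
// Illustrative visitor-pattern sketch: the walker owns traversal order,
// strategies only observe the nodes.
interface AssociationVisitationStrategy {
    void startingEntity(String entityName);
    void visitAttribute(String attributeName);
    void finishingEntity(String entityName);
}

final class MetadataWalker {
    static void visit(AssociationVisitationStrategy strategy,
                      String entityName, java.util.List<String> attributes) {
        strategy.startingEntity(entityName);
        for (String attr : attributes) {
            strategy.visitAttribute(attr);
        }
        strategy.finishingEntity(entityName);
    }
}
```

A LoadPlan builder, a fetch-graph applier, or an "entity graph" interpreter would then just be different strategy implementations over the same walk.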
when trying to use
in Hibernate Validator I get an UnsupportedClassVersionError.
It seems the API is compiled with JDK 7, which causes problems in the HV build.
Is the use of Java 7 intentional for the JPA API? I don't think it is required, right?
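One way to confirm this is to read the class-file major version directly from a class in the jar (51 means the classes require Java 7, 50 means Java 6). The jar and class names in the comment are only examples:

```shell
# Print a class file's major version: the big-endian 16-bit value at offset 6.
# 50 = Java 6, 51 = Java 7.
class_major() {
  od -An -j6 -N2 -t u1 "$1" | awk '{ print $1 * 256 + $2 }'
}

# e.g., on a class extracted from the JPA API jar (names illustrative):
#   unzip -p hibernate-jpa-2.1-api.jar javax/persistence/Entity.class > Entity.class
#   class_major Entity.class
```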
Just to let you know that I recently created 3 JIRA issues:
- OGM-270 [MongoDB] Expose the WriteConcern settings to the configuration
- OGM-271 [Module] Create dedicated MongoDB module file for AS7
- OGM-272 [Core] Fix BasicGridExtractor logging message
I was wondering: do we need to test all the new WriteConcern settings?
If the answer is yes, do we test all values with all "classic" tests, like
we did for the association strategies, or do we just check that the value is
properly set, as in DatastoreInitializationTest?
Second question: for the doc we have 2 options:
- the first is to put all possible values into the configuration properties
table (see here https://gist.github.com/gscheibel/5012033);
- the second is to add another section and a link into the properties table.
Have a nice day