Currently Oracle supports database versions from 10.1 to 11.2. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000). Oracle keeps those column types only for
backward compatibility.
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call. I
don't expect a major performance difference when data is streamed in
a separate call. oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
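To illustrate the point above (plain java.io, no Oracle driver involved; the class and method names are just for this sketch): whether a driver returns a byte[] directly or hands back a stream, a stream-based accessor still has to drain the stream into the same byte[] before the application sees the data, so the amount of data moved is identical.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class StreamToBytes {

    // Drain an InputStream into a byte[], the way a stream-based driver
    // accessor ultimately must before handing data to the application.
    static byte[] readFully(InputStream in) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] original = "LONG RAW payload".getBytes();
        byte[] copy = readFully(new ByteArrayInputStream(original));
        // Same bytes either way; only the number of driver calls differs.
        System.out.println(Arrays.equals(original, copy)); // true
    }
}
```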
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more an issue of Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
 "Getting a LONG RAW Data Column with getBytes"
Strong Liu wrote:
> I think Oracle 11g is the only DB version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes, and deprecate all other existing Oracle dialects? This won't affect users' applications.
> Strong Liu <stliu(a)hibernate.org>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In , I am seeing the following type mappings:
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>> From this standpoint, the current settings are appropriate.
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
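For reference, applied inside a dialect the registrations above would look roughly like this (a sketch against the Hibernate Dialect API; the subclass name is hypothetical and not an official Hibernate class):

```java
import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

// Hypothetical dialect mapping LONGVARCHAR/LONGVARBINARY to LOBs,
// as proposed in the registrations above.
public class OracleLobDialect extends Oracle10gDialect {
    public OracleLobDialect() {
        super();
        registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
        registerColumnType( Types.VARBINARY, "blob" );
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
        registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
        registerColumnType( Types.VARCHAR, "clob" );
    }
}
```

An application would then select it via the hibernate.dialect property in its configuration.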
>>>  Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>>  Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>>  Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>> Welcome Community!
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> Gail Badner wrote
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version 10.2.0.4
>>>> and later). This bug appears when LONG or LONG RAW columns are accessed
>>>> not as the first or last column while processing a SQL statement.
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB, BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>> What is your opinion?
>>>> Lukasz Antoniak
>>>> hibernate-dev mailing list
>>> Is it possible to disable prepared statement caching for batched fetching, so I end up with a single query in the < default_batch_fetch_size case only, instead of the
>>> fixed-size batch loading Hibernate does by default?
> I think the main reason for no feedback so far, is that nobody was able to understand this sentence.
> Usually 'prepared statement caching' is a synonym to 'prepared statement pooling' and is something which has to be provided by a connection-pool (or a jdbc-driver) and thus
> Hibernate does actually not implement any prepared statement cache/pooling.
> Can you please explain what you intend under 'prepared statement caching'?
> Can you also please try to better explain the second part of your sentence?
Sorry for being that cryptic, I will try to rephrase it:
When Hibernate does batch-fetching, it generates PreparedStatements
for certain batch sizes - for a batch_size of 50, the prepared
statements for batch-sizes will have the following sizes:
[1,2,3,4,5,6,7,8,9,10,12,25,50]. When e.g. a batch of size 13 should
be fetched, then because of the fixed sizes of the prepared statements, 3
queries are issued for batch-fetching, although 13 <= 50. In this case
the 3 batches would be of the sizes 13 = 8 + 4 + 1.
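The multi-statement behaviour can be sketched with a greedy split over the fixed sizes. This is a simplification for illustration only, not Hibernate's actual code; the exact decomposition differs between versions (the mail above observed 8 + 4 + 1, while a plain greedy split yields 12 + 1), but either way a batch of 13 needs more than one round-trip because no prepared statement of size 13 exists.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {

    // Fixed statement sizes Hibernate prepares for batch_size = 50.
    static final int[] SIZES = {50, 25, 12, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1};

    // Greedy decomposition of a requested batch into the available sizes.
    // A simplification of Hibernate's internal logic, for illustration only.
    static List<Integer> split(int n) {
        List<Integer> result = new ArrayList<>();
        while (n > 0) {
            for (int size : SIZES) {
                if (size <= n) {
                    result.add(size);
                    n -= size;
                    break;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // A batch of 13 ids cannot be fetched in one round-trip,
        // even though 13 <= 50: there is no statement of size 13.
        System.out.println(split(13)); // prints [12, 1]
    }
}
```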
In a latency-bound (between DB and application) environment, this
severely hampers response time - instead of a single round-trip to do
the batched fetch, Hibernate requires 3.
(subselect can't be used in my case, because my queries are already
rather complex, and the added complexity confuses the DBs query
planner too much)
What I did in this case (only for integer PKs) is to pad up to the
next batch size with a non-existent PK.
So, for the example mentioned above, I can use the PreparedStatement
with size 25 and insert padding from 14-25, which makes the query
slightly less efficient but avoids 2 additional round-trips.
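The padding trick described above can be sketched as follows (assuming integer PKs and a sentinel value that matches no row, both assumptions carried over from the mail; the class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchPadding {

    // Fixed statement sizes prepared for batch_size = 50, ascending.
    static final int[] SIZES = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 25, 50};

    // Pad the id list up to the next available statement size with a
    // sentinel PK that is guaranteed not to exist in the table.
    static List<Integer> pad(List<Integer> ids, int sentinel) {
        int target = ids.size();
        for (int size : SIZES) {
            if (size >= ids.size()) {
                target = size;
                break;
            }
        }
        List<Integer> padded = new ArrayList<>(ids);
        while (padded.size() < target) {
            padded.add(sentinel);
        }
        return padded;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= 13; i++) ids.add(i);
        // 13 ids are padded up to the next statement size, 25,
        // so a single prepared statement (one round-trip) suffices.
        System.out.println(pad(ids, -1).size()); // prints 25
    }
}
```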
Not sure if y'all are tracking what is going on at SourceForge. They
have done a redesign of the site using a new hosting software they
developed called Allura. We will need to upgrade to use Allura at some
point. The upgrade has a few ramifications that I wanted to give
everyone some time to handle.
First is our hosted MediaWiki instance. As far as I know, I am the only
one that uses that, so I don't think this affects anyone other than me.
In fact all hosted apps will be getting decommissioned. We only have
MediaWiki, WordPress and dotProject enabled to date. To the best of my
knowledge, only MediaWiki ever got used.
Source Code. We still have the original CVS repo accessible for read.
As I understand it, that will remain true and use the same urls. We
moved directly from SF CVS to JBoss SVN, so no SF SVN to worry about
that I know of. For Git, we did create some Git repos. Those will be
inaccessible after the upgrade; so if there is anything y'all want in
any of those Git repos, you should pull/clone it. Let me know.
I do not know of any other ramifications. Let me know if you think of
others that need to be addressed.
If you are unfamiliar and want to see what Allura offers, here is the
I am trying to build Hibernate from the source code as you
explained here, but I couldn't.
When I issue gradlew clean build -x test
the following exception appeared:
* What went wrong:
Execution failed for task ':hibernate-entitymanager:compileJava'.
> Compile failed; see the compiler error output for details.
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output.
Not sure if the expectation at this point is that all tests on the
metamodel branch pass when hibernate.test.new_metadata_mappings is set
to true or not. That was supposed to be the idea with the new
@FailureExpectedWithNewMetamodel annotation which I see is in place now.
But I am still seeing failures. Yes I have local changes here, but
there is no way my changes can be causing these:
The failures in all cases are from an attempt to bind the discriminator from
annotations. As far as I can tell, the cause is that annotation sources
are returning an empty string for that column name rather than null.
But either way, nothing to do with my changes. So I am going to go
ahead and push my changes.
Hi Steve et al,
I have included the API updates we discussed recently in the latest
Jason T. Greene
JBoss AS Lead / EAP Platform Architect
JBoss, a division of Red Hat
On Jul 27, 2012, at 2:25 PM, Steve Ebersole wrote:
> This is all changing as we had discussed the other day.
Right, that's why I brought it up.
> If for now you want to change Exporter to be public for what you need, go for it.
I thought it would be nice to test/assert against the generated DDL script. In a true unit test sense I probably should
use the model classes and make sure that the constraints get applied by the Bean Validation TypeSafeActivator.
On the other hand I was surprised that it was not possible to programmatically get hold of the DDL statements from
SchemaExport. That's where I came across the Exporter interface.
> However, as for the setter, keep in mind there are actually multiple exporters in play.
In fact there is a whole list and I am suggesting to add just another one to the list.
> We can discuss the state of the changes so far locally on my machine if you want.
Sounds good. Maybe you can push things to GitHub?
> Why aren't you just changing the "relational model" (adding check constraints, etc) for the exports to pick up?
That's what I am doing. I just wanted to use the SchemaExporter for testing.
> And no, that should be using System.out. Its output it sent to stdout specifically for purposes of command line piping/redirection.
Ok. Good to know.
On Jul 27, 2012, at 2:29 PM, Eric Dalquist wrote:
> Not sure if this is an official way but if you have access to the Configuration and Dialect objects you can do:
> final String createSQL = configuration.generateSchemaCreationScript(dialect);
Ahh, I didn't see that. On the other hand, I don't want to use Configuration, because in the long run the tests have to
work with the new metamodel.
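For completeness, the approach Eric describes would look roughly like this (a sketch against the Hibernate 4.x Configuration API; the dialect choice and class name are illustrative, and as noted above this path won't survive the move to the new metamodel):

```java
import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.H2Dialect;

// Hypothetical test helper that dumps the generated DDL; not part of Hibernate.
public class SchemaScriptDump {
    public static void main(String[] args) {
        Configuration configuration = new Configuration();
        // configuration.addAnnotatedClass(SomeEntity.class); // as appropriate
        String[] createSQL = configuration.generateSchemaCreationScript(new H2Dialect());
        for (String statement : createSQL) {
            System.out.println(statement);
        }
    }
}
```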