Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise, the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when data is streamed in
another call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's
JDBC implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
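For illustration, a minimal sketch of what the first option could look like,
reusing the registrations Gail suggested (this is only a sketch of the idea,
not the actual patch; where exactly these calls end up in Oracle10gDialect
may differ):

import java.sql.Types;

public class Oracle10gDialect extends Oracle9iDialect {
    public Oracle10gDialect() {
        super();
        // map LONGVARCHAR / LONGVARBINARY back to LOBs
        // (currently they are registered as "long" / "long raw")
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}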
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only DB version supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? Then new
>> applications/databases could perhaps use that, while existing applications
>> might keep using LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when LONG or LONG RAW columns are
>>>> accessed neither first nor last while processing an SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW: up to 2 GB; BLOB / CLOB: up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
HSEARCH Preparations for 4.0.0 Final
by Emmanuel Bernard
## Timeline and work to do
Hibernate Core goes final next week if things go as planned. So it's time for us to gear towards a CR2 and release it right after Core goes final. I have done some JIRA cleanup and everything I think is important is tagged as fix version 4.0.0.CR2. Please have a look, pick up issues and fix them.
Also have a look at the issues marked as 4.0.0.Final; I don't think any of them is critical or realistic to put in this release. See my rant below about managing JIRA. Once you have looked at them, I will nuke their fix version field into oblivion.
4.0 will target Infinispan 5.0. A week or two from CR, we will go final.
4.1 will be a short cycle, with the final version targeted for mid-December. The idea is to align with Infinispan 5.1 and add whatever useful features or fixes we think are important.
## Managing JIRA
JIRA is not exactly a list to Santa Claus. Let me rephrase: JIRA is not a list to Santa Claus. You can't put a version number on a JIRA issue and hope things will magically be fixed in that version. The rule of thumb is simple:
1. If you think you will do it, set the version number.
2. If you know someone that will likely do it, put a version number.
3. If it's vitally important that this be fixed in the next version, see rule #1.
Otherwise, don't put a version number without asking the project lead.
The rule is a bit different for the project lead as he has to draw the big picture of what a release will contain and assigning a number is the easiest solution. A corollary is that moving a problem from version n to version n+1 is useless.
Today we ended up with 60 issues that ought to be resolved in less than a week. That obviously is beyond our capacity.
Of course these rules are not hard-enforced, but we definitely need to shift back to a more conservative approach to version assignment.
By the way, I don't know if you have followed the AS 7 team's rant on JIRA and actionable items. While I'm not 100% aligned with their rules, I am sympathetic to the idea of a managed flow of JIRA issues. I'm not sure how this can be applied to (or at least brought closer to) Search, Validator and OGM, but I'm open to ideas.
HHH-1123 - Cannot put more than 1000 elements in a InExpression
by Łukasz Antoniak
Hi all!
Recently I had a closer look at the HHH-1123 issue. This bug affects both the
Criteria API and HQL. I have introduced a
Dialect#maximumInExpressionElements() method, which returns the maximum
number of elements allowed in a single SQL IN clause, or null, treated as
infinite. The change to InExpression was very easy. However, fixing this
bug for HQL queries requires modification of ParameterMetadata
(namedDescriptorMap cannot remain unmodifiable), as well as
AbstractQueryImpl (queryString). As I don't see any other solution, I
wanted to ask you guys for suggestions. Is it the only possible way of
fixing this issue? Finally, shall we really fix this at all? This is a DB
vendor limitation, but 40 users gave their votes for it.
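To illustrate the Criteria side, the idea is roughly the following (just a
sketch, not the actual InExpression code; placeholders() is a made-up helper
that renders the right number of '?' markers):

// Split the values into chunks no larger than the dialect limit and OR
// the resulting IN lists together.
Integer max = dialect.maximumInExpressionElements(); // null means unlimited
if ( max == null || values.length <= max ) {
    return columnName + " in (" + placeholders( values.length ) + ")";
}
StringBuilder fragment = new StringBuilder( "(" );
for ( int i = 0; i < values.length; i += max ) {
    if ( i > 0 ) {
        fragment.append( " or " );
    }
    int chunkSize = Math.min( max, values.length - i );
    fragment.append( columnName )
            .append( " in (" )
            .append( placeholders( chunkSize ) )
            .append( ")" );
}
return fragment.append( ")" ).toString();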
Regards,
Lukasz Antoniak
Hibernate Filters and EntityManager.find
by Jason Clawson
Hi everyone. I know that Hibernate session filters do not apply to
find/load operations because the assumption was made that if you know the
ID of the entity you wish to load, why tack on the extra WHERE condition.
Please let me explain my use case for filters and illustrate why this
assumption is incorrect.
We use filters to do data separation; for example, separating one
customer's data from another's. We also have other filters that enforce
finer-grained object visibility rules. But let's take a look at customer
data separation, since it's the easiest to understand. The advantage of
doing customer data separation this way is that developers don't need to
think about it. It just works, and it works *automatically*. The problem
comes in when you want to do something like em.find(User.class, 1). No
WHERE clause is attached to the SQL statement. Yes, I know the ID, but I
really want to tack "AND customerId = 3" onto the WHERE clause to make
sure that someone isn't fuzzing the ID parameter to try and get at another
customer's data.
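To make it concrete, the mapping and usage look roughly like this (entity and
property names are made up for the example):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.ParamDef;

@Entity
@FilterDef(name = "customerFilter",
           parameters = @ParamDef(name = "customerId", type = "long"))
@Filter(name = "customerFilter", condition = "customerId = :customerId")
public class User {
    @Id private Long id;
    private long customerId;
    // ...
}

// enabled once per request:
session.enableFilter( "customerFilter" ).setParameter( "customerId", currentCustomerId );

// HQL/Criteria queries get the filter condition appended automatically:
List users = session.createQuery( "from User" ).list();
// ...but find() does not, so it bypasses the customer separation:
User user = em.find( User.class, someId );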
The workaround we have is another mechanism that validates the entity in a
PostLoad entity listener and throws an exception if the customerId != the
request's customerId. This is "ok" for the simple example I laid out here.
However, we now have many more filters that implement complex visibility
rules based on subselects and Oracle CONNECT BY clauses, which cannot be
implemented using a simple equality check in Java. The best, most
performant solution is to be able to apply the filter clause to the
EntityManager.find operation.
What is your take on this?
Thanks,
Jason Clawson
New Search QA job
by Sanne Grinovero
Hi Strong,
I've created this new job on Jenkins:
http://hudson.qa.jboss.com/hudson/job/hibernate-search-master-dbmatrix-co...
Usually we have CI testing Hibernate Search master, but it tests against the
latest Hibernate Core release we have defined as a dependency; this new job
specifically overrides the version of Hibernate Core to
4.0.0-SNAPSHOT.
So it verifies that:
- the Search matrix tests extend Core's matrix tests with more tests
- improvements done in Core improve the Search matrix tests as well
- Core changes won't break Search
We can remove this later on if you think it's overkill; for now it's
very useful for us to start fixing the Search matrix tests, which need
the fixes from the latest snapshots of Core.
Sanne
any suggestion about how to fix HHH-5992?
by Strong Liu
T_User
id username
1 stliu
2 gail
For example, take the native SQL query "select v1.username, v2.username from T_User v1, T_User v2 where v1.id = '1' and v2.id = '2'".
The query returns ["stliu", "stliu"] instead of the expected ["stliu", "gail"].
This is because Hibernate uses the column alias (in this case "username" for both columns) to read the result from the ResultSet, and since both results in the ResultSet are keyed by "username", the same value is returned twice.
Should we generate an alias for each column, like HQL does?
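For reference, the workaround today is to alias the columns explicitly in the
SQL and register them as scalars, something like the following (the aliases
uname1 / uname2 are made up):

List rows = session.createSQLQuery(
        "select v1.username as uname1, v2.username as uname2 "
        + "from T_User v1, T_User v2 where v1.id = '1' and v2.id = '2'" )
    .addScalar( "uname1", StringType.INSTANCE )   // org.hibernate.type.StringType
    .addScalar( "uname2", StringType.INSTANCE )
    .list();
// rows.get(0) is now ["stliu", "gail"] as expected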
-------------------------
Best Regards,
Strong Liu <stliu at hibernate.org>
http://about.me/stliu/bio
org.hibernate.ejb.test.lob.BlobTest fails on Derby
by Christian Bauer
Debugging a BLOB issue on H2 (different issue than this) led me to try Derby, which fails even earlier.
This is the same test as org.hibernate.ejb.test.lob.BlobTest; if someone could run it on Derby, you should see the exception. I've copied the test 1:1 into my environment and got the same exception.
The sql.BlobTypeDescriptor.STREAM_BINDING option is used by default for an @Lob java.sql.Blob property. This option "unwraps" the value by calling java.BlobTypeDescriptor.unwrap():
return (X) new BinaryStreamImpl( DataHelper.extractBytes( value.getBinaryStream() ) );
This crashes on the call to value.getBinaryStream():
Caused by: java.sql.SQLException: You cannot invoke other java.sql.Clob/java.sql.Blob methods after calling the free() method or after the Blob/Clob's transaction has been committed or rolled back.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedBlob.checkValidity(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedBlob.getBinaryStream(Unknown Source)
at org.hibernate.type.descriptor.java.BlobTypeDescriptor.unwrap(BlobTypeDescriptor.java:123)
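For what it's worth, the same behavior can be reproduced with plain JDBC
against Derby (a minimal sketch; the table and column names are made up):

Connection con = DriverManager.getConnection( "jdbc:derby:memory:test;create=true" );
con.setAutoCommit( false );
ResultSet rs = con.createStatement().executeQuery( "select data from blob_entity" );
rs.next();
Blob blob = rs.getBlob( 1 );
con.commit();
// Derby invalidates the Blob once its transaction ends, which is exactly
// what the deferred getBinaryStream() call in unwrap() runs into:
blob.getBinaryStream(); // throws the SQLException quoted above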
Bug report?