Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The
LONG and LONG RAW data types have been deprecated since versions 8 and
8i (released before September 2000) [2]. Oracle keeps those column
types only for backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create the schema with "hibernate.hbm2ddl.auto" set to "create". The
LONG column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is instead
streamed in a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes
also fetches the data by reading the stream.
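For clarity, a minimal JDBC sketch of the one-shot extraction path
(table and column names are made up for illustration; a streaming
variant would call getBinaryStream on the same column instead):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LongRawRead {
        // One-shot extraction, as ImageType does via ResultSet.getBytes().
        // A streaming variant would call rs.getBinaryStream("payload");
        // with the Oracle driver both paths end up reading the same
        // underlying stream.
        static byte[] readPayload(Connection connection, long id) throws SQLException {
            PreparedStatement ps = connection.prepareStatement(
                    "SELECT payload FROM document WHERE id = ?");
            try {
                ps.setLong(1, id);
                ResultSet rs = ps.executeQuery();
                try {
                    return rs.next() ? rs.getBytes("payload") : null;
                } finally {
                    rs.close();
                }
            } finally {
                ps.close();
            }
        }
    }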
The bug in reading LONG columns affects Oracle JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide
to 4.0 and update the "5.2.2. Basic value types" chapter of the
Hibernate documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle
10g database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this
issue with @SkipForDialect (and maybe developing CLOB/BLOB variants of
them guarded by @RequiresDialect). Hibernate behaves correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), so this is more of an issue in Oracle's
JDBC implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
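For concreteness, a minimal sketch of what the first option could look
like (the class name is made up; the same registerColumnType calls could
equally be made directly in Oracle10gDialect's constructor):

    import java.sql.Types;
    import org.hibernate.dialect.Oracle10gDialect;

    // Illustrative only: shows the shape of the change, not its final home.
    public class LobOracle10gDialect extends Oracle10gDialect {
        public LobOracle10gDialect() {
            super();
            // Map LONGVARCHAR / LONGVARBINARY back to LOBs instead of
            // "long" / "long raw":
            registerColumnType( Types.LONGVARCHAR, "clob" );
            registerColumnType( Types.LONGVARBINARY, "blob" );
        }
    }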
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu@hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak" <lukasz.antoniak@gmail.com>
>>>> To: hibernate-dev@lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the
>>>> HHH-6726 JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column
>>>> is accessed neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for
>>>> doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important reason from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
mutable versus immutable natural keys
by Steve Ebersole
One of the enhancements I want to get into 4.1 is HHH-2879 / HHH-2896
regarding adding an actual API for entity loading by natural key.
However, I think at the same time we should be more explicit and
consistent about what it means when we say that a mapped natural key is
mutable or immutable. Basically, is "immutable" a hint from the user
that the values cannot change (so that we can perform certain
optimizations)? Or is "immutable" a mandate that we have to verify the
natural key has not changed, either through Hibernate or "behind" us?
Today we kind of have a blend of those two.
Personally I think we should take it as a hint from the user that the
values will not change. WDYT?
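For illustration, the mapping side of the question looks like this (the
loading call in the comment is only a possible shape for the HHH-2879
API, not a settled signature):

    import javax.persistence.Entity;
    import javax.persistence.Id;

    import org.hibernate.annotations.NaturalId;

    @Entity
    public class User {
        @Id
        private Long id;

        // Is mutable = false a hint that the value never changes, or a
        // mandate that Hibernate must verify it has not changed?
        @NaturalId(mutable = false)
        private String email;

        // Possible shape of the proposed loading API (illustrative only):
        // User user = (User) session.byNaturalId( User.class )
        //         .using( "email", "someone@example.org" ).load();
    }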
--
steve@hibernate.org
http://hibernate.org
[HSEARCH] About HSEARCH-917 or DSL API and query parser
by Guillaume Smet
Hi,
I opened https://hibernate.onjira.com/browse/HSEARCH-917 a few months
ago about something that really prevents us from using the DSL API in
a lot of cases.
I explained why in the JIRA issue. While I'm OK to do the ground work
of coding something and adding tests for it, I think it might be a
good idea to discuss it before.
Basically, the problem is that when I search for XXXX-AAAA-HAGYU-19910
using an analyzer with the WordDelimiterFilterFactory filter, the DSL
API searches for "XXXX" OR "AAAA" OR "HAGYU" OR "19910" (yes, OR). In
this case, the Lucene QueryParser is designed to look for "XXXX" AND
"AAAA" AND "HAGYU" AND "19910".
The underlying problem is that in
ConnectedMultiFieldsTermQueryBuilder, we don't use the QueryParser to
build the Lucene query but a getAllTermsFromText() method, which uses
the analyzer to get all the terms and then builds an OR query from them.
You can also observe the problem if you search for more than one word.
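For reference, a simplified sketch of the OR query the DSL ends up with
(field name and terms are placeholders, Lucene 3.x-era API):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.TermQuery;

    public class OrQueryExample {
        public static void main(String[] args) {
            // Simplified view of what getAllTermsFromText() leads to:
            // one SHOULD clause per analyzed term, i.e. an OR query.
            BooleanQuery query = new BooleanQuery();
            for (String term : new String[] { "xxxx", "aaaa", "hagyu", "19910" }) {
                query.add(new TermQuery(new Term("reference", term)),
                        BooleanClause.Occur.SHOULD);
            }
            System.out.println(query); // reference:xxxx reference:aaaa ...
        }
    }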
I think it's plain wrong in the case of a standard search and it
should be fixed by using the Lucene QueryParser to build the query.
The only problem I see with using the Lucene query parser is that it
doesn't pass the text through the analyzer for a fuzzy or wildcard
search (but we have a special case for wildcard so I think it's
already working this way with the current code). I'm not sure it's
really a problem, considering that it's a well-known Lucene behaviour.
But it's probably why it's done that way (maybe someone who knows the
history can explain).
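To illustrate the proposed fix, a small sketch using the Lucene 3.x
QueryParser (StandardAnalyzer stands in for the real analyzer chain with
WordDelimiterFilterFactory, and "reference" is a made-up field name):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.ParseException;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class AndQueryExample {
        public static void main(String[] args) throws ParseException {
            QueryParser parser = new QueryParser( Version.LUCENE_31,
                    "reference", new StandardAnalyzer( Version.LUCENE_31 ) );
            // Make every term mandatory (AND) instead of optional (OR):
            parser.setDefaultOperator( QueryParser.AND_OPERATOR );
            Query query = parser.parse( "XXXX-AAAA-HAGYU-19910" );
            System.out.println( query );
        }
    }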
It would be cool to discuss this problem and find an acceptable solution.
Have a nice day.
--
Guillaume
[HSEARCH] Lucene lock problems with Hibernate Search 4.0
by Guillaume Smet
Hi,
After our upgrade to 4.0 (from 3.4.1), we've started to have a lot of
lock errors on our development boxes. I thought it was due to brutal
kills of the JVM (typically Terminate in Eclipse), but we also see
them in our CI environment on a project where every test passes.
The stack traces look like this:
[2011-12-28 12:17:55,963] ERROR - LuceneBackendQueueTask -
HSEARCH000072: Couldn't open the IndexWriter because of previous
error: operation skipped, index ouf of sync!
[2011-12-28 12:17:56,974] ERROR - LogErrorHandler -
HSEARCH000058: Exception occurred
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out: SimpleFSLock@/data/services/test/data/helios/lucene/fr.openwide.helios.core.business.contract.model.Company/write.lock
Primary Failure:
Entity fr.openwide.helios.core.business.contract.model.Company Id 1
Work Type org.hibernate.search.backend.DeleteLuceneWork
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out: SimpleFSLock@/data/services/test/data/helios/lucene/fr.openwide.helios.core.business.contract.model.Company/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1115)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:125)
at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:100)
at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:114)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:101)
at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:69)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Does it ring a bell? Does anyone have an idea where I should start
investigating?
As you can imagine, it's pretty annoying (especially when it's a
project with 100k entities and we need to reindex after this error to
sync the indexes again).
I never saw this problem prior to 4.0 (and we have quite a lot of
applications in production which use Hibernate Search).
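In case it helps others hitting this: when the lock really is stale
(e.g. left behind by a killed JVM), it can be cleared with Lucene's own
API before reopening. This is only a recovery sketch under that
assumption, not a fix for the root cause; make sure no other process
still holds the lock first:

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class StaleLockCleaner {
        public static void main(String[] args) throws IOException {
            // Point this at the affected index directory:
            Directory dir = FSDirectory.open(new File(
                    "/data/services/test/data/helios/lucene/fr.openwide.helios.core.business.contract.model.Company"));
            // Only safe when no other process still holds the lock!
            if (IndexWriter.isLocked(dir)) {
                IndexWriter.unlock(dir);
            }
            dir.close();
        }
    }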
Thanks for your feedback.
--
Guillaume
5.0.0 JIRA version
by Steve Ebersole
I created the initial 5.0.0 version in JIRA and bulk-moved all issues
scheduled for 4.1.0 over to it, then went back and individually moved
issues that did not appear related to the new metamodel code back to
4.1.0. If you see any issues I missed moving back that do not really
belong grouped with the new metamodel code, please adjust them as needed.
Thanks.
--
steve@hibernate.org
http://hibernate.org
recursive flushing?
by Steve Ebersole
I seem to recall a JIRA issue pertaining to allowing flushing to occur
recursively such that it would pick up changes made during flush. But I
cannot find that issue ATM.
Does anyone remember that and know the JIRA key?
Or maybe it was only ever discussed on the dev list?
--
steve@hibernate.org
http://hibernate.org