Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Oracle currently supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed in
a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
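To illustrate, the two access patterns amount to roughly the following JDBC code (just a sketch; the "data" column name and the surrounding class are hypothetical):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.sql.ResultSet;

    public class LongRawReadSketch {
        // One-call extraction, as ImageType effectively does today
        static byte[] readAllAtOnce(ResultSet rs) throws Exception {
            return rs.getBytes("data");
        }

        // Streamed extraction, for comparison
        static byte[] readStreamed(ResultSet rs) throws Exception {
            InputStream in = rs.getBinaryStream("data");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            in.close();
            return out.toByteArray();
        }
    }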
The bug in reading LONG columns affects Oracle JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide to
4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an Oracle JDBC
implementation issue. This option came to my mind, but it's weird :P.
I would vote for the first option.
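A rough sketch of the first option (the registrations mirror Gail's suggestion quoted below; the exact placement inside Oracle10gDialect is illustrative, not a final patch):

    import java.sql.Types;
    import org.hibernate.dialect.Oracle9iDialect;

    public class Oracle10gDialect extends Oracle9iDialect {
        public Oracle10gDialect() {
            super();
            // prefer LOBs over the deprecated LONG / LONG RAW types
            registerColumnType(Types.VARBINARY, 2000, "raw($l)");
            registerColumnType(Types.VARBINARY, "blob");

            registerColumnType(Types.LONGVARCHAR, "clob");
            registerColumnType(Types.LONGVARBINARY, "blob");

            registerColumnType(Types.VARCHAR, 4000, "varchar2($l char)");
            registerColumnType(Types.VARCHAR, "clob");
        }
    }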
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu writes:
> I think Oracle 11g is the only database version still supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? New
>> applications/databases could perhaps use that, and existing applications
>> might keep using LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column is
>>>> accessed other than as the first or last column while processing a SQL
>>>> statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Proxies and typing
by Steve Ebersole
Regarding the new "load access", a user asked why we don't leverage
generics.
The problem is the existence of @Proxy#proxyClass. We have the same
issue with Session#load/get taking the entity Class. We can't use a
generic signature like:

    public <T> T load(Class<T> entityType, ...)

because at times we return objects that are not typed to <T>. I have to
dive back into the specifics, but IIRC the problem is that we don't do
the expected thing and have the generated proxy class extend the entity
class when @Proxy#proxyClass names an interface. I remember this change
from way back when, but the specifics of why escape me at the moment.
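To make the problem concrete, a minimal sketch (the entity and interface names are made up):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Proxy;

    interface UserContract {
        String getName();
    }

    @Entity
    @Proxy(proxyClass = UserContract.class)
    class User implements UserContract {
        @Id
        Long id;
        String name;

        public String getName() {
            return name;
        }
    }

    // With the generic signature, this call would have to return a User:
    //     User user = session.load(User.class, 1L);
    // but the generated proxy implements UserContract without extending
    // User, so the implicit cast to T (User) throws ClassCastException.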
IMO providing generic signatures would obviously be a great
improvement. Is that enough to justify changing this behavior?
WDYT?
--
steve(a)hibernate.org
http://hibernate.org
natural-id to primary key cache
by Steve Ebersole
Historically, natural-id look-ups were accomplished by leveraging
Criteria queries, and caching was handled through the second-level query
cache.
One of the new things in 4.1 is the dedicated natural-id loading API, so
the caching will be quite different here. I am a little leery about
making breaking changes in 4.1 after all the changes in 4.0 if we can
avoid it. If we can't, we can't. One thought was to use a
SessionFactory-scoped "cache" for this in 4.1 and then add a new
second-level cache Region construct for it in 5.0. The *only* benefit is
to keep the second-level cache SPI the same between 4.0 and 4.1. Is that
worth it? Any thoughts?
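For reference, usage of the new API looks roughly like this (a sketch; the User entity and its userName natural id are hypothetical, and load() returns Object in 4.1, hence the cast):

    import org.hibernate.Session;

    public class NaturalIdLoadSketch {
        static User findByUserName(Session session, String userName) {
            // dedicated natural-id loading API, new in 4.1
            return (User) session.byNaturalId(User.class)
                    .using("userName", userName)
                    .load();
        }
    }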
--
steve(a)hibernate.org
http://hibernate.org
HHH-6942 - Envers Collection revision entries don't include deletes on detached entity saveOrUpdate
by Łukasz Antoniak
Hi all!
Lately I have been working on the HHH-6942 JIRA issue. Envers behaves differently when a detached object is updated with a new collection and persisted by invoking the Session.merge() or Session.saveOrUpdate()
method.
SQL statements executed by Session.merge():
14:53:47,031 DEBUG SQL:104 - select setrefcoll0_.id as id1_0_, setrefcoll0_.data as data1_0_ from SetRefCollEntity setrefcoll0_ where setrefcoll0_.id=?
*14:53:47,078 DEBUG SQL:104 - select collection0_.SetRefCollEntity_id as SetRefCo1_1_1_, collection0_.collection_id as collection2_1_, strtestent1_.id as id0_0_, strtestent1_.str as str0_0_ from
SetRefCollEntity_StrTestEntity collection0_ inner join StrTestEntity strtestent1_ on collection0_.collection_id=strtestent1_.id where collection0_.SetRefCollEntity_id=?*
*14:53:47,125 DEBUG SQL:104 - delete from SetRefCollEntity_StrTestEntity where SetRefCollEntity_id=? and collection_id=?*
14:53:47,140 DEBUG SQL:104 - insert into REVINFO (REV, REVTSTMP) values (null, ?)
14:53:47,140 DEBUG SQL:104 - insert into SetRefCollEntity_StrTestEntity_AUD (REVTYPE, REV, SetRefCollEntity_id, collection_id) values (?, ?, ?, ?)
14:53:47,140 DEBUG SQL:104 - insert into SetRefCollEntity_AUD (REVTYPE, data, id, REV) values (?, ?, ?, ?)
SQL statements executed by Session.saveOrUpdate():
14:54:32,171 DEBUG SQL:104 - select setrefcoll_.id, setrefcoll_.data as data1_ from SetRefCollEntity setrefcoll_ where setrefcoll_.id=?
*14:54:32,187 DEBUG SQL:104 - delete from SetRefCollEntity_StrTestEntity where SetRefCollEntity_id=?*
14:54:32,187 DEBUG SQL:104 - insert into SetRefCollEntity_StrTestEntity (SetRefCollEntity_id, collection_id) values (?, ?)
14:54:32,187 DEBUG SQL:104 - insert into REVINFO (REV, REVTSTMP) values (null, ?)
14:54:32,187 DEBUG SQL:104 - insert into SetRefCollEntity_StrTestEntity_AUD (REVTYPE, REV, SetRefCollEntity_id, collection_id) values (?, ?, ?, ?)
14:54:32,187 DEBUG SQL:104 - insert into SetRefCollEntity_AUD (REVTYPE, data, id, REV) values (?, ?, ?, ?)
The main difference is that Session.merge() fetches the uninitialized collection from the database and deletes the particular record; in this case Envers operates correctly. With Session.saveOrUpdate(), all
associated records are removed and new ones are inserted (which in general works as expected). When Session.saveOrUpdate() is called, PreCollectionRemoveEventListener.onPreRemoveCollection() gets
executed with a null collection attribute on the PreCollectionRemoveEvent. Would it be possible to initialize the collection inside the PreCollectionRemoveEventListener.onPreRemoveCollection()
implementation? If so, could you provide me with sample code? I cannot figure it out. I've tried using event.getSession().getPersistenceContext().addUninitializedDetachedCollection() and
event.getSession().initializeCollection(), but with no luck.
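For reference, the shape of the listener involved (a sketch; import paths may differ between Hibernate versions):

    import org.hibernate.collection.spi.PersistentCollection;
    import org.hibernate.event.spi.PreCollectionRemoveEvent;
    import org.hibernate.event.spi.PreCollectionRemoveEventListener;

    public class EnversPreCollectionRemoveSketch implements PreCollectionRemoveEventListener {
        public void onPreRemoveCollection(PreCollectionRemoveEvent event) {
            PersistentCollection collection = event.getCollection();
            if (collection == null) {
                // saveOrUpdate() path: the event carries no collection
                // instance, so there is nothing to inspect for the
                // audit rows at this point
            }
        }
    }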
Regards,
Lukasz
Re: [hibernate-dev] mutable versus immutable natural keys
by Eric Dalquist
After our conversation in IRC, I think the latest iteration of the idea,
having the following, would suffice:
    public @interface NaturalId {
        /**
         * If the NaturalId can change, either via the application or direct database manipulation
         */
        boolean mutable() default false;

        /**
         * If the NaturalId->PrimaryKey resolution should be stored in the L2 cache
         */
        boolean cache() default true;
    }
I think we can do away with IMMUTABLE vs IMMUTABLE_CHECKED and simply
always do the consistency check when detached entities with natural ids
are attached.
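In other words, usage would collapse to something like this (a hypothetical sketch):

    public class User {
        // immutable by default; the consistency check always runs when
        // a detached instance with a natural id is re-attached
        @NaturalId
        private String userName;

        // a natural id the application may legitimately change
        @NaturalId(mutable=true)
        private String email;
    }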
As for the conflict of doing something like:
    @NaturalId
    @Column(updatable=true)
    private String userName;
I think the solution is to log a warning along the lines of
"com.example.User.userName is marked as an immutable NaturalId but also
marked as updatable; the updatable flag will be ignored".
-Eric
On 01/17/2012 02:18 PM, steve(a)hibernate.org wrote:
In talking with a few people that use this feature, there is definitely
a desire to account for both immutable-with-checking and
immutable-without-checking.
Initially I started down the path of an enum to model this:

    public enum NaturalIdMutability {
        MUTABLE,
        IMMUTABLE,
        IMMUTABLE_CHECKED,
        @Deprecated
        UNSPECIFIED
    }
and:

    public @interface NaturalId {
        @Deprecated
        boolean mutable() default false;

        NaturalIdMutability mutability() default NaturalIdMutability.UNSPECIFIED;
    }
But I started to think it might be better to instead separate the
definition of mutable/immutable from whether or not to do checking on
immutable. What is the difference in folks' minds between:
    public class User {
        ...
        @NaturalId(mutable=false)
        private String userName;
    }
and
    public class User {
        ...
        @NaturalId
        @Column(updatable=false)
        private String userName;
    }
?
Or is everyone ok with this form:
    public class User {
        ...
        @NaturalId(mutability=NaturalIdMutability.MUTABLE)
        private String userName;
    }
and if so, how should this be interpreted:
    public class User {
        ...
        @NaturalId(mutability=NaturalIdMutability.IMMUTABLE)
        @Column(updatable=true)
        private String userName;
    }
Test
by Eric Dalquist
Sorry for the spam, having subscription problems.
Using SOLR with indexing/search Server
by Anderson vasconcelos
Hi
Is it possible to use Apache Solr integrated with Hibernate Search? I
want to use the features of Hibernate Search (like integration with
database objects, multi-object mapping, @IndexedEmbedded, etc.) and use
Solr for indexing and search. Is this possible?
Thanks
Backports for 3.6.10
by Gail Badner
I've created new issues for backporting fixes for 3.6.10.
Please take a look at: https://hibernate.onjira.com/secure/IssueNavigator.jspa?reset=true&mode=h...
I'm not planning to backport dialect-related issues, although I could be talked into it.
Steve, should HHH-6855/HHH-6854 and/or HHH-4358 also be backported?
Adam, are there any Envers issues that should be backported for 3.6.10? If so, please create new issues for them and assign as appropriate.
Feedback?
Thanks,
Gail