Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently, Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps these column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise, the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all the data in one call [4]. I
don't expect a major performance difference when the data is instead streamed
in a separate call. oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
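For reference, a minimal sketch of the two JDBC access styles being compared
(the column name and the wrapper class are illustrative only, not taken from
Hibernate's code):

import java.io.InputStream;
import java.sql.ResultSet;
import java.sql.SQLException;

class LongRawReadStyles {
    // ImageType's approach: materialize the whole column value in one call
    byte[] readInOneCall(ResultSet rs) throws SQLException {
        return rs.getBytes( "DATA" );
    }

    // streaming alternative: the driver reads from the stream on demand
    InputStream readAsStream(ResultSet rs) throws SQLException {
        return rs.getBinaryStream( "DATA" );
    }
}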
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect, making a note about it in the migration guide to
4.0, and updating the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may seem odd to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, using @RequiresDialect). Hibernate behaves correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
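To illustrate option one, here is a rough sketch of a dialect subclass that
applies the registerColumnType() overrides Gail suggested below (the class
name is purely illustrative, not an actual Hibernate class):

import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

public class LobMappingOracle10gDialect extends Oracle10gDialect {
    public LobMappingOracle10gDialect() {
        super();
        // map LONGVARCHAR / LONGVARBINARY to LOBs instead of LONG / LONG RAW
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}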
Regards,
Lukasz Antoniak
[1] http://www.oracle.com/us/support/library/lifetime-support-technology-0691... (page 4)
[2] http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3] http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? Then new
>> applications/databases could perhaps use that, and existing applications
>> might keep using LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when LONG or LONG RAW columns are
>>>> not accessed as the first or last column while processing an SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Fwd: Re: Proxies and typing
by Steve Ebersole
Forwarding a part of this discussion that got inadvertently limited to
just Sanne and myself.
Bringing this back up because this is most likely not going to be
accepted into JPA 2.1. Anyway, I am all for going down the path that:
public interface User {
...
}
@Entity
@Proxy(proxyClass=User.class)
public class UserImpl implements User {
...
}
means that users would use User.class in all phases of the API:
User user = session.byId( User.class ).get( 1 );
EntityType<User> jpaEntityType = emf.getMetamodel().entity( User.class );
etc.
I think that is the cleanest path that allows a generic-typed API.
-------- Original Message --------
Subject: Re: [hibernate-dev] Proxies and typing
Date: Thu, 26 Jan 2012 14:37:22 +0000
From: Sanne Grinovero <sanne(a)hibernate.org>
To: Steve Ebersole <steve(a)hibernate.org>
On 26 January 2012 14:02, Steve Ebersole <steve(a)hibernate.org> wrote:
> These emails are just between you and me. Not sure if that's what you
> intended. I erroneously replied to just you at one point but then sent to the
> whole list also. Anyway, just mentioning...
Ah, sorry, didn't notice either. Well, last reply then; I will try to resume
the public conversation if I have more comments.
> The idea of requiring the interface is appealing in a way. But then there
> are odd inconsistencies. For example
>
> User user = session.byId( User.class ).get( 1 );
>
> but then
>
> EntityType<UserImpl> jpaEntityType = emf.getMetamodel().entity( UserImpl.class );
>
>
> Which I guess is my biggest hang-up. On one side we are saying that the
> impl is the entity, and on the other that the interface is the entity.
>
> You know me and consistency :)
I agree on consistency, but this is tricky. I'm not sure you need
UserImpl at all; maybe you can remove it from the metamodel (perhaps
after having read the other metadata from it).
Isn't such a mapping definition a dirty workaround for actually
mapping the interface?
Sanne
Quoted names
by Dmitry Geraskov
Hi, guys,
I am working on the Hibernate Tools code generation problem for tables
which have quoted names (the name, schema, or catalog contains special
symbols, e.g. a dot).
Any reason why Table#setName(x) and Table#setSchema(x) have "unquote"
logic, but Table#setCatalog(x) does not?
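For context, a hedged sketch of the kind of "unquote" logic being referred to:
Hibernate treats identifiers wrapped in backticks as quoted and strips the
backticks when storing the name (illustrative code, not the actual
org.hibernate.mapping.Table source):

class QuotedIdentifier {
    private String name;
    private boolean quoted;

    void setName(String value) {
        if ( value != null && value.startsWith( "`" ) && value.endsWith( "`" ) ) {
            quoted = true;
            name = value.substring( 1, value.length() - 1 ); // strip the backticks
        }
        else {
            quoted = false;
            name = value;
        }
    }
}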
Dmitry Geraskov
OGM-157 copyright year
by Guillaume SCHEIBEL
Hi guys,
I'm starting the translation of the documentation into French, and I've seen
that the "copyright year" in the ogm.ent file is set to 2011.
Should I change it? If yes, should I handle it under OGM-157 (specific to the
translation), or should I open a new JIRA about it?
Guillaume
[OGM] Transaction-aware
by Pawel Kozlowski
hi!
First of all, thank you for coming up with the idea & implementing
OGM. For quite some time I was thinking of using JPA annotations /
semantics to drive different NoSQL stores but the whole idea of
re-implementing the JPA machinery was really scary. Now we don't need
to do this anymore as we've got OGM :-)
For the past few days I have been looking at the
org.hibernate.ogm.dialect.GridDialect interface (as well as at the
existing Map, Infinispan and Ehcache implementations), and it looks
like it is very easy to implement non-transactional behavior (I mean,
persistence of tuples and associations is really straightforward).
What I was struggling with, though, is making a NoSQL store aware of
the JTA transaction demarcation. What I would like to achieve is to
start a transaction in the underlying store (on transaction begin) and
commit/roll it back in the data store when the JTA transaction is
committed/rolled back.
Looking at the existing implementations it wasn't easy to figure out
how to achieve this. Moreover I've bumped into this discussion:
http://www.mail-archive.com/hibernate-dev@lists.jboss.org/msg07373.html
where Emmanuel provided his insight: "From this discussion it also
seems that we might need to have datastores and
dialect implement the Hibernate transaction object so that the datastore can
properly demarcate when isolation starts and when it ends. But that's clearly
not abstracted yet in Hibernate OGM."
In the end my question is rather simple (although the answer might not
be...): what would be the OGM way to start/commit a transaction in the
underlying data store in response to JTA transaction events? Or am I
asking a totally wrong question, and should I be taking a different
approach? I would be grateful for any insight.
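Not an OGM answer, but for the JTA side of the question, here is a generic
sketch of how a store's own transaction can be tied to the JTA lifecycle by
registering a Synchronization (the NoSqlStoreTx handle is hypothetical, just a
stand-in for whatever native transaction object the store exposes):

import javax.transaction.Status;
import javax.transaction.Synchronization;
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

interface NoSqlStoreTx {
    void flush();
    void commit();
    void rollback();
}

class StoreTxSynchronization implements Synchronization {
    private final NoSqlStoreTx storeTx;

    StoreTxSynchronization(NoSqlStoreTx storeTx) {
        this.storeTx = storeTx;
    }

    @Override
    public void beforeCompletion() {
        // flush pending writes before the JTA outcome is decided
        storeTx.flush();
    }

    @Override
    public void afterCompletion(int status) {
        // commit or roll back the store's native transaction to match JTA
        if ( status == Status.STATUS_COMMITTED ) {
            storeTx.commit();
        }
        else {
            storeTx.rollback();
        }
    }

    static void register(TransactionManager tm, NoSqlStoreTx storeTx) throws Exception {
        Transaction tx = tm.getTransaction();
        tx.registerSynchronization( new StoreTxSynchronization( storeTx ) );
    }
}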
Cheers,
Pawel Kozlowski
Fwd: NaturalIdLoadAccess behaviour on 2Lcache is this expected?
by Madhumita Sadhukhan
Hello Developers,
Strong suggested I forward this to the mailing list. With reference to the mail below, could any of you clarify/explain whether this is expected behaviour of NaturalIdLoadAccess with the second-level cache, as I find some discrepancies?
Thanks and Regards,
Madhumita Sadhukhan
JBoss EAP QE Team
Red Hat Brno
----- Forwarded Message -----
From: "Madhumita Sadhukhan" <msadhukh(a)redhat.com>
To: "Strong Liu" <stliu(a)redhat.com>
Sent: Friday, April 27, 2012 12:19:18 AM
Subject: NaturalIdLoadAccess behaviour is this expected?
Hi Strong,
I noticed strange behaviour while loading by natural id with the second-level cache enabled.
I am not sure if this is expected behaviour, but I notice some discrepancies which I would like to clarify.
I am not yet uploading this test to my AS7 branch on GitHub, as AS7 is still stuck on Hibernate 4.1.2 and this test requires 4.1.3 (from EAP 6, which I am currently testing) to work.
Please paste the attached unzipped folder into the AS testsuite folder structure at this location:
/jboss-as/testsuite/integration/basic/src/test/java/org/jboss/as/test/integration/hibernate/
Please run with the Hibernate 4.1.3 jars replaced in modules/org/hibernate..... within the AS7 build.
In my test I have tried to load an entity with NaturalIdLoadAccess, with the second-level cache enabled, using two natural ids (firstname and voterid), in several steps:
1) Create a person (with natural ids firstname and voterid).
2) Load using the natural ids for the first time.
3) Modify/update the natural ids in the database (in order not to touch the second-level cache).
4) Load from the second-level cache using the old natural id values from step 2. // This works correctly as expected and is able to load the values from the cache even though they have been modified in the DB.
5) Then load using the loader from step 2, but with the updated values of firstname and voterid (natural ids) set in step 3.
// This is where it breaks: the Person object returned from this load still shows the older value of firstname as in step 2, but I wonder how the loader works, as I have passed the new values in using(....);
also the next step fails, so I wonder where the flaw is!
6) Try to use the same loader and load using the older values of the natural ids; this is to recheck whether the older value still persists in the cache, as it was returned in step 5.
// This throws a null pointer exception, showing it does not exist in the cache and the loader is unable to load it!
7) If I try again with the new actual values, the loader is able to load the person entity again, but with the old values!!!
So the problem/confusion is due to the discrepancy between step 5 and step 6 (the older value of the entity loaded in step 5 indicates it is still cached, but the failure to load it in step 6, throwing a null pointer exception, indicates it is not).
I am also surprised at how the using() function behaves on the loader, i.e. it returns the entity with the older values while loading "using()" the updated values! (as confirmed in step 7)
Please note that the test case will pass, as I replaced the asserts with S.O.P.s; hence you should check the values in the server log excerpt (attached for ease).
Could you take a quick look please and explain if this is expected behaviour?
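For readers who haven't used the API, a minimal sketch of the
NaturalIdLoadAccess calls the steps above describe (the Person entity and its
natural-id property names are taken from the description above; the concrete
values and the cast are illustrative assumptions):

Person person = (Person) session.byNaturalId( Person.class )
        .using( "firstname", "John" )
        .using( "voterid", "V-123" )
        .load();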
Thanks and Regards,
Madhumita Sadhukhan
JBoss EAP QE Team
Red Hat Brno
Re: [hibernate-dev] Adding features to Dialect class
by Steve Ebersole
Their better option is to apply a Type for String that handles this.
http://docs.jboss.org/hibernate/orm/4.1/manual/en-US/html_single/#types-r...
This is the type of thing we will be able to handle automatically in 5.0.
But as for the forum user's exact question, personally I think his
expectation that null and empty string *in the Java model* be handled
equally is just plain wrong.
P.S., these kinds of questions should be directed at the dev list so we
can get everyone's input.
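To make the Type-for-String suggestion above concrete: this is not the actual
Hibernate Type implementation being referred to, just a hedged sketch of the
normalization such a String-handling type would perform so that Oracle's
"empty VARCHAR2 is stored as NULL" behavior stays out of the Java model (the
class and method names are illustrative):

public final class EmptyStringNormalizer {

    private EmptyStringNormalizer() {
    }

    // value to bind to the statement: Oracle would persist "" as NULL anyway
    public static String toDatabase(String value) {
        return ( value == null || value.isEmpty() ) ? null : value;
    }

    // value to expose to the Java model: pick one canonical representation
    public static String fromDatabase(String value) {
        return ( value == null ) ? "" : value;
    }
}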
On Mon 23 Apr 2012 07:39:19 AM CDT, Łukasz Antoniak wrote:
> If you are sure that only Oracle treats empty strings this way, then
> I will check whether the Oracle dialect is currently in use.
> Issue: https://hibernate.onjira.com/browse/HHH-7246
>
> Regards,
> Lukasz
>
> On 23 April 2012 at 14:00, Steve Ebersole
> <steve(a)hibernate.org> wrote:
>> First, why do you need this?
>>
>> Second, they all distinguish between NULL and empty string. What Oracle
>> does that is odd is to instead write empty strings as NULL when inserting or
>> updating values.
>>
>>
>> On Mon 23 Apr 2012 01:59:25 AM CDT, Łukasz Antoniak wrote:
>>>
>>> Hello team,
>>>
>>> I would like to add the following method to abstract Dialect class:
>>>
>>> /**
>>>  * Does this dialect distinguish between an empty string and a {@code NULL} value?
>>>  *
>>>  * @return {@code true} if the database does not treat an empty string as
>>>  * {@code NULL}; {@code false} otherwise.
>>>  */
>>> public boolean supportsEmptyString() {
>>>     return true;
>>> }
>>>
>>> I know that in Oracle dialect, it has to return false.
>>>
>>> Am I supposed to override it appropriately for other dialects before
>>> committing my changes?
>>> Any comments about naming? I couldn't come up with a better one.
>>>
>>> Regards,
>>> Lukasz
>>
>>
>> --
>> steve(a)hibernate.org
>> http://hibernate.org
--
steve(a)hibernate.org
http://hibernate.org
How to run tests on the MongoDB branch
by Sanne Grinovero
Continuing the IRC chat here:
<emmanuel> I force enabled it
<emmanuel> still it uses localhost
<emmanuel> funnily enough 5 tests pass
<emmanuel> gtg, train arrived
This is how it's designed to work, please forget about profiles:
if you run (using your build script):
$ sh build
it builds all modules *except* mongodb
if you run the same command prefixing it with the env parameters you want:
$ MONGODB_HOSTNAME=127.0.0.7 sh build
it will build all modules, including mongodb, using the
hostname you defined for the tests.
That should be good for a one-shot test; of course I'd expect frequent
builders like yourself to define the variable globally so you won't
have to type it in every time.
You can use profiles as well to enable it, in which case it will use
localhost:defaultmongodbPort, so in your case you should just set the
environment variable and leave profiles alone.
5 tests will pass even with a wrong environment, as they are true "unit
tests" that don't actually need MongoDB.
Cheers,
Sanne