Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed in
a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
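Just to make the comparison concrete, here is a minimal JDBC sketch of the two
access patterns: fetching everything in a single getBytes() call (what ImageType
effectively does) versus streaming the column. This is not Hibernate code; the
table "document", its "content" column and the helper method are made up for
illustration:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LongRawReadSketch {

    // Reads a LONG RAW column either in one getBytes() call or by streaming it.
    static byte[] readContent(Connection connection, long documentId, boolean fetchInOneCall)
            throws SQLException, IOException {
        PreparedStatement ps = connection.prepareStatement(
                "select content from document where id = ?" );
        try {
            ps.setLong( 1, documentId );
            ResultSet rs = ps.executeQuery();
            if ( !rs.next() ) {
                return null;
            }
            if ( fetchInOneCall ) {
                // what ImageType effectively does: fetch the whole value in one call
                return rs.getBytes( 1 );
            }
            // the streaming alternative: read the same data incrementally
            InputStream in = rs.getBinaryStream( 1 );
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            for ( int read = in.read( buffer ); read != -1; read = in.read( buffer ) ) {
                out.write( buffer, 0, read );
            }
            return out.toByteArray();
        }
        finally {
            ps.close();
        }
    }
}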
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect, making a note about it in the 4.0 migration
guide, and updating the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
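For illustration, the relevant part of Oracle10gDialect could end up looking
roughly like the sketch below, based on Gail's suggested registrations quoted
further down. This is only a sketch, not a final patch, and the exact set of
registrations is still to be agreed on:

import java.sql.Types;

import org.hibernate.dialect.Oracle9iDialect;

// Sketch only: remap the "long" JDBC types to LOBs instead of LONG / LONG RAW.
public class Oracle10gDialect extends Oracle9iDialect {
    public Oracle10gDialect() {
        super();
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}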
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu pisze:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> JIRA
>>>> issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version 10.2.0.4
>>>> and later). The bug appears when LONG or LONG RAW columns are accessed
>>>> neither first nor last while processing an SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important reason from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB, BLOB / CLOB up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> supports only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
Changelog file in Hibernate ORM
by Sanne Grinovero
The file changelog.txt in the root of the Hibernate ORM project seems outdated.
Is it not maintained anymore? I found it handy.
Sanne
[OGM] OGM-21 query support
by Guillaume SCHEIBEL
Hello,
I have started to work on support for JPQL queries from the
EntityManager and I'm facing some difficulties.
I looked into ORM as a starting point, and so I have created:
public class OgmQueryImpl<X> extends AbstractQueryImpl<X> implements
TypedQuery<X> { ... }
Therefore, the AbstractQueryImpl class needs a constructor
taking a HibernateEntityManagerImplementor. Because OgmEntityManager
directly implements EntityManager, this cannot be done directly (HEMI is an
implementation of the EntityManager interface).
So I've tried to switch OgmEntityManager to
extend AbstractEntityManagerImpl (which is an implementation of
HibernateEntityManagerImplementor), but now
OgmEntityManager#getEntityManagerFactory() must return an instance
of EntityManagerFactoryImpl.
The point is that OgmEntityManagerFactory implements EntityManagerFactory
and HibernateEntityManagerFactory, not EntityManagerFactoryImpl.
I don't want to change the complete class hierarchy if there is a better
option / choice.
Any thoughts?
Guillaume
Hibernate Search: Transactions timeout on MassIndexer
by Sanne Grinovero
Hi Emmanuel,
in case you get very bored at Devoxx :)
I remember you implementing a quite complex fix for my initial
MassIndexer which involved preventing the transactions we use from
timing out.
This is probably more than a year old, but there is a user on the
forums now using 4.4.0.Final and having a suspiciously similar
problem:
https://forum.hibernate.org/viewtopic.php?f=9&t=1029562
I've looked into our code, but I don't understand how the class
org.hibernate.search.batchindexing.impl.OptionallyWrapInJTATransaction
is supposed to prevent the transaction from timing out.
Do you have any idea on the problem?
I recently had to apply some refactoring so I might have introduced a
regression but I need another pair of eyes.
Tia,
Sanne
SessionEventsListener feature (HHH-8654)
by Steve Ebersole
I wanted to highlight a new feature in 4.3 as it came about from
performance testing efforts. It's a way to hopefully help track down
potential performance problems in applications that use Hibernate. In
this way it is similar to statistics, but it operates per-Session
(though certainly custom impls could roll the metrics up to a SessionFactory level).
It revolves around the SessionEventsListener[1] interface which
essentially defines a number of start/end pairs for the interesting
events (for example starting to prepare a JDBC statement and ending that
preparation).
Multiple SessionEventsListener instances can be associated with the
Session simultaneously. You can add them programmatically to a Session
using the Session#addEventsListeners(SessionEventsListener...) method. They
can also be added to the Session up front via the
SessionFactory#withOptions API for building Sessions.
Additionally there are 2 settings that allow SessionEventsListener impls
to be applied to all Sessions created:
* 'hibernate.session.events.auto' allows you to name any arbitrary
SessionEventsListener class to apply to all Sessions.
* 'hibernate.session.events.log' refers to a particular built-in
implementation of SessionEventsListener that applies some timings across
the start/end pairs
(org.hibernate.engine.internal.LoggingSessionEventsListener). In fact
this listener is added by default if (a) stats are enabled and (b) the
log level (currently INFO) of LoggingSessionEventsListener is enabled.
Below[2] is some sample output of LoggingSessionEventsListener.
There is also a org.hibernate.EmptySessionEventsListener (no-op) class
to help develop custom ones.
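For illustration, here is a minimal usage sketch. Session#addEventsListeners and
EmptySessionEventsListener are taken from the description above; the overridden
callback names are my assumption based on the start/end pairs mentioned, so check
the gist [1] for the actual method names:

import org.hibernate.EmptySessionEventsListener;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class InstrumentedSessionExample {

    public static Session openInstrumentedSession(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        session.addEventsListeners( new EmptySessionEventsListener() {
            private long start;

            @Override
            public void jdbcPrepareStatementStart() {
                start = System.nanoTime();
            }

            @Override
            public void jdbcPrepareStatementEnd() {
                System.out.println( "Prepared a JDBC statement in "
                        + ( System.nanoTime() - start ) + " ns" );
            }
        } );
        // the listener only sees events for this particular Session
        return session;
    }
}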
Anyway, as much as anything I wanted to point it out so people can try
it out and give feedback. I think the API covers most of the
interesting events. If you feel any are missing, let's discuss
here or on a Jira issue.
[1] https://gist.github.com/sebersole/7438250
[2]
14:40:20,017 INFO LoggingSessionEventsListener:275 - Session Metrics {
9762 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
1020726 nanoseconds spent preparing 4 JDBC statements;
1442351 nanoseconds spent executing 4 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
0 nanoseconds spent executing 0 L2C puts;
0 nanoseconds spent executing 0 L2C hits;
0 nanoseconds spent executing 0 L2C misses;
2766689 nanoseconds spent executing 1 flushes (flushing a total of
3 entities and 1 collections);
1096552384585007 nanoseconds spent executing 2 partial-flushes
(flushing a total of 3 entities and 3 collections)
}
[OGM] Distinguishing embedded collections and associations in document stores
by Gunnar Morling
Hi,
I'm working on support for embedded associations in CouchDB [1]. Checking
how this is mapped by the MongoDB dialect, I saw it's done like this (here
with an order column):
{
"_id": "123",
"orderedChildren": [
{
"birthorder": 0,
"orderedChildren_id": "456"
},
{
"birthorder": 1,
"orderedChildren_id": "789"
}
]
}
Just looking at this document one can't tell whether "orderedChildren"
actually represents an association or an embedded collection. For our
engine that's no problem as it knows the kind of the element from its
meta-model.
We have a testing approach, though, which makes assertions on the number of
associations stored in the database. With the representation described
above, the number of embedded associations can't be determined on the
server side alone (using a "view" in CouchDB terms).
Besides adding an attribute which describes the kind of a collection
(which wouldn't be so nice, as it would exist just for testing purposes), I don't
see any other way than obtaining all the candidates and singling out actual
associations on the client based on the meta-model.
Maybe anyone has a better idea?
Btw. for MongoDB the problem is ignored by having the assertion method
always return true in this case.
--Gunnar
[1] https://hibernate.atlassian.net/browse/OGM-389
Annotation Processors
by Steve Ebersole
I started today on removing the separate calls to javac to execute
Annotation Processors in the Hibernate ORM build. From Gradle it is
working fine. However, when I try to enable Annotation Processing in
IntelliJ, it complains about the "module cycle" between hibernate-core
and hibernate-testing.
I'd really like to get a gauge on how many people really use
hibernate-testing.
[ORM/OGM] String-typed version properties
by Gunnar Morling
All,
I'm working on support for the CouchDB backend for OGM, and more
specifically on integrating the optimistic locking functionality which is
built into CouchDB [1].
For that purpose, each CouchDB document has a defined field "_rev", which is
a UUID and is updated on the server side upon each write. So I thought I
could map this attribute like this:
@Generated //this prop. needs to be read back after writes
@Version //this prop. is used for optimistic locking
String _rev;
But this gave me a CCE (ClassCastException), since @Version is not allowed on Strings
(org.hibernate.type.StringType is not a VersionType).
Looking around, I found BinaryType, which looks like what I'd need for
Strings; there is also a note mentioning basically the same use case
[2]. If I register a custom type derived from StringType with an equivalent
implementation of the VersionType contract, I get the behavior I need.
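For reference, a rough sketch of such a custom type (the class name is made up,
and the VersionType methods here simply mirror what BinaryType does, i.e. they
defer entirely to the DB-generated value):

import java.util.Comparator;

import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.type.StringType;
import org.hibernate.type.VersionType;

public class RevisionStringType extends StringType implements VersionType<String> {

    @Override
    public String seed(SessionImplementor session) {
        // nothing to seed on the Java side; CouchDB assigns the initial "_rev" itself
        return null;
    }

    @Override
    public String next(String current, SessionImplementor session) {
        // the next value is produced by the server on each write, so just hand back
        // the current one and read the generated value afterwards (hence @Generated)
        return current;
    }

    @Override
    public Comparator<String> getComparator() {
        // plain lexical comparison is enough to tell "changed" from "unchanged"
        return new Comparator<String>() {
            @Override
            public int compare(String o1, String o2) {
                return o1.compareTo( o2 );
            }
        };
    }
}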
Does anyone see a problem with making o.h.t.StringType a VersionType in this
way (i.e. it'd only support DB-generated values)?
Thanks,
--Gunnar
[1] https://hibernate.atlassian.net/browse/OGM-392
[2] "only known application of binary types for versioning is for use with
the TIMESTAMP datatype supported by Sybase and SQL Server, which are
completely db-generated values"
[OGM] Build time
by Emmanuel Bernard
My machine is in a poor state. But still, the default mvn install took
more than 7 minutes.
For info, the minimal build takes 3:20 (4:30 with the integration tests).
We have added a few mechanisms over time on OGM:
* modules depending on an external DB are skipped if SOMEDB_HOSTNAME is
not set
* skipDocs which skips documentation and JavaDocs (JavaDocs take a lot
of time on my machine)
* skipITs which skips the integration tests
* skipDistro which skips the distribution
While each individual mechanism serves a purpose, it ends up serving
everyone badly. Minimalarians complain about the myriad of flags to
write each time. Safarians complain that if they forget
COUCHDB_HOSTNAME, the distribution will simply not contain it.
I think there are four main use cases
1. run the test suite for one specific db + rebuild core as things might
have changed
2. run the minimal test suite to make sure things compile and work
3. run a full test on every backend and build the distribution for a
release
4. other cases
I suspect the % of time per use case is as follows (your mileage may
vary):
1. 45%
2. 45%
3. 1%
4. 9%
But when you are in case 3. you absolutely must be sure everything runs
and no module is skipped.
Here is a proposal:
a. Provide a -Dminimal flag to run in case 1.
b. Provide a -Dcomplete flag to run in case 3.
c. Provide a script to do 2. I suspect case 2. can only be done with a
custom script or by moving to Gradle. For various reasons, I don't want
us to move to Gradle at this stage.
Is everyone OK with doing a. and b.?
The problem with c. is that making a cross-platform script requires time,
but we could make it work for us at least.
Note that this leaves open what 'mvn clean install' should do.
Thoughts?
Emmanuel