Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested. Leave
"hibernate.hbm2ddl.auto" unset.
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed
instead. oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
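For reference, a rough sketch of the two access patterns (plain JDBC, assuming a hypothetical table DEMO with a single LONG RAW column named DATA; not Hibernate code):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LongRawReadSketch {

        // What ImageType effectively relies on: the driver materializes the whole value in one call.
        static byte[] readWithGetBytes(Connection connection) throws SQLException {
            PreparedStatement ps = connection.prepareStatement("select data from demo");
            try {
                ResultSet rs = ps.executeQuery();
                rs.next();
                return rs.getBytes(1);
            }
            finally {
                ps.close();
            }
        }

        // Streaming variant: the same bytes arrive through the column's InputStream.
        static byte[] readWithStream(Connection connection) throws Exception {
            PreparedStatement ps = connection.prepareStatement("select data from demo");
            try {
                ResultSet rs = ps.executeQuery();
                rs.next();
                InputStream in = rs.getBinaryStream(1);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[8192];
                for (int read = in.read(buffer); read != -1; read = in.read(buffer)) {
                    out.write(buffer, 0, read);
                }
                return out.toByteArray();
            }
            finally {
                ps.close();
            }
        }
    }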
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect (see the sketch below). Make a note about it in
the migration guide to 4.0 and update the "5.2.2. Basic value types" chapter
in the Hibernate documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, using @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
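As a rough sketch of what option one could look like (the subclass name Oracle10gLobDialect is made up; the registrations are the ones Gail suggested, quoted further down in this thread):

    import java.sql.Types;

    import org.hibernate.dialect.Oracle10gDialect;

    // Hypothetical subclass just to illustrate the change; the real fix would
    // most likely go straight into Oracle10gDialect itself.
    public class Oracle10gLobDialect extends Oracle10gDialect {
        public Oracle10gLobDialect() {
            super();
            registerColumnType( Types.LONGVARCHAR, "clob" );
            registerColumnType( Types.LONGVARBINARY, "blob" );
        }
    }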
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu pisze:
> I think Oracle 11g is the only DB version supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? Then new
>> applications/databases could perhaps use that, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column is
>>>> accessed neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data), while
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Changelog file in Hibernate ORM
by Sanne Grinovero
The file changelog.txt in the root of the Hibernate ORM project seems outdated.
Is it not maintained anymore? I found it handy.
Sanne
JdbcSession proposal
by Steve Ebersole
I found a few spare minutes to work on this a little and move it into
the next stage with some actual interfaces, impls and usages to help
illustrate some of the proposed concepts.
https://github.com/sebersole/JdbcSession
The README.md is very up-to-date and detailed. Would be good to get
input from others.
P.S. I probably dislike the *Inflow naming more than you do :)
[OGM] OGM-21 query support
by Guillaume SCHEIBEL
Hello,
I have started to work on support for JPQL queries from the
EntityManager and I'm facing some difficulties.
I have looked into ORM as a starting point and so I have created
public class OgmQueryImpl<X> extends AbstractQueryImpl<X> implements
TypedQuery<X> { ... }
However, the AbstractQueryImpl class needs a constructor
taking a HibernateEntityManagerImplementor. Because OgmEntityManager
implements EntityManager directly, this cannot be done (HEMI is itself a
subtype of the EntityManager interface).
So I've tried to switch OgmEntityManager to
extend AbstractEntityManagerImpl (which is an implementation of
HibernateEntityManagerImplementor) but now
OgmEntityManager#getEntityManagerFactory() must return an instance
of EntityManagerFactoryImpl.
The point is that OgmEntityManagerFactory implements EntityManagerFactory
and HibernateEntityManagerFactory, but does not extend EntityManagerFactoryImpl.
I don't want to change the complete class hierarchy if there is a better
option / choice.
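To make the clash concrete, here is a toy model with stand-in types (not the real classes or signatures):

    // Toy model only: the types below stand in for the real ones.
    interface EntityManager {}                                            // javax.persistence.EntityManager
    interface HibernateEntityManagerImplementor extends EntityManager {} // ORM-internal contract

    abstract class AbstractQueryImpl<X> {
        // the constructor OgmQueryImpl would need to call
        protected AbstractQueryImpl(HibernateEntityManagerImplementor entityManager) {
        }
    }

    class OgmEntityManager implements EntityManager {}                    // implements EntityManager directly

    class OgmQueryImpl<X> extends AbstractQueryImpl<X> {
        OgmQueryImpl(OgmEntityManager entityManager) {
            // compiles, but OgmEntityManager is no HibernateEntityManagerImplementor,
            // so this can only blow up with a ClassCastException at runtime
            super( (HibernateEntityManagerImplementor) entityManager );
        }
    }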
So, any thoughts?
Guillaume
JdbcSession -> DataStoreSession
by Steve Ebersole
As Gail and I discussed the actual API to use in this JdbcSession
redesign, we both liked the idea of sending Operations into the
JdbcSession rather than having JdbcSession expose a very granular, JDBC-like
API. This got me thinking that JdbcSession really could be a generic
representation of a connection+transaction context for *any* back end,
whether that back end is JDBC or not.
I discussed this idea a bit with Sanne on IRC. He was on board in the
general sense.
So this morning I spent some time re-working the code to express this
concept. I just pushed this out to my repo. The main contract now is
org.hibernate.resource.store.DataStoreSession. The JDBC realization of
this contract is defined by
org.hibernate.resource.store.jdbc.JdbcSession (which extends the
DataStoreSession).
The nice thing is that this all ties in seamlessly with the
TransactionCoordinator. There is a notion of "resource local
transactions" and "JTA transactions". Resource local transactions
ultimately come from the DataStoreSession, which allows integrating
back-end-specific notions of transactions by implementing an
org.hibernate.resource.transaction.ResourceLocalTransaction for that
back end (org.hibernate.resource.store.jdbc.spi.PhysicalJdbcTransaction,
for example, for JDBC).
Anyway, it was a nice exercise even if y'all decide to not use this in
OGM. It forced me to think about the separation between the 2 main
packages (org.hibernate.resource.transaction versus
org.hibernate.resource.store) and to remove the cyclic deps.
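For anyone skimming the thread, here is an illustrative outline of the contracts as I read them; the method names are mine, the real SPI is in the repo:

    // Illustrative outline only; method names are guesses, the real contracts live in
    // org.hibernate.resource.store, org.hibernate.resource.store.jdbc and
    // org.hibernate.resource.transaction in the JdbcSession repo.
    import java.sql.Connection;

    // generic connection + transaction context for any back end
    interface DataStoreSession {
        ResourceLocalTransaction getResourceLocalTransaction();
    }

    // back-end specific notion of a "resource local transaction"
    interface ResourceLocalTransaction {
        void begin();
        void commit();
        void rollback();
    }

    // the JDBC realization: operations are sent in, instead of exposing a granular JDBC-like API
    interface JdbcSession extends DataStoreSession {
        <T> T accept(Operation<T> operation);
    }

    interface Operation<T> {
        T perform(Connection connection);
    }

    // JDBC's ResourceLocalTransaction, driven by the physical connection
    interface PhysicalJdbcTransaction extends ResourceLocalTransaction {
    }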
Refactoring of org.hibernate.cfg.Configuration
by Andrey Polyakov
Hi everyone,
I am writing to propose refactoring the Configuration class into an
interface and an implementation, for the reason below.
It would allow third-party developers such as myself to use proxies and
decorators with the default configuration.
For example, in groovy.grails they extend the Configuration class. I may
extend DefaultGrailsDomainConfiguration, but another developer would have
to choose which subclass to extend, which makes things trickier.
To be precise, I am playing with generateSchemaCreationScript and
generateSchemaUpdateList. I want to work with the default scripts by
simply wrapping the given configuration.
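As an illustration, here is a hypothetical decorator that the split would enable. The SchemaScriptSource interface is made up; the method simply mirrors the existing generateSchemaCreationScript signature:

    import org.hibernate.HibernateException;
    import org.hibernate.dialect.Dialect;

    // Made-up interface standing in for an extracted Configuration contract.
    interface SchemaScriptSource {
        String[] generateSchemaCreationScript(Dialect dialect) throws HibernateException;
    }

    // A decorator that works with *any* implementation, default or Grails-specific.
    class LoggingSchemaScriptSource implements SchemaScriptSource {
        private final SchemaScriptSource delegate;

        LoggingSchemaScriptSource(SchemaScriptSource delegate) {
            this.delegate = delegate;
        }

        public String[] generateSchemaCreationScript(Dialect dialect) throws HibernateException {
            String[] script = delegate.generateSchemaCreationScript(dialect);
            for (String statement : script) {
                System.out.println(statement); // "do things with the default scripts"
            }
            return script;
        }
    }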
Kind Regards,
Andrey Polyakov
Hibernate Search: Transactions timeout on MassIndexer
by Sanne Grinovero
Hi Emmanuel,
in case you get very bored at Devoxx :)
I remember you implementing a quite complex fix for my initial
MassIndexer which involved keeping the transactions we use from
timing out.
This is probably more than a year old, but there is a user on the
forums now using 4.4.0.Final and having a suspiciously similar
problem:
https://forum.hibernate.org/viewtopic.php?f=9&t=1029562
I've looked into our code, but I don't understand how the class
org.hibernate.search.batchindexing.impl.OptionallyWrapInJTATransaction
is supposed to prevent the transaction from timing out.
Do you have any idea on the problem?
I recently had to apply some refactoring, so I might have introduced a
regression, but I need another pair of eyes.
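For completeness, the only mechanism I can think of is raising the timeout on the JTA TransactionManager before begin(). Purely illustrative; whether OptionallyWrapInJTATransaction actually does this is exactly my question:

    import javax.transaction.TransactionManager;

    // Purely illustrative: a wrapper that raises the JTA timeout before starting
    // the transaction it wraps around the indexing work. The timeout value is an
    // arbitrary example.
    public class TimeoutRaisingWrapper {

        private final TransactionManager transactionManager;

        public TimeoutRaisingWrapper(TransactionManager transactionManager) {
            this.transactionManager = transactionManager;
        }

        public void runWrapped(Runnable indexingWork) throws Exception {
            transactionManager.setTransactionTimeout( 24 * 60 * 60 ); // e.g. one day, in seconds
            transactionManager.begin();
            try {
                indexingWork.run();
                transactionManager.commit();
            }
            catch (RuntimeException e) {
                transactionManager.rollback();
                throw e;
            }
        }
    }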
Tia,
Sanne
SessionEventsListener feature (HHH-8654)
by Steve Ebersole
I wanted to highlight a new feature in 4.3 as it came about from
performance testing efforts. It's a way to hopefully help track down
potential performance problems in applications that use Hibernate. In
this way it is similar to statistics, but it operates per-Session
(though certainly custom impls could roll the metrics up to a SF level).
It revolves around the SessionEventsListener[1] interface which
essentially defines a number of start/end pairs for the interesting
events (for example starting to prepare a JDBC statement and ending that
preparation).
Multiple SessionEventsListener instances can be associated with the
Session simultaneously. You can add them programmatically to a Session
using the Session#addEventsListeners(SessionEventsListener...) method. They
can also be added to the Session up-front via the
SessionFactory#withOptions API for building Sessions.
Additionally there are 2 settings that allow SessionEventsListener impls
to be applied to all Sessions created:
* 'hibernate.session.events.auto' allows you to name any arbitrary
SessionEventsListener class to apply to all Sessions.
* 'hibernate.session.events.log' refers to a particular built-in
implementation of SessionEventsListener that applies some timings across
the start/end pairs
(org.hibernate.engine.internal.LoggingSessionEventsListener). In fact
this listener is added by default if (a) stats are enabled and (b) the
log level (currently INFO) of LoggingSessionEventsListener is enabled.
Below[2] is some sample output of LoggingSessionEventsListener.
There is also an org.hibernate.EmptySessionEventsListener (no-op) class
to help develop custom ones.
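A quick usage sketch (the start/end method names below are illustrative; see the gist in [1] for the actual pairs):

    // Usage sketch only: the start/end method names are illustrative, the real
    // pairs are defined by SessionEventsListener (see [1]).
    public class SlowPrepareListener extends org.hibernate.EmptySessionEventsListener {

        private long prepareStart;

        public void jdbcPrepareStatementStart() {
            prepareStart = System.nanoTime();
        }

        public void jdbcPrepareStatementEnd() {
            long elapsed = System.nanoTime() - prepareStart;
            if ( elapsed > 1000000L ) { // more than 1 ms
                System.out.println( "slow statement preparation: " + elapsed + " ns" );
            }
        }
    }

    // registering it on an open Session:
    //   session.addEventsListeners( new SlowPrepareListener() );
    // or for all Sessions via the 'hibernate.session.events.auto' setting.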
Anyway, as much as anything I wanted to point it out so people can try
it out and give feedback. I think the API covers most of the
interesting events. If you feel there are any missing, let's discuss
here or on a Jira issue.
[1] https://gist.github.com/sebersole/7438250
[2]
14:40:20,017 INFO LoggingSessionEventsListener:275 - Session Metrics {
9762 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
1020726 nanoseconds spent preparing 4 JDBC statements;
1442351 nanoseconds spent executing 4 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
0 nanoseconds spent executing 0 L2C puts;
0 nanoseconds spent executing 0 L2C hits;
0 nanoseconds spent executing 0 L2C misses;
2766689 nanoseconds spent executing 1 flushes (flushing a total of
3 entities and 1 collections);
1096552384585007 nanoseconds spent executing 2 partial-flushes
(flushing a total of 3 entities and 3 collections)
}
[OGM] Managing GridDialect via the service registry
by Gunnar Morling
Hi,
We currently have a custom mechanism for providing the current GridDialect
to components, and I'm wondering about the motivation for this mechanism.
More specifically, there is GridDialectFactory, which instantiates the
dialect type, and DatastoreServices, which provides access to this instance.
So we have two services and initiators dealing solely with providing access
to the current grid dialect.
Couldn't there simply be a GridDialectInitiator which instantiates the
GridDialect type and registers the dialect itself as a service in the
registry (see the sketch after the list below)? This seems beneficial to me:
* GridDialect implementations can leverage all the benefits of being a service,
e.g. they can implement Configurable to access the configuration or
ServiceRegistryAwareService to access other services (including the
DatastoreProvider). Note that GridDialect extends Service already today,
which seems confusing as it is not a service managed by the registry.
* We could remove DatastoreServices, GridDialectFactory and their initiators
* One indirection less for users of the GridDialect as they can get it
directly from the registry
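Roughly what I have in mind, as a sketch (OGM imports are omitted and the instantiation logic is a placeholder):

    import java.util.Map;

    import org.hibernate.boot.registry.StandardServiceInitiator;
    import org.hibernate.service.spi.ServiceRegistryImplementor;

    // Sketch only (GridDialect / DatastoreProvider imports omitted, lookup logic is a placeholder).
    // The point is that the dialect instance itself becomes the registered service.
    public class GridDialectInitiator implements StandardServiceInitiator<GridDialect> {

        public static final GridDialectInitiator INSTANCE = new GridDialectInitiator();

        public Class<GridDialect> getServiceInitiated() {
            return GridDialect.class;
        }

        public GridDialect initiateService(Map configurationValues, ServiceRegistryImplementor registry) {
            DatastoreProvider datastoreProvider = registry.getService( DatastoreProvider.class );
            // resolve the dialect class (from the configuration or from the provider) and instantiate it;
            // as a registry-managed service it could then implement Configurable or
            // ServiceRegistryAwareService itself if it needs more context
            return instantiateDialect( configurationValues, datastoreProvider );
        }

        private GridDialect instantiateDialect(Map configurationValues, DatastoreProvider provider) {
            throw new UnsupportedOperationException( "sketch only" );
        }
    }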
Any thoughts? Is there a specific reason for the current design which I'm
missing?
Thanks,
--Gunnar
[Search] Search 5 and architecture review
by Hardy Ferentschik
Hi,
IMO we should rethink how we describe the architecture of Search and especially its extension points.
I think for Search 5 we will need to review/rewrite the documentation (maybe as part of moving to asciidoc) and clear up some
of the ambiguities in the terminology we use.
For example, "backend". Technically it is the implementation of BackendQueueProcessor, but I think we sometimes used the
term in the wrong context as well, for example when we talk about the "Infinispan backend", which is really a DirectoryProvider.
Speaking of the latter, my understanding is that we wanted to move away from DirectoryProvider (at least from a configuration/
documentation point of view) and consistently refer to IndexManager.
These are just two examples of various major and minor inconsistencies in how we describe Search. I think we really should
spend some time and work out the terminology we want to use and weed out the documentation. Maybe this is something
everyone can just keep in mind for a while to think about.
Should we create an issue for this where we can collect ideas on what needs to be changed?
—Hardy