Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently, Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed
instead; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
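For illustration, a minimal JDBC sketch of the two access paths discussed above (the table and column names are hypothetical, and a LONG RAW column can only be read once per row, so only one of the two calls may be used):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LongRawReadSketch {
    static void read(Connection connection) throws SQLException {
        Statement stmt = connection.createStatement();
        ResultSet rs = stmt.executeQuery( "select data from long_raw_table" );
        while ( rs.next() ) {
            // one-shot extraction, as ImageType does today
            byte[] all = rs.getBytes( "data" );
            // streamed alternative (mutually exclusive with getBytes above):
            // java.io.InputStream in = rs.getBinaryStream( "data" );
        }
        rs.close();
        stmt.close();
    }
}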
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect (see the sketch after this list). Make a note
about it in the migration guide to 4.0 and update the "5.2.2. Basic value
types" chapter in the Hibernate documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate behaves correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
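A minimal sketch of what the first option could look like, based on Gail's registrations quoted below (the subclass name is hypothetical; the change could of course go directly into Oracle10gDialect):

import java.sql.Types;
import org.hibernate.dialect.Oracle10gDialect;

public class LobPreferringOracle10gDialect extends Oracle10gDialect {
    public LobPreferringOracle10gDialect() {
        super();
        // map the LONGVARCHAR/LONGVARBINARY JDBC types back to LOBs
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}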
I would vote for the first option.
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only DB version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? Then new
>> applications/databases could perhaps use that, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the
>>>> HHH-6726 JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when LONG or LONG RAW columns
>>>> are accessed neither as the first nor the last column while
>>>> processing the SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for
>>>> doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB; BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
Stored procedure support
by Steve Ebersole
I thought I had written about this before to the list, but maybe not.
Anyway, I added much-enhanced support for calling database functions and
procedures to master. But this is just my initial swag, and as far as I
know my eyes are the only ones that have seen it. So I wanted to get
some feedback. Feel free to give feedback on any/all aspects of the
API, but there is one aspect in particular on which I really want
feedback: parameters. The concept of parameters, much like in queries,
is split into two parts: declaration and usage.
The initial impetus for this was the JPA 2.1 feature for supporting
procedure calls. But I went in a slightly different direction in our
"native" support for this, the main difference being that the outputs
are modeled as a separate thing from the call itself. I really like our
API better there.
The declaration of the call is modeled as
org.hibernate.StoredProcedureCall (although I am thinking of moving away
from "StoredProcedure" as the name base here, since functions are
supported as well; better name suggestions welcome). The outputs of the
call execution are modeled as org.hibernate.StoredProcedureOutputs.
To create a StoredProcedureCall, one simply calls one of the overloaded
Session.createStoredProcedureCall methods, passing in either (a) the
func/proc name, (b) the func/proc name and any entity class(es) to map
the results back to, or (c) the func/proc name and any result set
mapping name(s) to apply to the results.
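For illustration, the three creation styles might look like this (the proc name, entity class, and result set mapping name are made up):

StoredProcedureCall byName = session.createStoredProcedureCall( "my_proc" );
StoredProcedureCall byEntity = session.createStoredProcedureCall( "my_proc", MyEntity.class );
StoredProcedureCall byMapping = session.createStoredProcedureCall( "my_proc", "my-resultset-mapping" );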
From there, parameters are declared/registered through the overloaded
StoredProcedureCall#registerStoredProcedureParameter methods. Again, in
retrospect I don't like that name; it should be declareParameter or
registerParameter imo. Anyway, parameters can be treated as either
positional or named. Named has a slightly different meaning here,
though: it refers to the names of the arguments in the procedure/function
definition. This is a feature defined by JDBC 3, although as I
understand it not all drivers support it (aka, it can lead to
SQLFeatureNotSupportedException). We can even know this a priori via
DatabaseMetaData#supportsNamedParameters() to give better (and
earlier!) exceptions.
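A sketch of how such an early check could look, using the Session's JDBC Work API (the surrounding wiring is assumed):

session.doWork( new org.hibernate.jdbc.Work() {
    @Override
    public void execute(java.sql.Connection connection) throws java.sql.SQLException {
        // fail fast if the driver cannot handle named parameters
        if ( !connection.getMetaData().supportsNamedParameters() ) {
            throw new UnsupportedOperationException( "Driver does not support named parameters" );
        }
    }
} );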
Anyway, currently registerStoredProcedureParameter returns the
StoredProcedureCall for method chaining. We'll come back to that in a
second...
After parameters are registered, the values for IN and INOUT style
parameters must be set/bound. Currently this is untyped, because
registration does not return any "memento" with the typing information
(the Java type is passed to the register method). After execution, the
values from all INOUT and OUT parameters can be extracted, but again
those extractions are untyped for the same reason. Which leads me to
question whether handling parameter values (whether in or out) in a
typed manner is important. As an example, currently to extract an OUT
parameter value you'd have:
StoredProcedureCall call = session.createStoredProcedureCall( "my_proc" );
call.registerStoredProcedureParameter( "p1", Long.class, ParameterMode.OUT );
// maybe some other stuff...
StoredProcedureOutputs outputs = call.getOutputs();
Long p1 = (Long) outputs.getOutputParameterValue( "p1" );
The alternative would be something like defining a typed
RegisteredParameter contract:
interface RegisteredParameter<T> {
    public Class<T> getParameterType();
    public ParameterMode getMode();
}
and then:
StoredProcedureCall call = session.createStoredProcedureCall( "my_proc" );
RegisteredParameter<Long> p1Param = call.registerParameter(
        "p1",
        Long.class,
        ParameterMode.OUT
);
// maybe some other stuff...
StoredProcedureOutputs outputs = call.getOutputs();
Long p1 = outputs.getOutputParameterValue( p1Param );
Or maybe even:
interface RegisteredParameter<T> {
    public Class<T> getParameterType();
    public ParameterMode getMode();
    public void bind(T value);
    public T extract();
}

StoredProcedureCall call = session.createStoredProcedureCall( "my_proc" );
RegisteredParameter<Long> p1Param = call.registerParameter(
        "p1",
        Long.class,
        ParameterMode.OUT
);
// maybe some other stuff...
StoredProcedureOutputs outputs = call.getOutputs();
Long p1 = p1Param.extract();
The problem with this last one is managing when that 'extract' can be
called...
Anyway, thoughts?
--
steve(a)hibernate.org
http://hibernate.org
Hibernate 4.1.6 / Spring 3.1.3 upgrade issue
by Vikas Bali
Hi, I recently upgraded Hibernate to version 4.1.6, and with that I also had to upgrade Spring to version 3.1.3.RELEASE, which is compatible with Hibernate 4. However, I am getting the following issue with this change:
In my hibernate.cfg.xml, I load a mapping file, e.g. "<mapping resource="com/test/x.hbm.xml" />", and x.hbm.xml contains:
<hibernate-mapping package="com.test.packA">
    <class entity-name="DummyEntityName" table="DEN">
        <component name="compA" class="com.test.packA.ClassA">
        </component>
    </class>
</hibernate-mapping>
When I call LocalSessionFactoryBean.afterPropertiesSet(), I get the exception below. Please note that "ClassA" is part of another jar, which may or may not be included at runtime. It was working fine with the earlier version, but I get the following exception after this upgrade... Any idea?
org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.component.PojoComponentTuplizer]
at org.hibernate.tuple.component.ComponentTuplizerFactory.constructTuplizer(ComponentTuplizerFactory.java:101)
at org.hibernate.tuple.component.ComponentTuplizerFactory.constructDefaultTuplizer(ComponentTuplizerFactory.java:122)
at org.hibernate.tuple.component.ComponentMetamodel.<init>(ComponentMetamodel.java:80)
at org.hibernate.mapping.Component.getType(Component.java:172)
at org.hibernate.mapping.SimpleValue.isValid(SimpleValue.java:294)
at org.hibernate.mapping.Property.isValid(Property.java:238)
at org.hibernate.mapping.PersistentClass.validate(PersistentClass.java:469)
at org.hibernate.mapping.RootClass.validate(RootClass.java:270)
at org.hibernate.cfg.Configuration.validate(Configuration.java:1294)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1738)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1784)
at org.springframework.orm.hibernate4.LocalSessionFactoryBuilder.buildSessionFactory(LocalSessionFactoryBuilder.java:251)
at org.springframework.orm.hibernate4.LocalSessionFactoryBean.buildSessionFactory(LocalSessionFactoryBean.java:372)
at org.springframework.orm.hibernate4.LocalSessionFactoryBean.afterPropertiesSet(LocalSessionFactoryBean.java:357)
at com.test.tk.service.persistence.hbm.SessionManager.getSessionFactoryBean(SessionManager.java:252)
...
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedConstructorAccessor30.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.hibernate.tuple.component.ComponentTuplizerFactory.constructTuplizer(ComponentTuplizerFactory.java:98)
... 145 more
Caused by: org.hibernate.MappingException: component class not found: com.test.packA.ClassA
at org.hibernate.mapping.Component.getComponentClass(Component.java:134)
at org.hibernate.tuple.component.PojoComponentTuplizer.buildGetter(PojoComponentTuplizer.java:155)
at org.hibernate.tuple.component.AbstractComponentTuplizer.<init>(AbstractComponentTuplizer.java:64)
at org.hibernate.tuple.component.PojoComponentTuplizer.<init>(PojoComponentTuplizer.java:59)
... 149 more
Caused by: java.lang.ClassNotFoundException: com.test.packA.ClassA from [Module "deployment.test.war:main" from Service Module Loader]
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:423)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at org.hibernate.internal.util.ReflectHelper.classForName(ReflectHelper.java:192)
at org.hibernate.mapping.Component.getComponentClass(Component.java:131)
... 152 more
The problem of cascading database reads
by Marc Schipperheyn
Hi,
I've been having this discussion on the forum with Sanne (
https://forums.hibernate.org/viewtopic.php?f=9&t=1023973) about the
problems I have with cascading database reads. I thought it might be
interesting to continue this conversation here with some additional
thoughts:
There are a number of situations where an update to an object can trigger a
cascade of database reads. The cascade is required because Lucene doesn't
allow document updates and instead requires a complete recreation of the
document from scratch. These cascading reads can bring a server to its
knees or lead to application unresponsiveness.
I find that the speed of Hibernate Search's reads is often offset by the
cascade of database reads that may occur when an indexed entity is updated.
However, that very read speed is a major reason for using it, so it would
be great if the write speed problems could be alleviated.
E.g., some simplified examples:
class Network {
    @OneToMany
    @IndexedEmbedded(includePaths = { "id" })
    List<User> users;
}
When a new User is added to the Network, all the existing Users have to be
read from the database to recreate the Lucene document.
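For concreteness, a hypothetical snippet of an update that triggers this (session handling elided; accessor names assumed):

Network network = (Network) session.get( Network.class, networkId );
network.getUsers().add( newUser );
session.flush(); // re-indexing Network initializes and reads the whole users collection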
Another headache example is when a stored property that is used for
selection purposes changes:
class LinkedInGroup {
    @Field(index = Index.YES)
    boolean hidden;

    @OneToMany
    @ContainedIn
    List<LinkedInGroupPost> posts;

    @OneToMany
    @IndexedEmbedded(includePaths = { "id" })
    Set<Tag> tags;
}

class LinkedInGroupPost {
    @ManyToOne
    @IndexedEmbedded(includePaths = { "id", "hidden" })
    Group group;
}
Assuming there can be hundreds of thousands of Posts, a change of hidden to
true would trigger a read of all those records.
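Again hypothetically (the setter name is assumed):

LinkedInGroup group = (LinkedInGroup) session.get( LinkedInGroup.class, groupId );
group.setHidden( true );
session.flush(); // via @ContainedIn, every LinkedInGroupPost is re-read to rebuild its document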
While we might say that you should apply the architecture that best fits
the purpose of both the application and the technology, I really think that
Hibernate Search should be able to handle these kinds of use cases more
easily, without leading to excessive database reads.
Some directions for thought:
* In the Network example, includePaths only contains the id. Looking
at my own work, I often find that @IndexedEmbedded references just store
the id, and I believe we should think about optimizing this use case. In
that case, an optimized read could be executed against the database that
just reads that value instead of initializing the entire entity (see the
sketch at the end of this message).
This kind of "projection read" could be an optional setting even when
includePaths contains non-identifier values, assuming the developer knows
which limitations this might entail (e.g. no FieldBridges, no Hibernate
cache). It's a kind of "document oriented" MassIndexer approach to Document
"updates".
* Lucene Document update support is at an alpha stage right now
(LUCENE-3837). This effort could be supported by the Hibernate team or
implemented at the earliest viable moment.
* A kind of JoinFilter is conceivable, where the join filter would be able
to exclude results based on selection results from another index.
E.g. one queries the LinkedInGroupPost index, but the JoinFilter reads
group.id references from the Group index (just reading the ones needed
and storing them during the read) and excludes LinkedInGroupPosts based
on the value of "hidden". I wonder if this approach could be patterned
or documented.
* The documentation could contain some suggestions for dealing with the
issue of cascading initialization in a smart way.
* In the tests I have done, when saving a LinkedInGroup whose
indexed-embedded attributes (id, hidden) are *not* dirty, all the posts are
reinitialized anyway. The reason for this is that with a Set<Tag>, the set
elements are deleted and reinserted on each save, even when they haven't
changed. It looks like Hibernate Search is not optimized to deal with this
"semi-dirty" situation (Hibernate ORM treats a field as dirty when it
really isn't). Nothing relevant to the document really changed, but
because Hibernate needs to reinsert the set, it thinks something did. I
wonder if this use case can or should be optimized. If not, the
documentation should warn against using Sets.
* When a document is recreated because one attribute changed, leading to
all sorts of cascading database reads, I often wonder: why? The reason is
that the index segments cannot be recreated for the indexed attributes, so
we need to read them again. But what if those attributes are actually
stored in the original document and not dirty? Why not just read those
values straight from the document with a single read instead of executing
a slew of database reads?
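To make the first bullet concrete, here is a sketch of the kind of "projection read" I mean; this is plain HQL, not an existing Hibernate Search feature, and the names are illustrative:

List<?> userIds = session.createQuery(
        "select u.id from Network n join n.users u where n.id = :id" )
        .setParameter( "id", networkId )
        .list();
// the ids could be written into the Network document directly,
// without initializing each User entity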
Regressions after upgrading from ORM 4.1.6 to 4.1.8
by Guillaume Smet
Hi,
After upgrading from 4.1.6 to 4.1.8, we have a couple of regressions
in one of our applications.
We tried to obtain self-contained test cases and understand what the
problem is, but it's quite hard to reproduce and we haven't found a way
to isolate the problem yet.
Anyway, I was wondering if the stacktraces could ring a bell for
someone and help us analyze the problem. It might even be an obvious
bug for you once you see the stacktrace.
Here are both stacktraces for the regressions we have:
Stacktrace 1 (only when we configure the batch loading - might be due
to a race condition because it's not systematic):
Caused by: java.lang.NullPointerException
    at org.hibernate.type.descriptor.java.AbstractTypeDescriptor.extractHashCode(AbstractTypeDescriptor.java:88)
    at org.hibernate.type.AbstractStandardBasicType.getHashCode(AbstractStandardBasicType.java:210)
    at org.hibernate.type.AbstractStandardBasicType.getHashCode(AbstractStandardBasicType.java:214)
    at org.hibernate.cache.spi.CacheKey.calculateHashCode(CacheKey.java:71)
    at org.hibernate.cache.spi.CacheKey.<init>(CacheKey.java:67)
    at org.hibernate.internal.AbstractSessionImpl.generateCacheKey(AbstractSessionImpl.java:252)
    at org.hibernate.engine.spi.BatchFetchQueue.isCached(BatchFetchQueue.java:330)
    at org.hibernate.engine.spi.BatchFetchQueue.getCollectionBatch(BatchFetchQueue.java:312)
    at org.hibernate.loader.collection.BatchingCollectionInitializer.initialize(BatchingCollectionInitializer.java:72)
    at org.hibernate.persister.collection.AbstractCollectionPersister.initialize(AbstractCollectionPersister.java:678)
    at org.hibernate.event.internal.DefaultInitializeCollectionEventListener.onInitializeCollection(DefaultInitializeCollectionEventListener.java:80)
    at org.hibernate.internal.SessionImpl.initializeCollection(SessionImpl.java:1804)
    at org.hibernate.collection.internal.AbstractPersistentCollection$4.doWork(AbstractPersistentCollection.java:549)
    at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:234)
    at org.hibernate.collection.internal.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:545)
    at org.hibernate.collection.internal.PersistentBag.removeAll(PersistentBag.java:345)
Stacktrace 2 (we have this one even if we disable the batch loading):
Caused by: java.lang.NullPointerException
    at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:259)
    at org.hibernate.engine.internal.Cascade.cascade(Cascade.java:165)
    at org.hibernate.event.internal.AbstractFlushingEventListener.cascadeOnFlush(AbstractFlushingEventListener.java:160)
    at org.hibernate.event.internal.AbstractFlushingEventListener.prepareEntityFlushes(AbstractFlushingEventListener.java:151)
    at org.hibernate.event.internal.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:88)
    at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:51)
    at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1213)
    at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:986)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:240)
    at $Proxy115.flush(Unknown Source)
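For reference, the batch loading mentioned above is Hibernate's global batch fetch size; a sketch of the kind of setting we toggle (the exact value here is illustrative):

// in our Configuration bootstrap
configuration.setProperty( "hibernate.default_batch_fetch_size", "16" );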
Thanks for your feedback.
--
Guillaume
Shards
by Steve Ebersole
On https://github.com/hibernate/hibernate-orm/pull/407 Adriano has
started work on porting Shards (based on the last release source) to the
latest 4.1 code base.
Shards really became a dump and run, which is why we did not even bother
moving it over from SVN to GitHub when we migrated the other projects.
The folks who developed it moved on, and none of the development team
really knows its code.
Bottom line, what is really needed is for someone to own that. I asked
Adriano on that pull request if that was something he was willing to
take on. He said yes.
So the question becomes: how do we proceed? Do we let him (and anyone
else who joins in to help finish the migration) finish the work in his
personal repo and then move it over to the Hibernate community when
that work is complete? Other suggestions?
--
steve(a)hibernate.org
http://hibernate.org