Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance, ImageType is extracted by calling the ResultSet.getBytes()
method, which fetches all data in one call [4]. I don't expect a major
performance difference when the data is streamed in a separate call;
oracle.jdbc.driver.LongRawAccessor.getBytes also fetches data by reading the
stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect (sketched below). Make a note about it in the
migration guide to 4.0 and update the "5.2.2. Basic value types" chapter in the
Hibernate documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
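For illustration only, the first option could also be prototyped as a custom
dialect (roughly what Scott asks about below); the class name here is mine, and
the registrations simply mirror Gail's suggestion quoted further down:

import java.sql.Types;

import org.hibernate.dialect.Oracle10gDialect;

// Sketch only: maps TextType (LONGVARCHAR) to CLOB and ImageType
// (LONGVARBINARY) to BLOB instead of LONG / LONG RAW.
public class OracleLobPreferringDialect extends Oracle10gDialect {
    public OracleLobPreferringDialect() {
        super();
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}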
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only DB version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? That way new
>> applications/databases could perhaps use that, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version 10.2.0.4
>>>> and later). The bug appears when a LONG or LONG RAW column is accessed
>>>> neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
>>> _______________________________________________
>>> hibernate-dev mailing list
>>> hibernate-dev(a)lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>> _______________________________________________
>> hibernate-dev mailing list
>> hibernate-dev(a)lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>
Changelog file in Hibernate ORM
by Sanne Grinovero
The file changelog.txt in the root of the Hibernate ORM project seems outdated.
Is it not maintained anymore? I found it handy.
Sanne
JdbcSession proposal
by Steve Ebersole
I found a few spare minutes to work on this a little and move it into
the next stage with some actual interfaces, impls and usages to help
illustrate some of the proposed concepts.
https://github.com/sebersole/JdbcSession
The README.md is very up-to-date and detailed. Would be good to get
input from others.
P.S. I probably dislike the *Inflow naming more than you do :)
[Search] Index embedded and id property of embedded entity
by Hardy Ferentschik
Hi,
I started to look at HSEARCH-1494 [1], which deals with an unexpected exception being thrown when @IndexedEmbedded is used.
Easiest to explain with an example:
@Entity
@Indexed
public class A {
    @Id
    @GeneratedValue
    private long id;

    @OneToOne
    @IndexedEmbedded
    private B b;
}

@Entity
public class B {
    @Id
    @GeneratedValue
    private Timestamp id;

    @Field
    private String foo;

    public Timestamp getId() {
        return id;
    }

    public String getFoo() {
        return foo;
    }
}
A includes B via @IndexedEmbedded and is only interested in including the ‘foo’ field. However, atm we implicitly index B.id as well.
In this particular case an exception is thrown, because we don’t know which field bridge to use for B.id.
This also relates to HSEARCH-1092 [2], where the include path feature of @IndexedEmbedded is used. Even though the configured
paths don't include the ids, they are added, which increases the index size unnecessarily (not really sure whether it matters much in practice).
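For reference, the includePaths variant from HSEARCH-1092 looks roughly like
this on the A side (a sketch; only "foo" is listed, yet the embedded id still
ends up in the index today):

@Entity
@Indexed
public class A {
    @Id
    @GeneratedValue
    private long id;

    // only the "foo" path of B is requested, but B.id is currently added anyway
    @OneToOne
    @IndexedEmbedded(includePaths = { "foo" })
    private B b;
}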
If we skip the implicit inclusion of id properties, the user will need to add an explicit @Field in case he wants to include an id property via indexed
embedded. If the embedded entity itself is not indexed, I think this makes sense. But what if the embedded entity is indexed itself? Does it seem
wrong in this case to specify an additional @Field? Do we need some additional configuration element?
Thoughts?
—Hardy
[1] https://hibernate.atlassian.net/browse/HSEARCH-1494
[2] https://hibernate.atlassian.net/browse/HSEARCH-1092
Making tests nicer with lambdas
by Gunnar Morling
Hey,
I've played around a bit with the idea of using Java 8 lambdas to make
tests easier to write and read. We have many tests which open a session and
TX, do some stuff, commit, open a new TX (and/or session), do some
assertions and so on:
Session session = openSession();
Transaction transaction = session.beginTransaction();
// heavy testing action...
transaction.commit();
session.clear();
transaction = session.beginTransaction();
// load, assert...
transaction.commit();
session.clear();
The same could look like this using Java 8 lambdas:
Foo foo = inTransactionWithResult( (session, tx) -> {
    // heavy testing action...
} );

inTransaction( (session, tx) -> {
    // load, assert...
} );
Extracting the session/TX handling removes quite some clutter and focuses
more on the actual testing logic. It also avoids problems due to dangling
transactions e.g. in case of assertion failures as the TX handling is done
in a finally block in inTransaction().
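For the record, a minimal sketch of what such a helper could look like; the
names TransactionalTestHelper, TransactionalWork and sessionFactory() are mine,
and the exact signatures in the POC may well differ:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public abstract class TransactionalTestHelper {

    protected abstract SessionFactory sessionFactory();

    @FunctionalInterface
    public interface TransactionalWork {
        void execute(Session session, Transaction tx);
    }

    // Opens a session and transaction, runs the work and always cleans up, so
    // assertion failures cannot leave a dangling transaction behind.
    protected void inTransaction(TransactionalWork work) {
        Session session = sessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        try {
            work.execute( session, tx );
            tx.commit();
        }
        catch (RuntimeException e) {
            if ( tx.isActive() ) {
                tx.rollback();
            }
            throw e;
        }
        finally {
            session.close();
        }
    }
}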
At this point I've just done a quick POC and would be interested in
feedback on whether you think that's worth pursuing or not. Note that
different language levels can be used for test and main code, so we could
make use of lambdas in tests while ensuring Java 6 compatibility for the
delivered artifacts.
--Gunnar
Loggers and categories
by Steve Ebersole
Wanted to revisit something that's been bothering me ever since we moved to
JBoss Logging: the definition of loggers.
While I think we have always understood that there are really two different
audiences for log messages, I have come to a clearer understanding of what
that means and its implications.
We have log messages intended for usage scenarios and log messages intended
for debugging scenarios. To me, the first group is represented by the
JBoss Logging MessageLogger messages. The second group consists of the things
we simply "pass through" to the underlying log backend (trace, debug messages).
In both cases we currently use the name of the Class initiating the log
message as the "category name". While I think that is absolutely the
correct thing to do in the case of that second group, what about the first
group? For that first group I wonder if it's better to have "logical
category names" more like 'org.hibernate.orm.deprecations' and
'org.hibernate.orm.mapping'... the idea being that it's easier to switch
these grouped categories on/off.
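To make the distinction concrete, here is a minimal sketch of what a logger in
that first group could look like, assuming the jboss-logging annotations
module; the interface name, message id and message text are purely
illustrative, and the category string just follows the idea above:

import org.jboss.logging.Logger;
import org.jboss.logging.annotations.LogMessage;
import org.jboss.logging.annotations.Message;
import org.jboss.logging.annotations.MessageLogger;

// User-facing messages grouped under a logical category, so the whole group
// can be switched on/off regardless of which class emits the message.
@MessageLogger(projectCode = "HHH")
public interface DeprecationLogger {

    DeprecationLogger INSTANCE = Logger.getMessageLogger(
            DeprecationLogger.class,
            "org.hibernate.orm.deprecations"
    );

    @LogMessage(level = Logger.Level.WARN)
    @Message(id = 90000001, value = "Encountered use of deprecated feature: %s")
    void deprecatedFeature(String description);
}

The second group would keep using Logger.getLogger( TheInitiatingClass.class )
as today.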
Jandex, scanning and discovery
by Steve Ebersole
As Brett finishes up the "unified schema" work I wanted to start discussion
about some simplifications that could/should come from that.
Jandex, scanning and discovery
One of the outcomes of moving to Jandex to process annotations is the
requirement that we have a good Jandex Index available to us. There are 2
parts to this:
1. We need a "baseline Index". This can be a Jandex Index passed to us
(by WildFly, e.g.). But in cases where we are not passed one, we need to
build one.
2. We then augment the Index with "virtual annotations" built from XML
mappings.
As I said, in the cases we are not handed a Jandex Index we need to build
one. Currently this is a tedious process handled by manually indexing all
known classes (known from a few different sources). But this need to know
the classes in order to index them is problematic.
Which got me to thinking that it sure would be nice to tie this into the
scanning (org.hibernate.jpa.boot.scan.spi.Scanner) stuff we already do, such
that as we walk the "archive" we automatically add classes we encounter to
the Index. I'd actually add things to the Index whether or not they are
supposed to be managed classes (exclude-unlisted-classes).
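As a rough sketch of the mechanics (the hook into
org.hibernate.jpa.boot.scan.spi.Scanner is hypothetical here; only the Jandex
Indexer API itself is real):

import java.io.IOException;
import java.io.InputStream;

import org.jboss.jandex.Index;
import org.jboss.jandex.Indexer;

// Collects every .class resource encountered while walking the archive and
// builds the baseline Index once scanning is complete.
public class ScanningIndexBuilder {

    private final Indexer indexer = new Indexer();

    // Called for each class file the scanner hands us, whether or not the
    // class is a listed/managed class.
    public void onClassFile(InputStream classBytes) throws IOException {
        indexer.index( classBytes );
    }

    public Index buildBaselineIndex() {
        return indexer.complete();
    }
}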
There are lots of benefits to this. And not just to ORM, provided we start
sharing the Jandex with Search (etc) as we have discussed. This better
ensures that all needed classes are available in the Index. This allows
better discovery of "ancillary" classes via annotation. Etc...
Technically this one is not tied to the unified schema work, we could do
this change today. The difference is that the move to the unified schema
means that everything is processed using Jandex, so having that good Index
becomes even more important.
There is one difficulty I see here. Assuming we are to only use
listed-classes, that means classes explicitly listed in the PU *and*
classes listed in XML. Checking that a class is listed in the PU is
simple, but checking whether a class comes from XML before we use it is not
as straightforward. It's not impossible, just another thing to consider.
Applying XML defaults
orm.xsd allows specifying attribute-specific defaults. One such default,
e.g., is access. What happens in the current annotation "mocking"[1] is
that the default access is only applied to attributes that are referenced
in the XML. If the XML is used in an "override" way and an attribute is
defined in the entity with annotations but is not mentioned in the XML, the
default access from the XML is not picked up, as it should be (unless the
annotations define a more local explicit access).
I think the solution is to expose all the 'persistence-unit-defaults'
values on the org.hibernate.metamodel.source.spi.MappingDefaults and grab
them from there.
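Hypothetically, that could look something like the following accessors (the
interface and method names are mine, not the actual SPI):

import javax.persistence.AccessType;

// Hypothetical sketch of accessors that MappingDefaults could expose for the
// persistence-unit-defaults values; not the real interface.
public interface PersistenceUnitDefaultsAccess {
    AccessType getDefaultAccessType();   // <access>
    String getDefaultSchemaName();       // <schema>
    String getDefaultCatalogName();      // <catalog>
    boolean isCascadePersistEnabled();   // <cascade-persist/>
}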
-- seems to me there were others I wanted to discuss, but they escape my
mind at the moment ;)
[1] I think we all dislike the term "mocking" here. I'd like to start
discussing alternatives. I mentioned "augmenting" above; I like that one.
Build failure executing :hibernate-core:runAnnotationProcessors
by Mahesh Gadgil
I am facing the same issue Gail Badner mentioned in her email (hibernate-core:runAnnotationProcessors - BUILD FAILED) -
Gail Badner, Fri, 28 Mar 2014 16:35:18 -0700.
Unlike her, I am unable to resolve the issue. I am using Java 1.7.0_51 and tried to compile the 4.3 branch.
Thanks,
Mahesh
GitHub repo for demo projects
by Gunnar Morling
Hi,
Together with Sanne I've been creating a demo app which shows some features
of Hibernate OGM.
Right now this lives under my personal account on GitHub [1], but IMO it'd make
sense to move it somewhere under https://github.com/hibernate/ to make it
more visible, encourage re-use and contributions by others etc.
What do you think about creating a repo under the hibernate organization
such as "hibernate-demos" which could host this and other demos for our
projects in the future? Or would it even make more sense on a per-project
base ("hibernate-ogm-demos" etc.)?
--Gunnar
[1] https://github.com/gunnarmorling/ogm-hiking-demo
HEM "packaged" tests
by Steve Ebersole
With my latest Scanner changes, as outlined in the earlier email, there is
one failure in the HEM "packaged" tests, and I simply cannot explain why the
tested behavior was previously considered valid.
The test
is org.hibernate.jpa.test.packaging.PackagedEntityManagerTest#testCfgXmlPar.
The test builds a PAR (via ShrinkWrap) and then tries to use it. The
persistence.xml simply points to a cfg.xml. The cfg.xml names 4 classes.
But the PAR only bundles 2 of them. The test fails because Binder cannot
find the other 2 classes. Should this be valid? Should the cfg.xml inside
a PAR be allowed to name classes not contained in the PAR? Obviously if
the test is allowed to pick up all classes on the "real test classpath" the
test will pass.
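For context, the shape of the archive is roughly the following (a sketch with
placeholder entity and resource names, not the actual test classes):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;

public class CfgXmlParShape {

    // Placeholder entities standing in for the two classes that are bundled.
    @Entity public static class Bundled1 { @Id Long id; }
    @Entity public static class Bundled2 { @Id Long id; }

    // The referenced cfg.xml is assumed to name Bundled1, Bundled2 plus two
    // further classes that are deliberately NOT added to the archive.
    public static JavaArchive buildPar() {
        return ShrinkWrap.create( JavaArchive.class, "cfgxmlpar.par" )
                .addClasses( Bundled1.class, Bundled2.class )
                .addAsResource( "hibernate.cfg.xml" )
                .addAsManifestResource( "persistence.xml" );
    }
}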