Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently, Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee the proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed in
a separate call. oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects Oracle JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the migration guide
to 4.0 and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may sound weird to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing another version of them with
CLOBs and BLOBs, @RequiresDialect). Hibernate is written correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's
JDBC implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
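For the first option, the change amounts to adjusting the LONGVARCHAR / LONGVARBINARY registrations in Oracle10gDialect along the lines Gail suggested. A sketch, assuming the Hibernate 4.0-era Dialect API; the subclass name is illustrative, and in option one these registrations would simply move into Oracle10gDialect itself:

```java
import java.sql.Types;

import org.hibernate.dialect.Oracle10gDialect;

// Illustrative subclass (name is hypothetical). Users who need the fix
// before a release picks it up could register this as their dialect.
public class ClobMappingOracle10gDialect extends Oracle10gDialect {
    public ClobMappingOracle10gDialect() {
        super();
        // Map TextType / ImageType columns to LOBs instead of LONG / LONG RAW:
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}
```

It would be configured as usual via "hibernate.dialect" with the fully qualified class name.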
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only DB version supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? So new
>> applications/databases could perhaps use that and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). This bug appears when LONG or LONG RAW columns
>>>> are accessed neither first nor last while processing the SQL
>>>> statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for
>>>> doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB, BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>>>> _______________________________________________
>>>> hibernate-dev mailing list
>>>> hibernate-dev(a)lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/hibernate-dev
>>>>
>
Changelog file in Hibernate ORM
by Sanne Grinovero
The file changelog.txt in the root of the Hibernate ORM project seems outdated.
Is it not maintained anymore? I found it handy.
Sanne
DocumentBuilder refactoring in Hibernate Search: how to deal (internally) with metadata
by Sanne Grinovero
We're starting a series of refactorings in Hibernate Search to improve
how we handle the entity mapping to the index; to summarize goals:
1# Expose the Metadata as API
We need to expose it because:
a - OGM needs to be able to read this metadata to produce appropriate queries
b - Advanced users have expressed the need for things like listing
all indexed entities to integrate external tools, code generation,
etc.
c - All users (advanced and not) have an interest in -at least- logging
the field structure to help create queries; today people need a
debugger or Luke.
Personally I think we end up needing this just as an SPI: that might
be good for cases {a,b}, and I have an alternative proposal for {c}
described below.
However we expose it, I think we agree this should be a read-only
structure built as a second phase after the model is consumed from
(annotations / programmatic API / jandex / auto-generated by OGM). It
would also be good to keep it "minimal" in terms of memory cost, so to
either:
- drop references to the source structure
- not holding on it at all, building the Metadata on demand (!)
(Assuming we can build it from a more obscure internal representation
I'll describe next).
Whatever the final implementation will actually do to store this
metadata, for now the priority is to define the contract for the sake
of OGM so I'm not too concerned on the two phase buildup and how
references are handled internally - but let's discuss the options
already.
2# Better fit Lucene 4 / High performance
There are some small performance-oriented optimizations that we could
already do with Lucene 3, but they were unlikely to be worth the effort;
for example, reusing Field instances and pre-interning all field names.
These considerations however are practically mandatory with Lucene 4, as:
- the cost of *not* doing as Lucene wants is higher (runtime field
creation is more expensive now)
- the performance benefit of following the Lucene expectations is
significantly higher (it takes advantage of several new features)
- code is much more complex if we don't do it
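The reuse idea can be illustrated with plain Java stand-ins (ReusableField below is hypothetical, not a real Lucene class): instead of allocating a new field object per document, keep one mutable instance per field name, intern the name once up front, and only swap its value between documents.

```java
// Illustrative sketch only: ReusableField is a hypothetical stand-in for
// a Lucene 4 Field that is allocated once and reused across documents via
// a setter, instead of being re-created for every Document.
final class ReusableField {
    private final String name; // pre-interned at startup, not per document
    private String value;

    ReusableField(String name) {
        this.name = name.intern();
    }

    void setStringValue(String value) { this.value = value; }
    String name() { return name; }
    String value() { return value; }
}

public class FieldReuseSketch {
    public static void main(String[] args) {
        ReusableField title = new ReusableField("title"); // created once
        for (String v : new String[] { "first doc", "second doc" }) {
            title.setStringValue(v); // reuse: no per-document allocation
            // ... here the field would be added to a Document and indexed
        }
        System.out.println(title.name() + " = " + title.value());
    }
}
```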
3# MutableSearchFactory
Let's not forget we also have a MutableSearchFactory to maintain: new
entities could be added at any time so if we drop the original
metadata we need to be able to build a new (read-only) one from the
current state.
4# Finally some cleanups in AbstractDocumentBuilder
This class served us well, but has grown too much over time.
Things we wanted but were too hard to do so far:
- Separate annotation reading from Document building. Separate
validity checks too.
- It checks for JPA @Id using reflection as it might not be available
-> pluggable?
- LuceneOptionsImpl are built at runtime each time we need one ->
reuse them, coupling them to their field
DocumentBuilderIndexedEntity specific:
- A ConversionContext tracks progress on each field by pushing/popping a
navigation stack, to eventually throw an exception with the correct
description. If instead we used a recursive function, there would be
no need to track anything.
- We had issues with "forgetting" to initialize a collection before
trying to index it (HSEARCH-1245, HSEARCH-1240, ..)
- We need a reliable way to track which field names are created, and
from which bridge they are originating (including custom bridges:
HSEARCH-904)
- If we could know in advance which properties of the entities need
to be initialized for a complete Document to be created we could
generate more efficient queries at entity initialization time, or at
MassIndexing select time. I think users really would expect such a
clever integration with ORM (HSEARCH-1235)
== Solution ? ==
Now let's assume that we can build this as a recursive structure which
accepts a generic visitor.
One could "visit" the structure with a static collector to:
- discover which fields are written - and at the same time collect
information about specific options used on them
-> query validation
-> logging the mapping
-> connect to some tooling
- split the needed properties graph into optimised loading SQL or
auto-generated fetch profiles; ideally taking into account 2nd level
cache options from ORM (which means this visitor resides in the
hibernate-search-orm module, not engine! so note the dependency
inversion).
- visit it with a non-static collector to initialize all needed
properties of an input Entity
- visit it to build a Document of an initialized input Entity
- visit it to build something which gets fed into a non-Lucene
output !! (ElasticSearch or Solr client value objects: HSEARCH-1188)
.. define the Analyzer mapping, generate the dynamic boosting
values, etc.. each one could be a separate, clean, concern.
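A minimal sketch of the recursive-structure-plus-visitor idea (all type names below are hypothetical, not actual Hibernate Search classes): the metadata is a tree of nodes accepting a generic visitor, and each concern above becomes its own visitor implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical metadata nodes; the real types would carry much more state.
interface MetadataNode {
    <T> T accept(MetadataVisitor<T> visitor);
}

interface MetadataVisitor<T> {
    T visitField(FieldMetadata field);
    T visitEmbedded(EmbeddedMetadata embedded);
}

final class FieldMetadata implements MetadataNode {
    final String fieldName;
    FieldMetadata(String fieldName) { this.fieldName = fieldName; }
    public <T> T accept(MetadataVisitor<T> visitor) { return visitor.visitField(this); }
}

final class EmbeddedMetadata implements MetadataNode {
    final String prefix;
    final List<? extends MetadataNode> children;
    EmbeddedMetadata(String prefix, List<? extends MetadataNode> children) {
        this.prefix = prefix;
        this.children = children;
    }
    public <T> T accept(MetadataVisitor<T> visitor) { return visitor.visitEmbedded(this); }
}

// One concern = one visitor: here, a "static collector" listing all
// field names that would be written to the index.
final class FieldNameCollector implements MetadataVisitor<List<String>> {
    public List<String> visitField(FieldMetadata field) {
        return Collections.singletonList(field.fieldName);
    }
    public List<String> visitEmbedded(EmbeddedMetadata embedded) {
        List<String> names = new ArrayList<String>();
        for (MetadataNode child : embedded.children) {
            for (String name : child.accept(this)) {
                names.add(embedded.prefix + "." + name);
            }
        }
        return names;
    }
}

public class MetadataVisitorSketch {
    public static void main(String[] args) {
        MetadataNode author = new EmbeddedMetadata("author",
                Arrays.asList(new FieldMetadata("name"), new FieldMetadata("surname")));
        // Recursion lives in the visitor, so no explicit navigation stack
        // is needed to know "where we are" when reporting errors.
        System.out.println(author.accept(new FieldNameCollector()));
    }
}
```

Document building, property-initialization planning, or a non-Lucene output would each be another MetadataVisitor implementation over the same tree.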
This would also make it easier to implement a whole crop of feature
requests we have about improving the @IndexedEmbedded(includePaths)
feature, and the ones I like most:
# easy tool integration for inspection
# better testability of how we create this metadata
# could make a "visualizing" visitor to actually show how a test
entity is transformed and make it easier to understand why it's
matching a query (or not).
Quite related, what does everybody think of this :
https://hibernate.atlassian.net/browse/HSEARCH-438 Support runtime
polymorphism on associations (instead of defining the indexed
properties based on the returned type)
?
Personally I think we should support that, but it's a significant
change. I'm bringing that up again as I suspect it would affect the
design of the changes proposed above.
This might sound like a big change; in fact I agree it's a significant
style change, but it is rewriting what is defined today in just 3
classes; no doubt we'll get more than a dozen out of it, but I think
it would be better to handle in the long run, more flexible and
potentially more efficient too.
Do we all agree on this? In practical terms we'd also need to define
how far Hardy wants to go with this, if he wants to deal only with the
Metadata API/SPI aspect and then I could apply the rest, or if he
wants to try doing it all in one go. I don't think we can start
working in parallel on this ;-)
[sorry I tried to keep it short.. then I run out of time]
Sanne
ServiceRegistries and OSGi
by Steve Ebersole
Now that OSGi support is in place and we know it is being used, I am
curious whether the concept of ServiceRegistry helped or hindered
that process.
One of the major reasons to define such a ServiceRegistry was the idea
that it would help porting Hibernate into other containers and other
runtimes, not just traditional JSE/JEE environments. Specifically, OSGi
was one of the things considered, although in a very generic sense back
then. So part of the reason I ask is that I wonder how successful we
were in that, first; and then, in areas we can get better, how?
Now is a great time to review that as we get ready to start making a
push towards 5.0 after 4.3 (JPA 2.1 support) gets stabilized...
HSEARCH Faceting and facet counts
by Emmanuel Bernard
I know we had a debate but I can't seem to find any detail in the
documentation about how facet selection influences the facet counts.
In my demo, the facet count is applied after the selection, i.e. if I do a
query that returns '< $100' = 20 and '> $100' = 45, once I select '<
$100', the count displayed on '> $100' = 0, which is very weird from a
user's point of view.
That seems to be the expected behavior according to
https://hibernate.atlassian.net/browse/HSEARCH-713 but my first reaction
was that it was a bug.
Should we clarify that in the documentation? And implement the more
natural way?
Emmanuel
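The two counting behaviours under discussion can be sketched in plain Java (hypothetical code, no Hibernate Search types), using price ranges as facets:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class FacetCountSketch {

    // Drill-down (the HSEARCH-713 behaviour): counts are computed after
    // the active facet selection has been applied to the result set.
    static long drillDownCount(List<Integer> prices,
                               Predicate<Integer> selection,
                               Predicate<Integer> facet) {
        return prices.stream().filter(selection).filter(facet).count();
    }

    // "More natural" multi-select behaviour: each facet in a group is
    // counted while ignoring that group's own selection.
    static long multiSelectCount(List<Integer> prices, Predicate<Integer> facet) {
        return prices.stream().filter(facet).count();
    }

    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(50, 80, 99, 120, 200);
        Predicate<Integer> under100 = p -> p < 100;
        Predicate<Integer> over100 = p -> p >= 100;
        // With '< $100' selected, drill-down shows 0 for '> $100',
        // while multi-select counting still shows its unfiltered count.
        System.out.println(drillDownCount(prices, under100, over100)); // prints 0
        System.out.println(multiSelectCount(prices, over100));         // prints 2
    }
}
```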
Re: [hibernate-dev] HSEARCH FacetManager.getFacetingNames()
by Emmanuel Bernard
Actually a List might make sense if the order in which you define faceting is the
order you want to expose it in. But that's a tiny bit far-fetched.
I would love for our API to be easily consumed by UIs but it's a tiny
bit impractical at the moment. I've identified the name list issue,
serializability is a concern (JSON) and Facet.getValue() for range
facets is crap to expose.
Emmanuel
On Wed 2013-05-29 14:11, Hardy Ferentschik wrote:
> One could also return the FacetRequest instances. Something like:
>
> interface FacetingManager {
> Set<FacetingRequest> getAppliedFacetRequests();
> }
>
> Either way it should probably be a set.
>
> --Hardy
>
> On 29 Jan 2013, at 1:10 PM, Emmanuel Bernard <emmanuel(a)hibernate.org> wrote:
>
> > Trying to write a slightly generic code listing the facets and exposing
> > them in a UI. I cannot find a way to list the faceting requests applied.
> >
> > Am I missing something? What do you think of adding
> >
> > interface FacetingManager {
> > List<String> getFacetingNames();
> > ...
> > }
> >
> > I'd love a less stringy API but I am out of idea.
> >
> > Emmanuel
> > _______________________________________________
> > hibernate-dev mailing list
> > hibernate-dev(a)lists.jboss.org
> > https://lists.jboss.org/mailman/listinfo/hibernate-dev
>