JSR 354 - Money and Currency
by Steve Ebersole
So it sounds like JSR 354 may not be included in Java 9. Do we still want
to support this for ORM 5? I am not sure if "moneta" requires Java 9...
Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently Oracle supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify Oracle dialect as Gail suggested. Avoid setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise, the test actually passed :). However, I think that we
cannot guarantee correct behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when data is streamed in
a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects JDBC drivers from version 10.2.0.4 onward.
I think that we have to choose between:
- changing Oracle10gDialect. Make a note about it in the 4.0 migration
guide and update the "5.2.2. Basic value types" chapter in the Hibernate
documentation.
- introducing Oracle11gDialect. It may seem odd to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing CLOB/BLOB variants of them
guarded by @RequiresDialect). Hibernate behaves correctly
according to "Default Mappings Between SQL Types and Java Types"
(referenced earlier by Gail), and this is more of an issue with Oracle's JDBC
implementation. This option came to my mind, but it's weird :P.
I would vote for the first option.
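For illustration only, the first option would roughly amount to a dialect tweak
along these lines (just a sketch: the class name is made up, and the exact
registrations should follow Gail's suggestion quoted below):

import java.sql.Types;

import org.hibernate.dialect.Oracle10gDialect;

// Sketch only: map Hibernate's LONGVARCHAR / LONGVARBINARY targets to
// CLOB / BLOB instead of LONG / LONG RAW.
public class LobOracle10gDialect extends Oracle10gDialect {
    public LobOracle10gDialect() {
        super();
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}

Whether these registrations end up in Oracle10gDialect itself or in a new
dialect is exactly the choice between the first two options above.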
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version still supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' applications.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? That way, new
>> applications/databases could use it, while existing applications
>> might keep using LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss HHH-6726
>>>> JIRA
>>>> issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version 10.2.0.4
>>>> and later). The bug appears when LONG or LONG RAW columns are accessed
>>>> neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important one from Hibernate's perspective.
>>>> - LONG / LONG RAW - up to 2 GB, BLOB / CLOB - up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
>
Re: [hibernate-dev] [hibernate-orm] HHH-7572 - Develop API for load-by-multiple-ids (#1136)
by Konstantin Bulanov
Hello Steve, as you asked, I am moving our discussion about HHH-7572 to the dev
mailing list.
Regarding your question: in the current architecture and implementation we have
the following extension point for customizing entity persistence.
Annotation:
https://docs.jboss.org/hibernate/orm/5.0/javadocs/org/hibernate/annotatio...
which allows us to specify our own implementation of
https://docs.jboss.org/hibernate/orm/5.0/javadocs/org/hibernate/persister....
One of its methods is:
Object load(Serializable id,
Object optionalObject,
LockMode lockMode,
SessionImplementor session)
throws HibernateException
Load an instance of the persistent class.
and
Object load(Serializable id,
Object optionalObject,
LockOptions lockOptions,
SessionImplementor session)
throws HibernateException
Load an instance of the persistent class.
These two methods allow us to specify our own Loader implementation to load
an entity by id.
In the mentioned issue this part of the contract was ignored by changing the call
sequence when loading by multiple ids.
By single id:
org.hibernate.internal.SessionImpl#get
  -> IdentifierLoadAccessImpl
  -> org.hibernate.internal.SessionImpl.IdentifierLoadAccessImpl#load
  -> org.hibernate.event.spi.LoadEventListener#onLoad
  -> org.hibernate.event.internal.DefaultLoadEventListener#loadFromDatasource
  -> org.hibernate.persister.entity.EntityPersister#load
By multiple ids:
org.hibernate.internal.SessionImpl#byMultipleIds
  -> org.hibernate.internal.SessionImpl.MultiIdentifierLoadAccessImpl#multiLoad
  -> org.hibernate.loader.entity.DynamicBatchingEntityLoaderBuilder#multiLoad
So in the new API for multi-load we lose at least two possible extension
points: the onLoad event and Persister.load (where we could customize the
loader, i.e. specify our own instead of the hardcoded one).
From my point of view there should be the same approach for getting entities by
id, regardless of whether it is a single id or multiple ids.
So which approach is correct and future-proof: the single-id one or the
multi-id one?
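To make the lost extension point concrete, here is a rough sketch of the kind of
customization the single-id path supports today (class names are invented, and
the constructor assumes the 5.0 persister SPI):

import java.io.Serializable;

import org.hibernate.HibernateException;
import org.hibernate.LockOptions;
import org.hibernate.cache.spi.access.EntityRegionAccessStrategy;
import org.hibernate.cache.spi.access.NaturalIdRegionAccessStrategy;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.mapping.PersistentClass;
import org.hibernate.persister.entity.SingleTableEntityPersister;
import org.hibernate.persister.spi.PersisterCreationContext;

// Sketch only: a persister that plugs in custom loading; it would be registered
// on an entity via @org.hibernate.annotations.Persister(impl = CustomLoadPersister.class).
public class CustomLoadPersister extends SingleTableEntityPersister {

    public CustomLoadPersister(
            PersistentClass persistentClass,
            EntityRegionAccessStrategy cacheAccessStrategy,
            NaturalIdRegionAccessStrategy naturalIdRegionAccessStrategy,
            PersisterCreationContext creationContext) throws HibernateException {
        super( persistentClass, cacheAccessStrategy, naturalIdRegionAccessStrategy, creationContext );
    }

    @Override
    public Object load(Serializable id, Object optionalObject, LockOptions lockOptions,
            SessionImplementor session) throws HibernateException {
        // a custom Loader (or any other lookup strategy) could be plugged in here;
        // the single-id path goes through this method, the multi-id path does not
        return super.load( id, optionalObject, lockOptions, session );
    }
}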
On 20 Nov 2015 at 18:19, "Steve Ebersole" <
notifications(a)github.com> wrote:
> Customize how? Loader still calls into the persister. Persisters and
> Loaders have a back-and-forth synergy.
>
> Also please discuss this on the hibernate-dev mailing list so others can be
> involved.
>
> On Fri, Nov 20, 2015 at 7:15 AM Konstantin Bulanov <
> notifications(a)github.com>
> wrote:
>
> > Hello Steve, could you be so kind as to advise why we have different
> > behavior for loading by a single id and by multiple ids?
> >
> > In Case of single id, loading is going through
> > session->IdentifierLoadAccess->event->persister->Loader
> > In Case of multiple ids, loading is going through
> > session->MultiIdentifierLoadAccess->Loader
> >
> > So in the case of loading by a single id it is possible to customize loading of
> > an entity using the persister, but in the newly introduced API we lost this
> > possibility.
> >
> > —
> > Reply to this email directly or view it on GitHub
> > <
> https://github.com/hibernate/hibernate-orm/pull/1136#issuecomment-158400273
> >
> > .
> >
>
> —
> Reply to this email directly or view it on GitHub
> <https://github.com/hibernate/hibernate-orm/pull/1136#issuecomment-158413356>
> .
>
Search DSL expectations for "keyword()" clause
by Sanne Grinovero
Assuming you build a Lucene Query the following way:
queryBuilder.keyword().onField( "age" ).matching( 5 ).createQuery();
What is your expectation if the "age" field is being indexed as a NumericField?
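For context, the kind of mapping I have in mind is roughly the following (entity
and field names are just illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;
import org.hibernate.search.annotations.NumericField;

@Entity
@Indexed
public class Person {

    @Id
    private Long id;

    // indexed numerically, so keyword().onField( "age" ).matching( 5 )
    // has to decide how to build the query against a NumericField
    @Field
    @NumericField
    private int age;
}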
Thanks,
Sanne
GitHub options to disable force pushing
by Sanne Grinovero
Hi all,
GitHub now provides an option to:
- prevent pushing with the "force" option to a specific branch
- prevent people from deleting a specific branch
Considering our workflow and also to prevent user mistakes, I think we
should enable them on the reference repositories (the ones in
github.com/hibernate ).
I did enable this for Hibernate Search. If someone has a good reason to
delete a branch or push with "force", it's just two clicks to disable
it again... at least I feel confident against unintentional mistakes.
Thanks,
Sanne
Re: [hibernate-dev] Why does Hibernate have aggressive connection releasing for JTA
by Steve Ebersole
I'd like to test this later using the ConnectionAcquisitionMode. In theory
this should lead to zero overhead for real applications.
P.S. I had to remove your image as the mailing list does not accept
attachments.
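In the meantime, for anyone who wants to experiment, the release mode can
already be switched at bootstrap. A minimal sketch, assuming the standard
hibernate.connection.release_mode setting and a plain native bootstrap (the
class name is made up):

import org.hibernate.SessionFactory;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;
import org.hibernate.cfg.AvailableSettings;

public class ReleaseModeBootstrap {
    public static SessionFactory buildSessionFactory() {
        // override the aggressive after-statement default used with JTA
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .applySetting( AvailableSettings.RELEASE_CONNECTIONS, "after_transaction" )
                .build();
        return new MetadataSources( registry ).buildMetadata().buildSessionFactory();
    }
}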
On Thu, Nov 19, 2015 at 1:11 AM Vlad Mihalcea <mihalcea.vlad(a)gmail.com>
wrote:
> I wrote a test to replicate the aggressive release overhead (
> https://github.com/vladmihalcea/high-performance-java-persistence/blob/ma...
> ) and these are my findings:
>
>
>
> The more statements a transaction has, the more obvious the performance
> impact.
> This was tested with Spring and Bitronix and so it measures Bitronix
> overhead.
>
> We'll have to update the docs to advise the clients to consider the
> AFTER_TRANSACTION mode for some stand-alone JTA environments.
> I wonder if today's Java EE application servers still require the
> aggressive release as a workaround for their connection leak detection
> algorithms.
>
> Vlad
>
> On Wed, Nov 18, 2015 at 5:49 PM, Steve Ebersole <steve(a)hibernate.org>
> wrote:
>
>> Yes, I think that's a good idea. I also think working
>> on ConnectionAcquisitionMode is the best option. The fact that Hibernate
>> delays getting the Connection is generally just not that useful.
>>
>>
>> On Wed, Nov 18, 2015 at 9:42 AM Vlad Mihalcea <mihalcea.vlad(a)gmail.com>
>> wrote:
>>
>>> Thanks for the explanation. I found a discussion from 2006 where you
>>> explained this behavior:
>>>
>>> http://lists.jboss.org/pipermail/hibernate-dev/2006-December/000903.html
>>>
>>> I am currently testing the AFTER_TRANSACTION release mode with Spring
>>> and Bitronix and I think it can give some performance gain over
>>> AFTER_STATEMENT.
>>> I'll keep you posted with the final results.
>>>
>>> Do you think we should update the docs to explain that this is mainly
>>> required by Java EE containers, and that it might be fine with stand-alone JTA
>>> transaction managers?
>>>
>>> Vlad
>>>
>>>
>>>
>>> On Wed, Nov 18, 2015 at 4:05 PM, Steve Ebersole <steve(a)hibernate.org>
>>> wrote:
>>>
>>>> It was to work around certain containers (not just EE containers) that
>>>> implement "resource containment" checks. The Hibernate Session defers
>>>> getting a JDBC Connection until it actually needs one, which can lead to
>>>> cases like the following where 2 beans share a Session/EM:
>>>>
>>>> Bean1: get Session, but don't use it yet in way that needs Connection
>>>> Bean1: call Bean2...
>>>> Bean2: get Session, do some work forcing Session to obtain Connection
>>>> Bean2: return (Session still holds Connection)
>>>>
>>>> At this point, these containers see this as a "leaked" Connection
>>>> because the handle was not released by the end of the scope in which it was
>>>> obtained. Hence, aggressive releasing. My contention at the time was that
>>>> a ConnectionAcquisitionMode would have been better/cleaner. I still feel
>>>> that way, and hope to still come back and add that; so much so in fact that
>>>> the enum already exists[1] :).
>>>>
>>>> [1] org.hibernate.ConnectionAcquisitionMode
>>>>
>>>> On Wed, Nov 18, 2015 at 1:45 AM Vlad Mihalcea <mihalcea.vlad(a)gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Does anyone remember why does Hibernate support aggressive connection
>>>>> releasing?
>>>>> I've never found this requirement in either JTA or JDBC spec.
>>>>> Was it something required by the Java EE application server?
>>>>>
>>>>> Vlad
>>>>>
>>>>
>>>
>
OGM-933 - PostLoad annotation support test
by David Williams
Hi,
I'm currently working on adding support for the PostLoad annotation to OGM.
I've got it working against one of my own projects but am looking for some
guidance around test coverage before I submit a pull request. I couldn't
find any existing tests in OGM for other annotations that already work
(e.g. PrePersist and PreUpdate), so I wanted to check whether I should include
a test for PostLoad and, if so, whether there is an appropriate package for me to
put it in.
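For reference, the behaviour I'm exercising boils down to something like this
(the entity name is made up):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.Transient;

@Entity
public class Hypothesis {

    @Id
    private String id;

    @Transient
    private boolean loaded;

    // expected to be invoked by OGM right after the entity is read from the datastore
    @PostLoad
    public void onPostLoad() {
        loaded = true;
    }
}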
Thanks,
David.
Fwd: Re: Apache Trafodion Dialect
by Brett Meyer
Hey Sanne, just realized I never responded to this. That was definitely
my first thought as well. But, not knowing very much about OGM, are
there any use cases for plain JPA + Trafodion, vs Hadoop + OGM? A means
to "try out" Hadoop without jumping into OGM right away?
Even if Hadoop on OGM happens, I guess I don't see a problem also
including this dialect, if the Trafodion team maintains it. But, that's
of course only if others would use it...
On Oct 26, 2015 4:10 PM, Sanne Grinovero <sanne(a)hibernate.org> wrote:
>
> +1 for the usage question! Would be nice to understand the use case of it too.
>
> And in this specific case, I'd also wonder whether the same use case
> wouldn't be fulfilled better by a Hibernate OGM dialect.
>
> On 26 October 2015 at 18:13, Brett Meyer <brett(a)hibernate.org> wrote:
> > All, we've been approached by the team responsible for the Apache
> > Trafodion project, an "SQL-on-Hadoop" solution. They've developed a
> > Dialect, are willing to contribute it, and are willing to maintain it
> > long term. The latter has been a requirement for a while -- we have too
> > many Dialects that were contributed then abandoned.
> >
> > However, the other requirement is actual demand by community users. So,
> > out of curiosity, would anyone actually use it? I'm not at all familiar
> > with the project or space, but it definitely sounds interesting. If
> > this Dialect would be helpful, please add your vote to the JIRA:
> >
> > https://hibernate.atlassian.net/browse/HHH-10216
> >
JDK9 testing
by Sanne Grinovero
We had several jobs failing on ci.hibernate.org as I had previously
uploaded a "Jigsaw enabled" build of JDK 9. Turns out that was a bit
too daring a leap.
I've now replaced it with the latest "regular" JDK 9 build, b93, so I
expect most issues to be gone. I still wish to pursue Jigsaw
compatibility at some point, but let's at least aim at keeping our
projects stable on the regular build as a first step.
There are some goodies in this last build; quoting Rory O'Donnell:
>>>
JEP 254: Compact Strings (http://openjdk.java.net/jeps/254)
This JEP adopts a more space-efficient internal representation for strings.
We propose to change the internal representation of the String class
from a UTF-16 char array to a byte array plus an encoding-flag field.
The new String class will store characters encoded either as
ISO-8859-1/Latin-1 (one byte per character), or as UTF-16 (two bytes
per character), based upon the contents of the string. The encoding
flag will indicate which encoding is used.
JEP 165: Compiler Control (http://openjdk.java.net/jeps/165)
This JEP proposes an improved way to control the JVM compilers. It
enables runtime manageable, method dependent compiler flags.
(Immutable for the duration of a compilation.)
Method-context dependent control of the compilation process is a
powerful tool for writing small contained JVM compiler tests that can
be run without restarting the entire JVM. It is also very useful for
creating workarounds for bugs in the JVM compilers.
JEP 243: Java-Level JVM Compiler Interface (http://openjdk.java.net/jeps/243)
This JEP instruments the data flows within the JVM which are used by
the JIT compiler to allow Java code to observe, query, and affect the
JVM's compilation process and its associated metadata.
JEP 268: XML Catalog API (http://openjdk.java.net/jeps/268)
This JEP develops a standard XML Catalog API that supports the OASIS
XML Catalogs standard, v1.1. The API will define catalog and
catalog-resolver abstractions which can be used with the JAXP
processors that accept resolvers.
<<<
Thanks,
Sanne