JSR 354 - Money and Currency
by Steve Ebersole
So it sounds like JSR 354 may not be included in Java 9. Do we still want
to support this for ORM 5? I am not sure if "moneta" requires Java 9...
Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Currently, Oracle supports database versions from 10.1 to 11.2 [1]. The
LONG and LONG RAW data types have been deprecated since versions 8 and 8i
(released before September 2000) [2]; Oracle keeps these column types
only for backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested, without setting
"hibernate.hbm2ddl.auto".
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed in
a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
the data by reading the stream.
The bug in reading LONG columns affects JDBC drivers since version 10.2.0.4.
I think that we have to choose between:
- changing Oracle10gDialect, making a note about it in the 4.0 migration
guide, and updating the "5.2.2. Basic value types" chapter of the
Hibernate documentation.
- introducing an Oracle11gDialect. It may seem odd, though, to access an
Oracle 10g database with an Oracle 11g dialect.
- disabling the Hibernate tests that fail because of this issue with
@SkipForDialect (and maybe developing CLOB/BLOB variants of them with
@RequiresDialect). Hibernate behaves correctly according to "Default
Mappings Between SQL Types and Java Types" (referenced earlier by Gail),
and this is really an issue in Oracle's JDBC implementation. This option
came to my mind, but it's weird :P.
I would vote for the first option.
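Concretely, the first option could boil down to something like the
following (a sketch only; the registrations are the ones Gail suggested
below, and in practice they would go wherever the current 'long' /
'long raw' registrations live in the dialect hierarchy):

import java.sql.Types;
import org.hibernate.dialect.Oracle9iDialect;

public class Oracle10gDialect extends Oracle9iDialect {
    public Oracle10gDialect() {
        super();
        // map the streaming types to LOBs instead of LONG / LONG RAW
        registerColumnType( Types.LONGVARCHAR, "clob" );
        registerColumnType( Types.LONGVARBINARY, "blob" );
    }
}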
Regards,
Lukasz Antoniak
[1] http://www.oracle.com/us/support/library/lifetime-support-technology-0691... (page 4)
[2] http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3] http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes" http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' applications.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As far as the advantage of CLOB over TEXT, if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? That way, new
>> applications/databases could perhaps use it, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the HHH-6726
>>>> JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made to
>>>> map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY) to
>>>> 'long raw'. This change was already documented in the migration notes.
>>>> Should the mapping for Oracle (only) be changed back to clob and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column is
>>>> accessed neither first nor last while processing a SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType to
>>>> BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might be
>>>> the most important point from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data); LONG
>>>> allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Testing Hibernate 5: injecting a Spring managed interceptor
by Guillaume Smet
Hi,
As I have cycles this week and next week, I thought I might as well do some
QA on Hibernate 5.
I'm still in the process of porting our code to 5 atm, and there is a
pattern we used before that I can't find an elegant way to port to
Hibernate 5: it is used to inject a Spring-managed interceptor.
We override the persistence provider to inject the interceptor in the
Hibernate configuration:
https://gist.github.com/gsmet/e8d3003344938b1d327b
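For context, this is roughly the kind of thing the pattern boils down to
(an illustrative sketch only, not the actual gist code; the class name is
made up and the interceptor is the Spring bean handed to our custom
provider):

import org.hibernate.Interceptor;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class InterceptorInjectingBootstrap {
    public SessionFactory build(Interceptor springManagedInterceptor) {
        Configuration configuration = new Configuration().configure();
        // inject the Spring-managed interceptor before building the factory
        configuration.setInterceptor( springManagedInterceptor );
        return configuration.buildSessionFactory();
    }
}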
I studied the new code for quite some time and I couldn't find a way to
inject my interceptor in 5.
Note that it's a pretty common usage in the Spring-managed world.
Thanks for any guidance.
--
Guillaume
Missing transaction hangs the testsuite
by Sanne Grinovero
I finally got to re-enable the MariaDB- and PostgreSQL-based tests for
Hibernate Search - which had been running on H2 only for some months -
and had to debug a case of a single test hanging for a long time.
Essentially it would block - for hours - on the SessionFactory#close()
method, attempting to drop the database schema with the following
statement:
> alter table AncientBook_alternativeTitles drop constraint
FKn8hhkmhof1mdgc4oi77ccq989
Dumping threads I would get a very similar stack trace on both
databases; initially I thought someone had copy/pasted a socket
handling bug from one JDBC driver to the other ;-)
The PostgreSQL testsuite hanging:
"main" prio=10 tid=0x00007f0f40009000 nid=0x5f7c runnable [0x00007f0f48956000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:145)
at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:114)
at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:274)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1660)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
- locked <0x00000007c11e3860> (a org.postgresql.core.v3.QueryExecutorImpl)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:374)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:302)
at org.hibernate.tool.hbm2ddl.DatabaseExporter.export(DatabaseExporter.java:47)
at org.hibernate.tool.hbm2ddl.SchemaExport.perform(SchemaExport.java:476)
at org.hibernate.tool.hbm2ddl.SchemaExport.execute(SchemaExport.java:430)
at org.hibernate.tool.hbm2ddl.SchemaExport.drop(SchemaExport.java:375)
at org.hibernate.tool.hbm2ddl.SchemaExport.drop(SchemaExport.java:371)
at org.hibernate.internal.SessionFactoryImpl.close(SessionFactoryImpl.java:1069)
at org.hibernate.search.test.util.FullTextSessionBuilder.close(FullTextSessionBuilder.java:149)
at org.hibernate.search.test.util.FullTextSessionBuilder$1.evaluate(FullTextSessionBuilder.java:248)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
The MariaDB testsuite hanging:
"main" prio=10 tid=0x00007f8ca0009000 nid=0x4043 runnable [0x00007f8ca5f5c000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at com.mysql.jdbc.util.ReadAheadInputStream.fill(ReadAheadInputStream.java:114)
at com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(ReadAheadInputStream.java:161)
at com.mysql.jdbc.util.ReadAheadInputStream.read(ReadAheadInputStream.java:189)
- locked <0x00000007c0baa518> (a com.mysql.jdbc.util.ReadAheadInputStream)
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2499)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2952)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2941)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3489)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1959)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2113)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2562)
- locked <0x00000007c0baa850> (a java.lang.Object)
at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1664)
- locked <0x00000007c0baa850> (a java.lang.Object)
at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1583)
at org.hibernate.tool.hbm2ddl.DatabaseExporter.export(DatabaseExporter.java:47)
at org.hibernate.tool.hbm2ddl.SchemaExport.perform(SchemaExport.java:476)
at org.hibernate.tool.hbm2ddl.SchemaExport.execute(SchemaExport.java:430)
at org.hibernate.tool.hbm2ddl.SchemaExport.drop(SchemaExport.java:375)
at org.hibernate.tool.hbm2ddl.SchemaExport.drop(SchemaExport.java:371)
at org.hibernate.internal.SessionFactoryImpl.close(SessionFactoryImpl.java:1069)
at org.hibernate.search.test.util.FullTextSessionBuilder.close(FullTextSessionBuilder.java:149)
at org.hibernate.search.test.util.FullTextSessionBuilder$1.evaluate(FullTextSessionBuilder.java:248)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
On Gail's suggestion I went looking for database-level locks; I hadn't
thought of that, as I assumed it would have timed out more aggressively
rather than leaving me waiting for hours.
It turns out she was right: the reason for the blocking was a simple
"count(*)" query being run as a post-test assertion, which we had
forgotten to wrap in an "open transaction & commit" pair of statements.
The test assertion would succeed, but apparently it would hold on to the
table lock beyond the closing of the Session, and the schema drop at the
teardown of the test would then block.
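For clarity, the fix is simply to wrap the assertion in an explicit
transaction, roughly like this (a sketch using the test's SessionFactory;
the expected count is made up):

import org.hibernate.Session;
import org.hibernate.Transaction;
import static org.junit.Assert.assertEquals;

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Long count = (Long) session.createQuery( "select count(*) from AncientBook" ).uniqueResult();
assertEquals( 1L, count.longValue() );
tx.commit(); // releases the lock, so the schema drop at teardown can proceed
session.close();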
I'm wondering if this is expected, or if there's something in the new
transaction handling code which could be improved? It took me several
hours to figure this out; maybe I'm just not using ORM as frequently
as I once did :)
If it's this critical to have the transaction, maybe it should be mandatory?
And as a memo for next time, this is the query to figure out details
about locks on our testing db:
> SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid
= psa.pid where datname = 'testingdb';
Thanks,
Sanne
[Hibernate Search] DocValues and Sorting API -> new mapping annotations ?
by Sanne Grinovero
You might remember that sorting a full-text Query on a field has always
required some specific care: since the beginning of Hibernate Search, the
user had to make sure the field was not tokenized, or was tokenized but
generated a single token.
This was a "soft requirement": if you didn't know about it, you'd get
inconsistent results but no error would be shown - after all, a Lucene
index was typically schema-less.
With Lucene 5, if you don't map your field specifically for sorting,
you'll get a runtime exception at query time. By "specifically for
sorting" I mean that the *single token* needs to be stored as a DocValue.
DocValues are useful for other purposes too; for example they are a
more efficient strategy to store our "id" field and I hope we'll soon
use that transparently. It's also a better replacement for all those
use cases which previously would rely on a FieldCache (which by the
way was used by the sorting code under the hood). For example, we
recently migrated the Spatial indexing to use DocValues, and we now
support serializing and writing of DocValues.
What we don't have is a way for end users to explicitly single out
which fields they want to be "DocValue encoded". This really needs to be
added now, as the workaround we use to run Sort operations without it is
killing our performance.
What should such annotations look like?
I don't like exposing such low-level details when, for most people, it's
just about sorting. Still, since DocValues are useful for other reasons,
an annotation named "@Sortable" (or similar) would be limiting.
DocValues themselves - as a concept - are fine, but even within Lucene's
history the exact name has changed several times; I'm wondering whether
we should stick to the (current) technical term or abstract a bit from it.
I'm not sure if this should be extending the @Field annotation as
there are special restrictions implied in terms of analysis: are we
going to enforce a specific type of tokenizer, or simply take the
analysis option away?
Any nice suggestions for what this could look like? This would become a
highly used public API.
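For instance, something along these lines (the sorting annotation name is
purely illustrative and shown as a comment, since it doesn't exist yet):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.search.annotations.Analyze;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Indexed;

@Entity @Indexed
public class Book {
    @Id @GeneratedValue
    private Long id;

    @Field(analyze = Analyze.NO)
    // @SortableField <- hypothetical: would request DocValues encoding so the field can be sorted on
    private String title;
}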
The good news is that we'll be able to validate sort definitions.
Thanks,
Sanne
Hosting of binaries
by Emmanuel Bernard
As some of you know, SourceForge has had a severe distributed file system corruption, and they have been working on it for a full week. You can read their blog for regular updates: http://sourceforge.net/blog/
The concrete issue for us is that we cannot upload new files: Hibernate Validator and Hibernate ORM are now pending a release.
There are 4 options on the table
Be patient::
SourceForge will eventually reopen uploads; I imagine it might take from one to two weeks.
Their binary hosting support is reasonably solid and all of our download statistics are there.
Move to download.jboss.org::
JBoss has a facility to host binaries. WildFly, amongst others, uses it. We can ask them if they are happy with it.
It is not connected to the rest of the forum/CMS infra; it's a simple file upload AFAIK, so easily scriptable.
They also offer statistics, but how to access them needs to be investigated.
Move to GitHub::
GitHub has a binary upload facility. I could only find a web-based approach (can it be done programmatically?).
They don't seem to have any statistics service, which is a big negative point.
Also, I don't trust GitHub anymore for binary hosting. They had a version of this in the past that they scrapped with barely any notice. I'm not exactly willing to give them my trust again.
Move to Bintray::
Binary hosting is their lifeblood. People seem happy with them. It seems, however, that statistics require a paid package rather than the free OSS tier.
I think we should try in the following order:
1. Be patient with SourceForge (but for how long?)
2. Go for download.jboss.org, asking around beforehand about the process and stability of the infrastructure
3. Explore Bintray
4. GitHub (did I say that I no longer trust their binary hosting support?)
ORM Documentation
by Steve Ebersole
I have been putting a lot of TLC into the ORM documentation this weekend,
getting ready for 5.0 to go Final. To that end, I have put together a
proposal for these changes; it is attached. I'd like to get some
feedback. Thanks
4.2.20.Final and SourceForge problems; delaying 4.3.11.Final until next week
by Gail Badner
I am finished with the 4.2.20.Final release, except for uploading the distributions, due to problems at SourceForge. Artifacts have been successfully uploaded to Nexus.
I will wait until Monday to send out an announcement in the hopes that I can upload the distributions to SourceForge by then.
There are a couple more bugfixes I'd like to get into 4.3.11.Final, so I am delaying that release until Wednesday, July 29.
Regards,
Gail
new proposal for tx timeout handling using transaction DISASSOCIATING event notification...
by Scott Marlow
With a proposed TM-level listener, we will have an SPI for notification
of when application threads associated with a JTA transaction become
disassociated from the transaction (at tm.commit/rollback/suspend time).
With this knowledge, a synchronization callback can determine whether the
persistence context should be cleared directly from the
Synchronization.afterCompletion(int) call or whether clearing should be
deferred until the thread is disassociated from the JTA transaction.
This idea is based on a TM level listener approach that Tom Jenkinson
[1] suggested. Mike Musgrove has a "proof of concept" implementation of
the suggested changes [2]. I did some testing with [3] to see if the
improvement helps with clearing entities that might still be in the
persistence context after a background tx timeout.
I'm wondering whether, in the Hibernate ORM
Synchronization.afterCompletion(int status) implementation, in the case
of tx rollback, we could defer the clearing of the Hibernate session so
that it is handled by the JtaPlatform. This could be set up at
EntityManager.joinTransaction() time (if a new property like
"hibernate.transaction.defer_clear_session" is true). Perhaps via a
JtaPlatform.joinTransaction(EntityManager) registration call?
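To make the idea concrete, usage could look something like this (a sketch
only; the property name is just the proposal above and does not exist in
ORM today, and "myPU" is a placeholder persistence unit name):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

Map<String, Object> props = new HashMap<String, Object>();
props.put( "hibernate.transaction.defer_clear_session", "true" ); // proposed setting
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "myPU", props );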
Thoughts?
Scott
[1] https://developer.jboss.org/thread/252572?start=45&tstart=0
[2] https://github.com/mmusgrov/jboss-transaction-spi/blob/threadDisassociati...
[3] https://github.com/scottmarlow/wildfly/tree/transactiontimeout_clientut_n...