Re: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
by Łukasz Antoniak
Oracle currently supports database versions from 10.1 to 11.2 [1]. The LONG
and LONG RAW data types have been deprecated since versions 8 and 8i (released
before September 2000) [2]. Oracle keeps those column types only for
backward compatibility [3].
I tried the following scenario (Oracle 10gR2):
1. Create schema with "hibernate.hbm2ddl.auto" set to "create". The LONG
column is created.
2. Insert some data.
3. Modify the Oracle dialect as Gail suggested, leaving
"hibernate.hbm2ddl.auto" unset.
4. Insert some data.
To my surprise the test actually passed :). However, I think that we
cannot guarantee proper behavior in every situation.
As for performance, ImageType is extracted by calling the
ResultSet.getBytes() method, which fetches all data in one call [4]. I
don't expect a major performance difference when the data is streamed in
a separate call; oracle.jdbc.driver.LongRawAccessor.getBytes also fetches
data by reading the stream.
The bug in reading LONG columns affects JDBC drivers from version 10.2.0.4 onward.
I think that we have to choose between:
- changing Oracle10gDialect (see the sketch below). Make a note about it
in the migration guide to 4.0 and update the "5.2.2. Basic value types"
chapter in the Hibernate documentation.
- introducing Oracle11gDialect. It may seem odd to access an Oracle 10g
database with an Oracle 11g dialect.
- disabling execution of the Hibernate tests that fail because of this issue
with @SkipForDialect (and maybe developing CLOB/BLOB variants of them,
marked @RequiresDialect). Hibernate is written correctly according to
"Default Mappings Between SQL Types and Java Types" (referenced earlier
by Gail), and this is more of an issue with Oracle's JDBC implementation.
This option came to my mind, but it's weird :P.
I would vote for the first option.
Regards,
Lukasz Antoniak
[1]
http://www.oracle.com/us/support/library/lifetime-support-technology-0691...
(page 4)
[2]
http://download.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90120/ch...
[3]
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/datatype.htm
[4] "Getting a LONG RAW Data Column with getBytes"
http://download.oracle.com/docs/cd/B19306_01/java.102/b14355/jstreams.htm
Strong Liu wrote:
> I think Oracle 11g is the only database version currently supported by Oracle. Can we just introduce a new Oracle dialect with the suggested changes and deprecate all the other existing Oracle dialects? This won't affect users' apps.
>
> -----------
> Strong Liu <stliu(a)hibernate.org>
> http://hibernate.org
> http://github.com/stliu
>
> On Oct 15, 2011, at 11:14 AM, Scott Marlow wrote:
>
>> How does this impact existing applications? Would they have to convert
>> LONGs to CLOBs (and LONGRAWs to BLOBs) to keep the application working?
>>
>> As for the advantage of CLOB over TEXT: if you read every character,
>> which one is really faster? I would expect TEXT to be a little faster,
>> since the server side will send the characters before they are asked
>> for. By faster, I mean from the application performance point of view. :)
>>
>> Could this be changed in a custom Oracle dialect? Then new
>> applications/databases could perhaps use that, and existing applications
>> might use LONGs a bit longer via the existing Oracle dialect.
>>
>> On 10/14/2011 09:22 PM, Gail Badner wrote:
>>> In [1], I am seeing the following type mappings:
>>>
>>> Column type: LONG -> java.sql.Types.LONGVARCHAR -> java.lang.String
>>> Column type: LONGRAW -> java.sql.Types.LONGVARBINARY -> byte[]
>>>
>>> org.hibernate.type.TextType is consistent with the mapping for LONG.
>>>
>>> org.hibernate.type.ImageType is consistent with the mapping for LONGRAW.
>>>
>>> From this standpoint, the current settings are appropriate.
>>>
>>> I understand there are restrictions when LONG and LONGRAW are used and I see from your other message that there is Oracle documentation for migrating to CLOB and BLOB.
>>>
>>> I agree that changing column type registration as follows (for Oracle only) should fix this:
>>> registerColumnType( Types.VARBINARY, 2000, "raw($l)" );
>>> registerColumnType( Types.VARBINARY, "blob" );
>>>
>>> registerColumnType( Types.LONGVARCHAR, "clob" );
>>> registerColumnType( Types.LONGVARBINARY, "blob" );
>>>
>>> registerColumnType( Types.VARCHAR, 4000, "varchar2($l char)" );
>>> registerColumnType( Types.VARCHAR, "clob" );
>>>
>>> Steve, what do you think? Is it too late to make this change for 4.0.0?
>>>
>>> [1] Table 11-1 of Oracle® Database JDBC Developer's Guide and Reference, 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/datacc.htm#g...)
>>> [2] Hibernate Core Migration Guide for 3.5 (http://community.jboss.org/wiki/HibernateCoreMigrationGuide35)
>>> [3] Table 2-10 of Oracle® Database SQL Language Reference
>>> 11g Release 1 (11.1) (http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elemen...)
>>>
>>> ----- Original Message -----
>>>> From: "Łukasz Antoniak"<lukasz.antoniak(a)gmail.com>
>>>> To: hibernate-dev(a)lists.jboss.org
>>>> Sent: Thursday, October 13, 2011 12:50:13 PM
>>>> Subject: [hibernate-dev] HHH-6726 LONG and LONG RAW column types in Oracle
>>>>
>>>> Welcome Community!
>>>>
>>>> I have just subscribed to the list and wanted to discuss the
>>>> HHH-6726 JIRA issue.
>>>>
>>>> Gail Badner wrote
>>>> (http://lists.jboss.org/pipermail/hibernate-dev/2011-October/007208.html):
>>>> HHH-6726 (Oracle : map TextType to clob and ImageType to blob)
>>>> https://hibernate.onjira.com/browse/HHH-6726
>>>> There have been a number of issues opened since the change was made
>>>> to map TextType (LONGVARCHAR) to 'long' and ImageType (LONGVARBINARY)
>>>> to 'long raw'. This change was already documented in the migration
>>>> notes. Should the mapping for Oracle (only) be changed back to clob
>>>> and blob?
>>>>
>>>> HHH-6726 is caused by an issue in the Oracle JDBC driver (version
>>>> 10.2.0.4 and later). The bug appears when a LONG or LONG RAW column
>>>> is accessed neither first nor last while processing the SQL statement.
>>>>
>>>> I have discussed the topic of mapping TextType to CLOB and ImageType
>>>> to BLOB (only in the Oracle dialect) with Strong Liu. Reasons for doing so:
>>>> - Oracle allows only one LONG / LONG RAW column per table. This might
>>>> be the most important reason from Hibernate's perspective.
>>>> - LONG / LONG RAW hold up to 2 GB; BLOB / CLOB hold up to 4 GB.
>>>> - In PL/SQL, using LOBs is more efficient (random access to data);
>>>> LONG allows only sequential access.
>>>> - LONG and LONG RAW are deprecated.
>>>>
>>>> What is your opinion?
>>>>
>>>> Regards,
>>>> Lukasz Antoniak
Re: [hibernate-dev] Where are the batched fetch statements generated?
by Clemens Eisserer
Hi Guenther,
>>> Is it possible to disable prepared statement caching for batched fetching, so I end up with a single query in the < default_batch_fetch_size case, instead of the
>>> fixed-size batch loading Hibernate does by default?
> I think the main reason for no feedback so far is that nobody was able to understand this sentence.
> Usually 'prepared statement caching' is a synonym for 'prepared statement pooling' and is something which has to be provided by a connection pool (or a JDBC driver); thus
> Hibernate does not actually implement any prepared statement cache/pooling.
> Can you please explain what you mean by 'prepared statement caching'?
> Can you also please try to better explain the second part of your sentence?
Sorry for being so cryptic, I will try to rephrase it:
When Hibernate does batch fetching, it generates PreparedStatements
for certain fixed batch sizes - for a batch_size of 50, the prepared
statements will cover the following sizes:
[1,2,3,4,5,6,7,8,9,10,12,25,50]. When e.g. a batch of size 13 should
be fetched, then because of the fixed sizes of the prepared statements,
3 queries are issued for the batch fetch, even though 13 <= 50. In this
case the 3 batches would be of sizes 13 = 8 + 4 + 1.
In a latency-bound (between DB and application) environment, this
severely hampers response time - instead of a single round-trip to do
the batched fetch, Hibernate requires 3.
(subselect can't be used in my case, because my queries are already
rather complex, and the added complexity confuses the DB's query
planner too much)
What I did in this case (only for integer PKs) is to pad up to the
next batch size with a non-existent PK.
So, for the example mentioned above, I can use the PreparedStatement
of size 25 and insert padding for positions 14-25, which makes the query
slightly less efficient but avoids 2 additional round-trips.
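A minimal sketch of that padding idea (a hypothetical helper, not Hibernate
API; it assumes integer PKs and a sentinel id such as -1L that is guaranteed
not to exist in the table):

import java.util.List;

class BatchPadding {
    // Pad the ids up to the next available statement size, filling the
    // tail with a PK value known not to exist.
    static Long[] padToNextBatchSize(List<Long> ids, int[] statementSizes, long sentinelId) {
        int target = ids.size();
        for (int size : statementSizes) {   // statementSizes ascending, e.g. {1,...,10,12,25,50}
            if (size >= ids.size()) {
                target = size;
                break;
            }
        }
        Long[] padded = new Long[target];
        for (int i = 0; i < target; i++) {
            padded[i] = i < ids.size() ? ids.get(i) : sentinelId;
        }
        return padded;
    }
}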
- Clemens
SessionFactory building, 5.0-style
by Steve Ebersole
The initial design I had for building a SessionFactory using the new
metamodel was 3 steps:
1) build ServiceRegistry
2) build Metadata
3) build SessionFactory using both ServiceRegistry and Metadata
This changed a little as actually implemented in the metamodel branch:
1) build "boot strap" service registry
2) build basic service registry
3) build MetadataSources
4) build Metadata
5) build SessionFactory using both basic service registry and Metadata
I would like to change this slightly based on JPA 2.1 work and on
integrating that with containers (mainly through planning with Scott for
JBoss AS).
<background>
Essentially Scott and I made a change proposal to JPA EG for how managed
EMF bootstrapping happens to better account for stuff in the container's
environment not being available until certain times. It is a typical
"hole in the interaction of specs" deal. Long story short, we want to
make boot strapping of an EMF into 2 distinct phases. Whether or not
that gets accepted/approved, we will implement this approach for
Hibernate EMF bootstrapping and JBoss AS will leverage it.
The 2 phases are meant to account for container resources not being
available or the need to delay classloading.
</background>
The change I propose is just a reordering:
-- phase 1 --
1) build "boot strap" service registry
2) build MetadataSources
-- phase 2 --
3) build basic service registry
4) build Metadata
5) build SessionFactory using both basic service registry and Metadata
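In code, the proposed ordering would look roughly like this (a sketch; the
builder names are illustrative and may differ from what is actually on the
metamodel branch):

// -- phase 1: no container resources or eager classloading needed yet --
BootstrapServiceRegistry bootstrapRegistry =
        new BootstrapServiceRegistryBuilder().build();
MetadataSources sources = new MetadataSources( bootstrapRegistry );
sources.addResource( "org/example/MyEntity.hbm.xml" );  // placeholder mapping

// -- phase 2: container environment is now fully available --
StandardServiceRegistry serviceRegistry =
        new StandardServiceRegistryBuilder( bootstrapRegistry ).build();
Metadata metadata = sources.buildMetadata( serviceRegistry );
SessionFactory sessionFactory = metadata.buildSessionFactory();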
--
steve(a)hibernate.org
http://hibernate.org
12 years, 3 months
Re: [hibernate-dev] (no subject)
by Steve Ebersole
Also, I have been thinking that there really ought to be different
types of "integrators". Not sure what we gain by forcing
MetadataContributingIntegrator, ServiceContributingIntegrator,
TypeContributingIntegrator, etc. to extend Integrator.
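For example, something along these lines (hypothetical contracts, nothing
implemented yet; the method names are illustrative):

// Each concern gets its own small contract instead of extending Integrator:
public interface ServiceContributingIntegrator {
    void contributeServices(ServiceRegistryBuilder builder);
}

public interface MetadataContributingIntegrator {
    void contributeMetadata(MetadataImplementor metadata, MetadataSources sources);
}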
On Sat 18 Aug 2012 05:50:29 PM CDT, Steve Ebersole wrote:
> The general idea is good. But not really getting the point/purpose of
> having both before and after hooks.
>
> On 08/17/2012 01:12 AM, Strong Liu wrote:
>> I'm thinking of adding the interface below; with it, modules like Envers
>> can choose to hook in either before or after the metamodel gets processed
>> to contribute their own extending mappings.
>>
>>
>> public interface MetadataContributingIntegrator extends Integrator {
>>     /**
>>      * Allow the integrator to alter the {@link MetadataImplementor}
>>      * BEFORE the {@link MetadataSources} get processed.
>>      * <p/>
>>      * At this stage, the metamodel ( like {@link
>>      * org.hibernate.metamodel.spi.binding.EntityBinding} etc. ) is not
>>      * available yet. This is a good time to add custom sources into the
>>      * {@link MetadataSources} and get them processed by the Hibernate
>>      * metamodel.
>>      *
>>      * @param metadata The metamodel which is going to be completed by
>>      * processing the MetadataSources.
>>      * @param source Metadata sources to be processed.
>>      */
>>     public void beforeMetadataProcessing(MetadataImplementor metadata,
>>             MetadataSources source);
>>
>>     /**
>>      * Allow the integrator to alter the {@link MetadataImplementor}
>>      * AFTER the {@link MetadataSources} get processed.
>>      * <p/>
>>      * At this stage, the metamodel ( like {@link
>>      * org.hibernate.metamodel.spi.binding.EntityBinding} etc. ) is bound.
>>      * So, it is up to the integrator to manually create metamodel objects
>>      * and add them to the {@link MetadataImplementor}, or modify the
>>      * existing ones.
>>      *
>>      * @param metadata The metamodel, already populated from the processed sources.
>>      * @param source The metadata sources that were processed.
>>      */
>>     public void afterMetadataProcessing(MetadataImplementor metadata,
>>             MetadataSources source);
>> }
>>
>> The downside is: with the old metamodel, Envers calls
>> Configuration.buildMappings() again during the integration phase, so
>> the hbm XML it creates can be processed (again) before the SF is created;
>> but with the new metamodel it is hard to do that, I think (or too time
>> consuming).
>>
>> And to create Envers' own metamodel (hbm), it needs to know the
>> Hibernate type of each entity property, which can easily be obtained from
>> EntityBinding but is hard (and duplicated effort) for Envers to resolve
>> from the sources itself; so it seems we should choose "afterMetadataProcessing".
>>
>> But Envers' hbm creation involves lots of code and is pretty complicated
>> (to me :), so it's better to reuse that code, choose
>> "beforeMetadataProcessing", and let the new metamodel take care of that.
>>
>> thoughts?
>>
>>
>> On Aug 2, 2012, at 12:17 AM, Hardy Ferentschik <hardy(a)hibernate.org>
>> wrote:
>>
>>> On 31 Jan 2012, at 10:55 PM, Sanne Grinovero wrote:
>>>
>>>> Why is the "annotation indexing" discussion part of the metamodel?
>>> Why not? We are using Jandex in the new metamodel, which is an
>>> annotation index/repository.
>>>
>>>> I initially understood that a replacement of commons-annotations was
>>>> being developed, which would be nice for Search too as Search does not
>>>> and should not depend on Hibernate ORM.
>>> as Strong already said, there is no replacement module for
>>> commons-annotations. There is no
>>> need for it.
>>>
>>> And yes, Search should IMO also switch to Jandex; however, it can
>>> initially just create its own index.
>>> Of course it might be nice to be able to use a Jandex index passed
>>> to it via the integrator spi. Different story though.
>>>
>>> --Hardy
>> -------------------------
>> Best Regards,
>>
>> Strong Liu <stliu at hibernate.org>
>> http://about.me/stliu/bio
>>
--
steve(a)hibernate.org
http://hibernate.org
Re: [hibernate-dev] Metamodel dev questions
by Hardy Ferentschik
On 31 Jan 2012, at 1:46 PM, Steve Ebersole wrote:
> IIUC, that's not how the use of jboss-logging for generating exception messages is supposed to work, though to be honest, I do not know the details. Since you "use this in Validator and Search" I figured you would know how to do it.
Turns out it is another classic chicken-and-egg problem. I tried to return a MappingException, which is not yet compiled at the time the annotation processor runs (in Search and Validator we use exceptions which are already available on the classpath).
A solution would be to compile the exceptions prior to running the annotation processor, but that would make the build even more complicated. The alternative is to just return a string:
@Message(value = "Unable to find mapping information for %s. Are you sure all annotated classes and configuration files are added?", id = 443)
String missingEntitySource(String entityName);
and use this string in the exception constructor.
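which would then be used at the call site along these lines (a sketch):

// construct the i18n exception from the generated message string
throw new MappingException( coreLogger.missingEntitySource( entityName ) );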
--Hardy
Metamodel dev questions
by Hardy Ferentschik
Hi,
I just bumped into a couple of issues when working on some metamodel tests.
First thing I came across was a NullPointerException :-( Turns out I had forgotten to add an annotated class to the test.
Digging a little deeper I noticed that the problem is in the Binder. It does something like:
final EntitySource entitySource = entitySourcesByName.get( entityName );
final EntityBinding superEntityBinding =
        SubclassEntitySource.class.isInstance( entitySource )
                ? entityBinding( ( (SubclassEntitySource) entitySource ).superclassEntitySource().getEntityName() )
                : null;
If the user for some reason forgets to add all configuration sources, he will just get a NullPointerException. Not nice :-(
It seems this is a general problem in the Binder atm. We have to cater better for edge cases, think about
potential problems, and consider how we can provide useful feedback.
Next, I tried to change this to
final EntitySource entitySource = entitySourcesByName.get( entityName );
if ( entitySource == null ) {
    throw coreLogger.missingEntitySourceException( entityName );
}
// Get super entity binding (creating it if necessary using recursive call to this method)
final EntityBinding superEntityBinding =
        SubclassEntitySource.class.isInstance( entitySource )
                ? entityBinding( ( (SubclassEntitySource) entitySource ).superclassEntitySource().getEntityName() )
                : null;
where I added the following to CoreMessageLogger:
@Message(value = "Unable to find mapping information for %s. Are you sure all annotated classes and configuration files are added?", id = 443)
/MappingException missingEntitySourceException(String entityName);
I just want to get a i18n exception. When doing so, the generation of the logger classes fails!
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':hibernate-core:generateMainLoggingClasses'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:68)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
... (Gradle-internal frames elided)
Caused by: org.gradle.api.internal.tasks.compile.CompilationFailedException: Compilation failed; see the compiler error output for details.
at org.gradle.api.internal.tasks.compile.jdk6.Jdk6JavaCompiler.execute(Jdk6JavaCompiler.java:42)
at org.gradle.api.tasks.compile.Compile.compile(Compile.java:60)
... 60 more
In Validator and Search we use this method to generate i18n exceptions. Looking at the current state of CoreMessageLogger, we only use log messages (@LogMessage);
no exceptions are generated. Is this intended? How do I create an exception with an i18n message?
And what's up with the annotation processor when using this approach anyways? (btw, trying to upgrade to 'org.jboss.logging:jboss-logging:3.1.1.GA' and 'org.jboss.logging:jboss-logging-processor:1.0.3.Final' did not help)
--Hardy
org.hibernate.engine.jdbc.dialect.spi.DialectResolver
by Steve Ebersole
I think we need to consider changing DialectResolver to fit with some
upcoming JPA 2.1 features.
JPA 2.1 is adding some form of schema export. The initial plan there is
to identify the "dialect" to target by passing in the (1) database name
and (2) database version as it would come from JDBC DatabaseMetaData.
The connection may or may not be available.
Currently we just pass the DatabaseMetaData to the DialectResolver:
public Dialect resolveDialect(DatabaseMetaData metaData)
The original thinking was to support resolvers looking at information
other than just name/version ion making the determination (or even in
potentially configuring the Dialect before return). However all our
implementations are based on just name/version resolution.
Even worse the current proposal proposes using (String)
DatabaseMetaData#getDatabaseProductVersion whereas we use (int)
DatabaseMetaData#getDatabaseMajorVersion and (int)
DatabaseMetaData#getDatabaseMinorVersion inside the standard resolver
So should we change this contract?
public Dialect resolveDialect(String dbName, int majorVersion, int minorVersion)
or
public Dialect resolveDialect(String dbName, String dbVersion)
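For what it's worth, an implementation of the first form could look roughly
like this (a sketch; the version cutoff is purely illustrative):

public Dialect resolveDialect(String dbName, int majorVersion, int minorVersion) {
    if ( "Oracle".equals( dbName ) ) {
        // illustrative cutoff; a real resolver would cover more versions
        return majorVersion >= 10 ? new Oracle10gDialect() : new Oracle9iDialect();
    }
    return null; // not recognized here; let another resolver in the chain try
}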
WDYT?
--
steve(a)hibernate.org
http://hibernate.org
HHH-5951 - Guess appropriate JtaPlatform based on environment if an explicit one not specified
by Łukasz Antoniak
Hello all,
I have implemented a basic proof of concept for JTA platform recognition. You can find the initial suggestion here:
https://github.com/lukasz-antoniak/hibernate-core/commit/3df34efad32ceed9....
Implementation notes:
1) Defining JAR archives in classes extending AbstractJtaPlatform might not be the best option, but I did not want to duplicate each
platform class by introducing something like:
public interface EnterprisePlatform {
    public JtaPlatform getJtaPlatform();
    public Collection<Pattern> getCharacteristicJarArchivePatterns();
}
public class WeblogicEnterprisePlatform implements EnterprisePlatform {
    // the obvious implementation goes here...
}
Any thoughts?
2) I decided to match JAR archive names with regular expressions instead of exact names because of JAR versioning (the Bitronix case).
So far, automatic recognition has been tested on WebLogic 12 and it seems to work fine.
I wanted to know your opinion before testing against other application servers.
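For illustration, the detection could boil down to something like this (a
sketch with hypothetical names, not the actual patch):

import java.util.regex.Pattern;

class JarPatternMatching {
    // Does any jar on the classpath match one of the platform's
    // characteristic name patterns?
    static boolean jarsIndicatePlatform(Iterable<String> classpathJarNames,
                                        Iterable<Pattern> characteristicPatterns) {
        for (String jarName : classpathJarNames) {
            for (Pattern pattern : characteristicPatterns) {
                // e.g. Pattern.compile("btm-.*\\.jar") survives Bitronix version bumps
                if (pattern.matcher(jarName).matches()) {
                    return true;
                }
            }
        }
        return false;
    }
}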
Best Regards,
Lukasz
Search: paging over deleted entries [ISPN-2206]
by Sanne Grinovero
Hi all,
I feel the need to share some thoughts on this quite tricky patch proposal:
https://github.com/infinispan/infinispan/pull/1245
I'm tempted to say that Hibernate Search should "scan ahead" to look
for more results to fill the gap; but even assuming this was easy to
implement (which it is not), this behaviour would be inconsistent with
update operations, or even inserts.
For inserts we could compensate by keeping an in-memory index paired
with the current transaction, and consider this additional index as a
temporary additional shard; by following this path I'm confident we
could also implement proper removals and updates using a custom collector,
but this will definitely be more complex and introduce some overhead.
Overhead could be minimized by considering this temporary in-memory
index as a pre-analysed dataset, so that we avoid doing the work again
at commit time.
Any opinions on how this should work?
Cheers,
Sanne
packages
by Steve Ebersole
I am noticing some inconsistencies in packages that I wanted us all to
talk through. Mainly, what I have seen relates to services: sometimes
they are put in o.h.services and sometimes in more "natural"
packages. Though I also think some of this boils down to what
"engine" means, as in o.h.engine.
For example, take the JDBC-related stuff. Yes, dealing with JDBC is just a
lot of code for us. But we have JDBC-related code spread throughout
multiple packages.
org.hibernate.jdbc - I am not too concerned with, as it is somewhat
special: it is JDBC-related stuff that we expose to the user (API).
But org.hibernate.engine.jdbc and org.hibernate.service.jdbc feel like
they should be in the same package structure.
Initially I had intended o.h.services to be home for just the code
related to the registry and service infrastructure, but not home for
actual services. Not sure why we got off track with that tbh. I
remember initially putting some "essential" services (what eventually
became bootstrap services) in here because they did not seem to fit
anywhere else (though we do now interestingly have org.hibernate.boot).
Just wanted to get a discussion started about it...
--
steve(a)hibernate.org
http://hibernate.org