[Hibernate-JIRA] Created: (HSEARCH-640) MassIndexer/JBoss 6: Could not register synchronization for container transaction
by Mario Winterer (JIRA)
MassIndexer/JBoss 6: Could not register synchronization for container transaction
---------------------------------------------------------------------------------
Key: HSEARCH-640
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-640
Project: Hibernate Search
Issue Type: Bug
Components: massindexer
Affects Versions: 3.3.0.CR1
Environment: Hibernate 3.6.0.Final, JBoss 6.0.0.CR1, HSQL 1.8.0 (shipped with JBoss 6.0.0.CR1)
Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
Reporter: Mario Winterer
Attachments: contacts.ear
If a mass indexer is used inside a CMP bean in JBoss 6 to rebuild the search index, an org.hibernate.TransactionException is thrown:
{noformat}
2010-12-06 14:01:30,720 ERROR [org.hibernate.search.batchindexing.EntityConsumerLuceneworkProducer]
(Hibernate Search: collectionsloader-1) error during batch indexing:
: org.hibernate.TransactionException: Could not register synchronization for container transaction
at org.hibernate.transaction.CMTTransaction.begin(CMTTransaction.java:76) [:3.6.0.Final]
at org.hibernate.ejb.transaction.JoinableCMTTransaction.begin(JoinableCMTTransaction.java:89) [:3.6.0.Final]
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1473) [:3.6.0.Final]
at org.hibernate.search.batchindexing.EntityConsumerLuceneworkProducer.run(EntityConsumerLuceneworkProducer.java:90) [:3.3.0.CR1]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [:1.6.0_20]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [:1.6.0_20]
at java.lang.Thread.run(Thread.java:619) [:1.6.0_20]
{noformat}
I've included an example EAR (see attachment) that can be deployed to JBoss 6.0.0.CR1 without any modifications.
It contains the required Hibernate Search libraries and defines a single indexed entity {{Contact}} as well as a JBoss service EJB named {{at.mw.contacts:ContactsService}} that can be accessed via the JMX console.
Use this service to do one of the following things:
* add new contacts by specifying firstName/lastName
* list all contacts
* search for contacts using the search index
* rebuild the entire search index (using a MassIndexer)
Rebuilding the index fails with the exception listed above.
Sources are included.
(The example uses an in-memory HSQL database and the RAM directory provider for the Lucene index.)
(see also forum discussion: [Could not register synchronization for container transaction|https://forum.hibernate.org/viewtopic.php?f=9&t=1004515&start=0])
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://opensource.atlassian.com/projects/hibernate/secure/Administrators....
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
[Hibernate-JIRA] Created: (HSEARCH-625) Some errors triggered by Lucene are not caught by the ErrorHandler
by Emmanuel Bernard (JIRA)
Some errors triggered by Lucene are not caught by the ErrorHandler
------------------------------------------------------------------
Key: HSEARCH-625
URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-625
Project: Hibernate Search
Issue Type: Bug
Affects Versions: 3.3.0.Beta3
Reporter: Emmanuel Bernard
Assignee: Sanne Grinovero
Fix For: 3.3.0
I ran into them while working on HSEARCH-573.
You can reproduce by calling workspace.forceLockRelease(); unconditionally in PerDPQueueProcessor (instead of only when the exception is not a LockObtainFailedException)
and then running DoNotCloseOnLockTimeoutTest:
Exception in thread "Hibernate Search: indexwriter-1" java.lang.RuntimeException: after flush: fdx size mismatch: 9000 docs vs 0 length in bytes of _d.fdx file exists?=false
at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4171)
at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4053)
at org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:189)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2521)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2516)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2512)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3556)
at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2001)
at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:95)
Exception in thread "Thread-3" java.lang.RuntimeException: after flush: fdx size mismatch: 1000 docs vs 0 length in bytes of _m.fdx file exists?=false
at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3431)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3506)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3477)
at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:173)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.stopAndFlush(DirectoryProviderWorkspace.java:83)
at org.hibernate.search.backend.impl.batchlucene.LuceneBatchBackend.stopAndFlush(LuceneBatchBackend.java:96)
at org.hibernate.search.batchindexing.BatchCoordinator.run(BatchCoordinator.java:96)
at org.hibernate.search.impl.MassIndexerImpl.startAndWait(MassIndexerImpl.java:196)
at org.hibernate.search.test.batchindexing.DoNotCloseOnLockTimeoutTest$MassindexerWork.run(DoNotCloseOnLockTimeoutTest.java:90)
org.hibernate.search.SearchException: Exception while closing IndexWriter
at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:195)
at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:113)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:637)
Caused by: java.io.FileNotFoundException: /Users/manu/projects/notbackedup/git/search/hibernate-search/target/indextemp/org.hibernate.search.test.batchindexing.ConcurrentData/_0.cfs (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:591)
at org.apache.lucene.index.DocumentsWriter.applyDeletes(DocumentsWriter.java:997)
at org.apache.lucene.index.IndexWriter.applyDeletes(IndexWriter.java:4520)
at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3723)
at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1711)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1674)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1638)
at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:191)
... 7 more
// We probably can't do much about this one:
Exception in thread "Lucene Merge Thread #1" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:347)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:312)
Caused by: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:296)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:592)
at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:143)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:637)
[Hibernate-JIRA] Created: (HV-387) org.hibernate.validator.cfg.defs.GenericConstraintDef should (probably) not extend the raw type ConstraintDef
by Dag Hovland (JIRA)
org.hibernate.validator.cfg.defs.GenericConstraintDef should (probably) not extend the raw type ConstraintDef
-------------------------------------------------------------------------------------------------------------
Key: HV-387
URL: http://opensource.atlassian.com/projects/hibernate/browse/HV-387
Project: Hibernate Validator
Issue Type: Bug
Components: engine
Reporter: Dag Hovland
Assignee: Hardy Ferentschik
Priority: Trivial
Attachments: ConstraintDef.diff, GenericConstraintDef.diff
The class org.hibernate.validator.cfg.defs.GenericConstraintDef extends the raw type ConstraintDef. Using Maven it compiles, but in Eclipse (with JDK 1.6) we get the following error:
"Name clash: The method groups(Class<?>...) of type GenericConstraintDef has the same erasure as groups(Class...) of type ConstraintDef but does not override it"
and similarly for the method "payload". The other methods give no errors. We do not understand why, but it compiles after changing the definition to
public class GenericConstraintDef<A extends Annotation>
    extends ConstraintDef<A>
and also replacing the "Class<?>" in the argument to the method "constraintType" with "Class<A>".
The fix seems more "correct", and I suggest this change is made in the source.
The file GenericConstraintDef.diff contains the mentioned fix, while ConstraintDef.diff adds type parameters to a few related methods in org.hibernate.validator.cfg.ConstraintDef.
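A minimal, self-contained sketch of the problem and the fix, using hypothetical stand-in classes (Def/GenericDef are illustrative, not the real Hibernate Validator types): when the subclass extends the raw supertype, its groups(Class<?>...) has the same erasure as the inherited raw groups(Class...) without overriding it; extending the parameterized type turns it into a genuine override.

```java
import java.lang.annotation.Annotation;

// Hypothetical, simplified stand-in for ConstraintDef.
class Def<A extends Annotation> {
    Def<A> groups(Class<?>... groups) { return this; }
}

// Hypothetical stand-in for GenericConstraintDef. Extending the
// parameterized type Def<A> (rather than the raw type Def) makes
// groups() a true override instead of an erasure name clash, so it
// compiles under both javac and Eclipse's compiler.
class GenericDef<A extends Annotation> extends Def<A> {
    @Override
    Def<A> groups(Class<?>... groups) { return this; }
}

public class ErasureDemo {
    public static void main(String[] args) {
        GenericDef<Annotation> def = new GenericDef<>();
        // Fluent call compiles and returns the same instance.
        System.out.println(def.groups(Object.class) == def); // prints true
    }
}
```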
[Hibernate-JIRA] Created: (HV-404) Simplify creation of ConstraintDef derivations
by Gunnar Morling (JIRA)
Simplify creation of ConstraintDef derivations
----------------------------------------------
Key: HV-404
URL: http://opensource.atlassian.com/projects/hibernate/browse/HV-404
Project: Hibernate Validator
Issue Type: Improvement
Components: engine
Affects Versions: 4.1.0.Final
Reporter: Gunnar Morling
Assignee: Hardy Ferentschik
When working with the programmatic constraint declaration API one typically creates a constraint definition type (derivation of ConstraintDef) for each constraint type.
In its current form the API requires that each such ConstraintDef sub-type override the methods message(), payload() and groups() for the fluent invocation style to work properly. If, for instance, message() were not overridden in SizeDef, invoking it would return the type ConstraintDef, preventing a subsequent call to a SizeDef-specific member such as min().
This can be improved by making ConstraintDef a so-called self-referential type, as follows:
public abstract class ConstraintDef<C extends ConstraintDef<C, A>, A extends Annotation> {
    //...
    public C message(String message) {
        //...
    }
}
That way, overriding said methods in concrete definition types is no longer necessary while still allowing the fluent invocation style, as message() would return SizeDef when called on a SizeDef instance.
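The proposal can be sketched in a self-contained form as follows; the ConstraintDef and SizeDef below are hypothetical, simplified stand-ins (SizeDef's fields and min() are illustrative, not the real API), with the usual unchecked self-cast that this pattern entails:

```java
import java.lang.annotation.Annotation;

// Hypothetical, simplified sketch of the proposed self-referential ConstraintDef.
abstract class ConstraintDef<C extends ConstraintDef<C, A>, A extends Annotation> {
    String message;

    @SuppressWarnings("unchecked")
    public C message(String message) {
        this.message = message;
        return (C) this; // self-referential type: callers get the concrete subtype back
    }
}

// No need to override message(): it already returns SizeDef here.
class SizeDef extends ConstraintDef<SizeDef, Annotation> {
    int min;

    public SizeDef min(int min) {
        this.min = min;
        return this;
    }
}

public class FluentDemo {
    public static void main(String[] args) {
        // message() returns SizeDef, so min() can follow without a cast.
        SizeDef def = new SizeDef().message("too small").min(3);
        System.out.println(def.min); // prints 3
    }
}
```

The cast to C is safe as long as every subclass passes itself as the C parameter, which the recursive bound encourages but cannot strictly enforce.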
[Hibernate-JIRA] Commented: (HHH-1480) JOIN precendence rules per SQL-99
by Patras Vlad (JIRA)
[ http://opensource.atlassian.com/projects/hibernate/browse/HHH-1480?page=c... ]
Patras Vlad commented on HHH-1480:
----------------------------------
Replacing "," with "cross join" causes a lot of issues. Not only is "cross join" unsupported by many DB engines, but "," and "cross join" are NOT equivalent: replacing one with the other changes the precedence of the other joins, and precedence is important for outer joins.
This is detailed in HHH-5352.
> JOIN precendence rules per SQL-99
> ---------------------------------
>
> Key: HHH-1480
> URL: http://opensource.atlassian.com/projects/hibernate/browse/HHH-1480
> Project: Hibernate Core
> Issue Type: New Feature
> Components: query-hql
> Affects Versions: 3.1.2
> Reporter: trebor iksrazal
> Assignee: Steve Ebersole
> Fix For: 3.5.0-Beta-2
>
>
> In SQL-92 joins performed in the where clause (comma operator in from clause) and joins performed in the from clause (join keyword) had the same precedence. SQL-99 clarified this such that the from clause joins had higher precedence.
> Hibernate currently treats these as having the same precedence.
> A good explanation comes from the MySQL docs ( http://dev.mysql.com/doc/refman/5.0/en/join.html ) :
> Previously, the comma operator (,) and JOIN both had the same precedence, so the join expression t1, t2 JOIN t3 was interpreted as ((t1, t2) JOIN t3). Now JOIN has higher precedence, so the expression is interpreted as (t1, (t2 JOIN t3)). This change affects statements that use an ON clause, because that clause can refer only to columns in the operands of the join, and the change in precedence changes interpretation of what those operands are.
> Example:
> CREATE TABLE t1 (i1 INT, j1 INT);
> CREATE TABLE t2 (i2 INT, j2 INT);
> CREATE TABLE t3 (i3 INT, j3 INT);
> INSERT INTO t1 VALUES(1,1);
> INSERT INTO t2 VALUES(1,1);
> INSERT INTO t3 VALUES(1,1);
> SELECT * FROM t1, t2 JOIN t3 ON (t1.i1 = t3.i3);
> Previously, the SELECT was legal due to the implicit grouping of t1,t2 as (t1,t2). Now the JOIN takes precedence, so the operands for the ON clause are t2 and t3. Because t1.i1 is not a column in either of the operands, the result is an Unknown column 't1.i1' in 'on clause' error. To allow the join to be processed, group the first two tables explicitly with parentheses so that the operands for the ON clause are (t1,t2) and t3:
> SELECT * FROM (t1, t2) JOIN t3 ON (t1.i1 = t3.i3);
> Alternatively, avoid the use of the comma operator and use JOIN instead:
> SELECT * FROM t1 JOIN t2 JOIN t3 ON (t1.i1 = t3.i3);
> This change also applies to statements that mix the comma operator with INNER JOIN, CROSS JOIN, LEFT JOIN, and RIGHT JOIN, all of which now have higher precedence than the comma operator.