[hibernate-issues] [Hibernate-JIRA] Resolved: (HSEARCH-625) Some errors triggered by Lucene are not caught by the ErrorHandler

Sanne Grinovero (JIRA) noreply at atlassian.com
Thu Dec 9 15:24:13 EST 2010


     [ http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sanne Grinovero resolved HSEARCH-625.
-------------------------------------

    Resolution: Fixed

> Some errors triggered by Lucene are not caught by the ErrorHandler
> ------------------------------------------------------------------
>
>                 Key: HSEARCH-625
>                 URL: http://opensource.atlassian.com/projects/hibernate/browse/HSEARCH-625
>             Project: Hibernate Search
>          Issue Type: Bug
>    Affects Versions: 3.3.0.Beta3
>            Reporter: Emmanuel Bernard
>            Assignee: Sanne Grinovero
>             Fix For: 3.3.0.CR2
>
>
> I ran into these while working on HSEARCH-573.
> You can reproduce them by calling workspace.forceLockRelease(); unconditionally in PerDPQueueProcessor (instead of only when the exception is not a LockObtainFailedException),
> and then running DoNotCloseOnLockTimeoutTest:
> Exception in thread "Hibernate Search: indexwriter-1" java.lang.RuntimeException: after flush: fdx size mismatch: 9000 docs vs 0 length in bytes of _d.fdx file exists?=false
> 	at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
> 	at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
> 	at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
> 	at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
> 	at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
> 	at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
> 	at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4171)
> 	at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4053)
> 	at org.apache.lucene.index.ConcurrentMergeScheduler.merge(ConcurrentMergeScheduler.java:189)
> 	at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2521)
> 	at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2516)
> 	at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2512)
> 	at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3556)
> 	at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2001)
> 	at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
> 	at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:95)
> Exception in thread "Thread-3" java.lang.RuntimeException: after flush: fdx size mismatch: 1000 docs vs 0 length in bytes of _m.fdx file exists?=false
> 	at org.apache.lucene.index.StoredFieldsWriter.closeDocStore(StoredFieldsWriter.java:97)
> 	at org.apache.lucene.index.DocFieldProcessor.closeDocStore(DocFieldProcessor.java:51)
> 	at org.apache.lucene.index.DocumentsWriter.closeDocStore(DocumentsWriter.java:417)
> 	at org.apache.lucene.index.IndexWriter.flushDocStores(IndexWriter.java:1777)
> 	at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3649)
> 	at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
> 	at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
> 	at org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:3431)
> 	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3506)
> 	at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3477)
> 	at org.hibernate.search.backend.Workspace.commitIndexWriter(Workspace.java:173)
> 	at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.stopAndFlush(DirectoryProviderWorkspace.java:83)
> 	at org.hibernate.search.backend.impl.batchlucene.LuceneBatchBackend.stopAndFlush(LuceneBatchBackend.java:96)
> 	at org.hibernate.search.batchindexing.BatchCoordinator.run(BatchCoordinator.java:96)
> 	at org.hibernate.search.impl.MassIndexerImpl.startAndWait(MassIndexerImpl.java:196)
> 	at org.hibernate.search.test.batchindexing.DoNotCloseOnLockTimeoutTest$MassindexerWork.run(DoNotCloseOnLockTimeoutTest.java:90)
> org.hibernate.search.SearchException: Exception while closing IndexWriter
> 	at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:195)
> 	at org.hibernate.search.backend.impl.lucene.PerDPQueueProcessor.run(PerDPQueueProcessor.java:113)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:637)
> Caused by: java.io.FileNotFoundException: /Users/manu/projects/notbackedup/git/search/hibernate-search/target/indextemp/org.hibernate.search.test.batchindexing.ConcurrentData/_0.cfs (No such file or directory)
> 	at java.io.RandomAccessFile.open(Native Method)
> 	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
> 	at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput$Descriptor.<init>(SimpleFSDirectory.java:76)
> 	at org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.<init>(SimpleFSDirectory.java:97)
> 	at org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.<init>(NIOFSDirectory.java:87)
> 	at org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:67)
> 	at org.apache.lucene.index.CompoundFileReader.<init>(CompoundFileReader.java:67)
> 	at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:114)
> 	at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:590)
> 	at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
> 	at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:591)
> 	at org.apache.lucene.index.DocumentsWriter.applyDeletes(DocumentsWriter.java:997)
> 	at org.apache.lucene.index.IndexWriter.applyDeletes(IndexWriter.java:4520)
> 	at org.apache.lucene.index.IndexWriter.doFlushInternal(IndexWriter.java:3723)
> 	at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3565)
> 	at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3555)
> 	at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1711)
> 	at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1674)
> 	at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1638)
> 	at org.hibernate.search.backend.Workspace.closeIndexWriter(Workspace.java:191)
> 	... 7 more
> // We probably can't do much about this one
> Exception in thread "Lucene Merge Thread #1" org.apache.lucene.index.MergePolicy$MergeException: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
> 	at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:347)
> 	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:312)
> Caused by: org.apache.lucene.index.CorruptIndexException: doc counts differ for segment _c: fieldsReader shows 1 but segmentInfo shows 0
> 	at org.apache.lucene.index.SegmentReader$CoreReaders.openDocStores(SegmentReader.java:296)
> 	at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:592)
> 	at org.apache.lucene.index.IndexWriter$ReaderPool.get(IndexWriter.java:616)
> 	at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4309)
> 	at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3965)
> 	at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:231)
> 	at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:288)
> 	at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:143)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:637)
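The failure mode in the traces above can be sketched in isolation: an error handler that wraps work in a try/catch only observes exceptions thrown on the calling thread, so anything raised on a background thread (like ConcurrentMergeScheduler's merge threads) escapes it unless that thread explicitly forwards uncaught exceptions. The following is a minimal, self-contained sketch using hypothetical type names — not the actual Hibernate Search ErrorHandler API:

```java
import java.util.ArrayList;
import java.util.List;

public class ErrorHandlerSketch {

    // Hypothetical stand-in for an error-handler contract; names assumed.
    interface ErrorHandler {
        void handle(Throwable t);
    }

    static class CollectingErrorHandler implements ErrorHandler {
        final List<Throwable> seen = new ArrayList<>();
        public void handle(Throwable t) { seen.add(t); }
    }

    // Work executed synchronously: the try/catch routes failures to the handler.
    static void runSync(Runnable work, ErrorHandler handler) {
        try {
            work.run();
        } catch (RuntimeException e) {
            handler.handle(e);
        }
    }

    // Work spawned on a background thread (analogous to a merge thread):
    // the caller's try/catch cannot see the failure, so it only reaches the
    // handler because an UncaughtExceptionHandler forwards it explicitly.
    static Thread runAsync(Runnable work, ErrorHandler handler) {
        Thread t = new Thread(work, "merge-thread-sketch");
        t.setUncaughtExceptionHandler((thread, err) -> handler.handle(err));
        t.start();
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        CollectingErrorHandler handler = new CollectingErrorHandler();

        runSync(() -> { throw new RuntimeException("sync failure"); }, handler);
        runAsync(() -> { throw new RuntimeException("async failure"); }, handler).join();

        // Both failures reach the handler; without the uncaught-exception
        // hook, the async one would only be printed by the JVM's default
        // handler, as in the "Lucene Merge Thread #1" trace above.
        System.out.println(handler.seen.size()); // prints 2
    }
}
```

This is only an illustration of why the merge-thread exceptions bypassed the ErrorHandler; the actual fix lives in how Hibernate Search hooks into Lucene's merge scheduling.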

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://opensource.atlassian.com/projects/hibernate/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
