[JBoss JIRA] (ISPN-8958) NPE in JGroupsTransport.send(..) when stopping
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8958?page=com.atlassian.jira.plugin.... ]
Ryan Emerson resolved ISPN-8958.
--------------------------------
Fix Version/s: 9.2.1.Final
Resolution: Done
> NPE in JGroupsTransport.send(..) when stopping
> ----------------------------------------------
>
> Key: ISPN-8958
> URL: https://issues.jboss.org/browse/ISPN-8958
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Final
> Reporter: Radoslav Husar
> Assignee: Radoslav Husar
> Priority: Minor
> Fix For: 9.2.1.Final
>
>
> Mostly just log noise.
> {noformat}
> 2018-03-16 15:52:30,614 INFO [org.infinispan.CLUSTER] (remote-thread--p31-t2) [Context=client-mappings][Scope=node-2]ISPN100003: Node node-2 finished rebalance phase with topology id 33
> 2018-03-16 15:52:30,614 WARN [org.infinispan.topology.CacheTopologyControlCommand] (remote-thread--p33-t7) ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=clusterbench-ee7.ear.clusterbench-ee7-web-granular.war, type=REBALANCE_PHASE_CONFIRM, sender=node-3, joinInfo=null, topologyId=43, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, phase=null, actualMembers=null, throwable=null, viewId=8}: org.infinispan.commons.CacheException: Failed to broadcast asynchronous command: CacheTopologyControlCommand{cache=clusterbench-ee7.ear.clusterbench-ee7-web-granular.war, type=CH_UPDATE, sender=node-1, joinInfo=null, topologyId=44, rebalanceId=13, currentCH=DefaultConsistentHash{ns=256, owners = (2)[node-2: 124+132, node-3: 132+124]}, pendingCH=null, availabilityMode=null, phase=NO_REBALANCE, actualMembers=[node-2, node-3], throwable=null, viewId=8}
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterAsync(ClusterTopologyManagerImpl.java:638)
> at org.infinispan.topology.ClusterTopologyManagerImpl.broadcastTopologyUpdate(ClusterTopologyManagerImpl.java:649)
> at org.infinispan.topology.ClusterCacheStatus.endReadNewPhase(ClusterCacheStatus.java:452)
> at org.infinispan.topology.RebalanceConfirmationCollector.confirmPhase(RebalanceConfirmationCollector.java:55)
> at org.infinispan.topology.ClusterCacheStatus.confirmRebalancePhase(ClusterCacheStatus.java:337)
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleRebalancePhaseConfirm(ClusterTopologyManagerImpl.java:258)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:183)
> at org.infinispan.topology.CacheTopologyControlCommand.invokeAsync(CacheTopologyControlCommand.java:160)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.invokeReplicableCommand(GlobalInboundInvocationHandler.java:169)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.runReplicableCommand(GlobalInboundInvocationHandler.java:150)
> at org.infinispan.remoting.inboundhandler.GlobalInboundInvocationHandler.lambda$handleReplicableCommand$1(GlobalInboundInvocationHandler.java:144)
> at org.infinispan.util.concurrent.BlockingTaskAwareExecutorServiceImpl$RunnableWrapper.run(BlockingTaskAwareExecutorServiceImpl.java:212)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.send(JGroupsTransport.java:976)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.sendCommandToAll(JGroupsTransport.java:1096)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.performAsyncRemoteInvocation(JGroupsTransport.java:1034)
> at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotelyAsync(JGroupsTransport.java:242)
> at org.infinispan.remoting.transport.Transport.invokeRemotely(Transport.java:65)
> at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterAsync(ClusterTopologyManagerImpl.java:635)
> ... 15 more
> 2018-03-16 15:52:30,616 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 19) WFLYCLINF0003: Stopped client-mappings cache from ejb container
> {noformat}
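For illustration only (the actual fix shipped in 9.2.1.Final and may differ): the trace suggests the send path races with transport stop, so a guard of the following shape would drop the message quietly instead of dereferencing a cleared reference. The field and method names below are assumptions, not the real JGroupsTransport internals.
{code}
import org.jgroups.JChannel;
import org.jgroups.Message;

// Hypothetical guard, not the ISPN-8958 patch: skip the send when the channel
// reference has already been cleared by stop(), instead of throwing an NPE.
final class StoppableSender {
   private volatile JChannel channel; // assumed to be nulled by stop()

   void send(Message message) throws Exception {
      JChannel current = channel;     // read the volatile reference once
      if (current == null || !current.isConnected()) {
         return;                      // transport already stopped: drop the message
      }
      current.send(message);
   }

   void stop() {
      channel = null;
   }
}
{code}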
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8903) Conflict resolution not initiated if node rejoins with same topology
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8903?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8903:
-------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5857
> Conflict resolution not initiated if node rejoins with same topology
> --------------------------------------------------------------------
>
> Key: ISPN-8903
> URL: https://issues.jboss.org/browse/ISPN-8903
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.3.0.Final, 9.2.1.Final
>
>
> The current logic in PreferAvailabilityStrategy and PreferConsistencyStrategy assumes that when a split brain occurs, the two partitions will continue to operate independently before a merge occurs.
> Consider a cluster \{A,B\} which partitions into P1 \{A\} and P2 \{B\}. P1 continues to operate and update cache entries, however P2 makes no progress (possibly due to a long GC pause). When P2 merges into P1, no rebalance occurs (correct, as the CH remains the same) and no conflict resolution occurs. Conflict resolution should be attempted in this scenario, as it's possible that entries have been put to P1 during the partition and therefore P2 will have stale values.
> This can be reproduced by creating two nodes, pausing one process, waiting for the split and then resuming the process. No CR will occur.
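As background, a minimal sketch of how conflict resolution is configured and can be triggered explicitly with the 9.2-era embedded API; the cache name and the choice of merge policy are illustrative only, and this is not the patch for this issue.
{code}
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.conflict.ConflictManager;
import org.infinispan.conflict.ConflictManagerFactory;
import org.infinispan.conflict.MergePolicy;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.partitionhandling.PartitionHandling;

public class ConflictResolutionSketch {
   public static void main(String[] args) {
      ConfigurationBuilder cfg = new ConfigurationBuilder();
      // Let both partitions keep accepting writes, then resolve conflicts on merge.
      cfg.clustering().cacheMode(CacheMode.DIST_SYNC)
         .partitionHandling()
            .whenSplit(PartitionHandling.ALLOW_READ_WRITES)
            .mergePolicy(MergePolicy.PREFERRED_ALWAYS);

      DefaultCacheManager manager =
            new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
      try {
         manager.defineConfiguration("example", cfg.build());
         Cache<String, String> cache = manager.getCache("example");

         // Conflict resolution can also be requested explicitly, e.g. after a suspicious merge.
         ConflictManager<String, String> cm = ConflictManagerFactory.get(cache.getAdvancedCache());
         cm.resolveConflicts();
      } finally {
         manager.stop();
      }
   }
}
{code}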
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8954) StateReceiverImpl should request segments via an executor
by Ryan Emerson (JIRA)
[ https://issues.jboss.org/browse/ISPN-8954?page=com.atlassian.jira.plugin.... ]
Ryan Emerson updated ISPN-8954:
-------------------------------
Status: Pull Request Sent (was: Coding In Progress)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5857
> StateReceiverImpl should request segments via an executor
> ---------------------------------------------------------
>
> Key: ISPN-8954
> URL: https://issues.jboss.org/browse/ISPN-8954
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Final
> Reporter: Ryan Emerson
> Assignee: Ryan Emerson
> Fix For: 9.2.1.Final
>
>
> Currently, when requesting segments, an InboundTransferTask is executed in the thread calling StateReceiver::getAllReplicasForSegment. The problem with this is that InboundTransferTask::requestSegments is a blocking RPC call which, due to the synchronization used by a SegmentRequest object, means that it's not possible for a segment request to be cancelled while an InboundTransferTask::requestSegments call is being executed. Furthermore, this situation is exacerbated by the fact that the transfer tasks are currently created using the state transfer timeout (default is 4 mins), so it's possible for the calling thread to be blocked for this amount of time.
> The solution is to utilise the StateTransferExecutor to process the InboundTransferTasks so that a segment request can be cancelled during a transfer request. Also, we should utilise the remaining time of the DefaultConflictManager::ReplicaSpliterator as the upper bound on the transfer tasks.
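The proposed direction can be sketched with a small, self-contained example (class and method names here are hypothetical, not the actual StateReceiverImpl code): the blocking requestSegments RPC is submitted to an executor, so the calling thread keeps control and can cancel or time-bound the request.
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of running a blocking segment request on an executor.
public class SegmentRequestSketch {
   private final ExecutorService stateTransferExecutor = Executors.newCachedThreadPool();

   /** Submit the blocking RPC to the executor instead of running it in the calling thread. */
   public Future<?> start(Runnable blockingRequestSegments) {
      return stateTransferExecutor.submit(blockingRequestSegments);
   }

   /** Cancellation is now possible while the RPC is still in flight. */
   public void cancel(Future<?> inFlightRequest) {
      inFlightRequest.cancel(true);
   }

   /** Bound the wait by the time remaining in the caller, not the full state-transfer timeout. */
   public void await(Future<?> inFlightRequest, long remainingMillis) throws Exception {
      try {
         inFlightRequest.get(remainingMillis, TimeUnit.MILLISECONDS);
      } catch (TimeoutException e) {
         inFlightRequest.cancel(true);
         throw e;
      }
   }
}
{code}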
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8946) All collection wrappers should always delegate to underlying collections where available
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-8946?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-8946:
-------------------------------------
Note these changes will also try to avoid all unnecessary wrapping by subsequent interceptors (especially around the iterator and support for its remove method). These wrappings should be avoidable at all levels by setting the REMOTE_ITERATION flag as needed. Note that only the highest level should provide the wrapping, unless the flag was already set by a remote invocation (such as LocalStreamManager).
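To illustrate the delegate-instead-of-copy idea in isolation (this is not code from the pull request), a wrapper of the following shape forwards every operation, including the iterator and its remove support, to the underlying collection rather than materialising a copy.
{code}
import java.util.AbstractSet;
import java.util.Iterator;
import java.util.Set;

// Illustrative delegating wrapper: no copies, all operations forwarded.
public class DelegatingKeySet<K> extends AbstractSet<K> {
   private final Set<K> delegate;

   public DelegatingKeySet(Set<K> delegate) {
      this.delegate = delegate;
   }

   @Override
   public Iterator<K> iterator() {
      // Delegate the iterator too, so its remove() keeps working at every level.
      return delegate.iterator();
   }

   @Override
   public int size() {
      return delegate.size();
   }

   @Override
   public boolean contains(Object o) {
      return delegate.contains(o);
   }

   @Override
   public boolean remove(Object o) {
      return delegate.remove(o);
   }
}
{code}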
> All collection wrappers should always delegate to underlying collections where available
> ----------------------------------------------------------------------------------------
>
> Key: ISPN-8946
> URL: https://issues.jboss.org/browse/ISPN-8946
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.2.0.Final
> Reporter: William Burns
> Assignee: William Burns
>
> All places that override values returned from keySet and entrySet should always delegate appropriately to the underlying keySet or entrySet. We should try to be as minimally invasive as possible at each level down. For example, where possible we shouldn't create copies of said collections and should instead just use delegates.
> https://github.com/infinispan/infinispan/pull/5794#discussion_r173789766
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8967) Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8967?page=com.atlassian.jira.plugin.... ]
Rohit Singh commented on ISPN-8967:
-----------------------------------
Hi,
Can you suggest some of the cases in which the index can get corrupted?
I believe setting the locking_strategy to single should not be causing index corruption *(as in this case this is a standalone server - a single node).*
> Infinispan Directory Provider: Lucene : Error loading metadata for index file
> -----------------------------------------------------------------------------
>
> Key: ISPN-8967
> URL: https://issues.jboss.org/browse/ISPN-8967
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Environment: *{color:red}Production Env{color}*
> Weblogic 12.2.1
> AIX 7.1
> JDK - IBM J9 - 1.8.0 - SR3
> Oracle DB - 12.0.2.0
> Reporter: Rohit Singh
> Priority: Critical
> Attachments: neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> J2EE Application - Production Env - Banking Domain
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> Worker Backend : JGroups
> Worker Execution: Sync
> write_metadata_async: false (implicitly)
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} environment. It should also not arise when worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
> As this is a production environment (Banking Domain), we need your quick suggestions and support.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8784) CNFE with jdk10
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-8784?page=com.atlassian.jira.plugin.... ]
Dan Berindei reopened ISPN-8784:
--------------------------------
> CNFE with jdk10
> ---------------
>
> Key: ISPN-8784
> URL: https://issues.jboss.org/browse/ISPN-8784
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 9.2.0.CR2
> Environment: jdk10 (jvm 10-ea+42) (linux, osx)
> Reporter: Olivier Lamy
> Assignee: Dan Berindei
> Priority: Blocker
> Fix For: 9.2.1.Final
>
>
> Trying to use infinispan-core with jdk10 (Jetty Infinispan session manager), I get the following CNFE:
> {code}
> org.infinispan.commons.CacheException: java.lang.NoClassDefFoundError: Could not initialize class org.infinispan.commons.marshall.jboss.ExtendedRiverMarshaller
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.rethrowException(InvocationContextInterceptor.java:144)
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.access$000(InvocationContextInterceptor.java:44)
> at org.infinispan.interceptors.impl.InvocationContextInterceptor$1.apply(InvocationContextInterceptor.java:61)
> at org.infinispan.interceptors.InvocationExceptionFunction.apply(InvocationExceptionFunction.java:21)
> at org.infinispan.interceptors.impl.SimpleAsyncInvocationStage.addCallback(SimpleAsyncInvocationStage.java:67)
> at org.infinispan.interceptors.InvocationStage.andExceptionally(InvocationStage.java:34)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:132)
> at org.infinispan.interceptors.impl.InvocationContextInterceptor.visitCommand(InvocationContextInterceptor.java:97)
> at org.infinispan.interceptors.impl.AsyncInterceptorChainImpl.invoke(AsyncInterceptorChainImpl.java:248)
> at org.infinispan.cache.impl.CacheImpl.executeCommandAndCommitIfNeeded(CacheImpl.java:1651)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1299)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:1765)
> at org.infinispan.cache.impl.CacheImpl.put(CacheImpl.java:280)
> at org.infinispan.cache.impl.AbstractDelegatingCache.put(AbstractDelegatingCache.java:358)
> at org.infinispan.cache.impl.EncoderCache.put(EncoderCache.java:655)
> at org.eclipse.jetty.session.infinispan.InfinispanSessionDataStore.doStore(InfinispanSessionDataStore.java:216)
> at org.eclipse.jetty.server.session.AbstractSessionDataStore.store(AbstractSessionDataStore.java:103)
> at org.eclipse.jetty.server.session.DefaultSessionCache.shutdown(DefaultSessionCache.java:162)
> at org.eclipse.jetty.server.session.SessionHandler.shutdownSessions(SessionHandler.java:994)
> at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:514)
> at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:149)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:170)
> at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:124)
> at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:863)
> at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:381)
> at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:927)
> at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
> at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:149)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:170)
> at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:124)
> at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:149)
> at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:170)
> at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:124)
> at org.eclipse.jetty.server.Server.doStop(Server.java:490)
> at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89)
> at org.eclipse.jetty.server.session.TestServer.stop(TestServer.java:128)
> at org.eclipse.jetty.server.session.AbstractClusteredSessionScavengingTest.testNoScavenging(AbstractClusteredSessionScavengingTest.java:146)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
> at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
> at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
> at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.infinispan.commons.marshall.jboss.ExtendedRiverMarshaller
> at org.infinispan.commons.marshall.jboss.JBossMarshallerFactory.createMarshaller(JBossMarshallerFactory.java:49)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller$PerThreadInstanceHolder.getMarshaller(AbstractJBossMarshaller.java:314)
> at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.startObjectOutput(AbstractJBossMarshaller.java:90)
> at org.infinispan.marshall.core.ExternalJBossMarshaller.objectToObjectStream(ExternalJBossMarshaller.java:32)
> at org.infinispan.marshall.core.GlobalMarshaller.writeRawUnknown(GlobalMarshaller.java:603)
> at org.infinispan.marshall.core.GlobalMarshaller.writeUnknown(GlobalMarshaller.java:598)
> at org.infinispan.marshall.core.GlobalMarshaller.writeNonNullableObject(GlobalMarshaller.java:412)
> at org.infinispan.marshall.core.GlobalMarshaller.writeNullableObject(GlobalMarshaller.java:355)
> at org.infinispan.marshall.core.GlobalMarshaller.writeObjectOutput(GlobalMarshaller.java:188)
> at org.infinispan.marshall.core.GlobalMarshaller.writeObjectOutput(GlobalMarshaller.java:181)
> at org.infinispan.marshall.core.GlobalMarshaller.objectToBuffer(GlobalMarshaller.java:305)
> at org.infinispan.marshall.core.MarshalledEntryImpl.marshall(MarshalledEntryImpl.java:117)
> at org.infinispan.marshall.core.MarshalledEntryImpl.getValueBytes(MarshalledEntryImpl.java:100)
> at org.infinispan.persistence.file.SingleFileStore.write(SingleFileStore.java:322)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.lambda$writeToAllNonTxStores$9(PersistenceManagerImpl.java:529)
> at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
> at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
> at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1492)
> at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.writeToAllNonTxStores(PersistenceManagerImpl.java:529)
> at org.infinispan.interceptors.impl.CacheWriterInterceptor.storeEntry(CacheWriterInterceptor.java:452)
> at org.infinispan.interceptors.impl.CacheWriterInterceptor.lambda$visitPutKeyValueCommand$1(CacheWriterInterceptor.java:187)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextThenAccept(BaseAsyncInterceptor.java:109)
> at org.infinispan.interceptors.impl.CacheWriterInterceptor.visitPutKeyValueCommand(CacheWriterInterceptor.java:179)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:58)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitDataCommand(CacheLoaderInterceptor.java:201)
> at org.infinispan.interceptors.impl.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:128)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextThenAccept(BaseAsyncInterceptor.java:102)
> at org.infinispan.interceptors.impl.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:664)
> at org.infinispan.interceptors.impl.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:311)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndFinally(BaseAsyncInterceptor.java:154)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitNonTxDataWriteCommand(AbstractLockingInterceptor.java:135)
> at org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor.visitDataWriteCommand(NonTransactionalLockingInterceptor.java:38)
> at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:85)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNext(BaseAsyncInterceptor.java:58)
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:197)
> at org.infinispan.interceptors.impl.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:162)
> at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:67)
> at org.infinispan.interceptors.BaseAsyncInterceptor.invokeNextAndExceptionally(BaseAsyncInterceptor.java:127)
> ... 59 more
> {code}
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8967) Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Sanne Grinovero (JIRA)
[ https://issues.jboss.org/browse/ISPN-8967?page=com.atlassian.jira.plugin.... ]
Sanne Grinovero commented on ISPN-8967:
---------------------------------------
Hi,
this exception means that the index is corrupted. You'll need to rebuild the index and figure out why this happened - there should be some previously logged error pointing to the specific cause.
In terms of configuration, I would not recommend using locking_strategy "single" as it means the index lock is local to one node, defeating its purpose of defending the index from concurrent writes, which would likely result in index corruption.
See also the Hibernate Search documentation, which explicitly suggests not using it in an architecture with multiple instances:
- https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_singl...
> Infinispan Directory Provider: Lucene : Error loading metadata for index file
> -----------------------------------------------------------------------------
>
> Key: ISPN-8967
> URL: https://issues.jboss.org/browse/ISPN-8967
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Environment: *{color:red}Production Env{color}*
> Weblogic 12.2.1
> AIX 7.1
> JDK - IBM J9 - 1.8.0 - SR3
> Oracle DB - 12.0.2.0
> Reporter: Rohit Singh
> Priority: Critical
> Attachments: neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> J2EE Application - Production Env - Banking Domain
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> Worker Backend : JGroups
> Worker Execution: Sync
> write_metadata_async: false (implicitly)
> *{color:red}On a standalone server (non-clustered), we are getting the below error intermittently:{color}*
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} environment. It should also not arise when worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
> As this is a production environment (Banking Domain), we need your quick suggestions and support.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)