[JBoss JIRA] (ISPN-8977) Counter listener fails in client-server mode
by Pedro Ruivo (JIRA)
Pedro Ruivo created ISPN-8977:
---------------------------------
Summary: Counter listener fails in client-server mode
Key: ISPN-8977
URL: https://issues.jboss.org/browse/ISPN-8977
Project: Infinispan
Issue Type: Bug
Components: Clustered Counter, Hot Rod
Reporter: Pedro Ruivo
Assignee: Pedro Ruivo
Sometimes most of the listener tests fail (reason unknown):
{noformat}
org.infinispan.client.hotrod.counter.WeakCounterAPITest.testListenerAddAndRemove
org.infinispan.client.hotrod.counter.StrongCounterAPITest.testListenerWithBounds
org.infinispan.client.hotrod.counter.StrongCounterAPITest.testConcurrentListenerAddAndRemove
org.infinispan.client.hotrod.counter.StrongCounterAPITest.testListenerAddAndRemove
org.infinispan.client.hotrod.counter.WeakCounterAPITest.testExceptionInListener
org.infinispan.client.hotrod.counter.StrongCounterAPITest.testListenerFailover
org.infinispan.client.hotrod.counter.WeakCounterAPITest.testListenerFailover
org.infinispan.client.hotrod.counter.StrongCounterAPITest.testExceptionInListener
org.infinispan.client.hotrod.counter.WeakCounterAPITest.testConcurrentListenerAddAndRemove
{noformat}
Probably related to lazy cache start? Or another issue with cluster listener registration?
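For reference, a minimal client-side sketch of the listener registration these tests exercise, assuming a Hot Rod server on 127.0.0.1:11222 (the address and the counter name are placeholders):
{code:java}
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.RemoteCounterManagerFactory;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.counter.api.CounterConfiguration;
import org.infinispan.counter.api.CounterEvent;
import org.infinispan.counter.api.CounterListener;
import org.infinispan.counter.api.CounterManager;
import org.infinispan.counter.api.CounterType;
import org.infinispan.counter.api.Handle;
import org.infinispan.counter.api.StrongCounter;

public class CounterListenerSketch {
   public static void main(String[] args) throws Exception {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222); // placeholder address
      RemoteCacheManager remote = new RemoteCacheManager(builder.build());

      // Counter API over the Hot Rod connection
      CounterManager counters = RemoteCounterManagerFactory.asCounterManager(remote);
      counters.defineCounter("my-counter",
            CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).build());
      StrongCounter counter = counters.getStrongCounter("my-counter");

      // Add a listener, as the failing tests do...
      Handle<CounterListener> handle = counter.addListener(
            (CounterEvent e) -> System.out.printf("old=%d new=%d%n",
                  e.getOldValue(), e.getNewValue()));

      counter.incrementAndGet().get(); // should trigger the listener

      handle.remove(); // ...and remove it again
      remote.stop();
   }
}
{code}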
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8967) Infinispan Directory Provider: Lucene : Error loading metadata for index file
by Rohit Singh (JIRA)
[ https://issues.jboss.org/browse/ISPN-8967?page=com.atlassian.jira.plugin.... ]
Rohit Singh commented on ISPN-8967:
-----------------------------------
[~sannegrinovero] Kindly provide some input.
> Infinispan Directory Provider: Lucene : Error loading metadata for index file
> -----------------------------------------------------------------------------
>
> Key: ISPN-8967
> URL: https://issues.jboss.org/browse/ISPN-8967
> Project: Infinispan
> Issue Type: Bug
> Components: Lucene Directory
> Affects Versions: 8.2.5.Final
> Environment: *{color:red}Production Env{color}*
> Weblogic 12.2.1
> AIX 7.1
> JDK - IBM J9 - 1.8.0 - SR3
> Oracle DB - 12.0.2.0
> Reporter: Rohit Singh
> Priority: Critical
> Attachments: neutrino-hibernate-search-worker-jgroups.xml, neutrino-hibernatesearch-infinispan.xml
>
>
> J2EE Application - Production Env - Banking Domain
> *Hibernate Search Indexes (Lucene Indexes) - 5.7.0.Final*
> *Infinispan - 8.2.5.Final*
> *infinispan-directory-provider-8.2.5.Final*
> *jgroups-3.6.7.Final*
> Worker Backend : JGroups
> Worker Execution: Sync
> write_metadata_async: false (implicitly)
> *{color:red}On a standalone server (non-clustered), we are intermittently getting the error below:{color}*
> 2018-03-19 17:29:11,938 ERROR [Hibernate Search sync consumer thread for index com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer] o.h.s.e.i.LogErrorHandler [LogErrorHandler.java:69]
> *{color:red}HSEARCH000058: Exception occurred java.io.FileNotFoundException: Error loading metadata for index file{color}*: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> Primary Failure:
> Entity com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer Id 1649990024999813056 Work Type org.hibernate.search.backend.AddLuceneWork
> java.io.FileNotFoundException: Error loading metadata for index file: M|segments_w6|com.nucleus.integration.ws.server.globalcustomer.entity.GlobalCustomer|-1
> at org.infinispan.lucene.impl.DirectoryImplementor.openInput(DirectoryImplementor.java:138) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.infinispan.lucene.impl.DirectoryLucene.openInput(DirectoryLucene.java:102) ~[infinispan-lucene-directory-8.2.5.Final.jar:8.2.5.Final]
> at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:109) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:294) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexFileDeleter.<init>(IndexFileDeleter.java:171) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:949) ~[lucene-core-5.5.4.jar:5.5.4 31012120ebbd93744753eb37f1dbc5e654628291 - jpountz - 2017-02-08 19:08:03]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:126) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:92) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractCommitPolicy.getIndexWriter(AbstractCommitPolicy.java:33) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexCommitPolicy.getIndexWriter(SharedIndexCommitPolicy.java:77) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:36) ~[hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriterDelegate(AbstractWorkspaceImpl.java:203) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:81) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:46) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.applyChangesets(SyncWorkProcessor.java:165) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at org.hibernate.search.backend.impl.lucene.SyncWorkProcessor$Consumer.run(SyncWorkProcessor.java:151) [hibernate-search-engine-5.7.0.Final.jar:5.7.0.Final]
> at java.lang.Thread.run(Thread.java:785) [na:1.8.0-internal]
> *As per our understanding, this issue should not occur in a {color:red}'non-clustered'{color} environment. It should also not arise when worker execution is {color:red}'sync'{color}.*
> *We have debugged the code and confirmed that the value of {color:red}'write_metadata_async'{color} is indeed 'false' (as expected).*
> As this is a production environment (banking domain), we need your prompt suggestions and support.
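For reference, the reported setup corresponds roughly to the following Hibernate Search 5.x settings. This is only an orientation sketch: the property keys are the documented Hibernate Search ones, the resource file name is taken from the attachments, and "default" stands for the per-index defaults.
{code:java}
import java.util.Properties;

// Sketch of the configuration described in this report (illustrative).
public class ReportedSearchConfig {
   public static Properties properties() {
      Properties p = new Properties();
      // Store the Lucene indexes in Infinispan via the directory provider
      p.setProperty("hibernate.search.default.directory_provider", "infinispan");
      p.setProperty("hibernate.search.infinispan.configuration_resourcename",
            "neutrino-hibernatesearch-infinispan.xml");
      // JGroups worker backend with synchronous execution, as reported
      p.setProperty("hibernate.search.default.worker.backend", "jgroups");
      p.setProperty("hibernate.search.default.worker.execution", "sync");
      // Metadata writes are synchronous by default; the reporter confirmed
      // the effective value is 'false' by debugging
      p.setProperty("hibernate.search.default.write_metadata_async", "false");
      return p;
   }
}
{code}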
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8974) Avoid JBossMarshaller instance caches growing limitless
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8974?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-8974:
-----------------------------------
Status: Open (was: New)
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8974
> URL: https://issues.jboss.org/browse/ISPN-8974
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 8.2.10.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, each instance keeps instance caches which never shrink. This can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
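A rough sketch of the proposed release path is below. The pool shape and the cachedInstanceCount(...) accessor are hypothetical; as the last point says, JBoss Marshalling does not expose its internal instance cache size today, so Infinispan would need new plumbing to read it.
{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.jboss.marshalling.Marshaller;

// Hypothetical sketch of the bounded-release idea described above.
final class BoundedMarshallerPool {
   private static final int MAX_CACHED_INSTANCES = 1024; // threshold suggested above

   // 8.2.x configures 32 marshaller instances
   private final BlockingQueue<Marshaller> pool = new ArrayBlockingQueue<>(32);

   void release(Marshaller marshaller) {
      if (cachedInstanceCount(marshaller) > MAX_CACHED_INSTANCES) {
         // Instance cache grew too large: drop this marshaller instead of
         // pooling it, so a fresh one is created on the next acquire.
         return;
      }
      pool.offer(marshaller);
   }

   private int cachedInstanceCount(Marshaller marshaller) {
      // Hypothetical accessor: JBoss Marshalling keeps the instance cache
      // private, so this is exactly the access the issue asks for.
      throw new UnsupportedOperationException("instance cache size not exposed");
   }
}
{code}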
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8974) Avoid JBossMarshaller instance caches growing limitless
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-8974?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-8974:
-----------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/infinispan/infinispan/pull/5860
> Avoid JBossMarshaller instance caches growing limitless
> -------------------------------------------------------
>
> Key: ISPN-8974
> URL: https://issues.jboss.org/browse/ISPN-8974
> Project: Infinispan
> Issue Type: Bug
> Components: Marshalling
> Affects Versions: 8.2.10.Final
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
>
> In the 8.2.x design, JBoss Marshaller marshaller/unmarshaller instances were cached in thread locals to speed things up. Internally, each instance keeps instance caches which never shrink. This can create leaks.
> We should change the code so that, when releasing a marshaller/unmarshaller, if its instance cache is bigger than a certain size, we destroy the marshaller and let it be recreated. A good size not to go beyond would be 1024, given that in 8.2.x we configure the JBoss Marshaller instance count to 32.
> This would require Infinispan code to access and check the instance cache size.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8976) 2 subclusters failed to merge to 1 cluster with
by Robert Cernak (JIRA)
[ https://issues.jboss.org/browse/ISPN-8976?page=com.atlassian.jira.plugin.... ]
Robert Cernak updated ISPN-8976:
--------------------------------
Summary: 2 subclusters failed to merge to 1 cluster with (was: 2 subclusters failed to merge to 1 cluster)
> 2 subclusters failed to merge to 1 cluster with
> ------------------------------------------------
>
> Key: ISPN-8976
> URL: https://issues.jboss.org/browse/ISPN-8976
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.1.4.Final
> Reporter: Robert Cernak
> Attachments: logs.zip
>
>
> At the beginning I had a main cluster consisting of 8 nodes.
> Then I disconnected the main switch to which these nodes were connected.
> This led to the main cluster splitting into 2 subclusters - the first with 2 nodes and the second with 6 nodes. This was expected.
> After that I rebooted the nodes. After the reboot, the nodes again correctly formed 2 subclusters with 2 and 6 members.
> After a long time, when all nodes were stable with low CPU load, I reconnected the main switch, which should have led to the recreation of the main cluster with 8 controllers.
> However, the main cluster did not recover:
> subcluster2 did not change - it still had 6 nodes connected - no new members
> subcluster1 - its nodes did not connect with subcluster2, and after ca. 30 min they left the cluster.
> When I checked the Infinispan logs of node1 from the 1st subcluster, I saw an IllegalLifecycleStateException for every created cache (see the attached logs.zip):
> [transport-thread-744a974a-2811-4f79-ac63-f32daf005d7f-p4-t6] (ClusterCacheStatus.java:599) - ISPN000228: Failed to recover cache XXX state after the current node became the coordinator
> org.infinispan.IllegalLifecycleStateException: Cache container has been stopped and cannot be reused. Recreate the cache container.
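For what it is worth, the remedy in the exception message ("Recreate the cache container") amounts to something like the sketch below. This is illustrative only: the configuration file name is a placeholder, and a real fix would also have to deal with in-flight users of the old container.
{code:java}
import java.io.IOException;

import org.infinispan.Cache;
import org.infinispan.IllegalLifecycleStateException;
import org.infinispan.manager.DefaultCacheManager;

// Illustrative: rebuild the cache container once it has been stopped.
public class ContainerRecreation {
   private volatile DefaultCacheManager manager;

   public ContainerRecreation() throws IOException {
      manager = new DefaultCacheManager("infinispan.xml"); // placeholder config
   }

   public <K, V> Cache<K, V> cache(String name) throws IOException {
      try {
         return manager.getCache(name);
      } catch (IllegalLifecycleStateException stopped) {
         // The old container cannot be reused: build a new one from scratch.
         manager = new DefaultCacheManager("infinispan.xml");
         return manager.getCache(name);
      }
   }
}
{code}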
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8976) 2 subclusters failed to merge to 1 cluster - IllegalLifecycleStateException
by Robert Cernak (JIRA)
[ https://issues.jboss.org/browse/ISPN-8976?page=com.atlassian.jira.plugin.... ]
Robert Cernak updated ISPN-8976:
--------------------------------
Summary: 2 subclusters failed to merge to 1 cluster - IllegalLifecycleStateException (was: 2 subclusters failed to merge to 1 cluster with )
> 2 subclusters failed to merge to 1 cluster - IllegalLifecycleStateException
> ---------------------------------------------------------------------------
>
> Key: ISPN-8976
> URL: https://issues.jboss.org/browse/ISPN-8976
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.1.4.Final
> Reporter: Robert Cernak
> Attachments: logs.zip
>
>
> At the beginning I had a main cluster consisting of 8 nodes.
> Then I disconnected the main switch to which these nodes were connected.
> This led to the main cluster splitting into 2 subclusters - the first with 2 nodes and the second with 6 nodes. This was expected.
> After that I rebooted the nodes. After the reboot, the nodes again correctly formed 2 subclusters with 2 and 6 members.
> After a long time, when all nodes were stable with low CPU load, I reconnected the main switch, which should have led to the recreation of the main cluster with 8 controllers.
> However, the main cluster did not recover:
> subcluster2 did not change - it still had 6 nodes connected - no new members
> subcluster1 - its nodes did not connect with subcluster2, and after ca. 30 min they left the cluster.
> When I checked the Infinispan logs of node1 from the 1st subcluster, I saw an IllegalLifecycleStateException for every created cache (see the attached logs.zip):
> [transport-thread-744a974a-2811-4f79-ac63-f32daf005d7f-p4-t6] (ClusterCacheStatus.java:599) - ISPN000228: Failed to recover cache XXX state after the current node became the coordinator
> org.infinispan.IllegalLifecycleStateException: Cache container has been stopped and cannot be reused. Recreate the cache container.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)
[JBoss JIRA] (ISPN-8976) 2 subclusters failed to merge to 1 cluster
by Robert Cernak (JIRA)
[ https://issues.jboss.org/browse/ISPN-8976?page=com.atlassian.jira.plugin.... ]
Robert Cernak updated ISPN-8976:
--------------------------------
Summary: 2 subclusters failed to merge to 1 cluster (was: 2 subclusters failed to )
> 2 subclusters failed to merge to 1 cluster
> ------------------------------------------
>
> Key: ISPN-8976
> URL: https://issues.jboss.org/browse/ISPN-8976
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.1.4.Final
> Reporter: Robert Cernak
> Attachments: logs.zip
>
>
> At the beginning I had a main cluster consisting of 8 nodes.
> Then I disconnected the main switch to which these nodes were connected.
> This led to the main cluster splitting into 2 subclusters - the first with 2 nodes and the second with 6 nodes. This was expected.
> After that I rebooted the nodes. After the reboot, the nodes again correctly formed 2 subclusters with 2 and 6 members.
> After a long time, when all nodes were stable with low CPU load, I reconnected the main switch, which should have led to the recreation of the main cluster with 8 controllers.
> However, the main cluster did not recover:
> subcluster2 did not change - it still had 6 nodes connected - no new members
> subcluster1 - its nodes did not connect with subcluster2, and after ca. 30 min they left the cluster.
> When I checked the Infinispan logs of node1 from the 1st subcluster, I saw an IllegalLifecycleStateException for every created cache (see the attached logs.zip):
> [transport-thread-744a974a-2811-4f79-ac63-f32daf005d7f-p4-t6] (ClusterCacheStatus.java:599) - ISPN000228: Failed to recover cache XXX state after the current node became the coordinator
> org.infinispan.IllegalLifecycleStateException: Cache container has been stopped and cannot be reused. Recreate the cache container.
--
This message was sent by Atlassian JIRA
(v7.5.0#75005)