[JBoss JIRA] (ISPN-6391) Cache managers failing to start do not stop global components
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6391?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6391:
--------------------------------------
Fix Version/s: 9.0.0.Alpha1
> Cache managers failing to start do not stop global components
> -------------------------------------------------------------
>
> Key: ISPN-6391
> URL: https://issues.jboss.org/browse/ISPN-6391
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.2.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 9.0.0.Final, 9.0.0.Alpha1, 8.2.1.Final
>
>
> If one of the global components fails to start, {{GlobalComponentRegistry.start()}} removes the volatile components, but it doesn't call {{stop()}} on those components.
> The most likely reason for a global component start failure is a timeout in {{JGroupsTransport.waitForInitialNodes()}}. After such a timeout, the transport isn't stopped, so the channel's sockets and threads are only freed after a few GC cycles (via finalization).
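A minimal sketch (not Infinispan code; all names invented) of the cleanup this report asks for: when one component fails to start, call {{stop()}} on the components that already started, in reverse order, instead of just dropping the references and leaving resources to finalization.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch of rollback-on-failed-start for a registry of lifecycle components.
public class StartRollback {
    public interface Lifecycle {
        void start();
        void stop();
    }

    public static void startAll(List<Lifecycle> components) {
        Deque<Lifecycle> started = new ArrayDeque<>();
        try {
            for (Lifecycle c : components) {
                c.start();
                started.push(c); // remember successfully started components
            }
        } catch (RuntimeException startFailure) {
            // Roll back in reverse start order; a failing stop() must not
            // prevent the remaining components from being stopped.
            while (!started.isEmpty()) {
                try {
                    started.pop().stop();
                } catch (RuntimeException ignored) {
                }
            }
            throw startFailure;
        }
    }
}
```

In the transport example above, this is what would free the channel's sockets and threads immediately rather than after a few GC cycles.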
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6239) InitialClusterSizeTest.testInitialClusterSizeFail random failures
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6239?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6239:
--------------------------------------
Fix Version/s: 9.0.0.Final
9.0.0.Alpha1
> InitialClusterSizeTest.testInitialClusterSizeFail random failures
> -----------------------------------------------------------------
>
> Key: ISPN-6239
> URL: https://issues.jboss.org/browse/ISPN-6239
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 8.2.0.Beta2
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_failure
> Fix For: 8.2.0.CR1, 8.2.0.Final, 9.0.0.Final, 9.0.0.Alpha1
>
>
> The test starts 3 nodes concurrently, but configures Infinispan to wait for a cluster of 4 nodes, and expects the nodes to fail to start within {{initialClusterTimeout}} + 1 second.
> However, because of a bug in {{TEST_PING}}, the first 2 nodes each see the other as coordinator and send a {{JOIN}} request to each other, and it takes 3 seconds to recover and start the cluster properly.
> The bug in {{TEST_PING}} is actually a hack introduced for {{ISPN-5106}}. The problem was that the first node to start (A) would install a view with itself as the single member, but the second node to start (B) would start immediately, and the discovery request from B would reach B's {{TEST_PING}} before it saw that view. B could then choose itself as the coordinator based on the order of A's and B's UUIDs, and the cluster would start as 2 partitions. Since most of our tests remove {{MERGE3}} from the protocol stack, the partitions would never merge and the test would fail with a timeout.
> I fixed this in {{TEST_PING}} by assuming that the sender of the first discovery response is a coordinator when there is a single response. This worked because all but a few tests start their managers sequentially; however, it sometimes introduces this 3-second delay when nodes start in parallel.
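The heuristic described above can be sketched as follows (a hypothetical illustration, all names invented, not the actual {{TEST_PING}} code): with exactly one discovery response, trust its sender as the coordinator; otherwise fall back to an election by address order, which is where two concurrently starting nodes can each pick the other.

```java
import java.util.List;

// Sketch of a coordinator-selection heuristic during discovery.
public class DiscoveryHeuristic {
    public static String chooseCoordinator(String self, List<String> responders) {
        if (responders.size() == 1) {
            // Single response: assume the responder already coordinates a view.
            return responders.get(0);
        }
        // Several responses (or none): elect the lowest address, ourselves included.
        String best = self;
        for (String r : responders) {
            if (r.compareTo(best) < 0) {
                best = r;
            }
        }
        return best;
    }
}
```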
[JBoss JIRA] (ISPN-6409) NPE in ChannelMetric for non-master nodes
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-6409?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-6409:
--------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
Integrated in master and 8.2.x. Thanks [~NadirX]!
> NPE in ChannelMetric for non-master nodes
> -----------------------------------------
>
> Key: ISPN-6409
> URL: https://issues.jboss.org/browse/ISPN-6409
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.2.0.Final
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Fix For: 9.0.0.Final, 9.0.0.Alpha1, 8.2.1.Final
>
>
> Attempting to retrieve the jgroups subsystem attributes of a non-master node on a RELAY channel causes an NPE.
> [Server:earth-one] 18:36:34,055 ERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 35) WFLYCTL0013: Operation ("read-attribute") failed - address: ([
> [Server:earth-one] ("subsystem" => "datagrid-jgroups"),
> [Server:earth-one] ("channel" => "xsite")
> [Server:earth-one] ]): java.lang.IllegalArgumentException: value is null
> [Server:earth-one] at org.jboss.dmr.ModelNode.<init>(ModelNode.java:162)
> [Server:earth-one] at org.infinispan.server.jgroups.subsystem.ChannelMetric$2.execute(ChannelMetric.java:46)
> [Server:earth-one] at org.infinispan.server.jgroups.subsystem.ChannelMetric$2.execute(ChannelMetric.java:43)
> [Server:earth-one] at org.infinispan.server.jgroups.subsystem.ChannelMetricExecutor.execute(ChannelMetricExecutor.java:47)
> [Server:earth-one] at org.infinispan.server.commons.controller.MetricHandler.executeRuntimeStep(MetricHandler.java:70)
> [Server:earth-one] at org.jboss.as.controller.AbstractRuntimeOnlyHandler$1.execute(AbstractRuntimeOnlyHandler.java:53)
> [Server:earth-one] at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:890)
> [Server:earth-one] at org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:659)
> [Server:earth-one] at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:370)
> [Server:earth-one] at org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1344)
> [Server:earth-one] at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:392)
> [Server:earth-one] at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:217)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler.internalExecute(TransactionalProtocolOperationHandler.java:247)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler.doExecute(TransactionalProtocolOperationHandler.java:185)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$1.run(TransactionalProtocolOperationHandler.java:138)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$1.run(TransactionalProtocolOperationHandler.java:134)
> [Server:earth-one] at java.security.AccessController.doPrivileged(Native Method)
> [Server:earth-one] at javax.security.auth.Subject.doAs(Subject.java:360)
> [Server:earth-one] at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:81)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2$1.run(TransactionalProtocolOperationHandler.java:157)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2$1.run(TransactionalProtocolOperationHandler.java:153)
> [Server:earth-one] at java.security.AccessController.doPrivileged(Native Method)
> [Server:earth-one] at org.jboss.as.controller.remote.TransactionalProtocolOperationHandler$ExecuteRequestHandler$2.execute(TransactionalProtocolOperationHandler.java:153)
> [Server:earth-one] at org.jboss.as.protocol.mgmt.AbstractMessageHandler$ManagementRequestContextImpl$1.doExecute(AbstractMessageHandler.java:363)
> [Server:earth-one] at org.jboss.as.protocol.mgmt.AbstractMessageHandler$AsyncTaskRunner.run(AbstractMessageHandler.java:472)
> [Server:earth-one] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [Server:earth-one] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [Server:earth-one] at java.lang.Thread.run(Thread.java:745)
> [Server:earth-one] at org.jboss.threads.JBossThread.run(JBossThread.java:320)
[JBoss JIRA] (ISPN-6405) Persistence configuration with clustered cache can cause duplicate expiration messages
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-6405?page=com.atlassian.jira.plugin.... ]
William Burns commented on ISPN-6405:
-------------------------------------
I figured out the core issue here. Because in-memory expiration is asynchronous, the in-memory pass completes quickly, creating a batch of asynchronous expiration requests, and then proceeds to process the store synchronously. The store can therefore read an entry and attempt to remove it after the asynchronous thread has already expired it, producing 2 notifications. The true fix is to make {{processExpiration}} synchronous for in-memory entries. Honestly, this fix is also better in its own right: previously, enough expired in-memory entries could spin up and keep all the async threads busy, which is exactly what we don't want.
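An illustrative sketch of the fix described above (invented names, not the actual Infinispan code): expire in-memory entries synchronously, so that by the time the store pass runs, overlapping keys are already recorded as handled and cannot trigger a second notification.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of a two-pass expiration sweep with single notifications per key.
public class ExpirationPass {
    private final Set<String> expired = new LinkedHashSet<>();
    private final List<String> notifications = new ArrayList<>();

    private void processExpiration(String key) {
        // Only the first expiration of a key produces a notification.
        if (expired.add(key)) {
            notifications.add(key);
        }
    }

    public List<String> purge(List<String> inMemoryExpired, List<String> storeExpired) {
        // Synchronous in-memory pass first: when the store pass runs, these
        // keys are already recorded, so overlapping keys are not re-notified.
        inMemoryExpired.forEach(this::processExpiration);
        storeExpired.forEach(this::processExpiration);
        return notifications;
    }
}
```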
> Persistence configuration with clustered cache can cause duplicate expiration messages
> --------------------------------------------------------------------------------------
>
> Key: ISPN-6405
> URL: https://issues.jboss.org/browse/ISPN-6405
> Project: Infinispan
> Issue Type: Bug
> Components: Expiration
> Reporter: William Burns
> Assignee: William Burns
>
> It seems that some persistence configurations cause an entry to be expired twice when it should have expired only once.
[JBoss JIRA] (ISPN-6022) Unable to query cache when data is preloaded via AdvancedCacheLoader#process
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6022?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes commented on ISPN-6022:
-----------------------------------------
If you use {{index.addProperty("default.directory_provider", "ram")}} the indexes will be volatile, and preload will also cause a re-index. Be aware that {{ram}} storage is tailored for small to medium indexes only.
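For reference, a hedged sketch of where that property goes in an embedded cache configuration (Infinispan 8.x-era API; check the indexing documentation for your version):

```java
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.Index;

// Sketch only: enables indexing with the volatile "ram" directory provider
// mentioned above, which causes preload to trigger a re-index.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.indexing()
       .index(Index.ALL)
       .addProperty("default.directory_provider", "ram");
```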
> Unable to query cache when data is preloaded via AdvancedCacheLoader#process
> ----------------------------------------------------------------------------
>
> Key: ISPN-6022
> URL: https://issues.jboss.org/browse/ISPN-6022
> Project: Infinispan
> Issue Type: Bug
> Components: Embedded Querying, Loaders and Stores
> Affects Versions: 8.1.0.Final, 8.1.1.Final
> Reporter: Dan Siviter
>
> When preloading from an {{AdvancedCacheLoader}}, the index doesn't get updated, so it is only possible to query items that have been {{#put(...)}} into the cache. I am able to get preloaded items from the cache by their key, which leads me to think the index is never built on preload.
> I've seen no implicit index rebuilding in any of the existing {{AdvancedCacheLoader#process(...)}} implementations, which leads me to think this will not work with any of them.
> I've verified this by reindexing with {{searchManager.getMassIndexer().start()}}; the query then returns results.