[JBoss JIRA] (ISPN-4994) MultiHotRodServerIspnDirReplQueryTest fails consistently
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-4994?page=com.atlassian.jira.plugin.... ]
Adrian Nistor commented on ISPN-4994:
-------------------------------------
{noformat}
MultiHotRodServerIspnDirReplQueryTest.testAttributeQuery
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.infinispan.client.hotrod.query.MultiHotRodServerQueryTest.testAttributeQuery(MultiHotRodServerQueryTest.java:124)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:348)
at org.testng.SuiteRunner.access$000(SuiteRunner.java:38)
at org.testng.SuiteRunner$SuiteWorker.run(SuiteRunner.java:382)
at org.testng.internal.thread.ThreadUtil$2.call(ThreadUtil.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}
> MultiHotRodServerIspnDirReplQueryTest fails consistently
> --------------------------------------------------------
>
> Key: ISPN-4994
> URL: https://issues.jboss.org/browse/ISPN-4994
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Querying
> Affects Versions: 7.0.1.Final
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
>
> See failure in CI here: http://ci.infinispan.org/viewLog.html?buildId=14226&tab=buildResultsDiv&b...
--
This message was sent by Atlassian JIRA
(v6.3.8#6338)
11 years, 4 months
[JBoss JIRA] (ISPN-4949) Split brain: inconsistent data after merge
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-4949?page=com.atlassian.jira.plugin.... ]
Dan Berindei commented on ISPN-4949:
------------------------------------
Bela, speaking of reliable view installation, can you clarify why view acks are needed with the current algorithm? I was reminded of them because I'm getting these errors in my stress test:
{noformat}
22:38:00,391 WARN (Incoming-1,C-35962:) [GMS] C-35962: failed to collect all ACKs (expected=2) for view [C-35962|5] after 2000ms, missing 1 ACKs from (1) C-35962
22:38:00,412 WARN (Incoming-1,A-23928:) [GMS] A-23928: failed to collect all ACKs (expected=2) for view [C-35962|5] after 2000ms, missing 1 ACKs from (1) A-23928
22:38:21,339 WARN (Incoming-1,C-35962:) [GMS] C-35962: failed to collect all ACKs (expected=2) for view [C-35962|7] after 2000ms, missing 1 ACKs from (1) C-35962
22:38:21,364 WARN (Incoming-1,D-4191:) [GMS] D-4191: failed to collect all ACKs (expected=2) for view [C-35962|7] after 2000ms, missing 1 ACKs from (1) D-4191
22:38:45,348 WARN (Incoming-1,C-35962:) [GMS] C-35962: failed to collect all ACKs (expected=2) for view [C-35962|9] after 2000ms, missing 1 ACKs from (1) C-35962
22:38:45,368 WARN (Incoming-1,B-18775:) [GMS] B-18775: failed to collect all ACKs (expected=2) for view [C-35962|9] after 2000ms, missing 1 ACKs from (1) B-18775
22:39:06,304 WARN (Incoming-1,C-35962:) [GMS] C-35962: failed to collect all ACKs (expected=2) for view [C-35962|11] after 2000ms, missing 1 ACKs from (1) C-35962
22:39:06,326 WARN (Incoming-1,A-23928:) [GMS] A-23928: failed to collect all ACKs (expected=1) for view [C-35962|11] after 2000ms, missing 1 ACKs from (1) A-23928
22:39:18,935 WARN (Incoming-1,D-4191:) [GMS] D-4191: failed to collect all ACKs (expected=2) for view [C-35962|12] after 2000ms, missing 2 ACKs from (2) A-23928, D-4191
{noformat}
[~rvansa] the current PR looks pretty good in my stress tests. I still get some failures because MERGE3 sometimes merges the partitions in 2 steps and it takes > 20 seconds to install the final view, but otherwise waiting for an ack from all the members before handling view updates seems to do the trick.
> Split brain: inconsistent data after merge
> ------------------------------------------
>
> Key: ISPN-4949
> URL: https://issues.jboss.org/browse/ISPN-4949
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 7.0.0.Final
> Reporter: Radim Vansa
> Assignee: Dan Berindei
> Priority: Critical
>
> 1) cluster A, B, C, D splits into 2 parts:
> A, B (coord A) finds this out immediately and enters degraded mode with CH [A, B, C, D]
> C, D (coord D) first detects that B is lost, gets view A, C, D and starts rebalance with CH [A, C, D]. Segment X is primary owned by C (it had backup on B but this got lost)
> 2) D detects that A was lost as well, therefore enters degraded mode with CH [A, C, D]
> 3) C inserts entry into X: all owners (only C) is present, therefore the modification is allowed
> 4) cluster is merged and coordinator finds out that the max stable topology has CH [A, B, C, D] (it is the older of the two partitions' topologies, got from A, B) - logs 'No active or unavailable partitions, so all the partitions must be in degraded mode' (yes, all partitions are in degraded mode, but write has happened in the meantime)
> 5) The old CH is broadcast in newest topology, no rebalance happens
> 6) Inconsistency: read in X may miss the update
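The six steps above can be condensed into a toy model (a hedged sketch in plain Java; the store maps, node names, and values are illustrative, not Infinispan APIs): segment X starts with owners [C, B], C's partition allows the write because C is the only present owner, and the merge reinstalls the old CH without rebalancing, so B still serves the stale value.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the ISPN-4949 scenario (illustrative names, not Infinispan
// APIs). Segment X is owned by [C (primary), B (backup)] in the pre-split CH.
public class SplitBrainSketch {

    // Returns the value each owner of segment X reports after the merge.
    static Map<String, String> simulate() {
        Map<String, String> storeOnB = new HashMap<>();
        Map<String, String> storeOnC = new HashMap<>();
        storeOnB.put("X", "v1");
        storeOnC.put("X", "v1");

        // Steps 1-3: {A,B} degrades immediately; {C,D} rebalances first, so C
        // becomes the sole present owner of X and the write is allowed.
        storeOnC.put("X", "v2");

        // Steps 4-5: the merge coordinator reinstalls the old CH [A, B, C, D]
        // and skips rebalance, so B never receives the update.
        Map<String, String> reads = new HashMap<>();
        reads.put("C", storeOnC.get("X")); // updated value
        reads.put("B", storeOnB.get("X")); // step 6: stale read
        return reads;
    }

    public static void main(String[] args) {
        System.out.println(simulate());
    }
}
```

A read routed to B after the merge returns "v1" even though C committed "v2", which is exactly the inconsistency in step 6.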
[JBoss JIRA] (ISPN-4978) Adding ClientListener on secured cache fails
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-4978?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant reassigned ISPN-4978:
-------------------------------------
Assignee: Tristan Tarrant
> Adding ClientListener on secured cache fails
> --------------------------------------------
>
> Key: ISPN-4978
> URL: https://issues.jboss.org/browse/ISPN-4978
> Project: Infinispan
> Issue Type: Bug
> Components: Security, Server
> Reporter: Vojtech Juranek
> Assignee: Tristan Tarrant
>
> Executing a {{ClientListener}}-related operation (e.g. {{addClientListener}}) on a secured {{RemoteCache}} fails with
> {noformat}
> ERROR [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread--p3-t2) ISPN000260: Exception executing command: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'EXEC' permission
> at org.infinispan.security.impl.AuthorizationHelper.checkPermission(AuthorizationHelper.java:76) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.security.impl.AuthorizationManagerImpl.checkPermission(AuthorizationManagerImpl.java:44) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.ensureAccessPermissions(DefaultExecutorService.java:635) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.<init>(DefaultExecutorService.java:166) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.<init>(DefaultExecutorService.java:139) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.notifications.cachelistener.cluster.ClusterListenerReplicateCallable.setEnvironment(ClusterListenerReplicateCallable.java:65) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.commands.read.DistributedExecuteCommand.perform(DistributedExecuteCommand.java:96) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:97) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:52) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:193) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_55]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_55]
> {noformat}
[JBoss JIRA] (ISPN-4978) Adding ClientListener on secured cache fails
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-4978?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-4978:
----------------------------------
Status: Open (was: New)
> Adding ClientListener on secured cache fails
> --------------------------------------------
>
> Key: ISPN-4978
> URL: https://issues.jboss.org/browse/ISPN-4978
> Project: Infinispan
> Issue Type: Bug
> Components: Security, Server
> Reporter: Vojtech Juranek
> Assignee: Tristan Tarrant
>
> Executing a {{ClientListener}}-related operation (e.g. {{addClientListener}}) on a secured {{RemoteCache}} fails with
> {noformat}
> ERROR [org.infinispan.remoting.InboundInvocationHandlerImpl] (remote-thread--p3-t2) ISPN000260: Exception executing command: java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'EXEC' permission
> at org.infinispan.security.impl.AuthorizationHelper.checkPermission(AuthorizationHelper.java:76) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.security.impl.AuthorizationManagerImpl.checkPermission(AuthorizationManagerImpl.java:44) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.ensureAccessPermissions(DefaultExecutorService.java:635) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.<init>(DefaultExecutorService.java:166) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.distexec.DefaultExecutorService.<init>(DefaultExecutorService.java:139) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.notifications.cachelistener.cluster.ClusterListenerReplicateCallable.setEnvironment(ClusterListenerReplicateCallable.java:65) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.commands.read.DistributedExecuteCommand.perform(DistributedExecuteCommand.java:96) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl.handleInternal(InboundInvocationHandlerImpl.java:97) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl.access$000(InboundInvocationHandlerImpl.java:52) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at org.infinispan.remoting.InboundInvocationHandlerImpl$2.run(InboundInvocationHandlerImpl.java:193) [infinispan-core.jar:7.0.1-SNAPSHOT]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_55]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_55]
> {noformat}
[JBoss JIRA] (ISPN-4939) The text is repeated three times in META-INF\services
by ratking (JIRA)
[ https://issues.jboss.org/browse/ISPN-4939?page=com.atlassian.jira.plugin.... ]
ratking commented on ISPN-4939:
-------------------------------
Thank you very much!
:-)
> The text is repeated three times in META-INF\services
> -----------------------------------------------------
>
> Key: ISPN-4939
> URL: https://issues.jboss.org/browse/ISPN-4939
> Project: Infinispan
> Issue Type: Bug
> Components: Build process
> Affects Versions: 7.0.0.Final
> Reporter: ratking
> Assignee: Tristan Tarrant
> Priority: Blocker
> Fix For: 7.0.2.Final
>
>
> Download infinispan-7.0.0.Final-all.zip or infinispan-7.0.1.Final-all.zip from http://infinispan.org/download/ and unzip the file.
> {quote}
> infinispan-embedded-7.0.0.Final.jar\META-INF\beans.xml (Has been fixed in 7.0.1)
> infinispan-embedded-7.0.0.Final.jar\META-INF\services\
> infinispan-embedded-query-7.0.0.Final.jar\META-INF\services\
> infinispan-remote-7.0.0.Final.jar\META-INF\services\
> {quote}
> Open these files with a text editor and you will find that the text is *repeated three times*.
> This is disastrous.
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by Periyasamy Palanisamy (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
Periyasamy Palanisamy edited comment on ISPN-3395 at 11/19/14 2:23 AM:
-----------------------------------------------------------------------
Is this issue fixed in any Infinispan version?
If not, please let me know when it will be fixed. This is a major blocker in Infinispan 5.3 during the node reboot scenario.
was (Author: palani.peri):
Is this issue fixed in any of the infinispan versions ?
If not, Please let us when this issue gets fixed. This is a major blocker in infinispan during node reboot scenario.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-3395
> URL: https://issues.jboss.org/browse/ISPN-3395
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.3.0.Final
> Reporter: Mayank Agarwal
>
> We are using Infinispan 5.3.0.Final in our distributed application. We are testing Infinispan in HA scenarios and getting the following exception when a new node becomes the coordinator.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> java.lang.NullPointerException: null
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:455) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:235) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:647) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.6.0_25]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_25]
> This is happening because cacheTopology is null at ClusterTopologyManagerImpl.java:455.
> At line 449 the code checks cacheTopology for null, but only the topologyList.add() call is inside that check; the for loop that updates cacheStatusMap at line 457 should be inside the same null check.
> Fix:
> --- a/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> +++ b/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> @@ -448,7 +448,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> // but didn't get a response back yet
> if (cacheTopology != null) {
> topologyList.add(cacheTopology);
> - }
> +
>
> // Add all the members of the topology that have sent responses first
> // If we only added the sender, we could end up with a different member order
> @@ -457,6 +457,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> cacheStatusMap.get(cacheName).addMember(member);
> }
> }
> + }
>
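The intent of the patch is simply to move the closing brace so the member-registration loop is covered by the same null guard. A hedged reconstruction of that control flow (plain Java; recoverMembers and its list argument stand in for the real cacheTopology/cacheStatusMap types):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged reconstruction of the patched control flow: the loop that registers
// topology members must sit inside the same null check that guards
// topologyList.add(cacheTopology), otherwise it dereferences a null topology.
public class RecoverSketch {

    static List<String> recoverMembers(List<String> cacheTopologyMembers) {
        List<String> registered = new ArrayList<>();
        if (cacheTopologyMembers != null) {
            // topologyList.add(cacheTopology) happens here in the real code
            for (String member : cacheTopologyMembers) {
                // stands in for cacheStatusMap.get(cacheName).addMember(member)
                registered.add(member);
            }
        } // <- the patch moves this brace to after the loop
        return registered;
    }

    public static void main(String[] args) {
        System.out.println(recoverMembers(null));
        System.out.println(recoverMembers(Arrays.asList("A", "B")));
    }
}
```

With the brace in its original place the loop runs even when the topology is null; with the patched placement a null topology skips both the add and the loop.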
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by Periyasamy Palanisamy (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
Periyasamy Palanisamy commented on ISPN-3395:
---------------------------------------------
Is this issue fixed in any Infinispan version?
If not, please let us know when it will be fixed. This is a major blocker in Infinispan during the node reboot scenario.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-3395
> URL: https://issues.jboss.org/browse/ISPN-3395
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.3.0.Final
> Reporter: Mayank Agarwal
>
> We are using Infinispan 5.3.0.Final in our distributed application. We are testing Infinispan in HA scenarios and getting the following exception when a new node becomes the coordinator.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> java.lang.NullPointerException: null
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:455) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:235) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:647) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.6.0_25]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_25]
> This is happening because cacheTopology is null at ClusterTopologyManagerImpl.java:455.
> At line 449 the code checks cacheTopology for null, but only the topologyList.add() call is inside that check; the for loop that updates cacheStatusMap at line 457 should be inside the same null check.
> Fix:
> --- a/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> +++ b/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> @@ -448,7 +448,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> // but didn't get a response back yet
> if (cacheTopology != null) {
> topologyList.add(cacheTopology);
> - }
> +
>
> // Add all the members of the topology that have sent responses first
> // If we only added the sender, we could end up with a different member order
> @@ -457,6 +457,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> cacheStatusMap.get(cacheName).addMember(member);
> }
> }
> + }
>
[JBoss JIRA] (ISPN-4991) Implement clustered cache statistics
by Vladimir Blagojevic (JIRA)
[ https://issues.jboss.org/browse/ISPN-4991?page=com.atlassian.jira.plugin.... ]
Vladimir Blagojevic updated ISPN-4991:
--------------------------------------
Parent: ISPN-4993
Issue Type: Sub-task (was: Task)
> Implement clustered cache statistics
> ------------------------------------
>
> Key: ISPN-4991
> URL: https://issues.jboss.org/browse/ISPN-4991
> Project: Infinispan
> Issue Type: Sub-task
> Components: JMX, reporting and management
> Reporter: Vladimir Blagojevic
> Assignee: Vladimir Blagojevic
>
> As of the 7.0.0 release, cache statistics are implemented at a per-node cache level. For the Infinispan admin console we need aggregate statistics for each cache across all nodes in the cluster. The implementing class should be a registered MBean and should expose cache statistics similar to those currently provided by org.infinispan.interceptors.CacheMgmtInterceptor.
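A minimal sketch of the aggregation involved (plain Java; NodeStats and clusterHitRatio are illustrative names, not the Infinispan MBean API): additive counters such as hits and misses are summed across nodes, and ratios are derived from the summed counters rather than averaged per node.

```java
import java.util.Arrays;
import java.util.Collection;

// Toy per-node statistics snapshot; the real implementation would pull these
// counters from each node's CacheMgmtInterceptor-backed statistics.
public class ClusterStatsSketch {

    static class NodeStats {
        final long hits, misses;
        NodeStats(long hits, long misses) { this.hits = hits; this.misses = misses; }
    }

    // Sum the additive counters first, then derive the cluster-wide ratio
    // from the sums; averaging per-node ratios would weight a nearly idle
    // node the same as a heavily loaded one.
    static double clusterHitRatio(Collection<NodeStats> perNode) {
        long hits = 0, total = 0;
        for (NodeStats s : perNode) {
            hits += s.hits;
            total += s.hits + s.misses;
        }
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        double ratio = clusterHitRatio(Arrays.asList(
                new NodeStats(90, 10),    // busy node, 90% local hit ratio
                new NodeStats(0, 900)));  // cold node, 0% local hit ratio
        System.out.println(ratio);        // 90 hits out of 1000 requests
    }
}
```

Here the counter-sum approach yields 90/1000 = 0.09, while naively averaging the two per-node ratios would report 0.45, which is why the aggregate MBean should expose summed counters.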