[JBoss JIRA] (ISPN-3942) HotRod client keeps trying to recover a connection to a cluster member after shutdown
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3942?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3942:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1059277|https://bugzilla.redhat.com/show_bug.cgi?id=1059277] from ASSIGNED to MODIFIED
> HotRod client keeps trying to recover a connection to a cluster member after shutdown
> --------------------------------------------------------------------------------------
>
> Key: ISPN-3942
> URL: https://issues.jboss.org/browse/ISPN-3942
> Project: Infinispan
> Issue Type: Bug
> Components: Remote Protocols
> Affects Versions: 6.0.1.Final, 7.0.0.Alpha1
> Environment: Infinispan server with a HotRod client in client-server mode and replicated caches.
> Reporter: Wolf-Dieter Fink
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: clustered, hotrod, hotrod-java-client
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
> Attachments: hotrod-client.log
>
>
> If a HotRod client is connected to a server cluster and one of the servers shuts down cleanly, the client keeps trying to reconnect to that server even after the cluster view has changed.
> Only a replicated cache is configured.
> The server log shows that the new cluster view is established:
> 16:00:22,290 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (Incoming-3,shared=udp) ISPN000094: Received new cluster view: [node1/clustered|2] (1) [node1/clustered]
> There is no failure from the application's perspective, but a retry happens every time the round-robin balancer returns the unavailable server.
> This might cause performance problems or failures in a larger cluster when part of it goes down for some reason.
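To illustrate the retry pattern described above, here is a minimal, hypothetical sketch of a round-robin balancer over a static server list (not the actual Infinispan client code): a stopped server that is never removed from the list is still returned on every pass, so the client keeps attempting to reconnect.

    import java.net.InetSocketAddress;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical illustration, not Infinispan's real balancer: a fixed
    // round-robin list keeps handing out a dead server unless the list is
    // refreshed when the cluster topology changes.
    class RoundRobinBalancer {
        private final List<InetSocketAddress> servers;
        private final AtomicInteger index = new AtomicInteger();

        RoundRobinBalancer(List<InetSocketAddress> servers) {
            this.servers = servers;
        }

        InetSocketAddress nextServer() {
            // If a shut-down server is never removed from 'servers',
            // every pass still returns it and triggers a reconnect attempt.
            int i = Math.floorMod(index.getAndIncrement(), servers.size());
            return servers.get(i);
        }
    }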
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[JBoss JIRA] (ISPN-2255) FineGrainedAtomicMap missing key/value pairs in some cluster nodes in distributed mode, embedded Infinispan cache
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-2255?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-2255:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1042861|https://bugzilla.redhat.com/show_bug.cgi?id=1042861] from NEW to MODIFIED
> FineGrainedAtomicMap missing key/value pairs in some cluster nodes in distributed mode, embedded Infinispan cache
> ------------------------------------------------------------------------------------------------------------------
>
> Key: ISPN-2255
> URL: https://issues.jboss.org/browse/ISPN-2255
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.1.5.FINAL
> Environment: Details about cluster:
> 1. Infinispan 5.1.5
> 2. java 7u5 64bit
> 3. linux 64bit (two nodes debian and one node ubuntu)
> 4. tomcat 7.0.28
> 5. jbossTM 4.16.0 JTA only
> Tomcat integration code is here:
> https://github.com/zvrablik/tomcatInfinispanSessionManager/tree/master/to...
> 6. all machines run on real hardware (no virtual machines); date and time are synchronized and shouldn't vary by more than 1 second
> Reporter: Zdenek Henek
> Assignee: Pedro Ruivo
> Priority: Critical
> Labels: 621
> Fix For: 7.0.0.Alpha1, 7.0.0.Final
>
> Attachments: fgamissueTest2.zip, jgroupsConfig.xml, sessionInfinispanConfigtestLB.xml
>
>
> I am using FineGrainedAtomicMap to store data in a clustered distributed cache. Replicated mode doesn't cause any issues, or at least I haven't run
> into any. When I switch to distributed mode, some key/value pairs are missing on some nodes. This happens randomly. numOwners is set to two and I have a cluster of three nodes.
> more details:
> https://community.jboss.org/thread/203420?tstart=60
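For context, the usage pattern in question looks roughly like the sketch below (cache and key names are illustrative; assumes a transactional, clustered Cache instance). AtomicMapLookup.getFineGrainedAtomicMap groups several key/value pairs under a single cache key with fine-grained locking per map key.

    import org.infinispan.Cache;
    import org.infinispan.atomic.AtomicMap;
    import org.infinispan.atomic.AtomicMapLookup;

    // Minimal sketch of the reported usage (names are illustrative).
    public class SessionAttributeStore {
        public static void storeAttributes(Cache<String, Object> cache) {
            // Obtain (or create) a fine-grained atomic map under one cache key.
            AtomicMap<String, String> session =
                    AtomicMapLookup.getFineGrainedAtomicMap(cache, "session-42");

            // In distributed mode with numOwners=2, the reporter observed
            // that some of these pairs were missing on certain nodes.
            session.put("user", "alice");
            session.put("lastAccess", "16:00:22");
        }
    }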
[JBoss JIRA] (ISPN-4053) WriteSkew not throwing an exception for AtomicMap in REPL and DIST mode
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-4053?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-4053:
-----------------------------------------------
Tristan Tarrant <ttarrant(a)redhat.com> changed the Status of [bug 1047908|https://bugzilla.redhat.com/show_bug.cgi?id=1047908] from POST to MODIFIED
> WriteSkew not throwing an exception for AtomicMap in REPL and DIST mode
> -----------------------------------------------------------------------
>
> Key: ISPN-4053
> URL: https://issues.jboss.org/browse/ISPN-4053
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 6.0.1.Final
> Reporter: Pedro Ruivo
> Assignee: Pedro Ruivo
> Labels: 621
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
>
> The writeSkewCheck flag does not work correctly for AtomicMap: when Infinispan runs with REPEATABLE_READ transaction isolation, the writeSkewCheck flag is enabled, and two concurrent transactions read and write data in the same AtomicMap, no WriteSkewException is thrown when either transaction commits.
> Consequently, the data stored in the AtomicMap might be changed by another transaction, and the current transaction will not detect this change during commit.
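A minimal sketch of the scenario under the stated configuration (REPEATABLE_READ plus writeSkewCheck enabled); the cache and key names are illustrative:

    import javax.transaction.TransactionManager;

    import org.infinispan.Cache;
    import org.infinispan.atomic.AtomicMap;
    import org.infinispan.atomic.AtomicMapLookup;

    // Illustrative only: two transactions read, then update, the same
    // AtomicMap entry. With a working write-skew check, the second commit
    // should fail with a WriteSkewException; per this issue it never does,
    // so the later write silently overwrites the earlier one.
    public class WriteSkewScenario {
        static void readThenIncrement(Cache<String, Object> cache) throws Exception {
            TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
            tm.begin();
            AtomicMap<String, Integer> map = AtomicMapLookup.getAtomicMap(cache, "counter");
            Integer seen = map.get("value");
            // ...a concurrent transaction commits its own update here...
            map.put("value", seen == null ? 1 : seen + 1);
            tm.commit(); // expected: WriteSkewException; observed: success
        }
    }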
[JBoss JIRA] (ISPN-3936) State transfer should not modify or iterate shared cache stores
by Dan Berindei (JIRA)
[ https://issues.jboss.org/browse/ISPN-3936?page=com.atlassian.jira.plugin.... ]
Dan Berindei updated ISPN-3936:
-------------------------------
Summary: State transfer should not modify or iterate shared cache stores (was: State transfer should not modify shared cache stores)
> State transfer should not modify or iterate shared cache stores
> ---------------------------------------------------------------
>
> Key: ISPN-3936
> URL: https://issues.jboss.org/browse/ISPN-3936
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 6.0.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 7.0.0.Final, 7.0.0.Alpha2
>
>
> When installing a new topology, we invalidate the cache entries that are no longer mapped to the local node. We also iterate over the entries in the cache stores and delete the ones that are no longer local, but this should only happen if the cache store is not shared.
> We had similar issues in the past, but this seems to be related to the new cache store API introduced in 6.0.
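The intended guard can be sketched as follows; all types here are hypothetical stand-ins, not Infinispan's real internal API. The point is simply that a shared store remains the authoritative copy for the new owners, so state transfer must leave it alone:

    import java.util.Set;

    // Hypothetical stand-in interfaces, not Infinispan internals.
    interface KeyedStore {
        boolean isShared();
        Set<Object> keys();
        void delete(Object key);
    }

    interface Topology {
        boolean isLocal(Object key); // does the local node still own this key?
    }

    class StateTransferCleanup {
        // After a topology change, purge entries that moved away -- but only
        // from a private store; a shared store still serves the new owners.
        static void purgeNonLocalEntries(KeyedStore store, Topology topology) {
            if (store.isShared()) {
                return; // never iterate or modify a shared store
            }
            for (Object key : store.keys()) {
                if (!topology.isLocal(key)) {
                    store.delete(key);
                }
            }
        }
    }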
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3395:
-----------------------------------------------
Michal Babacek <mbabacek(a)redhat.com> changed the Status of [bug 927615|https://bugzilla.redhat.com/show_bug.cgi?id=927615] from ASSIGNED to VERIFIED
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-3395
> URL: https://issues.jboss.org/browse/ISPN-3395
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.3.0.Final
> Reporter: Mayank Agarwal
> Assignee: Dan Berindei
>
> We are using Infinispan 5.3.0.Final in our distributed application. We are testing Infinispan in HA scenarios and get the following exception when a new node becomes coordinator.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> java.lang.NullPointerException: null
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:455) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:235) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:647) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.6.0_25]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_25]
> This happens because cacheTopology is null at ClusterTopologyManagerImpl.java:455.
> The code at line 449 already checks cacheTopology for null; the for loop that updates cacheStatusMap at line 457 should be moved inside that check.
> Fix:
> --- a/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> +++ b/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> @@ -448,7 +448,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
>              // but didn't get a response back yet
>              if (cacheTopology != null) {
>                 topologyList.add(cacheTopology);
> -            }
> +
>
>              // Add all the members of the topology that have sent responses first
>              // If we only added the sender, we could end up with a different member order
> @@ -457,6 +457,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
>                 cacheStatusMap.get(cacheName).addMember(member);
>              }
>           }
> +         }
>
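Reconstructed for readability, the corrected control flow looks roughly like this (a sketch: the loop header is inferred from the patch context, not copied from the original source):

    if (cacheTopology != null) {
       topologyList.add(cacheTopology);

       // Add all the members of the topology that have sent responses first
       // If we only added the sender, we could end up with a different member order
       for (Address member : cacheTopology.getMembers()) {   // header inferred
          cacheStatusMap.get(cacheName).addMember(member);
       }
    }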
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration updated ISPN-3395:
------------------------------------------
Bugzilla Update: Perform
Bugzilla References: https://bugzilla.redhat.com/show_bug.cgi?id=927615