[JBoss JIRA] (ISPN-6108) A stopped Node is displayed in all server groups
by Martin Gencur (JIRA)
[ https://issues.jboss.org/browse/ISPN-6108?page=com.atlassian.jira.plugin.... ]
Martin Gencur updated ISPN-6108:
--------------------------------
Attachment: (was: other-server-group-cluster-view)
> A stopped Node is displayed in all server groups
> ------------------------------------------------
>
> Key: ISPN-6108
> URL: https://issues.jboss.org/browse/ISPN-6108
> Project: Infinispan
> Issue Type: Bug
> Components: Console
> Affects Versions: 8.1.0.Final
> Reporter: Martin Gencur
> Assignee: Vladimir Blagojevic
>
> The "View nodes" page displays stopped nodes regardless of which cluster/server group they are defined in. Once the node is started and running, it is correctly displayed in only its own server group's view.
> Attaching a screenshot for a server called "server-two", which is in the "main-server-group":
> {code}
> <server name="server-two" group="main-server-group" auto-start="false">
> {code}
> The stopped node is nevertheless also displayed in the view for "other-server-group".
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 11 months
[JBoss JIRA] (ISPN-6108) A stopped Node is displayed in all server groups
by Martin Gencur (JIRA)
Martin Gencur created ISPN-6108:
-----------------------------------
Summary: A stopped Node is displayed in all server groups
Key: ISPN-6108
URL: https://issues.jboss.org/browse/ISPN-6108
Project: Infinispan
Issue Type: Bug
Components: Console
Affects Versions: 8.1.0.Final
Reporter: Martin Gencur
Assignee: Vladimir Blagojevic
The "View nodes" page displays stopped nodes regardless of which cluster/server group they are defined in. Once the node is started and running, it is correctly displayed in only its own server group's view.
Attaching a screenshot for a server called "server-two", which is in the "main-server-group":
{code}
<server name="server-two" group="main-server-group" auto-start="false">
{code}
The stopped node is nevertheless also displayed in the view for "other-server-group".
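Not a fix, just a minimal sketch of the expected behavior (all names here are hypothetical, not the actual console code): the per-group node view should bucket nodes strictly by the server group they are defined in, so a node's running state never affects which group's view it appears in.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical model: a node carries the server group it is defined in
// plus its running state. Grouping must ignore the running state.
public class NodeViewSketch {
    record Node(String name, String serverGroup, boolean running) {}

    // Bucket nodes by their configured server group only; a stopped node
    // therefore shows up solely under its own group's view.
    static Map<String, List<Node>> viewByGroup(List<Node> nodes) {
        return nodes.stream().collect(Collectors.groupingBy(Node::serverGroup));
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("server-one", "main-server-group", true),
            new Node("server-two", "main-server-group", false), // stopped (auto-start="false")
            new Node("server-three", "other-server-group", true));
        Map<String, List<Node>> view = viewByGroup(nodes);
        // The stopped "server-two" must not appear under "other-server-group".
        System.out.println(view.get("other-server-group").stream()
            .anyMatch(n -> n.name().equals("server-two"))); // prints false
    }
}
```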
[JBoss JIRA] (ISPN-6100) L1 entries not always stored as L1InternalCacheEntry
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6100?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant updated ISPN-6100:
----------------------------------
Status: Resolved (was: Pull Request Sent)
Fix Version/s: 8.2.0.Final
Resolution: Done
> L1 entries not always stored as L1InternalCacheEntry
> ----------------------------------------------------
>
> Key: ISPN-6100
> URL: https://issues.jboss.org/browse/ISPN-6100
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 8.1.0.Final
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Labels: testsuite_stability
> Fix For: 8.2.0.Beta1, 8.2.0.Final
>
>
> {{DistributedClusteringLogic.commitSingleEntry}} is using regular {{Metadata}} instead of {{L1Metadata}} when storing entries that are not owned by the local node. The entry will then be stored as a regular {{MortalCacheEntry}} instead of an {{L1InternalCacheEntry}}.
> In non-transactional mode, I believe this can only happen if the node was an owner at the time of entry wrapping. I'm not sure if the same applies in transactional caches.
> {{BaseDistFunctionalTest.assertOwnershipAndNonOwnership()}} assumes L1 entries are stored as {{L1InternalCacheEntries}}, so at the very least the mismatch results in random test failures in {{BaseDistFunctionalTest}} subclasses like {{ConcurrentJoinTest}}. It may result in stale data as well.
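A minimal sketch of the intended commit-time decision (the types and the helper below are simplified stand-ins, not the real classes from {{org.infinispan.metadata}}): when the local node is not an owner of the key, the metadata should be wrapped in {{L1Metadata}} so the entry is later stored as an L1 entry rather than a plain mortal one.

```java
// Simplified stand-ins for Infinispan's Metadata / L1Metadata types.
interface Metadata { long lifespan(); }

final class L1Metadata implements Metadata {
    private final Metadata delegate;
    L1Metadata(Metadata delegate) { this.delegate = delegate; }
    public long lifespan() { return delegate.lifespan(); }
}

public class L1CommitSketch {
    // The fix, in spirit: an entry committed on a non-owner must carry
    // L1Metadata (so it becomes an L1InternalCacheEntry), while an owner
    // keeps the regular metadata. Wrapping is skipped if already wrapped.
    static Metadata metadataForCommit(boolean localNodeIsOwner, Metadata metadata) {
        if (localNodeIsOwner) return metadata;
        return metadata instanceof L1Metadata ? metadata : new L1Metadata(metadata);
    }

    public static void main(String[] args) {
        Metadata m = () -> 60_000L;
        System.out.println(metadataForCommit(true, m) instanceof L1Metadata);  // prints false
        System.out.println(metadataForCommit(false, m) instanceof L1Metadata); // prints true
    }
}
```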
[JBoss JIRA] (ISPN-6107) State transfer should not fetch segments that were added during a rebalance
by Dan Berindei (JIRA)
Dan Berindei created ISPN-6107:
----------------------------------
Summary: State transfer should not fetch segments that were added during a rebalance
Key: ISPN-6107
URL: https://issues.jboss.org/browse/ISPN-6107
Project: Infinispan
Issue Type: Bug
Components: Core, Test Suite - Core
Affects Versions: 8.1.0.Final
Reporter: Dan Berindei
Assignee: Dan Berindei
Fix For: 8.2.0.Beta1
When the last owner of a segment leaves the cache, the coordinator will update the consistent hash and replace that owner with {{numOwners}} owners (so that a segment always has at least 1 owner). If there is a rebalance in progress, it could be that both the current and the pending CH lost all the owners of a segment, and the coordinator will assign new owners in both CHs (not necessarily the same).
Sometimes, this causes tests that create clusters with many nodes to spend a lot of time shutting down the cluster. Here's an example:
# Cluster ABCDE, coordinator A, topology id = 0, currentCH = \{0: CD, 1: BC\}, pendingCH = null
# D leaves
# A broadcasts a REBALANCE_START command with topology id 1, members = ABCE, currentCH = \{0: C, 1: BC\}, pendingCH = \{0: BC, 1: BC\}
# A and E confirm that they finished the rebalance
# C leaves before sending the data for segment 0 to B
# A broadcasts a CH_UPDATE command with topology id 2, members = ABE, currentCH = \{0: AE, 1: B\}, pendingCH = \{0: B, 1: B\}
# A now owns segment 0 in the writeCH (which is the union of currentCH and pendingCH).
# A tries to request segment 0 from the other owner in the currentCH, E
# B confirms that it finished the rebalance
# A broadcasts a new topology: topology id 3, currentCH = \{0: B, 1: B\}, pendingCH = null
# E installs topology 3, and throws an IllegalArgumentException when handling A's request for segments
# A is not able to install topology 3, because it requests the transaction data while holding the lock on the LocalCacheStatus
# A receives the IllegalArgumentException from E and retries. But because it still has the old topology, it retries on E ad infinitum - using a lot of CPU in the process.
A requesting segment 0 from E is not a problem in itself - normally E would just send back an empty set of transactions and entries. The problem is that the cluster is able to install a new topology, because A already confirmed receiving all the data, but A is stuck with the old topology.
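The "added during a rebalance" segments in step 7 can be computed as a simple ownership diff between consecutive write CHs. The sketch below models a consistent hash as a plain segment-to-owners map (the real logic operates on {{ConsistentHash}} instances inside the state consumer; this is only an illustration of the set arithmetic):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Toy model of ownership: segment id -> list of owner node names.
public class SegmentDiffSketch {
    // Segments that `node` owns in the new write CH but did not own in the
    // old one. Per this issue, segments appearing here only because a
    // CH_UPDATE re-assigned lost segments mid-rebalance should not be
    // fetched as if they were part of the rebalance.
    static Set<Integer> addedSegments(String node,
                                      Map<Integer, List<String>> oldWriteCH,
                                      Map<Integer, List<String>> newWriteCH) {
        Set<Integer> added = new TreeSet<>();
        for (Map.Entry<Integer, List<String>> e : newWriteCH.entrySet()) {
            boolean ownsNow = e.getValue().contains(node);
            boolean ownedBefore =
                oldWriteCH.getOrDefault(e.getKey(), List.of()).contains(node);
            if (ownsNow && !ownedBefore) added.add(e.getKey());
        }
        return added;
    }

    public static void main(String[] args) {
        // Topology 1 write CH (union of current and pending): {0: BC, 1: BC}
        Map<Integer, List<String>> t1 = Map.of(0, List.of("B", "C"), 1, List.of("B", "C"));
        // Topology 2 write CH after C left: {0: ABE, 1: B} - A gained segment 0 mid-rebalance
        Map<Integer, List<String>> t2 = Map.of(0, List.of("A", "B", "E"), 1, List.of("B"));
        System.out.println(addedSegments("A", t1, t2)); // prints [0]
    }
}
```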
[JBoss JIRA] (ISPN-6069) Unify the continuous query API for remote and embedded mode
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-6069?page=com.atlassian.jira.plugin.... ]
Adrian Nistor updated ISPN-6069:
--------------------------------
Description: The API for registering/unregistering the listener is slightly different in embedded vs remote mode. The listener interface is identical but resides in different packages. These need to be unified to improve usability. Making ContinuousQuery an interface instead of a class and moving it, along with the listener interface, to the query DSL package is a possible option. (was: The api for registering/unregistering the listener is slightly different in embedded vs remote. The listener interface is identical, but resides in different packages. These need to be unified to ensure a nicer usability.)
> Unify the continuous query API for remote and embedded mode
> -----------------------------------------------------------
>
> Key: ISPN-6069
> URL: https://issues.jboss.org/browse/ISPN-6069
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Adrian Nistor
> Assignee: Adrian Nistor
> Fix For: 8.2.0.Beta1, 8.2.0.Final
>
>
> The API for registering/unregistering the listener is slightly different in embedded vs remote mode. The listener interface is identical but resides in different packages. These need to be unified to improve usability. Making ContinuousQuery an interface instead of a class and moving it, along with the listener interface, to the query DSL package is a possible option.
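One illustrative shape for the proposed unification (names and signatures are assumptions for this sketch, not the API Infinispan actually shipped): ContinuousQuery becomes an interface with a single listener interface shared by embedded and remote mode.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: one listener interface and one ContinuousQuery interface
// shared by embedded and remote mode, as the description proposes.
interface ContinuousQueryListener<K, V> {
    default void resultJoining(K key, V value) {}
    default void resultLeaving(K key) {}
}

interface ContinuousQuery<K, V> {
    void addContinuousQueryListener(String queryString, ContinuousQueryListener<K, V> listener);
    void removeContinuousQueryListener(ContinuousQueryListener<K, V> listener);
}

// Toy in-memory implementation, just to show the registration contract
// both the embedded and the remote implementation would share.
public class ContinuousQuerySketch<K, V> implements ContinuousQuery<K, V> {
    final List<ContinuousQueryListener<K, V>> listeners = new ArrayList<>();

    public void addContinuousQueryListener(String queryString,
                                           ContinuousQueryListener<K, V> listener) {
        listeners.add(listener);
    }

    public void removeContinuousQueryListener(ContinuousQueryListener<K, V> listener) {
        listeners.remove(listener);
    }

    public static void main(String[] args) {
        ContinuousQuerySketch<String, Integer> cq = new ContinuousQuerySketch<>();
        ContinuousQueryListener<String, Integer> l = new ContinuousQueryListener<>() {};
        cq.addContinuousQueryListener("from Person p where p.age >= 18", l);
        System.out.println(cq.listeners.size()); // prints 1
        cq.removeContinuousQueryListener(l);
        System.out.println(cq.listeners.isEmpty()); // prints true
    }
}
```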