[JBoss JIRA] (ISPN-5042) Remote gets caused by writes could be replicated only to the primary owner
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5042?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5042:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> Remote gets caused by writes could be replicated only to the primary owner
> --------------------------------------------------------------------------
>
> Key: ISPN-5042
> URL: https://issues.jboss.org/browse/ISPN-5042
> Project: Infinispan
> Issue Type: Enhancement
> Components: Core, State Transfer
> Affects Versions: 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
> Labels: 7.0
> Fix For: 7.2.0.CR1
>
>
> For write operations that need the previous value, a node that is an owner only in the write CH and doesn't have the key locally will attempt to retrieve it from the owners in the read CH.
> Sending the remote get command to all the previous owners creates extra load on the cluster during state transfer, so it should be more efficient to send the remote get only to the primary owner. Even though the latency of some write operations will be higher, the average latency should improve.
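> A purely illustrative sketch of the proposed behaviour (hypothetical shape, not the real distribution interceptor code): the remote get goes to the primary read-CH owner first, and fans out to the backups only if the primary doesn't answer.
> {code:java}
> import java.util.List;
> import java.util.concurrent.Callable;
>
> // Hypothetical stand-in: each Callable represents a clustered get sent
> // to one read-CH owner; the list is ordered primary-first.
> static <V> V remoteGetForWrite(List<Callable<V>> ownersPrimaryFirst) throws Exception {
>    try {
>       // a single unicast to the primary replaces the anycast to all owners
>       return ownersPrimaryFirst.get(0).call();
>    } catch (Exception primaryUnavailable) {
>       // fall back to the backup owners, trading latency for availability
>       for (Callable<V> backup : ownersPrimaryFirst.subList(1, ownersPrimaryFirst.size())) {
>          try {
>             return backup.call();
>          } catch (Exception ignored) {
>             // try the next backup owner
>          }
>       }
>       throw primaryUnavailable;
>    }
> }
> {code}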
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5046) PartitionHandling: split during commit can leave the cache inconsistent after merge
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5046?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5046:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> PartitionHandling: split during commit can leave the cache inconsistent after merge
> -----------------------------------------------------------------------------------
>
> Key: ISPN-5046
> URL: https://issues.jboss.org/browse/ISPN-5046
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Fix For: 7.2.0.CR1
>
>
> Say we have a cluster ABCD; a transaction T was started on A, with B as the primary owner and C as the backup owner. B and C both acknowledge the prepare, and the network splits into AB and CD right before A sends the commit command. A eventually suspects C and D, but the commit still succeeds on B before they are suspected. Since SuspectExceptions are ignored for commit commands, the user won't see any error.
> However, C will eventually suspect A and B, and when the CD cache topology is installed, C will roll back transaction T. After the merge, both partitions are in degraded mode, so we assume that they both have the latest data, and the key is never updated on C.
> From C's point of view, this is very similar to ISPN-3421, and the fix should also be similar: we could delay the transaction rollback on C until B confirms that T was not committed there. Since B is inaccessible, C will eventually get a SuspectException and the CD cache topology, at which point the cache is in degraded mode and C can wait for a merge. On merge, C should check the status of the transaction on B again, and either commit or roll back based on what B did.
> We also need to suspend the cleanup of completed transactions while the cache is in degraded mode; otherwise C might not find T on B after the merge.
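> A purely illustrative sketch of the proposed fix follows; none of these types or calls are real Infinispan API. On C, the rollback of T is deferred while the cache is in degraded mode, and after the merge C asks B what it actually did and mirrors that outcome.
> {code:java}
> // Hypothetical types: TxPeer models the old primary owner (B), and the
> // two Runnables apply the resolved outcome locally on C.
> enum TxOutcome { COMMITTED, ROLLED_BACK, UNKNOWN }
>
> interface TxPeer {
>    TxOutcome statusOf(String globalTxId); // hypothetical remote status query
> }
>
> static TxOutcome resolveAfterMerge(String globalTxId, TxPeer primaryOwner,
>                                    Runnable commitLocally, Runnable rollbackLocally) {
>    TxOutcome outcome = primaryOwner.statusOf(globalTxId);
>    if (outcome == TxOutcome.COMMITTED) {
>       commitLocally.run();
>    } else if (outcome == TxOutcome.ROLLED_BACK) {
>       rollbackLocally.run();
>    }
>    // UNKNOWN is only avoidable if completed-transaction cleanup was
>    // suspended while the cache was in degraded mode, as noted above.
>    return outcome;
> }
> {code}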
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5076) Pessimistic transactions can lose their locks when the primary owner changes
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5076?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5076:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> Pessimistic transactions can lose their locks when the primary owner changes
> ----------------------------------------------------------------------------
>
> Key: ISPN-5076
> URL: https://issues.jboss.org/browse/ISPN-5076
> Project: Infinispan
> Issue Type: Bug
> Components: Core, State Transfer
> Affects Versions: 7.0.2.Final, 7.1.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Critical
> Labels: 7.0
> Fix For: 7.2.0.CR1
>
>
> In a pessimistic cache, if a transaction {{T1}} performs a {{put(k, v)}} operation and the primary owner of the key is the originator, the lock is acquired on the originator but it is not replicated to the backup(s).
> If one of the backup owners becomes the primary owner, it will allow another transaction {{T2}} to lock (and update) key {{k}} before it receives the one-phase prepare command from the originator of {{T1}}.
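> The scenario can be sketched with the public API (cache setup elided; the cache is assumed transactional with pessimistic locking):
> {code:java}
> import javax.transaction.TransactionManager;
> import org.infinispan.Cache;
>
> // The originator is also the primary owner of "k", so the lock taken by
> // the put stays purely local and is never replicated to the backups.
> static void pessimisticPut(Cache<String, String> cache) throws Exception {
>    TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
>    tm.begin();
>    cache.put("k", "v"); // lock held only on the originator/primary owner
>    // If a view change now promotes a backup to primary owner, the new
>    // primary has no record of the lock and can grant it to T2 before the
>    // one-phase prepare from T1's originator arrives.
>    tm.commit();
> }
> {code}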
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5073) Improve "Number of Entries" stats
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5073?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5073:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> Improve "Number of Entries" stats
> ---------------------------------
>
> Key: ISPN-5073
> URL: https://issues.jboss.org/browse/ISPN-5073
> Project: Infinispan
> Issue Type: Task
> Components: Core, JMX, reporting and management
> Affects Versions: 7.1.0.Alpha1
> Reporter: Tristan Tarrant
> Assignee: Vladimir Blagojevic
> Fix For: 7.2.0.CR1
>
>
> Currently, {{getNumberOfEntries}} in {{CacheMgmtInterceptor}} returns the size of the data container, which takes into account neither expired entries nor cache stores.
> To avoid compatibility issues, modify the description to reflect this behaviour and add proper statistics, possibly with different flag combinations (SKIP_REMOTE, SKIP_CACHE_LOAD).
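> For comparison, a sketch of what callers of the flag-aware statistics might look like, assuming the existing {{SKIP_REMOTE_LOOKUP}} and {{SKIP_CACHE_LOAD}} flags (the description above abbreviates the former as SKIP_REMOTE):
> {code:java}
> import org.infinispan.AdvancedCache;
> import org.infinispan.context.Flag;
>
> // Count only the entries this node holds in memory, skipping cache
> // stores and remote nodes, i.e. roughly what getNumberOfEntries
> // reports today (modulo expired-but-not-purged entries).
> static long localInMemoryEntries(AdvancedCache<?, ?> cache) {
>    return cache.withFlags(Flag.SKIP_REMOTE_LOOKUP, Flag.SKIP_CACHE_LOAD).size();
> }
>
> // Without the flags, size() may consult remote nodes and cache stores.
> static long totalEntries(AdvancedCache<?, ?> cache) {
>    return cache.size();
> }
> {code}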
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5093) Granularity of remote event listener implementations doing the same job
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5093?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5093:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> Granularity of remote event listener implementations doing the same job
> -----------------------------------------------------------------------
>
> Key: ISPN-5093
> URL: https://issues.jboss.org/browse/ISPN-5093
> Project: Infinispan
> Issue Type: Enhancement
> Components: Remote Protocols
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 7.2.0.CR1
>
>
> Currently, if N clients add the same listener to a cache to do the same job, e.g. keeping a near cache consistent, the result is N server-side cluster listeners, each potentially installed on a different node. If one of those nodes fails, every client that had a listener registered on it has to find a different node for the listener.
> The downside of this approach is that there are as many cluster listeners installed as there are clients that have added listeners (or have near caching enabled), which might not be very efficient. If a node goes down, all clients with cluster listeners there need to fail over to some other node.
> The advantage is simplicity: it is easy to decide where to add the listener and where to fail over to.
> For this type of scenario, an alternative setup might be worth exploring:
> If all these client-side listeners are interested in exactly the same events, and the client listener ID were exposed via the RemoteCache API, a server-side cluster listener multiplexing between all these clients could potentially be built. In other words, instead of N clients registering N cluster listeners, the first client would register the cluster listener with a client listener ID, and any further registration with the same client listener ID would simply add its connection to the existing cluster listener implementation.
> To maximise the efficiency of this solution, all clients (even those running in different JVMs) that share a client listener ID should agree on the node on which to add the listener. For a distributed cache, hashing on the cache name would work. For replicated caches, since there's no hashing available, the first node of the view could be used.
> Since the logic executed server-side differs between the first node adding the client listener and the others, synchronization would be needed to make sure that only the first invocation creates the cluster listener, while the others simply add their channel to it.
> Failover is a bit trickier too, because if the node with the cluster listener goes down, all the clients have to fail over, which again involves first-versus-rest logic.
> The advantages of this approach are the reduction in the number of cluster listeners and the potential efficiency of a single server-side cluster listener implementation.
> The disadvantages come from the server-side logic to add or fail over a cluster listener, which needs to take into account whether the listener is already present, and from the clients needing specific routing so that they all add listeners for the same ID to the same node.
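> The routing rule suggested above could look like this (purely illustrative helper, not client API):
> {code:java}
> import java.util.List;
>
> // All clients sharing a client listener ID must independently agree on
> // one server node, so that the first registration creates the cluster
> // listener and later ones just attach their connection to it.
> static <A> A pickListenerNode(String cacheName, List<A> viewInOrder, boolean distributed) {
>    if (distributed) {
>       // distributed cache: hash the cache name so every client computes
>       // the same target node without any coordination
>       return viewInOrder.get(Math.floorMod(cacheName.hashCode(), viewInOrder.size()));
>    }
>    // replicated cache: no hashing available, use the first node of the view
>    return viewInOrder.get(0);
> }
> {code}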
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)
[JBoss JIRA] (ISPN-5163) A write operation with the SKIP_LOCKING flag can roll back the transaction
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/ISPN-5163?page=com.atlassian.jira.plugin.... ]
Galder Zamarreño updated ISPN-5163:
-----------------------------------
Fix Version/s: 7.2.0.CR1
(was: 7.2.0.Beta2)
> A write operation with the SKIP_LOCKING flag can roll back the transaction
> --------------------------------------------------------------------------
>
> Key: ISPN-5163
> URL: https://issues.jboss.org/browse/ISPN-5163
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 7.0.3.Final, 7.1.0.Beta1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 7.2.0.CR1
>
>
> When a write operation has the SKIP_LOCKING flag, it does not send a {{LockControlCommand}} to the primary owner, but it can send a {{ClusteredGetCommand}} with {{acquireRemoteLocks=true}} instead. The {{ClusteredGetCommand}} will then execute a {{LockControlCommand}} with the origin not set properly, and {{TxInterceptor}} will roll back the transaction because the originator ({{null}}) appears to have left the cluster.
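> The triggering pattern, sketched with the public API (setup elided; the cache is assumed transactional, and the write needs the previous value):
> {code:java}
> import javax.transaction.TransactionManager;
> import org.infinispan.AdvancedCache;
> import org.infinispan.context.Flag;
>
> static void skipLockingWrite(AdvancedCache<String, String> cache) throws Exception {
>    TransactionManager tm = cache.getTransactionManager();
>    tm.begin();
>    try {
>       // Per the description above: no LockControlCommand goes to the
>       // primary owner, but fetching the previous value can send a
>       // ClusteredGetCommand with acquireRemoteLocks=true, whose
>       // LockControlCommand carries origin == null and makes
>       // TxInterceptor treat the originator as a leaver.
>       cache.withFlags(Flag.SKIP_LOCKING).put("k", "v");
>       tm.commit(); // the tx may already be marked rollback-only
>    } catch (Exception e) {
>       if (tm.getTransaction() != null) tm.rollback();
>       throw e;
>    }
> }
> {code}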
--
This message was sent by Atlassian JIRA
(v6.3.11#6341)