[JBoss JIRA] (ISPN-5876) Pre-commit cache invalidation creates stale cache vulnerability
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5876?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5876:
-----------------------------------------------
Petr Penicka <ppenicka(a)redhat.com> changed the Status of [bug 1273147|https://bugzilla.redhat.com/show_bug.cgi?id=1273147] from VERIFIED to CLOSED
> Pre-commit cache invalidation creates stale cache vulnerability
> ---------------------------------------------------------------
>
> Key: ISPN-5876
> URL: https://issues.jboss.org/browse/ISPN-5876
> Project: Infinispan
> Issue Type: Bug
> Components: Transactions
> Affects Versions: 5.2.7.Final
> Reporter: Stephen Fikes
> Assignee: Galder Zamarreño
> Fix For: 5.2.15.Final, 8.1.0.Beta1, 8.1.0.Final
>
>
> In a cluster where Infinispan serves as the Hibernate second-level cache (configured for invalidation), invalidation requests for modified entities are sent *before* the database commit. As a result, nodes receiving the invalidation request can evict the affected entities and then (due to "local" read requests) reload them before the database commit takes place on the server where the entity was modified.
> Consequently, other servers in the cluster may contain data that remains stale until a subsequent change in another server or until the entity times out from lack of use.
> It isn't easy to write a testcase for this - it required manual intervention to reproduce - but it can be seen with any entity class, cluster, etc. (at least using Oracle - results may vary with specific databases), so I've not attached a testcase. The issue can be seen/understood by code inspection (i.e. the timing of invalidation vs. database commit). That said, my test consisted of a two-node cluster, and I used Byteman rules to delay the database commit of a change to an entity (with an optimistic version property) long enough in "server 1" for eviction to complete and a subsequent re-read (by a worker thread on behalf of an EJB) to take place in "server 2". After the re-read in "server 2", the database commit proceeds in "server 1", and "server 2" is left with a stale copy of the entity in its cache.
> One option is pessimistic locking, which blocks any read attempt until the DB commit completes. It is not feasible, however, for many applications to use pessimistic locking for all reads, as this can have a severe impact on concurrency - which is the reason for using optimistic version control in the first place. But because the invalidation is broadcast early (*before* the database commit, while the data is not yet stale), optimistic locking is insufficient to guard against "permanently" stale data. We did see that some databases default to blocking repeatable reads even outside of transactions and without explicit lock requests; Oracle does not provide such a mode. So all reads would have to use pessimistic locks (which must be enclosed in explicit transactions - (b)locking reads are disallowed when autocommit=true in Oracle), and this could require significant effort (re-writes) to use pessimistic reads throughout - in addition to the performance issues it can introduce.
> If the invalidation message is always broadcast *after* the database commit, optimistic version attributes are sufficient to block attempts to write stale data, and although a few failures may occur (as they would in a single server with multiple active threads), it is known that the stale data will be removed within some finite period.
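> The race can be illustrated with a minimal, self-contained sketch. This is plain Java with hypothetical stand-ins (a map as the "server 2" cache, a volatile field as the database), not the actual Hibernate/Infinispan code; the latches play the role of the Byteman delays described above.
> {code}
> import java.util.concurrent.*;
>
> // Sketch of the pre-commit invalidation race (hypothetical stand-ins, not
> // Hibernate/Infinispan APIs). Server 1 invalidates before committing;
> // server 2 reloads the old value in the gap and keeps it as stale cache data.
> public class PreCommitInvalidationRace {
>    static volatile String database = "v1";                   // committed value
>    static final ConcurrentMap<String, String> server2Cache = new ConcurrentHashMap<>();
>
>    public static void main(String[] args) throws Exception {
>       server2Cache.put("entity", database);                  // warm cache on server 2
>       CountDownLatch invalidated = new CountDownLatch(1);
>       CountDownLatch reloaded = new CountDownLatch(1);
>
>       Thread server1 = new Thread(() -> {
>          server2Cache.remove("entity");                      // invalidation sent *before* commit
>          invalidated.countDown();
>          await(reloaded);                                    // stands in for the delayed DB commit
>          database = "v2";                                    // commit happens last
>       });
>
>       Thread server2 = new Thread(() -> {
>          await(invalidated);
>          // Local read after eviction but before server 1 commits: reloads the old value.
>          server2Cache.computeIfAbsent("entity", k -> database);
>          reloaded.countDown();
>       });
>
>       server1.start(); server2.start();
>       server1.join(); server2.join();
>       System.out.println("database=" + database + ", server2 cache=" + server2Cache.get("entity"));
>       // Prints: database=v2, server2 cache=v1 -> stale until a later change or idle timeout
>    }
>
>    static void await(CountDownLatch latch) {
>       try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
>    }
> }
> {code}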
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-5568) KeyAffinityService race condition on view change
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5568?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5568:
-----------------------------------------------
Petr Penicka <ppenicka(a)redhat.com> changed the Status of [bug 1233968|https://bugzilla.redhat.com/show_bug.cgi?id=1233968] from VERIFIED to CLOSED
> KeyAffinityService race condition on view change
> ------------------------------------------------
>
> Key: ISPN-5568
> URL: https://issues.jboss.org/browse/ISPN-5568
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 5.2.11.Final
> Reporter: Dennis Reed
> Assignee: Bartosz Baranowski
> Fix For: 8.0.0.Beta2, 5.2.14.Final, 7.2.4.Final
>
>
> KeyAffinityService#getKeyForAddress runs in a tight loop looking for keys:
> {noformat}
> queue = address2key.get(address)
> while (result == null)
> result = queue.poll()
> {noformat}
> KeyAffinityService#handleViewChange clears and resets the queue list on membership change:
> {noformat}
> address2key.clear()
> for each address
> map.put(address, new queue)
> {noformat}
> If a view change comes in after getKeyForAddress gets the queue, and that queue is empty, the loop gets stuck polling the old (now abandoned) queue forever while new keys are added to the new queue.
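> The race is easy to reproduce in isolation. Below is a minimal sketch (plain Java with a stand-in key type, not the actual KeyAffinityService code): the consumer captures the queue reference once, the "view change" clears and replaces the queues, and the consumer then polls the abandoned empty queue forever while the new key lands in the replacement queue.
> {code}
> import java.util.concurrent.*;
>
> public class QueueSwapRace {
>    static final ConcurrentMap<String, BlockingQueue<Integer>> address2key = new ConcurrentHashMap<>();
>
>    public static void main(String[] args) throws Exception {
>       address2key.put("nodeA", new LinkedBlockingQueue<>());
>
>       Thread consumer = new Thread(() -> {
>          BlockingQueue<Integer> queue = address2key.get("nodeA"); // reference captured once
>          Integer result = null;
>          while (result == null) {                                 // tight loop, as in getKeyForAddress
>             result = queue.poll();
>          }
>          System.out.println("got key " + result);                 // never reached after the swap
>       });
>       consumer.setDaemon(true);                      // so the demo JVM can still exit
>       consumer.start();
>
>       Thread.sleep(100);                             // let the consumer capture the old queue
>       address2key.clear();                           // handleViewChange: clear and reset
>       address2key.put("nodeA", new LinkedBlockingQueue<>());
>       address2key.get("nodeA").offer(42);            // new key is generated into the *new* queue
>
>       consumer.join(1000);
>       System.out.println("consumer still spinning: " + consumer.isAlive()); // true -> stuck
>    }
> }
> {code}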
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (HRJS-18) Local server iterator test fails and hangs randomly with NoSuchElementException
by Galder Zamarreño (JIRA)
[ https://issues.jboss.org/browse/HRJS-18?page=com.atlassian.jira.plugin.sy... ]
Galder Zamarreño updated HRJS-18:
---------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Local server iterator test fails and hangs randomly with NoSuchElementException
> -------------------------------------------------------------------------------
>
> Key: HRJS-18
> URL: https://issues.jboss.org/browse/HRJS-18
> Project: Infinispan Javascript client
> Issue Type: Bug
> Affects Versions: 0.3.0
> Reporter: Galder Zamarreño
> Assignee: Galder Zamarreño
> Fix For: 0.4.0
>
> Attachments: server.log, tmp-tests.log
>
>
> Apart from the server returning {{NoSuchElementException}}, this confuses the client and leaves the testsuite hanging completely (a minimal sketch of the failure mode follows the log snippets below).
> Here are snippets from client and server logs:
> {code}
> [2016-06-01 17:57:36.210] [DEBUG] client - Invoke putAll(msgId=323,pairs=[{"key":"local-it1","value":"v1","done":false},{"key":"local-it2","value":"v2","done":false},{"key":"local-it3","value":"v3","done":false}],opts=undefined)
> [2016-06-01 17:57:36.210] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.210] [TRACE] transport - Write buffer(msgId=323) to 127.0.0.1:11222: A0C302192D000003007703096C6F63616C2D697431027631096C6F63616C2D697432027632096C6F63616C2D697433027633
> [2016-06-01 17:57:36.214] [TRACE] decoder - Read header(msgId=323): opCode=46, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.215] [TRACE] decoder - Call decode for request(msgId=323)
> [2016-06-01 17:57:36.215] [TRACE] connection - After decoding request(msgId=323), buffer size is 6, and offset 6
> [2016-06-01 17:57:36.215] [TRACE] connection - Complete success for request(msgId=323) with undefined
> [2016-06-01 17:57:36.215] [DEBUG] client - Invoke iterator(msgId=324,batchSize=1,opts=undefined)
> [2016-06-01 17:57:36.215] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.215] [TRACE] transport - Write buffer(msgId=324) to 127.0.0.1:11222: A0C40219310000030001010100
> [2016-06-01 17:57:36.230] [TRACE] decoder - Read header(msgId=324): opCode=50, status=0, hasNewTopology=0
> [2016-06-01 17:57:36.230] [TRACE] decoder - Call decode for request(msgId=324)
> [2016-06-01 17:57:36.230] [TRACE] connection - After decoding request(msgId=324), buffer size is 43, and offset 43
> [2016-06-01 17:57:36.230] [TRACE] connection - Complete success for request(msgId=324) with {"iterId":"28cab848-73ac-47c5-ad68-f518b89c5ba4","conn":{}}
> [2016-06-01 17:57:36.230] [TRACE] client - Invoke iterator.next(msgId=325,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.230] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=325) to 127.0.0.1:11222: A0C5021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=326,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=326) to 127.0.0.1:11222: A0C6021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.231] [TRACE] client - Invoke iterator.next(msgId=327,iteratorId=28cab848-73ac-47c5-ad68-f518b89c5ba4) on 127.0.0.1:11222
> [2016-06-01 17:57:36.231] [TRACE] encoder - Encode operation with topology id 0
> [2016-06-01 17:57:36.231] [TRACE] transport - Write buffer(msgId=327) to 127.0.0.1:11222: A0C7021933000003002432386361623834382D373361632D343763352D616436382D663531386238396335626134
> [2016-06-01 17:57:36.244] [TRACE] decoder - Read header(msgId=327): opCode=80, status=133, hasNewTopology=0
> [2016-06-01 17:57:36.244] [ERROR] decoder - Error decoding body of request(msgId=327): java.util.NoSuchElementException
> [2016-06-01 17:57:36.244] [TRACE] connection - After decoding request(msgId=327), buffer size is 39, and offset 39
> [2016-06-01 17:57:36.244] [TRACE] connection - Complete failure for request(msgId=327) with java.util.NoSuchElementException
> [2016-06-01 17:57:36.249] [TRACE] decoder - Read header(msgId=327): opCode=52, status=0, hasNewTopology=0
> {code}
> {code}
> 2016-06-01 17:57:36,216 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationStartRequest, version=25, messageId=324, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.InvocationContextInterceptor] (HotRodServerHandler-6-115) Invoked with command EntrySetCommand{cache=default} and InvocationContext [org.infinispan.context.impl.NonTxInvocationContext@7bc2d6d3]
> 2016-06-01 17:57:36,222 TRACE [org.infinispan.interceptors.impl.CallInterceptor] (HotRodServerHandler-6-115) Executing command: EntrySetCommand{cache=default}.
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.ContextHandler] (HotRodServerHandler-6-115) Write response IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg IterationStartResponse{version=25, messageId=324, cacheName=, operation=IterationStartResponse, status=Success, iterationId=28cab848-73ac-47c5-ad68-f518b89c5ba4}
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.Encoder2x$] (HotRodServerWorker-8-2) Write topology response header with no change
> 2016-06-01 17:57:36,229 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Write buffer contents A1C4023200002432386361623834382D373361632D343763352D616436382D663531386238396335626134 to channel [id: 0xd8959d2b, L:/127.0.0.1:11222 - R:/127.0.0.1:52367]
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=325, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=326, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decode using instance @54abd903
> 2016-06-01 17:57:36,231 TRACE [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-8-2) Decoded header HotRodHeader{op=IterationNextRequest, version=25, messageId=327, cacheName=, flag=0, clientIntelligence=3, topologyId=0}
> 2016-06-01 17:57:36,239 DEBUG [org.infinispan.server.hotrod.HotRodExceptionHandler] (HotRodServerWorker-8-2) Exception caught: java.util.NoSuchElementException
> at org.infinispan.stream.impl.RemovableIterator.next(RemovableIterator.java:49)
> at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1$$anonfun$6.apply(IterationManager.scala:142)
> at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
> at scala.collection.immutable.Range.foreach(Range.scala:166)
> at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:142)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager$$anonfun$next$1.apply(IterationManager.scala:138)
> at scala.Option.map(Option.scala:146)
> at org.infinispan.server.hotrod.iteration.DefaultIterationManager.next(IterationManager.scala:138)
> at org.infinispan.server.hotrod.ContextHandler.realRead(ContextHandler.java:182)
> at org.infinispan.server.hotrod.ContextHandler.lambda$channelRead0$1(ContextHandler.java:56)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
> at java.lang.Thread.run(Thread.java:745)
> 2016-06-01 17:57:36,244 TRACE [org.infinispan.server.hotrod.HotRodEncoder] (HotRodServerWorker-8-2) Encode msg ErrorResponse{version=25, messageId=327, operation=ErrorResponse, status=ServerError, msg=java.util.NoSuchElementException}
> {code}
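> The stack trace shows {{RemovableIterator.next}} being called after the underlying iterator is exhausted. As a general illustration only (plain Java, not the Hot Rod server code and not necessarily the actual fix), draining a fixed-size batch without checking {{hasNext()}} fails in exactly this way, while a guarded drain simply returns a shorter batch:
> {code}
> import java.util.*;
>
> // Sketch of the failure mode only: asking an iterator for more elements than
> // remain throws NoSuchElementException; guarding with hasNext() avoids it.
> public class BatchDrainSketch {
>    public static void main(String[] args) {
>       Iterator<String> unguarded = List.of("local-it1", "local-it2", "local-it3").iterator();
>       try {
>          for (int i = 0; i < 4; i++) {
>             unguarded.next();                        // 4th call -> NoSuchElementException
>          }
>       } catch (NoSuchElementException e) {
>          System.out.println("unguarded drain failed: " + e);
>       }
>
>       Iterator<String> guarded = List.of("local-it1", "local-it2", "local-it3").iterator();
>       List<String> batch = new ArrayList<>();
>       for (int i = 0; i < 4 && guarded.hasNext(); i++) {
>          batch.add(guarded.next());                  // stops cleanly when exhausted
>       }
>       System.out.println("guarded drain returned " + batch);
>    }
> }
> {code}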
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
[JBoss JIRA] (ISPN-3395) ISPN000196: Failed to recover cluster state after the current node became the coordinator
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3395?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3395:
-----------------------------------------------
Petr Penicka <ppenicka(a)redhat.com> changed the Status of [bug 1283465|https://bugzilla.redhat.com/show_bug.cgi?id=1283465] from VERIFIED to CLOSED
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> -----------------------------------------------------------------------------------------
>
> Key: ISPN-3395
> URL: https://issues.jboss.org/browse/ISPN-3395
> Project: Infinispan
> Issue Type: Bug
> Components: State Transfer
> Affects Versions: 5.3.0.Final
> Reporter: Mayank Agarwal
> Fix For: 6.0.2.Final, 7.0.0.Final, 5.2.16.Final
>
>
> We are using Infinispan 5.3.0.Final in our distributed application. We are testing Infinispan in HA scenarios and get the following exception when a new node becomes the coordinator.
> ISPN000196: Failed to recover cluster state after the current node became the coordinator
> java.lang.NullPointerException: null
> at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:455) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl.handleNewView(ClusterTopologyManagerImpl.java:235) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener$1.run(ClusterTopologyManagerImpl.java:647) ~[infinispan-core-5.3.0.1.Final.jar:5.3.0.1.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) ~[na:1.6.0_25]
> at java.util.concurrent.FutureTask.run(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source) [na:1.6.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.6.0_25]
> at java.lang.Thread.run(Unknown Source) [na:1.6.0_25]
> This happens because cacheTopology is null at ClusterTopologyManagerImpl.java:455.
> The code at line 449 already checks cacheTopology for null, but the for loop that updates cacheStatusMap at line 457 sits outside that check; it should be inside it as well (see the sketch after the patch below).
> Fix:
> --- a/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> +++ b/core/src/main/java/org/infinispan/topology/ClusterTopologyManagerImpl.java
> @@ -448,7 +448,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> // but didn't get a response back yet
> if (cacheTopology != null) {
> topologyList.add(cacheTopology);
> - }
> +
>
> // Add all the members of the topology that have sent responses first
> // If we only added the sender, we could end up with a different member order
> @@ -457,6 +457,7 @@ public class ClusterTopologyManagerImpl implements ClusterTopologyManager {
> cacheStatusMap.get(cacheName).addMember(member);
> }
> }
> + }
>
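> For readability, here is a self-contained sketch of the structure the patch enforces. The types and data layout are stand-ins (not the Infinispan source): the point is only that everything dereferencing cacheTopology, including the member loop, must sit inside the null check.
> {code}
> import java.util.*;
>
> public class RecoverClusterStatusSketch {
>    static class CacheTopology {
>       final List<String> members;
>       CacheTopology(List<String> members) { this.members = members; }
>    }
>
>    public static void main(String[] args) {
>       Map<String, CacheTopology> responses = new LinkedHashMap<>();
>       responses.put("cache-a", new CacheTopology(Arrays.asList("node1", "node2")));
>       responses.put("cache-b", null);                  // member that didn't get a response back yet
>
>       List<CacheTopology> topologyList = new ArrayList<>();
>       Map<String, Set<String>> cacheStatusMap = new LinkedHashMap<>();
>
>       for (Map.Entry<String, CacheTopology> e : responses.entrySet()) {
>          CacheTopology cacheTopology = e.getValue();
>          // Patched structure: the member loop is inside the null check.
>          if (cacheTopology != null) {
>             topologyList.add(cacheTopology);
>             for (String member : cacheTopology.members) {
>                cacheStatusMap.computeIfAbsent(e.getKey(), k -> new LinkedHashSet<>()).add(member);
>             }
>          }
>          // Before the patch, the member loop ran outside the check and
>          // dereferenced the null cacheTopology -> NullPointerException.
>       }
>       System.out.println(cacheStatusMap);              // {cache-a=[node1, node2]}
>    }
> }
> {code}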
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)