[JBoss JIRA] (ISPN-6925) Race condition in staggered gets
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-6925?page=com.atlassian.jira.plugin.... ]
Radim Vansa commented on ISPN-6925:
-----------------------------------
I think this could be solved by replacing {{RspList}} with a structure like {{Map<Address, RspWrapper>}}, where {{RspWrapper}} would have a {{volatile Rsp}} field - the {{Rsp}} would then be published atomically, instead of being copied non-atomically.
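A minimal sketch of that idea, assuming JGroups' {{Address}} and {{Rsp}} types; {{RspWrapper}} and {{StaggeredResponses}} are hypothetical names used only for illustration, not existing Infinispan classes:
{code:java}
// Sketch only: publish each complete Rsp through a single volatile write so readers
// either see no response at all or a fully populated one, never a half-copied Rsp.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.jgroups.Address;
import org.jgroups.util.Rsp;

final class RspWrapper<T> {
   volatile Rsp<T> rsp;   // assigned once, atomically, when the response arrives
}

final class StaggeredResponses<T> {
   private final Map<Address, RspWrapper<T>> responses = new ConcurrentHashMap<>();

   void expectFrom(Address target) {
      responses.put(target, new RspWrapper<>());
   }

   // Publish the whole Rsp in one volatile write instead of copying its fields
   // into a shared Rsp that other threads may be reading concurrently.
   void publish(Address sender, Rsp<T> rsp) {
      RspWrapper<T> wrapper = responses.get(sender);
      if (wrapper != null) {
         wrapper.rsp = rsp;
      }
   }

   // A non-null volatile reference implies the Rsp is complete and safely visible.
   boolean wasReceived(Address sender) {
      RspWrapper<T> wrapper = responses.get(sender);
      return wrapper != null && wrapper.rsp != null;
   }
}
{code}
With this shape, the volatile write/read pair gives the missing happens-before edge between filling in a response and checking whether it was received.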
> Race condition in staggered gets
> --------------------------------
>
> Key: ISPN-6925
> URL: https://issues.jboss.org/browse/ISPN-6925
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Alpha3, 8.2.3.Final
> Reporter: Radim Vansa
> Assignee: Dan Berindei
>
> There's a race condition in {{CommandAwareRpcDispatcher}} as we do staggered gets. The {{RspList}} is prepared, and then in {{processCallsStaggered$lambda}} the {{Rsp}} is filled in - both threads can set their {{Rsp}} as received but later see that the other response was not received yet, because there's no memory barrier between {{setValue}}/{{setException}} and checking {{wasReceived}}.
> The race above happens when two responses arrive but neither is accepted by the filter; there's a second race in {{JGroupsTransport}} when the first response is accepted and then another one arrives. In {{JGroupsTransport.invokeRemotelyAsync}}, in the lambda handling {{rspListFuture.thenApply}}, we may see another thread concurrently modifying the rsps; e.g. in {{checkRsp}} you find out that the concurrently written response was received and is not an exception according to the flags, but the value will still be null, so you return null even though there can be a valid response in the other {{Rsp}}.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6925) Race condition in staggered gets
by Radim Vansa (JIRA)
Radim Vansa created ISPN-6925:
---------------------------------
Summary: Race condition in staggered gets
Key: ISPN-6925
URL: https://issues.jboss.org/browse/ISPN-6925
Project: Infinispan
Issue Type: Bug
Components: Core
Affects Versions: 8.2.3.Final, 9.0.0.Alpha3
Reporter: Radim Vansa
There's a race condition in {{CommandAwareRpcDispatcher}} as we do staggered gets. The {{RspList}} is prepared, and then in {{processCallsStaggered$lambda}} the {{Rsp}} is filled in - both threads can set their {{Rsp}} as received but later see that the other response was not received yet, because there's no memory barrier between {{setValue}}/{{setException}} and checking {{wasReceived}}.
The race above happens when two responses arrive but neither is accepted by the filter; there's a second race in {{JGroupsTransport}} when the first response is accepted and then another one arrives. In {{JGroupsTransport.invokeRemotelyAsync}}, in the lambda handling {{rspListFuture.thenApply}}, we may see another thread concurrently modifying the rsps; e.g. in {{checkRsp}} you find out that the concurrently written response was received and is not an exception according to the flags, but the value will still be null, so you return null even though there can be a valid response in the other {{Rsp}}.
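For illustration only, a minimal sketch of the problematic pattern (simplified, not the actual Infinispan/JGroups code): each thread publishes its own response with a plain write and then checks the other one, and with no memory barrier in between, both checks can miss the other write:
{code:java}
// Illustrative sketch of the race, not Infinispan code: the fields are plain
// (non-volatile), so there is no happens-before edge between one thread's
// setValue() and the other thread's wasReceived() check - both threads may
// conclude that the other response has not arrived yet.
final class RspRaceSketch {
   static final class Rsp {
      private boolean received;   // plain field: no volatile, no lock
      private Object value;

      void setValue(Object v) { value = v; received = true; }
      boolean wasReceived()   { return received; }
   }

   final Rsp first = new Rsp();
   final Rsp second = new Rsp();

   // Called by the thread handling the first response.
   void onFirstResponse(Object v) {
      first.setValue(v);
      // No barrier between the write above and the read below.
      if (!second.wasReceived()) {
         // may run even though another thread already called second.setValue(...)
      }
   }

   // Called concurrently by the thread handling the second response.
   void onSecondResponse(Object v) {
      second.setValue(v);
      if (!first.wasReceived()) {
         // may run at the same time as the branch above - neither thread
         // sees both responses as received
      }
   }
}
{code}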
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6925) Race condition in staggered gets
by Radim Vansa (JIRA)
[ https://issues.jboss.org/browse/ISPN-6925?page=com.atlassian.jira.plugin.... ]
Radim Vansa reassigned ISPN-6925:
---------------------------------
Assignee: Dan Berindei
> Race condition in staggered gets
> --------------------------------
>
> Key: ISPN-6925
> URL: https://issues.jboss.org/browse/ISPN-6925
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 9.0.0.Alpha3, 8.2.3.Final
> Reporter: Radim Vansa
> Assignee: Dan Berindei
>
> There's a race condition in {{CommandAwareRpcDispatcher}} as we do staggered gets. The {{RspList}} is prepared, and then in {{processCallsStaggered$lambda}} the {{Rsp}} is filled in - both threads can set their {{Rsp}} as received but later see that the other response was not received yet, because there's no memory barrier between {{setValue}}/{{setException}} and checking {{wasReceived}}.
> The race above happens when two responses arrive but neither is accepted by the filter; there's a second race in {{JGroupsTransport}} when the first response is accepted and then another one arrives. In {{JGroupsTransport.invokeRemotelyAsync}}, in the lambda handling {{rspListFuture.thenApply}}, we may see another thread concurrently modifying the rsps; e.g. in {{checkRsp}} you find out that the concurrently written response was received and is not an exception according to the flags, but the value will still be null, so you return null even though there can be a valid response in the other {{Rsp}}.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6924) HotRod addressCache iterator shouldn't go remote
by William Burns (JIRA)
William Burns created ISPN-6924:
-----------------------------------
Summary: HotRod addressCache iterator shouldn't go remote
Key: ISPN-6924
URL: https://issues.jboss.org/browse/ISPN-6924
Project: Infinispan
Issue Type: Bug
Components: Distributed Execution and Map/Reduce, Server
Reporter: William Burns
In the forum post, the stack traces show the topology update being sent back from the server to the client. However, it is using a distributed iterator. This shouldn't happen: the addressCache is a REPL cache, which should own all segments, so the iterator shouldn't go remote at all and should operate solely on the user thread.
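For illustration, a minimal embedded-API sketch of the expected behaviour, assuming a replicated cache named addressCache (the class and method names here are illustrative, not the server code): since a REPL cache owns all segments, iterating with {{Flag.CACHE_MODE_LOCAL}} stays on the calling thread and never goes remote:
{code:java}
// Illustrative sketch only: on a replicated cache, restricting the operation to
// locally held data still covers every entry, so no remote iteration is needed.
import org.infinispan.Cache;
import org.infinispan.context.Flag;
import org.infinispan.manager.EmbeddedCacheManager;

final class AddressCacheIterationSketch {
   static void iterateLocally(EmbeddedCacheManager cacheManager) {
      Cache<Object, Object> addressCache = cacheManager.getCache("addressCache");
      addressCache.getAdvancedCache()
            .withFlags(Flag.CACHE_MODE_LOCAL)   // keep the iteration local
            .entrySet()
            .forEach(entry -> {
               // runs entirely on the calling (user) thread
            });
   }
}
{code}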
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6923) HotRod bulk commands that provide a limit need to close the iterator
by William Burns (JIRA)
[ https://issues.jboss.org/browse/ISPN-6923?page=com.atlassian.jira.plugin.... ]
William Burns updated ISPN-6923:
--------------------------------
Summary: HotRod bulk commands that provide a limit need to close the iterator (was: HotRod bulk commands that take a count need to close the iterator)
> HotRod bulk commands that provide a limit need to close the iterator
> --------------------------------------------------------------------
>
> Key: ISPN-6923
> URL: https://issues.jboss.org/browse/ISPN-6923
> Project: Infinispan
> Issue Type: Bug
> Components: Server
> Affects Versions: 8.0.0.Final
> Reporter: William Burns
> Fix For: 9.0.0.Alpha4, 8.2.4.Final
>
>
> When using a distributed cache and a HotRod bulk operation with a count, the entries or keys are not fully iterated, so the iterator must be closed manually; otherwise each request will consume a thread until the server runs out of resources.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
[JBoss JIRA] (ISPN-6923) HotRod bulk commands that take a count need to close the iterator
by William Burns (JIRA)
William Burns created ISPN-6923:
-----------------------------------
Summary: HotRod bulk commands that take a count need to close the iterator
Key: ISPN-6923
URL: https://issues.jboss.org/browse/ISPN-6923
Project: Infinispan
Issue Type: Bug
Components: Server
Affects Versions: 8.0.0.Final
Reporter: William Burns
Fix For: 9.0.0.Alpha4, 8.2.4.Final
When using a distributed cache and a HotRod bulk operation with a count, the entries or keys are not fully iterated, so the iterator must be closed manually; otherwise each request will consume a thread until the server runs out of resources.
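A minimal sketch of the kind of handling the fix needs, assuming the embedded {{Cache}} API where {{entrySet()}} exposes a {{CloseableIterator}} (the helper class and names here are illustrative, not the actual server code): when only a limited number of entries is consumed, the iterator has to be closed explicitly:
{code:java}
// Illustrative sketch, not the server-side fix itself: when iteration stops after
// `limit` entries, closing the CloseableIterator (here via try-with-resources)
// releases the distributed-iteration resources instead of leaving a thread
// occupied per request.
import java.util.Map;

import org.infinispan.Cache;
import org.infinispan.commons.util.CloseableIterator;

final class LimitedBulkReadSketch {
   static <K, V> void readAtMost(Cache<K, V> cache, int limit) {
      try (CloseableIterator<Map.Entry<K, V>> it = cache.entrySet().iterator()) {
         int taken = 0;
         while (taken < limit && it.hasNext()) {
            Map.Entry<K, V> entry = it.next();
            // ... write the entry to the response ...
            taken++;
         }
      } // close() runs even when the limit stops iteration early
   }
}
{code}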
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)