Yes, I am trying to reuse those components and I've made only minor
modifications so far. Without a topology change that works. But I
understand that you had some concept behind these components and I am
looking for a natural extension.
So, if I need to add waiting, is it acceptable to add StateTransferLock
and DistributionManager into ClusterStreamManager? That sounds like the
most convenient way, but then I should remove the ConsistentHash
parameter from the calls made by DistributedCacheStream, because the
routing information would already be available in ClusterStreamManager.
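For concreteness, here is a minimal sketch of the restructuring I mean. All interfaces are simplified stand-ins with hypothetical signatures, not the real Infinispan ones; the point is only that ClusterStreamManager holds both components itself and computes routing internally:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

// Simplified stand-ins for the real components (hypothetical signatures).
interface StateTransferLock {
    // Completes once the local topology id reaches expectedTopologyId.
    CompletableFuture<Void> topologyFuture(int expectedTopologyId);
}

interface DistributionManager {
    int currentTopologyId();
    // Primary owner per segment; null while the owner is unknown.
    String primaryOwner(int segment);
}

// ClusterStreamManager now holds both components itself, so callers
// (DistributedCacheStream) no longer pass a ConsistentHash down.
class ClusterStreamManager {
    private final StateTransferLock stateTransferLock;
    private final DistributionManager distributionManager;

    ClusterStreamManager(StateTransferLock stl, DistributionManager dm) {
        this.stateTransferLock = stl;
        this.distributionManager = dm;
    }

    // Routing is computed internally; a segment with no known primary is
    // left out here and would be retried after
    // stateTransferLock.topologyFuture(...) fires.
    Map<String, Set<Integer>> routeSegments(Set<Integer> segments) {
        Map<String, Set<Integer>> targets = new HashMap<>();
        for (int segment : segments) {
            String owner = distributionManager.primaryOwner(segment);
            if (owner != null) {
                targets.computeIfAbsent(owner, k -> new HashSet<>()).add(segment);
            }
            // else: delay until a topology with a decided owner arrives
        }
        return targets;
    }
}
```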
Radim
On 04/10/2017 05:56 PM, William Burns wrote:
Comments inline
To be honest it might be easier to talk on IRC sometime since I am not
sure the exact part you are trying to work on now. I am guessing you
are reusing the ClusterStreamManager and LocalStreamManager parts and
adapting as needed but I don't know for sure.
On Mon, Apr 10, 2017 at 6:41 AM Radim Vansa <rvansa(a)redhat.com> wrote:
Hi Will,
while rebasing the scattered cache PR I've found a test failure when
handling streams, and I'd like to ask for your guidance on how to address it.
My problem is that primary owners can be temporarily unknown (when a
node crashes), but streams assume that there is always a primary owner.
Therefore, a remote stream operation must be delayed until a topology
arrives in which the new owner is decided.
One thing that reduces this: if a node was the primary owner, there is
a special exception. If the node becomes a backup owner after being
primary, that is fine. The segment is only suspected if the node loses
ownership completely.
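The rule can be stated in a few lines of code. This is a hypothetical sketch of the decision, not the real implementation; node names and the owner-list representation are made up for illustration:

```java
import java.util.List;

// Sketch of the suspicion rule described above: a segment a node was
// primary for is only suspected when the node drops out of the owner
// list entirely; demotion from primary to backup is fine.
class SegmentSuspicion {
    static boolean isSuspect(String node, List<String> ownersAfterTopologyChange) {
        // Still an owner (primary or backup): not suspected.
        return !ownersAfterTopologyChange.contains(node);
    }
}
```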
There is no waiting currently; the caller will submit the request again
until it eventually gets what it wants (this could be improved). There
is a wait, though, when the originator has a newer topology and a remote
node doesn't yet have the updated one (StreamRequestCommand implements
TopologyAffectedCommand).
Could you suggest how I should adapt the code?
The ClusterStreamManager only does one invocation and stores the
results from that invocation. The caller then has to adapt the next
call with the segments it still needs and call back into the
ClusterStreamManager if needed (segments could be owned locally now).
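That re-invocation loop can be sketched roughly as follows. The interface and method names here are hypothetical simplifications of the single-invocation contract described above, not the actual API:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in: performs one invocation and reports which
// segments completed, mirroring the single-shot ClusterStreamManager
// contract described above.
interface SingleInvocationManager {
    Set<Integer> invoke(Set<Integer> segments);
}

class StreamCaller {
    // The caller loops: each round it asks only for the segments it
    // still needs, then removes whatever completed.
    static Set<Integer> runUntilComplete(SingleInvocationManager mgr, Set<Integer> needed) {
        Set<Integer> remaining = new HashSet<>(needed);
        Set<Integer> done = new HashSet<>();
        while (!remaining.isEmpty()) {
            Set<Integer> completed = mgr.invoke(remaining);
            done.addAll(completed);
            remaining.removeAll(completed);
            // In the real code, remaining segments may now be owned
            // locally and are processed there instead of being re-sent.
        }
        return done;
    }
}
```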
ClusterStreamManagerImpl currently does not hold a DistributionManager
reference, the ConsistentHash is passed down from DistributedCacheStream,
and it seems that segments are suspected after any unsuccessful response.
Which components should react to topology changes?
The only component that currently reacts to topology changes is
LocalStreamManagerImpl, which registers a listener to detect when a
segment is lost. This is done on remote nodes only. When a segment is
lost, the node reports those segments as not completed back to the
requester (ClusterStreamManager) in its next response.
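The remote-side bookkeeping amounts to something like the following sketch. All names are made up for illustration; the real listener registration and response plumbing in LocalStreamManagerImpl look different:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the remote-side bookkeeping described above: a listener on
// topology changes records segments this node no longer owns, and the
// lost set is piggy-backed on the next response so the requester
// (ClusterStreamManager) can re-request them elsewhere.
class LocalSegmentTracker {
    private final String localNode;
    private final Set<Integer> lost = new HashSet<>();

    LocalSegmentTracker(String localNode) {
        this.localNode = localNode;
    }

    // Called from a topology-change listener.
    void onTopologyChange(int segment, List<String> newOwners) {
        if (!newOwners.contains(localNode)) {
            lost.add(segment);
        }
    }

    // Attached to the next response sent back to the requester.
    Set<Integer> drainLostSegments() {
        Set<Integer> result = new HashSet<>(lost);
        lost.clear();
        return result;
    }
}
```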
As you mentioned, any unsuccessful response is treated as suspected,
since we can't really trust that node, so no code is required on the
originator to listen for topology changes.
Thanks!
Radim
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Radim Vansa <rvansa(a)redhat.com>
JBoss Performance Team