<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 18, 2014 at 3:29 PM, Pedro Ruivo <span dir="ltr"><<a href="mailto:pedro@infinispan.org" target="_blank">pedro@infinispan.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
> On 09/18/2014 12:03 PM, Dan Berindei wrote:
> > Thanks Pedro, this looks great.
> >
> > However, I don't think it's ok to treat CommitCommands/pessimistic
> > PrepareCommands as RemoteLockCommands just because they may send L1
> > invalidation commands. It's true that those commands will block, but
> > there's no need to wait for any other command before doing the L1
> > invalidation. In fact, the non-tx writes on backup owners, which you
> > consider to be non-blocking, can also send L1 invalidation commands (see
> > L1NonTxInterceptor.invalidateL1).
>
> They are not treated as RemoteLockCommands. I just said that they are
> processed in the remote executor service (I need to double-check what I
> wrote in the wiki). Unfortunately, I hadn't thought about the L1 in that
> scenario... :(

OK, sorry, I leapt to conclusions :)

<span class=""><br>
><br>
> On the other hand, one of the good things that the remote executor did<br>
> was to allow queueing lots of commands with a higher topology id, when<br>
> one of the nodes receives the new topology much later than the others.<br>
> We still have to consider each TopologyAffectedCommand as potentially<br>
> blocking and put it through the remote executor.<br>
><br>
> And InvalidateL1Commands are also TopologyAffectedCommands, so there's<br>
> still a potential for deadlock when L1 is enabled and we have maxThreads<br>
> write commands blocked sending L1 invalidations and those L1<br>
> invalidation commands are stuck in the remote executor's queue on<br>
> another node. And with (very) unlucky timing the remote executor might<br>
> not even get to create maxThreads threads before the deadlock appears. I<br>
> wonder if we could write a custom executor that checks what the first<br>
> task in the queue is every second or so, and creates a bunch of new<br>
> threads if the first task in the queue hasn't changed.<br>
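
To make the watchdog idea quoted above a bit more concrete, here's a rough,
untested sketch in plain JDK terms (the class name and the numbers are made up
for illustration, it's not an existing Infinispan component): a scheduled task
peeks at the head of the pool's queue every second, and if the same task is
still sitting there it assumes the current threads are all blocked and lets
the pool grow a bit.

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WatchdogExecutor implements Executor {

   private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
   private final ThreadPoolExecutor pool;
   private final ScheduledExecutorService watchdog =
         Executors.newSingleThreadScheduledExecutor();
   private final int absoluteMaxThreads;
   private Runnable lastHead;

   public WatchdogExecutor(int initialThreads, int absoluteMaxThreads) {
      this.absoluteMaxThreads = absoluteMaxThreads;
      // core == max, so the pool normally stays at initialThreads and queues the rest
      this.pool = new ThreadPoolExecutor(initialThreads, initialThreads,
            60, TimeUnit.SECONDS, queue);
      watchdog.scheduleAtFixedRate(this::checkQueueHead, 1, 1, TimeUnit.SECONDS);
   }

   private synchronized void checkQueueHead() {
      Runnable head = queue.peek();
      if (head != null && head == lastHead
            && pool.getMaximumPoolSize() < absoluteMaxThreads) {
         // The same task has been stuck at the head for a whole period, so the
         // running tasks are probably all blocked: add a few more threads.
         int newSize = Math.min(pool.getMaximumPoolSize() + 4, absoluteMaxThreads);
         pool.setMaximumPoolSize(newSize);
         pool.setCorePoolSize(newSize); // starts new threads for the queued tasks
      }
      lastHead = head;
   }

   @Override
   public void execute(Runnable command) {
      pool.execute(command);
   }

   public void shutdown() {
      watchdog.shutdownNow();
      pool.shutdown();
   }
}

It never shrinks back in this form, and it can't tell a genuinely deadlocked
head from a merely slow one, but something along these lines would keep
growing until the blocked commands at the head can make progress.
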
>
> I need to think a little more about it.
>
> So, a single put can originate:
> 1 RPC to the primary owner (to lock)
> X RPCs to invalidate L1 from the primary owner
> R RPCs from the primary owner to the backup owners
> Y RPCs to invalidate L1 from the backup owners
>
> Is this correct?

That is correct when "smart" L1 invalidation is enabled
(l1.invalidationThreshold > 0). But it is disabled by default, so it's more
like this:

1 RPC to the primary owner
0 or 1 broadcast RPCs to invalidate L1 from the primary owner
numOwners - 1 RPCs from the primary owner to the backup owners
0 or 1 broadcast RPCs from each backup owner

In rare circumstances there might be some more L1 invalidations from the
L1LastChanceInterceptor.
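
Putting rough numbers on the default case (the helper below is purely
illustrative; the name and parameters are mine, not an Infinispan API):

// Illustrative only: worst-case RPC count for a single put with the default
// (non-"smart") L1 invalidation, following the breakdown above.
static int worstCaseRpcsPerPut(int numOwners, boolean primaryInvalidatesL1,
                               boolean backupsInvalidateL1) {
   int rpcs = 1;                                    // originator -> primary owner
   rpcs += primaryInvalidatesL1 ? 1 : 0;            // broadcast from the primary
   rpcs += numOwners - 1;                           // primary -> backup owners
   rpcs += backupsInvalidateL1 ? numOwners - 1 : 0; // one broadcast per backup
   return rpcs;
}

// e.g. numOwners = 2 with both invalidations needed: 1 + 1 + 1 + 1 = 4 RPCs,
// plus whatever the L1LastChanceInterceptor adds in the rare cases above.
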
>
> Any suggestions are welcome.
>
> >
> > You're right about the remote executor getting full as well; we're
> > lacking any feedback mechanism to tell the sender to slow down, except
> > for blocking the OOB thread. I wonder if we could tell JGroups somehow
> > to discard the message from inside MessageDispatcher.handle (e.g. throw
> > a DiscardMessageException), so the sender has to retransmit it and we
> > don't block the OOB thread. That should allow us to set a size limit on
> > the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT?
>
> Even if we have a way to tell JGroups to resend the message, we have
> no idea whether the executor service is full or not. We allow a user to
> inject their own implementation of it.

We do allow a custom executor implementation, but it's our SPI. So we can
require the custom executor to be configured to throw a
RejectedExecutionException when the queue is full, instead of blocking the
caller thread, if that helps us.
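
For example, something along these lines (only a sketch of the shape I mean;
the thread counts, queue size and names are made up, not what we'd actually
ship): give the pool a bounded queue and AbortPolicy, so a full queue surfaces
as a RejectedExecutionException that the transport code can translate into
"discard and let JGroups retransmit" instead of blocking the OOB thread.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectingRemoteExecutor {

   // Bounded queue + AbortPolicy: when the queue is full, execute() throws
   // RejectedExecutionException instead of blocking the caller thread.
   // The thread and queue sizes here are placeholders, not real defaults.
   private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
         8, 32, 60, TimeUnit.SECONDS,
         new ArrayBlockingQueue<>(1000),
         new ThreadPoolExecutor.AbortPolicy());

   /**
    * Returns false when the pool is saturated, so the caller (e.g. the code
    * sitting in MessageDispatcher.handle) can tell JGroups to drop the message
    * and rely on retransmission instead of blocking the OOB thread.
    */
   public boolean trySubmit(Runnable remoteCommand) {
      try {
         pool.execute(remoteCommand);
         return true;
      } catch (RejectedExecutionException rejected) {
         return false;
      }
   }
}
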
<span class=""><br>
><br>
> Cheers<br>
> Dan<br>
><br>
><br>
> On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo <<a href="mailto:pedro@infinispan.org">pedro@infinispan.org</a><br>
</span><span class="">> <mailto:<a href="mailto:pedro@infinispan.org">pedro@infinispan.org</a>>> wrote:<br>
><br>
> new link:<br>
> <a href="https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler" target="_blank">https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler</a><br>
><br>
> On 09/17/2014 05:08 PM, Pedro Ruivo wrote:<br>
> > Hi,<br>
> ><br>
> > I've just wrote on the wiki a new algorithm to better handle the<br>
> remote<br>
> > commands. You can find it in [1].<br>
> ><br>
> > If you have questions, suggestion or just want to discuss some<br>
> aspect,<br>
> > please do in thread. I'll update the wiki page based on this<br>
> discussion<br>
> ><br>
> > Thanks.<br>
> ><br>
> > Cheers,<br>
> > Pedro<br>
> ><br>
> ><br>
> [1]<a href="https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress.." target="_blank">https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress..</a>.)<br>
> ><br>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev