On Wed, Aug 6, 2014 at 6:19 PM, Bela Ban <bban@redhat.com> wrote:
> Hey Dan,
>
> On 06/08/14 16:13, Dan Berindei wrote:
> > I could create the issue in JIRA, but I wouldn't make it high priority
> > because I think it has lots of corner cases with NBST and would cause
> > headaches for the maintainers of state transfer ;)
>
> I do believe the put-while-holding-the-lock issue *is* a critical issue;
> anyone banging a cluster of Infinispan nodes with more than 1 thread
> will run into lock timeouts, with or without transactions. The only
> workaround for now is to use total order, but at the cost of reduced
> performance. However, once a system starts hitting the lock timeout
> issues, performance drops to a crawl, way slower than TO, and work
> starts to pile up, which compounds the problem.

I wouldn't call it critical because you can always increase the number of
threads. It won't be pretty, but it will work around the thread exhaustion
issue.

> I believe doing a sync RPC while holding the lock on a key is asking for
> trouble and is (IMO) an anti-pattern.

We also hold a lock on a key between the LockControlCommand and the
TxCompletionNotificationCommand in pessimistic-locking caches, and there's
at least one sync PrepareCommand RPC between them...

So I don't see it as an anti-pattern; the only problem is that we should be
able to do that without blocking internal threads in addition to the user
thread (which is how tx caches do it).
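
Just so we're all talking about the same thing, the pattern under discussion
boils down to something like this on the primary owner (a simplified sketch
with invented names, not the actual interceptor code):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of the non-tx write path on the primary owner (invented
// names, not the real interceptor chain). The key lock is held across a
// synchronous RPC to the backups, so the carrier thread is parked for the
// whole round-trip and every other writer of the same key queues up on the
// lock -- which is where the lock timeouts and thread exhaustion come from.
public class PrimaryOwnerWriteSketch {

    private final ReentrantLock keyLock = new ReentrantLock(); // one lock per key in reality

    interface Backups {
        /** Completes when all backup owners have acknowledged the update. */
        CompletableFuture<Void> replicate(String key, Object value);
    }

    public Object put(Backups backups, String key, Object value) throws InterruptedException {
        if (!keyLock.tryLock(10, TimeUnit.SECONDS)) {          // lockAcquisitionTimeout
            throw new IllegalStateException("Could not acquire lock on " + key);
        }
        try {
            Object previous = write(key, value);               // update the data container
            backups.replicate(key, value).join();              // sync RPC: thread blocked,
                                                               // key lock still held
            return previous;
        } finally {
            keyLock.unlock();
        }
    }

    private Object write(String key, Object value) {
        return null; // stand-in for the real data container update
    }
}

The tx flavour holds the lock across the same kind of sync RPC; the difference
is only that it doesn't pin an internal thread on top of the user thread while
doing it.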

> Sorry if this has a negative impact on NBST, but should we not fix this
> because we don't want to risk a change to NBST?

I'm not saying it will have a negative impact on NBST; I'm just saying I don't
want to start implementing an incomplete proposal for the basic flow and leave
the state transfer/topology change issues for "later". What happens when a
node leaves, when a backup owner is added, or when the primary owner changes
should be part of the initial discussion, not an afterthought.

E.g. with your proposal, any updates in the replication queue on the primary
owner will be lost when that primary owner dies, even though we told the user
that we successfully updated the key. To quote from my first email on this
thread: "OTOH, if the primary owner dies, we have to ask a backup, and we can
lose the modifications not yet replicated by the primary."

With Sanne's proposal, we wouldn't report to the user that we stored the value
until all the backups confirmed the update, so we wouldn't have that problem.
But I don't see how we could keep the sequence of versions monotonic when the
primary owner of the key changes without some extra sync RPCs (also done while
holding the key lock). IIRC TOA also needs some sync RPCs to generate its
sequence numbers.
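
To make sure I'm reading Sanne's idea the way he means it, here is roughly the
backup-side ordering I have in mind (just a sketch with invented names, and it
deliberately skips the part I'm worried about, i.e. keeping the seqnos
monotonic across a primary change):

import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-key sequence numbers on a backup owner (invented names).
// The primary stamps every write with a seqno; the backup applies writes in
// seqno order and buffers anything that arrives early. What it does NOT
// cover is the hard part: generating a monotonic seqno stream when the
// primary owner of the key changes.
public class PerKeySeqnoBackup {

    private static final class KeyState {
        long nextSeqno = 1;                                    // next seqno we expect
        final TreeMap<Long, Object> pending = new TreeMap<>(); // buffered out-of-order writes
    }

    private final Map<String, KeyState> states = new ConcurrentHashMap<>();
    private final Map<String, Object> dataContainer = new ConcurrentHashMap<>();

    /** Called for every replicated write this backup receives from the primary. */
    public void onUpdate(String key, long seqno, Object value) {
        KeyState state = states.computeIfAbsent(key, k -> new KeyState());
        synchronized (state) {
            if (seqno < state.nextSeqno) {
                return;                                        // duplicate or stale, drop it
            }
            if (seqno > state.nextSeqno) {
                state.pending.put(seqno, value);               // too early, buffer it
                return;
            }
            apply(key, value);
            state.nextSeqno++;
            // Drain any buffered writes that are now in order.
            while (!state.pending.isEmpty() && state.pending.firstKey() == state.nextSeqno) {
                apply(key, state.pending.pollFirstEntry().getValue());
                state.nextSeqno++;
            }
        }
    }

    private void apply(String key, Object value) {
        dataContainer.put(key, value);
    }
}

The backups would converge to the primary's order without blocking, but the
primary still can't ack the user until all backups have confirmed the write.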

>
> > Besides, I'm still not sure I understood your proposals properly, e.g.
> > whether they are meant only for non-tx caches or you want to change
> > something for tx caches as well...
>
> I think this can be used for both cases; however, I think either Sanne's
> solution of using seqnos *per key* and updating in the order of seqnos
> or using Pedro's total order impl are probably better solutions.
>
> I'm not pretending these solutions are final (e.g. Sanne's solution
> needs more thought when multiple keys are involved), but we should at
> least acknowledge the issue exists, create a JIRA to prioritize it and
> then start discussing solutions.
<div class="im"><br></div></blockquote><div><br></div><div>We've been discussing solutions without a JIRA just fine :)</div><div><br></div><div>My feeling so far is that the thread exhaustion problem would be better served by porting TO to non-tx caches and/or changing non-tx locking to not require a thread. I have created an issue for TO [1], but IMO the locking rework [2] should be higher priority, as it can help both tx and non-tx caches.</div>

[1] https://issues.jboss.org/browse/ISPN-4610
[2] https://issues.jboss.org/browse/ISPN-2849
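
One possible shape for [2] (a very rough sketch with invented names, not an
actual ISPN-2849 design) is to have lock acquisition hand back a future, so
the rest of the command runs as a continuation once the previous owner
releases and a waiter never pins an internal thread:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a per-key lock whose waiters do not block a thread (invented
// names). Acquiring a contended lock returns a future; the continuation runs
// when the previous holder releases.
public class NonBlockingKeyLocks {

    /** A granted or pending lock; completing 'released' hands the key to the next waiter. */
    public static final class Handle {
        final CompletableFuture<Void> acquired = new CompletableFuture<>();
        final CompletableFuture<Void> released = new CompletableFuture<>();
    }

    /** Release future of the last enqueued owner, per key. */
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    public Handle lock(String key) {
        Handle handle = new Handle();
        CompletableFuture<Void> previous = tails.put(key, handle.released);
        if (previous == null) {
            handle.acquired.complete(null);                    // uncontended, granted now
        } else {
            previous.whenComplete((v, t) -> handle.acquired.complete(null)); // wait, threadless
        }
        return handle;
    }

    public void unlock(Handle handle) {
        handle.released.complete(null);  // wakes the next waiter's continuation, if any
        // A real implementation would also clean up empty entries in 'tails'.
    }

    // Example: perform a write under the key lock without ever parking a thread.
    public CompletableFuture<Void> putAsync(String key, Runnable writeAndReplicate) {
        Handle handle = lock(key);
        return handle.acquired
                     .thenRun(writeAndReplicate)
                     .whenComplete((v, t) -> unlock(handle));
    }
}

The same continuation style is also what would let the primary keep the key
locked across the backup RPC without burning a thread on it.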

> >
> > On Wed, Aug 6, 2014 at 1:02 PM, Bela Ban <bban@redhat.com> wrote:
> >
> > Seems like this discussion has died with the general agreement that this
> > is broken and with a few proposals on how to fix it, but without any
> > follow-up action items.
> >
> > I think we (= someone from the ISPN team) need to create a JIRA,
> > preferably blocking.
> >
> > WDYT?
> >
> > If not, here's what our options are:
> >
> > #1 I'll create a JIRA
> >
> > #2 We'll hold the team meeting in Krasnojarsk, Russia
> >
> > #3 There will be only vodka, no beers in #2
> >
> > #4 Bela will join the ISPN team
> >
> > Thoughts?
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev