<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=iso-8859-1" http-equiv=Content-Type>
<META name=GENERATOR content="MSHTML 8.00.6001.19400"></HEAD>
<BODY>
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013>Hello.</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013></SPAN></FONT> </DIV>
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013>We are using pessimistic transaction mode. In that case
everything is already locked by the time of the prepare, isn't
it?</SPAN></FONT></DIV>
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013>As for merge, in quorum mode it's simple: take the data
from the quorum. I think I will try to simply suppress sending data from
non-quorum members on merge, because currently every member sends its data,
which creates a complete mess of unsynchronized data after the merge (depending
on the timing).</SPAN></FONT></DIV>
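The quorum rule above (only the majority side may contribute data on merge) boils down to a simple majority check. A minimal standalone sketch, purely illustrative (the class and method names are hypothetical, not Infinispan API):

```java
// Hypothetical helper: decides whether a partition may act as the quorum
// side after a split, i.e. whether it holds a strict majority of the
// original cluster. Members outside the quorum would suppress sending
// their data on merge.
public final class QuorumCheck {
    private QuorumCheck() {}

    /** Smallest number of members that constitutes a quorum (strict majority). */
    public static int quorumSize(int clusterSize) {
        return clusterSize / 2 + 1;
    }

    /** True if a partition of the given size holds the quorum. */
    public static boolean hasQuorum(int partitionSize, int clusterSize) {
        return partitionSize >= quorumSize(clusterSize);
    }
}
```

For a 3-node cluster, a 2-node partition has quorum and a 1-node partition does not; for even cluster sizes the strict-majority rule means a clean half-and-half split leaves no quorum side at all.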
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013></SPAN></FONT> </DIV>
<DIV dir=ltr align=left><FONT color=#0000ff size=2 face=Arial><SPAN
class=116224608-06062013>Best regards, Vitalii
Tymchyshyn</SPAN></FONT></DIV><BR>
<DIV dir=ltr lang=en-us class=OutlookMessageHeader align=left>
<HR tabIndex=-1>
<FONT size=2 face=Tahoma><B>From:</B> infinispan-dev-bounces@lists.jboss.org
[mailto:infinispan-dev-bounces@lists.jboss.org] <B>On Behalf Of </B>Dan
Berindei<BR><B>Sent:</B> Wednesday, June 05, 2013 12:04 PM<BR><B>To:</B>
infinispan -Dev List<BR><B>Subject:</B> Re: [infinispan-dev] Using infinispan as
quorum-based nosql<BR></FONT><BR></DIV>
<DIV></DIV>
<DIV dir=ltr>
<DIV class=gmail_extra><BR><BR>
<DIV class=gmail_quote>On Mon, Jun 3, 2013 at 4:23 PM, <SPAN dir=ltr><<A
href="mailto:vitalii.tymchyshyn@ubs.com"
target=_blank>vitalii.tymchyshyn@ubs.com</A>></SPAN> wrote:<BR>
<BLOCKQUOTE
style="BORDER-LEFT: rgb(204,204,204) 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex"
class=gmail_quote>Hello.<BR><BR>Thanks for the information. I will subscribe
and vote for the issues noted.<BR>In the meantime I've implemented a hacky
JGroupsTransport that "downgrades" all SYNCHRONOUS invokeRemotely calls (except
CacheViewControlCommand and StateTransferControlCommand) to
SYNCHRONOUS_IGNORE_LEAVERS and uses a filter to check whether the required
number of answers was received (I tried to use the original invokeRemotely
return value, but it often returns something strange, like an empty map). It
seems to do the trick for me, but I am still not sure whether it has any side
effects.<BR><BR></BLOCKQUOTE>
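The "check with a filter" idea in the quoted message can be sketched independently of Infinispan's actual response-filter interface. Everything here is a hypothetical, standalone illustration of the counting logic, not the 5.1 API:

```java
// Hypothetical sketch of the response-counting filter described above:
// accept responses until a required number of valid answers has arrived,
// then stop waiting. Leavers (suspected members) simply never contribute
// a valid answer, so they are ignored rather than failing the call.
public final class QuorumResponseFilter {
    private final int required;
    private int validResponses;

    public QuorumResponseFilter(int required) {
        this.required = required;
    }

    /** Record one response; valid == true for a real (non-suspect) answer. */
    public void onResponse(boolean valid) {
        if (valid) {
            validResponses++;
        }
    }

    /** The RPC may complete once enough valid answers have been received. */
    public boolean enoughResponses() {
        return validResponses >= required;
    }
}
```

The point of counting inside the filter, rather than inspecting the invokeRemotely return value afterwards, is that the caller gets a definite success/failure signal even when the returned response map is incomplete.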
<DIV><BR></DIV>
<DIV>Indeed, I started working on a solution, but I over-engineered it and then
I got side-tracked with other stuff. Sorry about that.<BR></DIV>
<DIV><BR>The problem with using SYNCHRONOUS_IGNORE_LEAVERS everywhere, as I
found out, is that you don't want to ignore the primary owner of a key leaving
during a prepare/lock command (or the coordinator, in REPL mode prior to
5.3.0.CR1/ISPN-2772). If that happens, you have to retry on the new primary
owner, otherwise you can't know if the prepare command has locked the key or
not.<BR><BR>A similar problem appears in non-transactional caches with
supportsConcurrentUpdates=true: there the primary owner can ignore any of the
backup owners leaving, but the originator can't ignore the primary owner
leaving.<BR><BR> </DIV>
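The retry-on-new-primary-owner rule described above can be sketched generically: when the primary owner leaves mid-prepare the outcome is unknown, so the command must be retried against whoever becomes the new primary owner. Everything in this sketch (the exception type, the Supplier-based owner lookup) is hypothetical, not Infinispan code:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Hypothetical sketch: retry a prepare/lock-style operation against the
// current primary owner until it succeeds or retries are exhausted. If the
// owner leaves mid-call we cannot know whether the key was locked, so the
// only safe move is to look up the new owner and retry there.
public final class PrimaryOwnerRetry {
    /** Thrown (hypothetically) when the current primary owner left mid-call. */
    public static class OwnerLeftException extends RuntimeException {}

    public static <R> R invokeWithRetry(Supplier<String> currentPrimaryOwner,
                                        Function<String, R> prepareOnOwner,
                                        int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String owner = currentPrimaryOwner.get();
            try {
                return prepareOnOwner.apply(owner); // may lock the key or throw
            } catch (OwnerLeftException e) {
                // Outcome unknown: look up the new primary owner and retry.
            }
        }
        throw new IllegalStateException("primary owner kept leaving");
    }
}
```

This is exactly why a blanket SYNCHRONOUS_IGNORE_LEAVERS is unsafe for the primary owner: ignoring its departure skips the retry and leaves the lock state undetermined.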
<BLOCKQUOTE
style="BORDER-LEFT: rgb(204,204,204) 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex"
class=gmail_quote>For now I can see a merge problem in my test: different values
are picked during the merge. I am going to dig a little deeper and follow up,
but it's already a little strange to me, since the test algorithm is:<BR>1)
Assign the "old" value to the full cluster (it's REPL_SYNC mode)<BR>2) Block the
coordinator<BR>3) Write the "new" value to one of the two remaining nodes; it's
synchronized to the second remaining node<BR>4) Unblock the coordinator<BR>5)
Wait (I could not find a good way to wait for state transfer, so I just wait in
this case)<BR>6) Check the value on the coordinator<BR><BR>And in my test I
randomly get "old" or "new" in the assert. I am now going to check why. Maybe I
will need to "reinitialize" the smaller cluster partition to ensure the data is
taken from the quorum part of the cluster.<BR><BR></BLOCKQUOTE>
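Since there is no obvious public hook to wait for state transfer in step 5, a plain polling wait is one way to make the check less timing-dependent than a fixed sleep. A generic sketch (not an Infinispan API; the helper name is made up):

```java
import java.util.function.Supplier;

// Generic polling helper for the "wait" step above: poll a value until it
// matches the expected one or the timeout expires. Better than a fixed
// sleep, because it returns as soon as the expected value shows up.
public final class Await {
    private Await() {}

    public static boolean untilEquals(Supplier<String> actual, String expected,
                                      long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (expected.equals(actual.get())) {
                return true;
            }
            Thread.sleep(50); // poll interval
        }
        return expected.equals(actual.get()); // one final check at timeout
    }
}
```

In the test, step 5/6 would become something like <FONT face="Courier New">Await.untilEquals(() -> coordinatorCache.get(key), "new", timeout)</FONT>; note that this only removes flakiness from slow state transfer, not from the merge picking the wrong value in the first place.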
<DIV><BR></DIV>
<DIV>We don't handle merges properly. See <A
href="https://issues.jboss.org/browse/ISPN-263"
target=_blank>https://issues.jboss.org/browse/ISPN-263</A> and the discussion at
<A
href="http://markmail.org/message/meyczotzobuva7js">http://markmail.org/message/meyczotzobuva7js</A></DIV>
<DIV><BR></DIV>
<DIV>What happens right now is that after a merge, all the caches are assumed to
have up-to-date data, so there is no state transfer. We had several ideas
floating around on how we could force the smaller partition to receive data from
the quorum partition, but I think with the public API your best option is to
stop all the caches in the smaller partition after the split and start them back
up after the merge.<BR><BR></DIV>
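The stop-and-restart workaround above needs one decision per merge: which members were in the smaller partition. A standalone sketch of that decision, with members modeled as plain Strings (none of this is Infinispan or JGroups API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the "restart the smaller partition" idea: given the subgroups
// that are merging back together, every member outside the largest subgroup
// should have stopped its caches and restart them now, so its state is
// re-fetched from the quorum side instead of being assumed up to date.
public final class MergePolicy {
    private MergePolicy() {}

    /** Members that were in a minority partition and should restart their caches. */
    public static List<String> membersToRestart(List<List<String>> subgroups) {
        List<String> largest = subgroups.stream()
                .max(Comparator.comparingInt(List::size))
                .orElse(List.of());
        List<String> restart = new ArrayList<>();
        for (List<String> group : subgroups) {
            if (group != largest) { // reference check: skip the winning subgroup
                restart.addAll(group);
            }
        }
        return restart;
    }
}
```

A tie between equal-size subgroups would need a deterministic tie-break (e.g. lowest member address) so both sides agree on who restarts; the sketch simply keeps the first largest subgroup.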
<DIV>Cheers<BR></DIV>
<DIV>Dan<BR></DIV>
<DIV><BR></DIV>
<DIV> </DIV>
<BLOCKQUOTE
style="BORDER-LEFT: rgb(204,204,204) 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex"
class=gmail_quote>Best regards, Vitalii Tymchyshyn<BR>
<DIV>
<DIV><BR>-----Original Message-----<BR>From: <A
href="mailto:infinispan-dev-bounces@lists.jboss.org"
target=_blank>infinispan-dev-bounces@lists.jboss.org</A> [mailto:<A
href="mailto:infinispan-dev-bounces@lists.jboss.org"
target=_blank>infinispan-dev-bounces@lists.jboss.org</A>] On Behalf Of Galder
Zamarreno<BR>Sent: Monday, June 03, 2013 9:04 AM<BR>To: infinispan -Dev
List<BR>Subject: Re: [infinispan-dev] Using infinispan as quorum-based
nosql<BR><BR><BR>On May 30, 2013, at 5:10 PM, <A
href="mailto:vitalii.tymchyshyn@ubs.com"
target=_blank>vitalii.tymchyshyn@ubs.com</A> wrote:<BR><BR>>
Hello.<BR>><BR>> We are going to use Infinispan in our project as a NoSQL
solution. It<BR>> performs quite well for us, but currently we've run into
the following problem.<BR>> Note: We are using Infinispan 5.1.6 in
SYNC_REPL<BR>> mode in a small cluster.<BR>> The problem is that when any
node fails, any running transactions wait<BR>> for JGroups to decide whether
it has really failed, and then roll back<BR>> because of a SuspectException.
While we can live with the<BR>> delay, we'd really like to skip the rollback.
As for me, I actually<BR>> don't see a reason for the rollback, because
transactions started after<BR>> the leave will succeed. So, as for me,
previously running transactions<BR>> could do the same.<BR><BR>We're aware of
the problem (<A
href="https://issues.jboss.org/browse/ISPN-2402"
target=_blank>https://issues.jboss.org/browse/ISPN-2402</A>).<BR><BR>@Dan, have
there been any updates on this?<BR><BR>> The question for me is whether the
node that left will synchronize its state<BR>> after the merge (even if the
merge was done without an Infinispan restart). As<BR>> for me, it should, or
it won't work<BR>> correctly at all.<BR><BR>This is not in yet: <A
href="https://issues.jboss.org/browse/ISPN-263"
target=_blank>https://issues.jboss.org/browse/ISPN-263</A><BR><BR>> So,
I've found RpcManager's ResponseMode.SYNCHRONOUS_IGNORE_LEAVERS<BR>> and am
thinking of switching to it for RpcManager calls that don't specify<BR>> a
ResponseMode explicitly. As for me, it should do the trick. Also, I
am<BR>> going to enforce a quorum number of responses, but that's another
story.<BR>> So, what do you think, would it work?<BR><BR>^ Not sure if
that'll work. @Dan?<BR><BR>> P.S. Another question for me: how does it work
now, when a SuspectException is<BR>> thrown while broadcasting the
CommitCommand? As far as I can see, the commit is<BR>> still done on some
remote nodes (those still in the cluster), but<BR>> rolled back on the local
node because of this exception. Am I correct?<BR><BR>^ How Infinispan reacts in
these situations depends a lot on the type of communications (synchronous or
asynchronous) and the transaction configuration. Mircea can provide more details
on this.<BR><BR>Cheers,<BR><BR>> This<BR>> can cause inconsistencies, but
we must live with something in the<BR>> peer-to-peer world :) The only other
option is to switch from a<BR>> write-all, read-local to a write-quorum,
read-quorum scenario, which is<BR>> too complex a move for Infinispan, as for
me.<BR>><BR>> Best regards, Vitalii Tymchyshyn<BR>><BR>> Please
visit our website at<BR>> <A
href="http://financialservicesinc.ubs.com/wealth/E-maildisclaimer.html"
target=_blank>http://financialservicesinc.ubs.com/wealth/E-maildisclaimer.html</A><BR>>
for important disclosures and information about our e-mail policies.<BR>>
For your protection, please do not transmit orders or instructions by<BR>>
e-mail or include account numbers, Social Security numbers, credit<BR>>
card numbers, passwords, or other personal information.<BR>><BR>>
_______________________________________________<BR>> infinispan-dev mailing
list<BR>> <A href="mailto:infinispan-dev@lists.jboss.org"
target=_blank>infinispan-dev@lists.jboss.org</A><BR>> <A
href="https://lists.jboss.org/mailman/listinfo/infinispan-dev"
target=_blank>https://lists.jboss.org/mailman/listinfo/infinispan-dev</A><BR><BR><BR>--<BR>Galder
Zamarreño<BR><A href="mailto:galder@redhat.com"
target=_blank>galder@redhat.com</A><BR><A href="http://twitter.com/galderz"
target=_blank>twitter.com/galderz</A><BR><BR>Project Lead, Escalante<BR><A
href="http://escalante.io"
target=_blank>http://escalante.io</A><BR><BR>Engineer, Infinispan<BR><A
href="http://infinispan.org"
target=_blank>http://infinispan.org</A><BR></DIV></DIV></BLOCKQUOTE></DIV><BR></DIV></DIV></BODY></HTML>