<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">None of the existing Hash
implementations can, but this new one will be special. It could
have access to the config (and CH) of the user's cache so it will
know the number of segments. The index cache will have to use the
same type of CH as the data cache in order to keep ownership in
sync and the Hash implementation will be the special delegating
Hash.<br>
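The delegating Hash described above could be sketched roughly as below. This is only an illustration: TaggedKey and DelegatingHash are made-up names, and the (hash &amp; Integer.MAX_VALUE) % numSegments mapping is a simplification, not the real ConsistentHash mapping.

```java
// Minimal, self-contained sketch of the delegating Hash idea.
// TaggedKey and DelegatingHash are hypothetical names, and the
// (hash & Integer.MAX_VALUE) % numSegments mapping is a
// simplification of the real ConsistentHash mapping.
import java.util.Objects;

public class DelegatingHashSketch {

    /** Hypothetical wrapper marking a key that must land in a given segment. */
    public static final class TaggedKey {
        final Object key;
        final int segment;
        public TaggedKey(Object key, int segment) {
            this.key = key;
            this.segment = segment;
        }
    }

    /** Delegates to the stock hash for all keys except tagged ones. */
    public static final class DelegatingHash {
        private final int numSegments; // known from the cache config / CH

        public DelegatingHash(int numSegments) {
            this.numSegments = numSegments;
        }

        public int hash(Object o) {
            if (o instanceof TaggedKey) {
                // Return a hash code that maps exactly to the desired segment.
                return ((TaggedKey) o).segment;
            }
            return Objects.hashCode(o); // stock behaviour for all other keys
        }

        public int segmentOf(Object o) {
            return (hash(o) & Integer.MAX_VALUE) % numSegments;
        }
    }

    public static void main(String[] args) {
        DelegatingHash h = new DelegatingHash(64);
        TaggedKey tagged = new TaggedKey("index-chunk-0", 42);
        System.out.println(h.segmentOf(tagged));      // 42
        System.out.println(h.segmentOf("plain-key")); // wherever the stock hash puts it
    }
}
```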
<br>
There is a twist though: the above only works with
SyncConsistentHash, because two caches with identical topology
that use DefaultConsistentHash could still disagree on key
ownership. Only SyncConsistentHash guarantees they stay in sync.<br>
<br>
This assumes knowledge of how the CH currently maps hash codes
to segments. I've spotted at least three places in the code where
this happens, so it is time to either document it or, as you
suggest, move this responsibility to the Hash interface to make
it really pluggable.<br>
<br>
Adrian<br>
<br>
On 01/20/2015 03:32 PM, Dan Berindei wrote:<br>
</div>
<blockquote
cite="mid:CA+nfvwTVFy1jYuB0iO=YqczLXZQ6i8f2hhc7NJYW9+us9edB=Q@mail.gmail.com"
type="cite">
<div dir="ltr">Adrian, I don't think that will work. The Hash
doesn't know the number of segments so it can't tell where a
particular key will land - even assuming knowledge about how the
ConsistentHash will map hash codes to segments.
<div><br>
</div>
<div>However, I'm all for replacing the current Hash interface
with another interface that maps keys directly to segments.</div>
<div><br>
</div>
<div>Cheers</div>
<div>Dan</div>
<div><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Jan 20, 2015 at 4:08 AM,
Adrian Nistor <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:anistor@redhat.com" target="_blank">anistor@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi
Sanne,<br>
<br>
An alternative approach would be to implement an<br>
org.infinispan.commons.hash.Hash which delegates to the
stock<br>
implementation for all keys except those that need to be
assigned to a<br>
specific segment. It should return the desired segment for
those.<br>
<span class=""><font color="#888888"><br>
Adrian<br>
</font></span>
<div class="">
<div class="h5"><br>
<br>
On 01/20/2015 02:48 AM, Sanne Grinovero wrote:<br>
> Hi all,<br>
><br>
> I'm playing with an idea for some internal
components to be able to<br>
> "tag" the key for an entry to be stored into
Infinispan in a very<br>
> specific segment of the CH.<br>
><br>
> Conceptually the plan is easy to understand by
looking at this patch:<br>
><br>
> <a moz-do-not-send="true"
href="https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f"
target="_blank">https://github.com/Sanne/infinispan/commit/45a3d9e62318d5f5f950a60b5bb174d23037335f</a><br>
><br>
> Hacking the change into ReplicatedConsistentHash
is quite barbaric,<br>
> please bear with me as I couldn't figure a better
way to be able to<br>
> experiment with this. I'll probably want to
extend this class, but<br>
> then I'm not sure how to plug it in?<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>You would need to create your own
ConsistentHashFactory, possibly extending
ReplicatedConsistentHashFactory. You can then plug the
factory in with </div>
<div><br>
</div>
<div>configurationBuilder.clustering().hash().consistentHashFactory(yourFactory)</div>
<div><br>
</div>
<div>However, this isn't really a good idea, because you would
then need a different implementation for distributed mode, and
yet another for topology-aware clusters (with rack/machine/site
ids). Your users would also need to select the proper factory
for each cache.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">
<div class="h5">
><br>
> What would you all think of such a "tagging"
mechanism?<br>
><br>
> # Why I didn't use the KeyAffinityService<br>
> - I need to use my own keys, not the meaningless
stuff produced by the service<br>
> - the extensive usage of Random in there doesn't
seem suited for a<br>
> performance critical path</div>
</div>
</blockquote>
<div><br>
</div>
<div>You can plug in your own KeyGenerator to generate keys,
and maybe replace the Random with a static/thread-local
counter.</div>
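Swapping the Random for a counter, as suggested above, could look like the sketch below. CounterKeyGenerator is a hypothetical class for illustration, not the actual KeyGenerator SPI; it only shows replacing random draws with an AtomicLong.

```java
// Sketch of the suggestion: replace the Random used for key
// generation with a cheap monotonic counter. CounterKeyGenerator
// is a hypothetical class, not the real KeyGenerator SPI.
import java.util.concurrent.atomic.AtomicLong;

public class CounterKeyGenerator {
    private final AtomicLong counter = new AtomicLong();
    private final String prefix;

    public CounterKeyGenerator(String prefix) {
        this.prefix = prefix;
    }

    /** Produces a unique candidate key without touching a Random. */
    public String getKey() {
        return prefix + counter.getAndIncrement();
    }

    public static void main(String[] args) {
        CounterKeyGenerator gen = new CounterKeyGenerator("k-");
        System.out.println(gen.getKey()); // k-0
        System.out.println(gen.getKey()); // k-1
    }
}
```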
<div> <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">
<div class="h5"><span style="color:rgb(34,34,34)"> </span></div>
</div>
</blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">
<div class="h5">
><br>
> # Why I didn't use the Grouping API<br>
> - I need to pick the specific storage segment,
not just co-locate with<br>
> a different key<br>
><br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>This is actually a drawback of the KeyAffinityService
more than Grouping. With grouping, you can actually follow
the KeyAffinityService strategy and generate random
strings until you get one in the proper segment, and then
tag all your keys with that exact string.</div>
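The generate-until-it-fits strategy described above might look like this sketch. It assumes a simplified (hashCode &amp; Integer.MAX_VALUE) % numSegments mapping; the real ConsistentHash mapping differs, so findGroupForSegment is purely illustrative.

```java
// Sketch of the "generate candidates until one lands in the right
// segment" strategy; the winning string would then be used as the
// group for all keys (via the Grouping API). Assumes a simplified
// (hashCode & Integer.MAX_VALUE) % numSegments mapping, which is
// not the real ConsistentHash mapping.
public class GroupFinder {

    static int segmentOf(String s, int numSegments) {
        return (s.hashCode() & Integer.MAX_VALUE) % numSegments;
    }

    /** Tries successive candidate strings until one hits the wanted segment. */
    static String findGroupForSegment(int segment, int numSegments) {
        for (long i = 0; ; i++) {
            String candidate = "group-" + i;
            if (segmentOf(candidate, numSegments) == segment) {
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        String group = findGroupForSegment(17, 64);
        // Every key tagged with this group would co-locate in segment 17.
        System.out.println(segmentOf(group, 64)); // 17
    }
}
```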
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">
<div class="h5">
><br>
> The general goal is to make it possible to "tag"
all entries of an<br>
> index, and have an independent index for each
segment of the CH. So<br>
> the resulting effect would be, that when a
primary owner for any key K<br>
> is making an update, and this triggers an index
update, that update is<br>
> A) going to happen on the same node -> no
need to forward to a<br>
> "master indexing node"<br>
> B) each such write on the index happens on the
same node which is<br>
> primary owner for all the written entries of the
index.<br>
><br>
> There are two additional nice consequences:<br>
> - there would be no need to perform a reliable
"master election":<br>
> ownership singleton is already guaranteed by
Infinispan's essential<br>
> logic, so it would reuse that<br>
> - the propagation of writes on the index from
the primary owner<br>
> (which is the local node by definition) to backup
owners could use<br>
> REPL_ASYNC for most practical use cases.<br>
><br>
> So the net result is that the overhead for indexing
is reduced to 0 (ZERO)<br>
> blocking RPCs if the async repl is acceptable, or
to only one blocking<br>
> roundtrip if very strict consistency is required.<br>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Sounds very interesting, but I think there may be a
problem with your strategy: Infinispan doesn't guarantee that
one of the nodes executing the CommitCommand is the primary
owner at the time the CommitCommand is executed. You could
have something like this:</div>
<div><br>
</div>
<div>Cluster [A, B, C, D], key k, owners(k) = [A, B] (A is
primary)</div>
<div>C initiates a tx that executes put(k, v)</div>
<div>Tx prepare succeeds on A and B</div>
<div>A crashes, but the other nodes don't detect the crash
yet</div>
<div>Tx commit succeeds on B, which still thinks it is a backup
owner</div>
<div>B detects the crash and installs a consistent hash for the
new cluster view, with owners(k) = [B]</div>
<div><br>
</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div class="">
<div class="h5">
><br>
> Thanks,<br>
> Sanne<br>
> _______________________________________________<br>
> infinispan-dev mailing list<br>
> <a moz-do-not-send="true"
href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
> <a moz-do-not-send="true"
href="https://lists.jboss.org/mailman/listinfo/infinispan-dev"
target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
infinispan-dev mailing list
<a class="moz-txt-link-abbreviated" href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a>
<a class="moz-txt-link-freetext" href="https://lists.jboss.org/mailman/listinfo/infinispan-dev">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a></pre>
</blockquote>
<br>
</body>
</html>