<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 30, 2014 at 12:00 PM, Pedro Ruivo <span dir="ltr"><<a href="mailto:pedro@infinispan.org" target="_blank">pedro@infinispan.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class=""><br>
<br>
On 07/30/2014 09:02 AM, Dan Berindei wrote:<br>
><br>
<br>
><br>
> if your proposal is only meant to apply to non-tx caches, you are right<br>
> you don't have to worry about multiple primary owners... most of the<br>
> time. But when the primary owner changes, then you do have 2 primary<br>
> owners (if the new primary owner installs the new topology first), and<br>
> you do need to coordinate between the 2.<br>
><br>
<br>
> I think it is the same for transactional caches, i.e. the commands wait
> for the transaction data from the new topology to be installed. In
> non-tx caches, the old primary owner will send the next "sequence
> number" to the new primary owner, and only after that does the new
> primary owner start to give the orders.

I'm not sure that's related: commands that wait for a newer topology have
not blocked a thread since the ISPN-3527 fix.
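That said, if I understand the handoff correctly, it would be roughly
this toy sketch (WriteSequencer and its method names are invented, not
Infinispan APIs, and the real thing would be tied to topology
installation):

// Toy sketch of the sequence-number handoff between old and new primary.
class WriteSequencer {
    private long nextSeq = -1; // -1 = this node may not order writes yet

    // Old primary: read out the next sequence number when handing over.
    synchronized long exportNextSequence() {
        return nextSeq;
    }

    // New primary: only after this may it start "giving the orders".
    synchronized void importNextSequence(long seq) {
        nextSeq = seq;
    }

    // Current primary: assign the next sequence number to a write.
    synchronized long order() {
        if (nextSeq < 0)
            throw new IllegalStateException("sequence handoff not complete");
        return nextSeq++;
    }
}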
> Otherwise, I can implement a total order version for non-tx caches, and
> all the write serialization would be done in JGroups; Infinispan only
> has to apply the updates as soon as they are delivered.

Right, that sounds quite interesting. But you'd also need a less-blocking
state transfer ;)
<div><div class="h5"><br>
> Slightly related: we also considered generating a version number on the<br>
> client for consistency when the HotRod client retries after a primary<br>
> owner failure [1]. But the clients can't create a monotonic sequence<br>
> number, so we couldn't use that version number for this.<br>
><br>
> [1] <a href="https://issues.jboss.org/browse/ISPN-2956" target="_blank">https://issues.jboss.org/browse/ISPN-2956</a><br>
><br>
><br>
> Also I don't see it as an alternative to TOA, I rather expect it to<br>
> work nicely together: when TOA is enabled you could trust the<br>
> originating sequence source rather than generate a per-entry sequence,<br>
> and in neither case you need to actually use a Lock.<br>
> I haven't thought how the sequences would need to interact (if they<br>
> need), but they seem complementary to resolve different aspects, and<br>
> also both benefit from the same cleanup and basic structure.<br>
><br>
><br>
> We don't acquire locks at all on the backup owners - either in tx or<br>
> non-tx caches. If state transfer is in progress, we use<br>
> ConcurrentHashMap.compute() to store tracking information, which uses a<br>
> synchronized block, so I suppose we do acquire locks. I assume your<br>
> proposal would require a DataContainer.compute() or something similar on<br>
> the backups, to ensure that the version check and the replacement are<br>
> atomic.<br>
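Something like this minimal sketch, with a plain ConcurrentHashMap
standing in for the data container (Versioned is a made-up holder, not an
Infinispan type):

import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of an atomic version-check-and-replace on a backup owner.
class VersionedStore {
    record Versioned(long version, Object value) {}

    private final ConcurrentHashMap<Object, Versioned> map = new ConcurrentHashMap<>();

    // compute() runs the remapping function atomically per key, so the
    // version comparison and the write cannot interleave with other writers.
    boolean applyIfNewer(Object key, long version, Object value) {
        Versioned fresh = new Versioned(version, value);
        return map.compute(key, (k, old) ->
                old == null || old.version() < version ? fresh : old) == fresh;
    }
}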
> >
> > I still think TOA does what you want for tx caches. Your proposal
> > would only work for non-tx caches, so you couldn't use them together.
> >
> >
> > >> Another aspect is that the "user thread" on the primary owner needs
> > >> to wait (at least until we improve further) and only proceed after
> > >> ACK from backup nodes, but this is better modelled through a state
> > >> machine. (Also discussed in Farnborough.)
> > >
> > >
> > > To be clear, I don't think keeping the user thread on the originator
> > > blocked until we have the write confirmations from all the backups
> > > is a problem - a sync operation has to block, and it also serves to
> > > rate-limit user operations.
> >
> >
> > There are better ways to rate-limit than to make all operations slow;
> > we don't need to block a thread, we need to react to the reply from
> > the backup owners.
> > You still have an inherent rate limit in the outgoing packet queues:
> > if these fill up, then and only then is it nice to introduce some
> > back pressure.
> >
> >
> > Sorry, you got me confused when you called the thread on the primary
> > owner a "user thread". I agree that internal stuff can and should be
> > asynchronous and callback-based, but the user still has to see a
> > synchronous blocking operation.
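Concretely, the shape I have in mind is something like this sketch, where
the backup ACKs complete a future from a network callback and only the
user-facing call blocks (sendToBackups() is a stand-in, not a real API):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Sketch only: internally everything is callback-based; the sole
// blocking point is the user-facing put().
class SyncFacade {
    // Real impl: return a future that the backup-ACK callback completes.
    CompletableFuture<Void> sendToBackups(Object key, Object value) {
        return CompletableFuture.completedFuture(null);
    }

    // The user sees a synchronous operation; no internal thread is parked.
    void put(Object key, Object value) throws Exception {
        sendToBackups(key, value).get(20, TimeUnit.SECONDS); // replTimeout
    }
}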
> >
> >
> > > The problem appears when the originator is not the primary owner,
> > > and the thread blocking for backup ACKs is from the remote-executor
> > > pool (or OOB, when the remote-executor pool is exhausted).
> >
> > Not following. I guess this is out of scope now that I clarified that
> > the proposed solution is only to be applied between primary and
> > backups?
> >
> >
> > Yeah, I was just trying to clarify that there is no danger of
> > exhausting the remote-executor/OOB thread pools when the originator
> > of the write command is the primary owner (as it happens in the
> > HotRod server).
> >
> >
> > >>
> > >> It's also conceptually linked to:
> > >> - https://issues.jboss.org/browse/ISPN-1599
> > >> as you need to separate the locks of entries from the effective
> > >> user-facing lock, at least to implement transactions on top of
> > >> this model.
> > >
> > >
> > > I think we fixed ISPN-1599 when we changed passivation to use
> > > DataContainer.compute(). WDYT Pedro, is there anything else you'd
> > > like to do in the scope of ISPN-1599?
> > >
> > >>
> > >> I expect this to improve performance in a very significant way,
> > >> but it's getting embarrassing that it's still not done; at the
> > >> next face-to-face meeting we should also reserve some time for
> > >> retrospective sessions.
> > >
> > >
> > > Implementing the state-machine-based interceptor stack may give us
> > > a performance boost, but I'm much more certain that it's a very
> > > complex, high-risk task... and we don't have a stable test suite
> > > yet :)
> >
> > Cleaning up and removing some complexity such as
> > TooManyExecutorsException might help to get it stable, and keep it
> > there :)
> > BTW it was quite stable for me until you changed the JGroups UDP
> > default configuration.
> >
> >
> > Do you really use UDP to run the tests? The default is TCP, but maybe
> > some tests don't use TestCacheManagerFactory...
> >
> > I was just aligning our configs with Bela's recommendations: MERGE3
> > instead of MERGE2 and the removal of UFC in TCP stacks. If they cause
> > problems on your machine, you should make more noise :)
> >
> > Dan
> >
> > Sanne
> >
> > >
> > >
> > >>
> > >>
> > >> Sanne
> > >>
> > >> On 29 July 2014 15:50, Bela Ban <bban@redhat.com> wrote:
> > >> >
> > >> >
> > >> > On 29/07/14 16:42, Dan Berindei wrote:
> > >> >> Have you tried regular optimistic/pessimistic transactions as
> > >> >> well?
> > >> >
> > >> > Yes, in my first impl., but since I'm making only 1 change per
> > >> > request, I thought a TX was overkill.
> > >> >
> > >> >> They *should* have fewer issues with the OOB thread pool than
> > >> >> non-tx mode, and I'm quite curious how they stack up against TO
> > >> >> in such a large cluster.
> > >> >
> > >> > Why would they have fewer issues with the thread pools? AIUI, a
> > >> > TX involves 2 RPCs (PREPARE-COMMIT/ROLLBACK) compared to one
> > >> > when not using TXs. And we're sync anyway...
> > >> >
> > >> >
> > >> >> On Tue, Jul 29, 2014 at 5:38 PM, Bela Ban <bban@redhat.com> wrote:
> > >> >>
> > >> >> Following up on my own email, I changed the config to use
> > >> >> Pedro's excellent total order implementation:
> > >> >>
> > >> >> <transaction transactionMode="TRANSACTIONAL"
> > >> >>   transactionProtocol="TOTAL_ORDER" lockingMode="OPTIMISTIC"
> > >> >>   useEagerLocking="true" eagerLockSingleNode="true">
> > >> >>   <recovery enabled="false"/>
> > >> >>
> > >> >> With 100 nodes and 25 requester threads/node, I did NOT run
> > >> >> into any locking issues!
> > >> >>
> > >> >> I could even go up to 200 requester threads/node and the perf
> > >> >> was ~7'000-8'000 requests/sec/node. Not too bad!
> > >> >>
> > >> >> This really validates the concept of lockless total-order
> > >> >> dissemination of TXs; for the first time, this has been tested
> > >> >> on a large(r) scale (previously only on 25 nodes) and IT WORKS! :-)
> > >> >>
> > >> >> I still believe we should implement my suggested solution for
> > >> >> non-TO configs, but short of configuring thread pools of 1000
> > >> >> threads or higher, I hope TO will allow me to finally test a
> > >> >> 500-node Infinispan cluster!
> > >> >>
> > >> >>
> > >> >> On 29/07/14 15:56, Bela Ban wrote:
> > >> >> > Hi guys,
> > >> >> >
> > >> >> > sorry for the long post, but I do think I ran into an
> > >> >> > important problem and we need to fix it ... :-)
> > >> >> >
> > >> >> > I've spent the last couple of days running the IspnPerfTest
> > >> >> > [1] perftest on Google Compute Engine (GCE), and I've run
> > >> >> > into a problem with Infinispan. It is a design problem and
> > >> >> > can be mitigated by sizing thread pools correctly, but cannot
> > >> >> > be eliminated entirely.
> > >> >> >
> > >> >> >
> > >> >> > Symptom:
> > >> >> > --------
> > >> >> > IspnPerfTest has every node in a cluster perform 20'000
> > >> >> > requests on keys in the range [1..20000].
> > >> >> >
> > >> >> > 80% of the requests are reads and 20% writes.
> > >> >> >
> > >> >> > By default, we have 25 requester threads per node and 100
> > >> >> > nodes in a cluster, so a total of 2500 requester threads.
> > >> >> >
> > >> >> > The cache used is NON-TRANSACTIONAL / dist-sync / 2 owners:
> > >> >> >
> > >> >> > <namedCache name="clusteredCache">
> > >> >> >   <clustering mode="distribution">
> > >> >> >     <stateTransfer awaitInitialTransfer="true"/>
> > >> >> >     <hash numOwners="2"/>
> > >> >> >     <sync replTimeout="20000"/>
> > >> >> >   </clustering>
> > >> >> >
> > >> >> >   <transaction transactionMode="NON_TRANSACTIONAL"
> > >> >> >     useEagerLocking="true" eagerLockSingleNode="true"/>
> > >> >> >   <locking lockAcquisitionTimeout="5000"
> > >> >> >     concurrencyLevel="1000" isolationLevel="READ_COMMITTED"
> > >> >> >     useLockStriping="false"/>
> > >> >> > </namedCache>
> > >> >> >
> > >> >> > It has 2 owners, a lock acquisition timeout of 5s and a repl
> > >> >> > timeout of 20s. Lock striping is off, so we have 1 lock per
> > >> >> > key.
> > >> >> >
> > >> >> > When I run the test, I always get errors like those below:
> > >> >> >
> > >> >> > org.infinispan.util.concurrent.TimeoutException: Unable to
> > >> >> > acquire lock after [10 seconds] on key [19386] for requestor
> > >> >> > [Thread[invoker-3,5,main]]! Lock held by
> > >> >> > [Thread[OOB-194,ispn-perf-test,m5.1,5,main]]
> > >> >> >
> > >> >> > and
> > >> >> >
> > >> >> > org.infinispan.util.concurrent.TimeoutException: Node m8.1
> > >> >> > timed out
> > >> >> >
> > >> >> >
> > >> >> > Investigation:
> > >> >> > --------------
> > >> >> > When I looked at UNICAST3, I saw a lot of missing messages on
> > >> >> > the receive side and unacked messages on the send side. This
> > >> >> > caused me to look into the (mainly OOB) thread pools and -
> > >> >> > voila - maxed out!
> > >> >> >
> > >> >> > I learned from Pedro that the Infinispan internal thread pool
> > >> >> > (with a default of 32 threads) can be configured, so I
> > >> >> > increased it to 300 and increased the OOB pools as well.
> > >> >> >
> > >> >> > This mitigated the problem somewhat, but when I increased the
> > >> >> > requester threads to 100, I had the same problem again.
> > >> >> > Apparently, the Infinispan internal thread pool uses a
> > >> >> > rejection policy of "run" and thus uses the JGroups (OOB)
> > >> >> > thread when exhausted.
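For reference, that "run" policy matches java.util.concurrent's
CallerRunsPolicy; roughly (sizes illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Once all core threads are busy and the queue is full, the *submitting*
// thread (here, a JGroups OOB thread) runs the task itself.
class InternalPoolSketch {
    ThreadPoolExecutor internalPool = new ThreadPoolExecutor(
            32, 32, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(1000),
            new ThreadPoolExecutor.CallerRunsPolicy());
}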
> > >> >> >
> > >> >> > I learned (from Pedro and Mircea) that GETs and PUTs work as
> > >> >> > follows in dist-sync / 2 owners:
> > >> >> > - GETs are sent to the primary and backup owners, and the
> > >> >> >   first response received is returned to the caller. No locks
> > >> >> >   are acquired, so GETs shouldn't cause problems.
> > >> >> >
> > >> >> > - A PUT(K) is sent to the primary owner of K
> > >> >> > - The primary owner
> > >> >> >   (1) locks K
> > >> >> >   (2) updates the backup owner synchronously *while holding
> > >> >> >       the lock*
> > >> >> >   (3) releases the lock
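In code, that PUT path boils down to something like this compilable
sketch (the helper methods are stand-ins, not Infinispan APIs):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// The current PUT path on the primary owner, reduced to its essentials.
class PrimaryOwner {
    private final ConcurrentHashMap<Object, Lock> locks = new ConcurrentHashMap<>();

    void put(Object k, Object v) {
        Lock lock = locks.computeIfAbsent(k, x -> new ReentrantLock());
        lock.lock();                       // (1) lock K
        try {
            updateLocally(k, v);
            replicateToBackupSync(k, v);   // (2) sync RPC *while holding the lock*
        } finally {
            lock.unlock();                 // (3) release; next writer of K may proceed
        }
    }

    void updateLocally(Object k, Object v) { /* local data container write */ }
    void replicateToBackupSync(Object k, Object v) { /* blocking RPC to backup owner */ }
}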
> > >> >> >
> > >> >> >
> > >> >> > Hypothesis
> > >> >> > ----------
> > >> >> > (2) above is done while holding the lock. The sync update of
> > >> >> > the backup owner is done with the lock held to guarantee that
> > >> >> > the primary and backup owner of K have the same values for K.
> > >> >> >
> > >> >> > However, the sync update *inside the lock scope* slows things
> > >> >> > down (can it also lead to deadlocks?); there's the risk that
> > >> >> > the request is dropped due to a full incoming thread pool, or
> > >> >> > that the response is not received because of the same, or
> > >> >> > that the locking at the backup owner blocks for some time.
> > >> >> >
> > >> >> > If we have many threads modifying the same key, then we have
> > >> >> > a backlog of locking work against that key. Say we have 100
> > >> >> > requester threads and a 100 node cluster. This means that we
> > >> >> > have 10'000 threads accessing keys; with 2'000 writers
> > >> >> > there's a big chance that some writers pick the same key at
> > >> >> > the same time.
> > >> >> >
> > >> >> > For example, if we have 100 threads accessing key K and it
> > >> >> > takes 3ms to replicate K to the backup owner, then the last
> > >> >> > of the 100 threads waits ~300ms before it gets a chance to
> > >> >> > lock K on the primary owner and replicate it as well.
> > >> >> >
> > >> >> > Just a small hiccup in sending the PUT to the primary owner,
> > >> >> > sending the modification to the backup owner, waiting for the
> > >> >> > response, or GC, and the delay will quickly become bigger.
> > >> >> >
> > >> >> >
> > >> >> > Verification
> > >> >> > ------------
> > >> >> > To verify the above, I set numOwners to 1. This means that
> > >> >> > the primary owner of K does *not* send the modification to
> > >> >> > the backup owner; it only locks K, modifies K and unlocks K
> > >> >> > again.
> > >> >> >
> > >> >> > I ran the IspnPerfTest again on 100 nodes, with 25
> > >> >> > requesters, and NO PROBLEM!
> > >> >> >
> > >> >> > I then increased the requesters to 100, 150 and 200 and the
> > >> >> > test completed flawlessly! Performance was around *40'000
> > >> >> > requests per node per sec* on 4-core boxes!
> > >> >> >
> > >> >> >
> > >> >> > Root cause
> > >> >> > ----------
> > >> >> > *******************
> > >> >> > The root cause is the sync RPC of K to the backup owner(s) of
> > >> >> > K while the primary owner holds the lock for K.
> > >> >> > *******************
> > >> >> >
> > >> >> > This causes a backlog of threads waiting for the lock, and
> > >> >> > that backlog can grow to exhaust the thread pools: first the
> > >> >> > Infinispan internal thread pool, then the JGroups OOB thread
> > >> >> > pool. The latter causes retransmissions to get dropped, which
> > >> >> > compounds the problem...
> > >> >> >
> > >> >> >
> > >> >> > Goal
> > >> >> > ----
> > >> >> > The goal is to make sure that the primary and backup owner(s)
> > >> >> > of K have the same value for K.
> > >> >> >
> > >> >> > Simply sending the modification to the backup owner(s)
> > >> >> > asynchronously won't guarantee this, as modification messages
> > >> >> > might get processed out of order because they're OOB!
> > >> >> >
> > >> >> >
> > >> >> > Suggested solution
> > >> >> > ------------------
> > >> >> > The modification RPC needs to be invoked *outside of the lock
> > >> >> > scope*:
> > >> >> > - lock K
> > >> >> > - modify K
> > >> >> > - unlock K
> > >> >> > - send modification to backup owner(s) // outside the lock scope
> > >> >> >
> > >> >> > The primary owner puts the modification of K into a queue,
> > >> >> > from which a separate thread/task removes it. The thread then
> > >> >> > invokes the PUT(K) on the backup owner(s).
> > >> >> >
> > >> >> > The queue has the modified keys in FIFO order, so the
> > >> >> > modifications arrive at the backup owner(s) in the right
> > >> >> > order.
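A minimal sketch of that queue-plus-replication-thread shape, assuming a
single drainer thread per node and with sendToBackups() standing in for
the real RPC:

import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// FIFO queue of modifications, drained by one replication thread so the
// backups see the updates in the order the primary applied them.
class BackupReplicator implements Runnable {
    private final BlockingQueue<Map.Entry<Object, Object>> queue = new LinkedBlockingQueue<>();

    // Called by the primary owner *after* it has unlocked K.
    void enqueue(Object key, Object value) {
        queue.add(Map.entry(key, value));
    }

    @Override public void run() {
        try {
            while (true) {
                Map.Entry<Object, Object> mod = queue.take(); // FIFO
                sendToBackups(mod.getKey(), mod.getValue());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shut down the drainer
        }
    }

    // Stand-in for the real RPC to the backup owner(s).
    void sendToBackups(Object key, Object value) {}
}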
> > >> >> >
> > >> >> > This requires that the way GET is implemented changes
> > >> >> > slightly: instead of invoking a GET on all owners of K, we
> > >> >> > only invoke it on the primary owner, then the next-in-line
> > >> >> > etc.
> > >> >> >
> > >> >> > The reason for this is that the backup owner(s) may not yet
> > >> >> > have received the modification of K.
> > >> >> >
> > >> >> > This is a better impl anyway (we discussed this before)
> > >> >> > because it generates less traffic; in the normal case, all
> > >> >> > but 1 of the GET requests are unnecessary.
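The staggered GET could look like this sketch (Owner and its get() are
invented stand-ins, not Infinispan interfaces):

import java.util.List;
import java.util.concurrent.TimeoutException;

class StaggeredGet {
    // Stand-in for a remote owner of the key.
    interface Owner {
        Object get(Object key, long timeoutMs) throws TimeoutException;
    }

    // Try the primary first; only on timeout fall through to the next
    // owner, instead of multicasting the GET to all owners up front.
    Object get(Object key, List<Owner> ownersInOrder, long timeoutMs) throws TimeoutException {
        for (Owner owner : ownersInOrder) {
            try {
                return owner.get(key, timeoutMs);
            } catch (TimeoutException e) {
                // this owner didn't answer in time; ask the next-in-line
            }
        }
        throw new TimeoutException("no owner answered for key " + key);
    }
}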
> > >> >> >
> > >> >> >
> > >> >> >
> > >> >> > Improvement
> > >> >> > -----------
> > >> >> > The above solution can be simplified and even made more
> > >> >> > efficient. Re-using concepts from IRAC [2], we can simply
> > >> >> > store the modified *keys* in the modification queue. The
> > >> >> > modification replication thread removes the key, gets the
> > >> >> > current value and invokes a PUT/REMOVE on the backup
> > >> >> > owner(s).
> > >> >> >
> > >> >> > Even better: a key is only ever added *once*, so if we have
> > >> >> > [5,2,17,3], adding key 2 is a no-op because the processing of
> > >> >> > key 2 (in second position in the queue) will fetch the
> > >> >> > up-to-date value anyway!
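A LinkedHashSet gives exactly that once-only FIFO behaviour; a minimal
sketch (names invented):

import java.util.Iterator;
import java.util.LinkedHashSet;

class KeyQueue {
    private final LinkedHashSet<Object> keys = new LinkedHashSet<>();

    // Primary owner: record that K was modified. Adding an already-queued
    // key is a no-op: [5,2,17,3] + 2 stays [5,2,17,3], because processing
    // key 2 will read the up-to-date value anyway.
    synchronized void onModified(Object key) {
        if (keys.add(key))
            notifyAll();
    }

    // Replication thread: take the oldest key; the caller then reads the
    // *current* value from the data container and PUTs/REMOVEs it on the
    // backup owner(s).
    synchronized Object takeOldest() throws InterruptedException {
        while (keys.isEmpty())
            wait();
        Iterator<Object> it = keys.iterator();
        Object key = it.next();
        it.remove();
        return key;
    }
}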
> > >> >> >
> > >> >> >
> > >> >> > Misc
> > >> >> > ----
> > >> >> > - Could we possibly use total order to send the updates in
> > >> >> >   TO? TBD (Pedro?)
> > >> >> >
> > >> >> >
> > >> >> > Thoughts?
> > >> >> >
> > >> >> >
> > >> >> > [1] https://github.com/belaban/IspnPerfTest
> > >> >> > [2] https://github.com/infinispan/infinispan/wiki/RAC:-Reliable-Asynchronous-Clustering
> > >> >> >
> > >> >>
> > >> >> --
> > >> >> Bela Ban, JGroups lead (http://www.jgroups.org)
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
</div></div></blockquote></div><br></div></div>