On 7 Feb 2013, at 10:30, Bela Ban wrote:

> No, this *could* be in Infinispan 5.3 (it will be in JGroups 3.3).

+1. TO improvements are scheduled for 5.3.

> A MessageDispatcher (RpcDispatcher) instance picks which dispatching
> mechanism it wants to use. I have RequestHandler (the default) and a
> sub-interface AsyncRequestHandler. MessageDispatcher (and its subclass
> RpcDispatcher) will implement both (they already implement
> handle(Message)).
>
> So a user simply sets an attribute in MessageDispatcher to select
> async dispatching (sync is the default).
>
> What needs to be done on the Infinispan side is to override
> handle(Message,Response) and implement handling of requests in a thread
> pool. The current behavior (inherited from MessageDispatcher) will be to
> call handle(Message), which CommandAwareDispatcher already implements.
>
> The Infinispan side can be done in 10 minutes. However, the real work
> will be the dispatching of incoming requests to threads from the
> Infinispan thread pool, and the implementation of that thread pool,
> which doesn't exist yet. I guess preserving the ordering of requests
> will be the important part.
>
> If you have your own thread pool, sync RPCs can be sent without OOB, but
> the handle() method in CommandAwareDispatcher can decide, based on the
> mode (e.g. sync), whether to queue the request behind other requests or
> invoke it directly.
>
> I wanted to implement this quickly in JGroups so the hooks are in place
> for Infinispan to use them later, once a pool has been implemented.
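For concreteness, a minimal sketch of what that Infinispan-side override could look like, assuming the handle(Message, Response) shape described above (the Response type here is a stand-in for whatever JGroups 3.3 ends up shipping, and the pool size is arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jgroups.Message;

public class AsyncCommandHandlerSketch {

    // stand-in for the async reply hook described above
    interface Response {
        void send(Object reply, boolean isException);
    }

    private final ExecutorService infinispanPool = Executors.newFixedThreadPool(32);

    // Called by JGroups on the (OOB) delivery thread; returns immediately,
    // the reply is sent later from the Infinispan pool.
    public void handle(final Message request, final Response response) {
        infinispanPool.execute(new Runnable() {
            public void run() {
                try {
                    response.send(handleSync(request), false);
                } catch (Throwable t) {
                    response.send(t, true); // ship the exception back to the caller
                }
            }
        });
    }

    // existing synchronous logic, i.e. what handle(Message) does today
    private Object handleSync(Message request) throws Exception {
        return null; // placeholder
    }
}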
> On 2/7/13 10:56 AM, Manik Surtani wrote:
>> Very interesting. However, I presume this would be something for
>> Infinispan 6.0? Any thoughts on backward compat?
>>
>> On 7 Feb 2013, at 04:53, Bela Ban <bban@redhat.com> wrote:
>>> Hi Pedro,
>>>
>>> this is almost exactly what I wanted to implement!
>>>
>>> Question:
>>> - In RequestCorrelator.handleRequest():
>>>
>>> protected void handleRequest(Message req, Header hdr) {
>>>     Object retval;
>>>     boolean threwException = false;
>>>     MessageRequest messageRequest = new MessageRequestImpl(req, hdr);
>>>     try {
>>>         retval = request_handler.handle(messageRequest);
>>>     } catch(Throwable t) {
>>>         retval = t;
>>>         threwException = true;
>>>     }
>>>     messageRequest.sendReply(retval, threwException); // <-- should be
>>>     // moved up, or called only if threwException == true
>>> }
>>>
>>> you create a MessageRequestImpl and pass it to the RequestHandler. The
>>> request handler then dispatches the request (possibly) to a thread pool
>>> and calls MessageRequestImpl.sendReply() when done.
>>>
>>> However, you also call MessageRequest.sendReply() before returning from
>>> handleRequest(). I think this is an error: MessageRequest.sendReply()
>>> should be moved up inside the catch clause, or be called only if
>>> threwException is true, so that we send a reply on behalf of the
>>> RequestHandler if and only if it threw an exception (e.g. before it
>>> dispatched the request to a thread pool). Otherwise, we'd send a reply
>>> *twice*!
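In other words, the corrected flow would be roughly the following (same fragment as above; the correlator replies only when the handler itself threw, otherwise the handler, or the pool it delegated to, owns the reply):

protected void handleRequest(Message req, Header hdr) {
    MessageRequest messageRequest = new MessageRequestImpl(req, hdr);
    try {
        request_handler.handle(messageRequest);
    } catch(Throwable t) {
        // reply exactly once, and only on behalf of a handler that threw
        messageRequest.sendReply(t, true);
    }
}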
>>>
>>> A few changes I have in mind (need to think about it more):
>>>
>>> - I want to leave the existing RequestHandler interface in place, so
>>>   current implementations continue to work.
>>> - There will be a new AsyncRequestHandler interface (possibly extending
>>>   RequestHandler, so an implementation can decide to implement both).
>>>   The RequestCorrelator needs to have either request_handler or
>>>   async_request_handler set. If the former is set, the logic is
>>>   unchanged; if the latter is set, I'll invoke the async dispatching
>>>   code.
>>> - AsyncRequestHandler will look similar to the following:
>>>   void handle(Message request, Handback hb, boolean requires_response)
>>>   throws Throwable;
>>> - Handback is an interface, and its impl contains header information
>>>   (e.g. the request ID).
>>> - Handback has a sendReply(Object reply, boolean is_exception) method
>>>   which sends a response (or exception) back to the caller.
>>> - When requires_response is false, the AsyncRequestHandler doesn't need
>>>   to invoke sendReply().
>>>
>>> - Message batching
>>> - The above interfaces need to take message batching into account, e.g.
>>>   the ability to handle multiple requests concurrently (if they don't
>>>   need to be executed sequentially).
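Spelled out as Java, those bullets amount to something like this (a transcription of the sketch above, not a released API):

import org.jgroups.Message;
import org.jgroups.blocks.RequestHandler;

// Carries correlation state (e.g. the request ID) so that a reply produced
// on any thread can be matched to the original request.
interface Handback {
    void sendReply(Object reply, boolean isException);
}

// Extends the existing RequestHandler so one implementation can support
// both sync and async dispatching.
interface AsyncRequestHandler extends RequestHandler {
    // When requiresResponse is false, the handler need not call sendReply().
    void handle(Message request, Handback hb, boolean requiresResponse) throws Throwable;
}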
>>>
>>> Thoughts?
>>>
>>> On 2/6/13 8:29 PM, Pedro Ruivo wrote:
>>>> Hi all,
>>>>
>>>> Recently I came up with a solution that can help with the thread pool
>>>> problem, motivated by the following:
>>>>
>>>> In one of the first implementations of the Total Order based commit
>>>> protocol (TO), I had the requirement to move the PrepareCommand to
>>>> another thread pool. In summary, the TO protocol delivers the
>>>> PrepareCommand in a deterministic order on all the nodes, via a single
>>>> deliver thread. To ensure consistency, if it delivers two conflicting
>>>> transactions, the second transaction must wait until the first
>>>> transaction finishes. However, blocking the single deliver thread is
>>>> not a good solution, because while the thread is blocked no more
>>>> transactions can be validated, even if they don't conflict.
>>>>
>>>> So, after creating a dependency graph (i.e. the second transaction
>>>> knows that it must wait for the first transaction to finish), I move
>>>> the PrepareCommand to another thread pool. Initially, I implemented a
>>>> new command, called PrepareResponseCommand, that sends back the reply
>>>> to the PrepareCommand. This solution has one disadvantage: I had to
>>>> implement an ack collector in ISPN, while JGroups already offers me
>>>> that with synchronous communication.
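As an illustration of that scheme (not the actual Cloud-TM code; the Tx type and pool size are invented for the example), the deliver thread only records per-key dependencies and hands validation off to a pool, so it never blocks:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TotalOrderValidatorSketch {

    interface Tx {
        Iterable<Object> writtenKeys();
        void validateAndReply(); // run the PrepareCommand, then send the reply
    }

    private final ExecutorService validationPool = Executors.newFixedThreadPool(16);
    // Completion latch of the last delivered transaction that wrote each key.
    // Only touched by the single deliver thread, so no synchronization needed.
    private final Map<Object, CountDownLatch> lastWriter = new HashMap<Object, CountDownLatch>();

    // Runs on the total-order deliver thread; must never block.
    public void deliver(final Tx tx) {
        final List<CountDownLatch> dependencies = new ArrayList<CountDownLatch>();
        final CountDownLatch done = new CountDownLatch(1);
        for (Object key : tx.writtenKeys()) {
            CountDownLatch previous = lastWriter.put(key, done);
            if (previous != null) {
                dependencies.add(previous); // conflict: wait for the earlier tx
            }
        }
        validationPool.execute(new Runnable() {
            public void run() {
                try {
                    for (CountDownLatch dep : dependencies) {
                        dep.await(); // block a pool thread, not the deliver thread
                    }
                    tx.validateAndReply();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown(); // release transactions that depend on us
                }
            }
        });
    }
}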
>>>>
>>>> Recently (2 or 3 months ago) I implemented a simple modification in
>>>> JGroups. In a more generic approach, it allows other threads to reply
>>>> to an RPC request (such as the PrepareCommand). In the previous
>>>> scenario, I replaced the PrepareResponseCommand and the ack collector
>>>> implementation with a synchronous RPC invocation. I've used this
>>>> solution for other issues in the Cloud-TM ISPN fork.
>>>>
>>>> This solution is quite simple to implement and may help you to move
>>>> the commands to ISPN internal thread pools.
>>>> The modifications I've made are the following:
>>>>
>>>> 1) I added a new interface (see [1]) that is passed to the application
>>>>    instead of the Message object (see [4]). This interface contains
>>>>    the Message, and it has a method that allows the application to
>>>>    send the reply to that particular request.
>>>> 2) I added a new object in [4] with the meaning: "this return value is
>>>>    not the reply to the RPC request". This is the value I return when
>>>>    I want to release the thread, because ISPN must return some object
>>>>    from the handle() method. Of course, ISPN then has to invoke
>>>>    sendReply() somewhere else; otherwise the sender gets a
>>>>    TimeoutException.
>>>> 3) I also changed the RequestCorrelator implementation to support the
>>>>    previous modifications (see [2] and [3]).
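The shapes involved look roughly like this, inferred from [1] and [4] (illustrative names only; the real definitions are in the linked branches):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jgroups.Message;

public abstract class BlockingAwareHandlerSketch {

    // The interface from [1]: wraps the Message and lets any thread reply.
    interface MessageRequest {
        Message getMessage();
        void sendReply(Object reply, boolean isException);
    }

    // The sentinel from [4]: "this return value is not the reply".
    static final Object DO_NOT_REPLY = new Object();

    private final ExecutorService internalPool = Executors.newFixedThreadPool(16);

    Object handle(final MessageRequest request) throws Throwable {
        if (mayBlock(request.getMessage())) {
            internalPool.execute(new Runnable() {
                public void run() {
                    // the reply comes from the pool thread, not the OOB thread;
                    // forgetting to call sendReply() means a TimeoutException
                    // on the sender side
                    request.sendReply(process(request.getMessage()), false);
                }
            });
            return DO_NOT_REPLY; // releases the OOB thread immediately
        }
        return process(request.getMessage()); // fast path: normal sync reply
    }

    abstract boolean mayBlock(Message m);
    abstract Object process(Message m);
}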
>>>>
>>>> In the Cloud-TM ISPN fork I added a reference to [1] in the
>>>> BaseCacheRpcCommand and added the method sendReply() [5]. In addition,
>>>> I have the following use cases working perfectly with this:
>>>>
>>>> 1) Total Order
>>>>
>>>> The scenario described at the beginning. The ParallelTotalOrderManager
>>>> returns the DO_NOT_REPLY object when it receives a remote
>>>> PrepareCommand (see [6], line 77). When the PrepareCommand has finally
>>>> been processed by the rest of the interceptor chain, it invokes
>>>> PrepareCommand.sendReply() (see [6], line 230).
>>>>
>>>> 2) GMU remote get
>>>>
>>>> GMU ensures the SERIALIZABLE isolation level, and remote gets must
>>>> ensure that the node processing the request has a minimum version
>>>> available, to guarantee data consistency. The problem with our initial
>>>> implementation in a large cluster is that the number of remote gets is
>>>> very high, and all the OOB threads end up blocked on this condition.
>>>>
>>>> I've done the same thing with the ClusteredRemoteGet, as you can see
>>>> in [7], lines 93 and 105.
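Schematically, and with invented helper names (the real code is in [7]; versionAvailable(), onVersionReached() and doLocalGet() are hypothetical), the remote get defers its reply until the required version is installed, instead of blocking an OOB thread on the condition. This fragment follows the handler sketch above:

Object handleRemoteGet(final MessageRequest request, final long minVersion) {
    if (versionAvailable(minVersion)) {
        return doLocalGet(request.getMessage()); // reply immediately
    }
    // runs on whatever thread installs a version >= minVersion
    onVersionReached(minVersion, new Runnable() {
        public void run() {
            request.sendReply(doLocalGet(request.getMessage()), false);
        }
    });
    return DO_NOT_REPLY; // free the OOB thread while we wait
}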
>>>>
>>>> 3) GMU CommitCommand
>>>>
>>>> In GMU, CommitCommands cannot be processed in arbitrary order. If T1
>>>> is serialized before T2, the commit command of T1 must be processed
>>>> before the commit command of T2, even if the transactions do not
>>>> conflict. This creates the same problem as above, and the same
>>>> solution was adopted.
>>>>
>>>> I know that you have discussed some solutions, and I would like to
>>>> know your opinion about what I've described.
>>>>
>>>> If you have questions, please let me know.
>>>>
>>>> Cheers,
>>>> Pedro
>>>>
>>>> [1] https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/MessageRequest.java
>>>> [2] https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestCorrelator.java#L463
>>>> [3] https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestCorrelator.java#L495
>>>> [4] https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestHandler.java
>>>> [5] https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/BaseRpcCommand.java#L75
>>>> [6] https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/transaction/totalorder/ParallelTotalOrderManager.java
>>>> [7] https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/GMUClusteredGetCommand.java
href="https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestCorrelator.java#L495">https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestCorrelator.java#L495</a>&gt;<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">[4]<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><a href="https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestHandler.java">https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestHandler.java</a><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">&lt;<a href="https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestHandler.java">https://github.com/pruivo/JGroups/blob/t_cloudtm/src/org/jgroups/blocks/RequestHandler.java</a>&gt;<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">[5]<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><a href="https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/BaseRpcCommand.java#L75">https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/BaseRpcCommand.java#L75</a><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">&lt;<a href="https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/BaseRpcCommand.java#L75">https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/BaseRpcCommand.java#L75</a>&gt;<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">[6]<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><a href="https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/transaction/totalorder/ParallelTotalOrderManager.java">https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/transaction/totalorder/ParallelTotalOrderManager.java</a><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">[7]<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><a href="https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/GMUClusteredGetCommand.java">https://github.com/pruivo/infinispan/blob/cloudtm_v1/core/src/main/java/org/infinispan/commands/remote/GMUClusteredGetCommand.java</a><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">On 2/3/13 11:35 AM, Bela Ban wrote:<br></blockquote></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><blockquote type="cite"><blockquote type="cite">If you send me the details, I'll take a look. 
>>>>> I'm pretty busy with message batching, so I can't promise next week,
>>>>> but soon...
>>>>>
>>>>> On 2/1/13 11:08 AM, Pedro Ruivo wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I had a similar problem when I tried GMU [1] in a "large" cluster
>>>>>> (40 VMs), because the remote gets and the commit messages (I'm
>>>>>> talking about ISPN commands) must wait for some conditions before
>>>>>> being processed.
>>>>>>
>>>>>> I solved this problem by adding a feature to JGroups [2] that allows
>>>>>> a request to be moved to another thread, releasing the OOB thread.
>>>>>> The other thread then sends the reply to the JGroups request.
>>>>>> Of course, I'm only moving commands that I know can block.
>>>>>>
>>>>>> I can go into more detail if you want =)
>>>>>>
>>>>>> Cheers,
>>>>>> Pedro
>>>>>>
>>>>>> [1] http://www.gsd.inesc-id.pt/~romanop/files/papers/icdcs12.pdf
>>>>>> [2] I would like to talk with Bela about this, because it makes my
>>>>>> life easier when supporting total order in ISPN.
>>>>>> I'll try to send an email this weekend =)
>>>>>>
>>>>>> On 01-02-2013 08:04, Radim Vansa wrote:
>>>>>>> Hi guys,
>>>>>>>
>>>>>>> after dealing with a large cluster for a while, I find the way we
>>>>>>> use OOB threads in synchronous configurations non-robust. Imagine a
>>>>>>> situation where a node which is not an owner of a key calls PUT. An
>>>>>>> RPC is then sent to the primary owner of that key, which reroutes
>>>>>>> the request to all other owners and, after they reply, replies
>>>>>>> back. There are two problems:
>>>>>>>
>>>>>>> 1) If we make X simultaneous requests from non-owners to the
>>>>>>>    primary owner, where X is the OOB thread pool size, all the OOB
>>>>>>>    threads end up waiting for responses and there is no thread left
>>>>>>>    to process an OOB response and release a thread.
>>>>>>> 2) Node A is the primary owner of keyA and a non-primary owner of
>>>>>>>    keyB, and node B is the primary owner of keyB and a non-primary
>>>>>>>    owner of keyA.
>>>>>>>    If we get many requests for both keyA and keyB from other
>>>>>>>    nodes, all OOB threads on both nodes end up calling RPCs to the
>>>>>>>    non-primary owner, and there is no one left to process those
>>>>>>>    requests.
>>>>>>>
>>>>>>> While we wait for the requests to time out, the nodes with depleted
>>>>>>> OOB thread pools start suspecting all other nodes because they
>>>>>>> can't receive heartbeats, etc.
>>>>>>>
>>>>>>> You can say "increase your OOB thread pool size", but that's not
>>>>>>> always an option; I currently have it set to 1000 threads and it's
>>>>>>> not enough. In the end I will always be limited by RAM, and
>>>>>>> something tells me that even nodes with a few gigs of RAM should be
>>>>>>> able to form a huge cluster. We use 160 HotRod worker threads in
>>>>>>> JDG, which means that 160 * clusterSize = 10240 (64 nodes in my
>>>>>>> cluster) parallel requests can be executed, and if 10% of them
>>>>>>> target the same node, with 1000 OOB threads, it gets stuck. It's
>>>>>>> about scaling and robustness.
>>>>>>>
>>>>>>> Not that I have any good solution, but I'd really like to start a
>>>>>>> discussion. Thinking about it a bit, the problem is that a blocking
>>>>>>> call (calling an RPC on the primary owner from a message handler)
>>>>>>> can block non-blocking calls (such as an RPC response, or a command
>>>>>>> that never sends any more messages). Therefore, a flag on the
>>>>>>> message saying "this won't send another message" could let the
>>>>>>> message be executed in a different thread pool, which can never be
>>>>>>> deadlocked. In fact, the pools could share threads, but the
>>>>>>> non-blocking one would always have a few threads spare. It's a bad
>>>>>>> solution, as maintaining knowledge of which messages could block on
>>>>>>> the other node is really, really hard (we can be sure only in the
>>>>>>> case of RPC responses), especially once locks come into play. I
>>>>>>> will welcome anything better.
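A minimal sketch of that two-pool idea (illustrative only; pool sizes are arbitrary, and the hard part is the classification itself, as noted above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public abstract class TwoPoolDispatcherSketch {

    // Handlers that may issue further blocking RPCs.
    private final ExecutorService blockingPool = Executors.newFixedThreadPool(1000);
    // Handlers guaranteed to make progress (e.g. RPC responses); kept
    // separate so that blocked requests can never starve them.
    private final ExecutorService nonBlockingPool = Executors.newFixedThreadPool(100);

    void dispatch(Object decodedRequest, Runnable handler) {
        if (willNotSendMoreMessages(decodedRequest)) {
            nonBlockingPool.execute(handler);
        } else {
            blockingPool.execute(handler);
        }
    }

    // In general only RPC responses are known to be non-blocking; for
    // anything else this would have to come from a flag set by the sender.
    abstract boolean willNotSendMoreMessages(Object decodedRequest);
}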
>>>>>>>
>>>>>>> Radim
>>>>>>>
>>>>>>> -----------------------------------------------------------
>>>>>>> Radim Vansa
>>>>>>> Quality Assurance Engineer
>>>>>>> JBoss Datagrid
>>>>>>> tel. +420532294559 ext. 62559
>>>>>>>
>>>>>>> Red Hat Czech, s.r.o.
>>>>>>> Brno, Purkyňova 99/71, PSČ 612 45
>>>>>>> Czech Republic
>>>
>>> --
>>> Bela Ban, JGroups lead (http://www.jgroups.org)
>>
>> --
>> Manik Surtani
>> manik@jboss.org
>> twitter.com/maniksurtani
>>
>> Platform Architect, JBoss Data Grid
>> http://red.ht/data-grid
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
<span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; font-size: medium; "><div style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div>Cheers,</div><div>--&nbsp;<br>Mircea Markus</div><div>Infinispan lead&nbsp;(<a href="http://www.infinispan.org">www.infinispan.org</a>)</div><div><br></div></div></span></div></span></div></span></div></span><br class="Apple-interchange-newline"></span><br class="Apple-interchange-newline">