[infinispan-dev] Remote command smarter dispatcher (merge ISPN-2808 and ISPN-2849)

Pedro Ruivo pedro at infinispan.org
Tue Mar 19 17:57:26 EDT 2013



On 03/19/2013 09:01 PM, Mircea Markus wrote:
>
> On 18 Mar 2013, at 16:09, Pedro Ruivo wrote:
>
>> Hi all,
>>
>> To solve ISPN-2808 (avoid blocking JGroups threads so that request
>> responses can still be delivered), I've created another thread pool to
>> which the potentially blocking commands (i.e. the commands that may
>> block until some state is reached) are moved.
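>>
>> Roughly, the dispatching looks like the sketch below (mayBlock() and
>> invokeAndReply() are just placeholders for illustration, not real
>> methods):
>>
>> // A dedicated pool for the potentially blocking remote commands, so
>> // the JGroups (OOB) thread that delivered the command returns at once.
>> private final ExecutorService remoteCommandsExecutor =
>>       Executors.newFixedThreadPool(32);
>>
>> void handleRemoteCommand(final ReplicableCommand cmd) {
>>    if (mayBlock(cmd)) {
>>       remoteCommandsExecutor.submit(new Runnable() {
>>          public void run() {
>>             invokeAndReply(cmd);   // may wait for topology/locks here
>>          }
>>       });
>>    } else {
>>       invokeAndReply(cmd);         // cheap commands stay inline
>>    }
>> }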
>>
>> Problem description:
>>
>> With this solution, the new thread pool has to be large in order to
>> handle the remote commands without deadlocks. The problem is that all
>> the threads can be blocked while the command that would unblock them is
>> still sitting in the thread pool queue.
>>
>> Example: a bunch of commands are blocked waiting for a new topology ID
>> and the command that will increment the topology ID is in the thread
>> pool queue.
>>
>> Solution:
>>
>> Use a smart command dispatcher, i.e., keep the command in the queue
>> until we are sure that it will not wait for other commands. I've already
>> implemented some kind of executor service (ConditionalExecutorService,
>> in the ISPN-2635 and ISPN-2636 branches, Total Order stuff) that only puts
>> the Runnable (more precisely a new interface called ConditionalRunnable)
>> in the thread pool when it is ready to be processed. Creative guys, it
>> may need a better name :)
>>
>> The ConditionalRunnable has a new method (boolean isReady()) that should
>> return true only when the runnable is not expected to block.
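>>
>> A minimal sketch of the idea (the names and signatures below are only my
>> illustration, the actual code in the branches may differ, and concurrency
>> details of the scan are ignored):
>>
>> import java.util.Iterator;
>> import java.util.Queue;
>> import java.util.concurrent.ConcurrentLinkedQueue;
>> import java.util.concurrent.ExecutorService;
>> import java.util.concurrent.Executors;
>>
>> interface ConditionalRunnable extends Runnable {
>>    // must return true only when run() is not expected to block
>>    boolean isReady();
>> }
>>
>> public class ConditionalExecutorService {
>>    private final Queue<ConditionalRunnable> pending =
>>          new ConcurrentLinkedQueue<ConditionalRunnable>();
>>    private final ExecutorService threadPool = Executors.newFixedThreadPool(8);
>>
>>    // the task stays in our own queue, not in the thread pool
>>    public void execute(ConditionalRunnable task) {
>>       pending.add(task);
>>       checkPending();
>>    }
>>
>>    // called periodically and whenever relevant state changes
>>    // (topology update, lock release, ...)
>>    public void checkPending() {
>>       for (Iterator<ConditionalRunnable> it = pending.iterator(); it.hasNext(); ) {
>>          ConditionalRunnable task = it.next();
>>          if (task.isReady()) {
>>             it.remove();
>>             threadPool.submit(task);
>>          }
>>       }
>>    }
>> }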
>>
>> Example of how to apply this to ISPN-2808:
>>
>> Most of the commands wait for a particular topology ID and/or for lock
>> acquisition, so the isReady() implementation can be something like:
>>
>> boolean isReady() {
>>    if (commandTopologyId > currentTopologyId) return false;
>>    for (Object key : keys)
>>       if (!lock(key).tryLock()) return false;
>>    return true;
>> }
>
> so this plans to cover ISPN-2849 as well then?

no, I see this as a prototype version of ISPN-2849, because I'm still 
using the current locking scheme.

>
>
>> With this, I believe we can keep the number of threads low and avoid
>> thread deadlocks.
> +1.
>>
>> Now, I have two possible implementations:
>>
>> 1) put a reference to StateTransferManager and/or LockManager in the
>> commands, and invoke the methods directly (a little dirty)
>>
>> 2) add a new method in the CommandInterceptor like: boolean
>> preProcess<command>(Command, InvocationContext). Each interceptor
>> checks whether the command would block on it (returning false) or not
>> (invoking the next interceptor). For example, the StateTransferInterceptor
>> immediately returns false if the commandTopologyId is higher than the
>> currentTopologyId, and the *LockingInterceptor returns false if it
>> cannot acquire some lock.
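>>
>> Just to make 2) more concrete, something like this is what I have in
>> mind (none of these methods exist yet, the names are only illustrative):
>>
>> // In CommandInterceptor - default just delegates to the next one:
>> public boolean preProcessPrepareCommand(PrepareCommand cmd,
>>       InvocationContext ctx) {
>>    return getNext() == null || getNext().preProcessPrepareCommand(cmd, ctx);
>> }
>>
>> // In StateTransferInterceptor - detect blocking on the topology ID:
>> @Override
>> public boolean preProcessPrepareCommand(PrepareCommand cmd,
>>       InvocationContext ctx) {
>>    if (cmd.getTopologyId() > currentTopologyId()) return false;   // placeholder helper
>>    return super.preProcessPrepareCommand(cmd, ctx);
>> }
>>
>> // In the *LockingInterceptor - detect blocking on lock acquisition:
>> @Override
>> public boolean preProcessPrepareCommand(PrepareCommand cmd,
>>       InvocationContext ctx) {
>>    for (Object key : keysToLock(cmd))                     // placeholder helper
>>       if (!tryLockWithoutWaiting(key)) return false;      // placeholder helper
>>    return super.preProcessPrepareCommand(cmd, ctx);
>> }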
>>
>> Any other suggestions? If I was not clear, let me know.
>
> can't we reuse the lock-dependency graph from total order for this as well?
It involves a lot of changes and I have to think more about it (to 
support both tx and non-tx caches). I will leave ISPN-2849 open.
>
>>
>> Thanks.
>>
>> Cheers,
>> Pedro
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> Cheers,
>

