[infinispan-dev] New algorithm to handle remote commands

Pedro Ruivo pedro at infinispan.org
Thu Sep 18 08:29:36 EDT 2014



On 09/18/2014 12:03 PM, Dan Berindei wrote:
> Thanks Pedro, this looks great.
>
> However, I don't think it's ok to treat CommitCommands/Pessimistic
> PrepareCommands as RemoteLockCommands just because they may send L1
> invalidation commands. It's true that those commands will block, but
> there's no need to wait for any other command before doing the L1
> invalidation. In fact, the non-tx writes on backup owners, which you
> consider to be non-blocking, can also send L1 invalidation commands (see
> L1NonTxInterceptor.invalidateL1).

They are not treated as RemoteLockCommands. I just said that they are 
processed by the remote executor service (I need to double-check what I 
wrote in the wiki). Unfortunately, I hadn't thought about L1 in that 
scenario... :(

>
> On the other hand, one of the good things that the remote executor did
> was to allow queueing lots of commands with a higher topology id, when
> one of the nodes receives the new topology much later than the others.
> We still have to consider each TopologyAffectedCommand as potentially
> blocking and put it through the remote executor.
>
> And InvalidateL1Commands are also TopologyAffectedCommands, so there's
> still a potential for deadlock when L1 is enabled and we have maxThreads
> write commands blocked sending L1 invalidations and those L1
> invalidation commands are stuck in the remote executor's queue on
> another node. And with (very) unlucky timing the remote executor might
> not even get to create maxThreads threads before the deadlock appears. I
> wonder if we could write a custom executor that checks what the first
> task in the queue is every second or so, and creates a bunch of new
> threads if the first task in the queue hasn't changed.

I need to think a little more about it.
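
About the custom executor idea, here is a very rough sketch of what I 
understood (everything here is made up: the class name, the 1-second 
period and the grow-by-4 policy are placeholders, not an existing 
Infinispan class):

import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a plain ThreadPoolExecutor plus a watchdog that peeks at the
// head of the queue every second. If the same task is still at the head
// (no progress, probably because every thread is blocked), grow the pool
// so that queued tasks can start running.
public class StalledQueueWatchdogExecutor {

   private final ThreadPoolExecutor pool;
   private final ScheduledExecutorService watchdog =
         Executors.newSingleThreadScheduledExecutor();
   private Runnable lastHead;

   public StalledQueueWatchdogExecutor(int initialThreads) {
      this.pool = new ThreadPoolExecutor(initialThreads, initialThreads,
            60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
      watchdog.scheduleAtFixedRate(this::checkProgress, 1, 1, TimeUnit.SECONDS);
   }

   public void execute(Runnable task) {
      pool.execute(task);
   }

   private synchronized void checkProgress() {
      Runnable head = pool.getQueue().peek();
      if (head != null && head == lastHead) {
         // the head of the queue hasn't moved since the last check: assume
         // the current threads are blocked and create a bunch of new ones
         pool.setMaximumPoolSize(pool.getMaximumPoolSize() + 4);
         pool.setCorePoolSize(pool.getCorePoolSize() + 4);
      }
      lastHead = head;
   }
}

It is crude (the pool never shrinks and there is no upper bound on the 
number of threads), but it shows the direction.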

So, a single put can originate:
1 RPC to the primary owner (to lock)
X RPCs to invalidate L1, sent by the primary owner
R RPCs from the primary owner to the backup owners
Y RPCs to invalidate L1, sent by the backup owners

Is this correct?
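
(If that enumeration is right, then with numOwners = 2 the worst case for 
one put would be 1 + X + 1 + Y RPCs.)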

Any suggestions are welcome.

>
> You're right about the remote executor getting full as well, we're
> lacking any feedback mechanism to tell the sender to slow down, except
> for blocking the OOB thread. I wonder if we could tell JGroups somehow
> to discard the message from inside MessageDispatcher.handle (e.g. throw
> a DiscardMessageException), so the sender has to retransmit it and we
> don't block the OOB thread. That should allow us to set a size limit on
> the BlockingTaskAwareExecutor's blockedTasks collection as well. Bela, WDYT?

Even if we have a way to tell JGroups to resend the message, we have no 
idea whether the executor service is full or not, because we allow users 
to inject their own implementation of it.
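
If we go down that route, I guess the only option is to wrap whatever 
implementation is injected and count the pending tasks ourselves. A 
hypothetical sketch (the class name and the limit are made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;

// Sketch: track the number of in-flight tasks around the user-provided
// executor, since we cannot look inside its queue. When the limit is
// reached, throw instead of queueing, so the dispatcher could ask JGroups
// to discard the message and let the sender retransmit it later.
public class BoundedRemoteExecutor {

   private final ExecutorService delegate;
   private final Semaphore capacity;

   public BoundedRemoteExecutor(ExecutorService delegate, int maxPendingTasks) {
      this.delegate = delegate;
      this.capacity = new Semaphore(maxPendingTasks);
   }

   public void execute(Runnable task) {
      if (!capacity.tryAcquire()) {
         // back-pressure signal: the executor is "full"
         throw new RejectedExecutionException("remote executor is full");
      }
      delegate.execute(() -> {
         try {
            task.run();
         } finally {
            capacity.release();
         }
      });
   }
}

It counts blocked tasks as in-flight, so the limit would have to be quite 
generous, but at least it gives us something to report back to JGroups.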

>
> Cheers
> Dan
>
>
> On Wed, Sep 17, 2014 at 7:17 PM, Pedro Ruivo <pedro at infinispan.org> wrote:
>
>     new link:
>     https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler
>
>     On 09/17/2014 05:08 PM, Pedro Ruivo wrote:
>      > Hi,
>      >
>      > I've just written a new algorithm on the wiki to better handle
>      > the remote commands. You can find it in [1].
>      >
>      > If you have questions, suggestions, or just want to discuss some
>      > aspect, please do so in this thread. I'll update the wiki page
>      > based on this discussion.
>      >
>      > Thanks.
>      >
>      > Cheers,
>      > Pedro
>      >
>      >
>     [1] https://github.com/infinispan/infinispan/wiki/Remote-Command-Handler-(Work-In-Progress...)

