[infinispan-issues] [JBoss JIRA] (ISPN-6799) OOB thread pool fills with threads trying to send remote get responses

Dan Berindei (JIRA) issues at jboss.org
Mon Jun 27 06:47:01 EDT 2016


    [ https://issues.jboss.org/browse/ISPN-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257275#comment-13257275 ] 

Dan Berindei commented on ISPN-6799:
------------------------------------

There are 2 ways we can prevent the OOB threads from blocking:

1. Use the NO_FC flag.

I have already tried using the NO_FC flag for all sync RPCs (and implicitly their responses as well), but throughput went down a lot - so it looks like flow control is helpful.

However, I have had promising results on my machine by setting the NO_FC flag only for RPC _responses_ instead. It could be a good middle ground, but we need to test it on a real network.
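A minimal, self-contained sketch of that idea: flag only RPC *responses* so they bypass flow control, while requests stay subject to UFC/MFC. In the real transport this would be JGroups' {{Message.Flag.NO_FC}}; the bitmask model and names below are illustrative, not Infinispan's actual code.

```java
// Hypothetical sketch: requests keep flow control (which the earlier test
// showed helps throughput), responses skip it, so an OOB thread sending a
// remote get response can never block waiting for credits.
public class ResponseFlags {

    static final short NO_FC = 1 << 0; // "skip UFC/MFC flow control" bit

    static short flagsFor(boolean isResponse) {
        return isResponse ? NO_FC : 0;
    }
}
```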

2. Execute the remote gets on the remote-executor thread pool.

Here too, the first approach wasn't good enough. I have tried moving all remote gets to the remote-executor pool, and performance was much worse.

But we can make that decision dynamically, based on the state of the OOB thread pool. I have had very good results on my machine by sending the remote get commands to the remote-executor pool when the OOB pool is at least 3/4 full. This doesn't work very well when the OOB pool has a queue: either {{min_threads > 3/4 * max_threads}} and the queue is never used, or {{min_threads <= 3/4 * max_threads}} and we execute all remote gets on the OOB pool.
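A minimal sketch of that heuristic (the class and method names are hypothetical, not Infinispan's actual internals):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RemoteGetDispatch {

    // Hand the remote get off to the remote-executor pool once the OOB
    // pool is at least 3/4 full; below that, run it on the OOB thread
    // to avoid the extra context switch.
    static boolean handOffToRemoteExecutor(int activeOobThreads, int maxOobThreads) {
        return activeOobThreads * 4 >= maxOobThreads * 3;
    }

    public static void main(String[] args) {
        // Queue-less OOB pool, as the heuristic assumes: with an unbounded
        // queue, the active count never rises above min_threads, so the
        // 3/4 threshold degenerates as described above.
        ThreadPoolExecutor oobPool = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, new SynchronousQueue<>());
        boolean handOff = handOffToRemoteExecutor(
                oobPool.getActiveCount(), oobPool.getMaximumPoolSize());
        System.out.println("hand off with idle pool: " + handOff);
        oobPool.shutdown();
    }
}
```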

If this turns out to work well, we can try expanding it to other commands, in order to avoid a context switch and to improve latency.

> OOB thread pool fills with threads trying to send remote get responses
> ----------------------------------------------------------------------
>
>                 Key: ISPN-6799
>                 URL: https://issues.jboss.org/browse/ISPN-6799
>             Project: Infinispan
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 9.0.0.Alpha2, 8.2.2.Final
>            Reporter: Dan Berindei
>             Fix For: 9.0.0.Alpha3
>
>
> Note: This is a scenario that happens in the stress tests, with 4 nodes in dist mode, and 200+ threads per node doing only reads. I have not been able to reproduce it locally, even with a much lower OOB thread pool size and UFC.max_credits.
> We don't use the {{NO_FC}} flag, so threads sending both requests and responses can block in UFC/MFC. Remote gets are executed directly on the OOB thread, so when we run out of credits for one node, the OOB pool can quickly become full with threads waiting to send a remote get response to that node.
> While we can't send responses to that node, we won't send credits to it, either, as credits are only sent *after* the message has been processed by the application. That means OOB threads on all nodes will start blocking, trying to send remote get responses to us.
> This is made worse by our staggering of remote gets. As remote get responses block, the stagger timeout kicks in and we send even more remote gets, making it even harder for the system to recover.
> UFC/MFC can send a {{CREDIT_REQUEST}} message to ask for more credits. The {{REPLENISH}} messages are handled on JGroups' internal thread pool, so they are not blocked. However, a {{CREDIT_REQUEST}} can be sent at most once every {{UFC.max_block_time}} ms, so it can't be relied on to provide enough credits. With the default settings, the throughput would be {{max_credits / max_block_time == 2MB / 0.5s == 4MB/s}}, which is really small compared to regular throughput.



--
This message was sent by Atlassian JIRA
(v6.4.11#64026)


More information about the infinispan-issues mailing list