[JBoss JIRA] (JGRP-2172) Non-blocking flow control
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2172?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2172:
---------------------------
Description:
Sending a message through FlowControl (UFC, MFC) should not block if {{Message.Flag.NB_FC}} (non-blocking flow control) is set.
Instead, the message should be added to a queue (bounded if {{max_size}} > 0, else unbounded).
The queued messages are sent when credits arrive.
Non-blocking flow control can be used by both external and internal threads.
If the queue is unbounded, then it is the responsibility of the application (e.g. Infinispan) to make sure the queue doesn't grow to an untenable size.
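A minimal sketch of the intended behavior (all names here are made up for illustration; this is not the actual JGroups API): with {{NB_FC}} set, a send that lacks sufficient credits queues the message instead of blocking, and queued messages are flushed when credits arrive.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch, not JGroups code: credits-based sending where a message
// that cannot acquire enough credits is queued (non-blocking) instead of blocking.
public class NonBlockingCredits {
    private long credits;
    private final int maxQueueSize;                 // <= 0 means unbounded
    private final Queue<String> queue = new ArrayDeque<>();

    public NonBlockingCredits(long initialCredits, int maxQueueSize) {
        this.credits = initialCredits;
        this.maxQueueSize = maxQueueSize;
    }

    /** Returns true if the message was sent or queued, false if a bounded queue is full. */
    public synchronized boolean send(String msg) {
        int size = msg.length();                    // message "size" = payload length here
        if (queue.isEmpty() && credits >= size) {   // enough credits and nothing queued ahead
            credits -= size;
            deliver(msg);
            return true;
        }
        if (maxQueueSize > 0 && queue.size() >= maxQueueSize)
            return false;                           // bounded queue is full
        queue.add(msg);                             // queue instead of blocking
        return true;
    }

    /** Called when a credit replenishment arrives: drain as many queued messages as possible. */
    public synchronized void onCredits(long newCredits) {
        credits += newCredits;
        while (!queue.isEmpty() && credits >= queue.peek().length()) {
            String next = queue.poll();
            credits -= next.length();
            deliver(next);
        }
    }

    public synchronized int queued() { return queue.size(); }

    protected void deliver(String msg) { /* hand off to the transport */ }
}
```

A bounded queue ({{max_size}} > 0) rejects messages when full; an unbounded one leaves backpressure to the application, as noted above.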
was:
Sending a message through FlowControl (UFC, MFC) should not block if {{Message.Flag.NB_FC}} (non-blocking flow control) is set.
Instead, the message should be added to a queue (bounded if {{max_size}} > 0, else unbounded)
The queued messages are sent when credits arrive.
Non-blocking flow control can be used by both external and internal threads.
> Non-blocking flow control
> -------------------------
>
> Key: JGRP-2172
> URL: https://issues.jboss.org/browse/JGRP-2172
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.4
>
>
> Sending a message through FlowControl (UFC, MFC) should not block if {{Message.Flag.NB_FC}} (non-blocking flow control) is set.
> Instead, the message should be added to a queue (bounded if {{max_size}} > 0, else unbounded)
> The queued messages are sent when credits arrive.
> Non-blocking flow control can be used by both external and internal threads.
> If the queue is unbounded, then it is the responsibility of the application (e.g. Infinispan) to make sure the queue doesn't grow to an untenable size.
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 11 months
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2171:
--------------------------------
Contrary to the original idea (with {{num_flips}}), which was concerned with the A in {{C B B B B A}} getting delayed until {{max_bundle_size}} is reached or C is sent, a fixed-size remove-queue gives us an upper bound on the time until the first message of a sequence such as {{A A A A A}} is sent.
It doesn't really matter to the A at the head of the queue _what_ it has to wait for, only _how long_ it has to wait.
The introduction of the remove-queue bounds this wait time from above: the limit is either the time it takes to batch up and send N messages, or the time until {{max_bundle_size}} is reached. The size of the remove-queue determines the max latency.
> New bundler with max_bundle_size for each destination
> -----------------------------------------------------
>
> Key: JGRP-2171
> URL: https://issues.jboss.org/browse/JGRP-2171
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.4
>
>
> The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
> This negatively affects latency-sensitive applications, e.g. when we have a queue such as this: {{A B B C B B D B B}}, then the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded), or no more messages are received (and then we send the batches anyway).
> The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
> This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
> We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
> So 1 main queue and 1 queue for each destination.
> h4. Example:
> * {{num_flips}} is 2
> * A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
> * Another message for A is sent. Also queued (A's queue: {{A A}})
> * A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
> * Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
> * 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
> * No more messages are received, so the batch to C is also sent
> The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
> h4. Misc
> * Should the sending of batches be delegated to a thread pool?
> * Should the senders add their messages directly to the destination queues instead of the main queue? That would result in less contention on the main queue, but it would also require 1 thread per destination queue, which creates too many threads...
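The walkthrough above can be sketched roughly as follows (hypothetical code, not an actual JGroups bundler): each message resets its own destination's counter to {{num_flips}} and decrements the counters of all other non-empty destination queues; a counter reaching 0 triggers sending that destination's batch.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the num_flips idea from the description (names are hypothetical):
// per-destination queues plus a counter; a counter reaching 0 flushes that batch.
public class NumFlipsSketch {
    static final class DestQueue {
        final List<String> msgs = new ArrayList<>();
        int flipsLeft;
    }
    private final int numFlips;
    private final Map<String, DestQueue> queues = new LinkedHashMap<>();
    final List<String> sent = new ArrayList<>();   // records "dest:count" per sent batch

    NumFlipsSketch(int numFlips) { this.numFlips = numFlips; }

    void send(String dest, String msg) {
        DestQueue dq = queues.computeIfAbsent(dest, k -> new DestQueue());
        dq.msgs.add(msg);
        dq.flipsLeft = numFlips;                   // reset this destination's counter
        for (Map.Entry<String, DestQueue> e : queues.entrySet()) {
            if (e.getKey().equals(dest)) continue;
            DestQueue other = e.getValue();
            if (!other.msgs.isEmpty() && --other.flipsLeft <= 0)
                flush(e.getKey(), other);          // counter hit 0: send that batch
        }
    }

    void flushAll() {                              // e.g. when no more messages arrive
        queues.forEach(this::flush);
    }

    private void flush(String dest, DestQueue dq) {
        if (dq.msgs.isEmpty()) return;
        sent.add(dest + ":" + dq.msgs.size());
        dq.msgs.clear();
    }
}
```

Running the example from the description ({{num_flips}} = 2, messages A A B A C C) reproduces the batches B (1 msg), A (3 msgs) and, once traffic stops, C (2 msgs).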
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 11 months
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2171 at 6/13/17 10:18 AM:
----------------------------------------------------------
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64|108'436|
|128|107'774|
|256|107'716|
|512|107'786|
|1024|107'309|
|2048|106'994|
|4096|107'122|
|8192|106'980|
The remove-queue size doesn't seem to have much impact, but this is probably because 100 threads per node are sending GET requests/responses, and since {{numOwners=2}}, only 75% of all requests go remote.
This means we have at most ~75 requests in flight at the same time, which explains why a remove-queue size of 32 or 64 already achieves acceptable results.
was (Author: belaban):
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64|108'436|
|128|107'774|
|256|107'716|
|512|107'786|
|1024|107'309|
|2048|106'994|
|4096|107'122|
|8192|106'980|
The remove-queue size doesn't seem to have much impact, but this is probably because 100 threads per node are sending GET requests/responses, and since {{numOwners=2}}, only 75% of all requests go remote.
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2171 at 6/13/17 10:15 AM:
----------------------------------------------------------
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64|108'436|
|128|107'774|
|256|107'716|
|512|107'786|
|1024|107'309|
|2048|106'994|
|4096|107'122|
|8192|106'980|
The remove-queue size doesn't seem to have much impact, but this is probably because 100 threads per node are sending GET requests/responses, and since {{numOwners=2}}, only 75% of all requests go remote.
was (Author: belaban):
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64|108'436|
|128|107'774|
|256|107'716|
|512|107'786|
|1024||
|2048||
|4096||
|8192||
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2171 at 6/13/17 10:10 AM:
----------------------------------------------------------
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64|108'436|
|128|107'774|
|256|107'716|
|512|107'786|
|1024||
|2048||
|4096||
|8192||
was (Author: belaban):
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64||
|128||
|256||
|512||
|1024||
|2048||
|4096||
|8192||
[JBoss JIRA] (DROOLS-1515) Unable to install feature droolsjbpm-hibernate in Fuse
by Mario Fusco (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1515?page=com.atlassian.jira.plugi... ]
Mario Fusco updated DROOLS-1515:
--------------------------------
Sprint: 2017 Week 24-25
> Unable to install feature droolsjbpm-hibernate in Fuse
> ------------------------------------------------------
>
> Key: DROOLS-1515
> URL: https://issues.jboss.org/browse/DROOLS-1515
> Project: Drools
> Issue Type: Bug
> Components: integration
> Affects Versions: 7.0.0.Beta8
> Environment: JBOSS Fuse 6.2.1
> JBOSS Fuse 6.3.0
> Reporter: Lubomir Terifaj
> Assignee: Mario Fusco
> Labels: reported-by-qe
>
> Installation of droolsjbpm-hibernate fails with error:
> Error executing command: Could not start bundle wrap:mvn:org.hibernate/hibernate-entitymanager/5.1.4.Final$overwrite=merge&DynamicImport-Package=* in feature(s) droolsjbpm-hibernate-5.1.4.Final: Unresolved constraint in bundle org.hibernate.entitymanager [309]: Unable to resolve 309.0: missing requirement [309.0] osgi.wiring.package; (&(osgi.wiring.package=javax.persistence)(version>=2.1.0))
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban commented on JGRP-2171:
--------------------------------
{{RemoveQueueBundler}} has performance that's similar to {{TransferQueueBundler}}! On cluster01-08 (8 nodes), the result for TQB was 108'000 reqs/sec/node, and with the following remove queue sizes for {{RemoveQueueBundler}}, I got these numbers (IspnPerfTest):
||remove-queue size || performance ||
|16|98'659 |
|32|108'199|
|64||
|128||
|256||
|512||
|1024||
|2048||
|4096||
|8192||
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2171 at 6/13/17 9:58 AM:
---------------------------------------------------------
AlternatingBundler has much worse perf than TransferQueueBundler: ca. half of TQB (20'000 reqs/sec/node versus 40'000). The average length of a batch was 2.6 (in my prelim tests), *not* counting single messages.
Let's try {{RemoveQueueBundler}} (alternative 2) now... the queue length needs to be configurable.
was (Author: belaban):
AlternatingBundler has much worse perf than TransferQueueBundler: ca. half of TQB (20'000 reqs/sec/node versus 40'000). The average length of a batch was 2.6 (in my prelim tests), *not* counting single messages.
Let's try AlternatingQueuingBundler (alternative 2) now... the queue length needs to be configurable.
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2171 at 6/13/17 9:57 AM:
---------------------------------------------------------
h2. Alternative designs
h3. Alternating bundler
This bundler sends a batch as soon as the target destination changes, e.g. for sequence {{\[A A B C C C B A A A\]}}, batches {{\[A A\]}}, {{B}} (single message), {{\[C C C\]}}, {{B}} and {{\[A A A\]}} will be sent.
Of course, {{max_bundle_size}} is still observed; if we encounter a sequence whose accumulated size exceeds it, then a batch is sent immediately.
The advantage is that messages or message batches are sent immediately (reducing latency) and that this is a simple design (similar to the one above with {{num_flips=1}}). The disadvantage is that for sequences such as {{\[A B A B A B\]}}, we'll send 6 single messages, so this degenerates into a {{NoBundler}}.
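The alternating idea can be sketched as follows (hypothetical code, not the actual implementation): a batch is flushed whenever the destination changes or adding the message would exceed {{max_bundle_size}}.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the alternating bundler: the current batch is flushed as soon as the
// destination changes, or when adding a message would exceed maxBundleSize.
public class AlternatingBundlerSketch {
    private final int maxBundleSize;
    private final List<String> batch = new ArrayList<>();
    private String currentDest;
    private int batchBytes;
    final List<List<String>> sentBatches = new ArrayList<>();

    AlternatingBundlerSketch(int maxBundleSize) { this.maxBundleSize = maxBundleSize; }

    void send(String dest, String msg) {
        boolean destChanged = currentDest != null && !currentDest.equals(dest);
        if (destChanged || batchBytes + msg.length() > maxBundleSize)
            flush();                                  // destination flip or size limit
        currentDest = dest;
        batch.add(msg);
        batchBytes += msg.length();
    }

    void flush() {
        if (!batch.isEmpty()) {
            sentBatches.add(new ArrayList<>(batch));  // "send" the batch
            batch.clear();
            batchBytes = 0;
        }
    }
}
```

For {{\[A B A B A B\]}} this flushes on every message, which is the degenerate {{NoBundler}} case mentioned above.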
h3. Remove-queue based bundler
This bundler drains messages from the main queue (to which sender threads add their messages) into a remove-queue of fixed length. Then we iterate through the queue and add messages to lists keyed by the target destination and finally send a batch (or single message) for each destination.
In the above example, we'd send 3 batches {{\[A A A A A\]}}, {{\[B B\]}} and {{\[C C C\]}}. Contrast this to the 5 batches (or single messages) that we send with the alternating bundler above.
The size of the queue determines the max latency: a bigger queue will result in more throughput but also higher latency. A queue of 1 is more or less the {{NoBundler}}.
Unlike {{TransferQueueBundler}}, this bundler uses a {{RingBuffer}} rather than an {{ArrayBlockingQueue}}, and the size of the remove-queue is fixed. {{TransferQueueBundler}} grows its remove-queue dynamically, which leads to higher latency if the remove-queue grows too much.
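The core of the remove-queue design can be sketched like this (hypothetical code; the real bundler uses a {{RingBuffer}} and sends serialized batches): drain at most N messages from the main queue, group them by destination, then send one batch (or single message) per destination.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Sketch of the remove-queue design: drain up to removeQueueSize messages from the
// main queue, group them by destination, then send one batch per destination.
public class RemoveQueueSketch {
    static Map<String, List<String>> drainAndGroup(Queue<String[]> mainQueue, int removeQueueSize) {
        List<String[]> removeQueue = new ArrayList<>(removeQueueSize);
        for (int i = 0; i < removeQueueSize; i++) {   // bounded drain: caps the latency
            String[] msg = mainQueue.poll();          // msg = {dest, payload}
            if (msg == null) break;
            removeQueue.add(msg);
        }
        Map<String, List<String>> batches = new LinkedHashMap<>();
        for (String[] msg : removeQueue)              // group by destination
            batches.computeIfAbsent(msg[0], k -> new ArrayList<>()).add(msg[1]);
        return batches;                               // one batch (or single msg) per dest
    }
}
```

For the sequence {{\[A A B C C C B A A A\]}} this yields the 3 batches from the example above: 5 messages to A, 2 to B and 3 to C.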
was (Author: belaban):
h2. Alternative designs
h3. Alternating bundler
This bundler sends a batch as soon as the target destination changes, e.g. for sequence {{\[A A B C C C B A A A\]}}, batches {{\[A A\]}}, {{B}} (single message), {{\[C C C\]}}, {{B}} and {{\[A A A\]}} will be sent.
Of course, {{max_bundle_size}} is still observed; if we encounter a sequence whose accumulated size exceeds it, then a batch is sent immediately.
The advantage is that messages or message batches are sent immediately (reducing latency) and that this is a simple design (similar to the one above with {{num_flips=1}}). The disadvantage is that for sequences such as {{\[A B A B A B\]}}, we'll send 6 single messages, so this degenerates into a {{NoBundler}}.
h3. Queue-based bundler
This bundler drains messages from the main queue (to which sender threads add their messages) into a remove-queue of fixed length. Then we iterate through the queue and add messages to lists keyed by the target destination and finally send a batch (or single message) for each destination.
In the above example, we'd send 3 batches {{\[A A A A A\]}}, {{\[B B\]}} and {{\[C C C\]}}. Contrast this to the 5 batches (or single messages) that we send with the alternating bundler above.
The size of the queue determines the max latency: a bigger queue will result in more throughput but also higher latency. A queue of 1 is more or less the {{NoBundler}}.
[JBoss JIRA] (WFLY-8931) InfinispanSessionManager#getActiveSessions (active-sessions attribute in CLI) returns an incorrect count on a coordinator node in cluster
by Paul Ferraro (JIRA)
[ https://issues.jboss.org/browse/WFLY-8931?page=com.atlassian.jira.plugin.... ]
Paul Ferraro commented on WFLY-8931:
------------------------------------
This is a regression caused by the fix for https://issues.jboss.org/browse/WFLY-6453
{{ConsistentHashLocality}} is not meant to be cached.
> InfinispanSessionManager#getActiveSessions (active-sessions attribute in CLI) returns an incorrect count on a coordinator node in cluster
> -----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-8931
> URL: https://issues.jboss.org/browse/WFLY-8931
> Project: WildFly
> Issue Type: Bug
> Components: Clustering
> Affects Versions: 11.0.0.Alpha1
> Reporter: Paul Ferraro
> Assignee: Paul Ferraro
>
> Since EAP 7.0.1, which came with the fix for JBEAP-4646 (upstream WFLY-6453), {{InfinispanSessionManager#getActiveSessions}} (the {{active-sessions}} attribute in the CLI) was changed to return the count of active sessions local to the current node.
> However, it returns an incorrect count on the cluster coordinator node. For example, when a two-node cluster is configured, it returns the total count of active sessions. This happens regardless of whether a "dist" or a "repl" cache is used.
> ---
> The following is the related code. It appears {{ConsistentHashLocality#isLocal(key)}} always returns {{true}} on a coordinator node, which causes the incorrect active session count.
> {code:java|title=clustering/web/infinispan/src/main/java/org/wildfly/clustering/web/infinispan/session/InfinispanSessionManager.java}
> @Override
> public Set<String> getActiveSessions() {
>     // Omit remote sessions (i.e. when using DIST mode) as well as passivated sessions
>     return this.getSessions(Flag.CACHE_MODE_LOCAL, Flag.SKIP_CACHE_LOAD);
> }
> ...
> private Set<String> getSessions(Flag... flags) {
>     try (Stream<? extends Key<String>> keys = this.cache.getAdvancedCache().withFlags(flags).keySet().stream()) {
>         return keys.filter(this.filter.and(key -> this.locality.isLocal(key))).map(key -> key.getValue()).collect(Collectors.toSet());
>     }
> }
> {code}
> {code:java|title=clustering/infinispan/spi/src/main/java/org/wildfly/clustering/infinispan/spi/distribution/ConsistentHashLocality.java}
> @Override
> public boolean isLocal(Object key) {
>     if (this.localAddress == null) return true;
>     if (this.hash == null) return true;
>     return this.localAddress.equals(this.hash.locatePrimaryOwner(key));
> }
> {code}