[JBoss JIRA] (JGRP-2143) TP: use only one thread per member to pass up regular messages
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2143?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2143:
---------------------------
Description:
This applies only to _regular_ messages; OOB and internal messages are processed by passing them to the thread pool directly when they're received.
The processing of a message received from B is as follows:
* A regular message (or message batch) is assigned a thread from the thread pool and passed up to the reliability protocol, e.g. NAKACK2 or UNICAST3.
* There, it is added to the table for B.
* The thread sees if another thread is already delivering messages from B to the application. If not, it grabs as many consecutive (ordered) messages from the table as it can and delivers them to the application. Otherwise, it returns and can be assigned other tasks.
The problem here is that more than one thread may be passing up messages from a given sender B; only at the NAKACK2 or UNICAST3 level will a single thread be selected to deliver the messages to the application.
This causes higher thread pool usage than required, with all of its drawbacks, e.g. more context switching, higher contention on adding messages to the table for B, and possibly exhaustion of the thread pool.
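A minimal Java sketch of this pattern (simplified stand-ins for the per-sender table and the delivery flag, not the actual NAKACK2/UNICAST3 code):
{code}
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Consumer;

// Simplified per-sender state, e.g. for sender B.
class SenderTable {
    private final TreeMap<Long,Object> table = new TreeMap<>(); // seqno -> message
    private final AtomicBoolean delivering = new AtomicBoolean();
    private long next = 1; // next seqno to deliver, guarded by 'table'

    // Run by EVERY thread-pool thread that received a regular message from B:
    void onRegularMessage(long seqno, Object msg, Consumer<Object> deliverUp) {
        synchronized (table) {            // contended when many threads add for B
            table.put(seqno, msg);
        }
        boolean more = true;
        while (more && delivering.compareAndSet(false, true)) { // try to become the deliverer
            try {
                while (true) {            // drain consecutive messages, in sender order
                    Object m;
                    synchronized (table) {
                        m = table.remove(next);
                        if (m != null) next++;
                    }
                    if (m == null) break;
                    deliverUp.accept(m);
                }
            } finally {
                delivering.set(false);
            }
            synchronized (table) {        // re-check: a message may have arrived
                more = table.containsKey(next); // after draining, before the flag reset
            }
        }
        // If the CAS failed, another thread is delivering: return to the pool.
    }
}
{code}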
An example of where service is denied or delayed:
* We have a cluster of \{A,B,C,D\}
* A receives 10 messages from B, 4 from C and 1 from D
* The thread pool's max size is 20
* The 10 messages from B are processed; all 10 threads add their messages to the table, but only 1 delivers them to the application and the other 9 return to the pool
* 4 messages from C are added to C's table, 1 thread delivers them and 3 return
* The 1 message from D is added to D's table and the same thread is used to deliver the message up the stack to the application
So while we receive 15 messages, effectively only 3 threads are needed to deliver them to the application: as these are regular messages, they need to be delivered in _sender order_.
The 9 threads which process messages from B only add them to B's table and then return immediately. This causes increased context switching, more contention on B's table (which is synchronized), and possibly exhaustion of the thread pool. For example, if the pool's max size was only 10, then processing the first 10 messages from B would exhaust the pool, and the other messages from C and D would be processed in newly spawned threads.
SOLUTION
* (Only applicable to _regular_ messages)
* When a message (or batch) from sender P is received, we check if another thread is already passing up messages from P. If not, we pass the message up by grabbing a thread from the thread pool. This will add the message to P's table and deliver as many messages (from the table) as possible to the application.
* If there's currently a thread delivering messages from P, we simply add the message (or batch) to a queue for P and return.
* When the delivery thread returns, it checks the queue for P and delivers all queued messages, or returns if the queue is empty.
* (The queue is actually a MessageBatch, and new messages are simply appended to it. On delivery, the batch is cleared; see the sketch below)
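A hedged sketch of this scheme, assuming a plain ExecutorService as the thread pool and a simple list standing in for the MessageBatch (names like SenderQueue and passUp are illustrative, not the actual JGroups implementation):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

// Simplified per-sender state for P under the proposed scheme.
class SenderQueue {
    private final List<Object> batch = new ArrayList<>(); // the "queue": a growing batch
    private boolean running; // true while one pool thread is delivering P's messages

    void onRegularMessage(Object msg, ExecutorService pool, Consumer<List<Object>> passUp) {
        synchronized (this) {
            if (running) {        // a thread is already delivering from P:
                batch.add(msg);   // just append to P's batch and return (cheap)
                return;
            }
            running = true;       // we become the one delivery task for P
        }
        pool.execute(() -> {      // at most ONE pool thread active per sender
            List<Object> toDeliver = new ArrayList<>();
            toDeliver.add(msg);
            while (true) {
                passUp.accept(toDeliver); // add to P's table, deliver in order
                synchronized (this) {
                    if (batch.isEmpty()) {
                        running = false;  // nothing arrived meanwhile: done
                        return;
                    }
                    toDeliver = new ArrayList<>(batch); // grab what accumulated
                    batch.clear();        // "on delivery, the batch is cleared"
                }
            }
        });
    }
}
{code}
With this, the 10-messages-from-B example above collapses to one pool thread for B: the first message starts the delivery task and the other 9 are merely appended to B's batch.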
The effects of this for regular messages are:
* Fewer threads: the thread pool only has a max of <cluster-members> threads for regular messages, where <cluster-members> is the number of members in the cluster from whom we are concurrently receiving messages. E.g. for a cluster \{A,B,C,D\}, if we're receiving messages at the same time from all members, then the max size is 4.
** Of course, OOB and internal messages, plus timer tasks will add to this number.
* Less contention on the table for a given member: instead of 10 threads all adding their messages to B's table (contention on the table lock) and then CASing a boolean, only 1 thread ever adds and removes messages to/from the table. This means uncontended (= fast) lock acquisition for regular messages (of course, if we use OOB messages, then we do have contention).
* Appending to a batch is much faster than adding to a table.
* The downside is that we're actually storing messages twice: once in the batch for P and once in P's table. But these are arrays of pointers, so not much memory is required.
Example: of the 10 messages from B above, the first grabs 1 thread from the pool for delivery, and the other 9 are appended to a batch in B's queue. When the thread is done, it will grab the batch of 9 messages, add it to the table and deliver it.
This is similar to the bulkhead pattern \[1\].
\[1\] http://stackoverflow.com/questions/30391809/what-is-bulkhead-pattern-used...
> TP: use only one thread per member to pass up regular messages
> --------------------------------------------------------------
>
> Key: JGRP-2143
> URL: https://issues.jboss.org/browse/JGRP-2143
> Project: JGroups
> Issue Type: Enhancement
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0
[JBoss JIRA] (DROOLS-1387) PHREAK is slower than ReteOO with accumulate and exists
by Toshiya Kobayashi (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1387?page=com.atlassian.jira.plugi... ]
Toshiya Kobayashi updated DROOLS-1387:
--------------------------------------
Attachment: brms-perf-comarison.zip
> PHREAK is slower than ReteOO with accumulate and exists
> -------------------------------------------------------
>
> Key: DROOLS-1387
> URL: https://issues.jboss.org/browse/DROOLS-1387
> Project: Drools
> Issue Type: Enhancement
> Components: core engine
> Affects Versions: 6.5.0.Final
> Reporter: Toshiya Kobayashi
> Assignee: Mario Fusco
> Labels: support
> Attachments: brms-perf-comarison.zip
[JBoss JIRA] (DROOLS-1387) PHREAK is slower than ReteOO with accumulate and exists
by Toshiya Kobayashi (JIRA)
Toshiya Kobayashi created DROOLS-1387:
-----------------------------------------
Summary: PHREAK is slower than ReteOO with accumulate and exists
Key: DROOLS-1387
URL: https://issues.jboss.org/browse/DROOLS-1387
Project: Drools
Issue Type: Enhancement
Components: core engine
Affects Versions: 6.5.0.Final
Reporter: Toshiya Kobayashi
Assignee: Mario Fusco
PHREAK is slower than ReteOO in the following cases:
1. with accumulate
2. with exists
The attached test case brms-perf-comarison.zip contains 4 tests to compare the performance:
ReteOO_Accumulate vs Phreak_Accumulate
-> ReteOO is 30% faster than PHREAK on average
ReteOO_Exists vs Phreak_Exists
-> ReteOO is 30% faster than PHREAK on average
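The attached tests are not reproduced here; a hypothetical minimal sketch of the two rule shapes being compared might look as follows (the Item fact class, the rule bodies and the use of KieHelper are illustrative assumptions, not the benchmark code):
{code}
package perf;

import org.kie.api.KieBase;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class AccumulateExistsSketch {

    public static class Item { // illustrative fact type
        private final int price;
        public Item(int price) { this.price = price; }
        public int getPrice() { return price; }
    }

    // One rule per construct being compared: accumulate and exists.
    private static final String DRL =
        "import perf.AccumulateExistsSketch.Item\n" +
        "rule \"total price\"\n" +
        "when\n" +
        "    $total : Number() from accumulate( Item( $p : price ), sum( $p ) )\n" +
        "then\n" +
        "end\n" +
        "rule \"has expensive item\"\n" +
        "when\n" +
        "    exists Item( price > 100 )\n" +
        "then\n" +
        "end\n";

    public static void main(String[] args) {
        KieBase kbase = new KieHelper().addContent(DRL, ResourceType.DRL).build();
        KieSession ksession = kbase.newKieSession();
        try {
            ksession.insert(new Item(50));
            ksession.insert(new Item(150));
            ksession.fireAllRules(); // both rules activate on this data
        } finally {
            ksession.dispose();
        }
    }
}
{code}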
You can run the test with
{noformat}
mvn -P BRMS640GA clean test
{noformat}
Example output:
{noformat}
Running org.mk300.brms.perf.Phreak_Accumulate
2016-12-27 16:51:40,280 INFO (main) [RuleBase] ##################### RuleBase start
2016-12-27 16:51:42,322 INFO (main) [KieRepositoryImpl] KieModule was added: MemoryKieModule[releaseId=org.default:artifact:1.0.0-SNAPSHOT]
2016-12-27 16:51:42,519 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 0 tx/sec
2016-12-27 16:51:43,529 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 29,020 tx/sec
2016-12-27 16:51:44,533 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 159,216 tx/sec
2016-12-27 16:51:45,539 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 203,391 tx/sec
2016-12-27 16:51:46,539 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 202,453 tx/sec
2016-12-27 16:51:47,553 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 204,160 tx/sec
2016-12-27 16:51:48,555 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 199,914 tx/sec
2016-12-27 16:51:49,556 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 193,496 tx/sec
2016-12-27 16:51:50,557 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 200,377 tx/sec
2016-12-27 16:51:51,557 INFO (pool-1-thread-1) [PerfCounter] Phreak_Accumulate : 202,060 tx/sec
2016-12-27 16:51:52,522 INFO (main) [RuleBase] ##################### disposeAll end
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.394 sec - in org.mk300.brms.perf.Phreak_Accumulate
Running org.mk300.brms.perf.Phreak_Exists
2016-12-27 16:51:52,860 INFO (main) [RuleBase] ##################### RuleBase start
2016-12-27 16:51:54,521 INFO (main) [KieRepositoryImpl] KieModule was added: MemoryKieModule[releaseId=org.default:artifact:1.0.0-SNAPSHOT]
2016-12-27 16:51:54,716 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 0 tx/sec
2016-12-27 16:51:55,737 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 72,411 tx/sec
2016-12-27 16:51:56,737 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 257,065 tx/sec
2016-12-27 16:51:57,741 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 292,136 tx/sec
2016-12-27 16:51:58,746 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 283,862 tx/sec
2016-12-27 16:51:59,753 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 285,201 tx/sec
2016-12-27 16:52:00,753 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 290,762 tx/sec
2016-12-27 16:52:01,760 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 291,661 tx/sec
2016-12-27 16:52:02,760 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 296,011 tx/sec
2016-12-27 16:52:03,767 INFO (pool-1-thread-1) [PerfCounter] Phreak_Exists : 296,844 tx/sec
2016-12-27 16:52:04,719 INFO (main) [RuleBase] ##################### disposeAll end
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.004 sec - in org.mk300.brms.perf.Phreak_Exists
Running org.mk300.brms.perf.ReteOO_Accumulate
2016-12-27 16:52:05,068 INFO (main) [RuleBase] ##################### RuleBase start
2016-12-27 16:52:07,139 INFO (main) [KieRepositoryImpl] KieModule was added: MemoryKieModule[releaseId=org.default:artifact:1.0.0-SNAPSHOT]
2016-12-27 16:52:07,349 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 0 tx/sec
2016-12-27 16:52:08,368 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 50,996 tx/sec
2016-12-27 16:52:09,380 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 217,269 tx/sec
2016-12-27 16:52:10,391 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 252,807 tx/sec
2016-12-27 16:52:11,453 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 245,798 tx/sec
2016-12-27 16:52:12,457 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 225,747 tx/sec
2016-12-27 16:52:13,458 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 254,959 tx/sec
2016-12-27 16:52:14,487 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 249,093 tx/sec
2016-12-27 16:52:15,496 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 237,899 tx/sec
2016-12-27 16:52:16,517 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Accumulate : 237,790 tx/sec
2016-12-27 16:52:17,375 INFO (main) [RuleBase] ##################### disposeAll end
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.455 sec - in org.mk300.brms.perf.ReteOO_Accumulate
Running org.mk300.brms.perf.ReteOO_Exists
2016-12-27 16:52:17,751 INFO (main) [RuleBase] ##################### RuleBase start
2016-12-27 16:52:19,431 INFO (main) [KieRepositoryImpl] KieModule was added: MemoryKieModule[releaseId=org.default:artifact:1.0.0-SNAPSHOT]
2016-12-27 16:52:19,643 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 0 tx/sec
2016-12-27 16:52:20,656 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 99,934 tx/sec
2016-12-27 16:52:21,679 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 308,091 tx/sec
2016-12-27 16:52:22,700 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 331,749 tx/sec
2016-12-27 16:52:23,721 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 351,276 tx/sec
2016-12-27 16:52:24,721 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 351,285 tx/sec
2016-12-27 16:52:25,732 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 349,222 tx/sec
2016-12-27 16:52:26,746 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 339,559 tx/sec
2016-12-27 16:52:27,754 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 328,000 tx/sec
2016-12-27 16:52:28,756 INFO (pool-1-thread-1) [PerfCounter] ReteOO_Exists : 328,937 tx/sec
2016-12-27 16:52:29,647 INFO (main) [RuleBase] ##################### disposeAll end
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.056 sec - in org.mk300.brms.perf.ReteOO_Exists
{noformat}
NOTE: These tests run in different JVMs (reuseForks = false in pom.xml), so the order of test execution doesn't affect the performance.
[JBoss JIRA] (DROOLS-1386) NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
by Arkady Syamtomov (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1386?page=com.atlassian.jira.plugi... ]
Arkady Syamtomov updated DROOLS-1386:
-------------------------------------
Affects Version/s: 7.0.0.Beta4
> NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
> --------------------------------------------------------
>
> Key: DROOLS-1386
> URL: https://issues.jboss.org/browse/DROOLS-1386
> Project: Drools
> Issue Type: Bug
> Affects Versions: 6.5.0.Final, 7.0.0.Beta4
> Reporter: Arkady Syamtomov
> Assignee: Edson Tirelli
> Priority: Critical
[JBoss JIRA] (DROOLS-1386) NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
by Arkady Syamtomov (JIRA)
Arkady Syamtomov created DROOLS-1386:
----------------------------------------
Summary: NPE in org.drools.core.common.TupleSetsImpl.setNextTuple
Key: DROOLS-1386
URL: https://issues.jboss.org/browse/DROOLS-1386
Project: Drools
Issue Type: Bug
Affects Versions: 6.5.0.Final
Reporter: Arkady Syamtomov
Assignee: Edson Tirelli
Priority: Critical
Our integration tests, which ran perfectly with Drools 6.3.0.Final, now fail with the following exception during rule evaluation:
java.lang.NullPointerException: null
at org.drools.core.common.TupleSetsImpl.setNextTuple(TupleSetsImpl.java:349) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.common.TupleSetsImpl.removeUpdate(TupleSetsImpl.java:205) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.common.TupleSetsImpl.addDelete(TupleSetsImpl.java:110) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.reteoo.QueryElementNode$UnificationNodeViewChangedEventListener.rowRemoved(QueryElementNode.java:444) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.PhreakQueryTerminalNode.doLeftDeletes(PhreakQueryTerminalNode.java:154) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.PhreakQueryTerminalNode.doNode(PhreakQueryTerminalNode.java:46) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleNetworkEvaluator.innerEval(RuleNetworkEvaluator.java:282) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleNetworkEvaluator.evalStackEntry(RuleNetworkEvaluator.java:198) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleNetworkEvaluator.outerEval(RuleNetworkEvaluator.java:141) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleNetworkEvaluator.evaluateNetwork(RuleNetworkEvaluator.java:94) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleExecutor.reEvaluateNetwork(RuleExecutor.java:194) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.phreak.RuleExecutor.evaluateNetworkAndFire(RuleExecutor.java:73) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.common.DefaultAgenda.fireNextItem(DefaultAgenda.java:970) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.common.DefaultAgenda.fireLoop(DefaultAgenda.java:1312) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.common.DefaultAgenda.fireAllRules(DefaultAgenda.java:1251) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.impl.StatefulKnowledgeSessionImpl.internalFireAllRules(StatefulKnowledgeSessionImpl.java:1364) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1355) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.impl.StatefulKnowledgeSessionImpl.fireAllRules(StatefulKnowledgeSessionImpl.java:1346) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:109) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.command.runtime.rule.FireAllRulesCommand.execute(FireAllRulesCommand.java:36) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:137) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.command.runtime.BatchExecutionCommandImpl.execute(BatchExecutionCommandImpl.java:51) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
at org.drools.core.impl.StatelessKnowledgeSessionImpl.execute(StatelessKnowledgeSessionImpl.java:254) ~[drools-core-6.5.0.Final-redhat-2.jar:6.5.0.Final-redhat-2]
[JBoss JIRA] (WFLY-7838) EJBClient user transaction is not propagated properly to the receiver
by Mate Varga (JIRA)
[ https://issues.jboss.org/browse/WFLY-7838?page=com.atlassian.jira.plugin.... ]
Mate Varga closed WFLY-7838.
----------------------------
Resolution: Won't Do
I have misunderstood how single-VM transaction management works.
> EJBClient user transaction is not propagated properly to the receiver
> ---------------------------------------------------------------------
>
> Key: WFLY-7838
> URL: https://issues.jboss.org/browse/WFLY-7838
> Project: WildFly
> Issue Type: Bug
> Components: EJB
> Affects Versions: 10.1.0.Final
> Environment: not relevant
> Reporter: Mate Varga
> Attachments: wf-txn-fix.patch
>
>
> Setup:
> - WF 10.1.0.Final
> - two deployments, one EAR and one WAR
> - EAR exposes EJB methods (SFSBs and SLSBs)
> - WAR uses wildfly-ejb-client-bom 10.1.0 to call remote EJBs
> Problem:
> The client uses bean-managed transactions. The problem is that transactions are not propagated properly to the EJB side; therefore, instead of using the existing BMT, the container will use CMT. The flow in detail:
> - LocalEjbReceiver#processInvocation receives an EJBClientInvocationContext
> - EJBClientInvocationContext contains contextData, which was populated by the client. contextData correctly contains the appropriate UserTransactionId
> - processInvocation extracts the transaction Id and puts it into the interceptorContext's context data (NOT into privateData)
> - later in the interceptor chain, control reaches EJBRemoteTransactionPropagatingInterceptor, which is responsible for checking whether there is a user transaction present.
> - it tries to fetch the transaction ID from the interceptorContext's privateData (NOT from contextData)
> - it does not find the userTransaction there
> It looks to me that either EJBRemoteTransactionPropagatingInterceptor should look for the userTransaction in contextData, or LocalEjbReceiver should put the userTransaction ID into privateData.
> I've attached a patch that fixes the problem for me.
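A self-contained illustration of the mismatch described in the report (simplified types, not the actual WildFly classes or the attached patch; the real code uses InterceptorContext's contextData map and typed private data):
{code}
import java.util.HashMap;
import java.util.Map;

public class ContextDataVsPrivateData {
    static final class TxId {} // illustrative stand-in for the user transaction id

    // Two distinct storage areas, mirroring the interceptor context:
    static final Map<String, Object> contextData = new HashMap<>();   // public map
    static final Map<Class<?>, Object> privateData = new HashMap<>(); // typed private data

    public static void main(String[] args) {
        TxId tx = new TxId();

        // The receiver stores the transaction id in contextData...
        contextData.put("UserTransactionID", tx);

        // ...but the transaction-propagating interceptor reads privateData:
        Object found = privateData.get(TxId.class);
        System.out.println(found); // null -> existing BMT not seen, CMT is used

        // Either suggested fix aligns the two sides, e.g. store as private data:
        privateData.put(TxId.class, tx);
        System.out.println(privateData.get(TxId.class)); // now found
    }
}
{code}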
[JBoss JIRA] (DROOLS-1385) Access the rule from outsidefolder
by Mylammal AV (JIRA)
[ https://issues.jboss.org/browse/DROOLS-1385?page=com.atlassian.jira.plugi... ]
Mylammal AV commented on DROOLS-1385:
-------------------------------------
Friends, please comment your ideas.
> Access the rule from outsidefolder
> ----------------------------------
>
> Key: DROOLS-1385
> URL: https://issues.jboss.org/browse/DROOLS-1385
> Project: Drools
> Issue Type: Feature Request
> Reporter: Mylammal AV
> Assignee: Edson Tirelli
> Priority: Blocker
[JBoss JIRA] (DROOLS-1385) Access the rule from outsidefolder
by Mylammal AV (JIRA)
Mylammal AV created DROOLS-1385:
-----------------------------------
Summary: Access the rule from outsidefolder
Key: DROOLS-1385
URL: https://issues.jboss.org/browse/DROOLS-1385
Project: Drools
Issue Type: Feature Request
Reporter: Mylammal AV
Assignee: Edson Tirelli
Priority: Blocker
I am using Maven, and we invoke rules using Drools. If rules.drl is present inside the project folder, it is automatically found and the expected value is returned. But my requirement is that I need to access the rule from outside the class folder. I thought I could give the outside folder path in kmodule.xml instead of giving package="*", but it throws "No files found for KieBase CrossPlan1, searching folder inside the project". Is it possible to do this?
[JBoss JIRA] (WFLY-7840) elytron: authentication-context validation errors
by Claudio Miranda (JIRA)
Claudio Miranda created WFLY-7840:
-------------------------------------
Summary: elytron: authentication-context validation errors
Key: WFLY-7840
URL: https://issues.jboss.org/browse/WFLY-7840
Project: WildFly
Issue Type: Bug
Reporter: Claudio Miranda
Assignee: Jason Greene
The elytron resource authentication-context has the attribute match-rules marked as required=false and nillable=true, but adding an authentication-context with no match-rules attribute fails:
{code}
/profile=default/subsystem=elytron/authentication-context=test123:add
{
"outcome" => "failed",
"failure-description" => {"domain-failure-description" => "WFLYCTL0155: 'match-rules' may not be null"},
"rolled-back" => true
}
{code}
Resource description snippet:
{code}
"match-rules" => {
"type" => LIST,
"description" => "The match-rules for this authentication context.",
"expressions-allowed" => false,
"required" => false,
"nillable" => true,
{code}
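By contrast, a hypothetical add that supplies the attribute explicitly (match-host is an assumed match-rule field, not confirmed by the snippet above) would presumably pass validation:
{code}
/profile=default/subsystem=elytron/authentication-context=test123:add(match-rules=[{match-host=localhost}])
{code}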
[JBoss JIRA] (WFLY-7823) AMQP remote client failed to connect
by Stuart Douglas (JIRA)
[ https://issues.jboss.org/browse/WFLY-7823?page=com.atlassian.jira.plugin.... ]
Stuart Douglas reassigned WFLY-7823:
------------------------------------
Component/s: JMS
(was: Application Client)
Assignee: Jeff Mesnil (was: Stuart Douglas)
> AMQP remote client failed to connect
> ------------------------------------
>
> Key: WFLY-7823
> URL: https://issues.jboss.org/browse/WFLY-7823
> Project: WildFly
> Issue Type: Bug
> Components: JMS
> Affects Versions: 10.1.0.Final
> Environment: OS: Windows10 x64,
> Java: 1.8.0_102
> WildFly 10.1 Final (the same also on 11 night build) - standalone
> client: AmqpNetLite, v.1.2.2.0, Proton 0.14
> Reporter: Oleg Kozakevych
> Assignee: Jeff Mesnil
> Labels: amqp, artemis, wildfly
> Attachments: server.log, server_debug.log, standalone.xml
>
>
> AMQP remote client cannot connect to the Artemis broker while it is embedded in WildFly.
> When the client connects, it doesn't receive anything from the server. I turned on frame logging in AmqpNetLite and it shows the following:
> {noformat}
> [03:58:25.015] SEND AMQP 3 1 0 0
> [03:58:25.062] SEND sasl-init(mechanism:PLAIN,initial-response:006775657374006775657374,hostname:127.0.0.1)
> {noformat}
> The server just does nothing: no traces, no response to the client.
> I made the changes suggested by Justin in the corresponding thread (https://developer.jboss.org/thread/269424); see the module.xml sketch after this quote:
> {noformat}
> The Artemis AMQP protocol implementation module (at <WFLY_HOME>/modules/system/layers/base/org/apache/activemq/artemis/protocol/amqp/main) needs a dependency on Netty in its module.xml (e.g. <module name="io.netty"/>).
> Artemis requires Proton-J 0.10 and Wildfly ships with 0.8 so you can copy proton-j-0.10.jar and proton-jms-0.10.jar from Artemis' /lib directory to <WFLY_HOME>/modules/system/layers/base/org/apache/qpid/proton/main and update the module.xml accordingly.
> {noformat}
> Then I turned on debug traces and saw the following exception:
> {noformat}
> 13:01:00,713 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) java.nio.ReadOnlyBufferException
> 13:01:00,716 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at java.nio.ByteBuffer.array(ByteBuffer.java:996)
> 13:01:00,720 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.buffer.UnsafeByteBufUtil.setBytes(UnsafeByteBufUtil.java:368)
> 13:01:00,730 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:205)
> 13:01:00,734 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:877)
> 13:01:00,745 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.handler.impl.ProtonHandlerImpl.outputBuffer(ProtonHandlerImpl.java:226)
> 13:01:00,749 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.context.AbstractConnectionContext.flushBytes(AbstractConnectionContext.java:145)
> 13:01:00,760 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.context.AbstractConnectionContext$LocalListener.onTransport(AbstractConnectionContext.java:160)
> 13:01:00,766 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.handler.impl.ProtonHandlerImpl.dispatch(ProtonHandlerImpl.java:349)
> 13:01:00,770 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.handler.impl.ProtonHandlerImpl.flush(ProtonHandlerImpl.java:257)
> 13:01:00,783 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.handler.impl.ProtonHandlerImpl.inputBuffer(ProtonHandlerImpl.java:158)
> 13:01:00,789 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.proton.plug.context.AbstractConnectionContext.inputBuffer(AbstractConnectionContext.java:81)
> 13:01:00,800 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.apache.activemq.artemis.core.protocol.proton.ActiveMQProtonRemotingConnection.bufferReceived(ActiveMQProtonRemotingConnection.java:127)
> 13:01:00,807 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.apache.activemq.artemis.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:619)
> 13:01:00,814 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68)
> 13:01:00,820 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
> 13:01:00,832 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
> 13:01:00,839 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:216)
> 13:01:00,849 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:527)
> 13:01:00,855 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved(DefaultChannelPipeline.java:521)
> 13:01:00,866 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.remove0(DefaultChannelPipeline.java:351)
> 13:01:00,871 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:322)
> 13:01:00,882 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:299)
> 13:01:00,887 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.apache.activemq.artemis.core.protocol.ProtocolHandler$ProtocolDecoder.decode(ProtocolHandler.java:175)
> 13:01:00,902 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:360)
> 13:01:00,913 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
> 13:01:00,920 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at org.apache.activemq.artemis.core.protocol.ProtocolHandler$ProtocolDecoder.channelRead(ProtocolHandler.java:118)
> 13:01:00,931 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
> 13:01:00,937 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
> 13:01:00,949 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
> 13:01:00,954 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> 13:01:00,965 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> 13:01:00,969 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> 13:01:00,981 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> 13:01:00,986 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> 13:01:00,998 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
> 13:01:01,003 ERROR [stderr] (Thread-1 (activemq-netty-threads-1534405955)) at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Configuration and server logs are attached.
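For illustration, a hedged sketch of the first suggested change (the module name is inferred from the path quoted above; the schema version, resources section and the module's existing dependencies may differ):
{code}
<!-- <WFLY_HOME>/modules/system/layers/base/org/apache/activemq/artemis/protocol/amqp/main/module.xml -->
<module xmlns="urn:jboss:module:1.3" name="org.apache.activemq.artemis.protocol.amqp">
    <!-- existing <resources> and other dependencies omitted -->
    <dependencies>
        <module name="io.netty"/> <!-- the dependency suggested above -->
    </dependencies>
</module>
{code}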