[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2171:
---------------------------
Description:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
So 1 main queue and 1 queue for each destination.
h4. Example:
* {{num_flips}} is 2
* A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
* Another message for A is sent. Also queued (A's queue: {{A A}})
* A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
* Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
* 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
* No more messages are received, so the batch to C is also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
h4. Misc
* Should the sending of batches be delegated to a thread pool?
* Should the senders add their messages directly to the destination queues instead of the main queue? That would result in less contention on the main queue, but it would also require 1 thread per destination queue, which creates too many threads...
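The per-destination counting described above can be sketched roughly as follows. This is a minimal, illustrative sketch only: {{PerDestinationBundler}}, its methods, and the use of {{String}} destinations and messages are hypothetical names for illustration, not the actual JGroups API, and the real bundler would also have to honor {{max_bundle_size}} and run off the main queue's runner thread.

```java
import java.util.*;

// Hypothetical sketch of the per-destination bundler idea (not JGroups code).
// Each destination has its own queue and its own num_flips counter; a message
// to any *other* destination decrements a counter, and a counter reaching 0
// flushes that destination's batch.
class PerDestinationBundler {
    static final int NUM_FLIPS = 2;  // initial counter value from the example

    final Map<String, List<String>> queues = new LinkedHashMap<>();
    final Map<String, Integer> flips = new HashMap<>();
    final List<String> sentBatches = new ArrayList<>();  // records "dest:count"

    void send(String dest, String msg) {
        queues.computeIfAbsent(dest, d -> new ArrayList<>()).add(msg);
        flips.put(dest, NUM_FLIPS);  // a message to dest resets dest's counter
        for (String other : new ArrayList<>(queues.keySet())) {
            if (other.equals(dest))
                continue;
            int f = flips.merge(other, -1, Integer::sum);  // decrement
            if (f <= 0)
                flush(other);  // counter exhausted -> send other's batch
        }
    }

    void flush(String dest) {
        List<String> batch = queues.remove(dest);
        flips.remove(dest);
        if (batch != null && !batch.isEmpty())
            sentBatches.add(dest + ":" + batch.size());
    }

    void flushAll() {  // called when no more messages arrive
        for (String dest : new ArrayList<>(queues.keySet()))
            flush(dest);
    }

    public static void main(String[] args) {
        PerDestinationBundler b = new PerDestinationBundler();
        // Replay the example above: A A B A C C, then idle.
        b.send("A", "m1"); b.send("A", "m2"); b.send("B", "m3");
        b.send("A", "m4"); b.send("C", "m5"); b.send("C", "m6");
        b.flushAll();
        System.out.println(b.sentBatches);  // B:1 and A:3 flushed by flips, C:2 by idle
    }
}
```

Replaying the example sequence shows B's batch (1 msg) and A's batch (3 msgs) being flushed by counter exhaustion, and C's batch (2 msgs) flushed only when no more messages arrive, matching the walkthrough above.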
was:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications, e.g. when we have a queue such as this: {{A B B C B B D B B}}, then the message for A has to wait until either the queue is full ({{max_bundle_size exceeded}}), or no more messages are received (and then we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
So 1 main queue and 1 queue for each destination.
Example:
* {{num_flips}} is 2
* A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
* Another message for A is sent. Also queued (A's queue: {{A A}})
* A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
* Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
* 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
* No more messages are received, so the batch to C is also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
> New bundler with max_bundle_size for each destination
> -----------------------------------------------------
>
> Key: JGRP-2171
> URL: https://issues.jboss.org/browse/JGRP-2171
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.4
>
>
> The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
> This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
> The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
> This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
> We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
> So 1 main queue and 1 queue for each destination.
> h4. Example:
> * {{num_flips}} is 2
> * A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
> * Another message for A is sent. Also queued (A's queue: {{A A}})
> * A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
> * Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
> * 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
> * No more messages are received, so the batch to C is also sent
> The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
> h4. Misc
> * Should the sending of batches be delegated to a thread pool?
> * Should the senders add their messages directly to the destination queues instead of the main queue? That would result in less contention on the main queue, but it would also require 1 thread per destination queue, which creates too many threads...
--
This message was sent by Atlassian JIRA
(v7.2.3#72005)
8 years, 11 months
[JBoss JIRA] (ELY-1187) Mechanism names in 'or', 'and', 'eq' or 'if' predicates are not parsed correctly in mechanism selector in Elytron client
by Ondrej Lukas (JIRA)
[ https://issues.jboss.org/browse/ELY-1187?page=com.atlassian.jira.plugin.s... ]
Ondrej Lukas updated ELY-1187:
------------------------------
Affects Version/s: 1.1.0.Beta42
> Mechanism names in 'or', 'and', 'eq' or 'if' predicates are not parsed correctly in mechanism selector in Elytron client
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: ELY-1187
> URL: https://issues.jboss.org/browse/ELY-1187
> Project: WildFly Elytron
> Issue Type: Bug
> Affects Versions: 1.1.0.Beta42
> Reporter: Ondrej Lukas
> Assignee: Darran Lofthouse
> Priority: Critical
>
> When the Elytron client configuration file includes a sasl-mechanism-selector whose string contains an 'or', 'and', 'eq' or 'if' predicate, its values are not parsed correctly. It seems the mechanism name is parsed as 'name of mechanism' plus one extra character.
> For example, the following element for the selector in the Elytron client configuration file:
> {code}
> ...
> <sasl-mechanism-selector selector="(DIGEST-MD5||JBOSS-LOCAL-USER)"/>
> ...
> {code}
> is parsed as the mechanisms {{DIGEST-MD5|}} and {{JBOSS-LOCAL-USER)}}.
> The same behavior also occurs when the {{&&}}, {{==}} or {{x ? y : z}} predicate is used.
[JBoss JIRA] (ELY-1187) Mechanism names in 'or', 'and', 'eq' or 'if' predicates are not parsed correctly in mechanism selector in Elytron client
by Ondrej Lukas (JIRA)
Ondrej Lukas created ELY-1187:
---------------------------------
Summary: Mechanism names in 'or', 'and', 'eq' or 'if' predicates are not parsed correctly in mechanism selector in Elytron client
Key: ELY-1187
URL: https://issues.jboss.org/browse/ELY-1187
Project: WildFly Elytron
Issue Type: Bug
Reporter: Ondrej Lukas
Assignee: Darran Lofthouse
Priority: Critical
When the Elytron client configuration file includes a sasl-mechanism-selector whose string contains an 'or', 'and', 'eq' or 'if' predicate, its values are not parsed correctly. It seems the mechanism name is parsed as 'name of mechanism' plus one extra character.
For example, the following element for the selector in the Elytron client configuration file:
{code}
...
<sasl-mechanism-selector selector="(DIGEST-MD5||JBOSS-LOCAL-USER)"/>
...
{code}
is parsed as the mechanisms {{DIGEST-MD5|}} and {{JBOSS-LOCAL-USER)}}.
The same behavior also occurs when the {{&&}}, {{==}} or {{x ? y : z}} predicate is used.
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2171:
---------------------------
Description:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
So 1 main queue and 1 queue for each destination.
Example:
* {{num_flips}} is 2
* A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
* Another message for A is sent. Also queued (A's queue: {{A A}})
* A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
* Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
* 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
* No more messages are received, so the batch to C is also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
was:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications, e.g. when we have a queue such as this: {{A B B C B B D B B}}, then the message for A has to wait until either the queue is full ({{max_bundle_size exceeded}}), or no more messages are received (and then we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
So 1 main queue and 1 queue for each destination.
Example:
* {{num_flips}} is 2
* A message for A is sent; it is queued
* Another message for A is sent. Also queued (queue: {{A A}})
* A message to B is sent: {{num_flips}} is now 1, the queue is {{A A B}}
* Another message to A is sent. This resets {{num_flips}} to 2
* 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
* No more messages are received, so the batch to C is also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
> New bundler with max_bundle_size for each destination
> -----------------------------------------------------
>
> Key: JGRP-2171
> URL: https://issues.jboss.org/browse/JGRP-2171
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.4
>
>
> The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
> This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
> The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
> This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
> We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
> So 1 main queue and 1 queue for each destination.
> Example:
> * {{num_flips}} is 2
> * A message for A is sent, added to the main queue and removed by the runner. It is queued in A's queue
> * Another message for A is sent. Also queued (A's queue: {{A A}})
> * A message to B is sent: A's {{num_flips}} is now 1. A's queue is {{A A}}, B's queue is {{B}}
> * Another message to A is sent. This resets A's {{num_flips}} to 2, B's {{num_flips}} is now 1
> * 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
> * No more messages are received, so the batch to C is also sent
> The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
[JBoss JIRA] (WFLY-8823) Graceful shutdown doesn't work with JTS
by Flavia Rainone (JIRA)
Flavia Rainone created WFLY-8823:
------------------------------------
Summary: Graceful shutdown doesn't work with JTS
Key: WFLY-8823
URL: https://issues.jboss.org/browse/WFLY-8823
Project: WildFly
Issue Type: Bug
Components: EJB
Reporter: Flavia Rainone
Assignee: Flavia Rainone
In a scenario where:
- one thread commands the server to shut down with a 30s timeout and then sleeps for 20s (to keep the server alive), and
- another thread tries to call the server (which is in the SUSPENDING phase),
the call to the server should be refused and an exception thrown.
This works as expected with JTA, but with JTS the call is successfully processed by the server and no exception is thrown.
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2171?page=com.atlassian.jira.plugin.... ]
Bela Ban updated JGRP-2171:
---------------------------
Description:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
So 1 main queue and 1 queue for each destination.
Example:
* {{num_flips}} is 2
* A message for A is sent; it is queued
* Another message for A is sent. Also queued (queue: {{A A}})
* A message to B is sent: {{num_flips}} is now 1, the queue is {{A A B}}
* Another message to A is sent. This resets {{num_flips}} to 2
* 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
* No more messages are received, so the batch to C is also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
was:
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications, e.g. when we have a queue such as this: {{A B B C B B D B B}}, then the message for A has to wait until either the queue is full ({{max_bundle_size exceeded}}), or no more messages are received (and then we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
Example:
* {{num_flips}} is 2
* A message for A is sent; it is queued
* Another message for A is sent. Also queued (queue: {{A A}})
* A message to B is sent: {{num_flips}} is now 1, the queue is {{A A B}}
* Another message to A is sent. This resets {{num_flips}} to 2
* 2 messages to C are sent. This causes {{num_flips}} for A to be 0, so the batch to A (with 3 msgs) is sent
* No more messages are received, so the batches to B and C are also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*.
> New bundler with max_bundle_size for each destination
> -----------------------------------------------------
>
> Key: JGRP-2171
> URL: https://issues.jboss.org/browse/JGRP-2171
> Project: JGroups
> Issue Type: Feature Request
> Reporter: Bela Ban
> Assignee: Bela Ban
> Fix For: 4.0.4
>
>
> The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
> This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
> The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
> This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
> We have a main queue, into which the senders write, and a runner thread (same as {{run()}} in TransferQueueBundler), which continually removes messages from the main queue and inserts them into queues for each destination.
> So 1 main queue and 1 queue for each destination.
> Example:
> * {{num_flips}} is 2
> * A message for A is sent; it is queued
> * Another message for A is sent. Also queued (queue: {{A A}})
> * A message to B is sent: {{num_flips}} is now 1, the queue is {{A A B}}
> * Another message to A is sent. This resets {{num_flips}} to 2
> * 2 messages to C are sent. This causes {{num_flips}} for A and B to be 0, so the batches to A (with 3 msgs) and B (1 msg) are also sent
> * No more messages are received, so the batch to C is also sent
> The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*. It is maintained for each destination separately (probably in the queue for that destination).
[JBoss JIRA] (JGRP-2171) New bundler with max_bundle_size for each destination
by Bela Ban (JIRA)
Bela Ban created JGRP-2171:
------------------------------
Summary: New bundler with max_bundle_size for each destination
Key: JGRP-2171
URL: https://issues.jboss.org/browse/JGRP-2171
Project: JGroups
Issue Type: Feature Request
Reporter: Bela Ban
Assignee: Bela Ban
Fix For: 4.0.4
The current bundlers queue all messages and when the total size of all messages for all destinations would exceed {{max_bundle_size}}, message batches for each destination are sent.
This negatively affects latency-sensitive applications: e.g. with a queue such as {{A B B C B B D B B}}, the message for A has to wait until either the queue is full ({{max_bundle_size}} exceeded) or no more messages are received (at which point we send the batches anyway).
The goal is to write a new bundler which keeps a count for _each destination_ and sends batches to different destinations sooner. Also introduce a counter {{num_flips}} (find a better name!), which determines when a message batch is to be sent.
This counter is decremented when a message to be sent has a destination that's different from the previous destination. When the counter is 0, we send the batch to the previous destination(s).
Example:
* {{num_flips}} is 2
* A message for A is sent; it is queued
* Another message for A is sent. Also queued (queue: {{A A}})
* A message to B is sent: {{num_flips}} is now 1, the queue is {{A A B}}
* Another message to A is sent. This resets {{num_flips}} to 2
* 2 messages to C are sent. This causes {{num_flips}} for A to be 0, so the batch to A (with 3 msgs) is sent
* No more messages are received, so the batches to B and C are also sent
The value of {{num_flips}} should be computed as the rolling (weighted) average of the number of *adjacent messages to the same destination*.
[JBoss JIRA] (JGRP-2162) Failed to send broadcast when opening the connection
by Bela Ban (JIRA)
[ https://issues.jboss.org/browse/JGRP-2162?page=com.atlassian.jira.plugin.... ]
Bela Ban edited comment on JGRP-2162 at 5/23/17 3:37 AM:
---------------------------------------------------------
I did change the description of {{initial_hosts}}. This will show up in the schema and the documentation now.
was (Author: belaban):
I did change the description of {{initial_hosts}}. This will show up in the schema and gthe documentation now.
> Failed to send broadcast when opening the connection
> ----------------------------------------------------
>
> Key: JGRP-2162
> URL: https://issues.jboss.org/browse/JGRP-2162
> Project: JGroups
> Issue Type: Bug
> Reporter: Radim Vansa
> Assignee: Bela Ban
> Fix For: 4.0.4
>
> Attachments: TcpNio2McastTest.java, infinispan_2.log.gz
>
>
> IRC discussion:
> {quote}
> bela_: Hi Bela, I have a weird failure in one test that seem to be rooted in JGroups. TCP_NIO2 is in charge, and there's a broadcast message to all nodes, but it seems it's not received on the other side.
> <bela_> rvansa: reproducible?
> <rvansa> bela_: it happens when the connection to a node is just being opened: I have added some trace logs and just a moment before writing to the NioConnection.send_buf it was in state "connection pending"
> <rvansa> bela_: sort of, after tens of runs of that test (on my machine) - and I've seen it first time in CI, so it could be
> <bela_> rvansa: NioConnection buffers writes up to a certain extent, then discards anything over the buffer limit
> <bela_> rvansa: max_send_buffers (default: 10). But retransmission should fix this, unless you don’t wait long enough
> <rvansa> bela_: I don't think it should go over the limit
> <rvansa> bela_: the test is not doing anything else, just sending CommitCommand (that should be couple hundred bytes at most) and then waiting
> <rvansa> bela_: according to the traces I've added, Buffers.write returned false when writing the local address, and then true when writing the actual message
> {quote}
> I have been trying to write a reproducer, and found that it's related to the fact that the failing test uses a custom (fake) discovery protocol that doesn't open the connection during startup. In my ~reproducer I had to modify tcp-nio.xml to use TCPPING with only the first node in the hosts list (localhost[7800]):
> {code:xml}
> <TCPPING async_discovery="true" initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800]}" port_range="0"/>
> {code}
> As a result, the physical connection is not opened by discovery. However, the reproducer suffers from an (always reproducible) flaw: it does not send the message to the third node at all (and the test therefore fails).
> Note that increasing the timeout in request options does not help.
[JBoss JIRA] (WFLY-8821) Operation removing http-connector requires full server reload but does not change the server state accordingly
by Michal Jurc (JIRA)
[ https://issues.jboss.org/browse/WFLY-8821?page=com.atlassian.jira.plugin.... ]
Michal Jurc moved JBEAP-11075 to WFLY-8821:
-------------------------------------------
Project: WildFly (was: JBoss Enterprise Application Platform)
Key: WFLY-8821 (was: JBEAP-11075)
Workflow: GIT Pull Request workflow (was: CDW with loose statuses v1)
Component/s: Remoting
(was: Remoting)
Affects Version/s: (was: 7.1.0.DR18)
> Operation removing http-connector requires full server reload but does not change the server state accordingly
> --------------------------------------------------------------------------------------------------------------
>
> Key: WFLY-8821
> URL: https://issues.jboss.org/browse/WFLY-8821
> Project: WildFly
> Issue Type: Bug
> Components: Remoting
> Reporter: Michal Jurc
> Assignee: David Lloyd
>
> Removing an {{http-connector}} with {{allow-resource-service-restart}} set to {{true}} will yield the following result:
> {code}
> {
>     "operation" => "remove",
>     "address" => [
>         ("subsystem" => "remoting"),
>         ("http-connector" => "ejb3-tests")
>     ],
>     "operation-headers" => {"allow-resource-service-restart" => true}
> }
> {"outcome" => "success"}
> {code}
> However, the removal of the {{http-connector}} only takes effect after a full server reload.