Adding -dev list as we're discussing a new feature.
On 06/25/2018 05:20 PM, Galder Zamarreno wrote:
The question is more about the potential problem of filling up the
event queue:
First, which thread should be the one to queue up the event? The Netty
thread or a thread-pool thread?
Second, if the queue is full, the thread hangs waiting for it to become
non-full. Should that be the case? If it's the Netty thread, that's
probably quite bad. If it's a remote-thread-pool thread, it's maybe not
as bad, but still bad.
Agreed, blocking is always bad. I don't believe we should propagate
backpressure from slow clients (the ones receiving events) to fast
clients doing the updates. The only thing we should do about slow
clients is report that we are having trouble with them (monitoring,
logs...).
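To make that concrete, here is a minimal sketch (invented names, not the
actual server code) of the difference between blocking the writer and
just reporting the slow client:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch only -- names are invented, this is not the actual server code.
class ClientEventQueue {
    private final BlockingQueue<Object> queue = new ArrayBlockingQueue<>(1024);

    /** Called from the thread that performed the write (Netty or remote pool). */
    boolean enqueue(Object event) {
        // queue.put(event) would park the writing thread until the slow client
        // drains the queue -- exactly the backpressure we want to avoid.
        // offer() fails fast instead, so the fast writer is never slowed down.
        boolean accepted = queue.offer(event);
        if (!accepted) {
            // Only report the slow client (logs/monitoring); what to do next
            // (drop the connection, send a dedicated event) is discussed below.
            System.err.println("Event queue full, client is not keeping up");
        }
        return accepted;
    }
}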
Regardless, if the event queue is full, we need to do something
different. Dropping the connection doesn't sound too bad. Our
guarantees are at-least-once, so the client can get the events again on
reconnect with includeCurrentState.
Could we signal this with a dedicated event, just like we do with
ClientCacheFailoverEvent, so that when you receive this new event you
know you've been disconnected because events are not being consumed
fast enough?
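For reference, a rough sketch of what the client side looks like today,
assuming the standard Hot Rod listener annotations: includeCurrentState
gives us the replay after a reconnect, and the existing failover
callback would be the model for the new notification.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheFailover;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheFailoverEvent;

// Sketch of a client listener that can recover missed events after a reconnect.
@ClientListener(includeCurrentState = true)
public class SlowConsumerListener {

    @ClientCacheEntryCreated
    public void created(ClientCacheEntryCreatedEvent<String> event) {
        System.out.println("created: " + event.getKey());
    }

    // Existing precedent: the client learns it failed over to another node
    // and, with includeCurrentState, the current entries are re-sent.
    @ClientCacheFailover
    public void failedOver(ClientCacheFailoverEvent event) {
        System.out.println("failed over, current state will be replayed");
    }

    public static void main(String[] args) {
        RemoteCacheManager rcm = new RemoteCacheManager();
        RemoteCache<String, String> cache = rcm.getCache();
        cache.addClientListener(new SlowConsumerListener());
        // ...the proposed "events dropped" notification would be registered similarly.
    }
}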
Yep, but we can't just start sending the event to newer clients, even
if we increase the protocol version; we could send such an event only
if the client is explicitly listening for it, and otherwise we should
still drop the connection.
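Roughly what I mean on the server side, as a sketch only; every name
here (ClientConnection, QueueFullEvent, listensForQueueFullEvent) is
hypothetical, nothing like this exists today:

// Hypothetical sketch -- these types and methods do not exist in Infinispan.
interface ClientConnection {
    boolean listensForQueueFullEvent(); // client explicitly opted in to the new event
    void send(Object event);
    void close();
}

final class QueueFullEvent { } // the proposed "events not consumed fast enough" event

class SlowClientPolicy {
    /** Invoked when a client's event queue overflows. */
    void onQueueFull(ClientConnection connection) {
        if (connection.listensForQueueFullEvent()) {
            // Newer client that explicitly asked for the notification.
            connection.send(new QueueFullEvent());
        }
        // Either way we stop queueing and drop the connection; a client that
        // did not opt in only sees the disconnect and recovers on reconnect
        // via includeCurrentState.
        connection.close();
    }
}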
Radim
Cheers
On Mon, Jun 25, 2018 at 2:47 PM Radim Vansa <rvansa@redhat.com> wrote:
You mean that the writes are executed by the Netty thread instead of a
remote-thread-pool thread? OK, I think the issue is still valid.
Since there's no event truncation, when the queue is over the limit all
we can do is probably drop the connection. The client will reconnect
(if it's still alive) and ask for the current state (btw, will the user
know that it has lost some events when it reconnects on an
includeCurrentState=false listener? I don't think so... so we might
just drop the events anyway in that case). The current state will be
sent from another thread pool, and as an improvement we could somehow
reject the listener registration if we're over the limit on other
connections.
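As a sketch of that last idea (invented names, not existing code),
listener registration could simply be refused while some connection is
already over the limit:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch -- invented names, not existing Infinispan code.
class ListenerRegistrationGuard {
    private static final int QUEUE_LIMIT = 1024;
    // Current queue size per client connection, updated elsewhere.
    private final Map<String, Integer> queuedEventsPerConnection = new ConcurrentHashMap<>();

    /** Refuse new listener registrations while any connection is already over the limit. */
    boolean tryRegister(String connectionId) {
        boolean anyOverLimit = queuedEventsPerConnection.values().stream()
                .anyMatch(size -> size > QUEUE_LIMIT);
        if (anyOverLimit) {
            return false; // caller reports the failure back to the registering client
        }
        queuedEventsPerConnection.putIfAbsent(connectionId, 0);
        return true;
    }
}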
Radim
On 06/25/2018 02:29 PM, William Burns wrote:
> Hrmm, I don't know for sure. Radim changed how the events all work
> now, so it might be fixed.
>
> - Will
>
> ----- Original Message -----
>> From: "Galder Zamarreno" <galder@redhat.com>
>> To: "William Burns" <wburns@redhat.com>
>> Sent: Tuesday, 19 June, 2018 6:58:57 AM
>> Subject: ISPN-6478
>>
>> Hey,
>>
>> Is [1] still an issue for us?
>>
>> Cheers,
>>
>> [1] https://issues.jboss.org/browse/ISPN-6478
>>
--
Radim Vansa <rvansa@redhat.com>
JBoss Performance Team
--
Radim Vansa <rvansa@redhat.com>
JBoss Performance Team