On 25 May 2010, at 15:34, Mircea Markus wrote:
On 25 May 2010, at 17:22, Manik Surtani wrote:
>
> On 25 May 2010, at 15:09, Mircea Markus wrote:
>
>>
>> On 25 May 2010, at 16:26, Manik Surtani wrote:
>>
>>>
>>> On 25 May 2010, at 14:15, Mircea Markus wrote:
>>>
>>>>>>
>>>>>> But, I'd be concerned about an Infinispan thread that's needed
>>>>>> for doing a lot of critical work during a view change getting tied up
>>>>>> making a ton of notifications.
>>>>>
>>>>> ^^ Yeah, that's my concern. If during a rehash we need to stop at
>>>>> every entry that is being moved and issue a notification, that could be
>>>>> costly and really slow down the rehashing process.
>>>> Can't we register an *async* notification listener on ViewChanged?
>>>
>>> You'd still have a *lot* of notifications being queued up for the
>>> notification executor, since you will have 1 event *per entry* that is moved.
>> This would still happen in the same JVM, as that is where this information is
>> needed. It's just that we offer it as a service, so that users (the AS being
>> one of them) won't have to write this code.
>
> No, if the AS does it, it will be one notification (view change) + a scan of
> known session ids (keys). If we do it, it will involve 1 notification *per key*
> being migrated.
I don't intend to register more than one ViewChange listener per service. The AS
(or other clients) can then register as many keys as needed with that service
instance (and consequently with that single listener), and that listener iterates
over the keys etc.
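Roughly, such a service could look like this (a hypothetical sketch; the
KeyRegistryService name and its methods are illustrative, not an existing
Infinispan API):

import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical key-registry service: one ViewChange listener per service
// instance, any number of keys registered against it.
public class KeyRegistryService {
   private final Set<Object> registeredKeys = new CopyOnWriteArraySet<Object>();

   public void registerKey(Object key)   { registeredKeys.add(key); }
   public void unregisterKey(Object key) { registeredKeys.remove(key); }

   // the single listener (see the @ViewChanged sketch further down) iterates
   // over these keys when the view changes
   public Set<Object> keys() { return registeredKeys; }
}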
You may just register 1 listener, but the event gets dispatched by the rehasher
thread, which does the following (see the sketch after this list):
1. Loop thru all entries. For each entry in the cache:
2. Determine if key needs to be moved, given the old topology and the new one after
considering a new joiner. If the key needs to be moved:
3. Add key to digest of entries to be moved.
4. Transfer state
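A minimal sketch of that loop (keysToMove, oldOwner and newOwner are illustrative
names only, standing in for the consistent-hash calculation - not the actual
rehasher internals):

import java.util.*;

class RehashSketch {
   // Steps 1-4 above: owner lookups stand in for the consistent-hash
   // calculation under the old and new topologies.
   static Set<Object> keysToMove(Set<Object> allKeys,
                                 Map<Object, String> oldOwner,
                                 Map<Object, String> newOwner) {
      Set<Object> digest = new HashSet<Object>();
      for (Object key : allKeys) {                  // 1. loop through all entries
         // 2. the key needs to move if its owner changes under the new topology
         if (!oldOwner.get(key).equals(newOwner.get(key))) {
            digest.add(key);                        // 3. add key to the digest
            // firing a notification here is what generates one event per key
         }
      }
      return digest;                                // 4. caller transfers state
   }
}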
Now if we were to notify listeners of each change, this would happen in the loop,
after the condition, around the same time as step 3. That means a large number of
events would be generated and dispatched (even with just 1 listener).
Unless you are suggesting that the listener gets a set of keys (extracted from the digest,
after the loop) in a single notification event?
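For illustration, such a batched notification could be as simple as this
(hypothetical interface, not an existing Infinispan type):

import java.util.Set;

// One event per rehash carrying the whole set of moved keys, rather than
// one event per key.
interface KeysMovedListener {
   void keysMoved(Set<Object> movedKeys);
}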
As per my understanding, Brian's concern was that the list of
keys can be quite large (all the web sessions) and that would delay the rehashing thread -
this is now mitigated by using @ViewChange(async=true).
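For reference, with the current listener API the async flag would sit on
@Listener rather than on the method annotation; something along these lines:

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;

@Listener(sync = false)
public class ViewChangeWatcher {
   @ViewChanged
   public void onViewChange(ViewChangedEvent e) {
      // dispatched on the async notification executor, not the rehasher
      // thread; scan the known session ids (registered keys) here
   }
}
// registered once: cacheManager.addListener(new ViewChangeWatcher());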
> The cost of each notification is the construction and initialization of an
> event object - which includes cloning the invocation context - and placing it
> on an executor queue.
>
>
>>> Could be thousands, tens of thousands in some cases.
>>> All other notifications will be severely delayed (depending on your async
>>> notifier executor thread pool size).
>> If you have a big enough thread pool (at least 2 threads, in fact), this
>> should be no issue - and this should be made clear to the user.
>
> Executor queue size? This could start blocking?
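With a bounded queue it could, e.g. (plain java.util.concurrent; sizes are
made up):

import java.util.concurrent.*;

// Illustrative bounded notification executor: once both threads are busy and
// 10000 events are queued, CallerRunsPolicy makes the submitting thread - here
// the rehasher - run the notification itself, i.e. it effectively blocks.
ExecutorService notifier = new ThreadPoolExecutor(
      2, 2, 60L, TimeUnit.SECONDS,
      new LinkedBlockingQueue<Runnable>(10000),
      new ThreadPoolExecutor.CallerRunsPolicy());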
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Manik Surtani
manik(a)jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org