[infinispan-dev] ISPN-232 - feedback needed

Mircea Markus mircea.markus at jboss.com
Tue May 25 11:44:11 EDT 2010


On 25 May 2010, at 18:16, Manik Surtani wrote:

> 
> On 25 May 2010, at 16:06, Mircea Markus wrote:
> 
>> 
>> On 25 May 2010, at 17:44, Manik Surtani wrote:
>> 
>>> 
>>> On 25 May 2010, at 15:34, Mircea Markus wrote:
>>> 
>>>> 
>>>> On 25 May 2010, at 17:22, Manik Surtani wrote:
>>>> 
>>>>> 
>>>>> On 25 May 2010, at 15:09, Mircea Markus wrote:
>>>>> 
>>>>>> 
>>>>>> On 25 May 2010, at 16:26, Manik Surtani wrote:
>>>>>> 
>>>>>>> 
>>>>>>> On 25 May 2010, at 14:15, Mircea Markus wrote:
>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> But, I'd be concerned about an Infinispan thread that's needed 
>>>>>>>>>> for doing a lot of critical work during a view change getting tied up 
>>>>>>>>>> making a ton of notifications.
>>>>>>>>> 
>>>>>>>>> ^^ Yeah that's my concern.  If during a rehash we need to stop at every entry that is being moved and issue a notification, that could be costly and really slow down the rehashing process.
>>>>>>>> Can't we register an *async* notification listener on ViewChanged?
>>>>>>> 
>>>>>>> You'd still have a *lot* of notifications being queued up for the notification executor since you will have 1 event *per entry* that is moved.  
>>>>>> This would still happen in the same JVM, since that is where the information is needed. The difference is that we offer it as a service, so that users (the AS being one of them) won't have to write this code.
>>>>> 
>>>>> No, if the AS does it, it will be one notification (view change) + a scan of known session ids (keys).  If we do it, it will involve 1 notification *per key* being migrated.
>>>> I don't intend to register more than one ViewChange listener per service. The AS (or other clients) can then register as many keys as they like with that service instance (and consequently with that single listener), and the listener iterates over those keys, etc.
>>> 
>>> You may just register 1 listener, but the event gets dispatched by the rehasher thread, which does:
>> I was thinking about something totally different.
>> 1. The NotificationService (NS) is started. Internally it registers a single listener: @ViewChange(async=true)
>> 2. The user registers with the NS the set of keys for which he wants to be notified on topology changes. Whenever a key is registered, the NS calculates and stores the key's current address based on the CH.
>> 3. View change happens, NS is notified with a thread from the async pool
>> 4. It iterates over the set of registered keys and checks whether the previous address has changed.
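
(For illustration only, a minimal compilable sketch of an NS along the lines of steps 1-4 above. All names are hypothetical; ownerOf stands in for a lookup against the CH, and the actual @ViewChanged listener wiring is only indicated in a comment, since it depends on the listener API.)

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiConsumer;
import java.util.function.Function;

final class NotificationService {

    private final Function<Object, String> ownerOf;                 // CH lookup: key -> owning address
    private final Map<Object, String> lastKnownOwner = new ConcurrentHashMap<>();
    private final Map<Object, BiConsumer<Object, String>> callbacks = new ConcurrentHashMap<>();

    NotificationService(Function<Object, String> ownerOf) {
        this.ownerOf = ownerOf;
        // Step 1: on start, register a single async listener with the cache,
        // e.g. an @Listener class whose @ViewChanged method delegates to onViewChange().
    }

    // Step 2: a key is registered; its current owner is computed and remembered.
    void register(Object key, BiConsumer<Object, String> onOwnerChanged) {
        lastKnownOwner.put(key, ownerOf.apply(key));
        callbacks.put(key, onOwnerChanged);
    }

    // Steps 3 and 4: called (from the async notification pool) on a view change;
    // iterate over the registered keys and report those whose owner moved.
    void onViewChange() {
        for (Map.Entry<Object, String> e : lastKnownOwner.entrySet()) {
            String newOwner = ownerOf.apply(e.getKey());
            if (!newOwner.equals(e.getValue())) {
                lastKnownOwner.put(e.getKey(), newOwner);
                callbacks.get(e.getKey()).accept(e.getKey(), newOwner);
            }
        }
    }
}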
> 
> That's a whole alternate notification system to what we have.  :)  
This is logic that Brian needs to write, and possibly others too - why not offer it as a reusable service?
> -1 to supporting 2 such systems.
> 
>>> 
>>> 1.  Loop thru all entries.  For each entry in the cache:
>>> 2.     Determine if key needs to be moved, given the old topology and the new one after considering a new joiner.  If the key needs to be moved:
>>> 3.         Add key to digest of entries to be moved.
>>> 4.  Transfer state
>>> 
>>> Now if we were to notify listeners of each change, this would happen in the loop, after the condition, around the same time as step 3 - which means a large number of events generated and dispatched (even with just 1 listener).
>>> 
>>> Unless you are suggesting that the listener gets a set of keys (extracted from the digest, after the loop) in a single notification event?  
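
(Purely to show where a per-entry notification would sit in that loop, a compilable sketch with hypothetical names; Function<Object, String> stands in for the old and new consistent hashes, and the commented-out notifier call marks the point at which one event per migrated key would be created and queued.)

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

final class RehashLoopSketch {

    static Map<Object, Object> buildDigest(Map<Object, Object> dataContainer,
                                           Function<Object, String> oldOwner,   // old CH
                                           Function<Object, String> newOwner,   // new CH after the join
                                           String self) {
        Map<Object, Object> toMove = new HashMap<>();
        for (Map.Entry<Object, Object> e : dataContainer.entrySet()) {          // 1. loop thru all entries
            Object key = e.getKey();
            boolean movesAway = oldOwner.apply(key).equals(self)
                    && !newOwner.apply(key).equals(self);                        // 2. does the key need to move?
            if (movesAway) {
                toMove.put(key, e.getValue());                                   // 3. add key to the digest
                // A per-entry notification would have to be issued right here,
                // i.e. one event object created and queued for *every* migrated key:
                // notifier.notifyKeyMoving(key, self, newOwner.apply(key));
            }
        }
        return toMove;                                                           // 4. the caller then transfers this state
    }
}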
>>> 
>>> 
>>>> As per my understanding, Brian's concern was that the list of keys can be quite large (all the web sessions) and that would delay the rehashing thread - this is now mitigated by using @ViewChange(async=true).
>>>>> The cost of each notification is the construction and initialization of an event object - which includes cloning the invocation context - and placing it on an executor queue.
>>>>> 
>>>>> 
>>>>>>> Could be thousands, tens of thousands in cases.  
>>>>>>> All other notifications will be severely delayed (depending on your async notifier executor threadpool size)
>>>>>> If you have a big enough thread pool (actually at least 2 threads) this should be no issue - and this should be made clear to the user.
>>>>> 
>>>>> Executor queue size?  This could start blocking?
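
(To make the queue concern concrete, a JDK-only sketch - not Infinispan code - of an async notifier with 2 threads and a bounded queue being flooded with one task per migrated key. With CallerRunsPolicy the submitting thread, i.e. the rehasher, ends up doing notification work itself once the queue fills up; with the default AbortPolicy the submissions would be rejected instead.)

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AsyncNotifierQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor notifier = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),               // bounded notification queue
                new ThreadPoolExecutor.CallerRunsPolicy());   // overflow falls back on the submitter

        for (int key = 0; key < 10_000; key++) {              // one "event" per migrated key
            notifier.execute(() -> {
                try {
                    Thread.sleep(1);                          // stand-in for dispatching the event to listeners
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        notifier.shutdown();
        notifier.awaitTermination(1, TimeUnit.MINUTES);
    }
}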
>>>>> 
> --
> Manik Surtani
> manik at jboss.org
> Lead, Infinispan
> Lead, JBoss Cache
> http://www.infinispan.org
> http://www.jbosscache.org
> 
> 
> 
> 


