[infinispan-dev] ISPN-232 - feedback needed

Manik Surtani manik at jboss.org
Tue May 25 04:30:59 EDT 2010


On 24 May 2010, at 20:55, Brian Stansberry wrote:

> On 05/24/2010 01:39 PM, Mircea Markus wrote:
>> On 24 May 2010, at 19:30, Brian Stansberry wrote:
>> 
>>> Thanks, Mircea:
>>> 
>>> A related use case I need to cover[1] is checking whether an existing
>>> key still hashes to the local node and, if not, where it hashes. This may
>>> not belong in KeyAffinityService, but I'd like it in some place easily
>>> accessible to the same caller that creates the KeyAffinityService.
>>> 
>>> Set<Address> getAddressesForKey(K key)
>>> 
>>> Usage:
>>> 
>>> Set<Address> addresses = getAddressesForKey(sessionId);
>>> if (!addresses.contains(localAddress)) {
>>>    redirectRequest(addresses);
>>> }
>>> 
>>> getAddressesForKey() could return a List<Address>, but then checking
>>> whether the local address is included would require a scan. AIUI the
>>> ordering in the set of DIST nodes is not meaningful, so a Set should be
>>> fine.
>> there's already something similar in the cache API:
>> cache.getAdvancedCache().getDistributionManager().locate(Object key),
>> which returns a List<Address>.
> 
> Haha, I figured there was something, but was lazy. Thanks!
> 
>> If you have many sessions (which I think is the case), then transforming
>> the List into a Set is indeed quicker.
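
For reference, pulling those two together - a rough sketch of Brian's check
using locate(), where redirectRequest() is a hypothetical caller-side
callback (untested):

   import java.util.HashSet;
   import java.util.List;
   import java.util.Set;
   import org.infinispan.Cache;
   import org.infinispan.remoting.transport.Address;

   // Does sessionId still hash to this node? If not, redirect.
   void checkOwnership(Cache<String, Object> cache, String sessionId) {
      List<Address> owners =
            cache.getAdvancedCache().getDistributionManager().locate(sessionId);
      Set<Address> ownerSet = new HashSet<Address>(owners); // O(1) contains()
      Address self = cache.getCacheManager().getAddress();
      if (!ownerSet.contains(self)) {
         redirectRequest(ownerSet); // hypothetical: forward to an owner node
      }
   }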
> 
> Yes, this could be pretty heavily invoked.  Although...
> 
>> Taking the idea one step further, this would possibly be a useful
>> service as well: a notification service to which you register keys and
>> listeners; the service would call the listener whenever a key's
>> distribution changes.
>> Wdyt?
> 
> That would be helpful. I was thinking this morning about how to avoid 
> calling getAddressesForKey() all the time (i.e. on every request), and was 
> considering a view change notification as a trigger to mark all 
> sessions as needing a check. What you propose would be more efficient 
> for me.

Maybe, but what you described is precisely what we'd do, except that it would be the rehashing thread that does this.

> But I'd be concerned about an Infinispan thread that's needed 
> for a lot of critical work during a view change getting tied up 
> making a ton of notifications.

^^ Yeah, that's my concern.  If during a rehash we need to stop at every entry being moved and issue a notification, that could be costly and really slow down the rehashing process.
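
If we do build such a service, the rehash thread should do no more than hand
the event off to an executor - something along these lines (all names
invented, nothing like this exists yet):

   import java.util.List;
   import java.util.concurrent.Executor;
   import org.infinispan.remoting.transport.Address;

   // Hypothetical listener contract for key-distribution changes.
   interface KeyDistributionListener<K> {
      void keyMoved(K key, List<Address> newOwners);
   }

   // The rehash thread only enqueues; listener code runs on the executor,
   // so a slow listener cannot stall the rehash itself.
   <K> void notifyMoved(Executor executor,
                        final KeyDistributionListener<K> listener,
                        final K key, final List<Address> newOwners) {
      executor.execute(new Runnable() {
         public void run() {
            listener.keyMoved(key, newOwners);
         }
      });
   }

That keeps the cost on the rehashing thread down to a queue insert per moved
key, at the price of slightly stale notifications.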

> 
>>> 
>>> Re: K getCollocatedKey(K otherKey), please note that for the AS use cases
>>> I know of where colocation is helpful,
>> I can't help but notice your American spelling of "colocation". I know,
>> I know, you won a War :)
>>> the colocated data will be stored
>>> in separate Cache instances, while the KAS is scoped to a cache. The
>>> method should still be helpful though, as long as the caches use the
>>> same underlying CacheManager/Channel. It's up to the caller to check that.
>>> 
>> agreed. All the caches associated with a CM share the same transport. They
>> can potentially have different ConsistentHash functions though, which is
>> why the service runs per Cache instance, rather than per
>> CacheManager instance.
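
Right - so for two caches from the same CacheManager, usage would look
roughly like this (sketch only; parameter order as proposed later in this
thread, and the buffer size of 100 is made up):

   import org.infinispan.affinity.KeyAffinityService;
   import org.infinispan.affinity.KeyAffinityServiceFactory;

   // One service per cache: both caches share the CacheManager's transport,
   // but each may be configured with its own ConsistentHash function.
   KeyAffinityService<String> sessionKeys = KeyAffinityServiceFactory
         .newKeyAffinityService(sessionCache, executorFactory, keyGenerator, 100);
   KeyAffinityService<String> otherKeys = KeyAffinityServiceFactory
         .newKeyAffinityService(otherCache, executorFactory, keyGenerator, 100);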
>>> On KeyAffinityServiceFactory, there's the bufferSize param:
>>> 
>>> * @param keyBufferSize the number of generated keys per {@link
>>> org.infinispan.remoting.transport.Address}.
>>> 
>>> That implies the factory will maintain a buffer of keys for *all*
>>> addresses, not just the local address. For the use cases I'm concerned
>>> with, the only address for which I want keys is the local one. Perhaps it
>>> makes sense to offer a separate param for the local address buffer size?
>>> 
>>> In a small cluster, maintaining some keys in a buffer for irrelevant
>>> addresses is no big deal, but for large clusters it may waste a fair
>>> amount of memory.
>> Good point! What about adding a new parameter to the factory's methods:
>> a Collection<Address> specifying the addresses for which keys
>> will be generated?
> 
> Perhaps take a Map<Address, Integer>, mapping address to key buffer size?
> 
> TBH, I can't think of a use case where you'd use different-sized buffers 
> (other than 0) for different addresses, but a map gives more imaginative 
> people flexibility.
> 
> Hmm, any use case where people are interested in particular 
> addresses other than the local address is vulnerable to view 
> changes introducing unexpected members. So, to be complete, perhaps add a 
> default buffer size for members not specifically listed:
> 
> public static <K, V> KeyAffinityService<K> newKeyAffinityService(
>       Cache<K, V> cache, ExecutorFactory ex, KeyGenerator keyGenerator,
>       Map<Address, Integer> keyBufferSizes, int defaultKeyBufferSize)
> 
> Kind of complex! :-)
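
Complex, but not too bad in use - e.g. buffering keys only for the local
node, against Brian's signature above (untested sketch):

   import java.util.Collections;
   import java.util.Map;
   import org.infinispan.remoting.transport.Address;

   // Buffer 100 keys for the local address, none for anyone else.
   Address self = cache.getCacheManager().getAddress();
   Map<Address, Integer> sizes = Collections.singletonMap(self, 100);
   KeyAffinityService<String> service = KeyAffinityServiceFactory
         .newKeyAffinityService(cache, ex, keyGenerator, sizes, 0 /* default */);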

> 
>> And another factory method that would transparently pass the local
>> address to the previous method, so that the caller won't have to obtain
>> the local address itself.
>> 
> 
> Sure, if you can come up with a signature for that which isn't confusing 
> vs. the other factory methods. Otherwise I don't mind getting the local 
> address myself.
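
How about giving it a distinct name so it can't be confused with the other
overloads - just a sketch, delegating to the Map-based variant above:

   // Convenience variant: only buffers keys for the local node.
   public static <K, V> KeyAffinityService<K> newLocalKeyAffinityService(
         Cache<K, V> cache, ExecutorFactory ex, KeyGenerator keyGenerator,
         int keyBufferSize) {
      Address self = cache.getCacheManager().getAddress();
      return newKeyAffinityService(cache, ex, keyGenerator,
            Collections.singletonMap(self, keyBufferSize), 0);
   }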
> 
>> Cheers,
>> Mircea
>>> 
>>> [1] https://jira.jboss.org/browse/JBAS-7853 plus a similar requirement
>>> for EJB3 as discussed on https://community.jboss.org/docs/DOC-15052
>>> 
>>> 
>>> On 05/24/2010 05:15 AM, Mircea Markus wrote:
>>>> Hi,
>>>> 
>>>> I've committed[1] the interfaces for the key-affinity service in
>>>> ISPN-232.
>>>> Would you mind taking a look and letting me know what you think?
>>>> 
>>>> Thanks!
>>>> Mircea
>>>> 
>>>> [1]
>>>> http://anonsvn.jboss.org/repos/infinispan/trunk/core/src/main/java/org/infinispan/affinity/
> 
> 
> -- 
> Brian Stansberry
> Lead, AS Clustering
> JBoss by Red Hat

--
Manik Surtani
manik at jboss.org
Lead, Infinispan
Lead, JBoss Cache
http://www.infinispan.org
http://www.jbosscache.org