[infinispan-dev] ISPN-232 - feedback needed
Mircea Markus
mircea.markus at jboss.com
Mon May 24 14:39:26 EDT 2010
On 24 May 2010, at 19:30, Brian Stansberry wrote:
> Thanks, Mircea:
>
> A related use case I need to cover[1] is checking whether an existing
> key still hashes to the local node and if not, where it hashes. This may
> not belong in KeyAffinityService, but I'd like it in some place easily
> accessible to the same caller that creates the KeyAffinityService.
>
> Set<Address> getAddressesForKey(K key)
>
> Usage:
>
> Set<Address> addresses = getAddressesForKey(sessionId);
> if (!addresses.contains(localAddress)) {
>     redirectRequest(addresses);
> }
>
> getAddressesForKey() could return a List<Address>, but then checking
> whether the local address is present would require a scan. AIUI the
> ordering in the set of DIST nodes is not meaningful, so a Set should be
> fine.
There's already something similar in the cache API:
cache.getAdvancedCache().getDistributionManager().locate(Object key), which returns a List<Address>.
If you have many sessions (which I think is the case), then transforming the List into a Set is indeed quicker.
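A minimal sketch of that lookup-and-check pattern. Note that Address and locate() below are self-contained stand-ins for illustration; the real call would go through cache.getAdvancedCache().getDistributionManager().locate(key):

```java
import java.util.*;

public class LocateCheck {
    // Stand-in for org.infinispan.remoting.transport.Address
    record Address(String name) {}

    // Hypothetical stand-in for DistributionManager.locate(key),
    // which returns the list of owners for a key
    static List<Address> locate(Object key) {
        return List.of(new Address("node-a"), new Address("node-b"));
    }

    static boolean isLocal(Object key, Address localAddress) {
        // Copy the List into a Set so repeated membership checks are O(1)
        Set<Address> owners = new HashSet<>(locate(key));
        return owners.contains(localAddress);
    }

    public static void main(String[] args) {
        System.out.println(isLocal("session-1", new Address("node-a"))); // true
        System.out.println(isLocal("session-1", new Address("node-c"))); // false
    }
}
```

The Set copy only pays off when the same ownership result is checked more than once; for a single contains() call, scanning the List directly is fine.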
Taking the idea one step further, this might be a useful service as well: a notification service to which
you register keys and listeners, and which calls the listener whenever a key's distribution changes.
Wdyt?
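Roughly what I have in mind, as a self-contained sketch. All names here (KeyDistributionListener, KeyDistributionNotifier, onDistributionChange) are illustrative assumptions, not a committed API:

```java
import java.util.*;

public class KeyNotificationSketch {
    // Stand-in for org.infinispan.remoting.transport.Address
    record Address(String name) {}

    // Hypothetical callback invoked when a key's set of owners changes
    interface KeyDistributionListener<K> {
        void onDistributionChange(K key, Set<Address> newOwners);
    }

    static class KeyDistributionNotifier<K> {
        private final Map<K, List<KeyDistributionListener<K>>> listeners = new HashMap<>();

        void register(K key, KeyDistributionListener<K> listener) {
            listeners.computeIfAbsent(key, k -> new ArrayList<>()).add(listener);
        }

        // Would be called internally when a topology change re-hashes keys;
        // notifies only the listeners registered for the affected keys
        void topologyChanged(Map<K, Set<Address>> newOwnership) {
            newOwnership.forEach((key, owners) ->
                listeners.getOrDefault(key, List.of())
                         .forEach(l -> l.onDistributionChange(key, owners)));
        }
    }

    public static void main(String[] args) {
        KeyDistributionNotifier<String> notifier = new KeyDistributionNotifier<>();
        notifier.register("session-1", (key, owners) ->
            System.out.println(key + " now owned by " + owners));
        notifier.topologyChanged(Map.of("session-1", Set.of(new Address("node-b"))));
    }
}
```

The point is that the caller registers interest per key and reacts to rehashing events, instead of polling locate() on every request.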
>
> Re: K getCollocatedKey(K otherKey) please note that for the AS use cases
> I know of where colocation is helpful,
I can't help but notice your American spelling of "colocation". I know, I know, you won a War :)
> the colocated data will be stored
> in separate Cache instances, while the KAS is scoped to a cache. The
> method should still be helpful though, as long as the caches use the
> same underlying CacheManager/Channel. It's up to the caller to check that.
>
Agreed. All the caches associated with a CacheManager share the same transport. They can potentially have different ConsistentHash functions though, which is why the service runs per Cache instance rather than per CacheManager instance.
> On KeyAffinityServiceFactory, there's the bufferSize param:
>
> * @param keyBufferSize the number of generated keys per {@link
> org.infinispan.remoting.transport.Address}.
>
> That implies the factory will maintain a buffer of keys for *all*
> addresses, not just the local address. The use cases I'm concerned with,
> the only address for which I want keys is the local one. Perhaps it
> makes sense to offer a separate param for the local address buffer size?
>
> In a small cluster maintaining some keys in a buffer for irrelevant
> addresses is no big deal, but for large clusters it may waste a fair
> amount of memory.
Good point! What about adding a new parameter to the factory's methods, a Collection<Address>, to specify the addresses for which keys will be generated?
And another factory method that transparently passes the local address to the previous one, so that the caller isn't concerned with obtaining the local address.
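The two factory methods could look roughly like this. This is a self-contained sketch under stated assumptions: KeyGenerator, KeyAffinityService and the method names are illustrative stand-ins, and the per-address buffering below fakes key generation rather than filtering generated keys by their hash, as the real service would:

```java
import java.util.*;

public class AffinityFactorySketch {
    // Stand-in for org.infinispan.remoting.transport.Address
    record Address(String name) {}

    interface KeyGenerator<K> { K getKey(); }
    interface KeyAffinityService<K> { K getKeyForAddress(Address address); }

    // Buffer keys only for the given addresses, avoiding wasted memory
    // on large clusters where most addresses are irrelevant to the caller
    static <K> KeyAffinityService<K> newKeyAffinityService(
            Collection<Address> forAddresses, KeyGenerator<K> generator, int keyBufferSize) {
        Map<Address, Deque<K>> buffers = new HashMap<>();
        for (Address a : forAddresses) {
            Deque<K> buf = new ArrayDeque<>();
            for (int i = 0; i < keyBufferSize; i++) buf.add(generator.getKey());
            buffers.put(a, buf);
        }
        return address -> buffers.get(address).poll();
    }

    // Convenience overload: only buffer keys for the local address,
    // so the caller never has to look up its own address
    static <K> KeyAffinityService<K> newLocalKeyAffinityService(
            Address localAddress, KeyGenerator<K> generator, int keyBufferSize) {
        return newKeyAffinityService(List.of(localAddress), generator, keyBufferSize);
    }
}
```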
Cheers,
Mircea
>
> [1] https://jira.jboss.org/browse/JBAS-7853 plus a similar requirement
> for EJB3 as discussed on https://community.jboss.org/docs/DOC-15052
>
>
> On 05/24/2010 05:15 AM, Mircea Markus wrote:
>> Hi,
>>
>> I've committed[1] the interfaces for the key-affinity service in ISPN-232.
>> Would you mind taking a look and let me know what you think?
>>
>> Thanks!
>> Mircea
>>
>> [1] http://anonsvn.jboss.org/repos/infinispan/trunk/core/src/main/java/org/infinispan/affinity/
>> _______________________________________________
>> infinispan-dev mailing list
>> infinispan-dev at lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
>
> --
> Brian Stansberry
> Lead, AS Clustering
> JBoss by Red Hat