[infinispan-dev] ISPN-3051 configuration

Sanne Grinovero sanne at infinispan.org
Mon Sep 9 07:12:57 EDT 2013


On 9 September 2013 10:37, Pedro Ruivo <pedro at infinispan.org> wrote:
>
>
> On 09/09/2013 10:18 AM, Dan Berindei wrote:
>> Hi guys
>>
>> As you know, I'm working on ISPN-3051, allowing each node to take a
>> higher or lower proportion of the entries in the cache. I've implemented
>> this by adding a float "loadFactor" setting in each node's
>> configuration, with 1 being the default and any non-negative value
>> being accepted (including 0).
>>
>> There are two questions I wanted to ask you about the configuration:
>>
>> 1. What do you think about the "loadFactor" name? I started having
>> doubts about it, since it has a very different meaning in HashMap. I
>> have come up with a couple of alternatives, but I don't love any of them:
>> "proportionalLoad" and "proportionalCapacity".
>
> I think capacity is a good name...

+1
"capacityFactor"

>
>>
>> 2. Where should we put this setting? I have added it as
>> CacheConfiguration.clustering().hash().loadFactor(), but I can't think
>> of a reason for having different values for each cache, so we might as
>> well put it in the global configuration.
>
> My vote is to have it per cache, so we can support use cases like this:
> two caches on the same node where one is more prioritized/requested
> than the other, and you want to keep more data for the prioritized cache.

+1 to having it cache-specific. A node could be interested in
participating in some caches without needing a high degree of
locality, while wanting to keep as much as possible in memory for a
different cache.
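
For illustration, a minimal sketch of what the per-cache setup could
look like with the programmatic API, assuming the "capacityFactor"
name suggested above ends up on the hash() builder (the cache names
are made up):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class CapacityFactorSketch {
   public static void main(String[] args) {
      EmbeddedCacheManager cm = new DefaultCacheManager(
            GlobalConfigurationBuilder.defaultClusteredBuilder().build());

      // Sketch only: assumes capacityFactor(float) lands on the
      // hash() builder as discussed in this thread.

      // This node asks for twice its "fair share" of the entries
      // of the prioritized cache...
      Configuration priority = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .hash().capacityFactor(2.0f)
            .build();
      cm.defineConfiguration("priority-cache", priority);

      // ...but only half a share of the background cache, so it
      // stays a member without holding much of its data.
      Configuration background = new ConfigurationBuilder()
            .clustering().cacheMode(CacheMode.DIST_SYNC)
            .hash().capacityFactor(0.5f)
            .build();
      cm.defineConfiguration("background-cache", background);

      cm.stop();
   }
}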

>
>>
>> Cheers
>> Dan

