On 30 March 2010, at 13:14, Sanne Grinovero wrote:
Emmanuel Bernard wrote:
> Bela Ban wrote:
>> You could use the cluster view, which is the same across all nodes, and
>> pick the first element in the list. Or you could run an agreement
>> protocol, which deterministically elects a master.
>
> Looks simple, deterministic and elegant.
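
A minimal sketch of that first suggestion, assuming the JGroups View API (the class and helper names here are illustrative, not existing code):

import java.util.List;

import org.jgroups.Address;
import org.jgroups.View;

public final class MasterElector {

    // Every node receives the same View with members in the same order,
    // so picking the first member elects the same master everywhere
    // without any extra messaging.
    static Address electMaster(View view) {
        List<Address> members = view.getMembers();
        return members.get(0);
    }

    // Lets a node check locally whether it was elected.
    static boolean isLocalMaster(View view, Address localAddress) {
        return electMaster(view).equals(localAddress);
    }
}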
Agreed, that looks like a clean solution, but it would scale better if we
could elect a master per index, spreading the load instead of always
sending all the work to the same node. Any ideas on that?
I could hash the index name (the identifier) and take it modulo the cluster view size, WDYT?
That would work, assuming you take the actual directory provider name (i.e. the one that can
potentially contain the shard information, like Account.1, Account.2, etc.).
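
A sketch of how that hash-based election could look, assuming the shard-qualified directory provider name is used as the key (the class name is hypothetical):

import java.util.List;

import org.jgroups.Address;
import org.jgroups.View;

public final class PerIndexMasterElector {

    // Hash the directory provider name (e.g. "Account.1", "Account.2")
    // and map it onto the cluster view, so different indexes/shards get
    // different masters while every node still agrees on the result.
    static Address electMaster(String directoryProviderName, View view) {
        List<Address> members = view.getMembers();
        // Mask the sign bit instead of using Math.abs, which overflows
        // for Integer.MIN_VALUE.
        int slot = (directoryProviderName.hashCode() & Integer.MAX_VALUE)
                % members.size();
        return members.get(slot);
    }
}

One caveat: because the mapping is a plain modulo over the view size, any view change reassigns most indexes to a new master.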