Will,
I like the API proposal, but one thing that jumps out at me is to
combine, or rather overload, singleNodeSubmission with
singleNodeSubmission(int failOverCount) and remove the failOverRetries
method. The first singleNodeSubmission does not fail over, while the
second one does, with the specified failOverCount. That way we don't
have to keep state and throw IllegalStateException when someone
erroneously calls failOverRetries.
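Roughly, the shape I have in mind is the following (just an illustration of the overload, not final signatures; the sketch interface name is made up):

```java
// Hypothetical sketch of the overloaded API -- not the real Infinispan
// interface. The point is that the fail over count becomes part of the
// submission mode itself, so no stateful check (and no possible
// IllegalStateException) is needed.
interface ClusterExecutorSketch {
    // Submit each command to a single node; no fail over.
    ClusterExecutorSketch singleNodeSubmission();

    // Submit each command to a single node, failing over to other nodes
    // up to failOverCount times on failure.
    ClusterExecutorSketch singleNodeSubmission(int failOverCount);
}
```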
Vladimir
On 2017-02-21 10:11 AM, William Burns wrote:
As many of you may or may not be aware, the ClusterExecutor
interface and implementation were introduced in Infinispan 8.2 [1].
This class is a new API that can be used to submit commands to other
nodes, similar to what DistributedExecutor does, while also not
being tied to a cache.
The first implementation of ClusterExecutor did not include a couple
of features that DistributedExecutor has. In this post I will
concentrate on failover and execution policies. My plan is to
introduce some API in Infinispan 9 to allow ClusterExecutor to
offer these capabilities as well.
The first change is that I want to add additional options to
execution policies. An execution policy is used to limit which
nodes messages are sent to based on their topology (site, rack and
machine id). The old execution policy allowed SAME_MACHINE,
SAME_RACK, SAME_SITE and ALL. I plan on adding the opposites of the
SAME variants as well, supporting DIFFERENT_MACHINE, DIFFERENT_RACK
and DIFFERENT_SITE, in case the user wants to ensure that data is
processed elsewhere. Unless you think this is unneeded?
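The combined set of constants could look like the following sketch (the SAME_* values and ALL exist today; the DIFFERENT_* values are the proposed additions, and the enum name follows the API below):

```java
// Sketch of the extended policy enum. SAME_* and ALL match the existing
// options; the DIFFERENT_* constants are the proposed additions for ensuring
// that work is processed away from the local machine/rack/site.
enum ClusterExecutionPolicy {
    ALL,
    SAME_MACHINE, DIFFERENT_MACHINE,
    SAME_RACK,    DIFFERENT_RACK,
    SAME_SITE,    DIFFERENT_SITE
}
```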
The API changes I am thinking of are below (included in the email to
allow for responses inline). Note that existing methods would be
unchanged, and thus the submit and execute methods would still be
used to send commands. One big difference is that I have not allowed
the user to control the failover node or the target node when doing
a single submission with multiple available targets. In my mind, if
a user wants this they should do it manually themselves, but this is
open for discussion as well.
/**
 * When a command is submitted it will only be submitted to one of the available nodes. There are no strict
 * requirements as to which node is chosen; it is implementation specific. Fail over can be used with this
 * configuration, please see {@link ClusterExecutor#failOverRetries(int)} for more information.
 * @return this executor again, with commands submitted to a single node
 */
ClusterExecutor singleNodeSubmission();
/**
 * When a command is submitted it will be submitted to all of the available nodes. Fail over is not supported
 * with this configuration. This is the default submission method.
 * @return this executor again, with commands submitted to all nodes
 */
ClusterExecutor allNodeSubmission();
/**
 * Enables fail over when using {@link ClusterExecutor#singleNodeSubmission()}. If the executor
 * is not currently in single node submission mode, this method will throw {@link IllegalStateException}.
 * When a fail over count is applied, a submitted command will be retried on the available nodes up to that
 * many times until it completes without an exception. The one exception that is not retried is a
 * TimeoutException, since this could be related to {@link ClusterExecutor#timeout(long, TimeUnit)}. Each time
 * a fail over occurs, a random node among the available nodes will be used (trying not to reuse the same node).
 * @param failOverCount how many times this executor will attempt a fail over
 * @return this executor again, with fail over retries applied
 * @throws IllegalStateException if this cluster executor is not currently configured for single node submission
 */
ClusterExecutor failOverRetries(int failOverCount) throws IllegalStateException;
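To make the intended semantics concrete, here is a simplified, self-contained model of that retry behavior (not the actual implementation; node names are plain strings and a TimeoutException cause stands in for the executor's timeout):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Function;

// Simplified model of the documented fail-over semantics: a command runs on
// one randomly chosen node; on failure it is retried on another randomly
// chosen node (trying not to reuse the previous one), up to failOverCount
// additional attempts. A timeout-caused failure is not retried.
final class FailOverSketch {
    static <T> T submitWithFailOver(List<String> nodes, int failOverCount,
                                    Function<String, T> command) {
        List<String> candidates = new ArrayList<>(nodes);
        String target = candidates.remove(ThreadLocalRandom.current().nextInt(candidates.size()));
        for (int attempt = 0; ; attempt++) {
            try {
                return command.apply(target);
            } catch (RuntimeException e) {
                // Timeouts are not retried; otherwise give up once the retry
                // budget is spent.
                if (e.getCause() instanceof java.util.concurrent.TimeoutException
                        || attempt >= failOverCount) {
                    throw e;
                }
                if (candidates.isEmpty()) {
                    candidates = new ArrayList<>(nodes);
                    if (candidates.size() > 1) {
                        candidates.remove(target); // avoid reusing the same node when possible
                    }
                }
                target = candidates.remove(ThreadLocalRandom.current().nextInt(candidates.size()));
            }
        }
    }
}
```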
/**
 * Allows for filtering of nodes by only allowing addresses that match the given execution policy to be used.
 * Note this method overrides any previous filtering that was done (ie. calling
 * {@link ClusterExecutor#filterTargets(Collection)}).
 * @param policy the policy to determine which nodes can be used
 * @return this executor again, with the execution policy applied to determine which nodes are contacted
 */
ClusterExecutor filterTargets(ClusterExecutionPolicy policy);
/**
 * Allows for filtering of nodes dynamically per invocation. The predicate is applied to each member that
 * is part of the execution policy. Note that this method overrides any previous
 * filtering that was done (ie. calling {@link ClusterExecutor#filterTargets(Collection)}).
 * @param policy the execution policy applied before the predicate to allow only nodes in that group
 * @param predicate the dynamic predicate applied each time an invocation is done
 * @return this executor again, with the execution policy and predicate applied to determine which nodes are contacted
 */
ClusterExecutor filterTargets(ClusterExecutionPolicy policy, Predicate<? super Address> predicate);
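The two-stage filtering described above (policy first, then the per-invocation predicate) can be modeled with plain stdlib code; this is only an illustration of the intended ordering, with a made-up Member type standing in for real topology-aware addresses:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Simplified model of the two-stage target filtering: the execution policy
// narrows the member list first, then the dynamic predicate is applied to the
// survivors on each invocation. "rack" is a stand-in for real topology info.
final class TargetFilterSketch {
    record Member(String name, String rack) {}

    static List<Member> targets(List<Member> members, Predicate<Member> policy,
                                Predicate<Member> predicate) {
        return members.stream()
                .filter(policy)     // topology-based execution policy
                .filter(predicate)  // dynamic per-invocation predicate
                .collect(Collectors.toList());
    }
}
```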
Thanks for any input,
- Will
[1]
https://github.com/infinispan/infinispan/blob/master/core/src/main/java/o...
_______________________________________________
infinispan-dev mailing list
infinispan-dev(a)lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev