[infinispan-dev] Distributed Counter Discussion

Dan Berindei dan.berindei at gmail.com
Tue Mar 22 13:23:04 EDT 2016


On Tue, Mar 22, 2016 at 7:12 PM, Bela Ban <bban at redhat.com> wrote:
>
>
> On 22/03/16 18:04, Dan Berindei wrote:
>> On Mon, Mar 21, 2016 at 1:43 PM, Bela Ban <bban at redhat.com> wrote:
>>>
>>>
>>> On 21/03/16 11:12, Pedro Ruivo wrote:
>>>> Hi all,
>>>>
>>>> @Eric, thanks for the requirements.
>>>>
>>>> @Bela, does the JGroups counter support those semantics (AP)?
>>>
>>> No. You'd have to catch the MergeView and do this manually.
>>
>> I should also mention that you don't get a "cluster split" event. When
>> a cluster ABC splits into A and BC and then merges back, you could get
>> a view sequence like this:
>>
>> A, B, C: A|3 [A, B, C]
>> A: A|4 [A, B]
>> A: A|5 [A] (could be missing)
>
> How's that possible? Are you assuming that A still gets heartbeats from B?
>
> Otherwise A will definitely get a singleton view A|5=[A], unless the
> cluster heals in the meantime (before B is suspected and excluded).

Exactly, the cluster can heal before B is suspected by A.

>
> Note that A|4 on A may or may not be received; if FD_ALL is used then
> chances are view [A] will be received directly after [ABC] on A.

Even with FD_ALL, the heartbeats from B and C will arrive on A at
different times, so they may be suspected at different times. True,
it's not the most likely scenario, but I wanted to emphasize the
corner cases that the counter implementation would have to deal with.
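
To make Bela's "catch the MergeView and do this manually" concrete: the
merge shows up as a MergeView in the receiver callback. A minimal sketch
against the JGroups 3.x API (the reconciliation itself is left as a
comment, since that is the open problem):

import org.jgroups.MergeView;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class MergeAwareReceiver extends ReceiverAdapter {

    @Override
    public void viewAccepted(View view) {
        if (view instanceof MergeView) {
            MergeView merge = (MergeView) view;
            // getSubgroups() returns the last view of each partition
            // that is now merging back together.
            for (View subgroup : merge.getSubgroups())
                System.out.println("partition before merge: " + subgroup);
            // Counter reconciliation would have to start here. Note that
            // the subgroups don't tell you when the split happened, so
            // "additions since the split" still has to be tracked by the
            // application itself.
        }
    }
}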

>
>> B, C: B|4 [B, C]
>> A, B, C: B|6 [B, C, A] (merge view)
>
> Note that you could keep the order if you installed a custom
> view/merge-view creation policy, but by default members are sorted
> according to their UUIDs.
>
>
>> So it's not that easy to keep track of counter additions "since the split".
>
> Agreed.
>
>>>> Infinispan does not have eventual consistency (yet), nor an update log,
>>>> so it can't reconcile the counter and you will lose one partition's updates.
>>>
>>> Same for the JGroups counter service. The jgroups-raft CounterService
>>> provides strong consistency, but at the expense of availability.
>>>
>>>> On 03/18/2016 02:19 PM, Eric Wittmann wrote:
>>>>> Agreed. :)
>>>>>
>>>>> On 3/18/2016 9:31 AM, Bela Ban wrote:
>>>>>> So actually you don't care if you end up with multiple counters in
>>>>>> case of a network split, but you do care that the values of the
>>>>>> diverged counters get reconciled when the network partition heals.
>>>>>>
>>>>>> Example:
>>>>>> - C1: 1000
>>>>>> - Network split: C1: 1000, C2: 1000
>>>>>> - Different clients update counters on both sides of the partition:
>>>>>>   C1: 1500, C2: 1600
>>>>>> - Network split heals, reconciling both counters to 2100 (1000 + 500
>>>>>>   + 600). This means the 500 added to C1 should have been added to
>>>>>>   C2 as well, and the 600 added to C2 should have been added to C1.
>>>>>>
>>>>>> If such behavior were acceptable, then we could do without CP and
>>>>>> live with AP.
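
The arithmetic above is just each partition's delta applied on top of the
common base value. A minimal sketch with hypothetical names, assuming each
partition somehow remembers the value it had when the split began (which,
per the view discussion earlier in this thread, is exactly the part that
is hard to do reliably):

public class CounterReconciliation {

    // base: the common value before the split (1000 in the example);
    // partitionValues: the value each partition reached on its own
    // (1500 and 1600 in the example).
    static long reconcile(long base, long... partitionValues) {
        long result = base;
        for (long v : partitionValues)
            result += v - base;   // add each partition's delta
        return result;
    }

    public static void main(String[] args) {
        // 1000 + (1500 - 1000) + (1600 - 1000) = 2100
        System.out.println(reconcile(1000, 1500, 1600));
    }
}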
>>>>>>
>>>>>> On 18/03/16 14:19, Eric Wittmann wrote:
>>>>>>> Yes, precisely.  The API Gateway itself is clustered.  It services a
>>>>>>> large volume of inbound traffic which it reverse-proxies to appropriate
>>>>>>> back-end APIs after applying policies such as security, rate limiting,
>>>>>>> caching, etc.
>>>>>>>
>>>>>>> -Eric
>>>>>>>
>>>>>>> On 3/18/2016 2:32 AM, Bela Ban wrote:
>>>>>>>> Stupid question: why do you need a distributed counter for this? Is the
>>>>>>>> service you're monitoring replicated in a cluster?
>>>>>>>>
>>>>>>>> On 17/03/16 18:06, Eric Wittmann wrote:
>>>>>>>>> Greetings.  Apologies for coming in a bit late on this conversation.
>>>>>>>>> Tristan pointed me to it a couple of days ago and unfortunately I'm just
>>>>>>>>> now getting time to reply.
>>>>>>>>>
>>>>>>>>> I can try to quickly give an overview of apiman's (JBoss API Management
>>>>>>>>> Gateway) requirements.
>>>>>>>>>
>>>>>>>>> What we're trying to do is implement support for Limiting policies:
>>>>>>>>>
>>>>>>>>> * Rate Limiting/Throttling (e.g. limit of 100 requests per second)
>>>>>>>>> * Quotas (e.g. limit of 100,000,000 requests per month)
>>>>>>>>> * Transfer Quotas (e.g. limit of 2.5GB of data downloaded per day)
>>>>>>>>>
>>>>>>>>> We will need to support multiple backing implementations of the Rate
>>>>>>>>> Limiter, and we're trying to get Infinispan to be one of those
>>>>>>>>> implementations.
>>>>>>>>>
>>>>>>>>> In no particular order, we would need the following characteristics:
>>>>>>>>>
>>>>>>>>> - Can be "squishy" for quotas and transfer quotas:  If you
>>>>>>>>>           get 100,001,017 requests that's OK
>>>>>>>>> - Strict would be cool as an option:  Hard-fail when the
>>>>>>>>>           counter reaches the limit - no chance it will go over.
>>>>>>>>> - Lots of individual counters:  users may publish 100s of
>>>>>>>>>           APIs to the Gateway, and each API may be consumed by
>>>>>>>>>           100s or 1000s of users/clients.  Depending on the
>>>>>>>>>           configuration of the policy, *each* user/client has
>>>>>>>>>           a separate limit.
>>>>>>>>> - Counters need to be created dynamically:  users can
>>>>>>>>>           add APIs via the Management UI, configure them to add
>>>>>>>>>           policies (e.g. a Quota policy) and then publish them to
>>>>>>>>>           a running Gateway, at which point end users can invoke
>>>>>>>>>           the API through the Gateway, which will use a counter
>>>>>>>>>           to enforce the Quota.
>>>>>>>>> - Counter values reset at the end of a time boundary:  for
>>>>>>>>>           example, at the end of the month the counter value for
>>>>>>>>>           the example quota above would reset to 0.
>>>>>>>>> - Don't care (right now) what the counter value is: at the
>>>>>>>>>           moment we simply need to know if some counter max value
>>>>>>>>>           has been reached.  In the future we would like to know
>>>>>>>>>           when a max value is being "approached" (e.g. to notify a
>>>>>>>>>           user)
>>>>>>>>> - Should be persistent: it would not be ideal for e.g. per-
>>>>>>>>>           month quota values to be lost on server restart.
>>>>>>>>>
>>>>>>>>> Those are all the high-level requirements I can think of off the
>>>>>>>>> top of my head, after reading all of the current messages in this
>>>>>>>>> thread. :)
>>>>>>>>>
>>>>>>>>> -Eric
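
To make those requirements concrete, here is a minimal single-JVM sketch
of the "squishy" quota check (hypothetical names, plain java.util.concurrent,
no distribution and no persistence; those are exactly the parts Infinispan
would need to provide):

import java.time.YearMonth;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class QuotaCounters {

    // One counter per (api, client, period). Counters are created on
    // demand, and "reset at the end of a time boundary" falls out of
    // including the current period in the key.
    private final ConcurrentHashMap<String, LongAdder> counters =
        new ConcurrentHashMap<>();

    // "Squishy" semantics: increment first, compare afterwards, so a
    // burst of concurrent requests may overshoot the limit slightly
    // (100,001,017 out of 100,000,000 is OK, per the above).
    public boolean tryAcquire(String api, String client, long limit) {
        String key = api + "/" + client + "/" + YearMonth.now();
        LongAdder counter = counters.computeIfAbsent(key, k -> new LongAdder());
        counter.increment();
        return counter.sum() <= limit;
    }
}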
>>>
>>> --
>>> Bela Ban, JGroups lead (http://www.jgroups.org)
>>>
>
> --
> Bela Ban, JGroups lead (http://www.jgroups.org)
>