[infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc)

Wolf Fink wfink at redhat.com
Mon May 22 12:59:55 EDT 2017


This was mentioned by Tristan as "L4 client intelligence", which means Hot Rod
client intelligence, not the network layer.
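
For context, a minimal sketch of how client intelligence is configured on the
Java Hot Rod client today (the proposed "L4" level does not exist yet; the
host, port and cache name below are placeholders):

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ClientIntelligence;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class ClientIntelligenceExample {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);
            // Current levels: BASIC, TOPOLOGY_AWARE, HASH_DISTRIBUTION_AWARE.
            // The "L4" discussed in this thread would be an additional level
            // that lets the server redirect clients to another cluster.
            builder.clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE);
            RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
            rcm.getCache("example").put("k", "v");
            rcm.stop();
        }
    }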

On Mon, May 22, 2017 at 1:52 PM, Sebastian Laskawiec <slaskawi at redhat.com>
wrote:

>
>
> On Fri, May 19, 2017 at 1:18 PM Wolf Fink <wfink at redhat.com> wrote:
>
>> +1 for Vojtech
>>
>> Yes, currently the clients need to be moved to the new cluster in one shot;
>> that was discussed before.
>> And that makes the migration hard, because most customers are not able to
>> make that happen.
>> So there is a small possibility of inconsistency if clients connected to the
>> old server update entries that the new server has already migrated.
>>
>> I see two options:
>> 1)
>> the source server needs to actively propagate updates to the target on every
>> write (see the sketch below)
>> 2)
>> with the new L4 strategy all clients are moved automatically to the target,
>> so the source is no longer updated.
>> I only see a small window for inconsistency during the switch:
>> - a client might still have a request in flight to the source after other
>> clients were already moved to the target and accessed the same key there
>> - a new client connects with old properties; here we need to ensure that
>> the first request is redirected to the target and does not update the source
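>>
>> Just to illustrate the data flow for option 1, a rough client-side
>> approximation (a real implementation would live on the source server; the
>> types below are the standard Hot Rod client APIs): a remote listener on the
>> source cache that replays each write to the target:
>>
>>     import org.infinispan.client.hotrod.RemoteCache;
>>     import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
>>     import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
>>     import org.infinispan.client.hotrod.annotation.ClientListener;
>>     import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
>>     import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;
>>
>>     @ClientListener
>>     public class ForwardingListener {
>>         private final RemoteCache<Object, Object> source;
>>         private final RemoteCache<Object, Object> target;
>>
>>         public ForwardingListener(RemoteCache<Object, Object> source,
>>                                   RemoteCache<Object, Object> target) {
>>             this.source = source;
>>             this.target = target;
>>         }
>>
>>         @ClientCacheEntryCreated
>>         public void onCreate(ClientCacheEntryCreatedEvent<Object> event) {
>>             // the event only carries the key, so re-read the value from the source
>>             target.put(event.getKey(), source.get(event.getKey()));
>>         }
>>
>>         @ClientCacheEntryModified
>>         public void onModify(ClientCacheEntryModifiedEvent<Object> event) {
>>             target.put(event.getKey(), source.get(event.getKey()));
>>         }
>>     }
>>
>> registered with sourceCache.addClientListener(new ForwardingListener(sourceCache, targetCache)).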
>>
>
> Could you please tell me what L4 means in this context? Are you referring
> to L4 routing/switching (transport level) or new Hot Rod client
> intelligence?
>
> In Kubernetes/OpenShift, fronting the Infinispan cluster with a Load Balancer
> could do the trick. If all clients use the Service URL, then once Kubernetes
> kills all "old" Pods, all TCP socket connections will break and the clients
> will retry. This will produce a flood of error messages, but the clients will
> eventually connect to the new cluster.
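>
> As a rough sketch (the Service hostname and cache name below are just
> placeholders), a client that resolves the cluster through the Kubernetes
> Service would look roughly like this with the Java Hot Rod client:
>
>     import org.infinispan.client.hotrod.RemoteCache;
>     import org.infinispan.client.hotrod.RemoteCacheManager;
>     import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
>
>     public class ServiceUrlClient {
>         public static void main(String[] args) {
>             ConfigurationBuilder builder = new ConfigurationBuilder();
>             // Point at the Service DNS name, never at individual Pod IPs
>             builder.addServer()
>                    .host("infinispan.myproject.svc.cluster.local")
>                    .port(11222);
>             // Keep retrying while the "old" Pods are being killed
>             builder.maxRetries(10);
>             RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
>             RemoteCache<String, String> cache = rcm.getCache("migrated-cache");
>             cache.put("k", "v");
>             rcm.stop();
>         }
>     }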
>
>
>>
>> On Fri, May 19, 2017 at 12:50 PM, Vojtech Juranek <vjuranek at redhat.com>
>> wrote:
>>
>>> On Wednesday, 17 May 2017 16:56:25 CEST, Tristan Tarrant wrote:
>>> > 2) Need a way to "roll back" the process in case of failures during the
>>> > migration: redirecting the clients back to the original cluster without
>>> > data loss. This would use the above L4 strategy.
>>>
>>> it's not only about redirecting clients - IIRC newly created entries on the
>>> target cluster are not propagated back to the source cluster during a
>>> rolling upgrade, so we also need to somehow sync this new data back to the
>>> source cluster during the rollback to avoid data loss. The same applies to
>>> a "cancel process" feature.
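>>>
>>> (For illustration only: if the existing rolling-upgrade machinery were
>>> reused in the reverse direction - i.e. the source cluster configured with a
>>> remote store pointing at the target - the rollback sync could be triggered
>>> via the RollingUpgradeManager's synchronizeData operation over JMX. The JMX
>>> URL and ObjectName below are placeholders and depend on the server setup.)
>>>
>>>     import javax.management.MBeanServerConnection;
>>>     import javax.management.ObjectName;
>>>     import javax.management.remote.JMXConnector;
>>>     import javax.management.remote.JMXConnectorFactory;
>>>     import javax.management.remote.JMXServiceURL;
>>>
>>>     public class RollbackSync {
>>>         public static void main(String[] args) throws Exception {
>>>             JMXServiceURL url = new JMXServiceURL(
>>>                 "service:jmx:rmi:///jndi/rmi://source-host:9999/jmxrmi");
>>>             JMXConnector connector = JMXConnectorFactory.connect(url);
>>>             try {
>>>                 MBeanServerConnection mbs = connector.getMBeanServerConnection();
>>>                 // approximate name pattern; depends on cache and manager names
>>>                 ObjectName upgradeManager = new ObjectName(
>>>                     "org.infinispan:type=Cache,name=\"myCache(dist_sync)\","
>>>                     + "manager=\"DefaultCacheManager\",component=RollingUpgradeManager");
>>>                 // pull the entries written on the target back into the source
>>>                 mbs.invoke(upgradeManager, "synchronizeData",
>>>                            new Object[]{"hotrod"},
>>>                            new String[]{String.class.getName()});
>>>             } finally {
>>>                 connector.close();
>>>             }
>>>         }
>>>     }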
>>>
>>
>
> --
>
> SEBASTIAN ŁASKAWIEC
>
> INFINISPAN DEVELOPER
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev at lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>