<div dir="ltr">Hey Tristan, Emmanuel!<div><br></div><div>Comments inlined.</div><div><br></div><div>Thanks</div><div>Sebastian</div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 19, 2016 at 10:08 AM, Emmanuel Bernard <span dir="ltr"><<a href="mailto:emmanuel@hibernate.org" target="_blank">emmanuel@hibernate.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Considering very few options can be changed at runtime safely, should we<br>
rather focus on a strategy where we start a new grid and populate it<br>
with the old grid before flipping the proxy to the new one?<br></blockquote><div><br></div><div>+1, that's exactly what the Infinispan Rolling Upgrade does.</div>
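<div><br></div><div>For reference, the target-cluster side of that procedure boils down to a cache backed by a remote store that points at the source cluster, so entries that are not yet local are fetched from the old grid over Hot Rod. A rough sketch in Java - the cache name, host and port are placeholders, and the builder methods should be checked against the Infinispan version in use:</div><div><br></div><pre>import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.persistence.remote.configuration.RemoteStoreConfigurationBuilder;

// Cache definition for the NEW (target) cluster: a remote store pointing at the
// OLD (source) cluster lets the new grid lazily pull entries during the upgrade.
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.persistence()
       .addStore(RemoteStoreConfigurationBuilder.class)
          .remoteCacheName("myCache")   // same cache name on the source grid (placeholder)
          .hotRodWrapping(true)         // keep the Hot Rod wire format of the entries
          .addServer()
             .host("old-cluster")       // hypothetical address of the source cluster
             .port(11222);
Configuration config = builder.build();
// On the target cluster: cacheManager.defineConfiguration("myCache", config);</pre><div><br></div><div>Once the clients (or the proxy in front of them) are flipped to the new grid, the remaining entries can be synchronized and the source disconnected - see the second sketch further down.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">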
<div class="HOEnZb"><div class="h5"><br>
On Mon 2016-07-18 17:12, Tristan Tarrant wrote:<br>
> On 14/07/16 12:17, Sebastian Laskawiec wrote:<br>
> > Hey!<br>
> ><br>
> > I've been thinking about potential use of Kubernetes/OpenShift<br>
> > (OpenShift = Kubernetes + additional features) Rolling Update<br>
> > mechanism for updating configuration of Hot Rod servers. You might<br>
> > find some more information about the rolling updates here [1][2] but<br>
> > putting it simply, Kubernetes replaces nodes in the cluster one at a<br>
> > time. What's worth mentioning, Kubernetes ensures that the newly<br>
> > created replica is fully operational before taking down another one.<br>
> ><br>
> > There are two things that make me scratch my head...<br>
> ><br>
> > #1 - What type of configuration changes can we introduce using rolling<br>
> > updates?<br>
> ><br>
> > I'm pretty sure introducing a new cache definition won't do any harm.<br>
> > But what if we change a cache type from Distributed to Replicated? Do<br>
> > you have any idea which configuration changes are safe and which are<br>
> > not? Could we come up with such a list?<br>
> Very few changes are safe, but obviously this would need to be verified<br>
> on a per-attribute basis. All of the attributes which can be changed at<br>
> runtime (timeouts, eviction size) are safe.<br>
><br>
> ><br>
> > #2 - How to prevent loosing data during the rolling update process?<br>
> I believe you want to write losing :)<br></div></div></blockquote><div><br></div><div>Good one :)</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
> > In Kubernetes we have a mechanism called lifecycle hooks [3] (we can<br>
> > invoke a script during container startup/shutdown). The problem with<br>
> > the shutdown script is that it's time constrained (if it doesn't finish<br>
> > within a certain amount of time, Kubernetes will simply kill the<br>
> > container). Fortunately, this timeout is configurable.<br>
> ><br>
> > The idea to prevent losing data would be to invoke (enqueue and<br>
> > wait for the completion of) the state transfer process triggered by the shutdown hook<br>
> > (with the timeout set to the maximum value). If for some reason this won't<br>
> > work (e.g. a user has so much data that migrating it this way would<br>
> > take ages), there is a backup plan - Infinispan Rolling Upgrades [4].<br>
> The thing that concerns me here is the amount of churn involved: the<br>
> safest bet for us is that the net topology doesn't change, i.e. you end<br>
> up with the exact number of nodes you started with </div></div></blockquote><div><br></div><div>Yes, Kubernetes Rolling Update works this way. The number of nodes at the end of the process is equal to the number you started with. </div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">and they are replaced<br>
> one by one in a way that the replacement assumes the identity of the<br>
> replaced (persistent uuid, owned segments, and data in a<br>
> persistent store).</div></div></blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">Other types could be supported but they will<br>
> definitely have a level of risk.<br>
> Also we don't have any guarantees that a newer version will be able to<br>
> cluster with an older one...<br></div></div></blockquote><div><br></div><div>I'm not sure we can ensure the same identity of the replaced node. If we consider configuration changes, a user can change anything...</div><div><br></div><div>I think I'm convinced that the Infinispan Rolling Upgrade procedure is the only proper solution at this stage. Other ways (although much simpler) must be treated as 'do it at your own risk'.</div>
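<div><br></div><div>For completeness, the last step of that procedure (after the clients have been switched over) is to synchronize whatever has not been pulled across yet and then disconnect the source cluster. A rough sketch over JMX - the service URL, JMX domain and cache/manager names below are made up, and the operation names should be double-checked against the deployed Infinispan version:</div><div><br></div><pre>import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class FinishRollingUpgrade {
   public static void main(String[] args) throws Exception {
      // Hypothetical JMX endpoint of a node in the NEW cluster.
      JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://new-cluster-node:9999/jmxrmi");
      try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
         MBeanServerConnection connection = connector.getMBeanServerConnection();
         // Assumed ObjectName pattern - domain, cache and manager names depend on the setup.
         ObjectName upgradeManager = new ObjectName(
               "org.infinispan:type=Cache,name=\"myCache(dist_sync)\","
             + "manager=\"DefaultCacheManager\",component=RollingUpgradeManager");
         // Pull over every entry that has not been lazily fetched yet...
         connection.invoke(upgradeManager, "synchronizeData",
               new Object[] { "hotrod" }, new String[] { "java.lang.String" });
         // ...then drop the remote store that points at the old cluster.
         connection.invoke(upgradeManager, "disconnectSource",
               new Object[] { "hotrod" }, new String[] { "java.lang.String" });
      }
   }
}</pre><div><br></div><div>Until disconnectSource is invoked the new grid keeps falling back to the old one, which is what makes the "populate before flipping the proxy" approach workable even for larger data sets.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">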
><br>
> Tristan<br>
> _______________________________________________<br>
> infinispan-dev mailing list<br>
> <a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
> <a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
_______________________________________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/mailman/listinfo/infinispan-dev</a><br>
</div></div></blockquote></div><br></div></div>