[
https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin....
]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/21/16 9:35 AM:
--------------------------------------------------------------------
WARNING - WORK IN PROGRESS NOTES
The procedure for OpenShift is as follows:
# Start OpenShift cluster using:
{code}
oc cluster up
{code}
Note that you are logged in as a developer.
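As an optional sanity check, this can be confirmed with:
{code}
oc whoami    # typically prints: developer
oc project   # prints the current project, e.g. myproject
{code}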
# Create a new Infinispan cluster using the standard configuration. Later on I will use the REST interface for playing with the data, so turn on compatibility mode (see the sketch below):
{code}
oc new-app slaskawi/infinispan-experiments
{code}
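For reference, compatibility mode is enabled per cache in the server configuration. A minimal sketch, assuming the cache is named {{default}} (the exact element nesting depends on the configuration file shipped in the image):
{code}
<!-- Sketch only: enable compatibility mode so data written over Hot Rod is also readable over REST -->
<distributed-cache name="default" mode="SYNC">
    <compatibility enabled="true"/>
</distributed-cache>
{code}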
# Note that you should always use labels for your clusters. I'll call mine {{cluster=cluster-1}}:
{code}
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-1
oc env dc/infinispan-experiments OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments cluster=cluster-1
{code}
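To verify that the environment variables and the label actually landed on the deployment config, you can run:
{code}
oc env dc/infinispan-experiments --list
oc get dc/infinispan-experiments --show-labels
{code}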
# Scale up the deployment
{code}
oc scale dc/infinispan-experiments --replicas=3
{code}
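After scaling it is worth checking that all three Pods are up and have formed a single cluster (the Pod name below is a placeholder):
{code}
oc get pods -l cluster=cluster-1
# one of the Pods' logs should show a JGroups view with 3 members
oc logs <pod-name> | grep -i view
{code}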
# Create a route to the service
{code}
oc expose svc/infinispan-experiments
{code}
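The hostname used in the curl commands below can be read back from the route:
{code}
oc get route infinispan-experiments
{code}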
# Add some entries using REST
{code}
curl -X POST -H 'Content-type: text/plain' -d 'test' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
# Check if the entry is there
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/1
{code}
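To make the later migration easier to observe, a few more entries can be added with a small loop (same route hostname as above):
{code}
for i in $(seq 1 10); do
  curl -X POST -H 'Content-type: text/plain' -d "test-$i" \
    http://infinispan-experiments-myproject.192.168.0.17.xip.io/rest/default/$i
done
{code}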
# Now we can spin up a new cluster. Again, it is very important to use labels, which prevents the two clusters from joining each other. The new Docker image with the Infinispan Hot Rod server needs to contain the following Remote Cache Store definition:
{code}
212,214d214
<     <remote-store cache="default" hotrod-wrapping="true" read-only="true">
<         <remote-server outbound-socket-binding="remote-store-hotrod-server"/>
<     </remote-store>
449,451c449
<     <!-- If you have properly configured DNS, this could be a service name or even a Headless Service -->
<     <!-- However, DNS configuration with a local cluster might be tricky -->
<     <remote-destination host="172.30.14.112" port="11222"/>
{code}
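The {{host}} above appears to be Cluster-1's service ClusterIP (172.30.x.x is the service network on a default OpenShift setup); it can be looked up with:
{code}
oc get svc infinispan-experiments -o jsonpath='{.spec.clusterIP}'
{code}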
# Spinning up the new cluster involves the following commands:
{code}
oc new-app slaskawi/infinispan-experiments-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_LABELS=cluster=cluster-2
oc env dc/infinispan-experiments-2 OPENSHIFT_KUBE_PING_NAMESPACE=myproject
oc label dc/infinispan-experiments-2 cluster=cluster-2
oc expose svc infinispan-experiments-2
{code}
# At this stage we have 2 clusters (the old one with selector {{cluster=cluster-1}} and
the new one with selector {{cluster=cluster-2}}). Both should be up and running (check
that with {{oc status -v}}). Cluster-2 has remote stores which point to Cluster-1.
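Assuming the Cluster-2 route was exposed as above (the hostname below follows the same naming pattern and is an assumption), Cluster-2 should be able to serve reads for keys that live in Cluster-1 through the remote store; the exact payload shape depends on the wrapping settings:
{code}
curl -X GET -H 'Content-type: text/plain' http://infinispan-experiments-2-myproject.192.168.0.17.xip.io/rest/default/1
{code}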
Implement Rolling Upgrades with Kubernetes
------------------------------------------
Key: ISPN-6673
URL: https://issues.jboss.org/browse/ISPN-6673
Project: Infinispan
Issue Type: Feature Request
Components: Cloud Integrations
Reporter: Sebastian Łaskawiec
Assignee: Sebastian Łaskawiec
There are 2 mechanisms which seem to do the same thing but are totally different:
* [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
* [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
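For context, a Kubernetes Rolling Update at that time was driven by {{kubectl rolling-update}} against a replication controller; a minimal sketch, with the controller name and image tag as placeholders:
{code}
kubectl rolling-update infinispan-rc --image=slaskawi/infinispan-experiments:new-tag
{code}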