[JBoss JIRA] (ISPN-6720) Module tests depend on both testng and junit
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6720?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6720:
--------------------------------------
Status: Resolved (was: Pull Request Sent)
Resolution: Done
> Module tests depend on both testng and junit
> --------------------------------------------
>
> Key: ISPN-6720
> URL: https://issues.jboss.org/browse/ISPN-6720
> Project: Infinispan
> Issue Type: Bug
> Components: Build process
> Affects Versions: 9.0.0.Alpha2
> Reporter: Tristan Tarrant
> Assignee: Tristan Tarrant
> Fix For: 9.0.0.Alpha4, 9.0.0.Final
>
>
> The parent pom adds default dependencies on testng and junit (as well as mockito). Because of this, many modules use a mix of test utility classes.
> We should drop the default dependencies and force each module to choose the testing libraries it needs.
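> A hedged sketch of how one might locate the offenders, assuming the usual multi-module layout where every module carries its own pom.xml (illustration only, not part of the fix):
> {code}
> #!/bin/bash
> # List modules whose pom.xml mentions both TestNG and JUnit.
> # A pom-level mention is only a rough proxy for a real test-classpath
> # dependency, so treat the output as a starting point.
> grep -rl --include=pom.xml "org.testng" . | xargs -r grep -l "junit"
> {code}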
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6900) Cut down unnecessary dependencies
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6900?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec moved JDG-54 to ISPN-6900:
----------------------------------------------
Project: Infinispan (was: JBoss Data Grid)
Key: ISPN-6900 (was: JDG-54)
Workflow: GIT Pull Request with Triage workflow (was: CDW with loose statuses v1)
> Cut down unnecessary dependencies
> ---------------------------------
>
> Key: ISPN-6900
> URL: https://issues.jboss.org/browse/ISPN-6900
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
> Priority: Optional
>
> Analyze the output of:
> {code}
> ./build.sh clean install -DskipTests -T4
> find . -name pom.xml -exec mvn -f {} dependency:analyze-report \;
> {code}
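> The console variant of the goal prints the interesting findings directly as warnings; a hedged sketch for collecting them (assuming the standard maven-dependency-plugin output format, which varies between plugin versions):
> {code}
> #!/bin/bash
> # Keep only the "Used undeclared" and "Unused declared" warning sections
> # from the console analyzer output.
> mvn dependency:analyze 2>&1 | grep -E -A 15 "Used undeclared|Unused declared"
> {code}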
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6890) Infinispan server can not start with Kubernetes
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6890?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-6890:
------------------------------------
Status: Pull Request Sent (was: Open)
Git Pull Request: https://github.com/jboss-dockerfiles/infinispan/pull/19
> Infinispan server can not start with Kubernetes
> -----------------------------------------------
>
> Key: ISPN-6890
> URL: https://issues.jboss.org/browse/ISPN-6890
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud Integrations
> Affects Versions: 9.0.0.Alpha3, 8.2.3.Final
> Reporter: Sebastian Łaskawiec
> Assignee: Gustavo Fernandes
>
> Infinispan server cannot start when deployed on Kubernetes.
> Error message:
> {code}
> $ oc logs pod/infinispan-server-1-t53ad
> =========================================================================
> JBoss Bootstrap Environment
> JBOSS_HOME: /opt/jboss/infinispan-server
> JAVA: /usr/lib/jvm/java/bin/java
> JAVA_OPTS: -server -server -Xms64m -Xmx512m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
> =========================================================================
> java.lang.IllegalArgumentException: Failed to instantiate class "org.jboss.logmanager.handlers.PeriodicRotatingFileHandler" for handler "FILE"
> at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:116)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.doPrepare(LogContextConfigurationImpl.java:335)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.prepare(LogContextConfigurationImpl.java:288)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.commit(LogContextConfigurationImpl.java:297)
> at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:546)
> at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:97)
> at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:514)
> at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:476)
> at java.util.logging.LogManager$3.run(LogManager.java:399)
> at java.util.logging.LogManager$3.run(LogManager.java:396)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.readPrimordialConfiguration(LogManager.java:396)
> at java.util.logging.LogManager.access$800(LogManager.java:145)
> at java.util.logging.LogManager$2.run(LogManager.java:345)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.ensureLogManagerInitialized(LogManager.java:338)
> at java.util.logging.LogManager.getLogManager(LogManager.java:378)
> at org.jboss.modules.Main.main(Main.java:482)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:114)
> ... 17 more
> Caused by: java.io.FileNotFoundException: /opt/jboss/infinispan-server/standalone/log/server.log (No such file or directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at org.jboss.logmanager.handlers.FileHandler.setFile(FileHandler.java:151)
> at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.setFile(PeriodicRotatingFileHandler.java:102)
> at org.jboss.logmanager.handlers.FileHandler.setFileName(FileHandler.java:189)
> at org.jboss.logmanager.handlers.FileHandler.<init>(FileHandler.java:119)
> at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.<init>(PeriodicRotatingFileHandler.java:70)
> ... 22 more
> java.lang.IllegalStateException: WFLYSRV0124: Could not create server data directory: /opt/jboss/infinispan-server/standalone/data
> at org.jboss.as.server.ServerEnvironment.<init>(ServerEnvironment.java:473)
> at org.jboss.as.server.Main.determineEnvironment(Main.java:297)
> at org.jboss.as.server.Main.main(Main.java:94)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.modules.Module.run(Module.java:329)
> at org.jboss.modules.Main.main(Main.java:507)
> {code}
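> Both root causes ({{FileNotFoundException}} on {{server.log}} and WFLYSRV0124 for the data directory) point at missing write permissions: OpenShift runs containers under a random non-root UID that belongs to the root group. A hedged sketch of the kind of image-build fix this suggests - the actual change is in the linked pull request and may differ:
> {code}
> # Sketch only; paths follow the image layout from the log above.
> # Grant group 0 write access so an arbitrary OpenShift-assigned UID
> # (always a member of the root group) can create logs and data.
> chgrp -R 0 /opt/jboss/infinispan-server/standalone
> chmod -R g+rwX /opt/jboss/infinispan-server/standalone
> {code}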
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6890) Infinispan server can not start with Kubernetes
by Gustavo Fernandes (JIRA)
[ https://issues.jboss.org/browse/ISPN-6890?page=com.atlassian.jira.plugin.... ]
Gustavo Fernandes updated ISPN-6890:
------------------------------------
Status: Open (was: New)
> Infinispan server can not start with Kubernetes
> -----------------------------------------------
>
> Key: ISPN-6890
> URL: https://issues.jboss.org/browse/ISPN-6890
> Project: Infinispan
> Issue Type: Bug
> Components: Cloud Integrations
> Affects Versions: 9.0.0.Alpha3, 8.2.3.Final
> Reporter: Sebastian Łaskawiec
> Assignee: Gustavo Fernandes
>
> Infinispan server cannot start when deployed on Kubernetes.
> Error message:
> {code}
> $ oc logs pod/infinispan-server-1-t53ad
> =========================================================================
> JBoss Bootstrap Environment
> JBOSS_HOME: /opt/jboss/infinispan-server
> JAVA: /usr/lib/jvm/java/bin/java
> JAVA_OPTS: -server -server -Xms64m -Xmx512m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
> =========================================================================
> java.lang.IllegalArgumentException: Failed to instantiate class "org.jboss.logmanager.handlers.PeriodicRotatingFileHandler" for handler "FILE"
> at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:116)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.doPrepare(LogContextConfigurationImpl.java:335)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.prepare(LogContextConfigurationImpl.java:288)
> at org.jboss.logmanager.config.LogContextConfigurationImpl.commit(LogContextConfigurationImpl.java:297)
> at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:546)
> at org.jboss.logmanager.PropertyConfigurator.configure(PropertyConfigurator.java:97)
> at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:514)
> at org.jboss.logmanager.LogManager.readConfiguration(LogManager.java:476)
> at java.util.logging.LogManager$3.run(LogManager.java:399)
> at java.util.logging.LogManager$3.run(LogManager.java:396)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.readPrimordialConfiguration(LogManager.java:396)
> at java.util.logging.LogManager.access$800(LogManager.java:145)
> at java.util.logging.LogManager$2.run(LogManager.java:345)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.util.logging.LogManager.ensureLogManagerInitialized(LogManager.java:338)
> at java.util.logging.LogManager.getLogManager(LogManager.java:378)
> at org.jboss.modules.Main.main(Main.java:482)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:114)
> ... 17 more
> Caused by: java.io.FileNotFoundException: /opt/jboss/infinispan-server/standalone/log/server.log (No such file or directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at org.jboss.logmanager.handlers.FileHandler.setFile(FileHandler.java:151)
> at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.setFile(PeriodicRotatingFileHandler.java:102)
> at org.jboss.logmanager.handlers.FileHandler.setFileName(FileHandler.java:189)
> at org.jboss.logmanager.handlers.FileHandler.<init>(FileHandler.java:119)
> at org.jboss.logmanager.handlers.PeriodicRotatingFileHandler.<init>(PeriodicRotatingFileHandler.java:70)
> ... 22 more
> java.lang.IllegalStateException: WFLYSRV0124: Could not create server data directory: /opt/jboss/infinispan-server/standalone/data
> at org.jboss.as.server.ServerEnvironment.<init>(ServerEnvironment.java:473)
> at org.jboss.as.server.Main.determineEnvironment(Main.java:297)
> at org.jboss.as.server.Main.main(Main.java:94)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.jboss.modules.Module.run(Module.java:329)
> at org.jboss.modules.Main.main(Main.java:507)
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/27/16 8:55 AM:
--------------------------------------------------------------------
The rolling update for Kubernetes and OpenShift looks as follows:
# Create a new app for Infinispan (I'm using my own image with additional health and readiness checks) - {{slaskawi/infinispan-ru-1}}:
{code}
/opt/jboss/infinispan-server/bin/is_ready.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=*:read-attribute(name=cache-rebalancing-status)' | awk '/result/{gsub("\"", "", $3); print $3}' | awk '{if(NR>1)print}' | grep -v 'PENDING\|IN_PROGRESS\|SUSPENDED'
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
{code}
/opt/jboss/infinispan-server/bin/is_healthy.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/:read-attribute(name=server-state)' | awk '/result/{gsub("\"", "", $3); print $3}' | grep running
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
Since the rebalance status might vary from run to run (imagine a node joining the cluster), there are two ways to deal with it - either use a wait loop as I did, or set {{successThreshold}} to a number larger than 1 in the deployment configuration.
# Update the deployment configuration:
{code}
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: infinispan-ru-1
  namespace: myproject
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/infinispan-ru-1
  uid: 6def5411-53e2-11e6-97aa-54ee751d46e3
  resourceVersion: '6570'
  generation: 28
  creationTimestamp: '2016-07-27T10:11:05Z'
  labels:
    app: infinispan-ru-1
  annotations:
    openshift.io/deployment.instantiated: 'true'
    openshift.io/generated-by: OpenShiftNewApp
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 0%
      maxSurge: 25%
    resources:
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - infinispan-ru-1
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'infinispan-ru-1:latest'
        lastTriggeredImage: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
  replicas: 5
  test: false
  selector:
    app: infinispan-ru-1
    deploymentconfig: infinispan-ru-1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infinispan-ru-1
        deploymentconfig: infinispan-ru-1
      annotations:
        openshift.io/container.infinispan-ru-1.image.entrypoint: '["/bin/sh","-c","/opt/jboss/infinispan-server/bin/standalone.sh -c cloud.xml -Djboss.default.jgroups.stack=kubernetes \t-b `hostname -i` \t-bmanagement `hostname -i` --debug"]'
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
        - name: infinispan-ru-1
          image: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8888
              protocol: TCP
            - containerPort: 8181
              protocol: TCP
            - containerPort: 9990
              protocol: TCP
            - containerPort: 11211
              protocol: TCP
            - containerPort: 11222
              protocol: TCP
          env:
            - name: OPENSHIFT_KUBE_PING_NAMESPACE
              value: myproject
          resources:
          livenessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_ready.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_healthy.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext:
status:
  latestVersion: 18
  observedGeneration: 28
  replicas: 5
  updatedReplicas: 5
  availableReplicas: 5
  details:
    causes:
      - type: ConfigChange
{code}
Key features:
** Use a proper configuration for the liveness and readiness probes
** Use {{maxUnavailable: 0%}} and {{maxSurge: 25%}}, which means that OpenShift will first create some new nodes, wait for the rebalance, and only then start destroying the existing ones
# Redeploy the application (update the config, the image, whatever):
{code}
oc deploy infinispan-ru-1 --latest -n myproject
{code}
# Check if the number of entries is the same at the end of the procedure (see the sketch after this list).
# Observations:
** It takes some time for a node to properly join the cluster. The readiness probe should probably be required to pass more than once in a production configuration
** Even though the readiness probe passes, it doesn't necessarily mean that the node joined the cluster. During testing I once had a split brain (4 nodes vs 1 node). This is a very dangerous situation. A readiness and health check should always validate that the number of nodes in the cluster is correct
** The nodes are currently not killed properly (they should always perform a graceful shutdown)
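For the entry-count check above, a minimal sketch over the management CLI, in the same style as the probe scripts. Illustration only: it assumes the cache exposes a {{number-of-entries}} attribute at the address below (attribute names may differ between server versions) and {{default}} stands in for the real cache name:
{code}
#!/bin/bash
# Hedged sketch: read the entry count of a single cache via ispn-cli.sh.
# Run it before and after the rolling update and compare the two numbers.
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 \
'/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=default:read-attribute(name=number-of-entries)' \
| awk '/result/{gsub("\"", "", $3); print $3}'
{code}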
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are two mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/27/16 8:54 AM:
--------------------------------------------------------------------
The rolling update for Kubernetes and OpenShift looks as follows:
# Create a new app for Infinispan (I'm using my own image with additional health and readiness checks) - {{slaskawi/infinispan-ru-1}}:
{code}
/opt/jboss/infinispan-server/bin/is_ready.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=*:read-attribute(name=cache-rebalancing-status)' | awk '/result/{gsub("\"", "", $3); print $3}' | awk '{if(NR>1)print}' | grep -v 'PENDING\|IN_PROGRESS\|SUSPENDED'
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
{code}
/opt/jboss/infinispan-server/bin/is_healthy.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/:read-attribute(name=server-state)' | awk '/result/{gsub("\"", "", $3); print $3}' | grep running
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
Since the rebalance status might vary from run to run (imagine a node joining the cluster), there are two ways to deal with it - either use a wait loop as I did, or set {{successThreshold}} to a number larger than 1 in the deployment configuration.
# Update the deployment configuration:
{code}
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: infinispan-ru-1
  namespace: myproject
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/infinispan-ru-1
  uid: 6def5411-53e2-11e6-97aa-54ee751d46e3
  resourceVersion: '6570'
  generation: 28
  creationTimestamp: '2016-07-27T10:11:05Z'
  labels:
    app: infinispan-ru-1
  annotations:
    openshift.io/deployment.instantiated: 'true'
    openshift.io/generated-by: OpenShiftNewApp
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 0%
      maxSurge: 25%
    resources:
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - infinispan-ru-1
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'infinispan-ru-1:latest'
        lastTriggeredImage: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
  replicas: 5
  test: false
  selector:
    app: infinispan-ru-1
    deploymentconfig: infinispan-ru-1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infinispan-ru-1
        deploymentconfig: infinispan-ru-1
      annotations:
        openshift.io/container.infinispan-ru-1.image.entrypoint: '["/bin/sh","-c","/opt/jboss/infinispan-server/bin/standalone.sh -c cloud.xml -Djboss.default.jgroups.stack=kubernetes \t-b `hostname -i` \t-bmanagement `hostname -i` --debug"]'
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
        - name: infinispan-ru-1
          image: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8888
              protocol: TCP
            - containerPort: 8181
              protocol: TCP
            - containerPort: 9990
              protocol: TCP
            - containerPort: 11211
              protocol: TCP
            - containerPort: 11222
              protocol: TCP
          env:
            - name: OPENSHIFT_KUBE_PING_NAMESPACE
              value: myproject
          resources:
          livenessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_ready.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_healthy.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext:
status:
  latestVersion: 18
  observedGeneration: 28
  replicas: 5
  updatedReplicas: 5
  availableReplicas: 5
  details:
    causes:
      - type: ConfigChange
{code}
Key features:
** Use a proper configuration for the liveness and readiness probes
** Use {{maxUnavailable: 0%}} and {{maxSurge: 25%}}, which means that OpenShift will first create some new nodes, wait for the rebalance, and only then start destroying the existing ones
# Redeploy the application (update the config, the image, whatever):
{code}
oc deploy infinispan-ru-1 --latest -n myproject
{code}
# Check if the number of entries is the same at the end of the procedure.
# Observations:
* It takes some time for a node to properly join the cluster. The readiness probe should probably be required to pass more than once in a production configuration
* Even though the readiness probe passes, it doesn't necessarily mean that the node joined the cluster. During testing I once had a split brain (4 nodes vs 1 node). This is a very dangerous situation. A readiness and health check should always validate that the number of nodes in the cluster is correct
* The nodes are currently not killed properly (they should always perform a graceful shutdown)
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are two mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6673 at 7/27/16 8:53 AM:
--------------------------------------------------------------------
The rolling update for Kubernetes and OpenShift looks as follows:
# Create a new app for Infinispan (I'm using my own image with additional health and readiness checks) - {{slaskawi/infinispan-ru-1}}:
{code}
/opt/jboss/infinispan-server/bin/is_ready.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=*:read-attribute(name=cache-rebalancing-status)' | awk '/result/{gsub("\"", "", $3); print $3}' | awk '{if(NR>1)print}' | grep -v 'PENDING\|IN_PROGRESS\|SUSPENDED'
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
{code}
/opt/jboss/infinispan-server/bin/is_healthy.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/:read-attribute(name=server-state)' | awk '/result/{gsub("\"", "", $3); print $3}' | grep running
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
Since the rebalance status might vary from run to run (imagine a node joining the cluster), there are two ways to deal with it - either use a wait loop as I did, or set {{successThreshold}} to a number larger than 1 in the deployment configuration.
# Update the deployment configuration:
{code}
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: infinispan-ru-1
  namespace: myproject
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/infinispan-ru-1
  uid: 6def5411-53e2-11e6-97aa-54ee751d46e3
  resourceVersion: '6570'
  generation: 28
  creationTimestamp: '2016-07-27T10:11:05Z'
  labels:
    app: infinispan-ru-1
  annotations:
    openshift.io/deployment.instantiated: 'true'
    openshift.io/generated-by: OpenShiftNewApp
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 0%
      maxSurge: 25%
    resources:
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - infinispan-ru-1
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'infinispan-ru-1:latest'
        lastTriggeredImage: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
  replicas: 5
  test: false
  selector:
    app: infinispan-ru-1
    deploymentconfig: infinispan-ru-1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infinispan-ru-1
        deploymentconfig: infinispan-ru-1
      annotations:
        openshift.io/container.infinispan-ru-1.image.entrypoint: '["/bin/sh","-c","/opt/jboss/infinispan-server/bin/standalone.sh -c cloud.xml -Djboss.default.jgroups.stack=kubernetes \t-b `hostname -i` \t-bmanagement `hostname -i` --debug"]'
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
        - name: infinispan-ru-1
          image: 'slaskawi/infinispan-ru-1@sha256:6d2de3cad2970fcb1207df2b7f947a74c990f5be2e02bc9aaf9671098547bc82'
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8888
              protocol: TCP
            - containerPort: 8181
              protocol: TCP
            - containerPort: 9990
              protocol: TCP
            - containerPort: 11211
              protocol: TCP
            - containerPort: 11222
              protocol: TCP
          env:
            - name: OPENSHIFT_KUBE_PING_NAMESPACE
              value: myproject
          resources:
          livenessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_ready.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command: [/opt/jboss/infinispan-server/bin/is_healthy.sh]
            initialDelaySeconds: 60
            timeoutSeconds: 180
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext:
status:
  latestVersion: 18
  observedGeneration: 28
  replicas: 5
  updatedReplicas: 5
  availableReplicas: 5
  details:
    causes:
      - type: ConfigChange
{code}
Key features:
* Use a proper configuration for the liveness and readiness probes
* Use {{maxUnavailable: 0%}} and {{maxSurge: 25%}}, which means that OpenShift will first create some new nodes, wait for the rebalance, and only then start destroying the existing ones
# Redeploy the application (update the config, the image, whatever):
{code}
oc deploy infinispan-ru-1 --latest -n myproject
{code}
# Check if the number of entries is the same at the end of the procedure.
# Observations:
* It takes some time for a node to properly join the cluster. The readiness probe should probably be required to pass more than once in a production configuration
* Even though the readiness probe passes, it doesn't necessarily mean that the node joined the cluster. During testing I once had a split brain (4 nodes vs 1 node). This is a very dangerous situation. A readiness and health check should always validate that the number of nodes in the cluster is correct
* The nodes are currently not killed properly (they should always perform a graceful shutdown)
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are two mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6673) Implement Rolling Upgrades with Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6673?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6673:
-------------------------------------------
The rolling update for Kubernetes and OpenShift looks as follows:
# Create a new app for Infinispan (I'm using my own image with additional health and readiness checks) - {{slaskawi/infinispan-ru-1}}:
{code}
/opt/jboss/infinispan-server/bin/is_ready.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=*:read-attribute(name=cache-rebalancing-status)' | awk '/result/{gsub("\"", "", $3); print $3}' | awk '{if(NR>1)print}' | grep -v 'PENDING\|IN_PROGRESS\|SUSPENDED'
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
{code}
/opt/jboss/infinispan-server/bin/is_healthy.sh
#!/bin/bash
for i in `seq 1 10`;
do
sleep 1s
/opt/jboss/infinispan-server/bin/ispn-cli.sh -c --controller=$(hostname -i):9990 '/:read-attribute(name=server-state)' | awk '/result/{gsub("\"", "", $3); print $3}' | grep running
if [ $? -eq 0 ]; then
exit 0
fi
done
exit 1
{code}
Since the rebalance status might vary from run to run (imagine a node joining the cluster), there are two ways to deal with it - either use a wait loop as I did, or set {{successThreshold}} to a number larger than 1 in the deployment configuration.
#
> Implement Rolling Upgrades with Kubernetes
> ------------------------------------------
>
> Key: ISPN-6673
> URL: https://issues.jboss.org/browse/ISPN-6673
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> There are two mechanisms which seem to do the same thing but are totally different:
> * [Kubernetes Rolling Update|http://kubernetes.io/docs/user-guide/rolling-updates/] - replaces Pods in a controllable fashion
> * [Infinispan Rolling Upgrade|http://infinispan.org/docs/stable/user_guide/user_guide.html#_Ro...] - a procedure for upgrading Infinispan or changing the configuration
> Kubernetes Rolling Updates can be used very easily for changing the configuration; however, if the changes are not runtime-compatible, one might lose data. A potential way to avoid this is to use a Cache Store. All other changes must be propagated using the Infinispan Rolling Upgrade procedure.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months
[JBoss JIRA] (ISPN-6893) Remove scala code
by Tristan Tarrant (JIRA)
[ https://issues.jboss.org/browse/ISPN-6893?page=com.atlassian.jira.plugin.... ]
Tristan Tarrant commented on ISPN-6893:
---------------------------------------
[~ijuma] indeed, but we need something that works now. Reasons for dropping Scala (in no particular order of importance / impact):
* no usable build for Java 9 available yet
* getting rid of the horrible hybrid code we have
* more predictable performance of the constructs we use
* lower the barrier to contribution both by internal and external developers
* reduce the distribution size by 7MB
* speed up the build
> Remove scala code
> -----------------
>
> Key: ISPN-6893
> URL: https://issues.jboss.org/browse/ISPN-6893
> Project: Infinispan
> Issue Type: Enhancement
> Reporter: William Burns
> Assignee: William Burns
>
> Scala doesn't support Java 9. We need to remove it to ensure compatibility.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
8 years, 5 months