[JBoss JIRA] (ISPN-6922) Support for loading keystores from classpath in the Hot Rod client
by Adrian Nistor (JIRA)
[ https://issues.jboss.org/browse/ISPN-6922?page=com.atlassian.jira.plugin.... ]
Adrian Nistor resolved ISPN-6922.
---------------------------------
Resolution: Done
Integrated in both master and 8.2.x. Thanks [~gustavonalle]!
> Support for loading keystores from classpath in the Hot Rod client
> ------------------------------------------------------------------
>
> Key: ISPN-6922
> URL: https://issues.jboss.org/browse/ISPN-6922
> Project: Infinispan
> Issue Type: Enhancement
> Affects Versions: 9.0.0.Alpha3, 8.2.3.Final
> Reporter: Gustavo Fernandes
> Assignee: Gustavo Fernandes
> Fix For: 8.2.4.Final, 9.0.0.Alpha4
>
>
> Configuration of truststores and keystores in the Hot Rod client only works with files. In certain situations the application is packaged together with the Hot Rod client in a single jar and thus cannot load the keystores.
> Suggestion of supporting classpath resources:
> {code:title=hotrod-client.properties}
> infinispan.client.hotrod.trust_store_file_name=classpath:/some/loc/truststore.jks
> {code}
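As an illustration of how such a {{classpath:}} prefix could be resolved on the client side, a minimal Java sketch (the helper and its names are hypothetical, not actual Hot Rod client API):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class ClasspathKeystoreSketch {

    // Hypothetical helper: strip the "classpath:" prefix and any leading "/"
    // so the remainder can be looked up as a classpath resource.
    static String toResourceName(String location) {
        String resource = location.substring("classpath:".length());
        return resource.startsWith("/") ? resource.substring(1) : resource;
    }

    // Resolve the configured location either from the classpath or from disk.
    static InputStream open(String location) throws Exception {
        if (location.startsWith("classpath:")) {
            return Thread.currentThread().getContextClassLoader()
                    .getResourceAsStream(toResourceName(location));
        }
        return new FileInputStream(location);
    }

    // Load a truststore/keystore from the resolved stream.
    static KeyStore load(String location, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (InputStream in = open(location)) {
            ks.load(in, password);
        }
        return ks;
    }
}
```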
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)
9 years, 8 months
[JBoss JIRA] (ISPN-5507) Transactions committed immediately before cache stop can block shutdown
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-5507?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-5507:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1315393|https://bugzilla.redhat.com/show_bug.cgi?id=1315393] from ON_QA to VERIFIED
> Transactions committed immediately before cache stop can block shutdown
> -----------------------------------------------------------------------
>
> Key: ISPN-5507
> URL: https://issues.jboss.org/browse/ISPN-5507
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Test Suite - Core
> Affects Versions: 7.2.1.Final, 8.0.0.Alpha1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Blocker
> Labels: testsuite_stability
> Fix For: 8.2.1.Final, 9.0.0.Alpha1, 8.1.4.Final
>
>
> This is causing random failures in {{DistributedEntryRetrieverTxTest.verifyNodeLeavesBeforeGettingData}}.
> The test inserts some values into the cache, starts an iteration, and then kills one of the nodes. In rare instances, the killed cache only receives the {{TxCompletionNotificationCommand}} for one of the writes after it has started shutting down, and ignores it. That leaves the remote tx on-going, and {{TransactionTable.shutDownGracefully()}} blocks for 30 seconds, causing a {{TimeoutException}} elsewhere in the test.
> {noformat}
> 10:52:18,129 TRACE (remote-thread-NodeAM-p12133-t6:) [CommandAwareRpcDispatcher] About to send back response SuccessfulResponse{responseValue=null} for command CommitCommand {gtx=GlobalTransaction:<NodeAL-45757>:22325:remote, cacheName='org.infinispan.iteration.DistributedEntryRetrieverTxTest', topologyId=4}
> 10:52:18,129 TRACE (testng-DistributedEntryRetrieverTxTest:) [JGroupsTransport] dests=[NodeAM-45518, NodeAL-45757], command=TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=4, gtx=GlobalTransaction:<NodeAL-45757>:22325:local, cacheName=org.infinispan.iteration.DistributedEntryRetrieverTxTest} , mode=ASYNCHRONOUS, timeout=15000
> 10:52:18,133 DEBUG (testng-DistributedEntryRetrieverTxTest:) [CacheImpl] Stopping cache org.infinispan.iteration.DistributedEntryRetrieverTxTest on NodeAM-45518
> 10:52:18,133 TRACE (OOB-2,NodeAM-45518:) [GlobalInboundInvocationHandler] Attempting to execute CacheRpcCommand: TxCompletionNotificationCommand{ xid=null, internalId=0, topologyId=4, gtx=GlobalTransaction:<NodeAL-45757>:22325:local, cacheName=org.infinispan.iteration.DistributedEntryRetrieverTxTest} [sender=NodeAL-45757]
> 10:52:18,133 TRACE (OOB-2,NodeAM-45518:) [GlobalInboundInvocationHandler] Silently ignoring that org.infinispan.iteration.DistributedEntryRetrieverTxTest cache is not defined
> 10:52:18,133 DEBUG (testng-DistributedEntryRetrieverTxTest:) [TransactionTable] Wait for on-going transactions to finish for 30 seconds.
> 10:52:48,139 WARN (testng-DistributedEntryRetrieverTxTest:) [TransactionTable] ISPN000100: Stopping, but there are 0 local transactions and 1 remote transactions that did not finish in time.
> 10:52:48,386 ERROR (testng-DistributedEntryRetrieverTxTest:) [UnitTestTestNGListener] Test verifyNodeLeavesBeforeGettingData(org.infinispan.iteration.DistributedEntryRetrieverTxTest) failed.
> java.lang.IllegalStateException: Thread already timed out waiting for event pre_send_response_released
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:131)
> at org.infinispan.test.fwk.CheckPoint.trigger(CheckPoint.java:116)
> at org.infinispan.iteration.DistributedEntryRetrieverTest.verifyNodeLeavesBeforeGettingData(DistributedEntryRetrieverTest.java:105)
> {noformat}
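The blocking behaviour described above can be sketched as a bounded wait (a simplified model, not the actual {{TransactionTable}} implementation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class GracefulShutdownSketch {
    // Remote transactions still in progress, keyed by a transaction id.
    private final ConcurrentHashMap<Long, Object> remoteTxs = new ConcurrentHashMap<>();
    private static final Object PRESENT = new Object();

    void registerRemoteTx(long id) { remoteTxs.put(id, PRESENT); }
    void completeRemoteTx(long id) { remoteTxs.remove(id); }

    // Poll until every remote transaction completes or the timeout elapses.
    // Returns false when the wait timed out, mirroring the 30-second block
    // (and subsequent ISPN000100 warning) seen in the log above.
    boolean shutDownGracefully(long timeoutMillis) throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        while (!remoteTxs.isEmpty() && System.nanoTime() < deadline) {
            Thread.sleep(10);
        }
        return remoteTxs.isEmpty();
    }
}
```

If the completion notification is ignored (as in the failing test), {{completeRemoteTx}} is never called and the wait always runs to the full timeout.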
[JBoss JIRA] (ISPN-6341) StateTransferManager should be the first component to stop
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-6341?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-6341:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1315393|https://bugzilla.redhat.com/show_bug.cgi?id=1315393] from ON_QA to VERIFIED
> StateTransferManager should be the first component to stop
> ----------------------------------------------------------
>
> Key: ISPN-6341
> URL: https://issues.jboss.org/browse/ISPN-6341
> Project: Infinispan
> Issue Type: Bug
> Components: Core
> Affects Versions: 8.2.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Fix For: 8.2.1.Final, 9.0.0.Alpha1, 8.1.4.Final
>
>
> When a cache stops, it first removes the component registry from the {{GlobalComponentsRegistry}}'s {{namedComponents}} map, which means the node (let's call it {{A}}) will reply with a {{CacheNotFoundResponse}} to any remote command.
> Another node {{B}} trying to execute a write/transactional command will receive the {{CacheNotFoundResponse}}, assume that a new cache topology with id {{current topology id + 1}} is coming soon, and wait for that new topology before retrying.
> Normally this is not a problem, because {{StateTransferManagerImpl.stop()}} sends a {{CacheTopologyControlCommand(LEAVE)}} to the coordinator quickly enough, then {{B}} receives the {{current topology id + 1}} topology and retries the command.
> But in some cases, the cache components that stop before {{StateTransferManagerImpl}} can take a long time to do so. In particular, because of {{ISPN-5507}}, {{TransactionTable}} can block for {{cacheStopTimeout}} if there are remote transactions in progress, even though the cache can no longer process remote commands.
> We should give {{StateTransferManagerImpl.stop()}} a priority of {{0}}, so that the {{CacheTopologyControlCommand(LEAVE)}} command is sent as soon as possible.
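The proposed fix amounts to ordering component stop by priority; a minimal sketch (class and field names are illustrative, not the actual Infinispan component registry):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class StopPrioritySketch {
    static final class Component {
        final String name;
        final int stopPriority; // lower values stop first
        Component(String name, int stopPriority) {
            this.name = name;
            this.stopPriority = stopPriority;
        }
    }

    // Sort components by stop priority so that a component with priority 0
    // (here, the state transfer manager sending LEAVE) stops before slower
    // components such as the transaction table.
    static List<String> stopOrder(List<Component> components) {
        List<Component> sorted = new ArrayList<>(components);
        sorted.sort(Comparator.comparingInt((Component c) -> c.stopPriority));
        List<String> names = new ArrayList<>();
        for (Component c : sorted) names.add(c.name);
        return names;
    }
}
```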
[JBoss JIRA] (ISPN-3938) AdvancedAsyncCacheLoader.process() concurrency issues
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-3938?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-3938:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1312186|https://bugzilla.redhat.com/show_bug.cgi?id=1312186] from ON_QA to VERIFIED
> AdvancedAsyncCacheLoader.process() concurrency issues
> -----------------------------------------------------
>
> Key: ISPN-3938
> URL: https://issues.jboss.org/browse/ISPN-3938
> Project: Infinispan
> Issue Type: Bug
> Components: Core, Loaders and Stores
> Affects Versions: 6.0.0.Final
> Reporter: Dan Berindei
> Assignee: Sebastian Łaskawiec
> Fix For: 8.2.0.CR1
>
>
> {{AdvancedAsyncCacheLoader.process()}} calls {{advancedLoader().process()}} to collect all the keys in the store, but the HashSet used to collect the keys is not thread-safe. This can cause problems, e.g. during state transfer:
> {noformat}
> WARN cheTopologyControlCommand | ISPN000071: Caught exception when handling command CacheTopologyControlCommand{cache=sessions, type=CH_UPDATE, sender=alfie-lt-46127, joinInfo=null, topologyId=3, currentCH=DefaultConsistentHash{numSegments=60, numOwners=1, members=[alfie-lt-46127]}, pendingCH=null, throwable=null, viewId=1}java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
> at java.util.HashMap$KeyIterator.next(HashMap.java:960)
> at org.infinispan.persistence.async.AdvancedAsyncCacheLoader.process(AdvancedAsyncCacheLoader.java:80)
> at org.infinispan.persistence.manager.PersistenceManagerImpl.processOnAllStores(PersistenceManagerImpl.java:414)
> at org.infinispan.statetransfer.StateConsumerImpl.invalidateSegments(StateConsumerImpl.java:910)
> at org.infinispan.statetransfer.StateConsumerImpl.onTopologyUpdate(StateConsumerImpl.java:393)
> at org.infinispan.statetransfer.StateTransferManagerImpl.doTopologyUpdate(StateTransferManagerImpl.java:178)
> at org.infinispan.statetransfer.StateTransferManagerImpl.access$000(StateTransferManagerImpl.java:38)
> at org.infinispan.statetransfer.StateTransferManagerImpl$1.updateConsistentHash(StateTransferManagerImpl.java:100)
> at org.infinispan.topology.LocalTopologyManagerImpl.handleConsistentHashUpdate(LocalTopologyManagerImpl.java:191)
> at org.infinispan.topology.CacheTopologyControlCommand.doPerform(CacheTopologyControlCommand.java:152)
> at org.infinispan.topology.CacheTopologyControlCommand.perform(CacheTopologyControlCommand.java:124)
> at org.infinispan.topology.ClusterTopologyManagerImpl$3.run(ClusterTopologyManagerImpl.java:606)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
[JBoss JIRA] (ISPN-6276) Non-threadsafe use of HashSet in AdvancedAsyncCacheLoader
by RH Bugzilla Integration (JIRA)
[ https://issues.jboss.org/browse/ISPN-6276?page=com.atlassian.jira.plugin.... ]
RH Bugzilla Integration commented on ISPN-6276:
-----------------------------------------------
Vojtech Juranek <vjuranek(a)redhat.com> changed the Status of [bug 1312186|https://bugzilla.redhat.com/show_bug.cgi?id=1312186] from ON_QA to VERIFIED
> Non-threadsafe use of HashSet in AdvancedAsyncCacheLoader
> ----------------------------------------------------------
>
> Key: ISPN-6276
> URL: https://issues.jboss.org/browse/ISPN-6276
> Project: Infinispan
> Issue Type: Bug
> Components: Loaders and Stores
> Affects Versions: 6.0.2.Final
> Reporter: Dennis Reed
> Assignee: Sebastian Łaskawiec
>
> org.infinispan.persistence.async.AdvancedAsyncCacheLoader$process creates a HashSet, and passes it to loadAllKeys().
> loadAllKeys() creates a task to get each key and add it to the HashSet.
> This task is run by org.infinispan.persistence.file.SingleFileStore#process, which runs it in multiple threads at once (one thread per key).
> There is no synchronization on that HashSet that is shared by the multiple threads.
> HashSet is not thread-safe. One known side effect of unsynchronized access by multiple threads is an infinite loop, which has been witnessed here.
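A minimal sketch of the fix direction: collect keys into a thread-safe set so that concurrent per-key tasks cannot corrupt it (the names are illustrative, not the actual store API):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentKeyCollection {
    // Collect keys into a concurrent set instead of a plain HashSet, so that
    // per-key tasks running on multiple threads can add entries concurrently
    // without ConcurrentModificationException, corruption, or infinite loops.
    static Set<Integer> collectKeys(int keyCount) throws InterruptedException {
        Set<Integer> keys = ConcurrentHashMap.newKeySet(); // thread-safe
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < keyCount; i++) {
            final int key = i;
            pool.execute(() -> keys.add(key)); // one task per key
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return keys;
    }
}
```

Wrapping the set with {{Collections.synchronizedSet()}} would also work; the point is that the shared collection must be safe for concurrent mutation.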
[JBoss JIRA] (ISPN-6675) Better configuration management for Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6675?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec commented on ISPN-6675:
-------------------------------------------
Other options (less promising):
* [S2I|https://github.com/openshift/source-to-image]
** Takes a Docker image and a GitHub repository and produces a final image.
** We could merge the {{jboss/infinispan-server}} image together with a {{cloud.xml}} file from a GitHub repository and produce the final image in OpenShift.
** This also requires performing a Rolling Update (replacing all the Pods) to propagate the changes.
* Extend the Docker image and use a GitHub repository
** All you need to do is create a Dockerfile that extends {{jboss/infinispan-server}} and copies your custom {{cloud.xml}} file to {{/opt/jboss/infinispan-server/standalone/configuration/cloud.xml}}, then push it to a GitHub repository.
** Next, create a new application on OpenShift using this repository ({{oc new-app https://github.com/foo/bar}}). OpenShift will detect the build strategy automatically (it will find the Dockerfile inside the repository).
* Extend the Docker image and use Docker Hub
** This is actually the easiest way.
** Build everything you need and push it to Docker Hub.
** Use {{oc new-app <your_image>}}
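For the image-extension options above, the Dockerfile could look like this (a sketch; the base image name and target path are taken from the comment, and the {{cloud.xml}} is assumed to live in the repository root):

```dockerfile
FROM jboss/infinispan-server
# Replace the default cloud configuration with a custom one kept in this repository
COPY cloud.xml /opt/jboss/infinispan-server/standalone/configuration/cloud.xml
```

Build and push the result to Docker Hub, then run {{oc new-app <your_image>}} as above.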
> Better configuration management for Kubernetes
> ----------------------------------------------
>
> Key: ISPN-6675
> URL: https://issues.jboss.org/browse/ISPN-6675
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> Currently we either store the configuration in the Docker image or use system properties to turn things on/off. Neither approach scales.
> There are some other alternatives, like [Kubernetes ConfigMap|http://kubernetes.io/docs/user-guide/configmap], which are worth exploring.
[JBoss JIRA] (ISPN-6675) Better configuration management for Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6675?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec updated ISPN-6675:
--------------------------------------
Status: Open (was: New)
> Better configuration management for Kubernetes
> ----------------------------------------------
>
> Key: ISPN-6675
> URL: https://issues.jboss.org/browse/ISPN-6675
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> Currently we either store the configuration in the Docker image or use system properties to turn things on/off. Neither approach scales.
> There are some other alternatives, like [Kubernetes ConfigMap|http://kubernetes.io/docs/user-guide/configmap], which are worth exploring.
[JBoss JIRA] (ISPN-6675) Better configuration management for Kubernetes
by Sebastian Łaskawiec (JIRA)
[ https://issues.jboss.org/browse/ISPN-6675?page=com.atlassian.jira.plugin.... ]
Sebastian Łaskawiec edited comment on ISPN-6675 at 8/11/16 4:29 AM:
--------------------------------------------------------------------
POC with ConfigMaps:
# Spin up a new OpenShift cluster with the Infinispan app. Use [this blog post|http://blog.infinispan.org/2016/08/running-infinispan-cluster-on-ope...] for reference.
# Download the latest Infinispan server and store it somewhere on disk.
# Modify {{$ISPN_HOME/standalone/configuration/cloud.xml}}.
# Create a ConfigMap based on the {{$ISPN_HOME/standalone/configuration}} directory (unfortunately you can't mount a ConfigMap as a single file; you need to mount it as a volume, i.e. a directory).
{code}
$ oc create configmap cloud-xml-3 --from-file=infinispan-server-9.0.0-SNAPSHOT/standalone/configuration
configmap "cloud-xml-3" created
{code}
# Mount the directory in the pods
{code}
$ oc edit dc/infinispan-server
.. Modify parts shown below ..
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: infinispan-server
  namespace: myproject
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/infinispan-server
  uid: 09353e5f-5f98-11e6-b7f5-54ee751d46e3
  resourceVersion: '10864'
  generation: 6
  creationTimestamp: '2016-08-11T07:48:47Z'
  labels:
    app: infinispan-server
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources:
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - infinispan-server
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'infinispan-server:latest'
        lastTriggeredImage: 'jboss/infinispan-server@sha256:52b4fcb1530159176ceb81ea8d9638fa69b8403c8ca5ac8aea1cdbcb645beb9a'
  replicas: 1
  test: false
  selector:
    app: infinispan-server
    deploymentconfig: infinispan-server
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: infinispan-server
        deploymentconfig: infinispan-server
      annotations:
        openshift.io/container.infinispan-server.image.entrypoint: '["docker-entrypoint.sh"]'
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      volumes:          # (1)
        - name: config-volume
          configMap:
            name: cloud-xml-3
      containers:
        - name: infinispan-server
          image: 'jboss/infinispan-server@sha256:52b4fcb1530159176ceb81ea8d9638fa69b8403c8ca5ac8aea1cdbcb645beb9a'
          ports:
            - containerPort: 8888
              protocol: TCP
            - containerPort: 9990
              protocol: TCP
            - containerPort: 11211
              protocol: TCP
            - containerPort: 11222
              protocol: TCP
            - containerPort: 57600
              protocol: TCP
            - containerPort: 7600
              protocol: TCP
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8181
              protocol: TCP
          resources:
          volumeMounts:     # (2)
            - name: config-volume
              mountPath: /opt/jboss/infinispan-server/standalone/configuration
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext:
status:
  latestVersion: 5
  observedGeneration: 6
  replicas: 1
  updatedReplicas: 1
  unavailableReplicas: 1
  details:
    message: 'caused by a config change'
    causes:
      - type: ConfigChange

(1) - creates a volume based on a ConfigMap
(2) - mounts it in {{$ISPN_HOME/standalone/configuration}}
{code}
# Trigger the rolling upgrade (WARNING: this heavily relies on Kubernetes Rolling Updates, so all health/readiness probes need to be in place, otherwise data loss might occur)
Comments:
* This works almost exactly the same in OpenShift and Kubernetes
* Proper readiness and health check configuration is a must
* Requires restarting all pods to refresh the configuration
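One way to trigger that restart from the CLI (assuming the DeploymentConfig name used in the POC; the rollout must be watched until all pods come back up):

```shell
$ oc rollout latest dc/infinispan-server
$ oc rollout status dc/infinispan-server
```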
> Better configuration management for Kubernetes
> ----------------------------------------------
>
> Key: ISPN-6675
> URL: https://issues.jboss.org/browse/ISPN-6675
> Project: Infinispan
> Issue Type: Feature Request
> Components: Cloud Integrations
> Reporter: Sebastian Łaskawiec
> Assignee: Sebastian Łaskawiec
>
> Currently we either store the configuration in the Docker image or use system properties to turn things on/off. Neither approach scales.
> There are some other alternatives, like [Kubernetes ConfigMap|http://kubernetes.io/docs/user-guide/configmap], which are worth exploring.