[JBoss JIRA] (ISPN-10176) ClusterCacheStatsImpl not picking up NonTxInvalidationInterceptor for non-tx invalidation cache
by Koen Serneels (Jira)
[ https://issues.jboss.org/browse/ISPN-10176?page=com.atlassian.jira.plugin... ]
Koen Serneels updated ISPN-10176:
---------------------------------
Description:
We are using a clustered non-tx <invalidation-cache> for which we enable JMX stats. The invalidation counter reported by ClusterCacheStatsImpl always remains zero.
Further investigation showed that a NonTxInvalidationInterceptor is created for the cache, but it is not picked up by ClusterCacheStatsImpl, which tries to retrieve the interceptor like this:
{code:java}
//invalidations
InvalidationInterceptor invalidationInterceptor = getFirstInterceptorWhichExtends(remoteCache,
      InvalidationInterceptor.class);
if (invalidationInterceptor != null) {
   map.put(INVALIDATIONS, invalidationInterceptor.getInvalidations());
} else {
   map.put(INVALIDATIONS, 0);
}
{code}
But NonTxInvalidationInterceptor does not extend InvalidationInterceptor. Furthermore, the javadoc states:
{code:java}
/**
* This interceptor should completely replace default InvalidationInterceptor.
...
..
.
{code}
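A minimal sketch of one possible adjustment in ClusterCacheStatsImpl, assuming NonTxInvalidationInterceptor exposes the same getInvalidations() counter (a sketch only, not the actual patch):
{code:java}
// Sketch: fall back to NonTxInvalidationInterceptor when no InvalidationInterceptor
// is present in the interceptor chain (e.g. for a non-tx invalidation cache).
long invalidations = 0;
InvalidationInterceptor invalidationInterceptor =
      getFirstInterceptorWhichExtends(remoteCache, InvalidationInterceptor.class);
if (invalidationInterceptor != null) {
   invalidations = invalidationInterceptor.getInvalidations();
} else {
   NonTxInvalidationInterceptor nonTxInvalidationInterceptor =
         getFirstInterceptorWhichExtends(remoteCache, NonTxInvalidationInterceptor.class);
   if (nonTxInvalidationInterceptor != null) {
      invalidations = nonTxInvalidationInterceptor.getInvalidations();
   }
}
map.put(INVALIDATIONS, invalidations);
{code}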
was:
We are using a clustered non-tx <invalidation-cache> for which we enable JMX stats. The invalidation counter reported by ClusterCacheStatsImpl always remains zero.
Further investigation showed that a NonTxInvalidationInterceptor is created for the cache, but it is not picked up by ClusterCacheStatsImpl, which tries to retrieve the interceptor like this:
{code:java}
//invalidations
InvalidationInterceptor invalidationInterceptor = getFirstInterceptorWhichExtends(remoteCache,
      InvalidationInterceptor.class);
if (invalidationInterceptor != null) {
   map.put(INVALIDATIONS, invalidationInterceptor.getInvalidations());
} else {
   map.put(INVALIDATIONS, 0);
}
{code}
But NonTxInvalidationInterceptor does not extend InvalidationInterceptor. Furthermore, the javadoc states:
{code:java}
/**
* This interceptor should completely replace default InvalidationInterceptor.
* We need to send custom invalidation commands with transaction identifier (as the invalidation)
* since we have to do a two-phase invalidation (releasing the locks as JTA synchronization),
* although the cache itself is non-transactional.
*
{code}
> ClusterCacheStatsImpl not picking up NonTxInvalidationInterceptor for non-tx invalidation cache
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-10176
> URL: https://issues.jboss.org/browse/ISPN-10176
> Project: Infinispan
> Issue Type: Bug
> Affects Versions: 9.4.13.Final
> Reporter: Koen Serneels
> Priority: Major
>
> We are using a clustered non-tx <invalidation-cache> for which we enable JMX stats. The invalidation counter reported by ClusterCacheStatsImpl always remains zero.
> Further investigation showed that a NonTxInvalidationInterceptor is created for the cache, but it is not picked up by ClusterCacheStatsImpl, which tries to retrieve the interceptor like this:
> {code:java}
> //invalidations
> InvalidationInterceptor invalidationInterceptor = getFirstInterceptorWhichExtends(remoteCache,
>       InvalidationInterceptor.class);
> if (invalidationInterceptor != null) {
>    map.put(INVALIDATIONS, invalidationInterceptor.getInvalidations());
> } else {
>    map.put(INVALIDATIONS, 0);
> }
> {code}
> But NonTxInvalidationInterceptor does not extend InvalidationInterceptor. Furthermore, the javadoc states:
> {code:java}
> /**
> * This interceptor should completely replace default InvalidationInterceptor.
> ...
> ..
> .
> {code}
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10175) Long test killer doesn't kill the JVM on Windows
by Dan Berindei (Jira)
Dan Berindei created ISPN-10175:
-----------------------------------
Summary: Long test killer doesn't kill the JVM on Windows
Key: ISPN-10175
URL: https://issues.jboss.org/browse/ISPN-10175
Project: Infinispan
Issue Type: Bug
Components: Test Suite - Core
Affects Versions: 9.4.13.Final, 10.0.0.Beta3
Reporter: Dan Berindei
Neither {{jstack}} nor {{taskkill}} work:
{noformat}
[ERROR] Test org.infinispan.client.hotrod.impl.iteration.ProtobufRemoteIteratorIndexingTest.testFilteredIterationWithQuery has been running for more than 300 seconds. Interrupting the test thread and dumping threads of the test suite process and its children.
Cannot find jstack in T:\opt\windows\x86_64\openjdk-1.8.0\jre, programmatically dumping thread stacks of testsuite process to C:\home\jenkins\workspace\jdg-7.3.x-jdg-func-ispn-testsuite-win-openjdk\b521375b\infinispan\client\hotrod-client\threaddump-org_infinispan_client_hotrod_impl_iteration_ProtobufRemoteIteratorIndexingTest_testFilteredIterationWithQuery-2019-05-10.log
Interrupted thread testng-ProtobufRemoteIteratorIndexingTest (57).
Killed processes 9904
[ERROR] Test org.infinispan.client.hotrod.impl.iteration.ReplFailOverRemoteIteratorTest.testFailOver has been running for more than 300 seconds. Interrupting the test thread and dumping threads of the test suite process and its children.
Cannot find jstack in T:\opt\windows\x86_64\openjdk-1.8.0\jre, programmatically dumping thread stacks of testsuite process to C:\home\jenkins\workspace\jdg-7.3.x-jdg-func-ispn-testsuite-win-openjdk\b521375b\infinispan\client\hotrod-client\threaddump-org_infinispan_client_hotrod_impl_iteration_ReplFailOverRemoteIteratorTest_testFailOver-2019-05-10.log
Interrupted thread testng-ReplFailOverRemoteIteratorTest (62).
Killed processes 9904
{noformat}
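For reference, a minimal sketch of force-killing a process tree on Windows from Java, assuming the child PID is known (e.g. 9904 above); this is only the standard {{taskkill /F /T}} invocation, not the test killer's actual implementation:
{code:java}
// Sketch: forcibly terminate a Windows process and all of its children.
// /F forces termination, /T includes the whole process tree.
public class WindowsProcessKiller {
   public static void killTree(long pid) throws Exception {
      Process taskkill = new ProcessBuilder("taskkill", "/F", "/T", "/PID", Long.toString(pid))
            .inheritIO()
            .start();
      int exitCode = taskkill.waitFor();
      if (exitCode != 0) {
         System.err.println("taskkill exited with " + exitCode + " for pid " + pid);
      }
   }
}
{code}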
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10174) Adjusting thread pools for container environments
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-10174?page=com.atlassian.jira.plugin... ]
Galder Zamarreño commented on ISPN-10174:
-----------------------------------------
Even after reducing the thread pools and removing the off-heap caches, I tried running the pod with 256mb of container memory, but that didn't work: pods were OOMKilled.
> Adjusting thread pools for container environments
> -------------------------------------------------
>
> Key: ISPN-10174
> URL: https://issues.jboss.org/browse/ISPN-10174
> Project: Infinispan
> Issue Type: Enhancement
> Components: Cloud
> Affects Versions: 10.0.0.Beta3, 9.4.13.Final
> Reporter: Galder Zamarreño
> Priority: Major
> Labels: rhdemo-2019
>
> Default thread pool values in Infinispan Server cloud.xml can make containers be killed if all of them are in use. The main defaults are:
> * HotRod-hotrod-internal-ServerHandler: core=160 max=160
> * remote-thread: core=25 max=25
> * transport-thread: core=25 max=25
> * async-thread: core=25 max=25
> * jgroups: core=0 max=200
> * jgroups-int: core=0 max=16
> * stateTransferExecutor-thread: core=1 max=60
> * add-listener-thread: core=0 max=10
> * REST-rest-ServerHandler: core=1 max=1
> * DefaultExecutorService: core=1 max=1
> * notification-thread: core=1 max=1
> The total number of core threads is 239, and if the system is under load, the threads alone can take ~239mb of native memory. That's before the heap and other parameters are counted. Our defaults are 0.5 CPU and 512mb.
> These thread pools should be trimmed, since if used at full capacity, 0.5 CPU won't be able to do much with 200+ threads.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10174) Adjusting thread pools for container environments
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-10174?page=com.atlassian.jira.plugin... ]
Galder Zamarreño commented on ISPN-10174:
-----------------------------------------
I've done some testing but the results are not very conclusive:
In a server with 6 off-heap caches and default thread pool sizes, a container uses 332-340mb out of the box. After 10-20 mins this goes up to 410-420mb and stays stable. That's without any data or incoming requests, and with a limit of 512mb. Native memory tracking shows:
{code}
Native Memory Tracking:
Total: reserved=1953934KB -630KB, committed=493618KB +678KB
- Java Heap (reserved=262144KB, committed=68096KB -1024KB)
(mmap: reserved=262144KB, committed=68096KB -1024KB)
- Class (reserved=1131296KB +18KB, committed=93344KB +530KB)
(classes #14440 +3)
(malloc=4896KB +18KB #23987 +364)
(mmap: reserved=1126400KB, committed=88448KB +512KB)
- Thread (reserved=116682KB -1033KB, committed=116682KB -1033KB)
(thread #113 -1)
(stack: reserved=116136KB -1028KB, committed=116136KB -1028KB)
(malloc=381KB -3KB #563 -5)
(arena=164KB -1 #223 -2)
- Code (reserved=254153KB +374KB, committed=26461KB +2194KB)
(malloc=4553KB +374KB #7814 +517)
(mmap: reserved=249600KB, committed=21908KB +1820KB)
- GC (reserved=11909KB, committed=11285KB)
(malloc=2325KB #525 +1)
(mmap: reserved=9584KB, committed=8960KB)
- Compiler (reserved=257KB -4KB, committed=257KB -4KB)
(malloc=126KB -4KB #675 +6)
(arena=131KB #5)
- Internal (reserved=154058KB, committed=154058KB)
(malloc=154026KB #27664 -2)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=19609KB, committed=19609KB)
(malloc=16788KB #170432)
(arena=2821KB #1)
- Native Memory Tracking (reserved=3646KB +14KB, committed=3646KB +14KB)
(malloc=18KB #205 -1)
(tracking overhead=3628KB +14KB)
- Arena Chunk (reserved=180KB +1KB, committed=180KB +1KB)
(malloc=180KB +1KB)
{code}
The interesting thing here is that the number of threads for which memory is allocated in an unused server is 113. So not all configured threads are taking space, and, as Dan suggested in a separate chat, only a small fraction of that memory might actually be in use, but it's committed anyway.
Then I tried to reduce the number of threads with these changes:
* HotRod-hotrod-internal-ServerHandler: core=16 max=16
* remote-thread: core=4 max=4
* transport-thread: core=4 max=4
* async-thread: core=4 max=4
Still with the same off-heap caches, the container started at 320-330mb and then grew to 330-410mb. Looking at the native memory tracking:
{code}
Total: reserved=1930181KB +514KB, committed=468493KB +3894KB
- Java Heap (reserved=262144KB, committed=67072KB -512KB)
(mmap: reserved=262144KB, committed=67072KB -512KB)
- Class (reserved=1131354KB +69KB, committed=94170KB +581KB)
(classes #14684 +8)
(malloc=4954KB +69KB #24153 +866)
(mmap: reserved=1126400KB, committed=89216KB +512KB)
- Thread (reserved=76411KB, committed=76411KB)
(thread #74)
(stack: reserved=76044KB, committed=76044KB)
(malloc=248KB #368)
(arena=119KB #145)
- Code (reserved=253888KB +727KB, committed=25088KB +4107KB)
(malloc=4288KB +727KB #7277 +986)
(mmap: reserved=249600KB, committed=20800KB +3380KB)
- GC (reserved=11909KB, committed=11277KB)
(malloc=2325KB #525 +6)
(mmap: reserved=9584KB, committed=8952KB)
- Compiler (reserved=233KB -9KB, committed=233KB -9KB)
(malloc=102KB -9KB #630 -11)
(arena=131KB #5)
- Internal (reserved=170643KB +1KB, committed=170643KB +1KB)
(malloc=170611KB +1KB #28177 +8)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=19750KB +16KB, committed=19750KB +16KB)
(malloc=16929KB +16KB #172444 +9)
(arena=2821KB #1)
- Native Memory Tracking (reserved=3670KB +29KB, committed=3670KB +29KB)
(malloc=14KB #166)
(tracking overhead=3656KB +29KB)
- Arena Chunk (reserved=179KB -319KB, committed=179KB -319KB)
(malloc=179KB -319KB)
{code}
We see the number of threads has gone down to 74, but that didn't make much difference overall.
Next I removed the 6 off-heap caches, which made the containers start at 260-280mb and settle at 350-360mb after 20 minutes. It would seem that each off-heap cache takes ~10mb of memory when empty. There is some up-front allocation for buckets there, so that's expected AFAIK.
The full effect of reducing the thread pools can't be seen like this. A more realistic scenario would be a soak test where we define the number of concurrent requests a server should be able to handle and check whether the server can sustain that under constant load. For example, say that with a 3-node cluster of 160 Hot Rod worker threads each, you can (hypothetically) handle 3*160 client threads, each doing a put/get on a given key. Assuming that the data stored does not increase over time (each thread reads/writes the same key), the container should be able to handle the load without being killed by Kubernetes.
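A rough sketch of that kind of soak test with the Hot Rod Java client; the server address, cache name, thread count and duration below are placeholders, not values from this issue:
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

// Sketch: N client threads, each repeatedly writing and reading its own fixed key,
// so the data set stays constant while the server handles a steady load.
public class HotRodSoakTest {
   public static void main(String[] args) throws Exception {
      int clientThreads = 3 * 160;                         // e.g. 3 nodes x 160 worker threads
      long durationMillis = TimeUnit.MINUTES.toMillis(30); // arbitrary soak duration

      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("infinispan-server").port(11222); // placeholder endpoint
      try (RemoteCacheManager rcm = new RemoteCacheManager(builder.build())) {
         RemoteCache<String, String> cache = rcm.getCache("soak"); // placeholder cache name
         ExecutorService pool = Executors.newFixedThreadPool(clientThreads);
         long deadline = System.currentTimeMillis() + durationMillis;
         for (int i = 0; i < clientThreads; i++) {
            String key = "key-" + i;
            pool.submit(() -> {
               while (System.currentTimeMillis() < deadline) {
                  cache.put(key, "value");
                  cache.get(key);
               }
            });
         }
         pool.shutdown();
         pool.awaitTermination(durationMillis + 60_000, TimeUnit.MILLISECONDS);
      }
   }
}
{code}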
> Adjusting thread pools for container environments
> -------------------------------------------------
>
> Key: ISPN-10174
> URL: https://issues.jboss.org/browse/ISPN-10174
> Project: Infinispan
> Issue Type: Enhancement
> Components: Cloud
> Affects Versions: 10.0.0.Beta3, 9.4.13.Final
> Reporter: Galder Zamarreño
> Priority: Major
> Labels: rhdemo-2019
>
> Default thread pool values in Infinispan Server cloud.xml can make containers be killed if all of them are in use. The main defaults are:
> * HotRod-hotrod-internal-ServerHandler: core=160 max=160
> * remote-thread: core=25 max=25
> * transport-thread: core=25 max=25
> * async-thread: core=25 max=25
> * jgroups: core=0 max=200
> * jgroups-int: core=0 max=16
> * stateTransferExecutor-thread: core=1 max=60
> * add-listener-thread: core=0 max=10
> * REST-rest-ServerHandler: core=1 max=1
> * DefaultExecutorService: core=1 max=1
> * notification-thread: core=1 max=1
> The total number of core threads is 239, and if the system is under load, the threads alone can take ~239mb of native memory. That's before the heap and other parameters are counted. Our defaults are 0.5 CPU and 512mb.
> These thread pools should be trimmed, since if used at full capacity, 0.5 CPU won't be able to do much with 200+ threads.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10174) Adjusting thread pools for container environments
by Galder Zamarreño (Jira)
Galder Zamarreño created ISPN-10174:
---------------------------------------
Summary: Adjusting thread pools for container environments
Key: ISPN-10174
URL: https://issues.jboss.org/browse/ISPN-10174
Project: Infinispan
Issue Type: Enhancement
Components: Cloud
Affects Versions: 9.4.13.Final, 10.0.0.Beta3
Reporter: Galder Zamarreño
Default thread pool values in Infinispan Server cloud.xml can make containers be killed if all of them are in use. The main defaults are:
* HotRod-hotrod-internal-ServerHandler: core=160 max=160
* remote-thread: core=25 max=25
* transport-thread: core=25 max=25
* async-thread: core=25 max=25
* jgroups: core=0 max=200
* jgroups-int: core=0 max=16
* stateTransferExecutor-thread: core=1 max=60
* add-listener-thread: core=0 max=10
* REST-rest-ServerHandler: core=1 max=1
* DefaultExecutorService: core=1 max=1
* notification-thread: core=1 max=1
The total number of core threads is 239, and if the system is under load, the threads alone can take ~239mb of native memory. That's before the heap and other parameters are counted. Our defaults are 0.5 CPU and 512mb.
These thread pools should be trimmed, since if used at full capacity, 0.5 CPU won't be able to do much with 200+ threads.
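For reference, the arithmetic behind that estimate: 160 + 25 + 25 + 25 + 1 + 1 + 1 + 1 = 239 core threads across the pools above (jgroups, jgroups-int and add-listener contribute 0 core threads), and at the 64-bit JVM's default stack size of roughly 1mb per thread that comes to ~239mb of native memory before the heap is counted.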
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10174) Adjusting thread pools for container environments
by Galder Zamarreño (Jira)
[ https://issues.jboss.org/browse/ISPN-10174?page=com.atlassian.jira.plugin... ]
Galder Zamarreño updated ISPN-10174:
------------------------------------
Labels: rhdemo-2019 (was: )
> Adjusting thread pools for container environments
> -------------------------------------------------
>
> Key: ISPN-10174
> URL: https://issues.jboss.org/browse/ISPN-10174
> Project: Infinispan
> Issue Type: Enhancement
> Components: Cloud
> Affects Versions: 10.0.0.Beta3, 9.4.13.Final
> Reporter: Galder Zamarreño
> Priority: Major
> Labels: rhdemo-2019
>
> Default thread pool values in Infinispan Server cloud.xml can make containers be killed if all of them are in use. The main defaults are:
> * HotRod-hotrod-internal-ServerHandler: core=160 max=160
> * remote-thread: core=25 max=25
> * transport-thread: core=25 max=25
> * async-thread: core=25 max=25
> * jgroups: core=0 max=200
> * jgroups-int: core=0 max=16
> * stateTransferExecutor-thread: core=1 max=60
> * add-listener-thread: core=0 max=10
> * REST-rest-ServerHandler: core=1 max=1
> * DefaultExecutorService: core=1 max=1
> * notification-thread: core=1 max=1
> The total number of core threads is 239, and if the system is under load, the threads alone can take ~239mb of native memory. That's before the heap and other parameters are counted. Our defaults are 0.5 CPU and 512mb.
> These thread pools should be trimmed, since if used at full capacity, 0.5 CPU won't be able to do much with 200+ threads.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-9291) BasePartitionHandlingTest.Partition.installMergeView() doesn't compute the merge digest
by Ryan Emerson (Jira)
[ https://issues.jboss.org/browse/ISPN-9291?page=com.atlassian.jira.plugin.... ]
Ryan Emerson resolved ISPN-9291.
--------------------------------
Resolution: Done
> BasePartitionHandlingTest.Partition.installMergeView() doesn't compute the merge digest
> ---------------------------------------------------------------------------------------
>
> Key: ISPN-9291
> URL: https://issues.jboss.org/browse/ISPN-9291
> Project: Infinispan
> Issue Type: Bug
> Components: Test Suite - Core
> Affects Versions: 9.3.0.CR1
> Reporter: Dan Berindei
> Assignee: Dan Berindei
> Priority: Minor
> Labels: testsuite_stability
> Fix For: 10.0.0.Final
>
>
> The partition handling tests use {{BasePartitionHandlingTest.Partition.installMergeView(view1, view2)}} to install the merge view without waiting for {{MERGE3}} to run, making them much faster. Unfortunately, the implementation is incorrect: {{GMS.installView(view)}} only works for regular views; merge views need to be installed with {{GMS.installView(mergeView, digest)}}.
> The result is that the nodes that got isolated from the coordinator request the retransmission of all the {{NAKACK2}} messages (including view updates) since the cluster first started. The isolated nodes cannot install the merge view until they deliver all the older messages (even without knowing whether they're OOB or not). But if {{STABLE}} ran and cleared a range of messages already, the retransmission request cannot be satisfied, so the view updates will never be delivered.
> This is easily reproducible in {{CrashedNodeDuringConflictResolutionTest}} if we add a delay before updating the topology in {{StateConsumerImpl}}. The test installs the merge view manually, but then kills NodeC and expects the cluster to install the new view automatically. NodeD can't install the new view because it's waiting for earlier messages from NodeA:
> {noformat}
> 18:27:13,054 INFO (testng-test:[]) [TestSuiteProgress] Test starting: org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.testPartitionMergePolicy[DIST_SYNC]
> 18:27:13,640 DEBUG (testng-test:[]) [GMS] test-NodeA-39513: installing view MergeView::[test-NodeA-39513|10] (4) [test-NodeA-39513, test-NodeB-9439, test-NodeC-43706, test-NodeD-59078], 2 subgroups: [test-NodeA-39513|8] (2) [test-NodeA-39513, test-NodeB-9439], [test-NodeC-43706|9] (2) [test-NodeC-43706, test-NodeD-59078]
> 18:27:13,674 DEBUG (testng-test:[]) [GMS] test-NodeD-59078: installing view MergeView::[test-NodeA-39513|10] (4) [test-NodeA-39513, test-NodeB-9439, test-NodeC-43706, test-NodeD-59078], 2 subgroups: [test-NodeA-39513|8] (2) [test-NodeA-39513, test-NodeB-9439], [test-NodeC-43706|9] (2) [test-NodeC-43706, test-NodeD-59078]
> 18:27:13,828 TRACE (jgroups-7,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((1): {50}) to test-NodeA-39513
> 18:27:13,966 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((49): {1-49}) to test-NodeA-39513
> 18:27:14,067 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:14,504 DEBUG (testng-test:[]) [DefaultCacheManager] Stopping cache manager ISPN on test-NodeC-43706
> 18:27:18,642 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: joiners=[], suspected=[test-NodeC-43706], leaving=[], new view: [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,643 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: mcasting view [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,646 DEBUG (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: installing view [test-NodeA-39513|11] (3) [test-NodeA-39513, test-NodeB-9439, test-NodeD-59078]
> 18:27:18,652 TRACE (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [TCP_NIO2] test-NodeA-39513: sending msg to null, src=test-NodeA-39513, headers are GMS: GmsHeader[VIEW], NAKACK2: [MSG, seqno=63], TP: [cluster_name=ISPN]
> 18:27:18,656 TRACE (jgroups-20,test-NodeA-39513:[]) [TCP_NIO2] test-NodeA-39513: received [dst: test-NodeA-39513, src: test-NodeB-9439 (3 headers), size=0 bytes, flags=OOB|INTERNAL], headers are GMS: GmsHeader[VIEW_ACK], UNICAST3: DATA, seqno=100, TP: [cluster_name=ISPN]
> 18:27:20,554 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:20,653 WARN (VERIFY_SUSPECT.TimerThread-89,test-NodeA-39513:[]) [GMS] test-NodeA-39513: failed to collect all ACKs (expected=2) for view [test-NodeA-39513|11] after 2000ms, missing 1 ACKs from (1) test-NodeD-59078
> 18:27:20,656 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:27:20,756 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> ...
> 18:28:14,412 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:28:14,513 TRACE (Timer runner-1,test-NodeD-59078:[]) [NAKACK2] test-NodeD-59078: sending XMIT_REQ ((45): {1-45}) to test-NodeA-39513
> 18:28:14,589 ERROR (testng-test:[]) [TestSuiteProgress] Test failed: org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.testPartitionMergePolicy[DIST_SYNC]
> java.lang.RuntimeException: Cache ___defaultcache timed out waiting for rebalancing to complete on node test-NodeA-39513, current topology is CacheTopology{id=21, phase=CONFLICT_RESOLUTION, rebalanceId=7, currentCH=PartitionerConsistentHash:DefaultConsistentHash{ns=256, owners = (3)[test-NodeD-59078: 256+0, test-NodeA-39513: 0+256, test-NodeB-9439: 0+256]}, pendingCH=null, unionCH=null, actualMembers=[test-NodeD-59078, test-NodeA-39513, test-NodeB-9439], persistentUUIDs=[828108c4-4251-49fc-9481-ff6392bea9fb, 1d4b6f07-b71b-41a1-adfb-abbe68944a9f, 3a1ece05-c282-433e-9eb5-7b3e0f1932aa]}. rebalanceInProgress=true, currentChIsBalanced=true
> at org.infinispan.test.TestingUtil.waitForNoRebalance(TestingUtil.java:392) ~[test-classes/:?]
> at org.infinispan.conflict.impl.CrashedNodeDuringConflictResolutionTest.performMerge(CrashedNodeDuringConflictResolutionTest.java:113) ~[test-classes/:?]
> at org.infinispan.conflict.impl.BaseMergePolicyTest.testPartitionMergePolicy(BaseMergePolicyTest.java:137) ~[test-classes/:?]
> {noformat}
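> A rough sketch of that direction, assuming a {{MergeView}} and the channels of both partitions are at hand; the digest-gathering calls ({{NAKACK2.getDigest()}}, {{MutableDigest.merge()}}) are assumptions about the JGroups API in use, and this is not the actual test-suite fix:
> {code:java}
> import org.jgroups.JChannel;
> import org.jgroups.MergeView;
> import org.jgroups.protocols.pbcast.GMS;
> import org.jgroups.protocols.pbcast.NAKACK2;
> import org.jgroups.util.MutableDigest;
>
> // Sketch only: install the MergeView together with a combined digest so the
> // isolated members don't request retransmission of pre-merge messages.
> public class MergeViewInstaller {
>    static void installMergeView(MergeView mergeView, JChannel... channels) {
>       MutableDigest digest = new MutableDigest(mergeView.getMembersRaw());
>       for (JChannel ch : channels) {
>          NAKACK2 nakack = (NAKACK2) ch.getProtocolStack().findProtocol(NAKACK2.class);
>          digest.merge(nakack.getDigest());
>       }
>       for (JChannel ch : channels) {
>          GMS gms = (GMS) ch.getProtocolStack().findProtocol(GMS.class);
>          gms.installView(mergeView, digest);
>       }
>    }
> }
> {code}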
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10173) infinispan-springX-remote doesn't allow to disable cached session with a configuration property
by Jose Antonio Iñigo (Jira)
[ https://issues.jboss.org/browse/ISPN-10173?page=com.atlassian.jira.plugin... ]
Jose Antonio Iñigo updated ISPN-10173:
--------------------------------------
Description:
I am using infinispan-spring4-remote in a Spring Boot 1.5 project, which declares @EnableInfinispanRemoteHttpSession on its main class.
I don't want to use the distributed cache for HTTP session externalization in unit/integration tests launched with JUnit. The problem is that the application crashes when launching the tests:
{code:java}
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.infinispan.spring.provider.SpringRemoteCacheManager' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1491)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1102)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1064)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
... 63 more
{code}
EnableInfinispanRemoteHttpSession imports InfinispanRemoteHttpSessionConfiguration, and the latter doesn't provide any mechanism (@ConditionalOnProperty, for instance) that would allow us to disable that behaviour.
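As a possible workaround until such a property exists, the annotation can be moved off the main class onto a profile-guarded configuration, so tests that don't activate the profile skip it entirely. This is only a sketch; the profile name is arbitrary and the annotation's package may differ between versions of the module:
{code:java}
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
// Package assumed; adjust to the infinispan-spring4-remote version in use.
import org.infinispan.spring.session.configuration.EnableInfinispanRemoteHttpSession;

// Sketch of a workaround: the session configuration is only imported when the
// "test" profile is NOT active, so JUnit tests can opt out via the active profile.
@Configuration
@Profile("!test")
@EnableInfinispanRemoteHttpSession
public class RemoteHttpSessionConfig {
}
{code}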
was:
I am using infinispan-spring4-remote in a Spring Boot 1.5 project, which declares @EnableInfinispanRemoteHttpSession on its main class.
I don't want to use the distributed cache for HTTP session externalization in unit/integration tests launched with JUnit. The problem is that the application crashes when launching the tests:
```
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.infinispan.spring.provider.SpringRemoteCacheManager' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1491)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1102)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1064)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
... 63 more
```
EnableInfinispanRemoteHttpSession imports InfinispanRemoteHttpSessionConfiguration, and the latter doesn't provide any mechanism (@ConditionalOnProperty, for instance) that would allow us to disable that behaviour.
> infinispan-springX-remote doesn't allow to disable cached session with a configuration property
> -----------------------------------------------------------------------------------------------
>
> Key: ISPN-10173
> URL: https://issues.jboss.org/browse/ISPN-10173
> Project: Infinispan
> Issue Type: Feature Request
> Components: Spring Integration
> Reporter: Jose Antonio Iñigo
> Priority: Major
>
> I am using infinispan-spring4-remote in a Spring Boot 1.5 project, which declares @EnableInfinispanRemoteHttpSession on its main class.
> I don't want to use the distributed cache for HTTP session externalization in unit/integration tests launched with JUnit. The problem is that the application crashes when launching the tests:
> {code:java}
> Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.infinispan.spring.provider.SpringRemoteCacheManager' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
> at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1491)
> at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1102)
> at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1064)
> at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
> at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
> ... 63 more
> {code}
> EnableInfinispanRemoteHttpSession imports InfinispanRemoteHttpSessionConfiguration, and the latter doesn't provide any mechanism (@ConditionalOnProperty, for instance) that would allow us to disable that behaviour.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)
[JBoss JIRA] (ISPN-10173) infinispan-springX-remote doesn't allow to disable cached session with a configuration property
by Jose Antonio Iñigo (Jira)
Jose Antonio Iñigo created ISPN-10173:
-----------------------------------------
Summary: infinispan-springX-remote doesn't allow to disable cached session with a configuration property
Key: ISPN-10173
URL: https://issues.jboss.org/browse/ISPN-10173
Project: Infinispan
Issue Type: Feature Request
Components: Spring Integration
Reporter: Jose Antonio Iñigo
I am using infinispan-spring4-remote in a Spring Boot 1.5 project, which declares @EnableInfinispanRemoteHttpSession on its main class.
I don't want to use the distributed cache for HTTP session externalization in unit/integration tests launched with JUnit. The problem is that the application crashes when launching the tests:
```
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.infinispan.spring.provider.SpringRemoteCacheManager' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
at org.springframework.beans.factory.support.DefaultListableBeanFactory.raiseNoMatchingBeanFound(DefaultListableBeanFactory.java:1491)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1102)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1064)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
... 63 more
```
EnableInfinispanRemoteHttpSession imports InfinispanRemoteHttpSessionConfiguration, and the latter doesn't provide any mechanism (@ConditionalOnProperty, for instance) that would allow us to disable that behaviour.
--
This message was sent by Atlassian Jira
(v7.12.1#712002)