<div dir="ltr"><div><div>Hi Will,<br><br></div>I will create the JIRA and provide the TRACE level logs as soon as possible.<br><br></div>Thanks for the update.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-04-20 20:10 GMT+02:00 William Burns <span dir="ltr">&lt;<a href="mailto:mudokonman@gmail.com" target="_blank">mudokonman@gmail.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span class=""><div class="gmail_quote"><div dir="ltr">On Fri, Apr 20, 2018 at 9:43 AM Thomas SEGISMONT &lt;<a href="mailto:tsegismont@gmail.com" target="_blank">tsegismont@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>I tried our test suite on a slower machine (iMac from 2011). It passes consistently there.<br><br></div>On my laptop, I keep seeing this from time to time (in different tests):<br><br>2018-04-19T19:53:09.513 WARN [Context=org.infinispan.LOCKS]<wbr>ISPN000320: After merge (or coordinator change), cache still hasn&#39;t recovered a majority of members and must stay in degraded mode. Current members are [sombrero-19385], lost members are [sombrero-42917], stable members are [sombrero-42917, sombrero-19385] <br></div></div></div></blockquote><div><br></div></div></span><div dir="ltr"><div class="gmail_quote"><div>I would expect the nodes to be leaving gracefully, which shouldn&#39;t cause a merge. I am not sure how your test is producing that. Can you produce a TRACE log and a JIRA for it?</div><div><br></div><div>However if there is a merge, if you go down a single node it will always be in DEGRADED mode, when using partition handling. 
>
>> It happens when we shut down nodes one after the other (even when waiting
>> for the cluster status to be "healthy", plus an extra 2 seconds).
>>
>> After that, the nodes remain blocked in DefaultCacheManager.stop:
>>
>> 2018-04-19T19:49:29.242 AVERTISSEMENT Thread Thread[vert.x-worker-thread-5,5,main] has been blocked for 60774 ms, time limit is 60000
>> io.vertx.core.VertxException: Thread blocked
>>     at sun.misc.Unsafe.park(Native Method)
>>     at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>     at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
>>     at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>>     at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
>>     at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
>>     at org.infinispan.manager.DefaultCacheManager.terminate(DefaultCacheManager.java:688)
>>     at org.infinispan.manager.DefaultCacheManager.stopCaches(DefaultCacheManager.java:734)
>>     at org.infinispan.manager.DefaultCacheManager.stop(DefaultCacheManager.java:711)
>>     at io.vertx.ext.cluster.infinispan.InfinispanClusterManager.lambda$leave$5(InfinispanClusterManager.java:285)
>>     at io.vertx.ext.cluster.infinispan.InfinispanClusterManager$$Lambda$421/578931659.handle(Unknown Source)
>>     at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:265)
>>     at io.vertx.core.impl.ContextImpl$$Lambda$27/1330754528.run(Unknown Source)
>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>     at java.lang.Thread.run(Thread.java:748)
>
> This looks like the exact issue that Radim mentioned in
> https://issues.jboss.org/browse/ISPN-8859.
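As an illustration of the "wait for the cluster status to be healthy" step mentioned above, here is a sketch using the embedded Health API. The class name, timeout handling, and polling interval are assumptions; the actual vertx-infinispan test helper may do this differently.

    import org.infinispan.health.HealthStatus;
    import org.infinispan.manager.EmbeddedCacheManager;

    public final class ClusterHealthWait {

        // Poll the embedded Health API until the cluster reports HEALTHY, so the
        // caller can decide when to stop the next node. The timeout and polling
        // interval are arbitrary values for illustration.
        public static void awaitHealthy(EmbeddedCacheManager cacheManager, long timeoutMillis)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (cacheManager.getHealth().getClusterHealth().getHealthStatus() != HealthStatus.HEALTHY) {
                if (System.currentTimeMillis() > deadline) {
                    throw new IllegalStateException("Cluster did not become HEALTHY within " + timeoutMillis + " ms");
                }
                Thread.sleep(100);
            }
        }
    }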
>> 2018-04-18 17:00 GMT+02:00 Thomas SEGISMONT <tsegismont@gmail.com>:
>>
>>> So here's the circular referenced suppressed exception:
>>>
>>> [stateTransferExecutor-thread--p221-t33] 2018-04-18T16:15:06.662+02:00 WARN [org.infinispan.statetransfer.InboundTransferTask] ISPN000210: Failed to request state of cache __vertx.subs from node sombrero-25286, segments {0}
>>> org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected
>>>     at org.infinispan.remoting.transport.ResponseCollectors.remoteNodeSuspected(ResponseCollectors.java:33)
>>>     at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:31)
>>>     at org.infinispan.remoting.transport.impl.SingleResponseCollector.targetNotFound(SingleResponseCollector.java:17)
>>>     at org.infinispan.remoting.transport.ValidSingleResponseCollector.addResponse(ValidSingleResponseCollector.java:23)
>>>     at org.infinispan.remoting.transport.impl.SingleTargetRequest.receiveResponse(SingleTargetRequest.java:51)
>>>     at org.infinispan.remoting.transport.impl.SingleTargetRequest.onNewView(SingleTargetRequest.java:42)
>>>     at org.infinispan.remoting.transport.jgroups.JGroupsTransport.addRequest(JGroupsTransport.java:921)
>>>     at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:815)
>>>     at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeCommand(JGroupsTransport.java:123)
>>>     at org.infinispan.remoting.rpc.RpcManagerImpl.invokeCommand(RpcManagerImpl.java:138)
>>>     at org.infinispan.statetransfer.InboundTransferTask.startTransfer(InboundTransferTask.java:134)
>>>     at org.infinispan.statetransfer.InboundTransferTask.requestSegments(InboundTransferTask.java:113)
>>>     at org.infinispan.conflict.impl.StateReceiverImpl$SegmentRequest.lambda$requestState$2(StateReceiverImpl.java:164)
>>>     at org.infinispan.executors.LimitedExecutor.lambda$executeAsync$1(LimitedExecutor.java:101)
>>>     at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:144)
>>>     at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:33)
>>>     at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:174)
>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>     at java.lang.Thread.run(Thread.java:748)
>>>     Suppressed: java.util.concurrent.ExecutionException: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected
>>>         at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>>>         at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
>>>         at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:82)
>>>         at org.infinispan.remoting.rpc.RpcManagerImpl.blocking(RpcManagerImpl.java:260)
>>>         ... 10 more
>>>     [CIRCULAR REFERENCE: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node sombrero-25286 was suspected]
>>>
>>> It does not happen with 9.2.0.Final, and it prevents us from using ISPN embedded with logback.
>>> Do you want me to file an issue?
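The trace above shows the shape that trips logback: the primary SuspectException carries a suppressed ExecutionException whose cause points back at the primary exception. A hypothetical reproducer with plain JDK exceptions (not Infinispan's actual classes) could look like this:

    import java.util.concurrent.ExecutionException;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CircularSuppressedRepro {

        private static final Logger LOG = LoggerFactory.getLogger(CircularSuppressedRepro.class);

        public static void main(String[] args) {
            RuntimeException primary = new RuntimeException("node was suspected");
            ExecutionException suppressed = new ExecutionException(primary); // cause -> primary
            primary.addSuppressed(suppressed); // primary -> suppressed -> cause -> primary

            // With a logback version affected by LOGBACK-1027, rendering this throwable
            // recurses while building the ThrowableProxy cause/suppressed tree, which
            // matches the StackOverflowError shown further down the thread.
            LOG.warn("Failed to request state", primary);
        }
    }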
>> 2018-04-18 11:45 GMT+02:00 Thomas SEGISMONT <tsegismont@gmail.com>:
>>
>>> Hi folks,
>>>
>>> Sorry, I've been busy on other things and couldn't get back to you earlier.
>>>
>>> I tried running the vertx-infinispan test suite with 9.2.1.Final today.
>>> There are still some problems, but I can't say which ones yet because I hit
>>> https://jira.qos.ch/browse/LOGBACK-1027
>>>
>>> We use logback for the test logs, and all I get is:
>>>
>>> 2018-04-18 11:37:46,678 [stateTransferExecutor-thread--p4453-t24] ERROR o.i.executors.LimitedExecutor - Exception in task
>>> java.lang.StackOverflowError: null
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:54)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:60)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:72)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:60)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:72)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:60)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:72)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:60)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:72)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:60)
>>>     at ch.qos.logback.classic.spi.ThrowableProxy.<init>(ThrowableProxy.java:72)
>>>     ... and so on
>>>
>>> I will run the suite again without logback and tell you what the actual problem is.
>>>
>>> Regards,
>>> Thomas
>>>
>>> 2018-03-27 11:15 GMT+02:00 Pedro Ruivo <pedro@infinispan.org>:
>>>
>>>> JIRA: https://issues.jboss.org/browse/ISPN-8994
<span class="m_-1969439238621085305m_1684198156933745795m_7389632118788342481m_4662884477622455721m_-950152444521841971im m_-1969439238621085305m_1684198156933745795m_7389632118788342481m_4662884477622455721m_-950152444521841971HOEnZb"><br>
On 27-03-2018 10:08, Pedro Ruivo wrote:<br>
&gt; <br>
&gt; <br>
&gt; On 27-03-2018 09:03, Sebastian Laskawiec wrote:<br>
&gt;&gt; At the moment, the cluster health status checker enumerates all caches <br>
&gt;&gt; in the cache manager [1] and checks whether those cashes are running <br>
&gt;&gt; and not in degraded more [2].<br>
&gt;&gt;<br>
&gt;&gt; I&#39;m not sure how counter caches have been implemented. One thing is <br>
&gt;&gt; for sure - they should be taken into account in this loop [3].<br>
&gt; <br>
&gt; The private caches aren&#39;t listed by CacheManager.getCacheNames(). We <br>
&gt; have to check them via InternalCacheRegistry.<wbr>getInternalCacheNames().<br>
&gt; <br>
&gt; I&#39;ll open a JIRA if you don&#39;t mind :)<br>
&gt; <br>
&gt;&gt;<br>
&gt;&gt; [1] <br>
&gt;&gt; <a href="https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L22" rel="noreferrer" target="_blank">https://github.com/infinispan/<wbr>infinispan/blob/master/core/<wbr>src/main/java/org/infinispan/<wbr>health/impl/ClusterHealthImpl.<wbr>java#L22</a> <br>
&gt;&gt;<br>
&gt;&gt; [2] <br>
&gt;&gt; <a href="https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/CacheHealthImpl.java#L25" rel="noreferrer" target="_blank">https://github.com/infinispan/<wbr>infinispan/blob/master/core/<wbr>src/main/java/org/infinispan/<wbr>health/impl/CacheHealthImpl.<wbr>java#L25</a> <br>
&gt;&gt;<br>
&gt;&gt; [3] <br>
&gt;&gt; <a href="https://github.com/infinispan/infinispan/blob/master/core/src/main/java/org/infinispan/health/impl/ClusterHealthImpl.java#L23-L24" rel="noreferrer" target="_blank">https://github.com/infinispan/<wbr>infinispan/blob/master/core/<wbr>src/main/java/org/infinispan/<wbr>health/impl/ClusterHealthImpl.<wbr>java#L23-L24</a> <br>
</span><div class="m_-1969439238621085305m_1684198156933745795m_7389632118788342481m_4662884477622455721m_-950152444521841971HOEnZb"><div class="m_-1969439238621085305m_1684198156933745795m_7389632118788342481m_4662884477622455721m_-950152444521841971h5">______________________________<wbr>_________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org" target="_blank">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/<wbr>mailman/listinfo/infinispan-<wbr>dev</a><br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
<br>______________________________<wbr>_________________<br>
infinispan-dev mailing list<br>
<a href="mailto:infinispan-dev@lists.jboss.org">infinispan-dev@lists.jboss.org</a><br>
<a href="https://lists.jboss.org/mailman/listinfo/infinispan-dev" rel="noreferrer" target="_blank">https://lists.jboss.org/<wbr>mailman/listinfo/infinispan-<wbr>dev</a><br></blockquote></div><br></div>