<div dir="ltr">I'll give it a try and keep you posted. Thanks<br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-03-23 14:41 GMT+01:00 Radim Vansa <span dir="ltr"><<a href="mailto:rvansa@redhat.com" target="_blank">rvansa@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">This looks similar to [1] which has a fix [2] ready for a while. Please<br>
try with it to see if it solves your problem.<br>
<br>
[1] <a href="https://issues.jboss.org/browse/ISPN-8859" rel="noreferrer" target="_blank">https://issues.jboss.org/<wbr>browse/ISPN-8859</a><br>
[2] <a href="https://github.com/infinispan/infinispan/pull/5786" rel="noreferrer" target="_blank">https://github.com/infinispan/<wbr>infinispan/pull/5786</a><br>
<div class="HOEnZb"><div class="h5"><br>
On 03/23/2018 01:25 PM, Pedro Ruivo wrote:
> Hi Thomas,
>
> Is the test in question using any counter/lock?
>
> I did see similar behavior with the counters in our server test suite.
> The partition handling puts the cache into degraded mode because nodes
> are starting and stopping concurrently.
>
> I'm not sure if there is a JIRA tracking this. Ryan, Dan, do you know?
> If there is none, it should be created.
>
> I improved the counters by making the cache start lazily when you first
> get or define a counter [1]. This workaround solved the issue for us.
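For context, the "first get or define a counter" step that triggers that lazy start looks roughly like the sketch below; the counter name and configuration are illustrative, and the code assumes the embedded counter API shipped with Infinispan 9.x, not code from this thread.

import org.infinispan.counter.EmbeddedCounterManagerFactory;
import org.infinispan.counter.api.CounterConfiguration;
import org.infinispan.counter.api.CounterManager;
import org.infinispan.counter.api.CounterType;
import org.infinispan.manager.EmbeddedCacheManager;

public class CounterUsage {

   static void useCounter(EmbeddedCacheManager cacheManager) {
      CounterManager counterManager =
            EmbeddedCounterManagerFactory.asCounterManager(cacheManager);
      // Defining (or getting) a counter is the first point at which the
      // internal counter caches need to be running.
      counterManager.defineCounter("example-counter",
            CounterConfiguration.builder(CounterType.UNBOUNDED_STRONG).build());
      counterManager.getStrongCounter("example-counter").incrementAndGet();
   }
}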
>
> As a workaround for your test suite, I suggest making sure the caches
> (___counter_configuration and org.infinispan.LOCKS) have finished their
> state transfer before stopping the cache managers, by invoking
> DefaultCacheManager.getCache(*cache-name*) on all the cache managers.
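In embedded Java code, that workaround is roughly the sketch below. The internal cache names are the ones named in this thread; the shutdown loop and the surrounding class are only illustrative assumptions, not the vertx-infinispan test code.

import org.infinispan.manager.DefaultCacheManager;

public class GracefulShutdown {

   // Internal caches mentioned in this thread that should finish state
   // transfer before any manager is stopped.
   static final String[] INTERNAL_CACHES = {
         "___counter_configuration", "org.infinispan.LOCKS"
   };

   static void stopAll(Iterable<DefaultCacheManager> cacheManagers) {
      // Touch the internal caches on every manager first, so getCache()
      // waits for them to be running locally, as suggested above...
      for (DefaultCacheManager cm : cacheManagers) {
         for (String name : INTERNAL_CACHES) {
            cm.getCache(name);
         }
      }
      // ...then stop the managers one by one.
      for (DefaultCacheManager cm : cacheManagers) {
         cm.stop();
      }
   }
}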
>
> Sorry for the inconvenience and the delay in replying.
>
> Cheers,
> Pedro
>
> [1] https://issues.jboss.org/browse/ISPN-8860
>
> On 21-03-2018 16:16, Thomas SEGISMONT wrote:
>> Hi everyone,
>>
>> I am working on integrating Infinispan 9.2.Final in vertx-infinispan.
>> Before merging I wanted to make sure the test suite passed, but it
>> doesn't. It's not always the same test that fails.
>>
>> In the logs, I see a lot of messages like "After merge (or coordinator
>> change), cache still hasn't recovered a majority of members and must
>> stay in degraded mode."
>> The caches involved are "___counter_configuration" and
>> "org.infinispan.LOCKS".
>>
>> Most often it's harmless, but sometimes I also see the exception
>> "ISPN000210: Failed to request state of cache".
>> Again, the cache involved is either "___counter_configuration" or
>> "org.infinispan.LOCKS".
>> After this exception, the cache manager is unable to stop. It blocks in
>> the "terminate" method (joining on the cache future).
>>
>> I thought the test suite was too rough (we stop all nodes at the same
>> time). So I changed it to make sure that:
>> - nodes start one after the other
>> - a new node is started only when the previous one indicates HEALTHY status
>> - nodes stop one after the other
>> - a node is stopped only when it indicates HEALTHY status
>> Pretty much what we do on Kubernetes for the readiness check actually.
>> But it didn't get any better.
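As a sketch, the start/stop sequencing described above could be expressed against the embedded health API along these lines. It assumes EmbeddedCacheManager.getHealth() from Infinispan 9; the polling helper, loops, and class name are illustrative, not the actual vertx-infinispan test suite.

import java.util.concurrent.TimeUnit;

import org.infinispan.health.HealthStatus;
import org.infinispan.manager.EmbeddedCacheManager;

public class SequencedLifecycle {

   // Poll the cluster health until it reports HEALTHY, like a Kubernetes
   // readiness probe would.
   static void awaitHealthy(EmbeddedCacheManager cm) throws InterruptedException {
      while (cm.getHealth().getClusterHealth().getHealthStatus() != HealthStatus.HEALTHY) {
         TimeUnit.MILLISECONDS.sleep(100);
      }
   }

   static void startSequentially(Iterable<EmbeddedCacheManager> managers) throws InterruptedException {
      for (EmbeddedCacheManager cm : managers) {
         cm.start();        // start one node...
         awaitHealthy(cm);  // ...and wait for HEALTHY before starting the next
      }
   }

   static void stopSequentially(Iterable<EmbeddedCacheManager> managers) throws InterruptedException {
      for (EmbeddedCacheManager cm : managers) {
         awaitHealthy(cm);  // only stop a node while it reports HEALTHY
         cm.stop();
      }
   }
}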
>>
>> Attached are the logs of such a failing test.
>>
>> Note that the Vert.x test itself does not fail; it's only when closing
>> nodes that we have issues.
>>
>> Here's our XML config:
>> https://github.com/vert-x3/vertx-infinispan/blob/ispn92/src/main/resources/default-infinispan.xml
>>
>> Does that ring a bell? Do you need more info?
>>
>> Regards,
>> Thomas
>>
--
Radim Vansa <rvansa@redhat.com>
JBoss Performance Team
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev