<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Thu, Mar 1, 2018 at 11:14 AM Thomas SEGISMONT <<a href="mailto:tsegismont@gmail.com">tsegismont@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">2018-03-01 16:36 GMT+01:00 Tristan Tarrant <span dir="ltr"><<a href="mailto:ttarrant@redhat.com" target="_blank">ttarrant@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">You need to use the brand new CacheAdmin API:<br>
<br>
<a href="http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches" rel="noreferrer" target="_blank">http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches</a></blockquote><div><br></div></div></div></div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.<br><br></div>Is there any way to achieve these goals with 9.1.x?</div></div></div></blockquote><div><br></div><div>You could try using the ClusterExecutor to invoke getCache across all nodes. Note it has to return null since a Cache is not Serializable.</div><div><br></div><div>String cacheName = ;</div><div><div>cache.getCacheManager().executor().submitConsumer(cm -> {</div><div> cm.getCache(cacheName);</div><div> return null;</div><div> }, (a, v, t) -> {</div><div> if (v != null) {</div><div> System.out.println("There was an exception retrieving " + cacheName + " from node: " + a);</div><div> }</div><div> }</div><div> );</div></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><br><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
>>
>> Tristan
>>
>> On 3/1/18 4:30 PM, Thomas SEGISMONT wrote:
>>> Hi,
>>>
>>> This email follows up on my testing of the Infinispan Cluster Manager
>>> for Vert.x on Kubernetes.
>>>
>>> In one of the tests, we want to make sure that, after a rolling update
>>> of the application, the data submitted to Vert.x's AsyncMap is still
>>> present. I found that when the underlying cache is predefined in
>>> infinispan.xml the data survives the update, but otherwise it does not.
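>>>
>>> (For illustration, the data goes in through the Vert.x shared data API,
>>> along these lines; the map name, key and value are placeholders, not the
>>> reproducer's actual ones:)
>>>
>>> vertx.sharedData().<String, String>getAsyncMap("my-map", ar -> {
>>>    if (ar.succeeded()) {
>>>       // In clustered mode this map is backed by an Infinispan cache.
>>>       ar.result().put("some-key", "some-value", putResult -> { /* ... */ });
>>>    }
>>> });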
>>>
>>> I pushed a simple reproducer on GitHub:
>>> https://github.com/tsegismont/cachedataloss
>>>
>>> The code does this:
>>> - a first node is started, and creates data
>>> - new nodes are started, but they don't invoke cacheManager.getCache
>>> - the initial member is killed
>>> - a "testing" member is started, printing out the data in the console
> Here are my findings.<br>
><br>
> 1/ Even when caches are declared in infinispan.xml, the data is lost<br>
> after the initial member goes away.<br>
><br>
> A little digging showed that the caches are really distributed only<br>
> after you invoke cacheManager.getCache<br>
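>>>
>>> (A minimal sketch of what I mean; the cache name is a placeholder:)
>>>
>>> // Starting the cache manager joins the JGroups cluster...
>>> EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
>>> // ...but this node only takes part in a cache's distribution, and receives
>>> // state for it, once getCache is invoked for that cache locally.
>>> Cache<String, String> cache = cacheManager.getCache("my-cache");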
>>>
>>> 2/ Checking the cluster status triggers distribution
>>>
>>> I was wondering why the behavior was not the same as in my Vert.x
>>> testing on OpenShift, and then realized that the only difference was the
>>> cluster readiness check, which reads the cluster health. So I updated
>>> the reproducer code to add such a check (still without invoking
>>> cacheManager.getCache). With that check in place, the caches defined in
>>> infinispan.xml have their data distributed.
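>>>
>>> (The readiness check boils down to something like this, if I read the
>>> 9.x health API right; a sketch only:)
>>>
>>> HealthStatus status = cacheManager.getHealth().getClusterHealth().getHealthStatus();
>>> boolean ready = status == HealthStatus.HEALTHY;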
>>>
>>> So:
>>>
>>> 1/ How can I make sure caches are distributed on all nodes, even if some
>>> nodes never obtain a reference with cacheManager.getCache and never check
>>> the cluster health?
>>> 2/ Are we doing something wrong in the way we declare the default
>>> configuration for caches [1][2]? (A simplified sketch of what those
>>> declarations are meant to express follows the references below.)
>>>
>>> Thanks,
>>> Thomas
>>>
>>> [1] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10
>>> [2] https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22
<span class="m_-3952894493400500524HOEnZb"><font color="#888888"><br>
--<br>
Tristan Tarrant<br>
Infinispan Lead and Data Grid Architect<br>
JBoss, a division of Red Hat<br>
_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev