2018-03-01 16:36 GMT+01:00 Tristan Tarrant <ttarrant@redhat.com>:
You need to use the brand new CacheAdmin API:
http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_ca...
I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.
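For reference, a minimal sketch of what that API looks like in embedded mode (Infinispan 9.2; the cache name and configuration here are illustrative, not taken from the reproducer):

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class CacheAdminExample {
    public static void main(String[] args) {
        // Clustered cache manager with default transport settings.
        EmbeddedCacheManager cacheManager = new DefaultCacheManager(
            GlobalConfigurationBuilder.defaultClusteredBuilder().build());
        // getOrCreateCache defines and starts the cache cluster-wide,
        // so other nodes do not need an explicit getCache call first.
        Cache<String, String> cache = cacheManager.administration()
            .getOrCreateCache("my-cache", new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.DIST_SYNC)
                .build());
        cache.put("k", "v");
        cacheManager.stop();
    }
}
```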
Is there any way to achieve these goals with 9.1.x?
Tristan
On 3/1/18 4:30 PM, Thomas SEGISMONT wrote:
> Hi,
>
> This email follows up on my testing of the Infinispan Cluster Manager
> for Vert.x on Kubernetes.
>
> In one of the tests, we want to make sure that, after a rolling update
> of the application, the data submitted to Vert.x's AsyncMap is still
> present. I found that when the underlying cache is predefined in
> infinispan.xml the data survives, but otherwise it does not.
>
> I pushed a simple reproducer on GitHub:
>
> https://github.com/tsegismont/cachedataloss
>
> The code does this:
> - a first node is started, and creates data
> - new nodes are started, but they don't invoke cacheManager.getCache
> - the initial member is killed
> - a "testing" member is started, printing out the data in the console
>
> Here are my findings.
>
> 1/ Even when caches are declared in infinispan.xml, the data is lost
> after the initial member goes away.
>
> A little digging showed that a cache only really becomes distributed
> on a node once you invoke cacheManager.getCache on that node.
>
> 2/ Checking cluster status triggers distribution
>
> I was wondering why the behavior was not the same as in my Vert.x
> testing on Openshift, and then realized the only difference was the
> cluster readiness check, which reads the cluster health. So I updated
> the reproducer code to add such a check (still without invoking
> cacheManager.getCache). With the check in place, the caches defined in
> infinispan.xml have their data distributed.
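(The readiness check in question can be sketched roughly as follows, using the Health API available on the cache manager in Infinispan 9.x; the configuration file name is illustrative.)

```java
import org.infinispan.health.Health;
import org.infinispan.health.HealthStatus;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class ReadinessCheck {
    public static void main(String[] args) throws Exception {
        EmbeddedCacheManager cacheManager =
            new DefaultCacheManager("infinispan.xml");
        // Reading the cluster health is the side effect that appears to
        // trigger cache distribution in the scenario described above.
        Health health = cacheManager.getHealth();
        HealthStatus status = health.getClusterHealth().getHealthStatus();
        System.out.println("Cluster status: " + status);
    }
}
```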
>
> So,
>
> 1/ How can I make sure caches are distributed on all nodes, even if
> some nodes never try to get a reference with cacheManager.getCache, or
> don't check cluster health?
> 2/ Are we doing something wrong with our way to declare the default
> configuration for caches [1][2]?
>
> Thanks,
> Thomas
>
> [1]
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resource...
> [2]
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resource...
>
> _______________________________________________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
--
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat